Day 1: Linear Algebra Fundamentals (Including Matrix Operations)
Self Introduction¶
Welcome to the "120 Days of Quantum Computing" series! My name is Rohan Sai, and I write under the pen name Aiknight. This series is a structured journey into the fascinating world of quantum computing, designed to break complex concepts down into digestible lessons. Over the next 120 days, we will explore the mathematical foundations, quantum mechanics principles, and practical implementations of quantum algorithms using tools such as Qiskit.
Quantum computing leverages the unique properties of quantum mechanics—such as superposition, entanglement, and interference—to solve problems that are infeasible for classical computers. Whether you're a student, researcher, or enthusiast, this series will guide you step-by-step through the theory and practice of quantum computation.
Each blog post will focus on specific topics, starting with foundational mathematics like linear algebra and progressing toward advanced quantum algorithms. Let’s embark on this exciting journey together! This blog is the first in a series on my journey into quantum computing, where I will explore its foundations over the next 120 days.
The code, along with conceptual explanations for Day 1, is available in my Colab notebook: Colab Link
Please do take a look at it.
Linear Algebra Fundamentals¶
Quantum computing is a very challenging topic: quantum objects seem random and chaotic at first, but they also follow a certain set of rules. Once we understand these rules, we can build new and powerful technology.
A bit is the smallest unit of classical information, represented as 0 or 1. In quantum computing, we extend this idea to qubits, which can exist in a combination of 0 and 1 states, thanks to quantum mechanics.
To track qubit states and their transformations, we use vectors and matrices. These mathematical tools are often visualized using a Bloch Sphere, which provides an intuitive way to understand qubit behavior.
Vectors and Matrices in Quantum Computing¶
A qubit can exist in the state $|0\rangle $, $ |1\rangle $, or a superposition of both. Using linear algebra, the state of a qubit is described as a vector, represented as a single-column matrix:
$ \begin{bmatrix} a \\ b \end{bmatrix}$
This vector, also known as a quantum state vector, must satisfy the normalization condition:
$ |a|^2 + |b|^2 = 1 $
Here:
- $|a|^2$ is the probability of the qubit collapsing to the state $ |0\rangle $,
- $|b|^2$ is the probability of collapsing to the state $ |1\rangle $.
Examples of valid quantum state vectors include:
$ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}, \quad \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{bmatrix} $
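As a quick sanity check, the normalization condition can be verified numerically. This is a small NumPy sketch of my own (not from the linked notebook) that confirms $|a|^2 + |b|^2 = 1$ for each of the example vectors above:

```python
import numpy as np

# The four example state vectors from the text.
states = [
    np.array([1, 0]),
    np.array([0, 1]),
    np.array([1 / np.sqrt(2), 1 / np.sqrt(2)]),
    np.array([1 / np.sqrt(2), -1 / np.sqrt(2)]),
]

for v in states:
    norm_sq = np.sum(np.abs(v) ** 2)  # |a|^2 + |b|^2
    print(norm_sq)                     # approximately 1 for each state
```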
In addition to representing states, quantum operations are also described by matrices. When a quantum operation (matrix) is applied to a qubit, the operation is performed by matrix multiplication. The resulting vector represents the qubit's new state after the operation.
For example, if $ U $ is a quantum operation and $\psi $ is the state vector, the new state $ \psi' $ is given by:
$ \psi' = U \psi $
This framework of vectors and matrices forms the mathematical foundation for analyzing and manipulating qubits in quantum computing.
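To make $\psi' = U\psi$ concrete, here is a minimal NumPy sketch (my own illustration, using the standard Pauli-X "NOT" gate as the example operation $U$):

```python
import numpy as np

# Pauli-X gate: the quantum analogue of a classical NOT.
X = np.array([[0, 1],
              [1, 0]])

psi = np.array([1, 0])   # the state |0> as a column vector

psi_new = X @ psi        # psi' = U psi, performed by matrix multiplication
print(psi_new)           # [0 1], i.e. the state |1>
```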
Introduction to Linear Algebra¶
This tutorial is designed to introduce the fundamental concepts of linear algebra, a key mathematical framework widely used in quantum computing. Linear algebra focuses on the study of matrices and vectors, which are essential for representing quantum states and performing operations on them.
While this tutorial provides an overview of the key concepts, it is not an exhaustive guide. However, it aims to give you a solid foundation in the linear algebra principles commonly applied in quantum computing.
Here’s what you’ll learn in this tutorial:
Understanding matrices and vectors
Performing basic matrix operations
Exploring the properties and operations of complex matrices
Working with inner and outer vector products
Understanding tensor products
Analyzing eigenvalues and eigenvectors
Matrices and Basic Operations¶
Matrices and Vectors¶
A matrix is a set of numbers arranged in a rectangular grid. Here is a $2$ by $2$ matrix:
$$A = \begin{bmatrix} 4 & 12 \\ 23 & 54 \end{bmatrix}$$
$A_{i,j}$ refers to the element in row $i$ and column $j$ of matrix $A$ (all indices are 0-based). In the above example, $A_{0,1} = 12$.
An $n \times m$ matrix will have $n$ rows and $m$ columns, like so:
$$\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$
A $1 \times 1$ matrix is equivalent to a scalar:
$$\begin{bmatrix} 9 \end{bmatrix} = 9$$
Quantum computing uses complex-valued matrices: the elements of a matrix can be complex numbers. This, for example, is a valid complex-valued matrix:
$$\begin{bmatrix} -1 & -3i \\ i & 3 + 4i \end{bmatrix}$$
Complex Numbers¶
A complex number is a combination of a real part and an imaginary part. It is expressed in the form:
$ z = a + bi $
Where:
- $ a $ is the real part.
- $ b $ is the imaginary part.
- $ i $ is the imaginary unit, defined as $ i^2 = -1 $.
For example:
- $ z = 3 + 4i $ : Here, the real part is $ 3 $, and the imaginary part is $ 4i $.
- $ z = 5 $ : A purely real number.
- $ z = -2i $ : A purely imaginary number.
Complex numbers are fundamental in quantum computing, especially when working with quantum states and operations. We will explore their properties and applications in the next session.
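Python has built-in complex numbers, which makes experimenting with these definitions easy; note that Python writes the imaginary unit as `j` rather than $i$. A small sketch of my own:

```python
# Complex numbers in Python: j plays the role of the imaginary unit i.
z = 3 + 4j

print(z.real)         # 3.0  (the real part a)
print(z.imag)         # 4.0  (the imaginary part b)
print(z.conjugate())  # (3-4j)
print(abs(z))         # 5.0  (the modulus, sqrt(3^2 + 4^2))
```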
Finally, a vector is an $n \times 1$ matrix. Here, for example, is a $3 \times 1$ vector:
$$V = \begin{bmatrix} 0 \\ i \\ 3 + 2i \end{bmatrix}$$
Since vectors always have a width of $1$, vector elements are sometimes written using only one index. In the above example, $V_0 = 0$ and $V_1 = i$.
Matrix Addition¶
The easiest matrix operation is matrix addition. Matrix addition works between two matrices of the same size, and adds each number from the first matrix to the number in the same position in the second matrix:
$$\begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix} + \begin{bmatrix} y_{0,0} & y_{0,1} & \dotsb & y_{0,m-1} \\ y_{1,0} & y_{1,1} & \dotsb & y_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n-1,0} & y_{n-1,1} & \dotsb & y_{n-1,m-1} \end{bmatrix} = \begin{bmatrix} x_{0,0} + y_{0,0} & x_{0,1} + y_{0,1} & \dotsb & x_{0,m-1} + y_{0,m-1} \\ x_{1,0} + y_{1,0} & x_{1,1} + y_{1,1} & \dotsb & x_{1,m-1} + y_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} + y_{n-1,0} & x_{n-1,1} + y_{n-1,1} & \dotsb & x_{n-1,m-1} + y_{n-1,m-1} \end{bmatrix}$$
Similarly, we can compute $A - B$ by subtracting elements of $B$ from corresponding elements of $A$.
Matrix addition has the following properties:
- Commutativity: $A + B = B + A$
- Associativity: $(A + B) + C = A + (B + C)$
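Matrix addition and its properties are easy to check numerically. A quick NumPy sketch (my own example matrices, not from the text beyond $A$):

```python
import numpy as np

A = np.array([[4, 12], [23, 54]])
B = np.array([[1, 0], [2, 3]])   # a second matrix of the same size

print(A + B)                           # elementwise sum: [[5 12], [25 57]]
print(np.array_equal(A + B, B + A))    # True: commutativity
print(A - B)                           # elementwise difference
```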
Matrix Multiplication¶
Matrix multiplication is a very important and somewhat unusual operation. The unusual thing about it is that its operands and its output need not all be the same size: an $n \times m$ matrix multiplied by an $m \times k$ matrix results in an $n \times k$ matrix. That is, for matrix multiplication to be applicable, the number of columns in the first matrix must equal the number of rows in the second matrix.
Here is how matrix product is calculated: if we are calculating $AB = C$, then
$$C_{i,j} = A_{i,0} \cdot B_{0,j} + A_{i,1} \cdot B_{1,j} + \dotsb + A_{i,m-1} \cdot B_{m-1,j} = \sum_{t = 0}^{m-1} A_{i,t} \cdot B_{t,j}$$
Here is a small example:
$$\begin{bmatrix} \color{blue} 1 & \color{blue} 2 & \color{blue} 3 \\ \color{red} 4 & \color{red} 5 & \color{red} 6 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} (\color{blue} 1 \cdot 1) + (\color{blue} 2 \cdot 2) + (\color{blue} 3 \cdot 3) \\ (\color{red} 4 \cdot 1) + (\color{red} 5 \cdot 2) + (\color{red} 6 \cdot 3) \end{bmatrix} = \begin{bmatrix} 14 \\ 32 \end{bmatrix}$$
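The example above can be reproduced with NumPy's `@` operator, which performs exactly this row-by-column multiplication (a sketch of my own):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 x 3
v = np.array([[1], [2], [3]])  # 3 x 1

# (2 x 3) @ (3 x 1) gives a 2 x 1 result, matching the worked example.
print(A @ v)                   # [[14], [32]]
```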
Matrix multiplication has the following properties:
- Associativity: $A(BC) = (AB)C$
- Distributivity over matrix addition: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$
- Associativity with scalar multiplication: $xAB = x(AB) = A(xB)$
Note that matrix multiplication is not commutative: $AB$ rarely equals $BA$.
Another very important property of matrix multiplication is that a matrix multiplied by a vector produces another vector.
An identity matrix $I_n$ is a special $n \times n$ matrix which has $1$s on the main diagonal, and $0$s everywhere else:
$$I_n = \begin{bmatrix} 1 & 0 & \dotsb & 0 \\ 0 & 1 & \dotsb & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dotsb & 1 \end{bmatrix}$$
What makes it special is that multiplying any matrix (of compatible size) by $I_n$ returns the original matrix. To put it another way, if $A$ is an $n \times m$ matrix:
$$AI_m = I_nA = A$$
This is why $I_n$ is called an identity matrix - it acts as a multiplicative identity. In other words, it is the matrix equivalent of the number $1$.
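The identity property is straightforward to verify in NumPy, where `np.eye(n)` builds $I_n$ (my own sketch):

```python
import numpy as np

A = np.array([[4, 12], [23, 54]])  # a 2 x 2 example matrix
I = np.eye(2, dtype=int)           # the identity matrix I_2

print(np.array_equal(A @ I, A))    # True
print(np.array_equal(I @ A, A))    # True
```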
Quantum States and Operations - A Quick Overview¶
1. Quantum States¶
- Definition: A quantum state represents the state of a qubit or a system of qubits. It’s a vector in a complex vector space.
- Formula: For a single qubit: $ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle $ For two qubits: $ |\psi\rangle = \alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle $
2. Quantum Gates¶
Definition: Quantum gates manipulate qubit states. Common gates:
- Hadamard (H): Creates superposition.
- CNOT: Entangles qubits.
Example: Hadamard applied to $ |0\rangle $: $ H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) $
You can refer to this video for more information about quantum gates.
3. Quantum Matrix Multiplication¶
Definition: Quantum operations act on statevectors by matrix multiplication, so a sequence of operations composes into a single matrix product.
Formula: applying $A$ and then $B$ to a statevector $|\psi\rangle$ gives $ |\psi'\rangle = BA|\psi\rangle $ (note the right-to-left order of application).
4. Examples¶
Single Qubit (Hadamard Gate):¶
- Initial State: $ |0\rangle $
- After H: $ \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) $
Two Qubits (Hadamard + CNOT):¶
- Initial State: $ |00\rangle $
- After Operations: $ \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle) $
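The two-qubit example can be reproduced numerically. This NumPy sketch of my own applies a Hadamard to the first qubit and then a CNOT, producing the Bell state $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$; the gate matrices used are the standard ones:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                # CNOT, control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.array([1, 0, 0, 0])           # the state |00>
psi = CNOT @ (np.kron(H, I) @ psi)     # H on qubit 0, then CNOT

# Amplitudes [1/sqrt(2), 0, 0, 1/sqrt(2)]: the Bell state (|00> + |11>)/sqrt(2)
print(psi)
```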
Conclusion¶
Quantum states are manipulated using quantum gates, and matrix multiplication can be simulated with quantum operations.
Transpose¶
The transpose operation, denoted as $A^T$, is essentially a reflection of the matrix across the diagonal: $(A^T)_{i,j} = A_{j,i}$.
Given an $n \times m$ matrix $A$, its transpose is the $m \times n$ matrix $A^T$, such that if:
$$A = \begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$
then:
$$A^T = \begin{bmatrix} x_{0,0} & x_{1,0} & \dotsb & x_{n-1,0} \\ x_{0,1} & x_{1,1} & \dotsb & x_{n-1,1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{0,m-1} & x_{1,m-1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$
For example:
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}$$
A symmetric matrix is a square matrix which equals its own transpose: $A = A^T$. To put it another way, it has reflection symmetry (hence the name) across the main diagonal. For example, the following matrix is symmetric:
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix}$$
The transpose of a matrix product is equal to the product of transposed matrices, taken in reverse order:
$$(AB)^T = B^TA^T$$
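In NumPy, the transpose is the `.T` attribute. A quick sketch of my own reproducing the example above and checking the product rule $(AB)^T = B^T A^T$:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])                # 3 x 2

print(A.T)                            # [[1 3 5], [2 4 6]]

B = np.array([[1, 0], [2, 1]])        # a 2 x 2 matrix so AB is defined
print(np.array_equal((A @ B).T, B.T @ A.T))   # True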
Conjugate¶
The next important single-matrix operation is the matrix conjugate, denoted as $\overline{A}$. This, as the name might suggest, involves taking the complex conjugate of every element of the matrix: if
$$A = \begin{bmatrix} x_{0,0} & x_{0,1} & \dotsb & x_{0,m-1} \\ x_{1,0} & x_{1,1} & \dotsb & x_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-1,0} & x_{n-1,1} & \dotsb & x_{n-1,m-1} \end{bmatrix}$$
Then:
$$\overline{A} = \begin{bmatrix} \overline{x}_{0,0} & \overline{x}_{0,1} & \dotsb & \overline{x}_{0,m-1} \\ \overline{x}_{1,0} & \overline{x}_{1,1} & \dotsb & \overline{x}_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{x}_{n-1,0} & \overline{x}_{n-1,1} & \dotsb & \overline{x}_{n-1,m-1} \end{bmatrix}$$
The conjugate of a matrix product equals the product of the conjugates of the matrices:
$$\overline{AB} = (\overline{A})(\overline{B})$$
Adjoint¶
The final important single-matrix operation is a combination of the above two. The conjugate transpose, also called the adjoint of matrix $A$, is defined as $A^\dagger = \overline{(A^T)} = (\overline{A})^T$.
A matrix is known as Hermitian or self-adjoint if it equals its own adjoint: $A = A^\dagger$. For example, the following matrix is Hermitian:
$$\begin{bmatrix} 1 & i \\ -i & 2 \end{bmatrix}$$
The adjoint of a matrix product can be calculated as follows:
$$(AB)^\dagger = B^\dagger A^\dagger$$
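In NumPy, the conjugate is `np.conj` (or `.conj()`), and the adjoint is the composition `.conj().T`. A sketch of my own verifying that the example matrix above is Hermitian:

```python
import numpy as np

A = np.array([[1, 1j],
              [-1j, 2]])

print(np.conj(A))                  # elementwise complex conjugate
adjoint = A.conj().T               # A^dagger = conjugate transpose
print(np.array_equal(A, adjoint))  # True: this matrix is Hermitian
```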
Unitary Matrices¶
Unitary matrices are very important for quantum computing. A matrix is unitary when it is invertible, and its inverse is equal to its adjoint: $U^{-1} = U^\dagger$. That is, an $n \times n$ square matrix $U$ is unitary if and only if $UU^\dagger = U^\dagger U = I_n$.
For example, the following matrix is unitary:
$$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{i}{\sqrt{2}} & \frac{-i}{\sqrt{2}} \\ \end{bmatrix}$$
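Unitarity is easy to check numerically: compute $UU^\dagger$ and $U^\dagger U$ and compare with $I$. A NumPy sketch of my own for the matrix above:

```python
import numpy as np

U = np.array([[1,  1],
              [1j, -1j]]) / np.sqrt(2)

# A matrix is unitary iff U U^dagger = U^dagger U = I.
print(np.allclose(U @ U.conj().T, np.eye(2)))   # True
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
```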
Advanced Operations¶
Inner Product¶
The inner product is yet another important matrix operation that is only applied to vectors. Given two vectors $V$ and $W$ of the same size, their inner product $\langle V , W \rangle$ is defined as a product of matrices $V^\dagger$ and $W$:
$$\langle V , W \rangle = V^\dagger W$$
Let's break this down so it's a bit easier to understand. A $1 \times n$ matrix (the adjoint of an $n \times 1$ vector) multiplied by an $n \times 1$ vector results in a $1 \times 1$ matrix (which is equivalent to a scalar). The result of an inner product is that scalar.
To put it another way, to calculate the inner product of two vectors, take the corresponding elements $V_k$ and $W_k$, multiply the complex conjugate of $V_k$ by $W_k$, and add up those products:
$$\langle V , W \rangle = \sum_{k=0}^{n-1}\overline{V_k}W_k$$
Here is a simple example:
$$\langle \begin{bmatrix} -6 \\ 9i \end{bmatrix} , \begin{bmatrix} 3 \\ -8 \end{bmatrix} \rangle = \begin{bmatrix} -6 \\ 9i \end{bmatrix}^\dagger \begin{bmatrix} 3 \\ -8 \end{bmatrix} = \begin{bmatrix} -6 & -9i \end{bmatrix} \begin{bmatrix} 3 \\ -8 \end{bmatrix} = (-6) \cdot (3) + (-9i) \cdot (-8) = -18 + 72i$$
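NumPy's `np.vdot` computes exactly this inner product, conjugating its first argument. A sketch of my own reproducing the example:

```python
import numpy as np

V = np.array([-6, 9j])
W = np.array([3, -8])

# np.vdot conjugates the first argument, matching <V, W> = V^dagger W.
print(np.vdot(V, W))     # (-18+72j)

# Equivalent explicit form:
print(V.conj() @ W)      # (-18+72j)
```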
Normalized vectors.¶
Input: A non-zero $n \times 1$ vector $V$.
Output: Return an $n \times 1$ vector $\frac{V}{||V||}$ - the normalized version of the vector $V$.
For more, a video explanation can be found here. Note that when this method is used with complex vectors, the norm is $||V|| = \sqrt{\sum_{k} |V_k|^2}$, i.e. you take the modulus of each complex component before squaring.
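The exercise can be sketched in a few lines of NumPy (my own implementation, under the definition $||V|| = \sqrt{\sum_k |V_k|^2}$):

```python
import numpy as np

def normalize(v):
    """Return v / ||v||, where ||v|| = sqrt(sum of |v_k|^2)."""
    norm = np.sqrt(np.sum(np.abs(v) ** 2))  # np.abs handles complex entries
    return v / norm

v = np.array([3, 4j])                        # ||v|| = sqrt(9 + 16) = 5
print(normalize(v))                          # components 0.6 and 0.8j
print(np.sum(np.abs(normalize(v)) ** 2))     # 1.0: the result is normalized
```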
Outer Product¶
The outer product of two vectors $V$ and $W$ is defined as $VW^\dagger$. That is, the outer product of an $n \times 1$ vector and an $m \times 1$ vector is an $n \times m$ matrix. If we denote the outer product of $V$ and $W$ as $X$, then $X_{i,j} = V_i \cdot \overline{W_j}$.
Here is a simple example: outer product of $\begin{bmatrix} -3i \\ 9 \end{bmatrix}$ and $\begin{bmatrix} 9i \\ 2 \\ 7 \end{bmatrix}$ is:
$$\begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix} \begin{bmatrix} \color{red} {9i} \\ \color{red} 2 \\ \color{red} 7 \end{bmatrix}^\dagger = \begin{bmatrix} \color{blue} {-3i} \\ \color{blue} 9 \end{bmatrix} \begin{bmatrix} \color{red} {-9i} & \color{red} 2 & \color{red} 7 \end{bmatrix} = \begin{bmatrix} \color{blue} {-3i} \cdot \color{red} {(-9i)} & \color{blue} {-3i} \cdot \color{red} 2 & \color{blue} {-3i} \cdot \color{red} 7 \\ \color{blue} 9 \cdot \color{red} {(-9i)} & \color{blue} 9 \cdot \color{red} 2 & \color{blue} 9 \cdot \color{red} 7 \end{bmatrix} = \begin{bmatrix} -27 & -6i & -21i \\ -81i & 18 & 63 \end{bmatrix}$$
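NumPy's `np.outer` computes $X_{i,j} = V_i \cdot W_j$, so to match the definition $VW^\dagger$ we conjugate $W$ first. A sketch of my own reproducing the example:

```python
import numpy as np

V = np.array([-3j, 9])
W = np.array([9j, 2, 7])

X = np.outer(V, W.conj())   # X[i, j] = V[i] * conj(W[j]), i.e. V W^dagger
print(X)                    # a 2 x 3 matrix, matching the worked example
```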
Tensor Product¶
The tensor product is a different way of multiplying matrices. Rather than multiplying rows by columns, the tensor product multiplies the second matrix by every element of the first matrix.
Given $n \times m$ matrix $A$ and $k \times l$ matrix $B$, their tensor product $A \otimes B$ is an $(n \cdot k) \times (m \cdot l)$ matrix defined as follows:
$$A \otimes B = \begin{bmatrix} A_{0,0} \cdot B & A_{0,1} \cdot B & \dotsb & A_{0,m-1} \cdot B \\ A_{1,0} \cdot B & A_{1,1} \cdot B & \dotsb & A_{1,m-1} \cdot B \\ \vdots & \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot B & A_{n-1,1} \cdot B & \dotsb & A_{n-1,m-1} \cdot B \end{bmatrix} = \begin{bmatrix} A_{0,0} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} & \dotsb & A_{0,m-1} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} \\ \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot \color{blue} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {\begin{bmatrix}B_{0,0} & \dotsb & B_{0,l-1} \\ \vdots & \ddots & \vdots \\ B_{k-1,0} & \dotsb & B_{k-1,l-1} \end{bmatrix}} \end{bmatrix} =$$ $$= \begin{bmatrix} A_{0,0} \cdot \color{red} {B_{0,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{0,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{0,l-1}} \\ \vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\ A_{0,0} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{0,0} \cdot \color{red} {B_{k-1,l-1}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{0,m-1} \cdot \color{blue} {B_{k-1,l-1}} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ A_{n-1,0} \cdot \color{blue} {B_{0,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{0,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{0,l-1}} \\ \vdots & \ddots & \vdots & \dotsb & \vdots & \ddots & \vdots \\ A_{n-1,0} \cdot \color{blue} {B_{k-1,0}} & \dotsb & A_{n-1,0} \cdot \color{blue} {B_{k-1,l-1}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,0}} & \dotsb & A_{n-1,m-1} \cdot \color{red} {B_{k-1,l-1}} \end{bmatrix}$$
Here is a simple example:
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 2 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \\ 3 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} & 4 \cdot \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 1 \cdot 5 & 1 \cdot 6 & 2 \cdot 5 & 2 \cdot 6 \\ 1 \cdot 7 & 1 \cdot 8 & 2 \cdot 7 & 2 \cdot 8 \\ 3 \cdot 5 & 3 \cdot 6 & 4 \cdot 5 & 4 \cdot 6 \\ 3 \cdot 7 & 3 \cdot 8 & 4 \cdot 7 & 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 5 & 6 & 10 & 12 \\ 7 & 8 & 14 & 16 \\ 15 & 18 & 20 & 24 \\ 21 & 24 & 28 & 32 \end{bmatrix}$$
Notice that the tensor product of two vectors is another vector: if $V$ is an $n \times 1$ vector, and $W$ is an $m \times 1$ vector, $V \otimes W$ is an $(n \cdot m) \times 1$ vector.
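NumPy implements the tensor (Kronecker) product as `np.kron`. A sketch of my own reproducing the worked example:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# The (2*2) x (2*2) = 4 x 4 Kronecker product, matching the example above.
print(np.kron(A, B))
```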
Eigenvalues and Eigenvectors¶
Consider the following example of multiplying a matrix by a vector:
$$\begin{bmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \\ 8 \end{bmatrix}$$
Notice that the resulting vector is just the initial vector multiplied by a scalar (in this case 4). This behavior is so noteworthy that it is described using a special set of terms.
Given a nonzero $n \times n$ matrix $A$, a nonzero vector $V$, and a scalar $x$, if $AV = xV$, then $x$ is an eigenvalue of $A$, and $V$ is an eigenvector of $A$ corresponding to that eigenvalue.
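Eigenvalues and eigenvectors can be computed with `np.linalg.eig`. A sketch of my own checking the example above, where $4$ is an eigenvalue with eigenvector $(1, 1, 2)$:

```python
import numpy as np

A = np.array([[1, -3, 3],
              [3, -5, 3],
              [6, -6, 4]])
v = np.array([1, 1, 2])

print(A @ v)                                # [4 4 8] = 4 * v, so 4 is an eigenvalue

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                          # 4 appears among the eigenvalues
```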
The properties of eigenvalues and eigenvectors are used extensively in quantum computing. You can learn more about eigenvalues, eigenvectors, and their properties at Wolfram MathWorld or on Wikipedia.
For a more in-depth understanding of linear algebra as a whole, you can review this YouTube video.
A special thanks to the creator Monit Sharma, whose posts have helped me a lot. You can visit his website Linear Algebra for more in-depth numerical and computational linear algebra.