Effie's question via email about Eigenvalues, Eigenvectors and Diagonalisation

In summary: the matrix $\displaystyle \begin{align*} A = \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \end{align*}$ has eigenvalues $\displaystyle \begin{align*} \lambda_1 = -3 \end{align*}$ and $\displaystyle \begin{align*} \lambda_2 = 2 \end{align*}$, with eigenvectors $\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] \end{align*}$ and $\displaystyle \begin{align*} \left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix} \right] \end{align*}$ respectively. These eigenvectors form the columns of a modal matrix $\displaystyle \begin{align*} M \end{align*}$ that diagonalises $\displaystyle \begin{align*} A \end{align*}$ via $\displaystyle \begin{align*} D = M^{-1}\,A\,M \end{align*}$.
  • #1
Prove It
Effie has correctly found that the eigenvalues of $\displaystyle \begin{align*} A = \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \end{align*}$ are $\displaystyle \begin{align*} \lambda_1 = -3 \end{align*}$ and $\displaystyle \begin{align*} \lambda_2 = 2 \end{align*}$. To find the eigenvectors we solve $\displaystyle \begin{align*} A \,\mathbf{x} = \lambda \, \mathbf{x} \end{align*}$ for each $\displaystyle \begin{align*} \lambda \end{align*}$. For $\displaystyle \begin{align*} \lambda_1 \end{align*}$ we have

$\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= -3\,\left[ \begin{matrix} x \\ y \end{matrix} \right] \\ \left[ \begin{matrix} \phantom{-}6 & \phantom{-}2 \\ -3 & -1 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \\ \left[ \begin{matrix} 6 & 2 \\ 0 & 0 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \textrm{ after adding half of row 1 to row 2...} \end{align*}$

We can see that $\displaystyle \begin{align*} 6\,x + 2\,y = 0 \implies y = -3\,x \end{align*}$, so by letting $\displaystyle \begin{align*} x = t \end{align*}$ where $\displaystyle \begin{align*} t \in \mathbf{R} \end{align*}$ we find that the eigenvectors are of the family $\displaystyle \begin{align*} t\,\left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] \end{align*}$. We only need one of these eigenvectors to diagonalise the matrix, so $\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] \end{align*}$ will do.
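As a quick check, $\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] = \left[ \begin{matrix} -3 \\ \phantom{-}9 \end{matrix} \right] = -3\,\left[ \begin{matrix} \phantom{-}1 \\ -3 \end{matrix} \right] \end{align*}$, as required.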

For $\displaystyle \begin{align*} \lambda_2 \end{align*}$ we have

$\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= 2\,\left[ \begin{matrix} x \\ y \end{matrix} \right] \\ \left[ \begin{matrix} \phantom{-}1 & \phantom{-}2 \\ -3 & -6 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \\ \left[ \begin{matrix} 1 & 2 \\ 0 & 0 \end{matrix} \right] \left[ \begin{matrix} x \\ y \end{matrix} \right] &= \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] \textrm{ after adding three times row 1 to row 2...} \end{align*}$

We can see that $\displaystyle \begin{align*} x + 2\,y = 0 \implies x = -2\,y \end{align*}$. If we let $\displaystyle \begin{align*} y = s \end{align*}$ where $\displaystyle \begin{align*} s \in \mathbf{R} \end{align*}$ we find that the eigenvectors are of the family $\displaystyle \begin{align*} s\,\left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix} \right] \end{align*}$. We only need one of these eigenvectors to diagonalise the matrix, so $\displaystyle \begin{align*} \left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix}\right] \end{align*}$ will do.
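Checking this one as well: $\displaystyle \begin{align*} \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix} \right] = \left[ \begin{matrix} -4 \\ \phantom{-}2 \end{matrix} \right] = 2\,\left[ \begin{matrix} -2 \\ \phantom{-}1 \end{matrix} \right] \end{align*}$, as required.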

So a modal matrix, whose columns are made up of the eigenvectors, is $\displaystyle \begin{align*} M = \left[ \begin{matrix} \phantom{-}1 & -2 \\ -3 & \phantom{-}1 \end{matrix} \right] \end{align*}$. The spectral (diagonal) matrix has the corresponding eigenvalues on the main diagonal and 0 everywhere else, so $\displaystyle \begin{align*} D = \left[ \begin{matrix} -3 & 0 \\ \phantom{-}0 & 2 \end{matrix} \right] \end{align*}$. We can show that $\displaystyle \begin{align*} D = M^{-1} \, A \, M \end{align*}$...

$\displaystyle \begin{align*} M^{-1} &= \frac{1}{1 \cdot 1 - \left( -2 \right) \cdot \left( -3 \right) } \, \left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \\ &= -\frac{1}{5}\,\left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \\ \\ M^{-1} \, A \, M &= -\frac{1}{5}\,\left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \left[ \begin{matrix} \phantom{-}1 & -2 \\ -3 & \phantom{-}1 \end{matrix} \right] \\ &= -\frac{1}{5}\,\left[ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right] \left[ \begin{matrix} -3 & -4 \\ \phantom{-}9 & \phantom{-}2 \end{matrix} \right] \\ &= -\frac{1}{5} \, \left[ \begin{matrix} 15 & \phantom{-}0 \\ 0 & -10 \end{matrix} \right] \\ &= \left[ \begin{matrix} -3 & 0 \\ \phantom{-}0 & 2 \end{matrix} \right] \\ &= D \end{align*}$
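For anyone who wants to check this numerically, here is a minimal sketch using NumPy (not part of the original question; the matrices are exactly the $A$, $M$ and $D$ above):

```python
import numpy as np

# Matrix from the question
A = np.array([[ 3.0,  2.0],
              [-3.0, -4.0]])

# Modal matrix M (columns are the eigenvectors found above) and
# spectral matrix D (the eigenvalues on the diagonal, in the same order)
M = np.array([[ 1.0, -2.0],
              [-3.0,  1.0]])
D = np.diag([-3.0, 2.0])

# Check the diagonalisation D = M^{-1} A M
check = np.linalg.inv(M) @ A @ M
print(check)                     # close to [[-3, 0], [0, 2]]
print(np.allclose(check, D))     # True

# np.linalg.eig returns the same eigenvalues (possibly in a different
# order), with unit-length eigenvectors as the columns of the second output
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)               # e.g. [ 2. -3.]
```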
 
  • #2
Prove It said:
Effie has correctly found that the eigenvalues of $\displaystyle \begin{align*} A = \left[ \begin{matrix} \phantom{-}3 & \phantom{-}2 \\ -3 & -4 \end{matrix} \right] \end{align*}$ are $\displaystyle \begin{align*} \lambda_1 = -3 \end{align*}$ and $\displaystyle \begin{align*} \lambda_2 = 2 \end{align*}$...
Correct! To find the eigenvalues, the learner first has to set up what I call the characteristic equation, ##(3-\lambda)(-4-\lambda)+6=0##...
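For completeness, expanding that equation gives $\displaystyle \begin{align*} (3 - \lambda)(-4 - \lambda) + 6 = \lambda^2 + \lambda - 6 = (\lambda + 3)(\lambda - 2) = 0 \end{align*}$, which is where $\lambda_1 = -3$ and $\lambda_2 = 2$ come from.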
 

FAQ: Effie's question via email about Eigenvalues, Eigenvectors and Diagonalisation

What are Eigenvalues and Eigenvectors?

Eigenvalues and eigenvectors are concepts in linear algebra that describe how a linear transformation acts. An eigenvector of a matrix $A$ is a nonzero vector $\mathbf{x}$ whose direction is unchanged by the transformation, so that $A\,\mathbf{x} = \lambda\,\mathbf{x}$, and the corresponding eigenvalue $\lambda$ is the factor by which that vector is scaled.

How are Eigenvalues and Eigenvectors related to Diagonalisation?

Diagonalisation is the process of finding an invertible matrix $M$, whose columns are eigenvectors of the original matrix $A$, such that $D = M^{-1}\,A\,M$ is a diagonal matrix. The diagonal entries of $D$ are the eigenvalues of $A$, in the same order as the corresponding eigenvector columns of $M$.

What is the importance of Eigenvalues, Eigenvectors, and Diagonalisation?

Eigenvalues, eigenvectors, and diagonalisation are important in many areas of mathematics and science, including linear algebra, differential equations, and physics. They are used to describe the behavior of linear systems, such as in quantum mechanics and electrical circuits.

How do you calculate Eigenvalues and Eigenvectors?

To calculate eigenvalues and eigenvectors, first form the characteristic polynomial $\det(A - \lambda I)$ and solve $\det(A - \lambda I) = 0$ for the eigenvalues; for a 2x2 matrix this is a quadratic equation, so the quadratic formula applies. Then, for each eigenvalue $\lambda$, solve the linear system $(A - \lambda I)\,\mathbf{x} = \mathbf{0}$, for example by Gaussian elimination, to find the corresponding eigenvectors.
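As a concrete illustration of that recipe, here is a minimal NumPy sketch using the 2x2 matrix from this thread (the rank-1 shortcut in the loop is an assumption that suits this particular example and mirrors the hand calculation above; it is not a general eigenvector routine):

```python
import numpy as np

A = np.array([[ 3.0,  2.0],
              [-3.0, -4.0]])

# For a 2x2 matrix the characteristic polynomial det(A - lambda*I) is
# lambda^2 - trace(A)*lambda + det(A); its roots are the eigenvalues.
coefficients = [1.0, -np.trace(A), np.linalg.det(A)]
eigenvalues = np.roots(coefficients)          # [-3., 2.] in some order

# Each eigenvector spans the null space of A - lambda*I. Here that matrix
# has rank 1, so if its first row is [a, b], then [-b, a] solves
# (A - lambda*I) x = 0 and serves as an (unnormalised) eigenvector.
for lam in eigenvalues:
    a, b = (A - lam * np.eye(2))[0]
    print(lam, np.array([-b, a]))
```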

Are there any real-world applications of Eigenvalues, Eigenvectors, and Diagonalisation?

Yes, there are many real-world applications of Eigenvalues, Eigenvectors, and Diagonalisation. Some examples include image and signal processing, data compression, and principal component analysis in statistics. They are also used in engineering and physics to solve problems involving linear systems.
