Proving the Diagonalization of a Real Matrix with Distinct Eigenvalues

In summary, the real matrix [itex]A= \begin{pmatrix}\alpha & \beta \\ 1 & 0 \end{pmatrix}[/itex] has distinct eigenvalues [itex]\lambda_1[/itex] and [itex]\lambda_2[/itex]. If [itex]P=\begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 1 \end{pmatrix}[/itex], then [itex]P^{-1}AP = D = \begin{pmatrix}\lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}[/itex]. This can be proven by showing that the columns of [itex]P[/itex] are the eigenvectors of [itex]A[/itex]. Additionally, the characteristic equation for [itex]A[/itex] is [itex]x^2-\alpha x-\beta=0[/itex], which can be used to simplify the entries of the product [itex]P^{-1}AP[/itex].
  • #1
gtfitzpatrick
The real matrix [tex]A = \begin{pmatrix}\alpha & \beta \\ 1 & 0 \end{pmatrix}[/tex] has distinct eigenvalues [itex]\lambda_1[/itex] and [itex]\lambda_2[/itex].
If [tex]P = \begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 0 \end{pmatrix}[/tex]

prove that [itex]P^{-1}AP = D = \mathrm{diag}(\lambda_1, \lambda_2)[/itex].

Deduce that, for every positive integer m, [itex]A^m = PD^{m}P^{-1}[/itex].


So I just tried to multiply the whole lot out ([itex]P^{-1}[/itex] is easy to find: just swap and change signs), and I got
[tex]\begin{pmatrix}\lambda_1(\alpha - \lambda_2) + \beta & \lambda_2(\alpha - \lambda_2) + \beta \\ \lambda_1(-\alpha + \lambda_1) - \beta & \lambda_2(-\alpha + \lambda_1) - \beta \end{pmatrix}[/tex]

Am I going the right road with this, or should I be approaching it differently?
 
  • #2
HallsofIvy

Just doing the calculation should show that, but I don't get what you got for the calculation.

If
[tex]P= \begin{bmatrix}\lambda_1 & \lambda_2 \\ 1 & 0\end{bmatrix}[/tex]
then
[tex]P^{-1}= \begin{bmatrix}0 & 1 \\ \frac{1}{\lambda_2} & -\frac{\lambda_1}{\lambda_2}\end{bmatrix}[/tex]

Is that what you got?

You will also want to use the fact that the characteristic equation for A is [itex]x^2- \alpha x- \beta= 0[/itex] so [itex]\lambda_1^2- \alpha \lambda_1- \beta= 0[/itex] and [itex]\lambda_2^2- \alpha \lambda_2- \beta= 0[/itex].
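As a quick check, that characteristic polynomial comes straight from expanding the [itex]2 \times 2[/itex] determinant:

[tex]\det(A - xI) = \begin{vmatrix}\alpha - x & \beta \\ 1 & -x\end{vmatrix} = (\alpha - x)(-x) - \beta = x^2 - \alpha x - \beta.[/tex]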
 
  • #3


HallsofIvy said:
Just doing the calculation should show that, but I don't get what you got for the calculation.

If
[tex]P= \begin{bmatrix}\lambda_1 & \lambda_2 \\ 1 & 0\end{bmatrix}[/tex]
then
[tex]P^{-1}= \begin{bmatrix}0 & 1 \\ \frac{1}{\lambda_2} & -\frac{\lambda_1}{\lambda_2}\end{bmatrix}[/tex]

Is that what you got?

Nice reply!

HallsofIvy said:
You will also want to use the fact that the characteristic equation for A is [itex]x^2- \alpha x- \beta= 0[/itex] so [itex]\lambda_1^2- \alpha \lambda_1- \beta= 0[/itex] and [itex]\lambda_2^2- \alpha \lambda_2- \beta= 0[/itex].

Perhaps something like:

[tex]\lambda_{1} = \frac{\alpha + \sqrt{\alpha^{2}+4\beta}}{2}[/tex]

[tex]\lambda_{2} = \frac{\alpha - \sqrt{\alpha^{2}+4\beta}}{2}[/tex]

Not sure, though, how it will solve the problem.
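One consequence worth noting here (a small addition for reference): since [itex]\lambda_1[/itex] and [itex]\lambda_2[/itex] are the roots of [itex]x^2 - \alpha x - \beta = 0[/itex], the root relations give

[tex]\lambda_1 + \lambda_2 = \alpha, \qquad \lambda_1\lambda_2 = -\beta,[/tex]

and these are what eventually collapse the entries of [itex]P^{-1}AP[/itex] to the diagonal form.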
 
  • #4
Random Variable

Couldn't you prove it by showing that the columns of P are the eigenvectors of A?
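For reference, here is why that approach works, as a sketch: if [itex]v_1[/itex] and [itex]v_2[/itex] are eigenvectors of [itex]A[/itex] for the distinct eigenvalues [itex]\lambda_1[/itex] and [itex]\lambda_2[/itex], and [itex]P[/itex] has them as its columns, then

[tex]AP = \begin{bmatrix}Av_1 & Av_2\end{bmatrix} = \begin{bmatrix}\lambda_1 v_1 & \lambda_2 v_2\end{bmatrix} = PD,[/tex]

and [itex]P[/itex] is invertible because eigenvectors for distinct eigenvalues are linearly independent, so [itex]P^{-1}AP = D[/itex].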
 
  • #5


Yes, if you already have that theorem. But the obvious way to do it is to just do the product [itex]P^{-1}AP[/itex].
 
  • #6


Thanks for all the replies. I had it wrong when getting [itex]P^{-1}[/itex]: I only went and forgot the [itex]1/\det P[/itex] factor!

So for [itex]P^{-1}AP[/itex] I get
[tex]\begin{bmatrix}\lambda_1 & \lambda_1 \\ \alpha-\frac{\lambda_1^2}{\lambda_2}+\frac{\alpha}{\lambda_1} & \alpha - \frac{\lambda_1^2}{\lambda_2}\end{bmatrix}[/tex]

I thought [itex]\mathrm{diag}(\lambda_1,\lambda_2)[/itex] meant
[tex]D= \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}[/tex]

I still think I am doing something wrong.
Confused!
 
  • #7


After repeatedly trying and not getting anywhere, I've just realised I had the question wrong. P should read

[tex]\begin{pmatrix}\lambda_1 & \lambda_2 \\ 1 & 1 \end{pmatrix}[/tex]

So I am off to try this new version.

But if I wanted to prove it like you said, Random Variable, how would I go about that? Take
[itex]\lambda_1^2- \alpha \lambda_1- \beta= 0[/itex] and
[itex]\lambda_2^2- \alpha \lambda_2- \beta= 0[/itex]
and relate them to the columns of [itex]P[/itex]?
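
Roughly (a sketch along those lines, using the corrected [itex]P[/itex] with second row [itex](1,\ 1)[/itex]): the characteristic equation gives [itex]\lambda_i^2 = \alpha\lambda_i + \beta[/itex], so

[tex]A\begin{pmatrix}\lambda_i \\ 1\end{pmatrix} = \begin{pmatrix}\alpha & \beta \\ 1 & 0\end{pmatrix}\begin{pmatrix}\lambda_i \\ 1\end{pmatrix} = \begin{pmatrix}\alpha\lambda_i + \beta \\ \lambda_i\end{pmatrix} = \begin{pmatrix}\lambda_i^2 \\ \lambda_i\end{pmatrix} = \lambda_i\begin{pmatrix}\lambda_i \\ 1\end{pmatrix},[/tex]

so each column of [itex]P[/itex] really is an eigenvector of [itex]A[/itex], and the argument from post #4 applies.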
 
  • #8


Right, so now I got
[tex]\frac{1}{\lambda_1-\lambda_2}\begin{bmatrix}\lambda_1(\alpha - \lambda_2) + \beta & \lambda_2(\alpha - \lambda_2) + \beta \\ \lambda_1(-\alpha + \lambda_1) - \beta & \lambda_2(-\alpha + \lambda_1) - \beta\end{bmatrix}[/tex]

Not sure where to go from here...
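
To finish from here (a sketch using the root relations noted earlier, [itex]\alpha = \lambda_1 + \lambda_2[/itex] and [itex]\beta = -\lambda_1\lambda_2[/itex]): substituting them into each entry gives

[tex]\frac{1}{\lambda_1-\lambda_2}\begin{bmatrix}\lambda_1(\lambda_1 - \lambda_2) & 0 \\ 0 & \lambda_2(\lambda_1 - \lambda_2)\end{bmatrix} = \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2\end{bmatrix} = D.[/tex]

The power formula then follows by telescoping: writing [itex]A = PDP^{-1}[/itex],

[tex]A^m = (PDP^{-1})(PDP^{-1})\cdots(PDP^{-1}) = PD^{m}P^{-1},[/tex]

since every interior [itex]P^{-1}P[/itex] cancels.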
 

FAQ: Proving the Diagonalization of a Real Matrix with Distinct Eigenvalues

What is the distinct eigenvalues problem?

The distinct eigenvalues problem is a mathematical problem that involves finding the eigenvalues and eigenvectors of a given matrix. Eigenvalues are special numbers that represent the scaling factor of a vector when it is multiplied by a matrix, and eigenvectors are the corresponding vectors that do not change direction during the transformation. In the distinct eigenvalues problem, the matrix has distinct, or different, eigenvalues, which makes the problem easier to solve compared to when the matrix has repeated eigenvalues.

Why is the distinct eigenvalues problem important?

The distinct eigenvalues problem is important because it has many practical applications in fields such as physics, engineering, and computer science. It is used to analyze and solve problems involving linear transformations, such as determining the stability of a system or finding the optimal solution to a system of equations. It is also a fundamental concept in linear algebra and is essential in understanding more advanced topics, such as diagonalization and Jordan canonical form.

How is the distinct eigenvalues problem solved?

The distinct eigenvalues problem is typically solved by finding the characteristic polynomial of the matrix, which is a polynomial function that has the eigenvalues as its roots. The eigenvalues can then be found by solving the characteristic polynomial, and the corresponding eigenvectors can be found by plugging each eigenvalue into the original matrix and solving for the eigenvector using Gaussian elimination or other methods.
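
A brief illustrative example (not from the thread above): for [itex]A = \begin{pmatrix}2 & 1 \\ 1 & 2\end{pmatrix}[/itex], the characteristic polynomial is [itex](2-x)^2 - 1 = x^2 - 4x + 3[/itex], with distinct roots [itex]\lambda_1 = 3[/itex] and [itex]\lambda_2 = 1[/itex]; solving [itex](A - \lambda I)v = 0[/itex] then gives the eigenvectors [itex](1, 1)^{T}[/itex] and [itex](1, -1)^{T}[/itex].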

What are the properties of matrices with distinct eigenvalues?

Matrices with distinct eigenvalues have a number of important properties, such as being diagonalizable and having linearly independent eigenvectors. This means that the matrix can be transformed into a diagonal matrix using a change of basis, and the eigenvectors can be used to form a basis for the vector space. For such matrices the eigenvectors are also unique up to scalar multiples, which makes them easier to analyze than matrices with repeated eigenvalues.

Can a matrix have an infinite number of distinct eigenvalues?

No, a matrix can have at most n distinct eigenvalues, where n is the dimension of the matrix. This is because the characteristic polynomial of a matrix is a polynomial function of degree n, and a polynomial can have at most n distinct roots. However, a matrix can have fewer than n distinct eigenvalues, depending on the values of the entries in the matrix.
