Why do we need generalized eigenvectors for matrices with repeated eigenvalues?

In summary: if an n×n matrix has n linearly independent eigenvectors, you can factor it as [tex]A = S \Lambda S^{-1}[/tex], where S holds the eigenvectors and [tex]\Lambda[/tex] is a diagonal matrix of the eigenvalues. If an eigenvalue is repeated and the matrix does not have enough independent eigenvectors, you fill out S with generalized eigenvectors, and in place of [tex]\Lambda[/tex] you get a Jordan matrix J with the eigenvalues on the diagonal and ones on the superdiagonal, so that [tex]A = S J S^{-1}[/tex]. The Jordan form is useful because it gives a standard form for every (complex) matrix and is the closest a non-diagonalizable matrix can get to being diagonal.
  • #1
daviddoria
So I understand that if an n×n matrix has n distinct eigenvalues, you can diagonalize the matrix as [tex]S \Lambda S^{-1}[/tex]. This is important because this form has lots of good properties (it's easy to raise to powers, etc.).
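
For concreteness, a minimal numpy sketch (a toy illustration, not from any particular text) of why this factorization makes powers easy:

```python
import numpy as np

# A toy 2x2 matrix with distinct eigenvalues (5 and 2)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of S are eigenvectors; Lam is the diagonal eigenvalue matrix
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)

# The factorization A = S Lam S^{-1} holds
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))

# Raising A to a power only requires powering the diagonal entries
A10 = S @ np.diag(eigvals**10) @ np.linalg.inv(S)
assert np.allclose(A10, np.linalg.matrix_power(A, 10))
```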

So when there are not n distinct eigenvalues, you then solve
[tex](A-\lambda_n I)x_n = x_{n-1} [/tex]

Why is this exactly? Also, it is then true that [tex](A-\lambda_n I)^2 x = 0 [/tex]. I don't follow why that is either.

I believe all this has to do with the Jordan form. I read this http://en.wikipedia.org/wiki/Jordan_form but I didn't follow some of it. Under the "Example" section, it says "the equation [tex] Av = v[/tex] should be solved". What is that equation for?

I am an EE, not a mathematician, so please keep your responses at my level! haha. I'm just looking for a "what's the point" kind of explanation.

Thanks!

Dave
 
  • #2
Play with an example. The matrix

[tex]\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right)[/tex]

is one of the simplest matrices that exhibits 'bad' behavior. How does it act on vectors? What are the most interesting features of that action?
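
(If it helps, a few lines of numpy make that experiment concrete; this is just a sketch of the suggested play:)

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

v = np.array([3.0, 5.0])   # an arbitrary vector (a, b)
print(A @ v)               # [5. 0.]   the second component shifts into the first
print(A @ (A @ v))         # [0. 0.]   a second application kills everything

# Both eigenvalues are 0, and numerically the two eigenvector columns
# come out parallel: there is only one eigendirection, (1, 0)
w, V = np.linalg.eig(A)
print(w)
print(V)
```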
 
  • #3
So any vector (a, b) multiplied by your A will produce (b, 0). The eigenvalues are both 0. I actually don't know how to find the eigenvector associated with eigenvalue 0, because [tex]\det(A-0I) = 0[/tex] just results in 0 = 0. So this just means that there are no vectors which, when multiplied by A, don't change direction?

Then what is the meaning of a generalized eigenvector, since we just decided that there are no vectors which remain unchanged?

Thanks,
Dave
 
  • #4
The eigenvectors of A are all vectors of the form (a, b) with b = 0.

A noteworthy feature of A is that it 'shifts' vectors backwards: (a,b) -> (b,0) -> (0,0). This idea is somewhat captured in the notion of generalized eigenvectors. The theory of Jordan forms tells us that every (complex) matrix can be written, in an essentially unique way, as the sum of a diagonalizable matrix and a 'shift' (a nilpotent matrix). The presence of a shift in this decomposition is essentially what prevents a matrix from being diagonalizable.
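
To see the shift and the generalized eigenvector side by side, a short numpy sketch (same A as above):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

e1 = np.array([1.0, 0.0])   # ordinary eigenvector: A e1 = 0 = 0 * e1
e2 = np.array([0.0, 1.0])   # generalized eigenvector

# A shifts e2 onto e1 and e1 onto 0: the chain e2 -> e1 -> 0
print(A @ e2)               # [1. 0.]   that is e1
print(A @ e1)               # [0. 0.]

# So (A - 0*I) does not annihilate e2, but (A - 0*I)^2 does,
# which is exactly the (A - lambda I)^2 x = 0 condition from post #1
print(A @ A @ e2)           # [0. 0.]
```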
 
  • #5
I follow how (a,b) goes to (b,0), but then why does it go to (0,0)? Also, I don't know what you mean by "shifts". I guess I've been thinking of a matrix as being able to rotate and scale a vector - is that not correct?
And I still don't see how this relates back to the equations I wrote in the original post. Sorry if I'm a bit slow!

Dave
 
  • #6
One noteworthy feature of this matrix A is that it annihilates any vector of the form (a, 0). Can you say anything noteworthy about what it does to other vectors?


Also, A is made out of a reflection and a rescaling: one factorization is

[tex]
\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right)
=
\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)
\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)
[/tex]

(the left matrix in the product is diagonal: it rescales each component of a vector, though it degenerately scales the second component to zero)
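
(As a quick numerical sanity check of that factorization:)

```python
import numpy as np

D = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # diagonal rescaling, degenerate in the 2nd slot
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # swaps the two components
print(D @ P)                  # [[0. 1.]
                              #  [0. 0.]]   recovers A
```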


Incidentally, if the 0 eigenvalue is tripping you up, the matrix
[tex]
\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right)
[/tex]
has similar 'bad' behavior, but with the eigenvalue 1. I think the eigenvalue-0 case makes it somewhat easier to see what's happening, though, since there you're interested in null vectors (and generalized null vectors).
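
(A numerical sketch of that matrix's behavior:)

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Eigenvalue 1 with multiplicity 2, but only one eigendirection:
# numerically the two eigenvector columns come out parallel
w, V = np.linalg.eig(B)
print(w)                 # [1. 1.]
print(V)

# Subtracting the eigenvalue recovers the 'bad' matrix from post #2
print(B - 1.0 * np.eye(2))
```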
 
  • #7
I feel like I'm missing the point still. If a matrix has no repeated eigenvalues, then you can factor it into a matrix of eigenvectors, a diagonal matrix of eigenvalues, and the inverse of the eigenvector matrix. If there is a repeated eigenvalue, then you factor it into a matrix of eigenvectors and generalized eigenvectors, a Jordan matrix with the eigenvalues on the diagonal and ones on the superdiagonal, and the inverse of the eigenvector/generalized eigenvector matrix. The question is: why is this form important? In the first case you end up with a diagonal matrix, but in the second case you don't. I guess maybe I'm missing the importance of Jordan matrices? And I still don't understand how to find the generalized eigenvectors via the equation I showed in the original post.

Dave
 
  • #8
Jordan form is useful because it provides a standard representation form for ALL matrices. Compare this to the process of diagonalization, which only applies to certain matrices. Moreover, once you put a matrix into Jordan form, then in some sense that is the closest your matrix will get to being diagonal; in particular, a matrix is diagonalizable if and only if its Jordan form is a diagonal matrix.

If you want to learn how to find the Jordan form or the generalized eigenvectors of a given matrix, I recommend you look at some place better than Wikipedia! http://math.rice.edu/~friedl/math355_fall04/Jordan.pdf is a good place to start.
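
If you'd rather experiment first, sympy can compute the Jordan form directly; here is a minimal sketch (jordan_form returns P and J with A = P J P^{-1}; the example matrix is my own):

```python
from sympy import Matrix

A = Matrix([[3, 1],
            [-1, 1]])      # eigenvalue 2 repeated; A is not diagonalizable

P, J = A.jordan_form()     # A = P * J * P**-1
print(J)                   # Matrix([[2, 1], [0, 2]]): a single Jordan block
print(P)                   # columns: an eigenvector and a generalized eigenvector
print(P * J * P.inv())     # reproduces A
```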
 
  • #9
Ok I'm getting there...
So the question to ask first is "can the matrix be diagonalized?". If yes, find the eigenvectors and put them in S. Put the eigenvalues on the diagonal of Lambda. Then A = S*Lambda*inv(S).

If no, then we ask "What is the Jordan form of A?" and instead of Lambda we get a Jordan form matrix, and S has both eigenvectors and generalized eigenvectors.

I still don't understand why we do this, though:
[tex] (A - \lambda_n I)x_n = x_{n-1} [/tex]

With normal eigenvectors, we are saying we want to find the vector x for which when we multiply x by A we get a scaled version of x, namely [tex] Ax = \lambda x [/tex]

But now we are saying that when we multiply [tex]x_n[/tex] by [tex](A - \lambda_n I)[/tex], we get the previous vector in the chain, ultimately the eigenvector for the current eigenvalue? And are [tex]x_n[/tex] and [tex]x_{n-1}[/tex] orthogonal now?
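
For concreteness, here's that chain worked through numerically for the matrix from post #6 (a minimal sketch; note that nothing forces orthogonality):

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = B - 1.0 * np.eye(2)            # A - lambda*I for the eigenvalue 1

x1 = np.array([1.0, 0.0])          # eigenvector: N @ x1 = 0
x2 = np.array([0.0, 1.0])          # solves N @ x2 = x1 (by inspection)
print(N @ x2)                      # [1. 0.]   equals x1
print(N @ N @ x2)                  # [0. 0.]   so (A - I)^2 x2 = 0

# x1 and x2 happen to be orthogonal here, but they need not be:
# adding any multiple of x1 to x2 gives an equally valid solution
print(N @ (x2 + 3.0 * x1))         # still [1. 0.]
```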

Can anyone shed some light on this?

Thanks,
Dave
 
  • #10
I have completely explained this subject in my notes for math 4050 on my website.

Of course I realize few people care to do the work of reading them, and prefer to ask individual one-sentence questions.

But if you are the exception, be my guest.
 
  • #11
So here is my latest understanding:

A matrix will have repeated eigenvalues when the output space (Ax) has a lower dimension than the input space (x) (i.e., A takes any vector in R3 and puts it into a plane, a 2D subspace of R3). So we really only need 2 vectors to form a basis for the new space.

If that is correct, then why do we even need a third vector (the generalized eigenvector) at all?

Am I way off base here?

Thanks,
Dave
 

FAQ: Why do we need generalized eigenvectors for matrices with repeated eigenvalues?

What are generalized eigenvectors?

Generalized eigenvectors are vectors v that satisfy (A - λI)^k v = 0 for some positive integer k, where λ is an eigenvalue of the matrix A. They generalize the concept of eigenvectors (the k = 1 case) to non-diagonalizable matrices.

What is the difference between eigenvectors and generalized eigenvectors?

Every eigenvector is a generalized eigenvector (take k = 1), but not conversely. The practical difference is that a non-diagonalizable matrix does not have enough linearly independent eigenvectors to form a basis, and generalized eigenvectors supply the missing directions.

How are generalized eigenvectors calculated?

Generalized eigenvectors are calculated by solving chains of linear systems: first find an eigenvector x1 with (A - λI)x1 = 0, then solve (A - λI)x2 = x1, then (A - λI)x3 = x2, and so on. The solutions x2, x3, ... are the generalized eigenvectors.
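
A minimal sympy sketch of that calculation, using an assumed 2x2 example with a repeated eigenvalue:

```python
from sympy import Matrix, eye, linsolve, symbols

A = Matrix([[3, 1],
            [-1, 1]])            # eigenvalue 2, algebraic multiplicity 2
N = A - 2 * eye(2)               # N = A - lambda*I

x1 = N.nullspace()[0]            # ordinary eigenvector: N*x1 = 0

a, b = symbols('a b')
# Solve N*(a, b)^T = x1 for a generalized eigenvector; the singular
# system has a one-parameter family of solutions
print(linsolve((N, x1), [a, b]))
```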

What is the significance of generalized eigenvectors?

Generalized eigenvectors are important because they allow us to find a complete basis of the vector space that a non-diagonalizable matrix acts on. They also have applications in solving systems of differential equations and studying the behavior of dynamical systems.

Can a matrix have more than one generalized eigenvector for the same eigenvalue?

Yes, a matrix can have multiple linearly independent generalized eigenvectors for the same eigenvalue. In fact, the dimension of the generalized eigenspace for a given eigenvalue equals the algebraic multiplicity of that eigenvalue.
