Why Is (A-3I)^2 Used Instead of (A-3I)^3 in Finding Eigenvectors?

In summary: the eigenvalue 3 has algebraic multiplicity 3, but the eigenspace is only two-dimensional, so a generalized eigenvector is needed. Because (A-3I)^2 is already the zero matrix (the minimal polynomial of A is (lambda - 3)^2), solving (A-3I)^2 v = 0 already yields the whole space, and (A-3I)^3 can contribute nothing new.
  • #1
franky2727
My question: take A = {(5,0,-1),(2,3,-1),(4,0,1)} and find all eigenvalues and eigenvectors.

Using the characteristic equation I get -(lambda - 3)^3.

However, it's the next bit I don't understand. In the answers, (A-3I)(x,y,z) = (0,0,0) is used, which I'm perfectly OK with, and then (A-3I)^2 is used and not (A-3I)^3. My point is: how am I supposed to know which to use and when? I have a similar question that I am trying to apply this to, with characteristic polynomial (lambda - 1)^2 (lambda - 4), and I am not sure which values to take.
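The characteristic polynomial can be checked numerically; here is a minimal sketch using numpy (which is not part of the original thread):

```python
import numpy as np

A = np.array([[5, 0, -1],
              [2, 3, -1],
              [4, 0, 1]])

# np.poly returns the coefficients of det(lambda*I - A), highest power first.
# (lambda - 3)^3 expands to lambda^3 - 9*lambda^2 + 27*lambda - 27.
coeffs = np.poly(A)
print(coeffs)  # approximately [1, -9, 27, -27]
```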
 
  • #2
I assume you meant to say "generalized eigenvectors"?

What happens if you 'use' (A-3I)^3? Have you tried it?

my point is how am I supposed to know which to use and when?
Presumably, you know some basic structural theorems about generalized eigenspaces. Apply them!

(e.g. you apparently knew not to bother with (A-3I)^4. Why is that? What would happen if you did 'use' (A-3I)^4?)
 
  • #3
Try each in turn. Once you realize that 3 is a triple root of the characteristic equation, it follows that there may be 1, 2, or 3 independent eigenvectors. The first thing you should do is try (A - 3I)(x,y,z) = (0,0,0), as you say, or A(x,y,z) = 3(x,y,z) (which is, in my mind, more fundamental: it is the basic definition of "eigenvector"). A(x,y,z) = 3(x,y,z) gives 5x - z = 3x, 2x + 3y - z = 3y, and 4x + z = 3z. The first equation is, of course, the same as z = 2x, and so are the second and third: all three reduce to z = 2x, and y never appears, so y is free. Any vector of the form <x, y, 2x> = x<1, 0, 2> + y<0, 1, 0> is an eigenvector, so the eigenspace is two-dimensional. Since that still gives only two independent vectors, for the third we look for a vector that does NOT satisfy (A - 3I)v = 0 but DOES satisfy (A - 3I)^2 v = 0. We can rewrite that as (A - 3I)[(A - 3I)v] = 0: we need (A - 3I)v to be an eigenvector. Every column of A - 3I is a multiple of <1, 1, 2>, which is itself an eigenvector (it has the form <x, y, 2x> with x = y = 1), so we solve (A - 3I)(x,y,z) = <1, 1, 2>. That is 2x - z = 1, 2x - z = 1, 4x - 2z = 2, and all three reduce to z = 2x - 1, with y again free. Taking x = 1, y = 0 gives z = 1, so <1, 0, 1> is a generalized eigenvector; check: (A - 3I)<1, 0, 1> = <1, 1, 2>.

There is no point in going on to (A - 3I)^3: for this matrix (A - 3I)^2 is already the zero matrix (the minimal polynomial is (lambda - 3)^2), so (A - 3I)^3 v = 0 is satisfied by every vector and produces nothing new. That is why the answer stops at the square.
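As a numerical sanity check (a sketch assuming numpy; not part of the original thread), one can verify that (A - 3I)^2 is already the zero matrix, which is exactly why cubing is never needed:

```python
import numpy as np

A = np.array([[5., 0., -1.],
              [2., 3., -1.],
              [4., 0., 1.]])
N = A - 3 * np.eye(3)

# rank(A - 3I) = 1, so the eigenspace for eigenvalue 3 is 3 - 1 = 2 dimensional
print(np.linalg.matrix_rank(N))  # 1

# (A - 3I)^2 = 0: every vector already satisfies (A - 3I)^2 v = 0,
# so (A - 3I)^3 cannot produce anything new
print(N @ N)                     # the 3x3 zero matrix
```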
 
  • #4
I think I understand you. Do you mean that, with these different powers, we are simply trying to get as many independent vectors as the dimension, and that it doesn't really matter which ones we use as long as we end up with 3 answers, the only reason for using ^1 and then ^2 being logical order?
 
  • #5
And no, I didn't mean to say generalised eigenvectors. The question isn't about generalised eigenvectors; it's simply about eigenvectors and eigenvalues.
 
  • #6
And no, I didn't mean to say generalised eigenvectors. The question isn't about generalised eigenvectors; it's simply about eigenvectors and eigenvalues.
Then why is (A-3I)^2 involved? And why would you think there exists a set of three linearly independent eigenvectors?
 

FAQ: Why Is (A-3I)^2 Used Instead of (A-3I)^3 in Finding Eigenvectors?

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are concepts in linear algebra that are used to represent the behavior of a linear transformation or a matrix. Eigenvalues are scalar values that represent the amount by which an eigenvector is scaled when it is transformed by a matrix. Eigenvectors are non-zero vectors that are unchanged in direction when transformed by a matrix, but may be scaled by a factor known as the eigenvalue.

How are eigenvalues and eigenvectors calculated?

For small matrices worked by hand, the eigenvalues are the roots of the characteristic polynomial det(A - lambda*I) = 0, and the eigenvectors are then found by solving the linear system (A - lambda*I)v = 0 for each eigenvalue. For larger matrices, numerical methods such as the power method, inverse iteration, or the QR algorithm are used instead, since computing and factoring the characteristic polynomial directly is numerically unstable.
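In practice, a library routine returns both at once; a minimal sketch with numpy (an illustration added here, not part of the original FAQ):

```python
import numpy as np

# A small example matrix (not the one from the thread): [[2, 1], [1, 2]]
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(M)   # eigenvalues, and eigenvectors as columns

# verify the defining relation M v = lambda v for each pair
for lam, v in zip(vals, vecs.T):
    assert np.allclose(M @ v, lam * v)

print(sorted(vals.real))        # eigenvalues 1.0 and 3.0
```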

What is the significance of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors have many applications in mathematics, physics, and engineering. They are used to study the behavior of linear systems, such as in analyzing vibrations in structures, predicting population dynamics, or understanding quantum mechanics. They also have practical applications in data analysis, as they can be used to reduce the dimensionality of data and identify patterns.

Can a matrix have multiple eigenvalues and eigenvectors?

Yes, a matrix can have repeated eigenvalues, and a single eigenvalue can have several linearly independent eigenvectors. The number of times an eigenvalue appears as a root of the characteristic polynomial is its algebraic multiplicity; the number of linearly independent eigenvectors belonging to it is its geometric multiplicity, which is at least 1 and at most the algebraic multiplicity. When the geometric multiplicity is smaller, as for the matrix in this thread, generalized eigenvectors are needed to complete a basis.

What is the relationship between eigenvalues and eigenvectors?

The relationship is that each eigenvector belongs to a specific eigenvalue: Av = lambda*v. Eigenvectors associated with distinct eigenvalues are always linearly independent. When a matrix has a full set of n independent eigenvectors, they can be used to decompose it into diagonal form, making it easier to analyze and manipulate; when it does not, as in this thread, generalized eigenvectors fill the gap.
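That diagonal decomposition can be sketched in numpy (an illustrative example added here, not part of the original FAQ):

```python
import numpy as np

# A diagonalizable example: [[4, 1], [2, 3]] has eigenvalues 5 and 2
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, P = np.linalg.eig(M)      # columns of P are eigenvectors
D = np.diag(vals)

# M = P D P^{-1}: a full set of independent eigenvectors diagonalizes M
assert np.allclose(M, P @ D @ np.linalg.inv(P))
print(sorted(vals.real))        # eigenvalues 2.0 and 5.0
```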
