Eigenvalues, eigenvectors question

In summary, the Cayley-Hamilton theorem states that if p(t) is the characteristic polynomial of a matrix A, then p(A) is the zero matrix. This is true in all cases, regardless of whether the eigenvectors are linearly independent. In this problem, however, the eigenvectors are linearly independent and the matrix is diagonalizable, which makes the proof easier. The characteristic polynomial can be written as (t - y1)(t - y2)(t - y3), and since the eigenvectors form a basis for R^3, any vector can be written as a combination of them. Therefore p(A) must be the zero matrix, as it annihilates each eigenvector individually and hence every vector.
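To pin down the notation used throughout (I is the 3x3 identity matrix): p(A) means substituting the matrix A for t in the characteristic polynomial, so

p(t) = (t - y1)(t - y2)(t - y3)  gives  p(A) = (A - y1 I)(A - y2 I)(A - y3 I).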
  • #1
jhson114
1) Suppose that y1, y2, y3 are the eigenvalues of a 3x3 matrix A, and suppose that u1, u2, u3 are corresponding eigenvectors. Prove that if {u1, u2, u3} is a linearly independent set and if p(t) is the characteristic polynomial for A, then p(A) is the zero matrix.

I thought the Cayley-Hamilton theorem simply states that if p(t) is the characteristic polynomial for A, then p(A) is the zero matrix. Do the eigenvectors have to be linearly independent for this to be true? I thought it was true in all cases.
 
  • #2
It is true in all cases. The problem is a special case and is thus easier to prove. Hint: the matrix is diagonalizable.
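A sketch of that route (P and D are my notation, assuming the diagonalization is written A = P D P^-1 with D = diag(y1, y2, y3)):

p(A) = p(P D P^-1) = P p(D) P^-1 = P diag(p(y1), p(y2), p(y3)) P^-1 = 0,

since every eigenvalue yi is a root of p.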
 
  • #3
Man... I had linear algebra last semester, but I think our notation is different. What is meant by p(A)?
 
  • #4
Over the complex numbers, the special case implies the general one: the entries of p(A) are polynomial (hence continuous) functions of the entries of A, the diagonalizable matrices are dense among all complex matrices, and a continuous function that vanishes on a dense set vanishes everywhere.
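As a quick numerical illustration of the general statement (not a proof; the matrix N and the numpy check below are my own example), p(N) comes out as zero even for a non-diagonalizable N:

```python
import numpy as np

# A non-diagonalizable matrix: a single 3x3 Jordan block
# with the triple eigenvalue 2.
N = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])

# Coefficients of the characteristic polynomial of N,
# highest degree first (here essentially (t - 2)^3).
coeffs = np.poly(N)

# Evaluate p(N) by Horner's rule on matrices.
p_of_N = np.zeros_like(N)
for c in coeffs:
    p_of_N = p_of_N @ N + c * np.eye(3)

print(np.abs(p_of_N).max())  # ~0 (zero up to floating-point rounding)
```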
 
  • #5
Don't diagonalize would be my advice.

I think the question is getting at this:

We know p(t) = (t - y1)(t - y2)(t - y3),

and that u1, u2, and u3 are linearly independent (so far no one has used this fact explicitly).

As they are linearly independent in R^3, they form a basis, so we can write any v in R^3 as a combination of u1, u2, u3, and p(A) must annihilate v since it annihilates each of u1, u2, u3 individually, as sketched below.
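Spelled out (my sketch of the step being hinted at): the three factors of p(A) commute with one another, so for each i the factor (A - yi I) can be applied to ui first, giving

p(A) ui = (A - y1 I)(A - y2 I)(A - y3 I) ui = 0, since (A - yi I) ui = A ui - yi ui = 0.

Then, writing an arbitrary v as v = c1 u1 + c2 u2 + c3 u3,

p(A) v = c1 p(A) u1 + c2 p(A) u2 + c3 p(A) u3 = 0,

so p(A) sends every vector in R^3 to zero, i.e. p(A) is the zero matrix.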
 

FAQ: Eigenvalues, eigenvectors question

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are concepts in linear algebra that are used to understand the behavior of a linear transformation. Eigenvalues are the scalars that represent the scaling factor of the eigenvectors when the transformation is applied. Eigenvectors are the non-zero vectors that remain in the same direction after the transformation is applied.
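In symbols (a minimal example of my own, not from the original answer): the defining relation is A v = y v for a scalar y and a non-zero vector v. For instance, A = [[2, 0], [0, 3]] has eigenvalue 2 with eigenvector (1, 0) and eigenvalue 3 with eigenvector (0, 1).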

How are eigenvalues and eigenvectors related?

Eigenvalues and eigenvectors are related in that each eigenvector has a corresponding eigenvalue. The eigenvalue represents the amount by which the eigenvector is scaled when the transformation is applied.

What is the importance of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are important in many areas of mathematics and science, including physics, engineering, and data analysis. They are used to solve systems of linear equations, understand the behavior of linear transformations, and find patterns and relationships in data.

How are eigenvalues and eigenvectors calculated?

Eigenvalues and eigenvectors can be calculated using various methods, such as solving the characteristic equation det(A - λI) = 0 for the eigenvalues, diagonalization, and iterative schemes like the power method. These methods involve finding the roots of the characteristic polynomial or performing repeated matrix operations.
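A minimal sketch of the numerical route in Python (the matrix A here is my own example, not from the thread):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)  # 5 and 2 (order not guaranteed)

# Verify the defining relation A v = lambda v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))  # True
```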

What are some real-life applications of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors have many real-life applications, such as in image and signal processing, population dynamics, and quantum mechanics. They are also used in machine learning algorithms and in analyzing networks and social systems.
