Proving P_A(A) = 0 for Any Square Matrix Using the Jordan Canonical Form

In summary, the homework statement asks to show that P_A(A) = 0, where P_A(λ) = det(A − λI) is the characteristic polynomial of the square matrix A. I can show this for a general 2x2 matrix with entries a, b, c, d, and I understand why it should be true of all square matrices, but I'm just not sure how to show it for an arbitrary square matrix. We are studying the Jordan canonical form of a matrix, so I'm thinking I should somehow use that. Any help would be appreciated.
  • #1
Wildcat

Homework Statement

Let A be any square matrix and P_A(λ) its characteristic polynomial. Show that P_A(A) = 0.

Homework Equations

The Attempt at a Solution

I can show this for a general 2x2 matrix with entries a, b, c, d, and I understand why it should be true of all square matrices, but I'm just not sure how to show it for an arbitrary square matrix. We are studying the Jordan canonical form of a matrix, so I'm thinking I should somehow use that. Any help would be appreciated.
 
  • #2
Isn't det(A - [itex]\lambda[/itex]I) the characteristic polynomial?

If [itex]\lambda[/itex] is an eigenvalue of A, then the expression above evaluates to zero. I.e., PA([itex]\lambda[/itex]) = 0.
 
  • #3
I'm sorry Mark, but I can't agree with your proof. The Cayley-Hamilton theorem seems more difficult than that.

The characteristic polynomial of a 2x2 matrix is

[tex]det\left(\begin{array}{cc} a-\lambda & b\\ c & d-\lambda\end{array}\right)[/tex]

You can't just go on substituting A for lambda, because then you would have a matrix inside a matrix...
 
  • #4
Well, clearly a - A and d - A don't make sense, since you can't subtract a matrix from a scalar, but what's wrong with A - AI?
 
  • #5
Well, [tex]\lambda I[/tex] is a scalar product, while AI is a matrix product. It's not obvious to me that you can suddenly change the meaning of a product.

For example, consider the polynomial [tex]det(\lambda I)=0[/tex], then this polynomial is actually [tex]\lambda^n=0[/tex] (with n the dimension of I). Thus a root of this polynomial is a nilpotent matrix.
However, if you immediately substitute A for lambda, then you get det(A)=0. And thus a root of this polynomial would be a noninvertible matrix.

Since not every noninvertible matrix is nilpotent, we get two different answers. So the two methods are not equivalent.
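micromass's counterexample can be checked numerically. A minimal sketch using numpy (not part of the original thread): diag(1, 0) is noninvertible but not nilpotent, so the two readings of the substitution really are inequivalent.

```python
import numpy as np

# diag(1, 0) is singular (det = 0) but not nilpotent: no power of it
# is ever the zero matrix, since A^k = A for every k >= 1.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

print(np.linalg.det(A))              # 0.0, so A is not invertible
print(np.linalg.matrix_power(A, 5))  # equals A itself, not the zero matrix
```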
 
  • #6
I retract what I said before. I remembered most of what Cayley-Hamilton says (roughly, square matrices satisfy their own characteristic equations), and misapplied it here.
 
  • #7
micromass said:
Well, [tex]\lambda I[/tex] is a scalar product, while AI is a matrix product. It's not obvious to me that you can suddenly change the meaning of a product.

For example, consider the polynomial [tex]det(\lambda I)=0[/tex], then this polynomial is actually [tex]\lambda^n=0[/tex] (with n the dimension of I). Thus a root of this polynomial is a nilpotent matrix.
However, if you immediately substitute A for lambda, then you get det(A)=0. And thus a root of this polynomial would be a noninvertible matrix.

Since not every noninvertible matrix is nilpotent, we get two different answers. So the two methods are not equivalent.


There was a hint to use Schur's theorem to show that A may be assumed to be upper triangular; then the characteristic polynomial would be (a11 − λ)(a22 − λ)···(ann − λ), right?
 
  • #8
Yes, so you only need to prove things for triangular matrices.

Your characteristic polynomial is indeed correct. Now try to fill in A (your triangular matrix) in the polynomial. Do you get 0?
 
  • #9
micromass said:
Yes, so you only need to prove things for triangular matrices.

Your characteristic polynomial is indeed correct. Now try to fill in A (your triangular matrix) in the polynomial. Do you get 0?

Yes, because substituting A makes the a11 entry 0 in the first factor, then the a22 entry 0 in the second factor, and so on, which will result in the zero matrix??
 
  • #10
Well, you still have to multiply all those matrices...
 
  • #11
micromass said:
Well, you still have to multiply all those matrices...

Are you saying I need to show that, or are you trying to move me in another direction? I know the det = 0 for each factor (aii − λ); can I use that?
 
  • #12
No, I'm not trying to push you in another direction. You were going in a great direction!
 
  • #13
micromass said:
No, I'm not trying to push you in another direction. You were going in a great direction!

That's where I'm having trouble. When I multiply the first two factors together, the first two diagonal entries become zero; then multiplying by the third makes the third diagonal entry zero, and so on. But how do I notate that elegantly? Isn't that what I'm supposed to do??
 
  • #14
That is exactly what you should do!

However, I do not think that there is a clean notation for this. It's going to get messy no matter what...
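One way to tame the bookkeeping is to state it as an induction invariant: after multiplying the first k factors, the first k columns of the partial product are zero. A minimal numerical sketch in numpy (not part of the original thread; the matrix entries are arbitrary illustrations):

```python
import numpy as np

# An upper triangular matrix with arbitrary illustrative entries
A = np.array([[2.0, 1.0, 3.0],
              [0.0, 5.0, 4.0],
              [0.0, 0.0, 7.0]])
n = A.shape[0]

P = np.eye(n)
for k in range(n):
    P = P @ (A - A[k, k] * np.eye(n))
    # Induction invariant: the first k+1 columns of the partial product
    # are zero, because (A - a_kk I) sends e_k into span(e_1,...,e_{k-1})
    # and the earlier (leftward) factors then annihilate that span.
    assert np.allclose(P[:, :k + 1], np.zeros((n, k + 1)))

print(P)  # the full product is the zero matrix
```

Phrasing the proof by induction on that column invariant avoids having to write out the entries of each partial product explicitly.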
 
  • #15
micromass said:
That is exactly what you should do!

However, I do not think that there is a clean notation for this. It's going to get messy no matter what...

In 2 previous problems (not on here) I had to show that if J is any diagonal matrix then
PsubJ(J)=0 also I had to show that any Jordan block J PsubJ(J)=0 where PsubJ(λ) is its characteristic polynomial. With this being proven, can I use the fact that A has a Jordan canonical form and A is similar to J? Looking at my notes, I think this may be what I need to use to prove this problem. So I need to use A=QJQ^-1 replace A with this expression. I'm getting stuck though. Micromass, have you any ideas on this?
 

