Eigenvalues and characteristic polynomials

In summary: an n×n matrix whose eigenspace dimensions sum to n can be diagonalized by finding its characteristic polynomial, finding the roots of that polynomial, and using them to construct a diagonal matrix of eigenvalues together with a matrix of eigenvectors. Run backwards, the same idea produces a matrix with a prescribed characteristic polynomial of degree m by conjugating the diagonal matrix of its roots, and the companion matrix of a monic polynomial works even when the eigenvalues are not known.
  • #1
dobedobedo
Hello guise.

I am familiar with a method of diagonalizing an n×n matrix that satisfies the following condition:
the sum of the dimensions of the eigenspaces is equal to n.

As to the algorithm itself, it says:

1. Find the characteristic polynomial.
2. Find the roots of the characteristic polynomial.
3. Let the eigenvectors [itex]v_{i}[/itex] be the column vectors of some matrix S.
4. Let the eigenvalues [itex]\lambda_{i}[/itex] be the diagonal elements of a diagonal matrix D, ordered to correspond to the order of the eigenvectors in S.
5. Our diagonalization of A should be:
[itex]A = S \cdot D \cdot S^{-1}[/itex], where [itex]S = (v_{1} \dots v_{i} \dots v_{n})[/itex] and [itex]D = \mathrm{diag}(\lambda_{1}, \dots, \lambda_{i}, \dots, \lambda_{n})[/itex]

My question is: how do I find at least one such matrix A corresponding to some randomly created polynomial of degree [itex]m[/itex] with integer roots? If it is too difficult to solve this for an arbitrary [itex]m[/itex], that's okay. But let's say for [itex]m = 5[/itex]? Or for the much simpler case of [itex]m = 2[/itex]?
Is this somehow related to the quadratic form?
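For reference, the five steps can be traced by hand for a small case. Here is a toy pure-Python sketch (my own illustration, not from the thread) for the 2×2 matrix {{2,1},{1,2}} that comes up later in this discussion:

```python
# Steps 1-5 for A = [[2, 1], [1, 2]], worked with plain Python.
A = [[2, 1], [1, 2]]

# 1-2. For a 2x2 matrix the characteristic polynomial is l^2 - tr(A) l + det(A).
tr = A[0][0] + A[1][1]                     # 4
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]    # 3
disc = (tr*tr - 4*det) ** 0.5
roots = sorted([(tr - disc)/2, (tr + disc)/2])   # eigenvalues 1.0 and 3.0

# 3-4. Eigenvectors solve (A - l I)v = 0; for this A they are (1, -1) and (1, 1).
S = [[1, 1], [-1, 1]]                 # columns are the eigenvectors
D = [[roots[0], 0], [0, roots[1]]]    # eigenvalues in matching order

# 5. Check that S D S^{-1} reproduces A.
detS = S[0][0]*S[1][1] - S[0][1]*S[1][0]
Sinv = [[ S[1][1]/detS, -S[0][1]/detS],
        [-S[1][0]/detS,  S[0][0]/detS]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

recon = matmul(matmul(S, D), Sinv)
print(recon)   # [[2.0, 1.0], [1.0, 2.0]]
```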
 
  • #2
Given the m roots, construct the m by m diagonal matrix, D, having those roots on the main diagonal. For any invertible matrix, P, [itex]A= PDP^{-1}[/itex] will have characteristic polynomial with those roots.
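A minimal sketch of this construction for m = 2, in plain Python (the particular roots and P are my own choices):

```python
# Put the desired roots on a diagonal D and conjugate by any invertible P.
roots = [1, 3]
D = [[roots[0], 0], [0, roots[1]]]
P = [[1, 2], [1, 3]]          # det = 1, so P is invertible (P^{-1} is even integer)
Pinv = [[3, -2], [-1, 1]]     # inverse of P via the 2x2 adjugate formula

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(P, D), Pinv)

# The characteristic polynomial of a 2x2 matrix is l^2 - tr(A) l + det(A),
# so A should have trace 1 + 3 = 4 and determinant 1 * 3 = 3.
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
print(A, tr, det)   # [[-3, 4], [-6, 7]] 4 3
```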
 
  • #3
Jesus christ, I feel stupid. Thanks.
 
  • #4
That's how people make up exercises and test problems.
 
  • #5
A follow-up question: how do I find a matrix with integer elements whose inverse also has integer elements? ^^
 
  • #6
You need its determinant to be 1 or -1.

Any integer matrix is a product of integer elementary matrices corresponding to the row operations:
  • Adding an integer multiple of one row to another
  • Swapping two rows
  • Multiplying a row by a nonzero integer constant

As long as you only use 1 or -1 for that last one, all three sorts of elementary matrices have integer inverses, so any product of them does too.
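A hedged sketch of this recipe (my own illustration): compose random integer elementary row operations starting from the identity, and the result always has determinant ±1, hence an integer inverse.

```python
import random

random.seed(0)
n = 3
M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # start from I

for _ in range(20):
    op = random.choice(["add", "swap", "negate"])
    i, j = random.sample(range(n), 2)
    if op == "add":           # add an integer multiple of row j to row i
        c = random.randint(-3, 3)
        M[i] = [a + c*b for a, b in zip(M[i], M[j])]
    elif op == "swap":        # swap rows i and j
        M[i], M[j] = M[j], M[i]
    else:                     # multiply a row by -1 (the only safe scaling)
        M[i] = [-a for a in M[i]]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

print(det3(M))   # always +1 or -1, so M^{-1} is again an integer matrix
```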
 
  • #7
Hahahaha! How brilliant. Okay. Now I can finally randomly create integer matrices A such that [itex]A = SDS^{-1}[/itex] are integer matrices as well! Other questions:

-How do I find a matrix A with integer elements whose inverse has rational elements? I get the feeling that a "rational" matrix always has a "rational" inverse [but I do not know how to show it]?

-How do I find a matrix A with rational elements which has an inverse with integer elements?

-How do I prove that a real matrix never has an inverse with non-real complex elements?
 
  • #8
dobedobedo said:
Hahahaha! How brilliant. Okay. Now I can finally randomly create integer matrices A such that [itex]A = SDS^{-1}[/itex] are integer matrices as well!
You can't create a matrix A such that this will work for any S. For the product to be an integer matrix you need D to have integer entries and S to be an integer matrix with determinant 1 or -1, so that [itex]S^{-1}[/itex] is an integer matrix too.

Other questions:

-How do I find a matrix A with integer elements whose inverse has rational elements? I get the feeling that a "rational" matrix always has a "rational" inverse [but I do not know how to show it]?
The inverse of a matrix A is its adjugate (the transpose of the matrix of cofactors) divided by the determinant of A. So yes: since the rational numbers are closed under addition, subtraction, multiplication, and division by non-zero numbers, the inverse of an invertible matrix with rational entries (including integer entries) always has rational entries.
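This can be seen concretely with exact arithmetic. A small sketch of my own, using the standard library's `fractions.Fraction` and the 2×2 adjugate formula:

```python
from fractions import Fraction as F

A = [[F(1, 2), F(1, 3)],
     [F(1, 4), F(1, 5)]]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]    # 1/10 - 1/12 = 1/60, a rational number
Ainv = [[ A[1][1]/det, -A[0][1]/det],      # adjugate divided by the determinant
        [-A[1][0]/det,  A[0][0]/det]]
print(Ainv)   # every entry is a Fraction, i.e. rational
```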

-How do I find a matrix A with rational elements which has an inverse with integer elements?
An obvious way is to start with an invertible matrix with integer entries and let A be its inverse! In fact that gives all of them: an invertible rational matrix A has an integer inverse precisely when A is the inverse of an invertible integer matrix.
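A quick sketch of that "obvious way" (the particular B is my own example):

```python
from fractions import Fraction as F

B = [[2, 0], [1, 1]]      # integer matrix, det = 2, so B^{-1} has rational entries
detB = B[0][0]*B[1][1] - B[0][1]*B[1][0]
A = [[F( B[1][1], detB), F(-B[0][1], detB)],   # A = B^{-1} via the adjugate formula
     [F(-B[1][0], detB), F( B[0][0], detB)]]
# A = [[1/2, 0], [-1/2, 1]] is rational, and its inverse is the integer matrix B.
print(A)
```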

-How do I prove that a real matrix never has an inverse with non-real complex elements?
Again, every element of [itex]A^{-1}[/itex] is a cofactor of A divided by the determinant of A. Since the real numbers are closed under the four basic operations, that always gives real numbers.
 
  • #9
dobedobedo said:
Hello guise.

I am familiar with a method of diagonalizing an n×n matrix that satisfies the following condition:
the sum of the dimensions of the eigenspaces is equal to n.

As to the algorithm itself, it says:

1. Find the characteristic polynomial.
2. Find the roots of the characteristic polynomial.
3. Let the eigenvectors [itex]v_{i}[/itex] be the column vectors of some matrix S.
4. Let the eigenvalues [itex]\lambda_{i}[/itex] be the diagonal elements of a diagonal matrix D, ordered to correspond to the order of the eigenvectors in S.
5. Our diagonalization of A should be:
[itex]A = S \cdot D \cdot S^{-1}[/itex], where [itex]S = (v_{1} \dots v_{i} \dots v_{n})[/itex] and [itex]D = \mathrm{diag}(\lambda_{1}, \dots, \lambda_{i}, \dots, \lambda_{n})[/itex]

My question is: how do I find at least one such matrix A Corresponding to some randomly created polynomial of degree [itex]m[/itex] with integer roots? If it is too difficult to solve this for an arbitrary [itex] m [/itex], that's okay. But let's say for [itex] m = 5[/itex]? Or for the much simpler case of [itex] m = 2[/itex]?
Is this somehow related to the quadratic form?



If you don't have the eigenvalues you use the companion matrix of a monic polynomial:
$$\left(\begin{array}{cccccc}0&1&0&0&\cdots&0\\0&0&1&0&\cdots&0\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&0&1\\-c_0&-c_1&-c_2&-c_3&\cdots&-c_{n-1}\end{array}\right)$$

The coolest feature of the above matrix is that [itex]\,f(x)=c_0+c_1x+...+c_{n-1}x^{n-1}+x^n\,[/itex] is not only its characteristic but also its minimal polynomial...!
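One concrete way to see f annihilating its companion matrix (the Cayley-Hamilton half of this claim), sketched in plain Python with a toy cubic of my own, f(x) = x³ - 6x² + 11x - 6 = (x-1)(x-2)(x-3):

```python
# In the notation f(x) = c0 + c1 x + c2 x^2 + x^3:
c = [-6, 11, -6]
n = 3

# Companion matrix: 1s on the superdiagonal, -c_i along the bottom row.
C = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n - 1)]
C.append([-ci for ci in c])          # bottom row: (6, -11, 6)

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
C2 = matmul(C, C)
C3 = matmul(C2, C)

# f(C) = C^3 + c2 C^2 + c1 C + c0 I should be the zero matrix.
fC = [[C3[i][j] + c[2]*C2[i][j] + c[1]*C[i][j] + c[0]*I[i][j]
       for j in range(n)] for i in range(n)]
print(fC)   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```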

DonAntonio
 
  • #10
DonAntonio said:
If you don't have the eigenvalues you use the companion matrix of a monic polynomial:
$$\left(\begin{array}{cccccc}0&1&0&0&\cdots&0\\0&0&1&0&\cdots&0\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&0&1\\-c_0&-c_1&-c_2&-c_3&\cdots&-c_{n-1}\end{array}\right)$$

The coolest feature of the above matrix is that [itex]\,f(x)=c_0+c_1x+...+c_{n-1}x^{n-1}+x^n\,[/itex] is not only its characteristic but also its minimal polynomial...!

DonAntonio

Very interesting! But I am somewhat confused. How do I actually use it? I was wondering how to find my matrix A from some given polynomial - but that would require me to know its roots, due to the design of the algorithm. For instance, if I have the matrix {{2,1},{1,2}}, then its characteristic polynomial is x^2-4x+3. If I use the formula you gave me, its corresponding matrix would be {{0,1},{0,0},{-3,-2}}, which is non-square and therefore not a matrix A that fulfills the criteria of the algorithm? Or am I completely mistaken?
 
  • #11
dobedobedo said:
Very interesting! But I am somewhat confused. How do I actually use it? I was wondering how to find my matrix A from some given polynomial - but that would require me to know its roots, due to the design of the algorithm. For instance, if I have the matrix {{2,1},{1,2}}, then its characteristic polynomial is x^2-4x+3. If I use the formula you gave me, its corresponding matrix would be {{0,1},{0,0},{-3,-2}}, which is non-square and therefore not a matrix A that fulfills the criteria of the algorithm? Or am I completely mistaken?

[itex] \left( \begin{array}{cc}
0 & 1\\ -3 & 4\\ \end{array} \right) ?
[/itex]
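A quick plain-Python check of this 2×2 companion matrix for x² - 4x + 3 (my own verification sketch): its trace and determinant recover the polynomial's coefficients, and its eigenvalues are the roots 1 and 3.

```python
C = [[0, 1], [-3, 4]]
tr = C[0][0] + C[1][1]                      # 4  -> coefficient of x (negated)
det = C[0][0]*C[1][1] - C[0][1]*C[1][0]     # 3  -> constant term
# Characteristic polynomial: x^2 - tr*x + det = x^2 - 4x + 3 = (x - 1)(x - 3).
disc = (tr*tr - 4*det) ** 0.5
roots = sorted([(tr - disc)/2, (tr + disc)/2])
print(roots)   # [1.0, 3.0]
```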
 

Related to Eigenvalues and characteristic polynomials

1. What is an eigenvalue?

An eigenvalue is a scalar (a single number, often denoted by the Greek letter lambda, λ) associated with a square matrix: a value λ for which Av = λv holds for some non-zero vector v, the corresponding eigenvector.

2. What is an eigenvector?

An eigenvector is a non-zero vector that, when multiplied by a square matrix, produces a scaled version of itself: Av = λv. The scalar λ by which the eigenvector is scaled is the eigenvalue.

3. How are eigenvalues and eigenvectors calculated?

To find the eigenvalues and eigenvectors of a matrix A, one solves the characteristic equation det(A − λI) = 0, a polynomial equation in λ obtained by subtracting λ times the identity matrix from A and taking the determinant. The roots of this polynomial are the eigenvalues, and the corresponding eigenvectors are found by substituting each eigenvalue back into (A − λI)v = 0 and solving for the vector v.
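The defining relation Av = λv is easy to check directly once candidate eigenpairs are in hand. A minimal sketch (my own example, reusing the matrix {{2,1},{1,2}} from the thread):

```python
A = [[2, 1], [1, 2]]
# Eigenpairs found from the characteristic equation det(A - lI) = 0:
pairs = [(1, [1, -1]), (3, [1, 1])]

for lam, v in pairs:
    Av = [sum(A[i][j]*v[j] for j in range(2)) for i in range(2)]
    assert Av == [lam*x for x in v]   # A v equals lambda v for each pair
print("all eigenpairs verified")
```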

4. What are the applications of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors have many applications in science and engineering, including image processing, physics, chemistry, and computer graphics. They are also used in data analysis and machine learning algorithms, such as principal component analysis, for dimensionality reduction.

5. Can a matrix have more than one set of eigenvalues and eigenvectors?

Yes. An n×n matrix has up to n distinct eigenvalues, and an eigenvalue can be repeated. A repeated eigenvalue may have several linearly independent eigenvectors (its eigenspace can have dimension greater than one), and any non-zero scalar multiple of an eigenvector is again an eigenvector for the same eigenvalue.
