Why does specifying an eigenvalue reduce the number of LI equations?

  • Thread starter neelakash
  • Tags
    Eigenvalue
In summary, the thread discusses the development of the eigenvalue problem, specifically the equation (A - lambda*I)X = 0 and its implications for the number of linearly independent equations. It is noted that when all eigenvalues are distinct, the number of linearly independent equations for each eigenvalue is N-1. This need not hold when an eigenvalue has multiplicity greater than one. The conversation ends with a request for further discussion to clarify this point.
  • #1
neelakash
Hi everyone, I have been stuck on the following for the last couple of days.

Many books, while developing the idea of the eigenvalue problem, mention the following: say you have the equation
[tex] [\ A-\lambda\ I]\ X=\ 0[/tex] where A is an NxN matrix and X is an Nx1 vector.

The above consists of N equations. Say all eigenvalues are non-degenerate.

If you specify one of the non-degenerate eigenvalues, the number of linearly independent equations will be (N-1). This is written in the book; I am looking for the explanation.

The linear independence of the equations comes from the row vectors of the matrix [tex] [\ A-\lambda\ I][/tex]. Since the matrix [tex] [\ A-\lambda\ I][/tex] is singular, its rank can be at most (N-1). This means that after specifying one of the eigenvalues, the maximum number of linearly independent rows of the matrix is (N-1): at least one of the rows can be expanded in terms of the others.

I find it difficult to see how the specification of the eigenvalue results in this. It is clear that in specifying the eigenvalue, all the matrix elements become known to us, and we can readily calculate that its determinant is 0. That much is OK. But how do we know the rank is precisely (N-1) and not (N-2) or (N-3), etc.?
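As a quick sanity check of the claim being asked about (the matrix here is my own hypothetical example, not from any book): for a matrix with distinct eigenvalues, the rank of A - lambda*I comes out to exactly N-1 for every eigenvalue. A minimal sketch in pure Python, using exact rational arithmetic so no rounding can distort the rank:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals (exact, no rounding)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # move pivot row up
        for i in range(len(M)):
            if i != r and M[i][c] != 0:   # eliminate column c elsewhere
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Upper-triangular A with distinct eigenvalues 2, 3, 5 on the diagonal.
A = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 5]]
N = 3

for lam in (2, 3, 5):
    shifted = [[A[i][j] - (lam if i == j else 0) for j in range(N)] for i in range(N)]
    print(lam, rank(shifted))   # rank is N - 1 = 2 for each eigenvalue
```

Picking any lambda that is not an eigenvalue leaves the rank at the full N = 3.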

Please take part in the discussion so that this becomes clear.

Neel
 
  • #2
I assume you have N eigenvalues. Your result only holds when they are all different. If x is an eigenvalue of (algebraic) multiplicity k, then (A-x*I)^k has rank N-k. It is also possible that (A-x*I)^l already has rank N-k for some l = 1, 2, ..., k-1, which helps to keep things interesting. Note that (A-x1*I)(A-x2*I)...(A-xN*I) has rank 0, by the Cayley-Hamilton theorem. When the eigenvalues are distinct, each eigenvalue has geometric multiplicity exactly 1 (it is squeezed between 1 and the algebraic multiplicity, which is 1), so each (A-xk*I) has nullity 1 and therefore rank exactly N-1.
Example: the matrix
{{x,1}
{0,x}}
The eigenvalue x has multiplicity 2; (A-x*I) has rank 1, and (A-x*I)^2 has rank 0.

Example: the matrix
{{x,0}
{0,x}}
The eigenvalue x has multiplicity 2; here (A-x*I) already has rank 0.
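The two 2x2 examples can be checked directly. A small pure-Python sketch (the concrete value x = 3 is my own arbitrary choice):

```python
from fractions import Fraction

def rank(M):
    """Rank via exact Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q))) for j in range(len(Q[0]))]
            for i in range(len(P))]

x = 3

# Jordan block {{x,1},{0,x}}: eigenvalue x, algebraic multiplicity k = 2.
B = [[0, 1],
     [0, 0]]                 # this is A - x*I for the Jordan block
print(rank(B), rank(matmul(B, B)))   # 1 0 -> rank only reaches N - k = 0 at the k-th power

# Diagonal {{x,0},{0,x}}: same multiplicity, but A - x*I is already the zero matrix.
C = [[0, 0],
     [0, 0]]
print(rank(C))               # 0
```

So for a degenerate eigenvalue the rank of (A-x*I) itself depends on the Jordan structure, not just on the multiplicity.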
 

FAQ: Why does specifying an eigenvalue reduce the number of LI equations?

Why do eigenvalues reduce the number of LI equations?

For an NxN matrix A, the system (A - lambda*I)X = 0 consists of N equations. When lambda is set equal to an eigenvalue of A, the matrix A - lambda*I becomes singular, so its rows are linearly dependent and the number of linearly independent equations drops below N. For a non-degenerate (simple) eigenvalue, exactly one equation becomes redundant, leaving N-1 linearly independent equations.

How do eigenvalues affect the linear independence of equations?

The number of linearly independent equations in (A - lambda*I)X = 0 equals the rank of A - lambda*I. If lambda is not an eigenvalue, the rank is N and the only solution is X = 0. If lambda is an eigenvalue, the rank drops by the geometric multiplicity of lambda (the dimension of its eigenspace), so nontrivial solutions, the eigenvectors, appear.

Is there a specific formula for determining the number of LI equations based on eigenvalue specification?

Yes. By the rank-nullity theorem, the number of linearly independent equations in (A - lambda*I)X = 0 is rank(A - lambda*I) = N - g, where g is the geometric multiplicity of lambda (the dimension of its eigenspace). For a non-degenerate eigenvalue, g = 1, which gives the N-1 quoted in the question.
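The formula rank(A - lambda*I) = N - g can be illustrated with a diagonal matrix of my own choosing, where the geometric multiplicities can be read off the diagonal:

```python
from fractions import Fraction

def rank(M):
    """Rank via exact Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Diagonal A: eigenvalue 2 has geometric multiplicity g = 2, eigenvalue 5 has g = 1.
A = [[2, 0, 0],
     [0, 2, 0],
     [0, 0, 5]]
N = 3

for lam, g in ((2, 2), (5, 1)):
    shifted = [[A[i][j] - (lam if i == j else 0) for j in range(N)] for i in range(N)]
    print(lam, rank(shifted), N - g)   # the two counts agree: 2 1 1, then 5 2 2
```

For the degenerate eigenvalue 2 only one linearly independent equation survives, matching N - g = 1.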

How does eigenvalue specification affect the solutions of a matrix?

Eigenvalue specification has a major impact on the solutions. When lambda is set to an eigenvalue, the system (A - lambda*I)X = 0 becomes underdetermined and acquires nonzero solutions: the eigenvectors associated with lambda. These are the directions that the matrix merely scales, and they serve as building blocks for solving linear systems and differential equations involving A.

Can eigenvalue specification change the dimension of the solution space?

Yes. The solution space of (A - lambda*I)X = 0 is the eigenspace of lambda, and its dimension equals the geometric multiplicity of lambda. If lambda is not an eigenvalue, the solution space is trivial, containing only the zero vector. For example, if lambda = 0 is an eigenvalue, then AX = 0 has nonzero solutions, so the null space of A is nontrivial.
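A tiny check of the lambda = 0 case, with a singular matrix of my own choosing: A = {{1,2},{2,4}} has determinant 0, so 0 is an eigenvalue and AX = 0 has a nonzero solution.

```python
# A is singular (the second row is twice the first), so 0 is an eigenvalue.
A = [[1, 2],
     [2, 4]]

# X = (2, -1) is a nonzero solution of A X = 0, i.e. an eigenvector for lambda = 0.
X = [2, -1]
AX = [sum(A[i][j] * X[j] for j in range(2)) for i in range(2)]
print(AX)   # [0, 0] -> the solution space is a one-dimensional line, not just {0}
```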
