Why Do Rank 1 Matrices Have Eigenvalues 0 and the Trace?

In summary, an ##n \times n## matrix of rank one (with ##n > 1##) has 0 as an eigenvalue, and its only possible nonzero eigenvalue is the trace of the matrix. Besides solving ##\det(A - \lambda I) = 0##, there are other proofs, such as writing the matrix as an outer product of two vectors or changing to a suitable basis. These arguments are specific to rank one and do not carry over directly to matrices of other ranks.
  • #1
brownman
How come a square matrix of rank one has eigenvalues of 0 and the trace of the matrix?
Is there any proof other than just solving det(A-λI)=0?
 
  • #2
I might argue something like the following: by row operations, a rank 1 matrix may be reduced to a matrix with only the first row nonzero. The eigenvectors of such a matrix may be chosen to be the ordinary Euclidean basis, in which case the eigenvalues become zeros and the ##(1,1)##-component of this reduced matrix. As row operations are invertible, the trace is unchanged, and thus this nonzero eigenvalue equals the trace of the original matrix.

Afterthought: But that is probably erroneous, because even though the row operations are indeed invertible, they do not generally preserve the trace. So the last part of my argument fails.

A better argument seems to be the following: for a rank ##k## matrix there exists a basis in which ##k## of its columns are nonzero and the others are zero. The change of basis may be chosen to be orthogonal, and a similarity transformation preserves the trace.
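To make the trace-preservation step concrete, here is a minimal numerical sketch (not part of the original post; the size, seed, and variable names are made up for illustration): conjugating a rank-1 matrix by an orthogonal matrix ##Q## leaves its trace, and hence its nonzero eigenvalue, unchanged.

```python
# Minimal sketch: an orthogonal change of basis Q^T A Q preserves the trace.
# (Illustrative only; any similarity transformation would also preserve it.)
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 rank-1 matrix A = x y^T.
x = rng.standard_normal(4)
y = rng.standard_normal(4)
A = np.outer(x, y)

# A random orthogonal matrix, obtained from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

B = Q.T @ A @ Q                  # A expressed in the new basis
print(np.trace(A), np.trace(B))  # equal up to floating-point rounding
```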
 
  • #3
We assume ##A## is an ##n \times n## rank one matrix. If ##n > 1##, any rank one matrix is singular. Therefore ##\lambda = 0## is an eigenvalue: for an eigenvector, just take any nonzero ##v## such that ##Av = 0##.

So let's see if there are any nonzero eigenvalues.

If ##A## is a rank one matrix, then all of its columns are scalar multiples of each other. Thus we may write ##A = xy^T## where ##x## and ##y## are nonzero ##n \times 1## vectors.

If ##\lambda## is an eigenvalue of ##A##, then there is a nonzero vector ##v## such that ##Av = \lambda v##. This means that ##(xy^T)v = \lambda v##. By associativity, we may rewrite the left hand side as ##x(y^T v) = \lambda v##.

Note that ##y^T v## is a scalar, and of course ##\lambda## is also a scalar. If we assume ##\lambda \neq 0##, then this means that ##v## is a scalar multiple of ##x##: specifically, ##v = x(y^T v)/\lambda##.

Therefore ##x## itself is an eigenvector associated with ##\lambda##, so we have ##x(y^T x) = \lambda x##, or equivalently, ##x(\lambda - y^T x) = 0##. As ##x## is nonzero, this forces ##\lambda = y^T x##.

All that remains is to recognize that ##y^T x = \sum_{i = 1}^{n} x_i y_i## is precisely the sum of the diagonal entries ##(xy^T)_{ii} = x_i y_i##, that is, the trace of ##A = xy^T##.
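As a quick sanity check of this argument, here is a small numerical sketch (illustrative only; the dimension and seed are arbitrary): for ##A = xy^T##, the computed eigenvalues are ##n - 1## zeros together with one value equal to ##y^T x = \operatorname{tr}(A)##.

```python
# Sketch: verify that A = x y^T has eigenvalues {0 (n-1 times), y^T x},
# and that the nonzero eigenvalue equals trace(A).
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal(n)
y = rng.standard_normal(n)
A = np.outer(x, y)                            # rank-1 matrix A = x y^T

print(np.sort_complex(np.linalg.eigvals(A)))  # n-1 (near-)zeros and y^T x
print(y @ x, np.trace(A))                     # nonzero eigenvalue = trace
```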
 
  • #4
By the way, note that this does not necessarily mean that ##A## has two distinct eigenvalues. The trace may well be zero, for example
$$A = \begin{bmatrix}
1 & 1 \\
-1 & -1
\end{bmatrix}$$
is a rank one matrix whose only eigenvalue is 0.
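A short check of this example (a sketch, not from the original post): the matrix has rank 1 and trace 0, its only eigenvalue is 0, and it is nilpotent, so it cannot be diagonalized.

```python
# Sketch: the 2x2 example above has rank 1, trace 0, only the eigenvalue 0,
# and satisfies A^2 = 0 (a nonzero nilpotent matrix is not diagonalizable).
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, -1.0]])
print(np.linalg.matrix_rank(A))  # 1
print(np.trace(A))               # 0.0
print(np.linalg.eigvals(A))      # both eigenvalues are 0 (up to rounding)
print(A @ A)                     # the zero matrix
```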
 
  • #5


The eigenvalues of a rank 1 matrix are indeed 0 and the trace of the matrix. This can be seen from the definition of a rank 1 matrix as one that can be written as the outer product of two nonzero vectors. Calling these vectors ##u## and ##v##, the matrix is ##A = uv^T##.

Now consider the eigenvalue equation ##Aw = \lambda w## for a nonzero vector ##w##. Substituting ##A = uv^T## gives ##(uv^T)w = u(v^T w) = \lambda w##. Here ##v^T w## is a scalar, so if ##\lambda \neq 0##, the equation says ##w## is a scalar multiple of ##u##.

Taking ##w = u## then gives ##u(v^T u) = \lambda u##, and since ##u## is nonzero this forces ##\lambda = v^T u##.

The trace of ##A## is ##\operatorname{tr}(uv^T) = \sum_i u_i v_i = v^T u##, so this nonzero eigenvalue is exactly the trace.

This shows that the trace of ##A## is an eigenvalue, with corresponding eigenvector ##u##. The other eigenvalue is 0, whose eigenvectors are the vectors orthogonal to ##v##; they form the ##(n-1)##-dimensional null space of ##A##.

As for other proofs, several approaches work. One is to compute the characteristic polynomial directly: for a rank 1 matrix it is ##\lambda^{n-1}(\lambda - \operatorname{tr}(A))##, so the eigenvalues can be read off immediately. Another is the singular value decomposition, or, in the symmetric case ##A = c\,vv^T## with ##\|v\| = 1##, an explicit orthogonal diagonalization. (Note that a general rank 1 matrix need not be diagonalizable; the trace-zero example above is not.) All of these approaches lead to the same result: the eigenvalues are 0 and the trace of the matrix.
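To illustrate the SVD remark (a sketch with arbitrary test vectors; not from the original post): a rank-1 matrix ##xy^T## has exactly one nonzero singular value, ##\|x\|\,\|y\|##, which in general differs from the nonzero eigenvalue ##y^T x##, while in the symmetric case ##A = c\,vv^T## with ##\|v\| = 1## the eigenvalues are 0 and ##c = \operatorname{tr}(A)##.

```python
# Sketch: singular values vs. eigenvalues of a rank-1 matrix.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4)
y = rng.standard_normal(4)
A = np.outer(x, y)

s = np.linalg.svd(A, compute_uv=False)
print(s)                                      # one nonzero singular value
print(np.linalg.norm(x) * np.linalg.norm(y))  # equals that singular value

# Symmetric rank-1 case: A = c v v^T with a unit vector v.
v = x / np.linalg.norm(x)
c = 3.0
S = c * np.outer(v, v)
print(np.linalg.eigvalsh(S))  # three zeros and c (in ascending order)
print(np.trace(S))            # c
```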
 

FAQ: Why Do Rank 1 Matrices Have Eigenvalues 0 and the Trace?

What is a rank 1 matrix?

A rank 1 matrix is a square matrix whose columns are all scalar multiples of a single nonzero column, resulting in a one-dimensional column space.

How do you find the eigenvalues of a rank 1 matrix?

To find the eigenvalues of a rank 1 matrix, write it as an outer product ##A = xy^T## of nonzero vectors. The eigenvalues are then 0, with an ##(n-1)##-dimensional eigenspace, and ##y^T x##, which equals the trace of ##A##. Equivalently, the characteristic polynomial is ##\lambda^{n-1}(\lambda - \operatorname{tr}(A))##.

Can a rank 1 matrix have more than one eigenvalue?

Yes, if its trace is nonzero: an ##n \times n## rank 1 matrix with ##n > 1## then has the two distinct eigenvalues 0 and ##\operatorname{tr}(A)##. If the trace is zero, 0 is the only eigenvalue, as in the ##2 \times 2## example above.

What is the significance of the eigenvalues in a rank 1 matrix?

A rank 1 matrix ##A = xy^T## sends every vector onto the line spanned by ##x##, so there is essentially one direction of transformation. The nonzero eigenvalue, the trace, is the scaling factor along that direction (##Ax = (y^T x)x##), while the zero eigenvalue corresponds to the directions that the matrix annihilates.
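A tiny sketch of this point (illustrative vectors only, not from the original FAQ): every output ##Aw## of a rank-1 matrix ##A = xy^T## lies on the line spanned by ##x##, namely ##Aw = (y^T w)\,x##.

```python
# Sketch: a rank-1 matrix maps every vector onto the line spanned by x.
import numpy as np

x = np.array([1.0, 2.0, -1.0])
y = np.array([3.0, 0.0, 1.0])
A = np.outer(x, y)

rng = np.random.default_rng(3)
for _ in range(3):
    w = rng.standard_normal(3)
    print(A @ w, (y @ w) * x)  # the two vectors agree: A w = (y . w) x
```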

Can a rank 1 matrix have a zero eigenvalue?

Yes; in fact it must. Any ##n \times n## rank 1 matrix with ##n > 1## is singular, so 0 is always an eigenvalue, with an eigenspace of dimension ##n - 1## (the null space of the matrix).
