Eigenvalues of a Linear Transformation

In summary, for \(A=(a_1,\cdots,a_n)\) the matrix \(A^{t}.A\) has exactly one nonzero eigenvalue, \(a_1^2+a_2^2+\cdots+a_n^2\) (with eigenvector \(A^{t}\)); every nonzero vector orthogonal to \(A^{t}\) is an eigenvector with eigenvalue \(0\).
  • #1
Sudharaka
Hi everyone, :)

Here's a question I got stuck on. Hope you can shed some light on it. :)

Find all eigenvalues of a linear transformation \(f\) whose matrix in some basis is \(A^{t}.A\) where \(A=(a_1,\cdots, a_n)\).

Of course, if we write out the matrix of the linear transformation, we get

\[A^{t}.A=\begin{pmatrix}a_1^2 & a_{1}a_2 & \cdots & a_{1}a_{n}\\a_2 a_1 & a_2^2 &\cdots & a_{2}a_{n}\\ \vdots & \vdots & \ddots & \vdots \\a_n a_1 & a_{n}a_2 & \cdots & a_{n}^2\end{pmatrix}\]

Now this is a symmetric matrix, so it can be written as \(A^{t}.A=QDQ^T\), where \(Q\) is an orthogonal matrix and \(D\) is a diagonal matrix. If we can do this, the diagonal elements of \(D\) give all the eigenvalues we need. However, I have no idea how to break \(A^{t}.A\) into \(QDQ^T\). Does any of you see a different approach to this problem that is much easier? :)


 
  • #2
I think I found a way to solve this problem. The method seems quite obvious but if you see any mistakes in it please let me know. :)

So we know that,

\[(A^{T}A)x=\lambda x\]

where \(x\) is an eigenvector corresponding to \(\lambda\). We simply multiply both sides on the left by \(A\) (a \(1\times n\) row matrix, so \(Ax\) is a scalar) and use the associativity of matrix multiplication.

\[A(A^{T}A)x=\lambda (Ax)\]

\[(AA^{T})(Ax)=\lambda (Ax)\]

Since \(AA^{T}=a_1^2+a_2^2+\cdots+a_n^2\) (a \(1\times 1\) matrix, i.e. a scalar), this becomes

\[(a_1^2+a_2^2+\cdots+a_n^2)(Ax)=\lambda (Ax)\]

Therefore, provided \(Ax \neq 0\),

\[\lambda = a_1^2+a^2_2+\cdots+a_n^2\]

And that's it! Yay, we found the eigenvalue. :p
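As a quick numerical sanity check of this derivation (a sketch with an arbitrary example vector \(a=(1,2,3)\); any nonzero choice of the \(a_i\) would do):

```python
# Check that x = A^T (the vector a itself) is an eigenvector of A^T A
# with eigenvalue a_1^2 + ... + a_n^2, using plain Python lists.
a = [1.0, 2.0, 3.0]                        # example entries a_1, ..., a_n
AtA = [[ai * aj for aj in a] for ai in a]  # the n x n matrix A^T A
lam = sum(ai * ai for ai in a)             # candidate eigenvalue: 14.0 here

x = a                                      # candidate eigenvector x = A^T
AtAx = [sum(row[j] * x[j] for j in range(len(x))) for row in AtA]
print(AtAx)                    # [14.0, 28.0, 42.0]
print([lam * xi for xi in x])  # [14.0, 28.0, 42.0] -- the same vector
```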
 
  • #3
You have found one eigenvalue, namely $\lambda = a_1^2+a_2^2+\ldots+a_n^2$. In fact, if $x = (a_1,a_2,\ldots,a_n)^T$ then $x$ is an eigenvector, with eigenvalue $\lambda$.

Now suppose that $y = (b_1,b_2,\ldots,b_n)^T$ is a (nonzero) vector orthogonal to $x$, $x.y = 0$. If you form the product $A^TAy$, you will find that its $i$th coordinate is $a_i(x.y) = 0$ for $i=1,2,\ldots,n$, and so $A^TAy = 0$. That shows that $y$ is an eigenvector of $A^TA$, corresponding to the eigenvalue $0$. In other words, all the other eigenvalues of $A^TA$ are $0$.
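This claim is also easy to check numerically (a sketch; the vectors \(a\) and \(y\) below are arbitrary example choices, not from the problem):

```python
# For a = (a_1, ..., a_n), any y orthogonal to a satisfies (A^T A) y = 0,
# since the i-th coordinate of (A^T A) y is a_i * (a . y).
a = [1.0, 2.0, 3.0]
y = [3.0, 0.0, -1.0]                          # a . y = 3 + 0 - 3 = 0
dot = sum(ai * yi for ai, yi in zip(a, y))
print(dot)                                    # 0.0
AtAy = [ai * dot for ai in a]                 # i-th coordinate is a_i * (a . y)
print(AtAy)                                   # [0.0, 0.0, 0.0]
```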
 
  • #4
Opalg said:
In other words, all the other eigenvalues of $A^TA$ are $0$.

Wow, thanks very much for completing my answer. It never occurred to me that $0$ could be an eigenvalue. :)
 
  • #5


Hi there,

First, let's define what eigenvalues are in the context of linear transformations. An eigenvalue of a transformation \(f\) is a scalar \(\lambda\) for which some nonzero vector \(v\) satisfies \(f(v)=\lambda v\). In other words, \(f\) maps \(v\) to a scalar multiple of itself: the direction of \(v\) is preserved (or reversed), and only its magnitude changes.

In this case, we are looking for the eigenvalues of a linear transformation whose matrix is \(A^{t}.A\). As you correctly pointed out, this is a symmetric matrix, meaning it can be diagonalized as \(A^{t}.A=QDQ^T\), where \(Q\) is an orthogonal matrix and \(D\) is a diagonal matrix.

To find the eigenvalues, we can use the fact that the eigenvalues of a diagonal matrix are simply the diagonal elements. So, to find the eigenvalues of \(A^{t}.A\), we need to find the diagonal elements of \(D\).

To do this, we can use the singular value decomposition (SVD) of \(A\). The SVD states that any matrix can be decomposed as \(A=UDV^T\), where \(U\) and \(V\) are orthogonal matrices and \(D\) is a diagonal matrix. In our case, \(A^{t}.A=(UDV^T)^{t}(UDV^T)=VD^{t}U^{T}UDV^{T}=VD^{t}DV^{T}\), since \(U^{T}U=I\).

Comparing this with our original expression \(A^{t}.A=QDQ^T\), we can take \(Q=V\), with \(D^{t}D\) playing the role of the diagonal matrix. Therefore, the diagonal elements of \(D^{t}D\) — the squares of the singular values of \(A\) — are the eigenvalues of \(A^{t}.A\). For the \(1\times n\) matrix \(A\) here, the only nonzero singular value is \(\sqrt{a_1^2+\cdots+a_n^2}\), which recovers the eigenvalues found above.
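If NumPy is available, this relationship can be checked numerically (a sketch with the same arbitrary example vector; here \(A\) is taken as the \(1\times n\) row matrix \((a_1,\dots,a_n)\)):

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0]])    # A as a 1 x n row matrix
AtA = a.T @ a                      # the n x n matrix A^T A

# Squared singular values of A should match the nonzero eigenvalues of A^T A.
sing = np.linalg.svd(a, compute_uv=False)  # one singular value: sqrt(14)
eig = np.linalg.eigvalsh(AtA)              # eigenvalues, ascending order
print(sing**2)                             # approx [14.]
print(eig)                                 # two (numerically) zero eigenvalues and 14
```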

I hope this helps! Let me know if you have any other questions. :)
 

FAQ: Eigenvalues of a Linear Transformation

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are mathematical concepts used to describe the behavior of a linear transformation. An eigenvalue is a scalar value that represents how much an eigenvector is stretched or compressed by the transformation. An eigenvector is a non-zero vector that, when multiplied by the linear transformation, results in a scalar multiple of itself.

How do eigenvalues and eigenvectors relate to linear transformations?

Eigenvalues and eigenvectors are properties of linear transformations. They provide information about how the transformation affects certain vectors: eigenvalues tell you how much the transformation stretches or compresses vectors along particular directions, while eigenvectors give those directions.

What is the significance of calculating eigenvalues?

Calculating eigenvalues is useful in many applications. In linear algebra, they are used to solve systems of linear differential equations and to study the long-term behavior of repeatedly applied transformations. They are also used in physics to describe the behavior of quantum systems and in data analysis (for example, principal component analysis) to identify patterns in large datasets.

How do you find the eigenvalues of a linear transformation?

To find the eigenvalues of a linear transformation, you must first represent the transformation as a matrix. Then, find the characteristic polynomial of the matrix, set it equal to zero, and solve for the values of lambda (the eigenvalues). These values will be the eigenvalues of the linear transformation.
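As a small illustration of this procedure (a sketch using NumPy; the \(2\times 2\) matrix is an arbitrary example):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial det(M - lambda*I) = lambda^2 - 4*lambda + 3;
# its roots are the eigenvalues, 3 and 1.
coeffs = np.poly(M)       # coefficients [1., -4., 3.]
roots = np.roots(coeffs)  # the eigenvalues

# Cross-check against NumPy's direct eigenvalue routine.
direct = np.linalg.eigvalsh(M)
```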

What is the relationship between eigenvalues and determinants?

The determinant of a matrix is equal to the product of its eigenvalues (counted with multiplicity). In particular, a matrix has \(0\) as an eigenvalue exactly when its determinant is \(0\), i.e. when the matrix is singular. The determinant alone, however, cannot tell you whether eigenvalues are repeated; that information comes from the characteristic polynomial.
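This product relationship is easy to verify numerically (a sketch; the matrix is an arbitrary example with eigenvalues \(5\) and \(2\)):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eig = np.linalg.eigvals(M)  # eigenvalues 5 and 2
det = np.linalg.det(M)      # 4*3 - 1*2 = 10

# The product of the eigenvalues equals the determinant (both approx 10.0).
prod = np.prod(eig).real
```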
