A question about a new way to find eigenvectors that I noticed

  • Thread starter: transgalactic
  • Tags: Eigenvectors
  • #1
transgalactic
I got this question in which we are given the matrix T, and we need to find the eigenvalues and the independent spaces (I don't know what an independent space is) of T^2 + 2*T.

The problem is that the solver started to solve the question the way I would have solved it, but then he puts a big X on it and does something else. I can't understand it (and he gets all the points for it).

It looks as if he skips the step of finding the roots of the characteristic polynomial. Why?

http://img253.imageshack.us/my.php?image=img86091xg4.jpg
 
  • #2
The solver appears to have realized that he didn't have to compute the eigenvalues of T^2+2*T since he already knew that the eigenvalues of T were +1 and -1, apparently from a previous problem. This let him immediately conclude the eigenvalues of T^2+2*T are 3 and -1. Once he knew the eigenvalues he substituted them in for lambda and seems to have read off the eigenvectors more or less by inspection. It's not a new way of computing eigenvalues.
 
  • #3
If [itex]\lambda[/itex] is an eigenvalue of T, with eigenvector v, then Tv = [itex]\lambda[/itex]v. From that, [itex](T^2+ 2T)v= T(T(v))+ 2T(v)= T(\lambda v)+ 2\lambda v= \lambda T(v)+ 2\lambda v= \lambda(\lambda v)+ 2\lambda v= (\lambda^2+ 2\lambda) v[/itex].

In other words, if [itex]\lambda[/itex] is an eigenvalue of T with eigenvector v, then [itex]\lambda^2+ 2\lambda[/itex] is an eigenvalue of [itex]T^2+ 2T[/itex] with the same eigenvector v.

It is easier to find the eigenvalues of T and then use that formula than to find the eigenvalues of [itex]T^2+ 2T[/itex] directly.
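A quick numerical check of this fact (a minimal sketch, not part of the original thread; the 2x2 matrix below is a hypothetical stand-in for T whose eigenvalues happen to be +1 and -1, as in this problem):

[code]
import numpy as np

# Hypothetical example matrix with eigenvalues +1 and -1 (stand-in for T).
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])

M = T @ T + 2 * T                    # the operator T^2 + 2T
evals, evecs = np.linalg.eig(T)      # eigenpairs of T

for lam, v in zip(evals, evecs.T):
    lhs = M @ v                      # (T^2 + 2T) v
    rhs = (lam**2 + 2 * lam) * v     # (lambda^2 + 2*lambda) v
    print(lam, lam**2 + 2 * lam, np.allclose(lhs, rhs))
# lambda = 1 maps to 3 and lambda = -1 maps to -1, and both checks print True.
[/code]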
 
  • #4
OK, I understood how you got the formula from the T^2 + 2T expression.

What now? How do I mix up the eigenvalues of T with this formula in order to get the new values?
 
  • #5
transgalactic said:
OK, I understood how you got the formula from the T^2 + 2T expression.

What now? How do I mix up the eigenvalues of T with this formula in order to get the new values?

What do you mean by "mix up the eigenvalues of T" and what "new values" are you talking about?

If you mean "How do I go from the eigenvalues of T to the eigenvalues of [itex]T^2+ 2T[/itex]?", that's exactly what I told you before:
In other words, if [itex]\lambda[/itex] is an eigenvalue of T with eigenvector v, then [itex]\lambda^2+ 2\lambda[/itex] is an eigenvalue of [itex]T^2+ 2T[/itex] with the same eigenvector v.
 
  • #6
Correct me if I am wrong:

x = eigenvalue of T
y = eigenvalue of the expression
y(x) = x^2 + 2*x

So if x = -1, then for that "old" eigenvalue we get y = -1, and we do that process for every eigenvalue.
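And likewise [itex]x = 1[/itex] gives [itex]y = 1^2 + 2\cdot 1 = 3[/itex], so the eigenvalues of [itex]T^2 + 2T[/itex] come out as 3 and -1, exactly the values mentioned in post #2, each with the same eigenvector it had for T.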
 

FAQ: A question about a new way to find eigenvectors that I noticed

What is the new way to find eigenvectors?

The method described here is power iteration: repeatedly multiplying a vector by the matrix until it converges to the dominant eigenvector.

How does the power iteration method work?

The power iteration method works by first choosing a random vector as an initial approximation to the eigenvector. This vector is then multiplied by the matrix, and the resulting vector is normalized. This process is repeated until the vector converges to the dominant eigenvector.
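A minimal sketch of that iteration (the matrix A, tolerance, and iteration count below are illustrative choices, not taken from this thread):

[code]
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    """Approximate the dominant eigenvalue and eigenvector of A by power iteration."""
    v = np.random.rand(A.shape[0])         # random initial approximation
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v                          # multiply by the matrix
        v_new = w / np.linalg.norm(w)      # normalize
        # Stop when the direction stops changing (a pure sign flip also counts).
        if min(np.linalg.norm(v_new - v), np.linalg.norm(v_new + v)) < tol:
            v = v_new
            break
        v = v_new
    lam = v @ A @ v                        # Rayleigh-quotient estimate of the eigenvalue
    return lam, v

# Illustrative usage: a symmetric 2x2 matrix whose dominant eigenvalue is 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam, v)   # lam is close to 3, v close to [1, 1] / sqrt(2) up to sign
[/code]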

What are the advantages of using the power iteration method?

The power iteration method is a simple and efficient way to find the dominant eigenvector of a matrix. It works well for large matrices and is easy to implement on a computer. It can also be extended to find further eigenvectors, for example by deflating the matrix or by re-orthogonalizing against the eigenvectors already found, rather than simply restarting with a different random initial vector.

Are there any limitations to the power iteration method?

One limitation of the power iteration method is that it only finds the dominant eigenvector of a matrix. It can also converge to a non-dominant eigenvector if the initial vector is chosen badly, for instance if it has no component along the dominant eigenvector. Additionally, it may fail to converge when the dominant eigenvalue is not unique in magnitude, for example when the two largest eigenvalues are equal in size but opposite in sign, in which case the iterates oscillate between two directions.

How is the power iteration method different from other methods for finding eigenvectors?

The power iteration method differs from other methods in that it needs only matrix-vector products, so it can be used when the matrix is very large, sparse, or only available implicitly, without factoring or reducing the matrix first. However, it may take many iterations to converge when the two largest eigenvalues are close in magnitude, whereas methods such as the QR algorithm compute all eigenvalues and eigenvectors at once.
