Vector iteration/power method for eigenvalues

In summary, the vector iteration/power method is a process for computing eigenvalues and eigenvectors. First, the dominant eigenvalue and corresponding eigenvector are obtained using the power method. Then the deflation method is used to compute the remaining eigenvalues and eigenvectors. This involves subtracting a matrix from the original matrix, producing a matrix that maps the eigenspace of the dominant eigenvalue to 0 while acting on the other subspaces just as the original matrix did. However, the power method may fail if there is no dominant eigenvalue, and if two eigenvalues are close in absolute value convergence is slow or the method fails.
  • #1
svishal03
I have been reading about the vector iteration/power method for computing eigenvalues and eigenvectors and have some questions. I shall be grateful if someone can help me.

Now, with the power method we get the dominant eigenvalue and the corresponding eigenvector.

This is followed by the deflation method, wherein we compute the remaining eigenvalues and eigenvectors.

To compute the dominant eigenvalue and the corresponding eigenvector we carry out the following steps:

i. Choose an arbitrary starting vector for the candidate matrix, with at least one element being nonzero.
ii. Multiply the matrix by this vector to compute a new vector.
iii. Normalize the new vector, taking the normalization scalar as an initial eigenvalue estimate.
iv. Multiply the original matrix by the normalized vector to calculate a new vector.
v. Normalize this vector, taking the normalization scalar as a new eigenvalue estimate.
vi. Repeat the entire process until the absolute relative error between successive eigenvalue estimates satisfies an arbitrary tolerance (threshold) value.
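As a rough sketch, the steps above might be implemented like this (assuming NumPy; the helper name `power_method` and the choice of normalization scalar, the component of largest magnitude, are my own):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    # Step i: arbitrary starting vector with at least one nonzero element
    x = np.ones(A.shape[0])
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x                       # steps ii/iv: multiply by the matrix
        lam = y[np.argmax(np.abs(y))]   # normalization scalar = eigenvalue estimate
        x = y / lam                     # steps iii/v: normalize
        # step vi: absolute relative error between successive estimates
        if abs(lam - lam_old) <= tol * abs(lam):
            break
        lam_old = lam
    return lam, x / np.linalg.norm(x)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)   # ≈ (5 + sqrt(5))/2 ≈ 3.618, the dominant eigenvalue
```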


My question here is:

1) How can it be said that the eigenvalue is the dominant eigenvalue? I mean, what is the physical interpretation of it being dominant?


Then in the deflation method we do the following:

First, we use the power method to find the largest eigenvalue and eigenvector of matrix A.
a) Multiply the largest eigenvector by its transpose and then by the largest eigenvalue. This produces the matrix Z* = cXX^T.
b) Compute a new matrix A* = A - Z* = A - cXX^T.
c) Apply the power method to A* to compute its largest eigenvalue. This in turn should be the second largest eigenvalue of the initial matrix A.
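A minimal sketch of steps a)-c), assuming NumPy and a symmetric A (so the eigenvectors are orthogonal, which this deflation relies on); the dominant eigenpair is hard-coded here rather than taken from a power-method run:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Dominant eigenpair of A (hard-coded; in practice it would come
# from the power method): lam1 = (5 + sqrt(5))/2 ≈ 3.618
lam1 = (5 + np.sqrt(5)) / 2
X = np.array([1.0, (1 + np.sqrt(5)) / 2])
X = X / np.linalg.norm(X)            # X must have unit length

Z = lam1 * np.outer(X, X)            # step a): Z* = c X X^T
A_def = A - Z                        # step b): A* = A - Z*

# Step c): power method applied to A*; its dominant eigenvalue
# is the second largest eigenvalue of A
x = np.ones(2)
for _ in range(100):
    y = A_def @ x
    x = y / np.linalg.norm(y)
lam2 = x @ A_def @ x                 # Rayleigh quotient estimate
print(lam2)   # ≈ (5 - sqrt(5))/2 ≈ 1.382
```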

Though these steps are mathematically easy to do/program, what exactly are we doing in a), b) and c)? Is it some kind of similarity transformation?

Please help
 
  • #2
Please, can anyone help? I shall be grateful.

vishal
 
  • #3
Take a simple example. The matrix
[tex]\begin{bmatrix}3 & 0 \\ 0 & 1\end{bmatrix}[/tex]
obviously has 3 and 1 as eigenvalues with eigenvectors [1, 0] and [0, 1] respectively.

The larger eigenvalue is 3, so that is the one the power method would give. Your "cXX^T" would be
[tex]3\begin{bmatrix}1 \\ 0 \end{bmatrix}\begin{bmatrix}1 & 0 \end{bmatrix}= \begin{bmatrix}3 & 0 \\ 0 & 0\end{bmatrix}[/tex]

So A - 3XX^T becomes
[tex]\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}[/tex]

Essentially, cXX^T, with c an eigenvalue and X an eigenvector corresponding to c, is a matrix that has c as eigenvalue with multiples of X as eigenspace but maps all other subspaces to 0. Subtracting, A - cXX^T gives a matrix which maps vectors in the eigenspace of c to 0 while mapping other subspaces to the same thing A did. In effect the eigenvalue for that eigenspace is now 0, which is obviously the smallest eigenvalue rather than the largest.
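The 2x2 example above can be checked numerically (a quick sketch assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
c = 3.0
X = np.array([1.0, 0.0])              # eigenvector for the dominant eigenvalue 3

A_def = A - c * np.outer(X, X)        # A - c X X^T
print(A_def)                          # [[0. 0.] [0. 1.]]

# The old dominant eigenvector is now mapped to 0 ...
print(A_def @ np.array([1.0, 0.0]))   # [0. 0.]
# ... while the other eigenvector is mapped to the same thing A mapped it to
print(A_def @ np.array([0.0, 1.0]))   # [0. 1.]
```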
 
  • #4
Thanks a lot for the reply.

Now, you said:

Essentially, cXX^T, with c an eigenvalue and X an eigenvector corresponding to c, is a matrix that has c as eigenvalue with multiples of X as eigenspace

1) When you say "eigenspace", does it mean: if we have a matrix (say "A") with an eigenvalue "c", then all the eigenvectors of "A" that have eigenvalue "c" constitute an "eigenspace" - right? (By the strict definition, including the zero vector.)

2) I don't follow when you say:

but maps all other subspaces to 0.

Does this mean all other vectors which are not eigenvectors of A are mapped to 0? That is, all columns of the matrix are zero except the column(s) constituting the vector(s) with eigenvalue c?


Subtracting, A - cXX^T gives a matrix which maps vectors in the eigenspace of c to 0 while mapping other subspaces to the same thing A did.

Does this mean that after doing A - cXX^T the resultant matrix has all columns nonzero except those columns which constitute eigenvectors having some eigenvalue other than 'c'?
 
  • #5
Yes. A - cXX^T maps all eigenvectors of A with eigenvalue c to the 0 vector while mapping an eigenvector, v, with eigenvalue [itex]a\ne c[/itex] to av.
 
  • #6
Thanks a lot again.

It is said that the power method fails if there is no dominant eigenvalue. Does it mean that if two eigenvalues are close in absolute value, the power iteration will converge slowly, and if two eigenvalues are equal in absolute value the method will not converge/will fail?

Why is it so numerically?
 
  • #7
Hi,

Please can anyone help?
 

FAQ: Vector iteration/power method for eigenvalues

What is the vector iteration/power method for eigenvalues?

Vector iteration/power method is a numerical algorithm used to find the eigenvalues of a square matrix. It involves repeatedly multiplying a starting vector by the matrix until it converges to the dominant eigenvector, which corresponds to the largest eigenvalue of the matrix.

How does vector iteration/power method work?

The vector iteration/power method starts with an arbitrary vector and repeatedly multiplies it by the matrix. As the iterations continue, the vector approaches the dominant eigenvector of the matrix. Once the vector has converged, the dominant eigenvalue can be calculated.

What is the significance of the dominant eigenvector and eigenvalue?

The dominant eigenvector and eigenvalue provide important information about the behavior of the matrix. The dominant eigenvector represents the direction that the matrix stretches the most under repeated application, while the dominant eigenvalue gives the factor by which vectors in that direction are stretched.

How accurate is the vector iteration/power method?

The accuracy of the vector iteration/power method depends on the starting vector and the convergence criteria. Generally, the more iterations that are performed, the closer the vector will be to the dominant eigenvector. However, if the starting vector is not close to the dominant eigenvector, the method may take a long time to converge.
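How strongly the convergence speed depends on the separation of the two largest eigenvalues can be illustrated with a short sketch (assuming NumPy; `power_iters` is a hypothetical helper written for this illustration). The error shrinks roughly like |λ2/λ1|^k after k iterations, so a ratio near 1 means many iterations, and a ratio of exactly 1 means no convergence at all:

```python
import numpy as np

def power_iters(A, tol=1e-10, max_iter=100000):
    # Count iterations until successive Rayleigh-quotient eigenvalue
    # estimates agree to within a relative tolerance.
    x = np.array([1.0, 1.0])
    lam_old = 0.0
    for k in range(1, max_iter + 1):
        y = A @ x
        x = y / np.linalg.norm(y)
        lam = x @ A @ x              # Rayleigh quotient estimate
        if abs(lam - lam_old) < tol * abs(lam):
            return k
        lam_old = lam
    return max_iter

well_separated = np.diag([3.0, 1.0])   # |lam2/lam1| = 1/3: fast
nearly_equal   = np.diag([3.0, 2.9])   # |lam2/lam1| ≈ 0.97: slow
print(power_iters(well_separated), power_iters(nearly_equal))
```

Running this shows the nearly-equal case needing far more iterations to reach the same tolerance.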

Can the vector iteration/power method be used for non-square matrices?

No, the vector iteration/power method can only be used for square matrices, since eigenvalues are only defined for square matrices. For non-square matrices, other tools such as the singular value decomposition (SVD) must be used instead, which yield singular values and singular vectors rather than eigenvalues.
