Approximation of eigenvalue with power method

In summary, the conversation discusses using the power method to approximate the largest (in absolute value) eigenvalue of a given matrix. The power method iterates from a given initial vector and uses the norm of each resulting vector to estimate the eigenvalue. The conversation also covers spectral shifts, deflation, and the conditions needed for the power method to converge. Ultimately, it turns out that for this particular matrix and initial vector the iteration oscillates rather than converging to an eigenvector.
  • #1
mathmari
Hey! :eek:

We have \begin{equation*}A:=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix} \ \text{ and } \ z^{(0)}:=\begin{pmatrix}1\\ 1 \\ 1\end{pmatrix}\end{equation*}

I want to approximate the largest (in absolute value) eigenvalue of $A$ with the power method (4 steps, using the infinity norm).
I have done the following:

From the power method we have that $z^{(k+1)} = \frac{Az^{(k)}}{\|Az^{(k)}\|}$ and it holds that $\|Az^{(k)}\|\rightarrow |\lambda_m|$, where $\lambda_m$ is the largest in absolute value eigenvalue of $A$.

We have that \begin{equation*}Az^{(0)}=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix}\begin{pmatrix}1\\ 1 \\ 1\end{pmatrix}=\begin{pmatrix}-99.7\\ 19.8 \\ -20.1\end{pmatrix} \Rightarrow \|Az^{(0)}\|_{\infty}=\max_{i=1}^3|(Az^{(0)})_i|=99.7\end{equation*}

So, we get \begin{equation*}z^{(1)} = \frac{Az^{(0)}}{\|Az^{(0)}\|}=\frac{1}{99.7}\begin{pmatrix}-99.7\\ 19.8 \\ -20.1\end{pmatrix}=\begin{pmatrix}-1\\ 0.198596 \\ -0.201605\end{pmatrix}\end{equation*}

We have that \begin{equation*}Az^{(1)}=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix}\begin{pmatrix}-1\\ 0.198596 \\ -0.201605\end{pmatrix}=\begin{pmatrix}0.198589\\ 0.131897 \\ 0.208123\end{pmatrix} \Rightarrow \|Az^{(1)}\|_{\infty}=\max_{i=1}^3|(Az^{(1)})_i|=0.208123\end{equation*}

So, we get \begin{equation*}z^{(2)} = \frac{Az^{(1)}}{\|Az^{(1)}\|}=\frac{1}{0.208123}\begin{pmatrix}0.198589\\ 0.131897 \\ 0.208123\end{pmatrix}=\begin{pmatrix}0.954191\\ 0.633745 \\ 1\end{pmatrix}\end{equation*}

We have that \begin{equation*}Az^{(2)}=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix}\begin{pmatrix}0.954191\\ 0.633745 \\ 1\end{pmatrix}=\begin{pmatrix}-77.0607\\ 15.4049 \\ -15.7278\end{pmatrix} \Rightarrow \|Az^{(2)}\|_{\infty}=\max_{i=1}^3|(Az^{(2)})_i|=77.0607\end{equation*}

So, we get \begin{equation*}z^{(3)} = \frac{Az^{(2)}}{\|Az^{(2)}\|}=\frac{1}{77.0607}\begin{pmatrix}-77.0607\\ 15.4049 \\ -15.7278\end{pmatrix}=\begin{pmatrix}-1\\ 0.199906 \\ -0.204096\end{pmatrix}\end{equation*}

We have that \begin{equation*}Az^{(3)}=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix}\begin{pmatrix}-1\\ 0.199906 \\ -0.204096\end{pmatrix}=\begin{pmatrix}0.200502\\ 0.1298 \\ 0.2106\end{pmatrix} \Rightarrow \|Az^{(3)}\|_{\infty}=\max_{i=1}^3|(Az^{(3)})_i|=0.2106\end{equation*}

So, we get \begin{equation*}z^{(4)}= \frac{Az^{(3)}}{\|Az^{(3)}\|}=\frac{1}{0.2106}\begin{pmatrix}0.200502\\ 0.1298 \\ 0.2106\end{pmatrix}=\begin{pmatrix}0.952051\\ 0.616334 \\ 1\end{pmatrix}\end{equation*}

We have that \begin{equation*}Az^{(4)}=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix}\begin{pmatrix}0.952051\\ 0.616334 \\ 1\end{pmatrix}=\begin{pmatrix}-75.9847\\ 15.196 \\ -15.52\end{pmatrix} \Rightarrow \|Az^{(4)}\|_{\infty}=\max_{i=1}^3|(Az^{(4)})_i|=75.9847\end{equation*}

Therefore we have that $\|Az^{(4)}\|_{\infty}=75.9847\rightarrow |\lambda_m|$, so an approximation to the largest in absolute value eigenvalue is $75.9847$.
According to Wolfram the largest eigenvalue is $4$. Have I done something wrong? Or am I really supposed to get such a large approximation? (Wondering)
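For reference, here is a minimal NumPy sketch of the same iteration (just an illustration of the steps above, not part of the exercise):

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])
z = np.ones(3)                           # z^(0)

for k in range(4):                       # four power-method steps
    w = A @ z                            # A z^(k)
    est = np.linalg.norm(w, np.inf)      # ||A z^(k)||_inf, the eigenvalue-magnitude estimate
    z = w / est                          # z^(k+1)
    print(k + 1, est)                    # 99.7, ~0.2081, ~77.06, ~0.2106

print(np.linalg.norm(A @ z, np.inf))     # ~75.98, i.e. ||A z^(4)||_inf as above
```

The printed estimates bounce between roughly $0.21$ and values near $80$-$100$ instead of settling down, which already hints at the problem discussed below.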
 
  • #2
I have not tried to check your arithmetic. But I can't help noticing that $z^{(3)}$ is very close to $z^{(1)}$, and $z^{(4)}$ is very close to $z^{(2)}$. But $z^{(1)}$ and $z^{(3)}$ are very different from $z^{(2)}$ and $z^{(4)}$. So after the first four iterations, the sequence $(z^{(n)})$ appears to be oscillating. It certainly shows no sign of converging to an eigenvector, so it is hardly surprising that $\|Az^{(n)}\|$ shows no sign of converging to an eigenvalue.

It looks as though the combination of the matrix $A$ and the initial vector $z^{(0)}$ is craftily rigged to be resistant to the power method.
 
  • #3
Opalg said:
I have not tried to check your arithmetic. But I can't help noticing that $z^{(3)}$ is very close to $z^{(1)}$, and $z^{(4)}$ is very close to $z^{(2)}$. But $z^{(1)}$ and $z^{(3)}$ are very different from $z^{(2)}$ and $z^{(4)}$. So after the first four iterations, the sequence $(z^{(n)})$ appears to be oscillating. It certainly shows no sign of converging to an eigenvector, so it is hardly surprising that $\|Az^{(n)}\|$ shows no sign of converging to an eigenvalue.

It looks as though the combination of the matrix $A$ and the initial vector $z^{(0)}$ is craftily rigged to be resistant to the power method.

So, with this matrix and initial vector the method doesn't converge, right? Are there conditions that the matrix and the initial vector have to satisfy so that the method converges? (Wondering)

In the second subquestion, we have to apply a spectral shift with $s=5$ and approximate the largest in absolute value eigenvalue of the resulting matrix.
So, we have to apply the power method to $A' = A-sI=A-5I$, right? (Wondering)

I have found that $\|A'z^{(4)}\|_{\infty}=9.01262\rightarrow |\lambda_m|$, so an approximation of the largest in absolute value eigenvalue of $A'$ is $9.01262$.

Let $\lambda$ be an approximation of the largest in absolute value eigenvalue of $A$. Does it then hold that $\lambda-5=9.01262 \Rightarrow \lambda=9.01262+5=14.01262$ ? (Wondering)
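For reference, the shifted iteration can be reproduced with the same kind of NumPy sketch as in post #1 (illustration only):

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])
A_shift = A - 5.0 * np.eye(3)               # A' = A - 5I
z = np.ones(3)

for _ in range(4):                          # four power-method steps on A'
    w = A_shift @ z
    z = w / np.linalg.norm(w, np.inf)

print(np.linalg.norm(A_shift @ z, np.inf))  # ~9.01, the value quoted above
```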
 
  • #4
mathmari said:
So, with this matrix and initial vector the method doesn't converge, right? Are there conditions that the matrix and the initial vector have to satisfy so that the method converges?

From Power iteration:
If we assume A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $( b_{k})$ converges to an eigenvector associated with the dominant eigenvalue.

Without the two assumptions above, the sequence $( b_{k})$ does not necessarily converge.

Are those assumptions satisfied? (Wondering)
mathmari said:
In the second subquestion, we have to apply a spectral shift with $s=5$ and approximate the largest in absolute value eigenvalue of the resulting matrix.
So, we have to apply the power method to $A' = A-sI=A-5I$, right? (Wondering)

I have found that $\|A'z^{(4)}\|_{\infty}=9.01262\rightarrow |\lambda_m|$, so an approximation of the largest in absolute value eigenvalue of $A'$ is $9.01262$.

Let $\lambda$ be an approximation of the largest in absolute value eigenvalue of $A$. Does it then hold that $\lambda-5=9.01262 \Rightarrow \lambda=9.01262+5=14.01262$ ?

Aren't there 2 cases, since we only know the absolute value? (Wondering)
 
  • #5
I like Serena said:
From Power iteration:
If we assume A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $( b_{k})$ converges to an eigenvector associated with the dominant eigenvalue.

Without the two assumptions above, the sequence $( b_{k})$ does not necessarily converge.

Are those assumptions satisfied? (Wondering)

In this case $A$ does not have an eigenvalue that is strictly greater in magnitude than its other eigenvalues.
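Checking this numerically (a NumPy sketch, for illustration only):

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])

print(sorted(np.linalg.eigvals(A), key=abs, reverse=True))
# Two eigenvalues (4 and -4) share the largest magnitude, so no eigenvalue
# of A is strictly dominant and the first assumption fails.
```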

What exactly is meant by "in the direction of an eigenvector associated with the dominant eigenvalue" ? (Wondering)
I like Serena said:
Aren't there 2 cases, since we only know the absolute value? (Wondering)

Oh yes (Blush)

It holds that $|\lambda-5|=9.01262 \Rightarrow \lambda-5=\pm 9.01262 \Rightarrow \lambda=\pm 9.01262+5$, so $\lambda=9.01262+5=14.01262$ or $\lambda=-9.01262+5=-4.01262$, right? (Wondering)
 
  • #6
mathmari said:
In this case $A$ does not have an eigenvalue that is strictly greater in magnitude than its other eigenvalues.

What exactly is meant by "in the direction of an eigenvector associated with the dominant eigenvalue" ?

Normally that would mean that the dot product with the eigenvector is greater than zero.
Or equivalently, that their angle is less than 90 degrees.

However, in this case I don't think that's the case.
That's because if our initial vector happened to be an eigenvector for a non-dominant eigenvalue, we'd never reach the eigenvector with the greatest eigenvalue magnitude, since we'd be stuck with the non-dominant one.
Instead I think it means that, if we write the initial vector as a linear combination of the eigenvectors, there must be a non-zero contribution from the eigenvector with the largest eigenvalue magnitude. (Thinking)
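One way to make that concrete (a NumPy sketch, for illustration only): expand the initial vector in the eigenbasis of $A$ and look at the coefficients.

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])
z0 = np.ones(3)

eigvals, V = np.linalg.eig(A)      # columns of V are (numerical) eigenvectors
coeffs = np.linalg.solve(V, z0)    # z0 = V @ coeffs: coefficients in the eigenbasis
print(np.round(eigvals, 4))        # 4, -4, -1 in some order
print(np.round(coeffs, 4))
# The assumption asks for a non-zero coefficient on the eigenvector belonging to the
# eigenvalue of largest magnitude; here |4| = |-4|, so there is no single such eigenvalue.
```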

mathmari said:
It holds that $|\lambda-5|=9.01262 \Rightarrow \lambda-5=\pm 9.01262 \Rightarrow \lambda=\pm 9.01262+5$, so $\lambda=9.01262+5=14.01262$ or $\lambda=-9.01262+5=-4.01262$, right?

Yep. (Nod)

And now we see that the spectral shift caused one of the eigenvalues ($\lambda=-4$) to become the greatest in magnitude, after which the assumptions of the power method were satisfied. (Nerd)
 
  • #7
I see! Thank you very much! (Handshake)
 
  • #8
I want to apply a deflation of the eigenvalue $\lambda=4$ of the matrix $A$ (of post #1) and use the result to calculate an approximation (3 steps) of the smallest in absolute value eigenvalue. How exactly can we apply a deflation of the eigenvalue $\lambda=4$ ? I haven't really understood that.

Do we have to find the eigenvector $v$ that corresponds to $\lambda=4$ by solving $(A-\lambda I)v=0$ ? Then do we have to calculate $HAH^{-1}$, where $H$ is the matrix that we get from the Householder transformation, and then apply the power method to the matrix $HAH^{-1}$ ?

(Wondering)
 
  • #9
mathmari said:
I want to apply a deflation of the eigenvalue $\lambda=4$ of the matrix $A$ (of post #1) and use the result to calculate an approximation (3 steps) of the smallest in absolute value eigenvalue. How exactly can we apply a deflation of the eigenvalue $\lambda=4$ ? I haven't really understood that.

Do we have to find the eigenvector $v$ that corresponds to $\lambda=4$ by solving $(A-\lambda I)v=0$ ? Then do we have to calculate $HAH^{-1}$, where $H$ is the matrix that we get from the Householder transformation, and then apply the power method to the matrix $HAH^{-1}$ ?

(Wondering)

I found that we can deflate an eigenvalue $\lambda$ with eigenvector $\mathbf v$ from $A$ with:
$$A'=A-\frac 1{v_p}\mathbf v \cdot\mathbf a_p^T$$
where $p$ is the index of a non-zero element in $\mathbf v$, and $\mathbf a_p^T$ is row $p$ of $A$ (as a row vector).
That is, $A'$ has the same eigenvalues as $A$, except that it has eigenvalue $0$ instead of $\lambda$ for eigenvector $\mathbf v$. (Thinking)
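As an illustration of that formula (a NumPy sketch; here the eigenpair comes from `numpy.linalg.eig` purely for demonstration, whereas in this thread it would come from the shifted power method):

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])

def deflate(A, v):
    """A' = A - (1/v_p) v a_p^T, with a_p^T the p-th row of A."""
    p = np.argmax(np.abs(v))             # a non-zero entry of v (largest one, for stability)
    return A - np.outer(v, A[p]) / v[p]

eigvals, V = np.linalg.eig(A)            # just to get some eigenpair for the demo
i = np.argmax(np.abs(eigvals))           # an eigenvalue of largest magnitude (4 or -4)
A_defl = deflate(A, V[:, i])

print(np.round(np.linalg.eigvals(A_defl), 4))
# same eigenvalues as A, except the deflated one is replaced by 0
```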

To find $\lambda$ and $\mathbf v$ can't we apply the Power method?
That is, doesn't it yield approximations of the eigenvalue with the greatest magnitude and the corresponding eigenvector?
Alternatively, we can also use other methods. Using Householder transformations suggests that we want the most numerically stable approach. Do we? (Wondering)
 
  • #10
I like Serena said:
I found that we can deflate an eigenvalue $\lambda$ with eigenvector $\mathbf v$ from $A$ with:
$$A'=A-\frac 1{v_p}\mathbf v \cdot\mathbf a_p^T$$
where $p$ is the index of a non-zero element in $\mathbf v$, and $\mathbf a_p^T$ is row $p$ of $A$ (as a row vector).
That is, $A'$ has the same eigenvalues as $A$, except that it has eigenvalue $0$ instead of $\lambda$ for eigenvector $\mathbf v$. (Thinking)

What exactly is $v_p$ ? (Wondering)
I like Serena said:
To find $\lambda$ and $\mathbf v$ can't we apply the Power method?
That is, doesn't it yield the eigenvalue with the greatest magnitude and the corresponding eigenvector? (Wondering)

But as we see in #1 the power method doesn't converge. Do we use another initial vector, or can we also use the spectral shift (#3)? (Wondering)
I like Serena said:
Alternatively, we can also use other methods. Using Householder transformations suggests that we want the most numerically stable approach. Do we? (Wondering)

Does this mean that Householder transformations are appropriate here? (Wondering)
 
  • #11
mathmari said:
What exactly is $v_p$ ?

The entry in vector $\mathbf v$ with index $p$.
For numerical stability we should pick the one with the largest magnitude.

mathmari said:
But as we see in #1 the power method doesn't converge. Do we use another initial vector, or can we also use the spectral shift (#3)?

Yep. With the spectral shift we found an approximation of the eigenvalue -4 and its corresponding eigenvector.
Wouldn't the logical continuation be to deflate -4, and repeat to find eigenvalue 4?
And then deflate again to finally find the last eigenvalue? (Wondering)

mathmari said:
Does this mean that Householder transformations are appropriate here?

I'm not aware that the power method is generally in use to solve eigensystems.
I believe there are better alternatives, such as Singular Value Decomposition (SVD). And if I'm not mistaken that can include Householder transformations to improve numerical stability.

So I'm guessing this is an exercise. Is it?
If so, does it say we should use Householder transformations? (Wondering)
 
  • #12
I like Serena said:
The entry in vector $\mathbf v$ with index $p$.
For numerical stability we should pick the one with the largest magnitude.

Ah ok!

So, $p$ can be any index of a non-zero element in $\mathbf v$, no matter which, right? (Wondering)
I like Serena said:
Yep. With the spectral shift we found an approximation of the eigenvalue -4 and its corresponding eigenvector.
Wouldn't the logical continuation be to deflate -4, and repeat to find eigenvalue 4?
And then deflate again to finally find the last eigenvalue? (Wondering)

With the spectral shift we get an eigenvalue and eigenvector of the matrix $A-5I$. Can we get from there the eigenvector of the matrix $A$ ? (Wondering)
I like Serena said:
I'm not aware that the power method is generally in use to solve eigensystems.
I believe there are better alternatives, such as Singular Value Decomposition (SVD). And if I'm not mistaken that can include Householder transformations to improve numerical stability.

So I'm guessing this is an exercise. Is it?
If so, does it say we should use Householder transformations? (Wondering)

The problem statement doesn't say anything, but in some notes (in German) I found the following:

[Attachments Deflation1.JPG, Deflation2.JPG, Deflation3.JPG: notes on deflation via Householder transformations]

That's why I thought that we have to use the Householder transformation.
 

  • #13
mathmari said:
So, $p$ can be any index of a non-zero element in $\mathbf v$, no matter which, right?

Yep. (Nod)

mathmari said:
With the spectral shift we get an eigenvalue and eigenvector of the matrix $A-5I$. Can we get from there the eigenvector of the matrix $A$ ?

We found a vector $\mathbf v$ such that $(A-5I)\mathbf v \approx -9\mathbf v$ didn't we?
Doesn't that mean that $A\mathbf v \approx -4\mathbf v$? (Wondering)

mathmari said:
The problem statement doesn't say anything, but in some notes (in German) I found the following:

That's why I thought that we have to use the Householder transformation.

Ah okay, that is another algorithm to do a deflation.
Its input is also an eigenvalue and its corresponding eigenvector.
Can we apply it? (Wondering)
 
  • #14
I like Serena said:
We found a vector $\mathbf v$ such that $(A-5I)\mathbf v \approx -9\mathbf v$ didn't we?
Doesn't that mean that $A\mathbf v \approx -4\mathbf v$? (Wondering)

Ah ok!

I like Serena said:
Ah okay, that is another algorithm to do a deflation.
Its input is also an eigenvalue and its corresponding eigenvector.
Can we apply it? (Wondering)

So, we have the following:

At the $4$th step of the spectral shift we get \begin{equation*}A'z^{(4)}=\begin{pmatrix}-10.7 & -61.1 & -32.9 \\ 0.8 & 6.9 & 7.1 \\ -1.1 & -11.8 & -12.2\end{pmatrix}\begin{pmatrix}1\\ -0.157302 \\ 0.240844\end{pmatrix}=\begin{pmatrix}-9.01262\\ 1.42461 \\ -2.18213\end{pmatrix} \Rightarrow \|A'z^{(4)}\|_{\infty}=\max_{i=1}^3|(A'z^{(4)})_i|=9.01262\end{equation*}
That means that an approximation of the largest in absolute value eigenvalue of $A'=A-5I$ is $9.01262$ and an approximate eigenvector is $\mathbf v=\begin{pmatrix}-9.01262\\ 1.42461 \\ -2.18213\end{pmatrix}$. Since $(A-5I)\mathbf v \approx -9\mathbf v$, we get that $A\mathbf v \approx -4\mathbf v$, so $\mathbf v$ is also an eigenvector of $A$ that corresponds to the eigenvalue $\lambda_v=-4$.

We want to apply a deflation of the eigenvalue $\lambda=4$. Is this eigenvalue meant in absolute value? Because above we have the eigenvalue $-4$ instead of $4$. (Wondering)
 
  • #15
Indeed, the questions don't seem to form a logical sequence.
Up till now we didn't even know that 4 is also an eigenvalue.
I can only guess at what is intended. Can you clarify? (Wondering)

Afterwards we're supposed to find the smallest eigenvalue, aren't we?
One way to do it would be to first deflate the eigenvalue -4, then find 4, which we didn't have yet, and deflate it as well, after which the matrix is 1x1, meaning its single entry is the last and smallest eigenvalue. (Thinking)
Alternatively we can just get 4 and its eigenvector by solving $(A-4I)v=0$ as a prerequisite to do the deflation. (Thinking)
 
  • #16
I like Serena said:
I haven't checked your calculations, but yes, there must be something wrong.

Yes, I found my mistake! We have infinitely many solutions. We get $v_2=- \frac{23}{15}v_3$ and $v_1=\frac{94}{15}v_3$. So we can take, for example, the vector with $v_3=1$:
$$v=\begin{pmatrix} \frac{94}{15} \\ - \frac{23}{15} \\ 1\end{pmatrix}$$ right? (Wondering) Now we can define $$u=\frac{v-\alpha\cdot e_1}{\|v-\alpha\cdot e_1\|_2}$$ where $\alpha=- \|v\|$ and so the matrix $H$.
(We choose the sign of $\alpha=\pm \|v\|$ so that $\|v-\alpha\cdot e_1\|_2$ is large; since $v_1>0$ this means $\alpha=- \|v\|$, right? (Wondering) )

We have that $$\|v\|_2=\sqrt{\left (\frac{94}{15}\right )^2+\left (- \frac{23}{15}\right )^2+1^2}=\frac{\sqrt{9590}}{15}$$

So, we get $$v-\alpha\cdot e_1=v+\frac{\sqrt{9590}}{15}\cdot e_1=\begin{pmatrix}\frac{94}{15}+\frac{\sqrt{9590}}{15} \\ - \frac{23}{15} \\ 1\end{pmatrix}=\begin{pmatrix}\frac{94+\sqrt{9590}}{15}\\ - \frac{23}{15} \\ 1\end{pmatrix}$$

Then $$\|v-\alpha\cdot e_1\|_2=\sqrt{\left (\frac{94+\sqrt{9590}}{15}\right )^2+\left ( - \frac{23}{15} \right )^2+1^2}\approx 12.93$$

Therefore $$u=\frac{1}{12.93}\begin{pmatrix}\frac{94+\sqrt{9590}}{15}\\ - \frac{23}{15} \\ 1\end{pmatrix}$$

So, we have to calculate the matrix $H=I-2uu^T$. Do we then have to calculate the product $HAH^{-1}=HAH$ ? (Wondering)

Then, if I have understood that correctly, we get a matrix where the element $a_{11}$ is the eigenvalue $4$. But what do we do next? (Wondering)
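For reference, a NumPy sketch of this Householder step (illustration only, using the exact eigenvector $v=(94/15,\,-23/15,\,1)^T$ for $\lambda=4$ from above):

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])

v = np.array([94/15, -23/15, 1.0])         # eigenvector of A for lambda = 4
alpha = -np.linalg.norm(v)                 # alpha = -||v||_2 (sign opposite to v_1)
u = v - alpha * np.array([1.0, 0.0, 0.0])  # v - alpha e_1
u /= np.linalg.norm(u)

H = np.eye(3) - 2.0 * np.outer(u, u)       # Householder reflection: H = H^T = H^{-1}
HAH = H @ A @ H
print(np.round(HAH, 4))                    # first column becomes (4, 0, 0)^T

B = HAH[1:, 1:]                            # the 2x2 block that remains after deflating lambda = 4
print(np.round(np.linalg.eigvals(B), 4))   # the remaining eigenvalues of A: -4 and -1
```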
 
  • #17
mathmari said:
Yes, I found my mistake!

Ok!

mathmari said:
So, we have to calculate the matrix $H=I-2uu^T$. Do we then have to calculate the product $HAH^{-1}=HAH$ ?

Then, if I have understood that correctly, we get a matrix where the element $a_{11}$ is the eigenvalue $4$. But what do we do next?

Yes. That is also how I understand it.
The next step is simply to take the trailing 2x2 submatrix B (rows and columns 2 and 3), which is the result of the deflation.
That is, B should have the same eigenvalues as A, except for the eigenvalue 4 that we deflated.
So B should have eigenvalues -4 and -1. (Thinking)
 
  • #18
I like Serena said:
The next step is simply to take the trailing 2x2 submatrix B (rows and columns 2 and 3), which is the result of the deflation.
That is, B should have the same eigenvalues as A, except for the eigenvalue 4 that we deflated.
So B should have eigenvalues -4 and -1. (Thinking)

Do we maybe apply inverse iteration, i.e. apply the power method to $(B-sI)^{-1}$ for some $s$, and find the largest in absolute value eigenvalue of $(B-sI)^{-1}$ ? Its reciprocal would then be the smallest in absolute value eigenvalue of the matrix $B-sI$, and adding $s$ to that gives the eigenvalue of $B$ closest to $s$ (for small $s$, the smallest in absolute value eigenvalue of $B$).

Are my thoughts correct? (Wondering)
 
  • #19
mathmari said:
Do we maybe apply inverse iteration, i.e. apply the power method to $(B-sI)^{-1}$ for some $s$, and find the largest in absolute value eigenvalue of $(B-sI)^{-1}$ ? Its reciprocal would then be the smallest in absolute value eigenvalue of the matrix $B-sI$, and adding $s$ to that gives the eigenvalue of $B$ closest to $s$ (for small $s$, the smallest in absolute value eigenvalue of $B$).

Are my thoughts correct?

I think we should pick $s=0$ and then indeed apply the inverse iteration algorithm.
The result should be an approximation of the eigenvector $\mathbf v$ that belongs to the eigenvalue with the least magnitude.
We can then find the corresponding eigenvalue by using that $A\mathbf v=\lambda \mathbf v\implies |\lambda|=\frac{\|A\mathbf v\|}{\|\mathbf v\|}$. (Thinking)
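For reference, a sketch of that inverse iteration with $s=0$ (NumPy, illustration only; for brevity it is applied directly to $A$, whose smallest-magnitude eigenvalue is the same as that of the deflated block $B$):

```python
import numpy as np

A = np.array([[-5.7, -61.1, -32.9],
              [ 0.8,  11.9,   7.1],
              [-1.1, -11.8,  -7.2]])

z = np.ones(3)
for _ in range(3):                         # three inverse-iteration steps (shift s = 0)
    w = np.linalg.solve(A, z)              # w = A^{-1} z, without forming the inverse explicitly
    z = w / np.linalg.norm(w, np.inf)

lam_abs = np.linalg.norm(A @ z, np.inf) / np.linalg.norm(z, np.inf)
print(np.round(z, 4), round(lam_abs, 4))   # |lambda| comes out close to 1; the eigenvalue is -1
```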
 
  • #20
I like Serena said:
I think we should pick $s=0$ and then indeed apply the inverse iteration algorithm.
The result should be an approximation of the eigenvector $\mathbf v$ that belongs to the eigenvalue with the least magnitude.
We can then find the corresponding eigenvalue by using that $A\mathbf v=\lambda \mathbf v\implies |\lambda|=\frac{\|A\mathbf v\|}{\|\mathbf v\|}$. (Thinking)

Ok! But which vector should I take as the initial vector for the power method? (Wondering)
 
  • #21
mathmari said:
Ok! But which vector should I take as the initial vector for the power method? (Wondering)

The method on wiki suggests either an approximation of the eigenvector or a random vector.
I guess we might also just pick a vector like the one in the OP. (Thinking)
 
  • #22
I like Serena said:
The method on wiki suggests either an approximation of the eigenvector or a random vector.
I guess we might also just pick a vector like the one in the OP. (Thinking)

Ok! Thank you very much! (Happy)
 

FAQ: Approximation of eigenvalue with power method

What is the power method for approximating eigenvalues?

The power method is an iterative algorithm used to approximate the dominant eigenvalue of a square matrix. It involves computing successive powers of the matrix and using the resulting vectors to approximate the dominant eigenvector and eigenvalue.

How does the power method work?

The power method works by repeatedly multiplying a starting vector by the matrix and normalizing the resulting vector. As the number of iterations increases, the resulting vector will converge to the dominant eigenvector, and the corresponding eigenvalue can be approximated using the Rayleigh quotient.
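A generic sketch of that loop, including the Rayleigh-quotient estimate (illustration only, on an arbitrary small matrix with a strictly dominant eigenvalue):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # example matrix with a strictly dominant eigenvalue
b = np.ones(2)                      # starting vector

for _ in range(50):
    b = M @ b
    b /= np.linalg.norm(b)          # normalize (2-norm here)

rayleigh = b @ M @ b / (b @ b)      # Rayleigh quotient: approximates the dominant eigenvalue
print(rayleigh)                     # ~3.618 = (5 + sqrt(5)) / 2 for this M
```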

What are the advantages of using the power method for eigenvalue approximation?

The power method is relatively simple to implement and computationally efficient. It also works well for large matrices and can be easily parallelized. Additionally, combined with techniques such as spectral shifts, deflation, or inverse iteration, it can be used to approximate eigenvalues other than the dominant one.

What are the limitations of the power method?

The basic power method only approximates the dominant eigenvalue and corresponding eigenvector of a matrix. If no eigenvalue is strictly dominant in magnitude (as for the matrix in this thread, where 4 and -4 tie), the iteration may fail to converge. It also may not converge if the starting vector has no component in the direction of the dominant eigenvector.

How can the power method be used in real-world applications?

The power method has many applications in various fields, including physics, engineering, and economics. It can be used to model and predict the behavior of dynamic systems, such as population growth or chemical reactions. It is also used in machine learning algorithms, such as principal component analysis, for dimensionality reduction and data analysis.
