Numerically Calculating Eigenvalues

In summary, the conversation discusses solving the eigenvalue problem ##Ax = \lambda Bx## with numerical entries for the square matrices ##A## and ##B##. The approach takes $$Ax = \lambda Bx\implies\\B^{-1}Ax- \lambda Ix=0\implies\\(B^{-1}A-\lambda I)x=0$$ and uses a built-in function in programs like Mathematica and MATLAB to compute the eigenvalues and eigenvectors of ##B^{-1}A##. However, the results are off by an order of magnitude and a sign error, leading to a discussion about potential mistakes in using these programs and the sensitivity of the system to numerical errors.
  • #1
member 428835
Hi PF!

I am trying to solve the eigenvalue problem ##Ax = \lambda Bx## where I have numerical entries for the square matrices ##A## and ##B##. I solve this by taking $$Ax = \lambda Bx\implies\\
B^{-1}Ax- \lambda Ix=0\implies\\
(B^{-1}A-\lambda I)x=0$$
where I then use a built-in function in Mathematica and MATLAB (I'm using both to double-check my work) to compute the eigenvalues/vectors of the matrix ##B^{-1}A##. But my answers are off by about an order of magnitude and off by a sign. Any ideas if there is a more accurate way to check for eigenvalues?
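(For concreteness, here is a minimal Python sketch of the reduction above, using made-up 2x2 matrices in place of my actual ##A## and ##B##:)

```python
import numpy as np

# Toy symmetric stand-ins for A and B (assumptions, not the real matrices).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 1.0],
              [1.0, 5.0]])

# Reduce A x = lambda B x to a standard problem, but prefer solve() over an
# explicit inverse: np.linalg.solve(B, A) computes B^{-1} A more stably.
M = np.linalg.solve(B, A)
eigvals, eigvecs = np.linalg.eig(M)

# Sanity check: each pair should satisfy the original generalized problem.
for lam, v in zip(eigvals, eigvecs.T):
    residual = np.linalg.norm(A @ v - lam * (B @ v))
    print("lambda =", lam, " residual =", residual)
```

(For what it's worth, MATLAB's `eig(A,B)` and SciPy's `scipy.linalg.eig(A, B)` accept the generalized problem directly, so ##B^{-1}A## never has to be formed at all.)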

Thanks!
 
  • #2
How big are these matrices? It is possible that they are very badly conditioned such that computer round-off is causing problems, but if both Mathematica and Matlab are giving the same answer, then maybe the solution you are comparing it to is wrong. If you plug the solution you get back into the matrix equation, does it satisfy the equality?
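(Plugging the solution back in can be done in a couple of lines; a sketch with a hypothetical helper, not anyone's actual data:)

```python
import numpy as np

def generalized_residual(A, B, lam, v):
    """Relative size of A v - lam B v; a value near machine epsilon means
    the pair (lam, v) really does solve A x = lam B x."""
    v = np.asarray(v, dtype=float)
    return np.linalg.norm(A @ v - lam * (B @ v)) / np.linalg.norm(v)

# Example with an exact pair: A = 2B gives lambda = 2 for any vector.
B = np.array([[4.0, 1.0],
              [1.0, 5.0]])
A = 2.0 * B
print(generalized_residual(A, B, 2.0, [1.0, -1.0]))  # exactly 0 here
```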
 
  • #3
Hey joshmccraney! ;)

If Mathematica and Matlab give significantly different results, my first thought is that there was a mistake in using them somewhere.
Can you rule that out?

Otherwise it would suggest that the system is sensitive to numerical errors, which could typically happen if ##B## is close to being singular.
In particular that would also mean that we have eigenvalues close to zero.
Do you have those?

And, as NFuller mentioned, it would indeed be good to verify whether the solution satisfies the original equation, to figure out what is going on.
 
  • #4
What's the largest singular value (##\sigma_1##) and the smallest singular value (##\sigma_n##) for ##\mathbf A## and ##\mathbf B##?

I.e. suppose you populate this table:

##
\begin{bmatrix}
& \mathbf A & \mathbf B\\
\sigma_1 & & \\
\sigma_n & &
\end{bmatrix}
##
 
  • #5
NFuller said:
How big are these matrices? If you plug the solution you get back into the matrix equation, does it satisfy the equality?
The matrices are 3x3. Yes, plugging the solution back in satisfies the eigenvalue problem, but a professor once told me that if there is a numerical mistake, simply substituting the eigenvalues/eigenvectors back into the problem won't help identify it.

I like Serena said:
If Mathematica and Matlab give significantly different results, my first thought is that there was a mistake in using them somewhere.
Can you rule that out?

Otherwise it would suggest that the system is sensitive to numerical errors, which could typically happen if ##B## is close to being singular.
In particular that would also mean that we have eigenvalues close to zero.
Do you have those?
Eigenvalues are the same in value. Two eigenvalues are ##O(1)## and the third is ##O(10^{-2})##.
 
  • #6
StoneTemplePython said:
What's the largest singular value (##\sigma_1##) and the small singular value (##\sigma_n##) for ##\mathbf A## and ##\mathbf B##?

I.e. suppose you populate this table:

##
\begin{bmatrix}
& \mathbf A & \mathbf B\\
\sigma_1 & & \\
\sigma_n & &
\end{bmatrix}
##
I'm unsure what ##\sigma## is and how to compute it. If you can describe it to me or direct me to a link I'll find it for you. I tried googling it but nothing definite came up.
 
  • #7
joshmccraney said:
I'm unsure what ##\sigma## is and how to compute it. If you can describe it to me or direct me to a link I'll find it for you. I tried googling it but nothing definite came up.

Look into Singular Value Decomposition. For instance these links.
- - - -

https://en.wikipedia.org/wiki/Singular_value_decomposition

https://math.mit.edu/~gs/linearalgebra/linearalgebra5_7-1.pdf
https://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/positive-definite-matrices-and-applications/singular-value-decomposition/MIT18_06SCF11_Ses3.5sum.pdf

- - - -
If you agree on measuring discrepancies / changes using a 2-norm, you can use singular values to quantify how ill-conditioned a matrix is: the 2-norm condition number is ##\kappa = \sigma_1/\sigma_n##. Singular values (##\sigma##) also happen to give bounds on the spectral radius (eigenvalues), can be used in calculating various matrix norms, and so on.

Calculating these things is standard fare for a numerical library. I don't use Matlab or Mathematica, but getting the singular values should be pretty simple.

For example, in Python, you'd use a command like:

Python:
import numpy as np

A = np.random.random((2, 2))
U, sigma, Vt = np.linalg.svd(A)
print("this has the singular values for A", sigma)
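(And the ratio ##\sigma_1/\sigma_n## itself; a sketch using a deliberately ill-conditioned toy matrix. `np.linalg.cond` is the shortcut:)

```python
import numpy as np

# A deliberately ill-conditioned diagonal example: singular values 1 and 1e-6.
A = np.array([[1.0, 0.0],
              [0.0, 1e-6]])

# svd returns the singular values sorted in descending order.
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma[0] / sigma[-1]

# np.linalg.cond(A, 2) computes the same ratio directly.
print(kappa, np.linalg.cond(A, 2))  # both approximately 1e6
```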
 
  • #8
joshmccraney said:
The matrices are 3x3. Yes, plugging the solution satisfies the eigenvalue problem, but I spoke to a professor once and he said if there is a numerical mistake simply plugging in the eigenvalues/vectors into the problem to verify won't help identify the problem.

Eigenvalues are the same in value. Two eigenvalues are ##O(1)## and the third is ##O(10^{-2})##.

How about posting those matrices and your results with discrepancies?
3x3 matrices should be small enough to process for us.
 
  • #9
joshmccraney said:
Yes, plugging the solution satisfies the eigenvalue problem
Then the solution is correct.
joshmccraney said:
I spoke to a professor once and he said if there is a numerical mistake simply plugging in the eigenvalues/vectors into the problem to verify won't help identify the problem.
It's true that if you wrote your own numerical solver, it might sometimes give the right answer and sometimes not. So getting the correct solution one time does not verify an algorithm. In this case, however, you are using algorithms written by the developers of Mathematica and Matlab. Those algorithms are correct, and if they are giving the correct solution to the equation, then I'm not sure what the issue is.

The only other thing I can think of is that if you are directly comparing two eigenvectors, the one you calculated (call it ##\mathbf{v}##) and the one from a solution sheet (call it ##\mathbf{u}##), they are allowed to be off by a multiplicative factor such that
$$\mathbf{v}=\alpha\mathbf{u}$$
Is this the case?
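(One way to check that, sketched with a hypothetical `same_direction` helper: two vectors agree up to a scalar exactly when Cauchy-Schwarz holds with equality.)

```python
import numpy as np

def same_direction(u, v, tol=1e-10):
    """True if v = alpha * u for some (possibly negative) scalar alpha."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # Collinear vectors satisfy |<u, v>| = ||u|| ||v|| (Cauchy-Schwarz equality).
    return abs(abs(u @ v) - np.linalg.norm(u) * np.linalg.norm(v)) < tol

print(same_direction([1.0, 2.0], [-3.0, -6.0]))  # True: alpha = -3
print(same_direction([1.0, 2.0], [2.0, 1.0]))    # False
```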
 
  • #10
Ok, sorry for taking so long! The ratio of the largest singular value to the smallest for ##A## is 3 and for ##B## is 22. The matrices are $$
A =\begin{bmatrix}
-1.0231 &0.5571 & 0.9796\\
0.5571 & -0.5749 & 0.2227\\
0.9796 & 0.2227 & -0.2982
\end{bmatrix}\\
B =
\begin{bmatrix}
10.3513 &3.7790 & 6.7384\\
3.7790 &2.3295 & 2.5858\\
6.7384 & 2.5858 & 5.4928
\end{bmatrix}
$$

I should say, I am calculating the numeric values of these matrices from a very complicated set of equations. Since my math there looks correct, I was asking you all how robust typical built-in algorithms are for computing eigenvalues. It is possible my matrices, and hence the eigenvalues, are wrong. I just thought of troubleshooting this since I've triple-checked everything else.
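(For reference, here is a Python sketch of the whole check, with the matrices transcribed by hand from above, so treat the entries as my transcription rather than gospel:)

```python
import numpy as np

# The matrices as posted above (transcribed by hand; double-check the entries).
A = np.array([[-1.0231,  0.5571,  0.9796],
              [ 0.5571, -0.5749,  0.2227],
              [ 0.9796,  0.2227, -0.2982]])
B = np.array([[10.3513,  3.7790,  6.7384],
              [ 3.7790,  2.3295,  2.5858],
              [ 6.7384,  2.5858,  5.4928]])

# The singular-value ratios (sigma_1 / sigma_n) quoted above.
print("cond(A) =", np.linalg.cond(A, 2))
print("cond(B) =", np.linalg.cond(B, 2))

# Generalized eigenvalues via B^{-1} A (using solve, not an explicit inverse),
# with the residual of A v = lambda B v as a sanity check on each pair.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
for lam, v in zip(eigvals, eigvecs.T):
    residual = np.linalg.norm(A @ v - lam * (B @ v))
    print("lambda =", lam, " residual =", residual)
```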
 
  • #11
joshmccraney said:
I was asking you all how robust typical built-in algorithms are for computing eigenvalues.
They are very robust.
joshmccraney said:
It is possible my matrices, and hence the eigenvalues, are wrong. I just thought of troubleshooting this since I've triple checked everything else.
I think it would be more likely that you've set something up wrong rather than there being a problem with the algorithms.
 
  • #12
Thanks for your response!
 
  • #13
joshmccraney said:
Ok, sorry for taking so long! The ratio of the largest singular value to the smallest for ##A## is 3 and for ##B## is 22. The matrices are $$
A =\begin{bmatrix}
-1.0231 &0.5571 & 0.9796\\
0.5571 & -0.5749 & 0.2227\\
0.9796 & 0.2227 & -0.2982
\end{bmatrix}\\
B =
\begin{bmatrix}
10.3513 &3.7790 & 6.7384\\
3.7790 &2.3295 & 2.5858\\
6.7384 & 2.5858 & 5.4928
\end{bmatrix}
$$

I should say, I am calculating the numeric values of these matrices from a very complicated set of equations. Since my math there looks correct, I was asking you all how robust typical built-in algorithms are for computing eigenvalues. It is possible my matrices, and hence the eigenvalues, are wrong. I just thought of troubleshooting this since I've triple-checked everything else.

I'm a bit confused now. What is your problem exactly?
I haven't checked your eigenvalues, but didn't you say there was a discrepancy between the two math programs?
What's the discrepancy?
 

FAQ: Numerically Calculating Eigenvalues

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are important concepts in linear algebra. Eigenvalues represent the scaling factor applied to an eigenvector when it is multiplied by a transformation matrix. Eigenvectors are non-zero vectors whose direction is unchanged (up to a sign) after being transformed by the matrix.

Why is it important to calculate eigenvalues?

Calculating eigenvalues is important in many areas of mathematics and science, including engineering, physics, and computer graphics. Eigenvalues can provide insights into the behavior of a system and help with solving differential equations and finding critical points.

How are eigenvalues calculated numerically?

Eigenvalues can be calculated numerically using various methods, such as the power method, inverse iteration, and QR algorithm. These methods involve iterative processes and matrix operations to find the eigenvalues and corresponding eigenvectors of a matrix.
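For instance, the power method fits in a few lines (an illustrative sketch for the standard problem ##Ax = \lambda x##, not production code):

```python
import numpy as np

def power_method(A, iters=500):
    """Estimate the dominant eigenvalue/eigenvector of A by repeatedly
    multiplying a vector by A and renormalizing."""
    rng = np.random.default_rng(0)
    v = rng.random(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v  # Rayleigh quotient of the (unit-norm) iterate
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)  # converges to (5 + sqrt(5))/2, about 3.618, the larger eigenvalue
```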

What are the applications of numerically calculating eigenvalues?

Numerically calculating eigenvalues has many practical applications, such as in image and signal processing, data compression, principal component analysis, and machine learning. It is also used in physics and engineering to study the behavior of systems and to design efficient algorithms.

Are there any limitations to numerically calculating eigenvalues?

There are some limitations to numerically calculating eigenvalues, such as the potential for round-off errors and the convergence rate of iterative methods. The size and complexity of the matrix can also affect the accuracy and efficiency of the calculations.
