Calculating Eigenvalues of a Sparse Matrix

In summary: the thread discusses the need for higher-precision eigenvalues of a sparse matrix and different ways of deriving an equation for those eigenvalues. Using the Householder method to transform the matrix into an upper Hessenberg matrix is mentioned, but it is noted that this may not work for all matrices. The conversation also touches on the limitations on finding exact roots of an nth-degree polynomial and suggests either refining the values numerically or reconsidering whether ultra-high-precision eigenvalues are really needed.
  • #1
jeff.sadowski
I would like to get the eigenvalues of a sparse matrix

of the form

[ a, b, c, d, e;
1, 0, 0, 0, 0;
0, 1, 0, 0, 0;
0, 0, 1, 0, 0;
0, 0, 0, 1, 0]

In MATLAB/Octave I use the following to generate the matrix:

% filename nfactor.m written by Jeff Sadowski
% nfactor finds the factors for an nth degree polynomial
% if you have an equation a(x^n) + b(x^(n-1)) + c(x^(n-2)) + ...
% use nfactor like so nfactor([a,b,c,...])
% example: equation=x^2 + 2*x + 1
% octave:1> nfactor([1,2,1])
% ans =
%
% -1
% -1
%
function ans = nfactor(VECTOR)
  vsize = size(VECTOR);
  % companion matrix of the polynomial; its eigenvalues are the roots
  ans = eig([-1/VECTOR(1)*VECTOR(2:vsize(2)); eye(vsize(2)-2, vsize(2)-1)]);
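As a quick sanity check (my sketch, not part of the original post): the matrix nfactor builds is the companion matrix of the polynomial, so poly() should recover the coefficients and eig() the roots.

A = [-2 -1; 1 0];    % what nfactor([1,2,1]) builds for x^2 + 2*x + 1
disp(poly(A))        % characteristic polynomial: expect [1 2 1]
disp(eig(A))         % expect the double root -1, -1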

However, I need higher precision than this; more precisely, I need a resulting formula for each eigenvalue. I was hoping to use something like the Householder method to transform this matrix into an upper Hessenberg matrix so that I have the eigenvalue formulas down the diagonal. Can this be done?

I found this site
http://www.matf.bg.ac.yu/r3nm/NumericalMethods/EigenValues/Householder.html
but when I plug my matrices in, the example transformation doesn't work.
So I'm guessing this method will not work for my matrices. Are there other (non-numerical) methods that can be used to transform the matrix to upper triangular form without affecting the eigenvalues?
 
  • #2
You can write down a polynomial equation for the eigenvalues of your matrix as a simple function of the matrix coefficients - and you did that already, of course.

I don't think that trying to use a different method to derive an equation for the eigenvalues is going to change the numerical behaviour. Unless there are some relations between your coefficients a, b, c, d, e, you are not going to get a closed-form solution for the roots of a polynomial of degree 5, however you try to do it.

I suspect your problem is that the roots of a polynomial can be very sensitive to small perturbations in the coefficients. There's a classic example by Wilkinson (c. 1960). He showed that for the polynomial with roots 1, 2, ..., 20, a perturbation of about 2^-23 (roughly 1e-7) in a single coefficient changed many of the roots by large amounts, and several roots changed from real to complex.
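To see that sensitivity in Octave/MATLAB (my illustration, not part of the original post):

w = poly(1:20);          % coefficients of the polynomial with roots 1, 2, ..., 20
wp = w;
wp(2) = wp(2) + 2^-23;   % tiny perturbation of the x^19 coefficient
disp(sort(roots(w)))     % roughly 1..20, already with visible rounding error
disp(sort(roots(wp)))    % many roots move dramatically; several become complex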

You could refine the approximate values you already have by something like Newton-Raphson iteration using extended precision.
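For example, a minimal Newton-Raphson sketch (written here in plain double precision; to actually gain digits you would evaluate the polynomial with an extended-precision tool, e.g. vpa from the symbolic package, if available):

function x = refine_root(coeffs, x0, iters)
  % Newton-Raphson refinement of an approximate polynomial root x0
  dcoeffs = polyder(coeffs);                          % derivative coefficients
  x = x0;
  for k = 1:iters
    x = x - polyval(coeffs, x) / polyval(dcoeffs, x); % Newton step
  end
end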

Or you could take a step back and consider what you are going to use the eigenvalues for, and if you really need ultra-high precision eigenvalues. For most physics or engineering applications, you don't need them. Paradoxically, you can easily calculate the eigenvectors to much higher precision than the eigenvalues, using simple methods like inverse iteration.
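Inverse iteration is only a few lines; here is a rough sketch (mu is an approximate eigenvalue of A):

function v = inverse_iteration(A, mu, iters)
  % the eigenvector converges quickly even if mu is only roughly accurate
  n = size(A, 1);
  v = randn(n, 1);
  v = v / norm(v);
  for k = 1:iters
    y = (A - mu * eye(n)) \ v;   % one linear solve per iteration
    v = y / norm(y);
  end
end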
 
  • #3
The idea is to solve for a polynomial using some matrix manipulations, not the other way round. There have to be some other methods to solve for eigenvalues. I have not learned the Householder equations, but I know it is just a series of matrix manipulations that transforms a matrix into an upper Hessenberg matrix without affecting the eigenvalues, and thus the eigenvalues are on the diagonal. So if I can use the Householder equations, I have formulas for the roots of an nth-degree polynomial.

"You could refine the approximate values you already have by something like Newton-Raphson iteration using extended precision."
This just seems to get them further and further off. There seems to be some unstable point, which is exactly what I'm trying to find. I'd like to find it, though; that is why I want a "non-numerical" method.

The step back is that I want a systematic approach to finding the "exact" (non-numerical) roots of an nth-degree polynomial. I want the imaginary roots as well.

There have to be some ways to manipulate a matrix without affecting the eigenvalues and get it into a useful form so that the Householder method can be used.
 
  • #4
jeff.sadowski said:
I have not learned the Householder equations, but I know it is just a series of matrix manipulations that transforms a matrix into an upper Hessenberg matrix without affecting the eigenvalues, and thus the eigenvalues are on the diagonal. So if I can use the Householder equations, I have formulas for the roots of an nth-degree polynomial.

It's a while since I looked at the theory, but my memory (based on a lot of work with the EISPACK library in the past) says you finish up finding the eigenvalues of a tridiagonal matrix, numerically.
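For what it's worth, Octave/MATLAB expose this reduction as the built-in hess(); a small sketch (mine, not from the thread) shows the catch: the companion matrix from post #1 is already upper Hessenberg, the similarity transform preserves the eigenvalues, but you still need an iterative (QR-type) process to extract them rather than reading them off a diagonal.

A = [-2 -1; 1 0];                   % companion matrix example from post #1
[P, H] = hess(A);                   % Householder-type reduction: P'*A*P = H, H upper Hessenberg
disp(H)
disp(sort(eig(H)) - sort(eig(A)))   % eigenvalues unchanged by the similarity transform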

The step back is that I want a systematic approach to finding the "exact" (non-numerical) roots of an nth-degree polynomial. I want the imaginary roots as well.

But doesn't Galois Theory say that's impossible in the general case, except when n < 5?
 
  • #5
jeff.sadowski said:
The step back is that I want a systematic approach to finding the "exact" (non-numerical) roots of an nth-degree polynomial. I want the imaginary roots as well.
In what field do you want the roots? (i.e. rationals, Q(i), some number field, the complexes, some finite field, ...)

Do you have any constraints on how the root is represented? (Aside from requiring an exact representation)
 
  • #6
AlephZero said:
But doesn't Galois Theory say that's impossible in the general case, except when n < 5?

No. Galois theory states that the generic quintic (or higher-degree) polynomial cannot have its roots expressed as 'nice' functions of its coefficients. This says nothing about

(i) any specific case: I know what all the roots of x^n - 1 are for any n, algebraically (written out below)

(ii) the general case using methods other than extracting roots, etc.: elliptic functions, for example, give the roots of degree-5 polynomials, I believe.
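For reference, the closed form referred to in (i) is just the nth roots of unity:

$$x^n - 1 = \prod_{k=0}^{n-1}\left(x - e^{2\pi i k/n}\right), \qquad e^{2\pi i k/n} = \cos\frac{2\pi k}{n} + i\sin\frac{2\pi k}{n}, \quad k = 0, 1, \ldots, n-1.$$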
 
  • #7
matt grime said:
No. Galois theory states that the generic quintic (or higher-degree) polynomial cannot have its roots expressed as 'nice' functions of its coefficients.

OK, I was cutting some corners in my logic there.

But the matrix transformations (e.g. Householder, Givens, etc) used in the standard eigensolution methods (e.g. the EISPACK library) only involve "nice" functions like sqrt, sin, and cos, so they won't give a complete solution to the OP's question:

I want a systematic approach to finding the "exact" (non-numerical) roots of an nth-degree polynomial.
 

FAQ: Calculating Eigenvalues of a Sparse Matrix

What is a sparse matrix?

A sparse matrix is a matrix in which most of the entries are zero and only a few are non-zero. Because only the non-zero entries need to be stored, it can be represented far more compactly than a dense matrix of the same dimensions.

Why is it important to calculate eigenvalues of a sparse matrix?

Calculating eigenvalues of a sparse matrix is important because it helps in understanding the properties and behavior of the matrix. Eigenvalues provide information about, for example, the stability of the systems the matrix describes and the convergence of iterative methods applied to it.

How do you calculate eigenvalues of a sparse matrix?

To calculate eigenvalues of a sparse matrix, we use specialized iterative algorithms, such as the power method, inverse iteration, or Krylov-subspace methods (Lanczos/Arnoldi). These methods rely mainly on matrix-vector products, so they can exploit the sparsity of the matrix and compute the eigenvalues efficiently.
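A minimal Octave/MATLAB sketch of the power method (my illustration; it assumes a real matrix with a single dominant eigenvalue):

function [lambda, v] = power_method(A, iters)
  % estimate the dominant eigenvalue and eigenvector of A by repeated products
  v = randn(size(A, 1), 1);
  for k = 1:iters
    w = A * v;             % sparse matrix-vector product
    v = w / norm(w);       % normalise to avoid overflow/underflow
  end
  lambda = v' * (A * v);   % Rayleigh quotient estimate of the eigenvalue
end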

Can eigenvalues of a sparse matrix be complex numbers?

Yes, eigenvalues of a sparse matrix can be complex numbers. In fact, a non-symmetric sparse matrix may have both real and complex eigenvalues, depending on its entries, whereas a real symmetric matrix has only real eigenvalues. The presence of complex eigenvalues can provide valuable insight into the matrix's behavior, such as oscillatory modes.

How can we use eigenvalues of a sparse matrix in practical applications?

Eigenvalues of a sparse matrix are used in various practical applications, such as data compression, image processing, and network analysis. They can also be used to solve differential equations and to identify patterns and structures in data sets.
