Fortran eigenvalue decomposition

In summary, the conversation discusses a problem with inverting a real symmetric matrix via eigenvalue decomposition, and the suggestion of an LU decomposition as a better alternative. It also touches on FORTRAN, where to find code for this kind of task, and the NLPQLP code for solving nonlinearly constrained optimization problems.
  • #1
brydustin
I've been trying to invert a real symmetric matrix, and the inverse that I compute via eigenvalue decomposition (using QV^-1Q^T) is not actually the inverse; the stranger thing is that QVQ^T does get back my original matrix. Even more unusual is that the matrix starts off at approximately the identity (in general it won't be), so the analytical eigenvalue-eigenvector decomposition should be I = I*I*I, but instead V ~= I (the eigenvalues) and Q is just a random orthogonal matrix. V is roughly I, but when computing the inverse (which should also be I) it all goes bad. I thought I could switch over to the Cholesky decomposition, because the matrix will always be positive definite (for any iteration), but I really don't want to do anything artificial on the first iteration (i.e., if it's the first iteration, set the inverse equal to the matrix). Plus, I need to compute the determinant of this matrix as well, which I thought was most practically computed as the product of the eigenvalues. I don't think I should have to compromise, but it's not working...
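For reference, here is a minimal sketch of the eigendecomposition route described above, assuming double precision and a linked LAPACK library (DSYEV is the standard LAPACK routine for real symmetric matrices; the example matrix is a placeholder):

```fortran
! Sketch: inverse and determinant of a real symmetric matrix A via its
! eigendecomposition A = Q V Q^T, so A^{-1} = Q V^{-1} Q^T and
! det(A) = product of the eigenvalues.  Assumes LAPACK is linked.
PROGRAM eig_inverse
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 3
  DOUBLE PRECISION :: A(N,N), Q(N,N), AINV(N,N), W(N), WORK(10*N), DET
  INTEGER :: INFO, I

  ! Placeholder matrix: close to the identity, symmetric positive definite
  A = 0.0D0
  DO I = 1, N
     A(I,I) = 1.0D0 + 1.0D-3*I
  END DO

  ! DSYEV overwrites its input with the eigenvectors, so work on a copy
  Q = A
  CALL DSYEV('V', 'U', N, Q, N, W, WORK, SIZE(WORK), INFO)
  IF (INFO /= 0) STOP 'DSYEV failed'

  ! Determinant = product of the eigenvalues
  DET = PRODUCT(W)

  ! A^{-1} = Q * diag(1/w) * Q^T  (breaks down if any eigenvalue is ~0)
  DO I = 1, N
     AINV(:,I) = Q(:,I) / W(I)
  END DO
  AINV = MATMUL(AINV, TRANSPOSE(Q))

  PRINT *, 'det(A)    =', DET
  PRINT *, 'AINV(1,1) =', AINV(1,1)   ! should be ~1/A(1,1) for this example
END PROGRAM eig_inverse
```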
 
  • #2
brydustin said:
Even more unusual is that the matrix starts off at approximately the identity (in general it won't be), so the analytical eigenvalue-eigenvector decomposition should be I = I*I*I, but instead V ~= I (the eigenvalues) and Q is just a random orthogonal matrix.

What you got for Q is not too surprising. Ignoring rounding errors, if you have two identical eigenvalues, the corresponding two eigenvectors will span a 2-dimensional subspace, but they can be any linear combination of two basis vectors for that subspace.

When rounding errors occur, this can also happen when two eigenvalues are very close but (theoretically) not identical. The individual eigenvalues may be very poorly determined, but their orthogonality properties will be well conditioned.

I would check numerically whether your Q matrix really is orthonormal (i.e. Q Q^T = I), though the fact that "QVQ^T gets back my original matrix" suggests that it is.

Also, try multiplying your "wrong" inverse by the original matrix and see what you get. The result might be quite close to I, even though the inverse "looks obviously wrong".
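A few lines of Fortran are enough for both of these checks. A minimal sketch (Q, A and the computed inverse AINV are placeholders, filled with trivial data so the program runs as-is):

```fortran
! Sketch of the numerical checks suggested above: how far Q is from
! orthonormal, and how far A*AINV is from the identity.
PROGRAM check_decomposition
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 3
  DOUBLE PRECISION :: Q(N,N), A(N,N), AINV(N,N), ID(N,N)
  INTEGER :: I

  ID = 0.0D0
  DO I = 1, N
     ID(I,I) = 1.0D0
  END DO
  ! Trivial placeholder data; substitute your own Q, A and AINV
  Q = ID
  A = ID
  AINV = ID

  PRINT *, 'max |Q*Q^T - I|  =', MAXVAL(ABS(MATMUL(Q, TRANSPOSE(Q)) - ID))
  PRINT *, 'max |A*AINV - I| =', MAXVAL(ABS(MATMUL(A, AINV) - ID))
END PROGRAM check_decomposition
```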

The real question here might be "why do you want to calculate an explicit inverse at all?" Doing that is almost always the wrong way to implement a numerical algorithm, even though it's often a neat way to write down the math.
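To illustrate that point, here is a rough sketch of the "solve, don't invert" approach for the symmetric positive definite case described above, assuming LAPACK's DPOSV is available (the matrix and right-hand side are placeholders):

```fortran
! Sketch: instead of forming A^{-1} explicitly, solve A*x = b directly.
! DPOSV uses a Cholesky factorization, which suits the symmetric
! positive definite matrices described in the thread.  LAPACK assumed.
PROGRAM solve_dont_invert
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 3
  DOUBLE PRECISION :: A(N,N), B(N)
  INTEGER :: INFO, I

  A = 0.0D0
  DO I = 1, N
     A(I,I) = 2.0D0          ! simple SPD placeholder matrix
  END DO
  B = 1.0D0                  ! right-hand side

  ! On exit, A holds the Cholesky factor and B holds the solution x
  CALL DPOSV('U', N, 1, A, N, B, N, INFO)
  IF (INFO /= 0) STOP 'DPOSV failed (matrix not positive definite?)'

  PRINT *, 'solution x =', B
END PROGRAM solve_dont_invert
```

DPOSV factors A once and solves in place; the same factorization can be reused for additional right-hand sides via DPOTRS.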
 
  • #3
Computing a matrix inverse and determinant using the eigenvalues is quite a complicated and roundabout way of doing things--unless you happen to have some code handy that you want to use. Otherwise, the usual method is to do an LU-Decomposition. Once that is done, so many other tasks become easy: solving an [A](x) = (b) system, computing the inverse, computing the determinant, etc. And there is a lot of code available for doing these tasks.
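For reference, a rough sketch of that workflow with the LAPACK LU routines (assuming DGETRF/DGETRS are available; the example matrix and right-hand side are placeholders). The determinant falls out of the diagonal of U together with the pivot signs:

```fortran
! Sketch: LU factorization with LAPACK's DGETRF, then the determinant
! from the factors and a linear solve with DGETRS.  LAPACK assumed.
PROGRAM lu_workflow
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 3
  DOUBLE PRECISION :: A(N,N), B(N), DET
  INTEGER :: IPIV(N), INFO, I

  ! Placeholder matrix: tridiagonal with 2 on the diagonal, -1 off it
  A = 0.0D0
  DO I = 1, N
     A(I,I) = 2.0D0
     IF (I > 1) THEN
        A(I,I-1) = -1.0D0
        A(I-1,I) = -1.0D0
     END IF
  END DO
  B = 1.0D0                                ! right-hand side of ones

  CALL DGETRF(N, N, A, N, IPIV, INFO)      ! A is overwritten with L and U
  IF (INFO /= 0) STOP 'DGETRF failed (singular matrix?)'

  ! det(A) = product of U's diagonal, with a sign flip for each row swap
  DET = 1.0D0
  DO I = 1, N
     DET = DET * A(I,I)
     IF (IPIV(I) /= I) DET = -DET
  END DO
  PRINT *, 'det(A) =', DET

  ! Solve A*x = b using the factors already computed
  CALL DGETRS('N', N, 1, A, N, IPIV, B, N, INFO)
  PRINT *, 'solution x =', B
END PROGRAM lu_workflow
```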

Is this program for a school assignment?

Do you have to use FORTRAN?
 
  • #4
DuncanM said:
Computing a matrix inverse and determinant using the eigenvalues is quite a complicated and roundabout way of doing things--unless you happen to have some code handy that you want to use. Otherwise, the usual method is to do an LU-Decomposition. Once that is done, so many other tasks become easy: solving an [A](x) = (b) system, computing the inverse, computing the determinant, etc. And there is a lot of code available for doing these tasks.

Is this program for a school assignment?

Do you have to use FORTRAN?

No, it's not a school assignment; I'm a researcher.
Do I have to use Fortran... yes, unfortunately.
And yeah, I decided that LU was the best route to go. Thanks for the input.
 
  • #5
How is that (the LU-Decomposition) going?

Have you checked out the NETLIB site? There is a lot of good code there--in FORTRAN.

I have some experience with the older routines (LINPACK).
The LINPACK routine for an LU-Decomposition is SGEFA.F.
Source code is posted on the NETLIB site here:

SGEFA.F

There is also an online utility based on this code that you might find useful:

Simultaneous Equation Solver

If you are not solving a system but just want the determinant and inverse, enter a dummy vector for (b). It works pretty well and is a convenient way to get some quick numbers.

If you'd rather use a routine more modern than the LINPACK routine, there are the LAPACK routines:

SGETRF.F for single precision, and

DGETRF.F for double precision.

They are both written in FORTRAN, so they should be easy for you to use in your own program.
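As a rough illustration of calling the double precision pair from your own program (a sketch, assuming LAPACK is linked and using a placeholder matrix):

```fortran
! Sketch: explicit inverse via the LAPACK pair DGETRF (LU) + DGETRI.
PROGRAM getrf_getri_example
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 3
  DOUBLE PRECISION :: A(N,N), WORK(4*N)
  INTEGER :: IPIV(N), INFO, I

  A = 0.0D0
  DO I = 1, N
     A(I,I) = 2.0D0            ! placeholder matrix; use your own data
  END DO

  CALL DGETRF(N, N, A, N, IPIV, INFO)                  ! LU factorization, in place
  IF (INFO /= 0) STOP 'DGETRF failed'
  CALL DGETRI(N, A, N, IPIV, WORK, SIZE(WORK), INFO)   ! inverse, in place
  IF (INFO /= 0) STOP 'DGETRI failed'

  PRINT *, 'A inverse =', A
END PROGRAM getrf_getri_example
```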

Hope this helps.
 
  • #6
DuncanM said:
How is that (the LU-Decomposition) going?

Have you checked out the NETLIB site? There is a lot of good code there--in FORTRAN.

I have some experience with the older routines (LINPACK).
The LINPACK routine for an LU-Decomposition is SGEFA.F.
Source code is posted on the NETLIB site here:

SGEFA.F

There is also an online utility based on this code that you might find useful:

Simultaneous Equation Solver

If you are not solving a system but just want the determinant and inverse, enter a dummy vector for (b). It works pretty well and is a convenient way to get some quick numbers.

If you'd rather use a routine more modern than the LINPACK routine, there are the LAPACK routines:

SGETRF.F for single precision, and

DGETRF.F for double precision.

They are both written in FORTRAN, so they should be easy for you to use in your own program.

Hope this helps.


Yes, thank you very much. I'm actually trying to find the following code right now:
NLPQLP
It's for solving nonlinearly constrained optimization problems (optimization with Lagrange multipliers).
This post came about because I wanted to invert my Lagrangian Hessian, but I don't really understand what to do next. NLPQLP seems to be a nice black-box program that would solve it... unfortunately, I can't find the source code, nor do I know what package it comes in (LAPACK, LINPACK, etc.).
 
  • #7
brydustin said:
Yes, thank you very much. I'm actually trying to find the following code right now:
NLPQLP
It's for solving nonlinearly constrained optimization problems (optimization with Lagrange multipliers).
This post came about because I wanted to invert my Lagrangian Hessian, but I don't really understand what to do next. NLPQLP seems to be a nice black-box program that would solve it... unfortunately, I can't find the source code, nor do I know what package it comes in (LAPACK, LINPACK, etc.).

I don't think it is posted on the NETLIB site.

The only mention of it on the entire NETLIB site is here:

http://www.netlib.org/na-digest-html/09/v09n37.html

Perhaps make a post in the LAPACK Support Forums and see if they can offer some suggestions.

I also notice that Northwestern University has a page on optimization software that might be helpful:

http://optimization.eecs.northwestern.edu/software/

And Wikipedia has a list too:

http://en.wikipedia.org/wiki/List_of_optimization_software

Many of these products offer free versions for academic use.

Hopefully, something among these many packages can help you.
 

FAQ: Fortran eigenvalue decomposition

What is Fortran eigenvalue decomposition?

Eigenvalue decomposition factors a square matrix into its eigenvalues and eigenvectors; in Fortran it is usually carried out with library routines such as those in LAPACK. It is commonly used in scientific and engineering applications for data analysis, for solving differential equations, and for studying the properties of linear systems.

How does Fortran eigenvalue decomposition work?

Practical eigenvalue routines are iterative. For a real symmetric matrix, a typical implementation (for example, LAPACK's DSYEV) first reduces the matrix to tridiagonal form with orthogonal similarity transformations and then applies an iterative scheme such as the QR algorithm until the eigenvalues converge. The resulting eigenvalues and eigenvectors can then be used in further computations.

What are the benefits of using Fortran eigenvalue decomposition?

Implemented through well-tested Fortran libraries, eigenvalue decomposition is a stable and efficient way to analyze the matrices that arise in scientific and engineering work. It scales to large problems and underpins many kinds of data analysis, modeling, and simulation, making it a versatile tool for a wide range of applications.

Are there any limitations to using Fortran eigenvalue decomposition?

One limitation is that eigenvalue decomposition applies only to square matrices (non-square matrices call for the singular value decomposition instead). It is also more work than an LU or Cholesky factorization if all you need is a solution, an inverse, or a determinant. In addition, when eigenvalues are repeated or nearly equal, the individual eigenvectors are not uniquely determined, so computed results can look surprising even when they are numerically sound.

How can Fortran eigenvalue decomposition be implemented in scientific research?

In practice, eigenvalue decomposition is implemented in Fortran by calling libraries such as LAPACK (or the older EISPACK routines) or, less commonly, by writing custom code. Many scientific software packages also provide built-in functions for it. It is widely used in fields such as physics, engineering, and finance for data analysis, modeling, and simulation.
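As a concrete illustration of the library route for the symmetric case, here is a sketch assuming LAPACK's DSYEV; the workspace query with LWORK = -1 is the usual calling idiom, and the example matrix is a placeholder:

```fortran
! Sketch: eigenvalues/eigenvectors of a real symmetric matrix with
! LAPACK's DSYEV, using the standard LWORK = -1 workspace query.
PROGRAM dsyev_example
  IMPLICIT NONE
  INTEGER, PARAMETER :: N = 4
  DOUBLE PRECISION :: A(N,N), W(N), QUERY(1)
  DOUBLE PRECISION, ALLOCATABLE :: WORK(:)
  INTEGER :: INFO, LWORK, I, J

  ! Symmetric placeholder matrix A(i,j) = 1/(i+j-1)
  DO J = 1, N
     DO I = 1, N
        A(I,J) = 1.0D0 / DBLE(I + J - 1)
     END DO
  END DO

  ! Workspace query first, then the actual call
  CALL DSYEV('V', 'U', N, A, N, W, QUERY, -1, INFO)
  LWORK = INT(QUERY(1))
  ALLOCATE(WORK(LWORK))
  CALL DSYEV('V', 'U', N, A, N, W, WORK, LWORK, INFO)
  IF (INFO /= 0) STOP 'DSYEV failed'

  PRINT *, 'eigenvalues:', W   ! the eigenvectors are now the columns of A
END PROGRAM dsyev_example
```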
