Gauss elimination algorithm for matrix inversion

In summary, the conversation discusses the process of inverting a matrix and the concern that a diagonal element may be zero during the elimination. It is stated that swapping rows is a legitimate operation and that the most important thing is reducing the matrix to the identity. Pivoting is also mentioned as a useful technique, and for certain types of problems it is not needed at all. It is also mentioned that a non-invertible matrix can be detected by the appearance of a row containing only zeros; equivalently, after elimination the matrix is not invertible if any of its diagonal entries is zero.
  • #1
TheDestroyer
Hello guys,

I'm writing a C++ function for matrix inversion. I saw many algorithms online, but I'm concerned about the case when a diagonal element equals zero, which is, for some reason, not taken into account by those websites.

When we do the elimination for solving a system of linear equations, it's fine to exchange two rows, since the columns represent the variables for which the equations are solved.

But in my case I'm inverting the matrix in order to multiply it by a column vector whose rows correspond to the columns of the matrix I want to invert (I need this for the Levenberg-Marquardt algorithm). How can I deal with the swaps here? Can I simply undo the exchanges once the inversion is done (e.g., exchange rows 1 and 2 at the start, and at the end exchange them back to restore the original order)? Could someone please explain the theory behind this problem? (I mean: when the matrix is invertible, what kind of conditions will I get on the matrix elements?)

Thank you for any efforts :)
 
  • #2
I'm not sure I see a problem. It sounds like you are using the method of "row-reduce A to the identity matrix while simultaneously applying those row operations to the identity matrix to get [itex]A^{-1}[/itex]". If that is what you are doing, swapping two rows is a perfectly legitimate operation.

To take a very simple example, consider inverting
[tex]\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}[/tex]
which can be done by a single "swap":
[tex]\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}[/tex]

Swap the two rows to get
[tex]\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}[/tex]

And, indeed, this matrix is its own inverse.
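
To make this concrete, here is a rough C++ sketch of that procedure (row-reduce A to the identity while mirroring every operation on a second matrix that starts as the identity), with a row swap whenever the current pivot is zero or very small. The function name invert, the use of std::vector, and the 1e-12 tolerance are my own illustrative choices, not anything prescribed in the thread.

[code]
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Sketch of Gauss-Jordan inversion with row swaps (partial pivoting).
// a is an n x n matrix; returns its inverse, or throws if a looks singular.
Matrix invert(Matrix a)
{
    const std::size_t n = a.size();

    // Start with the identity; every row operation on a is mirrored here.
    Matrix inv(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        inv[i][i] = 1.0;

    for (std::size_t col = 0; col < n; ++col) {
        // Choose the row with the largest entry in this column as the pivot.
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::abs(a[r][col]) > std::abs(a[pivot][col]))
                pivot = r;

        if (std::abs(a[pivot][col]) < 1e-12)   // no usable pivot: singular
            throw std::runtime_error("matrix is singular to working precision");

        std::swap(a[col], a[pivot]);           // the "legitimate" row swap
        std::swap(inv[col], inv[pivot]);

        // Scale the pivot row so the diagonal entry becomes 1.
        const double p = a[col][col];
        for (std::size_t j = 0; j < n; ++j) {
            a[col][j]   /= p;
            inv[col][j] /= p;
        }

        // Eliminate this column from every other row.
        for (std::size_t r = 0; r < n; ++r) {
            if (r == col) continue;
            const double f = a[r][col];
            for (std::size_t j = 0; j < n; ++j) {
                a[r][j]   -= f * a[col][j];
                inv[r][j] -= f * inv[col][j];
            }
        }
    }
    return inv;   // a has been reduced to the identity; inv now holds A^{-1}
}
[/code]

Running this on the 2x2 swap matrix above reproduces the result of the post: the only operation needed is one row swap, and the matrix comes out as its own inverse.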
 
  • #3
Ah, I see! So you're saying that whether I swap or not is totally irrelevant; the most important thing is reaching the identity matrix on the other side, even if I do a million swaps or any other elementary row operations.

I think I get it! Thank you, it was a stupid question anyway, caused by my own confusion :-)
 
  • #4

The buzzword for "swapping rows and/or columns" in computational methods is "pivoting". You should be able to find equation-solving algorithms that use pivoting on the web. Look at the LINPACK or LAPACK libraries, for example.

For some types of problem, the underlying maths guarantees that pivoting is never required (symmetric positive-definite matrices are the classic example), so equation solvers that don't do pivoting are useful in practice.

In a computer algorithm, you don't necessarily have to explicitly swap the rows and columns around. All you need to do is remember the order in which you did the row operations on the matrix. That can often be done with just a list of integers representing the "logical" ordering of the matrix compared with its "physical" ordering in the computer's memory.
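
As a hedged illustration of that bookkeeping idea (a sketch only, not code taken from LINPACK or LAPACK), an elimination routine can keep a permutation vector and address rows through it instead of physically moving them; the check for a zero pivot (a singular matrix) is omitted for brevity:

[code]
#include <cmath>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Forward elimination that records row swaps in an index vector instead of
// moving rows in memory. perm[i] is the physical row that currently plays
// the role of logical row i.
void eliminate_with_index_pivoting(std::vector<std::vector<double>>& a,
                                   std::vector<std::size_t>& perm)
{
    const std::size_t n = a.size();
    perm.resize(n);
    std::iota(perm.begin(), perm.end(), 0);   // initially logical == physical

    for (std::size_t k = 0; k < n; ++k) {
        // Pick the pivot among the logical rows not yet used.
        std::size_t best = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::abs(a[perm[i]][k]) > std::abs(a[perm[best]][k]))
                best = i;
        std::swap(perm[k], perm[best]);       // "swap" only the indices

        // Eliminate column k below the pivot, addressing rows via perm.
        for (std::size_t i = k + 1; i < n; ++i) {
            const double f = a[perm[i]][k] / a[perm[k]][k];
            for (std::size_t j = k; j < n; ++j)
                a[perm[i]][j] -= f * a[perm[k]][j];
        }
    }
}
[/code]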
 
  • #5
Thank you for your idea AlephZero.

How could it be possible that swapping rows isn't necessary? What if one of the diagonal elements equals zero? That simply means we have to swap that row with some other row that makes the diagonal element non-zero, right?

I have a question actually:

How can I detect that the matrix is non-invertible during the Gaussian elimination? What's going to happen during the elementary row operations? Am I going to get a column of zeros, or what exactly?
 
  • #6
TheDestroyer said:
Thank you for your idea AlephZero.

How could it be possible that swapping rows isn't necessary? What if one of the diagonal elements equals zero? That simply means we have to swap that row with some other row that makes the diagonal element non-zero, right?

I have a question actually:

How can I detect that the matrix is non-invertible during the Gaussian elimination? What's going to happen during the elementary row operations? Am I going to get a column of zeros, or what exactly?

If you get a row containing only zeros, the matrix is non-invertible: a zero row shows that the original rows were linearly dependent, and a matrix with linearly dependent rows can never be reduced to the identity matrix.
 
  • #7
After Gaussian elimination you always end up with an upper triangular matrix (or a diagonal one, if you continue the elimination Gauss-Jordan style). If any of the diagonal entries is 0, the matrix is not invertible.
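
In floating-point code that test is normally made against a small tolerance rather than exact zero, since rounding errors rarely produce an exact 0. A minimal sketch (the function name and the 1e-12 threshold are my own illustrative choices):

[code]
#include <cmath>
#include <cstddef>
#include <vector>

// After elimination, decide whether the (upper triangular or diagonal) result
// corresponds to an invertible matrix by inspecting its diagonal pivots.
bool looks_invertible(const std::vector<std::vector<double>>& u,
                      double tol = 1e-12)
{
    for (std::size_t i = 0; i < u.size(); ++i)
        if (std::abs(u[i][i]) < tol)   // a (near-)zero pivot: singular
            return false;
    return true;
}
[/code]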
 
  • #8
Thank you so much :-)
 

Related to Gauss elimination algorithm for matrix inversion

1. What is the Gauss elimination algorithm for matrix inversion?

The Gauss elimination algorithm is a method for solving linear systems of equations and for computing the inverse of a matrix. It involves performing a series of row operations on the matrix until it is transformed into an upper triangular matrix; the inverse can then be obtained column by column using back substitution, or the elimination can be continued until the matrix becomes the identity (Gauss-Jordan elimination).

2. How does the Gauss elimination algorithm work?

The algorithm works by using elementary row operations, such as multiplying a row by a scalar or adding a multiple of one row to another, to transform the original matrix into an upper triangular matrix. This process eliminates variables one at a time, making it easy to solve for the remaining variables and, ultimately, to find the inverse of the matrix.
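
For instance, a single elimination step on a small matrix (my own illustrative example) replaces row 2 with row 2 minus 2 times row 1:

[tex]\begin{bmatrix}2 & 1 \\ 4 & 3\end{bmatrix} \rightarrow \begin{bmatrix}2 & 1 \\ 0 & 1\end{bmatrix}[/tex]

The result is upper triangular with non-zero pivots on the diagonal, so the matrix is invertible and back substitution can proceed.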

3. What are the benefits of using the Gauss elimination algorithm for matrix inversion?

The Gauss elimination algorithm is a widely used method for matrix inversion because it is efficient and can easily be automated. It also provides a step-by-step process that is easy to follow and understand. Gaussian elimination itself also applies to non-square systems of equations, although only square matrices can actually be inverted.

4. Are there any limitations or drawbacks to using the Gauss elimination algorithm?

One limitation of the Gauss elimination algorithm is that it does not work for singular matrices, which are matrices that have no inverse. Additionally, it can be computationally expensive for large matrices: the number of arithmetic operations grows roughly as the cube of the matrix dimension, i.e. O(n³).

5. How is the Gauss elimination algorithm used in real-world applications?

The Gauss elimination algorithm has many real-world applications, particularly in fields such as engineering, physics, and economics. It is commonly used to solve systems of linear equations, which arise in various types of scientific and mathematical models. Additionally, it is used in computer graphics and machine learning for tasks such as image processing and data analysis.
