How can I convince myself that I can find the inverse of this matrix?

In summary, the conversation discussed whether such a matrix can be converted into an identity matrix using the Gauss-Jordan elimination method. It was concluded that this is possible provided all the entries on the main diagonal are nonzero. Alternatively, it was suggested to show that the matrix has a trivial kernel, meaning that the only solution to ##Ux=0## is the zero vector. This can be proven by showing that one entry of ##x## must be zero, which then forces all the entries to be zero, so the matrix is invertible.
  • #1
Hall
TL;DR Summary
An upper triangular matrix.
If I have an ##n\times n## matrix
$$
U=
\begin{bmatrix}
u_{11} & u_{12} &u_{13} & \cdots u_{1n} \\
0 & u_{22} & u_{23} & \cdots u_{2n} \\
0&0 &u_{33} &\cdots u_{3n}\\
\vdots & \vdots &\vdots & \cdots \vdots \\
0 & 0 & 0 &\cdots u_{nn}
\end{bmatrix}
$$
Now, I don't want to use the fact that its determinant is nonzero and therefore it has an inverse (because that's what I'm trying to prove).

I want to know whether I can convert that matrix into an identity matrix using the Gauss-Jordan elimination method. You see, with integrals we can often prove that an integral exists without actually computing it; here, how can I convince myself that I can convert ##U## into an identity matrix (if I can do that, then it has an inverse) without actually carrying out the Gauss-Jordan elimination?
 
  • #2
How about showing it has a trivial kernel?
 
  • #3
WWGD said:
How about showing it has a trivial kernel?
You mean a column vector which, when multiplied by ##U##, gives the zero vector?
 
  • #4
Hall said:
I want to know whether I can convert that matrix into an identity matrix using the Gauss-Jordan elimination method.
Yes, provided that all the entries on the main diagonal are nonzero. If so, each leading entry in a row could be used to eliminate all of the entries directly above it.
Hall said:
how can I convince myself that I can convert ##U## into an identity matrix (if I can do that, then it has an inverse) without actually carrying out the Gauss-Jordan elimination?
Again assuming that the ##u_{i, i}## terms on the diagonal are nonzero, it's easy to see that the n columns are linearly independent, so the matrix is invertible. Alternatively, you could show that the nullspace consists solely of the 0 vector (following @WWGD's suggestion), which also means that the matrix is invertible.
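
To see that elimination in action, here is a minimal numpy sketch (the 4×4 matrix and its entries are just an illustrative example, not taken from the thread): since ##U## is already in echelon form, only the "Jordan" half of the elimination is needed, namely scaling each pivot row and clearing the entries directly above each pivot.

[code=python]
import numpy as np

# Hypothetical upper triangular matrix with a nonzero main diagonal.
U = np.array([[2.0, 1.0, 3.0, 4.0],
              [0.0, 5.0, 6.0, 7.0],
              [0.0, 0.0, 8.0, 9.0],
              [0.0, 0.0, 0.0, 1.5]])

A = U.copy()
n = A.shape[0]
for i in range(n - 1, -1, -1):
    A[i] = A[i] / A[i, i]             # scale the pivot row; u_ii != 0 by assumption
    for k in range(i):
        A[k] = A[k] - A[k, i] * A[i]  # clear the entries directly above the pivot

print(np.allclose(A, np.eye(n)))      # True: U row-reduces to the identity
[/code]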
 
  • #5
Hall said:
Now, I don't want to use the fact that its determinant is nonzero and therefore it has an inverse (because that's what I'm trying to prove).
You can show the determinant is non-zero directly by induction on ##n##. I.e. the determinant is ##u_{nn}## times the determinant of an ##(n-1) \times (n-1)## matrix of the same form.

And, indeed, show directly that ##\det(U) = u_{11}u_{22} \dots u_{nn}##.
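
One way to write out the step described above is to expand along the last row, whose only nonzero entry is ##u_{nn}## (the sign of that cofactor is ##(-1)^{n+n} = +1##):
$$
\det U_n = u_{nn}\,\det U_{n-1}, \qquad
U_{n-1} =
\begin{bmatrix}
u_{11} & \cdots & u_{1(n-1)}\\
\vdots & \ddots & \vdots\\
0 & \cdots & u_{(n-1)(n-1)}
\end{bmatrix},
$$
with base case ##\det [u_{11}] = u_{11}##. By induction ##\det U = u_{11}u_{22}\cdots u_{nn}##, which is nonzero exactly when every diagonal entry is nonzero.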
 
  • #6
Hall said:
You mean a column vector which, when multiplied by ##U##, gives the zero vector?
Yes, something of that sort. Producing one such example, or showing the only such example is the 0 vector would work.
 
  • #7
$$
U=
\begin{bmatrix}
u_{11} & u_{12} &u_{13} & \cdots u_{1n} \\
0 & u_{22} & u_{23} & \cdots u_{2n} \\
0&0 &u_{33} &\cdots u_{3n}\\
\vdots & \vdots &\vdots & \cdots \vdots \\
0 & 0 & 0 &\cdots u_{nn}
\end{bmatrix}
\times
\begin{bmatrix}
0\\
0\\
0\\
\vdots\\
0\\
\end{bmatrix}
=
\begin{bmatrix}
0\\
0\\
0\\
\vdots\\
0\\
\end{bmatrix}
$$

Yes, but can you please tell me what this shows?
 
  • #8
Hall said:
$$
U=
\begin{bmatrix}
u_{11} & u_{12} &u_{13} & \cdots u_{1n} \\
0 & u_{22} & u_{23} & \cdots u_{2n} \\
0&0 &u_{33} &\cdots u_{3n}\\
\vdots & \vdots &\vdots & \cdots \vdots \\
0 & 0 & 0 &\cdots u_{nn}
\end{bmatrix}
\times
\begin{bmatrix}
0\\
0\\
0\\
\vdots\\
0\\
\end{bmatrix}
=
\begin{bmatrix}
0\\
0\\
0\\
\vdots\\
0\\
\end{bmatrix}
$$

Yes, but can you please tell me what this shows?
I mean, if you can show that's the only element in the kernel. Otherwise, true, it's tautological.
 
  • #9
Hall said:
Yes, but can you please tell me what this shows?
Not a lot! For any matrix ##M## we have ##M\vec 0 = \vec 0##.
 
  • #10
PeroK said:
You can show the determinant is non-zero directly by induction on ##n##. I.e. the determinant is ##u_{nn}## times the determinant of an ##(n-1) \times (n-1)## matrix of the same form.

And, indeed, show directly that ##\det(U) = u_{11}u_{22} \dots u_{nn}##.
I have to prove the statement:
$$
\text{If } \det U \neq 0, \text{ then } U \text{ is invertible.}$$
 
  • #11
Hall said:
I have to prove the statement:
$$
\text{If } \det U \neq 0, \text{ then } U \text{ is invertible.}$$
That doesn't make a lot of sense for a specific type of matrix as it's generally true for all matrices.

I thought you said you were trying to prove that ##U## is invertible. Given, as pointed out above, that all the diagonal entries are non-zero.
 
  • #12
I guess he's using the LU decomposition, which writes a matrix as a lower triangular matrix times an upper triangular matrix.

How about a specific inverse of this matrix? It's easy to show it exists by induction.
 
  • #13
Suppose ##x=(x_1,...,x_n)## satisfies ##Ux=0##.

Can you prove that one of the entries of x is zero? Using that, can you then prove another entry is zero?
 
  • #14
Office_Shredder said:
Suppose ##x=(x_1,...,x_n)## satisfies ##Ux=0##.

Can you prove that one of the entries of x is zero? Using that, can you then prove another entry is zero?
##\det U \neq 0## implies ##u_{ii} \neq 0## for all ##i##.
$$
\begin{bmatrix}
u_{11} & u_{12}& u_{13} &\cdots & u_{1n}\\
0 & u_{22} & u_{23} & \cdots & u_{2n} \\
0 & 0 & u_{33} & \cdots & u_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & u_{nn}\\
\end{bmatrix}
\times
\begin{bmatrix}
x_1\\
x_2\\
x_3\\
\vdots\\
x_n\\
\end{bmatrix}
=
\begin{bmatrix}
0\\
0\\
0\\
\vdots\\
0\\
\end{bmatrix}
$$

##\implies x_n u_{nn} = 0 \implies x_n = 0## (as we stated at the beginning, ##u_{ii} \neq 0##).
Similarly, ##x_{n-1} u_{(n-1)(n-1)} + x_n u_{(n-1)n} = 0##, and since ##x_n = 0## we have ##x_{n-1}=0##. Continuing in the same way, we can show that all ##x_i = 0##. Therefore,
$$
x = (0, 0, 0, \cdots, 0)$$
So we have shown that the kernel of ##U## contains only the zero vector.

P.S. : Is there a difference between
$$
x = [x_1, x_2, \cdots x_n]$$
And
$$
x =
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n\\
\end{bmatrix}
$$
I mean is it only about turning it 90 degrees?
 
  • #15
Hall said:
##\implies x_n u_{nn} = 0 \implies x_n = 0## (as we stated at the beginning, ##u_{ii} \neq 0##).
Similarly, ##x_{n-1} u_{(n-1)(n-1)} + x_n u_{(n-1)n} = 0##, and since ##x_n = 0## we have ##x_{n-1}=0##. Continuing in the same way, we can show that all ##x_i = 0##. Therefore,
$$
x = (0, 0, 0, \cdots, 0)$$
So we have shown that the kernel of ##U## contains only the zero vector.
That's the idea, but this is more hand waving than an actual proof, which would involve mathematical induction.
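One way the induction could be phrased (a sketch only, not necessarily the write-up being asked for): let ##P(k)## be the statement that the last ##k## entries of ##x## vanish, i.e. ##x_n = \cdots = x_{n-k+1} = 0##. Row ##n## of ##Ux = 0## gives ##u_{nn}x_n = 0##, so ##P(1)## holds. If ##P(k)## holds for some ##1 \le k < n##, then row ##n-k## reads
$$
u_{(n-k)(n-k)}\,x_{n-k} + \sum_{j=n-k+1}^{n} u_{(n-k)j}\,x_j = u_{(n-k)(n-k)}\,x_{n-k} = 0,
$$
so ##x_{n-k} = 0## and ##P(k+1)## holds. Hence ##P(n)## holds, i.e. ##x = 0##.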
Hall said:
P.S. : Is there a difference between
$$
x = [x_1, x_2, \cdots x_n]$$
And
$$
x =
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_n\\
\end{bmatrix}
$$
I mean is it only about turning it 90 degrees?
The first is a row vector. The second is a column vector, which is the transpose of the row vector. IOW, the column vector is ##x^T##, using the usual notation.
 
  • #16
I usually just write things out as a list like in my last post because I'm too lazy to figure out the tex to make it a column vector.

There's a deeply mathematical explanation that row vectors are functionals/are elements of the dual space but it's probably not very helpful for you at this stage
 
  • #17
To enlarge on the remark of Maarten Havinga, just try to solve for a matrix V such that V is also upper triangular, and such that VU = the identity matrix. Start finding the rows of V from the bottom up. In the bottom row, all entries are zero except the last, since V is upper triangular, and then the last entry is easy to find so that the last entry on the diagonal in the product matrix is 1. Now move up to the next to last row of V, and see what the next to last entry of that row must be, to make the next to last diagonal entry of the product matrix also 1. Then solve for the last entry in that next to last row of V, to get the zero you want at the end of the next to last row in the product matrix. Continue your way up, one row at a time, to complete the matrix V.

Start with a 2x2, then try a 3x3, until you get the idea. There is nothing at all sophisticated here, just that you can solve ax = b when a is nonzero. Oh yes, and the somewhat sophisticated fact that any left inverse is then also a right inverse.

Of course if you know about row reduction, or Gaussian elimination, the matrix is already in echelon form with the maximal number of pivots, hence is invertible, and the reduced echelon form is clearly the identity matrix. Indeed, performing the same row operations on an augmented matrix that starts with the identity matrix on the left, i.e. on [Id U], will also construct the (left) inverse for you.
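
A minimal numpy sketch of the bottom-up construction described above (the function name, the 3×3 example, and its entries are made up for illustration): each row of ##V## is filled in left to right starting at the diagonal, and the only operation ever needed is division by a nonzero diagonal entry of ##U##.

[code=python]
import numpy as np

def upper_triangular_inverse(U):
    # Build upper triangular V with V @ U = I, row by row from the bottom up.
    n = U.shape[0]
    V = np.zeros_like(U, dtype=float)
    for i in range(n - 1, -1, -1):
        V[i, i] = 1.0 / U[i, i]                       # makes (VU)[i, i] = 1
        for j in range(i + 1, n):
            # choose V[i, j] so that (VU)[i, j] = sum_k V[i, k] U[k, j] = 0
            V[i, j] = -np.dot(V[i, i:j], U[i:j, j]) / U[j, j]
    return V

U = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 6.0]])
V = upper_triangular_inverse(U)
print(np.allclose(V @ U, np.eye(3)))  # True
print(np.allclose(U @ V, np.eye(3)))  # True: the left inverse is also a right inverse
[/code]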
 
  • #18
Another way of thinking about this is that inverting an invertible matrix ##\mathbf{U}## is equivalent to solving a set of simultaneous equations $$
\mathbf{y} = \mathbf{Ux} \implies \mathbf{x} = \mathbf{U}^{-1}\mathbf{y}
$$##\mathbf{U}## is invertible if and only if the simultaneous equations have a unique solution. Solving the simultaneous equations is a lot easier when ##\mathbf{U}## is triangular.
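
To make the back substitution explicit, here is a small sketch (the function name and example values are mine, not from the thread): the last equation fixes ##x_n##, the one above it fixes ##x_{n-1}##, and so on, so the system has exactly one solution whenever every ##u_{ii} \neq 0##; feeding in ##\mathbf{y} = \mathbf{0}## recovers the trivial-kernel argument from earlier in the thread.

[code=python]
import numpy as np

def back_substitute(U, y):
    # Solve U x = y for upper triangular U with nonzero diagonal entries.
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # row i: u_ii * x_i + sum_{j > i} u_ij * x_j = y_i
        x[i] = (y[i] - np.dot(U[i, i + 1:], x[i + 1:])) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 6.0]])
print(back_substitute(U, np.array([1.0, 2.0, 3.0])))  # the unique solution of U x = y
print(back_substitute(U, np.zeros(3)))                # [0. 0. 0.] -- only the zero vector
[/code]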
 
  • #19
DrGreg said:
Another way of thinking about this is that inverting an invertible matrix ##\mathbf{U}## is equivalent to solving a set of simultaneous equations $$
\mathbf{y} = \mathbf{Ux} \implies \mathbf{x} = \mathbf{U}^{-1}\mathbf{y}
$$##\mathbf{U}## is invertible if and only if the simultaneous equations have a unique solution. Solving the simultaneous equations is a lot easier when ##\mathbf{U}## is triangular.
Yes, ##x_n = y_n/u_{nn}##, ##x_{n-1} = (y_{n-1} - y_n u_{(n-1)n})/u_{(n-1)(n-1)}##, and we can go on.

But why do we say that if the system has a unique solution then the coefficient matrix is invertible? Is it an axiom?
 
  • #20
Hall said:
Yes, but can you please tell me what this shows?
Suppose ##Ax=0##. Does it follow that ##x=0##? If so, then ##A## would be injective. A transformation on a finite dimensional space is injective if and only if it is surjective. Thus, ##A## would be invertible.
 
  • #21
nuuskur said:
Suppose ##Ax=0##. Does it follow that ##x=0##? If so, then ##A## would be injective. A transformation on a finite dimensional space is injective if and only if it is surjective. Thus, ##A## would be invertible.
Suppose ##A = \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}##.
I notice that Ax = 0 implies that x = 0 (among other solutions). Does this mean that A is invertible?

Maybe it was just how you worded what I quoted above. I would completely agree if you had said "Does it follow that ##x = 0## is the unique solution?"

One concept that new students of linear algebra have difficulties with is distinguishing between equations that have exactly one solution versus those that have more than one solution. This quandary comes up in discussions about linear independence vs. linear dependence and matrix nullspaces.
 
  • #22
Mark44 said:
Suppose ##A = \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}##.
I notice that Ax = 0 implies that x = 0 (among other solutions). Does this mean that A is invertible?

Maybe it was just how you worded what I quoted above. I would completely agree if you had said "Does it follow that ##x = 0## is the unique solution?"
Fair play. It is true that ##x=0## implies ##Ax=0##. If it is also true that ##Ax=0## implies ##x=0##, then ##x=0## is the unique solution. For the example you gave, the implication is false, because, as you said, there are also nonzero solutions. By default, if a statement is not explicitly quantified, it is to be regarded as universally quantified, i.e.,
[tex]
\forall x\in K^n,\quad Ax = 0\Rightarrow x=0.
[/tex]
 
  • #23
nuuskur said:
For the example you gave, the implication is false
Not necessarily. Let's break it down with a couple of example vectors: <0, 1> and <0, 0> and the matrix I gave.

Case 1: x = <0, 1>T
p: Ax = ##\begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}\begin{bmatrix} 0 \\1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}## -- True
q: x = <0, 0>T -- False
Implication: False

Case 2:
x = <0, 0>T
p: Ax = ##\begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}\begin{bmatrix} 0 \\0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}## -- True
q: x = <0, 0>T -- True
Implication: True
All I'm really saying is that it's important to state explicitly that x = 0 is the unique solution of Ax = 0.
 
  • #24
I'm confused, Mark. The standard "if A then B" statement is only true if every example of A gives B being true. The word "implies" normally has the same meaning. I would 100% agree that the statement "Ax=0 implies x=0" is equivalent to "A is injective".
 
  • #25
Office_Shredder said:
The standard if A then B statement is only true if every example of A gives B being true. The word implies normally has the same meaning.
No, that's not right. An implication ##A \Rightarrow B## is false only if A is true and B is false. All three other combinations of the truth values A and B result in the implication being true.

IOW,
A true, B true, implication is true.
A false, B true, implication is true.
A false, B false, implication is true.
A true, B false, implication is false.
 
  • #26
##Ax=0## only if ##x=0##. If there are nonzero solutions, then the statement is clearly false.
Mark44 said:
No, that's not right. An implication ##A \Rightarrow B## is false only if A is true and B is false.
Well, fair enough. What Office_Shredder meant is that the implication is true only if "##B## is true whenever ##A## is true" (i.e. any of the first three items in the list you gave). In practice we don't care about vacuous truths (because deduction preserves truth value).

It's also reasonable to think of the current problem as ##A(x)\Rightarrow B(x)##, where ##A(x) :=## "##Ax=0##" and ##B(x) :=## "##x=0##". The question is whether it's a tautology.
 
  • #27
Hall said:
Summary: An upper triangular matrix.

If I have an ##n\times n## matrix
$$
U=
\begin{bmatrix}
u_{11} & u_{12} &u_{13} & \cdots u_{1n} \\
0 & u_{22} & u_{23} & \cdots u_{2n} \\
0&0 &u_{33} &\cdots u_{3n}\\
\vdots & \vdots &\vdots & \cdots \vdots \\
0 & 0 & 0 &\cdots u_{nn}
\end{bmatrix}
$$
Now, I don't want to use the fact that its determinant is nonzero and therefore it has an inverse (because that's what I'm trying to prove).

I want to know whether I can convert that matrix into an identity matrix using the Gauss-Jordan elimination method. You see, with integrals we can often prove that an integral exists without actually computing it; here, how can I convince myself that I can convert ##U## into an identity matrix (if I can do that, then it has an inverse) without actually carrying out the Gauss-Jordan elimination?
A (square) matrix that has a nonzero determinant is invertible. You must first calculate the determinant; there is no other way. When you do the "backside" of Gauss-Jordan elimination, you end up with the inverse, but you would run into an unavoidable division by 0 if the determinant were 0.
 
  • #28
nuuskur said:
Fair play. It is true that ##x=0## implies ##Ax=0##. If it is also true that ##Ax=0## implies ##x=0##, then ##x=0## is the unique solution. For the example you gave, the implication is false, because, as you said, there are also nonzero solutions. By default, if a statement is not explicitly quantified, it is to be regarded as universally quantified, i.e.,
[tex]
\forall x\in K^n,\quad Ax = 0\Rightarrow x=0.
[/tex]
What about the eigenproblem equation? A x = 0, but the eigenvectors are solutions of x that are NOT 0; of course, this is possible because the determinant of A must be 0 for this to work.
 
  • #29
Mark44 said:
That's the idea, but this is more hand waving than an actual proof, which would involve mathematical induction.

The first is a row vector. The second is a column vector, which is the transpose of the row vector. IOW, the column vector is ##x^T##, using the usual notation.
I thought it was the other way around.
 
  • #30
Zero is an eigenvalue of ##A## if and only if ##\det A=0##.
 
  • #31
Mark44 said:
The first is a row vector. The second is a column vector, which is the transpose of the row vector. IOW, the column vector is ##x^T##, using the usual notation.
swampwiz said:
I thought it was the other way around.
Not sure what you're considering to be the other way around. My reply to @Hall, which is quoted above, was a response to his asking what is the difference between ##[x_1, x_2, \dots, x_n]## and ##\begin{bmatrix}x_1 \\ x_2 \\ \dots \\ x_n \end{bmatrix}##.
Rows are horizontal and columns (like the columns of a building) are vertical, so the first vector above is a row vector, and the second vector is a column vector.

If your confusion is with the notation ##x^T##, a transpose can be either a row vector or a column vector, depending on how ##x## is originally defined.
 
  • #32
Mark44 said:
Not sure what you're considering to be the other way around. My reply to @Hall, which is quoted above, was a response to his asking what is the difference between ##[x_1, x_2, \dots, x_n]## and ##\begin{bmatrix}x_1 \\ x_2 \\ \dots \\ x_n \end{bmatrix}##.
Rows are horizontal and columns (like the columns of a building) are vertical, so the first vector above is a row vector, and the second vector is a column vector.

If your confusion is with the notation ##x^T##, a transpose can be either a row vector or a column vector, depending on how ##x## is originally defined.
What I was saying was that I thought the nominal form of a vector is as a column, not as a row. Certainly if written as Ax, x is a column vector.
 
  • #33
swampwiz said:
What I was saying was that I thought the nominal form of a vector is as a column, not as a row. Certainly if written as Ax, x is a column vector.
I don't think there is a nominal form of a vector. However, in the context of the expression Ax, with A being a matrix, x would have to be a column vector.
 
  • #34
More generally, given an ##n \times n## matrix, the vector must be ##n \times 1## for the product to be defined. So, yes, a column vector.
 
  • #35
swampwiz said:
What about the eigenproblem equation? A x = 0, but the eigenvectors are solutions of x that are NOT 0; of course, this is possible because the determinant of A must be 0 for this to work.
The eigenproblem equation is ##Ax = \lambda x##, which leads to ##(A - \lambda I)x = 0##, and this has nonzero solutions only when ##\det(A - \lambda I) = 0##. The eigenvectors for ##\lambda## make up the nullspace of ##A - \lambda I##.
 

FAQ: How can I convince myself that I can find the inverse of this matrix?

How can I find the inverse of a matrix?

To find the inverse of a matrix, you can use the Gauss-Jordan elimination method or the adjugate matrix method. In the Gauss-Jordan method, you apply elementary row operations to the matrix augmented with the identity until the original matrix is reduced to the identity; the other half of the augmented matrix is then the inverse. In the adjugate method, the inverse is the adjugate (the transpose of the cofactor matrix) divided by the determinant.

Why is finding the inverse of a matrix important?

Finding the inverse of a matrix is important in many applications, such as solving systems of linear equations and solving optimization problems. It also plays the role of division in matrix algebra: instead of dividing by a matrix, one multiplies by its inverse.

What is the condition for a matrix to have an inverse?

A matrix must be square and have a non-zero determinant in order to have an inverse. If the determinant is equal to zero, the matrix is said to be singular and does not have an inverse.

Can every matrix be inverted?

No, not every matrix can be inverted. As mentioned before, a matrix must be square and have a non-zero determinant in order to have an inverse. If the determinant is equal to zero, the matrix does not have an inverse.

Is there a shortcut or formula for finding the inverse of a matrix?

The adjugate formula gives a general expression for the inverse, but it is impractical for all but small matrices. There are special cases, such as diagonal or triangular matrices, where the inverse can be found more easily. In most cases, the Gauss-Jordan elimination method is the practical way to find the inverse.
