A regular matrix <=> mA isomorphism

In summary, the conversation discusses the equivalence between a matrix ##A## being regular and the associated map ##m_A: X \mapsto AX## being an isomorphism. Both implications are examined: the forward direction by exhibiting the inverse map ##X \mapsto A^{-1}X##, and the converse by reading ##A## off as the matrix of ##m_A## with respect to the standard basis and using the uniqueness of the inverse.
  • #1
member 587159
Hello all

Let ##m_A: \mathbb{K^n} \rightarrow \mathbb{K^n}: X \mapsto AX## and ##A \in M_{m,n}(\mathbb{K})##
(I already proved that this function is linear)

I want to prove that:

##A## is a regular matrix ##\iff m_A## is an isomorphism.

So, here is my approach. Can someone verify whether this is correct?

##\Rightarrow##

##A## is a regular matrix, thus ##A^{-1}## exists.
Then ##m_A## is an isomorphism, because we can find an inverse function ##m_A^{-1}: X \mapsto A^{-1}X##. It's straightforward to see that this function is the inverse (##m_A \circ m_A^{-1} = 1_{\mathbb{K^n}}## and ##m_A^{-1} \circ m_A = 1_{\mathbb{K^n}}##).
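Written out step by step (using only the associativity of matrix multiplication and ##AA^{-1} = A^{-1}A = I_n##), the composition check reads:
$$(m_A \circ m_A^{-1})(X) = A(A^{-1}X) = (AA^{-1})X = I_nX = X, \qquad (m_A^{-1} \circ m_A)(X) = A^{-1}(AX) = (A^{-1}A)X = X.$$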

##\Leftarrow##

##m_A## is an isomorphism, thus ##m_A## has an inverse ##m_A^{-1}: X \mapsto BX##.

When we apply the definition of the inverse function, we deduce that:

##AB = I_n## and ##BA = I_n##. Then it follows, by the definition of the inverse matrix, that ##B = A^{-1}##. Thus ##A^{-1}## exists, because the inverse of a matrix is unique. Therefore, ##A## is a regular matrix.
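Granting for the moment that ##m_A^{-1}## really has the form ##X \mapsto BX## (this assumption is questioned below), the deduction can be made explicit by evaluating the compositions on an arbitrary ##X##:
$$X = (m_A \circ m_A^{-1})(X) = A(BX) = (AB)X \quad\text{for all } X \in \mathbb{K^n}.$$
Taking ##X = E_j## (the ##j##-th standard basis vector) shows that the ##j##-th columns of ##AB## and ##I_n## agree, hence ##AB = I_n##; the same argument applied to ##m_A^{-1} \circ m_A## gives ##BA = I_n##.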

Thanks in advance.
 
  • #2
Math_QED said:
Hello all

Let ##m_A: \mathbb{K^n} \rightarrow \mathbb{K^n}: X \mapsto AX## and ##A \in M_{m,n}(\mathbb{K})##
(I already proved that this function is linear)

I want to prove that:

##A## is a regular matrix ##\iff m_A## is an isomorphism.

So, here is my approach. Can someone verify whether this is correct?

##\Rightarrow##

##A## is a regular matrix, thus ##A^{-1}## exists.
Then ##m_A## is an isomorphism, because we can find an inverse function ##m_A^{-1}: X \mapsto A^{-1}X##. It's straightforward to see that this function is the inverse (##m_A \circ m_A^{-1} = 1_{\mathbb{K^n}}## and ##m_A^{-1} \circ m_A = 1_{\mathbb{K^n}}##).
Correct. Only "##A## regular ##\Longrightarrow \, n=m##" could be added.
##\Leftarrow##

##m_A## is an isomorphism, thus ##m_A## has an inverse ##m_A^{-1}: X \mapsto BX##.
Why can we assume that ##m_A^{-1}## is of the form ##m_B##? And again, why is ##n=m##?
When we apply the definition of the inverse function, we deduce that:

##AB = I_n## and ##BA = I_n##. Then it follows, by the definition of the inverse matrix, that ##B = A^{-1}##. Thus ##A^{-1}## exists, because the inverse of a matrix is unique. Therefore, ##A## is a regular matrix.

Thanks in advance.
 
  • #3
fresh_42 said:
Correct. Only "##A## regular ##\Longrightarrow \, n=m##" could be added.

Why can we assume that ##m_A^{-1}## is of the form ##m_B##? And again, why is ##n=m##?

n = m because ##AX \in \mathbb{K^n}##, by definition of ##m_A##.

Your second question made me realize that I cannot assume that. I would need to do something like:

##(m_A \circ m_A^{-1})(X) = X \iff m_A(m_A^{-1}(X)) = X \iff Am_A^{-1}(X) = X \Rightarrow m_A^{-1}(X) = BX## such that ##AB = I_n##
And analogously for ##(m_A^{-1} \circ m_A)(X)##?
 
  • #4
The basic difficulty here (at least mine) is to keep the parts apart. Since it is almost obviously true, I find it difficult to separate both sides.
Math_QED said:
##n = m## because ##AX \in \mathbb{K^n}##, by definition of ##m_A##.
Isn't this the same as starting with ##A \in \mathbb{M}_{n,n}(\mathbb{K})## in the first place, i.e. simply demanding ##n=m##?
I would have expected two arguments: that regular matrices as well as isomorphisms can only be established between vector spaces of the same dimension. Of course this is true (in the finite dimensional case), but is it already known, or is it part of the proof? I would have started with "Let ##m_A : \mathbb{K}^n \rightarrow \mathbb{K}^m## be given by ##X \mapsto AX## ..." because that is how you defined ##A##! In this case ##n=m## would have to be shown in both directions of the statement.

Math_QED said:
##Am^{-1}_A(X) = X \Rightarrow m^{-1}_A (X) = BX## such that ##AB=I_n##
Yes, this is the crucial part. That there cannot be two different inverse elements from the right and from the left should be shown separately in an auxiliary proposition because it is needed everywhere (IMO).

To do this I would consider ##(Am^{-1}_A - I_n)(X)## and apply the definition of ##0_n## and then that of inverse elements.
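One standard way to justify the matrix form ##m_A^{-1}(X) = BX## (a sketch of a different route than the hint above, assuming the known fact that the inverse of a linear bijection is again linear): take ##B## to be the matrix whose ##j##-th column is ##m_A^{-1}(E_j)##. Then, writing ##X = \sum_j x_j E_j##, linearity gives
$$m_A^{-1}(X) = m_A^{-1}\Big(\sum_{j=1}^n x_j E_j\Big) = \sum_{j=1}^n x_j\, m_A^{-1}(E_j) = BX \quad\text{for all } X \in \mathbb{K^n},$$
and the composition identities then become the matrix equations ##AB = BA = I_n##.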
 
  • #5
fresh_42 said:
The basic difficulty here (at least mine) is to keep the parts apart. Since it is almost obviously true, I find it difficult to separate both sides.

Isn't this the same as starting with ##A \in \mathbb{M}_{n,n}(\mathbb{K})## in the first place, i.e. simply demanding ##n=m##?
I would have expected two arguments: that regular matrices as well as isomorphisms can only be established between vector spaces of the same dimension. Of course this is true (in the finite dimensional case), but is it already known, or is it part of the proof? I would have started with "Let ##m_A : \mathbb{K}^n \rightarrow \mathbb{K}^m## be given by ##X \mapsto AX## ..." because that is how you defined ##A##! In this case ##n=m## would have to be shown in both directions of the statement.

Yes, this is the crucial part. That there cannot be two different inverse elements from the right and from the left should be shown separately in an auxiliary proposition because it is needed everywhere (IMO).

To do this I would consider ##(Am^{-1}_A - I_n)(X)## and apply the definition of ##0_n## and then that of inverse elements.

Hm, I have been thinking about this and I believe I might have found an easier explanation.

The ##\Rightarrow## was clear. For the ##\Leftarrow## part, we know that a matrix is regular if and only if it's the matrix of an isomorphism (after we choose certain bases), and this is equivalent to saying that ##A^{-1}## exists. Well, now consider ##[m_A]_{E,E}##, where ##E## is the standard basis of ##\mathbb{K^n}##. Then we see that ##A = [m_A]_{E,E}##. Thus ##A## is regular, and this is what we wanted to show.

Now, we can verify that ##m_A^{-1}: X \mapsto A^{-1}X## is the inverse by applying the definition, using the fact that ##A^{-1}## exists and that the inverse function is unique.

What do you think about this?
 
  • #6
I still struggle to keep assumption and implication apart. So let me proceed in small steps.
Math_QED said:
For the ##\Leftarrow## part,...
This means: Given an isomorphism ##m_A## defined by ##m_A(X)=AX##.

So if we denote the matrix of ##m_A## according to the basis ##E=\{E_1,\ldots,E_n\}## by ##[m_A]##, we have ##[m_A](E_i)=AE_i## by definition of ##m_A## and thus ##[m_A]=A##.
...we know that a matrix is regular if and only if it's the matrix of an isomorphism (after we choose certain bases), and this is equivalent to saying that ##A^{-1}## exists.
Now, why isn't this exactly what we have to show: ##m_A## isomorphic ##\Longrightarrow A = [m_A]## regular? Somewhere the existence of ##m_A^{-1}## has to be used, and we cannot yet assume that ##[m_A^{-1}]=A^{-1}## or that ##A^{-1}## even exists. We only have ##[m_A^{-1}]##.

(These were my thoughts, and I hope I didn't confuse something, because the longer I think about it, the more confused I get ... The entire exercise sounds like a chicken-and-egg question.)
 
  • #7
Well, let me write out everything more clearly.

Definitions:

1) An ##n \times n## matrix ##A## is regular if it is the matrix of an isomorphism, or equivalently, if ##A^{-1}## exists.
2) ##m_A: \mathbb{K^n} \rightarrow \mathbb{K^n}: X \mapsto AX##

Theorem: ##A## is a regular matrix ##\iff m_A## is an isomorphism.

Proof:

##\Rightarrow## If ##A## is a regular matrix, then ##m_A## is an isomorphism.

If ##A## is regular, then we know, by definition, that ##A^{-1}## exists. I now claim that the inverse is ##m_A^{-1}: X \mapsto A^{-1}X##.

This claim is true because ##m_A \circ m_A^{-1} = 1_{\mathbb{K^n}}## and ##m_A^{-1} \circ m_A = 1_{\mathbb{K^n}}##, and thus this function is the (unique) inverse. Because ##m_A## has an inverse, we know that ##m_A## is bijective and thus an isomorphism (it's clear that ##m_A## is a linear function).

##\Leftarrow## If ##m_A## is an isomorphism, then ##A## is a regular matrix.

##m_A## is an isomorphism, thus every matrix associated with it is regular. We have ##[m_A]_{E,E} = A## when we choose ##E## to be the standard basis of ##\mathbb{K^n}##. Thus, because ##A## is the matrix of an isomorphism, ##A## must be regular (by definition), and this ends the proof.
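Explicitly, with ##E_j## the ##j##-th standard basis vector, the matrix of ##m_A## with respect to ##E## can be computed column by column:
$$[m_A]_{E,E} = \big(\,m_A(E_1)\mid \cdots \mid m_A(E_n)\,\big) = \big(\,AE_1\mid \cdots \mid AE_n\,\big) = A,$$
since ##AE_j## is exactly the ##j##-th column of ##A##.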

QED.
 
  • #8
Math_QED said:
##m_A## is an isomorphism, thus every matrix associated with it is regular.
Why? And why isn't this exactly what has to be shown? That ##m_A## is an isomorphism only guarantees another isomorphism ##\varphi## such that ##\varphi m_A = 1##. If you already know that these belong to regular matrices, why prove anything at all?
 
  • #9
fresh_42 said:
Why? And why isn't this exactly what has to be shown? That ##m_A## is an isomorphism only guarantees another isomorphism ##\varphi## such that ##\varphi m_A = 1##. If you already know that these belong to regular matrices, why prove anything at all?

By definition! And you don't know that ##A## is a matrix associated with ##m_A## until you have found a basis ##B## such that ##[m_A]_{B,B} = A##. But such a basis exists, namely the standard basis.
 
  • #10
Math_QED said:
By definition! And you don't know that ##A## is a matrix associated with ##m_A## until you have found a basis ##B## such that ##[m_A]_{B,B} = A##. But such a basis exists, namely the standard basis.
We know by construction of ##m_A## as ##m_A(X)=AX## that ##[m_A]_{B,B} = A## by applying unit vectors. But how do you get from ##\varphi m_A = 1## to ##\varphi(X) = A^{-1}X##? In this direction we only have ##\varphi## and know nothing about a matrix.
Math_QED said:
##m_A## is an isomorphism, thus every matrix associated with it is regular.
(why?) and
Thus, because ##A## is the matrix of an isomorphism, ##A## must be regular (by definition).
is what I don't see. By definition we only get ##\varphi##, no matrix of ##\varphi##. If we automatically have matrices, then there is nothing left to show, because it's clear then that:
$$m_A \text{ isomorphism with matrix } A \Leftrightarrow m_A^{-1} \text{ isomorphism with matrix } B \Leftrightarrow AB=1 $$
To me the bridge between regular matrices and isomorphisms is what it's all about, i.e. why is ##[\varphi] = A^{-1}##.
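One way to build that bridge (a sketch, assuming the standard fact that the matrix of a composition with respect to a fixed basis is the product of the matrices): passing to matrices with respect to the standard basis ##E## turns ##\varphi \circ m_A = 1_{\mathbb{K^n}}## and ##m_A \circ \varphi = 1_{\mathbb{K^n}}## into
$$[\varphi]_{E,E}\,[m_A]_{E,E} = [\varphi \circ m_A]_{E,E} = I_n \quad\text{and}\quad [m_A]_{E,E}\,[\varphi]_{E,E} = [m_A \circ \varphi]_{E,E} = I_n,$$
and since ##[m_A]_{E,E} = A##, the matrix ##[\varphi]_{E,E}## is a two-sided inverse of ##A##. Hence ##A## is regular and ##[\varphi]_{E,E} = A^{-1}##.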
 
  • #11
fresh_42 said:
We know by construction of ##m_A## as ##m_A(X)=AX## that ##[m_A]_{B,B} = A## by applying unit vectors. But how do you get from ##\varphi m_A = 1## to ##\varphi(X) = A^{-1}X##? In this direction we only have ##\varphi## and know nothing about a matrix.

(why?) and

is what I don't see. By definition we only get ##\varphi##, no matrix of ##\varphi##. If we automatically have matrices, then there is nothing left to show, because it's clear then that:
$$m_A \text{ isomorphism with matrix } A \Leftrightarrow m_A^{-1} \text{ isomorphism with matrix } B \Leftrightarrow AB=1 $$
To me the bridge between regular matrices and isomorphisms is what it's all about, i.e. why is ##[\varphi] = A^{-1}##.

In the ##\Leftarrow## part I never use the existence of ##m_A^{-1}## (check my new proof that I wrote 2 posts ago). We know from the definition that every matrix of an isomorphism is regular. That's the only thing we use. We choose the standard basis to show that the isomorphism ##m_A## can be represented by ##A##, and we know from the definition that, when this is possible, ##A## is regular. And yes, we have shown that it's possible.
 
  • #12
Math_QED said:
In the ##\Leftarrow## part I never use the existence of ##m_A^{-1}## (check my new proof that I wrote 2 posts ago).
But that's all you have. How can you not use it?
We know from the definition that every matrix of an isomorphism is regular.
In this case I fold. If you have such a definition, then all of the above is trivial, since ##m_A(X)=AX## implies that there is already a basis (to express ##A##), and all that is left to show is that there cannot be two different matrices.
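For that remaining uniqueness statement, a one-line argument suffices (using only that ##AE_j## is the ##j##-th column of ##A##): if two matrices represent the same map, evaluate on the standard basis vectors,
$$AX = A'X \ \text{ for all } X \in \mathbb{K^n} \ \Longrightarrow\ AE_j = A'E_j \ \text{ for } j = 1,\ldots,n \ \Longrightarrow\ A = A'.$$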
 

FAQ: A regular matrix <=> mA isomorphism

What is a regular matrix?

A regular matrix is a square matrix that has an inverse: there is a matrix ##A^{-1}## such that ##AA^{-1} = A^{-1}A = I_n##. Equivalently, it is the matrix of an isomorphism.

What is an mA isomorphism?

Here ##m_A## denotes the linear map ##m_A: \mathbb{K^n} \rightarrow \mathbb{K^n}: X \mapsto AX## associated with the matrix ##A##. Saying that ##m_A## is an isomorphism means that this map is linear and bijective, i.e. it has a (linear) inverse map.

How do you determine if a matrix is regular?

To determine whether a square matrix is regular, check whether it has an inverse. A common criterion uses the determinant: if the determinant is not equal to ##0##, then the matrix is regular and has an inverse.
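For instance (a small worked example, not taken from the thread):
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\quad \det A = 1\cdot 4 - 2\cdot 3 = -2 \neq 0,\quad A^{-1} = \frac{1}{-2}\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix},$$
and one checks directly that ##AA^{-1} = A^{-1}A = I_2##, so ##A## is regular.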

What is the significance of a regular matrix <=> mA isomorphism?

The equivalence is a basic bridge in linear algebra between matrices and linear maps: a square matrix ##A## is regular (invertible) exactly when the associated map ##m_A## is an isomorphism of ##\mathbb{K^n}##. Statements about invertible matrices can therefore be translated into statements about isomorphisms and vice versa, for example when passing between a linear map and its matrix with respect to a chosen basis.

Does every regular matrix give rise to an isomorphism ##m_A##?

Yes: by the equivalence above, for every regular matrix ##A## the map ##m_A## is an isomorphism of ##\mathbb{K^n}##. Conversely, a matrix that is not square, or a square matrix without an inverse, cannot define an isomorphism of ##\mathbb{K^n}##.
