Understanding Matrix Units for Linear Transformations in M2(Re)

In summary: the matrix units $E_{11}, E_{12}, E_{21}, E_{22}$ are the $2\times 2$ matrices with a 1 in the indicated entry and 0 everywhere else. They form the standard basis of $M_2(\Bbb R)$ and correspond to the standard basis vectors of $\Bbb R^4$ under the vectorization map $M_2(\Bbb R) \to \Bbb R^4$.
  • #1
Sudharaka
Hi everyone, :)

We are given the following question.

Find the matrix of the linear transformation \(f:M_{2}(\Re )\rightarrow M_{2}(\Re )\) given by \(f(X)=X\begin{pmatrix}a&b\\c&d\\ \end{pmatrix}\), with respect to the basis of matrix units \(E_{11},\,E_{12},\,E_{21},\,E_{22}\).

I don't expect a full answer to this question, but I don't have any clue as to what a matrix unit is. Do any of you know what a matrix unit is?
 
  • #2
Sudharaka said:
Hi everyone, :)

We are given the following question.
I don't expect a full answer to this question, but I don't have any clue as to what a matrix unit is. Do any of you know what a matrix unit is?
This is what they mean! Notice that the subscript 11 means row 1, column 1:
$E_{11}=\begin{pmatrix}1&0\\0&0\end{pmatrix},\quad E_{12}=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad E_{21}=\begin{pmatrix}0&0\\1&0\end{pmatrix},\quad E_{22}=\begin{pmatrix}0&0\\0&1\end{pmatrix}$
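
If it helps to see them computationally, here is a minimal NumPy sketch (assuming NumPy is available) that builds the four matrix units:

Code:
import numpy as np

def matrix_unit(i, j, n=2):
    # E_ij: a 1 in row i, column j (1-indexed), 0 everywhere else
    E = np.zeros((n, n))
    E[i - 1, j - 1] = 1.0
    return E

E11, E12, E21, E22 = (matrix_unit(i, j) for i in (1, 2) for j in (1, 2))
print(E11)  # [[1. 0.]
            #  [0. 0.]]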
Regards,
\(\displaystyle |\pi\rangle\)
 
  • #3
Petrus said:
This is what they mean! Notice that the subscript 11 means row 1, column 1:
$E_{11}=\begin{pmatrix}1&0\\0&0\end{pmatrix},\quad E_{12}=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad E_{21}=\begin{pmatrix}0&0\\1&0\end{pmatrix},\quad E_{22}=\begin{pmatrix}0&0\\0&1\end{pmatrix}$
Regards,
\(\displaystyle |\pi\rangle\)

Hi Petrus, :)

Great, thanks very much. My guess was correct then. But could you please let me know where you found this information?
 
  • #4
Sudharaka said:
Hi Petrus, :)

Great, thanks very much. My guess was correct then. But could you please let me know where you found this information?

Okay, I found the verification I needed from Planetmath. Thanks again Petrus. :)
 
  • #5
Those matrices are the standard basis for $M_2(\Bbb R)$ given by the inverse of the vectorization function: $M_2(\Bbb R) \to \Bbb R^4$, which sends:

$E_{11} \to e_1$
$E_{12} \to e_2$
$E_{21} \to e_3$
$E_{22} \to e_4$.
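
Concretely, the map and its inverse take only a few lines; here is a minimal NumPy sketch (assuming NumPy, and reading the entries row by row to match the ordering above):

Code:
import numpy as np

def vec(M):
    # M_2(R) -> R^4: read the entries of M row by row
    return M.reshape(4)

def unvec(v):
    # R^4 -> M_2(R): the inverse of vec
    return v.reshape(2, 2)

e1 = np.array([1.0, 0.0, 0.0, 0.0])
print(unvec(e1))        # E_11 = [[1. 0.], [0. 0.]]
print(vec(np.eye(2)))   # I = E_11 + E_22  ->  [1. 0. 0. 1.]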

The reason this works so well is that the diagonal matrices of the form:

$aI = \begin{bmatrix}a&0\\0&a \end{bmatrix},\ a \in \Bbb R$

form a field isomorphic to the real numbers, and these matrices commute with all 2x2 matrices, so we have an extension ring with a field in its center, which naturally forms an algebra (a ring that is also a vector space).
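
A quick numerical illustration of that commuting property (again just a sketch, assuming NumPy):

Code:
import numpy as np

a = 3.0
aI = a * np.eye(2)                       # a scalar multiple of the identity
M = np.array([[1.0, 2.0], [3.0, 4.0]])   # an arbitrary 2x2 matrix

# aI lies in the center of the ring: (aI)M = M(aI) for every M
print(np.allclose(aI @ M, M @ aI))       # True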

(Probably more than you wanted to know :p)
 
  • #6
Deveno said:
Those matrices are the standard basis for $M_2(\Bbb R)$ given by the inverse of the vectorization function: $M_2(\Bbb R) \to \Bbb R^4$, which sends:

$E_{11} \to e_1$
$E_{12} \to e_2$
$E_{21} \to e_3$
$E_{22} \to e_4$.

The reason this works so well is that the diagonal matrices of the form:

$aI = \begin{bmatrix}a&0\\0&a \end{bmatrix},\ a \in \Bbb R$

form a field isomorphic to the real numbers, and these matrices commute with all 2x2 matrices, so we have an extension ring with a field in its center, which naturally forms an algebra (a ring that is also a vector space).

(Probably more than you wanted to know :p)

Hi Deveno, :)

Thanks very much for the insight. I haven't studied things such as extension rings much, and my knowledge of these things is pretty basic. Let me ask some questions for clarification.

So the given set of diagonal matrices, \(\{aI: \mbox{I is the identity matrix},\,a\in\Re\}\), together with the set of all 2x2 matrices (as the scalar field), is the algebra, isn't it?
 
  • #7
Hi again, :)

And here's my way of solving my original question. Let me know if I am wrong. We plug the basis matrices into the linear transformation to get,

\[f(E_{11})=\begin{pmatrix}a&b\\0&0\end{pmatrix}\]

\[f(E_{12})=\begin{pmatrix}c&d\\0&0\end{pmatrix}\]

\[f(E_{21})=\begin{pmatrix}0&0\\a&b\end{pmatrix}\]

\[f(E_{22})=\begin{pmatrix}0&0\\c&d\end{pmatrix}\]

Then the transformation matrix \(A\) with respect to the basis matrices \(\{E_{11},\, E_{12},\, E_{21},\, E_{22}\}\) will be,

\[A=\begin{pmatrix} a&b&c&d&0&0&0&0\\0&0&0&0&a&b&c&d \end{pmatrix}\]
 
  • #8
Naively, most people think of vectors as "$n$-tuples" of something.

But really, a vector space is composed of 2 things:

1) stuff we can add together (these are the vectors),

2) stuff we can use to "stretch/shrink" the vectors (these are the scalars).

Formally, a vector space $V$ over a field $F$ is an abelian group:

$(V,+)$ together with an operation:

$\cdot :F \times V \to V$ with:

1) $\alpha \cdot (u + v) = \alpha \cdot u + \alpha \cdot v, \forall \alpha \in F, u,v \in V$

(the dot is usually omitted to avoid confusion with the "dot product", I just want to call attention to the fact that there is an operation here).

2) $(\alpha + \beta)\cdot u = \alpha\cdot u + \beta \cdot u, \forall \alpha,\beta \in F, u \in V$

These two conditions tell us the "scalar product" is compatible with the vector addition and the field addition.

3) $\alpha \cdot (\beta \cdot u) = (\alpha\beta)\cdot u, \forall \alpha,\beta \in F, u \in V$

4) $1_F \cdot u = u, \forall u \in V$

These two conditions tell us that the scalar product is a kind of multiplication compatible with the field multiplication.

Now in a (square) matrix ring with entries in a field, the scalar multiples of the identity matrix act just like the underlying field (a ring, by the way, is pretty much like a field but with no division...often (but not always) because it has "more zero-like things"...with matrices these "zero-like things" are called SINGULAR matrices).

In such a matrix ring, we can "keep the scalar multiplication entirely in the ring" by DEFINING the scalar multiplication to be:

$\alpha M = (\alpha I)(M)$

On the LHS, we have a "vector-looking" scalar product, on the RHS, we have a product of 2 matrices (in the ring).
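
A quick check (a sketch, assuming NumPy) that the two sides really agree:

Code:
import numpy as np

alpha = 2.5
M = np.array([[1.0, 2.0], [3.0, 4.0]])

lhs = alpha * M                  # the "vector-looking" scalar product
rhs = (alpha * np.eye(2)) @ M    # a product of two matrices in the ring
print(np.allclose(lhs, rhs))     # True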

There is nothing special about matrix rings in this regard...for example, we have an algebra of polynomials as well:

1) we can add polynomials
2) we can multiply polynomials by a number (field element) <--a scalar multiplication
3) we can multiply polynomials together (the "ring multiplication")

In THIS algebra, the constant polynomials play the role of the embedded field in the center (unlike matrix multiplication, this multiplication is commutative, which makes polynomials "nicer" to work with than matrices).
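
The same three operations in a small sketch, assuming NumPy's polynomial module:

Code:
from numpy.polynomial import Polynomial as P

p = P([1, 2])       # the polynomial 1 + 2x
q = P([0, 0, 3])    # the polynomial 3x^2

print(p + q)        # addition of "vectors":   1 + 2x + 3x^2
print(2.5 * p)      # scalar multiplication:   2.5 + 5x
print(p * q)        # ring multiplication:     3x^2 + 6x^3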

So, to recap:

In an algebra (which is what the set of all 2x2 real matrices is), we have a number of different things going on:

1) We can add, subtract, multiply and (for non-zero elements) divide field elements (the matrix entries)
2) We can add (or subtract) matrices together
3) We can multiply two matrices together
4) We can use a field element to "scale" the matrix

and all these different things work together harmoniously, to create a very satisfying structure that lets us use tools of abstract algebra, arithmetic or geometry as ways of gaining insight.

In your last sentence you seem to have it backwards: the scalar multiples of the identity act as the field, and it is the 2x2 matrices that act as the vectors. This lets us "throw away" our notion of some kind of "hybrid multiplication" (mixing scalars and vectors), and just keep the single matrix multiplication as the one we use. This saves us from having to keep track of "what came from where".
 
  • #9
Sudharaka said:
Hi again, :)

And here's my way of solving my original question. Let me know if I am wrong. We plug the basis matrices into the linear transformation to get,

\[f(E_{11})=\begin{pmatrix}a&b\\0&0\end{pmatrix}\]

\[f(E_{12})=\begin{pmatrix}c&d\\0&0\end{pmatrix}\]

\[f(E_{21})=\begin{pmatrix}0&0\\a&b\end{pmatrix}\]

\[f(E_{22})=\begin{pmatrix}0&0\\c&d\end{pmatrix}\]

Then the transformation matrix \(A\) with respect to the basis matrices \(\{E_{11},\, E_{12},\, E_{21},\, E_{22}\}\) will be,

\[A=\begin{pmatrix} a&b&c&d&0&0&0&0\\0&0&0&0&a&b&c&d \end{pmatrix}\]

I don't usually double-post, but I want to correct some misunderstandings you have.

First, let's look at what the dimension of $M_2(\Bbb R)$ is: since we have a basis with 4 elements, it must have dimension 4. This means that the matrix $A$ should be a 4x4 matrix.

Secondly, let's look explicitly at what $f(E_{11})$ is:

$f(E_{11}) = \begin{bmatrix}1&0\\0&0 \end{bmatrix} \begin{bmatrix}a&b\\c&d \end{bmatrix} = \begin{bmatrix}a&b\\0&0 \end{bmatrix} = aE_{11} + bE_{12}$

So the first column of $A$ should be:

$\begin{bmatrix}a\\b\\0\\0 \end{bmatrix}$

Can you continue?

NOTE: the ORDER of the basis you choose will affect the columns you get. My column is based on the ordered basis: $\{E_{11},E_{12},E_{21},E_{22}\}$ (reading the entries like a book).
 
  • #10
Deveno said:
Naively, most people think of vectors as "$n$-tuples" of something.

But really, a vector space is composed of 2 things:

1) stuff we can add together (these are the vectors),

2) stuff we can use to "stretch/shrink" the vectors (these are the scalars).

Formally, a vector space $V$ over a field $F$ is an abelian group:

$(V,+)$ together with an operation:

$\cdot :F \times V \to V$ with:

1) $\alpha \cdot (u + v) = \alpha \cdot u + \alpha \cdot v, \forall \alpha \in F, u,v \in V$

(the dot is usually omitted to avoid confusion with the "dot product", I just want to call attention to the fact that there is an operation here).

2) $(\alpha + \beta)\cdot u = \alpha\cdot u + \beta \cdot u, \forall \alpha,\beta \in F, u \in V$

These two conditions tell us the "scalar product" is compatible with the vector addition and the field addition.

3) $\alpha \cdot (\beta \cdot u) = (\alpha\beta)\cdot u, \forall \alpha,\beta \in F, u \in V$

4) $1_F \cdot u = u, \forall u \in V$

These two conditions tell us that the scalar product is a kind of multiplication compatible with the field multiplication.

Now in a (square) matrix ring with entries in a field, the scalar multiples of the identity matrix act just like the underlying field (a ring, by the way, is pretty much like a field but with no division...often (but not always) because it has "more zero-like things"...with matrices these "zero-like things" are called SINGULAR matrices).

In such a matrix ring, we can "keep the scalar multiplication entirely in the ring" by DEFINING the scalar multiplication to be:

$\alpha M = (\alpha I)(M)$

On the LHS, we have a "vector-looking" scalar product, on the RHS, we have a product of 2 matrices (in the ring).

There is nothing special about matrix rings in this regard...for example, we have an algebra of polynomials as well:

1) we can add polynomials
2) we can multiply polynomials by a number (field element) <--a scalar multiplication
3) we can multiply polynomials together (the "ring multiplication")

In THIS algebra, the constant polynomials play the role of the embedded field in the center (unlike matrix multiplication, this multiplication is commutative, which makes polynomials "nicer" to work with than matrices).

So, to recap:

In an algebra (which is what the set of all 2x2 real matrices is), we have a number of different things going on:

1) We can add, subtract, multiply and (for non-zero elements) divide field elements (the matrix entries)
2) We can add (or subtract) matrices together
3) We can multiply two matrices together
4) We can use a field element to "scale" the matrix

and all these different things work together harmoniously, to create a very satisfying structure that lets us use tools of abstract algebra, arithmetic or geometry as ways of gaining insight.

In your last sentence you seem to have it backwards: the scalar multiples of the identity act as the field, and it is the 2x2 matrices that act as the vectors. This lets us "throw away" our notion of some kind of "hybrid multiplication" (mixing scalars and vectors), and just keep the single matrix multiplication as the one we use. This saves us from having to keep track of "what came from where".

Thanks very much. This explains everything and clarifies all my doubts. :D

Deveno said:
I don't usually double-post, but I want to correct some misunderstandings you have.

First, let's look at what the dimension of $M_2(\Bbb R)$ is: since we have a basis with 4 elements, it must have dimension 4. This means that the matrix $A$ should be a 4x4 matrix.

Secondly, let's look explicitly at what $f(E_{11})$ is:

$f(E_{11}) = \begin{bmatrix}1&0\\0&0 \end{bmatrix} \begin{bmatrix}a&b\\c&d \end{bmatrix} = \begin{bmatrix}a&b\\0&0 \end{bmatrix} = aE_{11} + bE_{12}$

So the first column of $A$ should be:

$\begin{bmatrix}a\\b\\0\\0 \end{bmatrix}$

Can you continue?

NOTE: the ORDER of the basis you choose will affect the columns you get. My column is based on the ordered basis: $\{E_{11},E_{12},E_{21},E_{22}\}$ (reading the entries like a book).

Yes, I understand now. The other columns of \(A\) can be obtained similarly by multiplying the basis matrices with $\begin{pmatrix}a&b\\c&d \end{pmatrix}$.
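
For anyone who wants to check the final answer by machine, here is a minimal SymPy sketch (assuming SymPy is available) that builds \(A\) column by column from the ordered basis \(\{E_{11},\,E_{12},\,E_{21},\,E_{22}\}\):

Code:
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
B = sp.Matrix([[a, b], [c, d]])

# The matrix units in "book order": E11, E12, E21, E22.
units = [sp.Matrix([[1, 0], [0, 0]]),
         sp.Matrix([[0, 1], [0, 0]]),
         sp.Matrix([[0, 0], [1, 0]]),
         sp.Matrix([[0, 0], [0, 1]])]

def coords(M):
    # coordinates of M in the ordered basis {E11, E12, E21, E22}
    return sp.Matrix([M[0, 0], M[0, 1], M[1, 0], M[1, 1]])

# Column k of A is the coordinate vector of f(E_k) = E_k * B.
A = sp.Matrix.hstack(*[coords(E * B) for E in units])
sp.pprint(A)
# [a  c  0  0]
# [b  d  0  0]
# [0  0  a  c]
# [0  0  b  d]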
 
  • #11
May I ask what \(\displaystyle A^{-1}\) would mean in words? I mean the inverse of that transformation matrix: what transformation will that be? Will it be the transformation the other way around?

Regards,
\(\displaystyle |\pi\rangle\)
 

FAQ: Understanding Matrix Units for Linear Transformations in M2(Re)

What is the basis of matrix units?

The matrix units E11, E12, E21, E22 are the 2x2 matrices with a 1 in the indicated row and column and 0 everywhere else. They form a basis of M2(R): every 2x2 matrix is a unique linear combination of them, which makes it possible to represent linear transformations on matrices by ordinary coordinate matrices.

How is the basis of matrix units determined?

A basis is any set of linearly independent vectors that spans the vector space. The matrix units are the standard choice for M2(R): they are linearly independent, every 2x2 matrix is a linear combination of them, and so they serve as the building blocks for the whole space.

Why is understanding the basis of matrix units important?

Understanding the basis of matrix units is important because, once a basis is fixed, every linear transformation of the matrix space can be written as an ordinary matrix of coordinates and computed with matrix arithmetic. It also provides insight into the structure of the space of matrices as a vector space.

Can the basis of matrix units change?

The matrix units are one fixed, standard choice, but M2(R) has many other bases, so the basis used can change with the problem or coordinate system. Whatever basis is chosen, the number of basis vectors (the dimension, which is 4 for M2(R)) stays the same.

How is the basis of matrix units related to eigenvalues and eigenvectors?

They are related but distinct ideas. When a matrix (or linear transformation) is diagonalizable, its eigenvectors form a basis for the vector space, and each eigenvalue gives the scaling factor along the corresponding eigenvector. In that eigenvector basis the transformation is represented by a diagonal matrix, which greatly simplifies computations.
