Matrix Rings - Exercise 1.1.4 (iii) - Berrick and Keating (B&K) - page 12

In summary, we can use the First Isomorphism Theorem for Rings to show that $M_n(R)/M_n(\mathfrak{a}) \cong M_n(R/\mathfrak{a})$: define the ring homomorphism $\phi : M_n(R) \to M_n(R/\mathfrak{a})$ that reduces each entry modulo $\mathfrak{a}$, then check that $\phi$ is surjective and that its kernel is exactly $M_n(\mathfrak{a})$.
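As a concrete sanity check on this strategy (taking $R = \Bbb Z$, $\mathfrak{a} = (4)$ and $n = 2$), here is a minimal Python sketch of the entrywise reduction map; the helper names are illustrative only:

```python
# Entrywise reduction M_2(Z) -> M_2(Z/4Z); an illustration for R = Z, a = (4) only.

def matmul(A, B):
    """Product of two 2x2 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def phi(A, m=4):
    """Reduce every entry of A modulo m, i.e. (a_ij) -> (a_ij + a)."""
    return [[A[i][j] % m for j in range(2)] for i in range(2)]

A = [[3, 5], [-7, 2]]
B = [[1, -2], [4, 6]]

# phi respects addition and multiplication on these examples (ring homomorphism):
assert phi(matadd(A, B)) == phi(matadd(phi(A), phi(B)))
assert phi(matmul(A, B)) == phi(matmul(phi(A), phi(B)))

# Matrices with every entry in (4) = 4Z map to the zero matrix (the kernel):
assert phi([[4, -8], [0, 12]]) == [[0, 0], [0, 0]]
```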
  • #1
Math Amateur
I am reading An Introduction to Rings and Modules With K-Theory in View by A.J. Berrick and M.E. Keating (B&K).

I need help with Exercise 1.1.4 (iii) (Chapter 1: Basics, page 12) concerning matrix rings ... ...

Exercise 1.1.4 (iii) (page 12) reads as follows:

View attachment 2985

I can show that \(\displaystyle M_n (\mathfrak{a})\) is a two-sided ideal of \(\displaystyle M_n (R)\), but am unable to frame a proof that

\(\displaystyle M_n (R) / M_n ( \mathfrak{a} ) \cong M_n ( R/ \mathfrak{a} )\)

Can someone please help me to get started on a proof of

\(\displaystyle M_n (R) / M_n ( \mathfrak{a} ) \cong M_n ( R/ \mathfrak{a} ) \)

Peter
 
  • #2
The way to do these things is to write the first isomorphism theorem in reverse.

You want a mapping:

$M_n(R) \to M_n(R/\mathfrak{a})$ whose kernel is $M_n(\mathfrak{a})$.

Without even thinking about it, the first thing that comes to mind is:

$(a_{ij}) \mapsto (a_{ij} + \mathfrak{a})$.

For example if $R = \Bbb Z$ and $\mathfrak{a} = (4)$ this would be on one matrix:

$\begin{bmatrix}3&5\\-7&2 \end{bmatrix} \mapsto \begin{bmatrix}\overline{3}&\overline{1}\\ \overline{1}&\overline{2} \end{bmatrix}$

How about it, do you think that will work?

*********************************

Something else to think about:

Suppose for the moment we had $\text{Hom}_{\ R}(R^n,R^n)$. Can you think of a mapping:

$\phi:\text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$?

Is this mapping an $R$-module homomorphism? Is it injective? Surjective?

Suppose we have a ring-homomorphism $f:R \to f(R)$. Can you think of any relationship between:

$\text{Hom}_{\ R}(R^n,R^n)$ and $\text{Hom}_{\ f(R)}(f(R)^n,f(R)^n)$?

Might we have a functor from $\mathbf{Ring} \to \mathbf{Mod}_{\ R}$?

What implication might this have for "extension of scalars"?

(In linear algebra there is a canonical form called Jordan normal form for matrices. But a real matrix might have complex eigenvalues, so in order to write down the matrix corresponding to the JNF, we often have to "enlarge the field". Such a procedure is often called "complexification". Engineers do this all the time (perhaps they do not even know its name). Sometimes they just want "the real part" but they have to work with the "imaginary parts" to get their answers. This happens, for example, when stresses are applied to various points of a structure at an angle (resolving into cosines and sines is about "the same thing", sometimes that approach is easier), and a matrix is used to store all the data of the various "test" points simultaneously. It also occurs in "pseudo-3d modelling" (also known as 2.5 d) in applying rotations to the "background plane". The point being, these abstract constructions have practical utility, besides being beautiful in their own right.)
 
  • #3
Deveno said:
The way to do these things is to write the first isomorphism theorem in reverse.

You want a mapping:

$M_n(R) \to M_n(R/\mathfrak{a})$ whose kernel is $M_n( \mathfrak{a} )$.

Without even thinking about it, the first thing that comes to mind is:

$(a_{ij}) \mapsto (a_{ij} + \mathfrak{a})$.

For example if $R = \Bbb Z$ and $\mathfrak{a} = (4)$ this would be on one matrix:

$\begin{bmatrix}3&5\\-7&2 \end{bmatrix} \mapsto \begin{bmatrix}\overline{3}&\overline{1}\\ \overline{1}&\overline{2} \end{bmatrix}$

How about it, do you think that will work?

*********************************

Something else to think about:

Suppose for the moment we had $\text{Hom}_{\ R}(R^n,R^n)$. Can you think of a mapping:

$\phi:\text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$?

Is this mapping an $R$-module homomorphism? Is it injective? Surjective?

Suppose we have a ring-homomorphism $f:R \to f(R)$. Can you think of any relationship between:

$\text{Hom}_{\ R}(R^n,R^n)$ and $\text{Hom}_{\ f(R)}(f(R)^n,f(R)^n)$?

Might we have a functor from $\mathbf{Ring} \to \mathbf{Mod}_{\ R}$?

What implication might this have for "extension of scalars"?

(In linear algebra there is a canonical form called Jordan normal form for matrices. But a real matrix might have complex eigenvalues, so in order to write down the matrix corresponding to the JNF, we often have to "enlarge the field". Such a procedure is often called "complexification". Engineers do this all the time (perhaps they do not even know its name). Sometimes they just want "the real part" but they have to work with the "imaginary parts" to get their answers. This happens, for example, when stresses are applied to various points of a structure at an angle (resolving into cosines and sines is about "the same thing", sometimes that approach is easier), and a matrix is used to store all the data of the various "test" points simultaneously. It also occurs in "pseudo-3d modelling" (also known as 2.5 d) in applying rotations to the "background plane". The point being, these abstract constructions have practical utility, besides being beautiful in their own right.)

Hi Deveno ... thanks for the help ...

You write:

" ... ... Without even thinking about it, the first thing that comes to mind is:

$(a_{ij}) \mapsto (a_{ij} + \mathfrak{a})$. ... ...

... ... How about it, do you think that will work? ... ..."
Well, as far as I can see, that will work, because ...

The First Isomorphism Theorem for Rings states:

Let \(\displaystyle R, S\) be rings and let \(\displaystyle \phi \ : \ R \to S \) be a ring homomorphism.

Then \(\displaystyle ker \ \phi\) is an ideal of \(\displaystyle R\) and

\(\displaystyle R/ker \ \phi \cong \phi(R)\)

Now, in the case that \(\displaystyle \phi\) is surjective (so that \(\displaystyle \phi(R) = S\)) we have

\(\displaystyle R/ ker \ \phi \cong S \)

So ... ...

... in terms of our exercise/problem we have:

\(\displaystyle M_n(R), \ M_n(R/ \mathfrak{a} )\) are rings.

Let \(\displaystyle \phi \ : \ M_n(R) \to M_n(R/ \mathfrak{a} )\) be a ring homomorphism where:

\(\displaystyle \phi ( (a_{ij} ) ) = ( a_{ij} + \mathfrak{a} ) \)

where \(\displaystyle a_{ij} + \mathfrak{a} = a_{ij} \ mod \ \mathfrak{a} = \overline{ a_{ij} }\)
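Here \(\displaystyle \phi\) really is a ring homomorphism, since the coset operations in \(\displaystyle R/ \mathfrak{a}\) are defined via representatives; for products, entrywise:

\(\displaystyle \phi \big( (a_{ij})(b_{ij}) \big) = \Big( \sum_k a_{ik} b_{kj} + \mathfrak{a} \Big) = \Big( \sum_k (a_{ik} + \mathfrak{a})(b_{kj} + \mathfrak{a}) \Big) = \phi \big( (a_{ij}) \big) \ \phi \big( (b_{ij}) \big)\)

and additivity is the same computation without the sum over \(\displaystyle k\).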

\(\displaystyle \phi\) is surjective, since any matrix of cosets \(\displaystyle ( a_{ij} + \mathfrak{a} ) \in M_n(R/ \mathfrak{a} )\) is the image under \(\displaystyle \phi\) of the matrix of representatives \(\displaystyle ( a_{ij} ) \in M_n(R)\) ...

Note that:

\(\displaystyle ker \ \phi = \{ (x_{ij}) \in M_n(R) \ | \ \phi ( (x_{ij})) = 0 \}\)

so

\(\displaystyle ker \ \phi = \{ (x_{ij}) \in M_n(R) \ | \ x_{ij} \in \mathfrak{a} \text{ for all } i, j \} = M_n ( \mathfrak{a} )\)

So we have a surjective ring homomorphism \(\displaystyle \phi \) such that

\(\displaystyle \phi \ : \ M_n(R) \to M_n(R/ \mathfrak{a} )\)

where \(\displaystyle ker \ \phi = M_n ( \mathfrak{a} ) \)

Thus by the First Isomorphism Theorem for Rings we have:

\(\displaystyle M_n(R) / M_n( \mathfrak{a} ) \cong M_n(R/ \mathfrak{a}) \)

Can someone please confirm that the above analysis is OK?

Peter
 
  • #4
Peter said:
Hi Deveno ... thanks for the help ...

You write:

" ... ... Without even thinking about it, the first thing that comes to mind is:

$(a_{ij}) \mapsto (a_{ij} + \mathfrak{a})$. ... ...

... ... How about it, do you think that will work? ... ..."
Well, as far as I can see, that will work, because ...

The First Isomorphism Theorem for Rings states:

Let \(\displaystyle R, S\) be rings and let \(\displaystyle \phi \ : \ R \to S \) be a ring homomorphism.

Then \(\displaystyle ker \ \phi\) is an ideal of \(\displaystyle R\) and

\(\displaystyle R/ker \ \phi \cong \phi(R)\)

Now, in the case that \(\displaystyle \phi\) is surjective (so that \(\displaystyle \phi(R) = S\)) we have

\(\displaystyle R/ ker \ \phi \cong S \)

So ... ...

... in terms of our exercise/problem we have:

\(\displaystyle M_n(R), \ M_n(R/ \mathfrak{a} )\) are rings.

Let \(\displaystyle \phi \ : \ M_n(R) \to M_n(R/ \mathfrak{a} )\) be a ring homomorphism where:

\(\displaystyle \phi ( (a_{ij} ) ) = ( a_{ij} + \mathfrak{a} ) \)

where \(\displaystyle a_{ij} + \mathfrak{a} = a_{ij} \ mod \ \mathfrak{a} = \overline{ a_{ij} }\)

\(\displaystyle \phi\) is surjective, since any matrix of cosets \(\displaystyle ( a_{ij} + \mathfrak{a} ) \in M_n(R/ \mathfrak{a} )\) is the image under \(\displaystyle \phi\) of the matrix of representatives \(\displaystyle ( a_{ij} ) \in M_n(R)\) ...

Note that:

\(\displaystyle ker \ \phi = \{ (x_{ij}) \in M_n(R) \ | \ \phi ( (x_{ij})) = 0 \}\)

so

\(\displaystyle ker \ \phi = \{ (x_{ij}) \in M_n(R) \ | \ x_{ij} \in \mathfrak{a} \text{ for all } i, j \} = M_n ( \mathfrak{a} )\)

So we have a surjective ring homomorphism \(\displaystyle \phi \) such that

\(\displaystyle \phi \ : \ M_n(R) \to M_n(R/ \mathfrak{a} )\)

where \(\displaystyle ker \ \phi = M_n ( \mathfrak{a} ) \)

Thus by the First Isomorphism Theorem for Rings we have:

\(\displaystyle M_n(R) / M_n( \mathfrak{a} ) \cong M_n(R/ \mathfrak{a}) \)

Can someone please confirm that the above analysis is OK?

Peter

Looks good to me.
 
  • #5
Deveno said:
The way to do these things is to write the first isomorphism theorem in reverse.

You want a mapping:

$M_n(R) \to M_n(R/\mathfrak{a})$ whose kernel is $M_n(\mathfrak{a})$.

Without even thinking about it, the first thing that comes to mind is:

$(a_{ij}) \mapsto (a_{ij} + \mathfrak{a})$.

For example if $R = \Bbb Z$ and $\mathfrak{a} = (4)$ this would be on one matrix:

$\begin{bmatrix}3&5\\-7&2 \end{bmatrix} \mapsto \begin{bmatrix}\overline{3}&\overline{1}\\ \overline{1}&\overline{2} \end{bmatrix}$

How about it, do you think that will work?

*********************************

Something else to think about:

Suppose for the moment we had $\text{Hom}_{\ R}(R^n,R^n)$. Can you think of a mapping:

$\phi:\text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$?

Is this mapping an $R$-module homomorphism? Is it injective? Surjective?

Suppose we have a ring-homomorphism $f:R \to f(R)$. Can you think of any relationship between:

$\text{Hom}_{\ R}(R^n,R^n)$ and $\text{Hom}_{\ f(R)}(f(R)^n,f(R)^n)$?

Might we have a functor from $\mathbf{Ring} \to \mathbf{Mod}_{\ R}$?

What implication might this have for "extension of scalars"?

(In linear algebra there is a canonical form called Jordan normal form for matrices. But a real matrix might have complex eigenvalues, so in order to write down the matrix corresponding to the JNF, we often have to "enlarge the field". Such a procedure is often called "complexification". Engineers do this all the time (perhaps they do not even know its name). Sometimes they just want "the real part" but they have to work with the "imaginary parts" to get their answers. This happens, for example, when stresses are applied to various points of a structure at an angle (resolving into cosines and sines is about "the same thing", sometimes that approach is easier), and a matrix is used to store all the data of the various "test" points simultaneously. It also occurs in "pseudo-3d modelling" (also known as 2.5 d) in applying rotations to the "background plane". The point being, these abstract constructions have practical utility, besides being beautiful in their own right.)
Hi Deveno ... I have been reflecting on your second problem ... but have reached a point of confusion/desperation! at which I need your help/guidance ...

You write:

" ... ... Something else to think about:

Suppose for the moment we had $\text{Hom}_{\ R}(R^n,R^n)$. Can you think of a mapping:

$\phi:\text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$?

Is this mapping an $R$-module homomorphism? Is it injective? Surjective? ... ... ... "
My thoughts on this matter ... such as they are ... are as follows ...

We know that \(\displaystyle Hom_R (R^n , R^n ) \) is the set of all \(\displaystyle R\)-module homomorphisms (actually \(\displaystyle R\)-module endomorphisms)

\(\displaystyle \alpha \ : \ R^n \to R^n\)

If we define operations for addition and multiplication appropriately then \(\displaystyle Hom_R (R^n , R^n ) \) can be viewed as a ring and we can look to find a ring homomorphism to the ring of \(\displaystyle n \times n\) matrices \(\displaystyle M_n(R)\) ... or at least that is one thought ...

So define an addition operation, \(\displaystyle \alpha + \beta\) for \(\displaystyle \alpha , \beta \in Hom_R (R^n , R^n )\) as follows:

\(\displaystyle ( \alpha + \beta ) (x) = \alpha (x) + \beta (x)\) for all \(\displaystyle x \in R^n\)

Then \(\displaystyle \alpha , \beta \in Hom_R (R^n , R^n ) \Longrightarrow \alpha + \beta \in Hom_R (R^n , R^n )\)

Further, define a 'multiplication' (composition) as follows:

For \(\displaystyle \alpha , \beta \in Hom_R (R^n , R^n ) \) we have \(\displaystyle \alpha \circ \beta \in Hom_R (R^n , R^n ) \)

With addition and multiplication defined as above we have that \(\displaystyle Hom_R (R^n , R^n )\) is a ring with \(\displaystyle 1\).

NOW ... ... with \(\displaystyle Hom_R (R^n , R^n )\) as a ring and with \(\displaystyle M_n (R)\) as a ring there is at least a possibility of finding a ring homomorphism between them ... ... thinking ... (but of course you were talking module homomorphisms ... but \(\displaystyle M_n (R)\) is a ring ... do we need to think of \(\displaystyle M_n (R)\) as a module over itself ... )
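Just to convince myself that these two operations behave like ring operations, here is a minimal Python sketch with \(\displaystyle R = \Bbb Z\) and \(\displaystyle n = 2\); the function names are my own and purely illustrative ...

```python
# Endomorphisms of Z^2 as plain Python functions; "addition" is pointwise,
# "multiplication" is composition, as defined above.

def add(f, g):
    return lambda x: tuple(a + b for a, b in zip(f(x), g(x)))

def mul(f, g):
    # composition: (f o g)(x) = f(g(x))
    return lambda x: f(g(x))

alpha = lambda x: (x[0] + 3 * x[1], -x[1])   # a Z-linear map Z^2 -> Z^2
beta  = lambda x: (2 * x[0], x[0] + x[1])    # another one
gamma = lambda x: (x[1], x[0])               # and a third

v = (5, -2)

# one of the distributive laws, (alpha + beta) o gamma = (alpha o gamma) + (beta o gamma), checked at v:
assert mul(add(alpha, beta), gamma)(v) == add(mul(alpha, gamma), mul(beta, gamma))(v)

# composition is associative:
assert mul(alpha, mul(beta, gamma))(v) == mul(mul(alpha, beta), gamma)(v)
```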

One thing that is occurring to me is that there is certainly a matrix associated with a linear transformation (vector space homomorphism) between vector spaces ... but how to apply this to rings ... hmmm? ... very conscious that \(\displaystyle M_n (R)\) is a ring ...

We could define an action of \(\displaystyle R\) on the abelian group \(\displaystyle Hom_R (R^n , R^n )\) and construct a module \(\displaystyle Hom_R (R^n , R^n )\) and then start thinking of free modules (with bases) ... and consider \(\displaystyle M_n (R)\) as a module over itself ...

At this point I am somewhat confused ... can you help ...

I am thinking that I need to revise

(1) free modules (and their bases)

(2) linear transformations of vector spaces and how these apply to free modules

Hope you can help ...

Peter
 
  • #6
It is likewise easy to define an $R$-action on $\text{Hom}_{\ R}(R^n,R^n)$, by:

$r\cdot\alpha(x) = \alpha(r\cdot x)$. (Note: I am going to consider these LEFT $R$-modules. There is a parallel development of these as RIGHT $R$-modules. If $R$ is not commutative, these will, in general, yield different modules (the scalar multiples are different). The two are anti-isomorphic (we can consider a right $R$-module as a left $R^{\text{op}}$-module), see note below).

Note that we have, for $x\in R^n$:

$x = (a_1,a_2,\dots,a_n) = a_1(1,0,\dots,0) + a_2(0,1,\dots,0) + \cdots + a_n(0,0,\dots,1)$.

We can write this more compactly as:

$\displaystyle x = \sum_{j=1}^n a_je_j$

Since $\alpha \in \text{Hom}_{\ R}(R^n,R^n)$ is an $R$-module homomorphism:

$\displaystyle \alpha(x) = \sum_{j=1}^n a_j\alpha(e_j)$

so it suffices to specify the images $\alpha(e_j)$, to specify $\alpha$. The recovery of $\alpha$ from the $\alpha(e_j)$ is called "extension by $R$-linearity".
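Explicitly, given any choice of images $y_1,\dots,y_n \in R^n$, the extension is the map

$\displaystyle \alpha\left(\sum_{j=1}^n a_je_j\right) = \sum_{j=1}^n a_jy_j$

and a direct check shows that this is additive, respects the $R$-action, and is the unique $R$-module homomorphism with $\alpha(e_j) = y_j$.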

So suppose $\alpha(e_j) = y_j$, for each $j$.

Since each $y_j \in R^n$ we have:

$y_j = (y_{1j},y_{2j},\dots,y_{nj})$ for some elements $y_{ij} \in R$.

Define $\phi: \text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$ by:

$\phi(\alpha) = (y_{ij})$ where $y_j = \alpha(e_j)$ as above.

Is this an $R$-module homomorphism?

First we check additivity. Let:

$\alpha(e_j) = y_j$
$\beta(e_j) = y_j'$, for each $j$.

Then $(\alpha + \beta)(e_j) = \alpha(e_j) + \beta(e_j)$, so:

$\phi(\alpha + \beta) = (y_{ij} + y_{ij}') = (y_{ij}) + (y_{ij}') = \phi(\alpha) + \phi(\beta)$, and:

$\phi(r\cdot\alpha) = (r(y_{ij})) = r\cdot (y_{ij}) = r\cdot\phi(\alpha)$.

This is patently onto, since if we take, for every pair $i,j$

$\gamma_{ij} \in \text{Hom}_{\ R}(R^n,R^n)$ to be the map that takes $e_j \to e_i$ with:

$\gamma_{ij}(e_k) = 0$ for all $k \neq j$, we have:

$\phi(\gamma_{ij}) = E_{ij}$, and these span (generate) $M_n(R)$.

The only thing left to do is determine $\text{ker }\phi$.

If $\phi(\alpha) = 0$ (the zero matrix), then by definition of $\phi$ we have:

$\alpha(e_j) = 0$ (the 0-element of $R^n$) for every $j$, so by $R$-linearity, $\alpha$ is the 0-map.

*****************************

Your observation that both $\text{Hom}_{\ R}(R^n,R^n)$ and $M_n(R)$ are rings raises an interesting question: is $\phi$ a ring-homomorphism?

What we need to do is establish whether or not: $\phi(\alpha \circ \beta) = (\phi(\alpha))(\phi(\beta))$.

Suppose that $\beta(e_j) = (y_{1j}',\dots,y_{nj}')$.

If $\displaystyle \alpha(x) = \alpha((a_1,\dots,a_n)) = \sum_{i=1}^n a_i\alpha(e_i)$ then:

$(\alpha \circ \beta)(e_j) = \alpha(\beta(e_j)) = \alpha((y_{1j}',\dots,y_{nj}'))$

$\displaystyle = \sum_{i=1}^n y_{ij}'\alpha(e_i) = \sum_{i=1}^n y_{ij}'\left(\sum_{k=1}^n y_{ki}e_k\right)$

$\displaystyle = \sum_{i=1}^n\left(\sum_{k=1}^n y_{kj}'y_{ik}\right)e_i$

So the $i$-th coordinate of $(\alpha \circ \beta)(e_j)$ is $\displaystyle \sum_{k=1}^n y_{kj}'y_{ik}$.

On the other hand we have that the $i$-th entry in the $j$-th column of $(\phi(\alpha))(\phi(\beta))$ is:

$\displaystyle \sum_{k=1}^n y_{ik}y_{kj}'$, and unless $R$ is commutative, these are not the same.

I suspect this is why your text has developed this material in terms of RIGHT-actions, so that we do get a ring-isomorphism, when composition is performed right-to-left, and identified with left-to-right matrix multiplication.
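To see the order reversal concretely in the simplest case $n = 1$: a left $R$-module map $\alpha: R \to R$ satisfies $\alpha(x) = \alpha(x\cdot 1) = x\,\alpha(1)$, i.e. it is right multiplication by $a = \alpha(1)$, and then $(\alpha \circ \beta)(1) = \beta(1)\alpha(1) = ba$. Here is a minimal Python sketch of this with the noncommutative ring $R = M_2(\Bbb Z)$; the helper names are mine and purely illustrative:

```python
# n = 1 illustration: for R = M_2(Z), a left R-module endomorphism of R is
# right multiplication x -> x*a, and composing two of them reverses the order.

def matmul(x, y):
    """Product of two 2x2 integer matrices given as nested lists."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

a = [[1, 1], [0, 1]]
b = [[1, 0], [1, 1]]

alpha = lambda x: matmul(x, a)   # alpha(x) = x a
beta  = lambda x: matmul(x, b)   # beta(x)  = x b

one = [[1, 0], [0, 1]]           # the identity of R

# (alpha o beta)(1) = (1 b) a = b a, not a b:
assert alpha(beta(one)) == matmul(b, a)
assert alpha(beta(one)) != matmul(a, b)   # a and b do not commute
```

So with left modules and this column convention, the correspondence sends composition to multiplication in the opposite order, i.e. $\text{Hom}_{\ R}(R^n,R^n) \cong M_n(R^{\text{op}})$ as rings, which is exactly the mismatch that working with right actions repairs.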
 
  • #7
Deveno said:
It is likewise easy to define an $R$-action on $\text{Hom}_{\ R}(R^n,R^n)$, by:

$r\cdot\alpha(x) = \alpha(r\cdot x)$. (Note: I am going to consider these LEFT $R$-modules. There is a parallel development of these as RIGHT $R$-modules. If $R$ is not commutative, these will, in general, yield different modules (the scalar multiples are different). The two are anti-isomorphic (we can consider a right $R$-module as a left $R^{\text{op}}$-module), see note below).

Note that we have, for $x\in R^n$:

$x = (a_1,a_2,\dots,a_n) = a_1(1,0,\dots,0) + a_2(0,1,\dots,0) + \cdots + a_n(0,0,\dots,1)$.

We can write this more compactly as:

$\displaystyle x = \sum_{j=1}^n a_je_j$

Since $\alpha \in \text{Hom}_{\ R}(R^n,R^n)$ is an $R$-module homomorphism:

$\displaystyle \alpha(x) = \sum_{j=1}^n a_j\alpha(e_j)$

so it suffices to specify the images $\alpha(e_j)$, to specify $\alpha$. The recovery of $\alpha$ from the $\alpha(e_j)$ is called "extension by $R$-linearity".

So suppose $\alpha(e_j) = y_j$, for each $j$.

Since each $y_j \in R^n$ we have:

$y_j = (y_{1j},y_{2j},\dots,y_{nj})$ for some elements $y_{ij} \in R$.

Define $\phi: \text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$ by:

$\phi(\alpha) = (y_{ij})$ where $y_j = \alpha(e_j)$ as above.

Is this an $R$-module homomorphism?

First we check additivity. Let:

$\alpha(e_j) = y_j$
$\beta(e_j) = y_j'$, for each $j$.

Then $(\alpha + \beta)(e_j) = \alpha(e_j) + \beta(e_j)$, so:

$\phi(\alpha + \beta) = (y_{ij} + y_{ij}') = (y_{ij}) + (y_{ij}') = \phi(\alpha) + \phi(\beta)$, and:

$\phi(r\cdot\alpha) = (r(y_{ij})) = r\cdot (y_{ij}) = r\cdot\phi(\alpha)$.

This is patently onto, since if we take, for every pair $i,j$

$\gamma_{ij} \in \text{Hom}_{\ R}(R^n,R^n)$ to be the map that takes $e_j \to e_i$ with:

$\gamma_{ij}(e_k) = 0$ for all $k \neq j$, we have:

$\phi(\gamma_{ij}) = E_{ij}$, and these span (generate) $M_n(R)$.

The only thing left to do is determine $\text{ker }\phi$.

If $\phi(\alpha) = 0$ (the zero matrix), then by definition of $\phi$ we have:

$\alpha(e_j) = 0$ (the 0-element of $R^n$) for every $j$, so by $R$-linearity, $\alpha$ is the 0-map.

*****************************

Your observation that both $\text{Hom}_{\ R}(R^n,R^n)$ and $M_n(R)$ are rings raises an interesting question: is $\phi$ a ring-homomorphism?

What we need to do is establish whether or not: $\phi(\alpha \circ \beta) = (\phi(\alpha))(\phi(\beta))$.

Suppose that $\beta(e_j) = (y_{1j}',\dots,y_{nj}')$.

If $\displaystyle \alpha(x) = \alpha((a_1,\dots,a_n)) = \sum_{i=1}^n a_i\alpha(e_i)$ then:

$(\alpha \circ \beta)(e_j) = \alpha(\beta(e_j)) = \alpha((y_{1j}',\dots,y_{nj}'))$

$\displaystyle = \sum_{i=1}^n y_{ij}'\alpha(e_i) = \sum_{i=1}^n y_{ij}'\left(\sum_{k=1}^n y_{ki}e_k\right)$

$\displaystyle = \sum_{i=1}^n\left(\sum_{k=1}^n y_{kj}'y_{ik}\right)e_i$

So the $i$-th coordinate of $(\alpha \circ \beta)(e_j)$ is $\displaystyle \sum_{k=1}^n y_{kj}'y_{ik}$.

On the other hand we have that the $i$-th entry in the $j$-th column of $(\phi(\alpha))(\phi(\beta))$ is:

$\displaystyle \sum_{k=1}^n y_{ik}y_{kj}'$, and unless $R$ is commutative, these are not the same.

I suspect this is why your text has developed this material in terms of RIGHT-actions, so that we do get a ring-isomorphism, when composition is performed right-to-left, and identified with left-to-right matrix multiplication.
Thanks for a most interesting and informative post Deveno ... ...

I am now working through this post carefully and in detail ...

Thanks again,

Peter
 
  • #8
Deveno said:
It is likewise easy to define an $R$-action on $\text{Hom}_{\ R}(R^n,R^n)$, by:

$r\cdot\alpha(x) = \alpha(r\cdot x)$. (Note: I am going to consider these LEFT $R$-modules. There is a parallel development of these as RIGHT $R$-modules. If $R$ is not commutative, these will, in general, yield different modules (the scalar multiples are different). The two are anti-isomorphic (we can consider a right $R$-module as a left $R^{\text{op}}$-module), see note below).

Note that we have, for $x\in R^n$:

$x = (a_1,a_2,\dots,a_n) = a_1(1,0,\dots,0) + a_2(0,1,\dots,0) + \cdots + a_n(0,0,\dots,1)$.

We can write this more compactly as:

$\displaystyle x = \sum_{j=1}^n a_je_j$

Since $\alpha \in \text{Hom}_{\ R}(R^n,R^n)$ is an $R$-module homomorphism:

$\displaystyle \alpha(x) = \sum_{j=1}^n a_j\alpha(e_j)$

so it suffices to specify the images $\alpha(e_j)$, to specify $\alpha$. The recovery of $\alpha$ from the $\alpha(e_j)$ is called "extension by $R$-linearity".

So suppose $\alpha(e_j) = y_j$, for each $j$.

Since each $y_j \in R^n$ we have:

$y_j = (y_{1j},y_{2j},\dots,y_{nj})$ for some elements $y_{ij} \in R$.

Define $\phi: \text{Hom}_{\ R}(R^n,R^n) \to M_n(R)$ by:

$\phi(\alpha) = (y_{ij})$ where $y_j = \alpha(e_j)$ as above.

Is this an $R$-module homomorphism?

First we check additivity. Let:

$\alpha(e_j) = y_j$
$\beta(e_j) = y_j'$, for each $j$.

Then $(\alpha + \beta)(e_j) = \alpha(e_j) + \beta(e_j)$, so:

$\phi(\alpha + \beta) = (y_{ij} + y_{ij}') = (y_{ij}) + (y_{ij}') = \phi(\alpha) + \phi(\beta)$, and:

$\phi(r\cdot\alpha) = (r(y_{ij})) = r\cdot (y_{ij}) = r\cdot\phi(\alpha)$.

This is patently onto, since if we take, for every pair $i,j$

$\gamma_{ij} \in \text{Hom}_{\ R}(R^n,R^n)$ to be the map that takes $e_j \to e_i$ with:

$\gamma_{ij}(e_k) = 0$ for all $k \neq j$, we have:

$\phi(\gamma_{ij}) = E_{ij}$, and these span (generate) $M_n(R)$.

The only thing left to do is determine $\text{ker }\phi$.

If $\phi(\alpha) = 0$ (the zero matrix), then by definition of $\phi$ we have:

$\alpha(e_j) = 0$ (the 0-element of $R^n$) for every $j$, so by $R$-linearity, $\alpha$ is the 0-map.

*****************************

Your observation that both $\text{Hom}_{\ R}(R^n,R^n)$ and $M_n(R)$ are rings raises an interesting question: is $\phi$ a ring-homomorphism?

What we need to do is establish whether or not: $\phi(\alpha \circ \beta) = (\phi(\alpha))(\phi(\beta))$.

Suppose that $\beta(e_j) = (y_{1j}',\dots,y_{nj}')$.

If $\displaystyle \alpha(x) = \alpha((a_1,\dots,a_n)) = \sum_{i=1}^n a_i\alpha(e_i)$ then:

$(\alpha \circ \beta)(e_j) = \alpha(\beta(e_j)) = \alpha((y_{1j}',\dots,y_{nj}'))$

$\displaystyle = \sum_{i=1}^n y_{ij}'\alpha(e_i) = \sum_{i=1}^n y_{ij}'\left(\sum_{k=1}^n y_{ki}e_k\right)$

$\displaystyle = \sum_{i=1}^n\left(\sum_{k=1}^n y_{kj}'y_{ik}\right)e_i$

So the $i$-th coordinate of $(\alpha \circ \beta)(e_j)$ is $\displaystyle \sum_{k=1}^n y_{kj}'y_{ik}$.

On the other hand we have that the $i$-th entry in the $j$-th column of $(\phi(\alpha))(\phi(\beta))$ is:

$\displaystyle \sum_{k=1}^n y_{ik}y_{kj}'$, and unless $R$ is commutative, these are not the same.

I suspect this is why your text has developed this material in terms of RIGHT-actions, so that we do get a ring-isomorphism, when composition is performed right-to-left, and identified with left-to-right matrix multiplication.
Hi Deveno,

Just a clarification regarding the above post ...

You write:

" ... ... First we check additivity. Let:

$\alpha(e_j) = y_j$
$\beta(e_j) = y_j'$, for each $j$.

Then $(\alpha + \beta)(e_j) = \alpha(e_j) + \beta(e_j)$, so:

$\phi(\alpha + \beta) = (y_{ij} + y_{ij}') = (y_{ij}) + (y_{ij}') = \phi(\alpha) + \phi(\beta)$, and:

$\phi(r\cdot\alpha) = (r(y_{ij})) = r\cdot (y_{ij}) = r\cdot\phi(\alpha)$. ... ... etc "


I am unsure how you justify the step:

\(\displaystyle \phi(\alpha + \beta) = (y_{ij} + y_{ij}') \)

Can you explain why this follows?

Further, I am having similar troubles seeing exactly why this step follows:

\(\displaystyle \phi(r\cdot\alpha) = (r(y_{ij}))\)

Can you please explain the justification for this step?

Would appreciate the help ...

Peter
 
  • #9
Peter said:
Hi Deveno,

Just a clarification regarding the above post ...

You write:

" ... ... First we check additivity. Let:

$\alpha(e_j) = y_j$
$\beta(e_j) = y_j'$, for each $j$.

Then $(\alpha + \beta)(e_j) = \alpha(e_j) + \beta(e_j)$, so:

$\phi(\alpha + \beta) = (y_{ij} + y_{ij}') = (y_{ij}) + (y_{ij}') = \phi(\alpha) + \phi(\beta)$, and:

$\phi(r\cdot\alpha) = (r(y_{ij})) = r\cdot (y_{ij}) = r\cdot\phi(\alpha)$. ... ... etc "


I am unsure how you justify the step:

\(\displaystyle \phi(\alpha + \beta) = (y_{ij} + y_{ij}') \)

Can you explain why this follows?

Further, I am having similar troubles seeing exactly why this step follows:

\(\displaystyle \phi(r\cdot\alpha) = (r(y_{ij}))\)

Can you please explain the justification for this step?

Would appreciate the help ...

Peter

Let's be a bit more specific on how $\alpha$ works. We want to turn $\alpha$ into a matrix. Since:

$\displaystyle \alpha(x) = \alpha\left(\sum_{j=1}^n a_je_j\right) = \sum_{j=1}^na_j\alpha(e_j)$

we are just going to specify the $\alpha(e_j)$.

Let's look at a particular example, to see how this actually works, for $n = 3, R = \Bbb Z$

Suppose $\alpha(a_1,a_2,a_3) = (a_1+2a_2-a_3,a_3,-4a_1+2a_2)$

This is clearly a $\Bbb Z$-linear map $\Bbb Z^3 \to \Bbb Z^3$.

We have:

$\alpha(e_1) = \alpha((1,0,0)) = (1,0,-4)$ <--this is $j = 1$
$\alpha(e_2) = \alpha((0,1,0)) = (2,0,2)$ <--this is $j = 2$
$\alpha(e_3) = \alpha((0,0,1)) = (-1,1,0)$ <--this is $j = 3$.

So for $y_j = \alpha(e_j) = (y_{1j},y_{2j},y_{3j})$ the matrix $(y_{ij})$ is:

$\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix}$

Note that:

$\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}a_1+2a_2-a_3\\a_3\\-4a_1+2a_2 \end{bmatrix}$

Similarly, if:

$\beta(a_1,a_2,a_3) = (a_1+a_2,2a_2-a_3,4a_2)$

we obtain the matrix:

$\begin{bmatrix}1&1&0\\0&2&-1\\0&4&0 \end{bmatrix}$

Now, by definition:

$(\alpha + \beta)(a_1,a_2,a_3) = \alpha(a_1,a_2,a_3) + \beta(a_1,a_2,a_3) = (a_1+2a_2-a_3,a_3,-4a_1+2a_2) + (a_1+a_2,2a_2-a_3,4a_2) = (2a_1+3a_2-a_3,2a_2,-4a_1+6a_2)$

From this we obtain the matrix:

$\begin{bmatrix}2&3&-1\\0&2&0\\-4&6&0\end{bmatrix}$

Note that to obtain the VALUE of $(\alpha + \beta)(e_j)$ we just add the values of $\alpha(e_j)$ and $\beta(e_j)$, so that the $i,j$-th entry is $y_{ij} + y'_{ij}$.

For example:

$\alpha(e_1) = (1,0,-4)$ so for $i = 2$, we have $y_{21} = 0$.
$\beta(e_1) = (1,0,0)$ so for $i = 2$, we have $y_{21}' = 0$.

We would expect, then, that the 2nd coordinate of $(\alpha + \beta)(e_1)$ will be $0 = 0+0$, and indeed:

$(\alpha + \beta)(e_1) = (1,0,-4) + (1,0,0) = (2,0,-4)$ <--the second coordinate is 0.

Now, it is a simple matter to verify that:

$\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix} + \begin{bmatrix}1&1&0\\0&2&-1\\0&4&0 \end{bmatrix} = \begin{bmatrix}2&3&-1\\0&2&0\\-4&6&0\end{bmatrix}$

The matrix on the right is $\phi(\alpha+\beta)$, the two matrices on the left are $\phi(\alpha) + \phi(\beta)$.

Similarly, by definition, we have:

$(r\cdot \alpha)(a_1,a_2,a_3) = r\alpha((a_1,a_2,a_3))$

$= r(a_1+2a_2-a_3,a_3,-4a_1+2a_2) = (ra_1+2ra_2-ra_3,ra_3,-4ra_1+2ra_2)$, and we see that:

$(r\cdot\alpha)((1,0,0)) = (r,0,-4r)$
$(r\cdot\alpha)((0,1,0)) = (2r,0,2r)$<--I put the $r$'s on the right since $\Bbb Z$ is commutative
$(r\cdot\alpha)((0,0,1)) = (-r,r,0)$

and we obtain the matrix:

$\begin{bmatrix}r&2r&-r\\0&0&r\\-4r&2r&0\end{bmatrix} = r\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix}$

which is just $\phi(r\cdot\alpha) = r\cdot(\phi(\alpha))$
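If you want to check all of this mechanically, here is a minimal Python sketch (the helper names are mine) that builds each matrix column-by-column from the values on $e_1, e_2, e_3$ and verifies $\phi(\alpha + \beta) = \phi(\alpha) + \phi(\beta)$ and $\phi(r\cdot\alpha) = r\cdot\phi(\alpha)$ for a sample scalar $r = 5$:

```python
# Verify the 3x3 example over Z: build phi(f) from the columns f(e_j), then
# check that phi turns addition and scalar multiplication of maps into the
# corresponding matrix operations.

def alpha(a):
    a1, a2, a3 = a
    return (a1 + 2*a2 - a3, a3, -4*a1 + 2*a2)

def beta(a):
    a1, a2, a3 = a
    return (a1 + a2, 2*a2 - a3, 4*a2)

E = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]      # the standard basis e_1, e_2, e_3

def phi(f):
    """Matrix of f: the j-th column is f(e_j), so the (i, j) entry is f(e_j)[i]."""
    cols = [f(e) for e in E]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def add(f, g):
    return lambda a: tuple(x + y for x, y in zip(f(a), g(a)))

def smul(r, f):
    return lambda a: tuple(r * x for x in f(a))

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def matsmul(r, A):
    return [[r * A[i][j] for j in range(3)] for i in range(3)]

assert phi(alpha) == [[1, 2, -1], [0, 0, 1], [-4, 2, 0]]
assert phi(beta)  == [[1, 1, 0], [0, 2, -1], [0, 4, 0]]
assert phi(add(alpha, beta)) == matadd(phi(alpha), phi(beta))
assert phi(smul(5, alpha))   == matsmul(5, phi(alpha))
```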
 
  • #10
Deveno said:
Let's be a bit more specific on how $\alpha$ works. We want to turn $\alpha$ into a matrix. Since:

$\displaystyle \alpha(x) = \alpha\left(\sum_{j=1}^n a_je_j\right) = \sum_{j=1}^na_j\alpha(e_j)$

we are just going to specify the $\alpha(e_j)$.

Let's look at a particular example, to see how this actually works, for $n = 3, R = \Bbb Z$

Suppose $\alpha(a_1,a_2,a_3) = (a_1+2a_2-a_3,a_3,-4a_1+2a_2)$

This is clearly a $\Bbb Z$-linear map $\Bbb Z^3 \to \Bbb Z^3$.

We have:

$\alpha(e_1) = \alpha((1,0,0)) = (1,0,-4)$ <--this is $j = 1$
$\alpha(e_2) = \alpha((0,1,0)) = (2,0,2)$ <--this is $j = 2$
$\alpha(e_3) = \alpha((0,0,1)) = (-1,1,0)$ <--this is $j = 3$.

So for $y_j = \alpha(e_j) = (y_{1j},y_{2j},y_{3j})$ the matrix $(y_{ij})$ is:

$\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix}$

Note that:

$\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix} = \begin{bmatrix}a_1+2a_2-a_3\\a_3\\-4a_1+2a_2 \end{bmatrix}$

Similarly, if:

$\beta(a_1,a_2,a_3) = (a_1+a_2,2a_2-a_3,4a_2)$

we obtain the matrix:

$\begin{bmatrix}1&1&0\\0&2&-1\\0&4&0 \end{bmatrix}$

Now, by definition:

$(\alpha + \beta)(a_1,a_2,a_3) = \alpha(a_1,a_2,a_3) + \beta(a_1,a_2,a_3) = (a_1+2a_2-a_3,a_3,-4a_1+2a_2) + (a_1+a_2,2a_2-a_3,4a_2) = (2a_1+3a_2-a_3,2a_2,-4a_1+6a_2)$

From this we obtain the matrix:

$\begin{bmatrix}2&3&-1\\0&2&0\\-4&6&0\end{bmatrix}$

Note that to obtain the VALUE of $(\alpha + \beta)(e_j)$ we just add the values of $\alpha(e_j)$ and $\beta(e_j)$, so that the $i,j$-th entry is $y_{ij} + y'_{ij}$.

For example:

$\alpha(e_1) = (1,0,-4)$ so for $i = 2$, we have $y_{21} = 0$.
$\beta(e_1) = (1,0,0)$ so for $i = 2$, we have $y_{21}' = 0$.

We would expect, then, that the 2nd coordinate of $(\alpha + \beta)(e_1)$ will be $0 = 0+0$, and indeed:

$(\alpha + \beta)(e_1) = (1,0,-4) + (1,0,0) = (2,0,-4)$ <--the second coordinate is 0.

Now, it is a simple matter to verify that:

$\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix} + \begin{bmatrix}1&1&0\\0&2&-1\\0&4&0 \end{bmatrix} = \begin{bmatrix}2&3&-1\\0&2&0\\-4&6&0\end{bmatrix}$

The matrix on the right is $\phi(\alpha+\beta)$, the two matrices on the left are $\phi(\alpha) + \phi(\beta)$.

Similarly, by definition, we have:

$(r\cdot \alpha)(a_1,a_2,a_3) = r\alpha((a_1,a_2,a_3))$

$= r(a_1+2a_2-a_3,a_3,-4a_1+2a_2) = (ra_1+2ra_2-ra_3,ra_3,-4ra_1+2ra_2)$, and we see that:

$(r\cdot\alpha)((1,0,0)) = (r,0,-4r)$
$(r\cdot\alpha)((0,1,0)) = (2r,0,2r)$<--I put the $r$'s on the right since $\Bbb Z$ is commutative
$(r\cdot\alpha)((0,0,1)) = (-r,r,0)$

and we obtain the matrix:

$\begin{bmatrix}r&2r&-r\\0&0&r\\-4r&2r&0\end{bmatrix} = r\begin{bmatrix}1&2&-1\\0&0&1\\-4&2&0 \end{bmatrix}$

which is just $\phi(r\cdot\alpha) = r\cdot(\phi(\alpha))$
Thanks for the extensive help ... Appreciate it ...

Just working through your post in detail now ...

Peter
 

FAQ: Matrix Rings - Exercise 1.1.4 (iii) - Berrick and Keating (B&K) - page 12

What is a matrix ring?

A matrix ring is the set of all nxn matrices with entries in a given ring R, together with the usual operations of matrix addition and matrix multiplication. It is denoted by M_n(R), where R is the ring of entries and n is the matrix size.

What is Exercise 1.1.4 (iii) in B&K?

Exercise 1.1.4 (iii) in B&K refers to a specific problem in the textbook "An Introduction to Rings and Modules with K-Theory in View" by A.J. Berrick and M.E. Keating. It asks the reader to show that, for an ideal a of a ring R, the set M_n(a) of nxn matrices with entries in a is a two-sided ideal of M_n(R), and that the quotient M_n(R)/M_n(a) is isomorphic to M_n(R/a).

What is the significance of Exercise 1.1.4 (iii) in B&K?

Exercise 1.1.4 (iii) in B&K is significant because it shows that forming matrix rings is compatible with passing to quotient rings: factoring M_n(R) by the ideal M_n(a) gives, up to isomorphism, the same ring as forming matrices over the quotient ring R/a. This interplay between ideals of R and ideals of M_n(R) is used throughout ring theory and in K-theory.

What are some applications of matrix rings?

Matrix rings have numerous applications in mathematics, physics, engineering, and computer science. They are used to solve systems of linear equations, represent transformations in geometry and physics, and in the development of algorithms for data processing and compression.

How can one solve Exercise 1.1.4 (iii) in B&K?

To solve Exercise 1.1.4 (iii) in B&K, define the map from M_n(R) to M_n(R/a) that reduces each matrix entry modulo the ideal a. Show that this map is a surjective ring homomorphism whose kernel is M_n(a), and then apply the First Isomorphism Theorem for rings to conclude that M_n(R)/M_n(a) is isomorphic to M_n(R/a).
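In symbols, the map and the conclusion are:

$\phi\big((a_{ij})\big) = (a_{ij} + \mathfrak{a}), \qquad \ker \phi = M_n(\mathfrak{a}), \qquad M_n(R)/M_n(\mathfrak{a}) \cong M_n(R/\mathfrak{a}).$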
