Linear Algebras or k-algebras - Cohn p. 53-54 - SIMPLE CLARIFICATION

  • MHB
  • Thread starter: Math Amateur
  • Tags: Linear
In summary: the text explains how the multiplication in a linear algebra is completely determined by the products of the basis elements. This leads to equation (2.1), whose coefficients are called the multiplication constants. Several examples illustrate the concept, and the importance of understanding the $k$-action and its relationship to the ring multiplication is also emphasized. For an algebra like $k[x]$, this is especially useful for computing invariants of $A$ via invariants of $k$, because the $k$-action is the "same" in both cases.
  • #1
Math Amateur
Gold Member
MHB
I am reading "Introduction to Ring Theory" by P. M. Cohn (Springer Undergraduate Mathematics Series)

In Chapter 2: Linear Algebras and Artinian Rings we read the following on pages 53-54:

View attachment 3132
View attachment 3133

In the above text we read:

" … … The multiplication in A is completely determined by the products of the basis elements. Thus we have the equations

\(\displaystyle u_i u_j = \sum_k \gamma_{ijk} u_k\) … … … (2.1)

where the elements \(\displaystyle \gamma_{ijk}\) are called the multiplication constants of the algebra … … "

Can someone please explain how equation (2.1) follows?

Peter
 
  • #2
Hi Peter,

The products $u_i u_j$ are elements of $A$, so they have unique expressions as linear combinations of the basis elements $u_k$. This leads to (2.1).
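To make this concrete: given any basis and any bilinear multiplication, the coefficients in (2.1) are found by solving a linear system. Here is a sketch (assuming numpy; the cross product on $\Bbb R^3$ and the particular basis below are hypothetical choices for illustration, not from the thread):

```python
import numpy as np

# Hypothetical illustration: R^3 with the cross product as the algebra
# multiplication, and an arbitrary (non-standard) basis u_1, u_2, u_3.
# Because {u_1, u_2, u_3} is a basis, every product u_i u_j has a UNIQUE
# expression sum_k gamma_ijk u_k, found by solving a linear system.
U = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # rows are the basis vectors u_1, u_2, u_3

gamma = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        prod = np.cross(U[i], U[j])          # u_i u_j, an element of A
        # solve U^T c = prod for the coordinates c = (gamma_ij1, gamma_ij2, gamma_ij3)
        gamma[i, j] = np.linalg.solve(U.T, prod)

# sanity check: reconstruct each product from the constants
for i in range(3):
    for j in range(3):
        assert np.allclose(gamma[i, j] @ U, np.cross(U[i], U[j]))
```

The `np.linalg.solve` step is exactly the "unique expression as a linear combination" in the argument above.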
 
  • #3
Euge said:
Hi Peter,

The products $u_i u_j$ are elements of $A$, so they have unique expressions as linear combinations of the basis elements $u_k$. This leads to (2.1).

Yes, I understand that, Euge, but I'm still a bit puzzled … how shall I explain ...

well … I keep thinking … … if the algebra has n dimensions … then ...

\(\displaystyle u_i = 0.u_1 + 0.u_2 + \ … \ … \ + 1.u_i + \ … \ … \ + 0.u_n\)

and

\(\displaystyle u_j = 0.u_1 + 0.u_2 + \ … \ … \ + 1.u_j + \ … \ … \ + 0.u_n\)

and so it seems (I think)

\(\displaystyle u_i.u_j = 0\)

? can you clarify ?

Peter

EDIT Maybe my 'multiplication' is wrong?
 
  • #4
Peter said:
Yes, I understand that, Euge, but I'm still a bit puzzled … how shall I explain ...

well … I keep thinking … … if the algebra has n dimensions … then ...

\(\displaystyle u_i = 0.u_1 + 0.u_2 + \ … \ … \ + 1.u_i + \ … \ … \ + 0.u_n\)

and

\(\displaystyle u_j = 0.u_1 + 0.u_2 + \ … \ … \ + 1.u_j + \ … \ … \ + 0.u_n\)

and so it seems (I think)

\(\displaystyle u_i.u_j = 0\)

? can you clarify ?

Peter
EDIT Maybe my 'multiplication' is wrong?
Peter, how do you know that $u_i u_j = 0$ for all $i$ and $j$? The identities you have do not imply this since multiplication in $A$ has not been specified. In fact, what you're implying is that multiplication in $A$ is trivial. I'll give two examples with nontrivial multiplication.

$\textbf{Example 1}$. The polynomial ring $\Bbb R[x]$ is a linear algebra over $\Bbb R$ with basis $\{1, x, x^2,\ldots\}$ and multiplication defined by the usual ring multiplication of polynomials. For each $k \ge 0$, let $u_k = x^k$. Then $u_i u_j = u_{i+j}$ for all $i$ and $j$. So the multiplication constants are given by $\gamma_{ijk} = \delta_k^{i+j}$, where $\delta$ denotes the Kronecker delta.
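This can be checked mechanically (a sketch assuming numpy's polynomial helpers; `monomial` is a hypothetical helper standing in for the basis vector $x^k$):

```python
import numpy as np

# Example 1 as a computation: in R[x] with basis u_k = x^k, multiplying
# x^i by x^j gives x^(i+j), so gamma_ijk = 1 iff k = i + j, else 0.
def monomial(k, size):
    """Coefficient vector of x^k, padded to 'size' coefficients."""
    v = np.zeros(size)
    v[k] = 1.0
    return v

size = 10
for i in range(4):
    for j in range(4):
        prod = np.polynomial.polynomial.polymul(monomial(i, size),
                                                monomial(j, size))
        # the product is exactly the single basis vector u_{i+j}
        assert prod[i + j] == 1.0 and np.count_nonzero(prod) == 1
```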

$\textbf{Example 2}$. Consider $\Bbb R^3$ with multiplication given by the cross product. It is a non-associative linear algebra over $\Bbb R$ with standard basis $\{e_1, e_2, e_3\}$. Since $e_1e_2 = e_3$, $e_2e_3 = e_1$, $e_3e_1 = e_2$, and $e_ie_j = -e_je_i$ for all $i$ and $j$, the multiplication constants are given by

$\gamma_{111} = 0$, $\gamma_{112} = 0$, $\gamma_{113} = 0$,

$\gamma_{121} = 0$, $\gamma_{122} = 0$, $\gamma_{123} = 1$,

$\gamma_{131} = 0$, $\gamma_{132} = -1$, $\gamma_{133} = 0$,

$\gamma_{211} = 0$, $\gamma_{212} = 0$, $\gamma_{213} = -1$,

$\gamma_{221} = 0$, $\gamma_{222} = 0$, $\gamma_{223} = 0$,

$\gamma_{231} = 1$, $\gamma_{232} = 0$, $\gamma_{233} = 0$,

$\gamma_{311} = 0$, $\gamma_{312} = 1$, $\gamma_{313} = 0$,

$\gamma_{321} = -1$, $\gamma_{322} = 0$, $\gamma_{323} = 0$,

$\gamma_{331} = 0$, $\gamma_{332} = 0$, $\gamma_{333} = 0$.

Observe that $\gamma_{ijk}$ equals $1$ when $(i,j,k)$ is a cyclic permutation of $(1,2,3)$ and $-1$ when $(i,j,k)$ is a cyclic permutation of $(1,3,2)$. Also, $\gamma_{ijk} = 0$ whenever $i, j$, and $k$ are not all distinct. So in fact, $\gamma_{ijk} = \epsilon_{ijk}$, the Levi-Civita symbol.
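This identification with the Levi-Civita symbol is easy to confirm numerically (a sketch assuming numpy; `levi_civita` is a hypothetical helper implementing the standard parity formula):

```python
import numpy as np

# Numerical check of Example 2: the structure constants of the
# cross-product algebra on R^3 equal the Levi-Civita symbol.
e = np.eye(3)  # standard basis e_1, e_2, e_3 (0-indexed here)

def levi_civita(i, j, k):
    # epsilon_ijk = +1 / -1 / 0 by the parity of (i, j, k), indices in {0,1,2}
    return (j - i) * (k - i) * (k - j) / 2

for i in range(3):
    for j in range(3):
        prod = np.cross(e[i], e[j])
        for k in range(3):
            # gamma_ijk is the k-th coordinate of e_i x e_j
            assert prod[k] == levi_civita(i, j, k)
```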
 
  • #5
It's somewhat of a trivial example, but it may be helpful to consider the $\Bbb Q$-algebra $\text{Mat}_n(\Bbb Q)$, which has a basis consisting of the elementary matrices $E_{ij} = (a_{km})$ where:

$a_{km} = 1$, if $k = i$ and $m = j$
$a_{km} = 0$, otherwise.

So let's look at what happens when we multiply $E_{ij}E_{i'j'}$:

$E_{ij}E_{i'j'} = E_{ij'}$, if $j = i'$
$E_{ij}E_{i'j'} = 0$, otherwise.

To get a handle on what this really means, let's specify $n = 2$, so our 2x2 matrices can be seen as being "$\Bbb Q^4$ with a multiplication". So our basis is, explicitly:

$u_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix};\ u_2 = \begin{bmatrix}0&1\\0&0\end{bmatrix};\ u_3 = \begin{bmatrix}0&0\\1&0\end{bmatrix};\ u_4 = \begin{bmatrix}0&0\\0&1\end{bmatrix}$

We have:

$u_1u_1 = u_1$, so $\gamma_{111} = 1, \gamma_{11k} = 0, k = 2,3,4$.
$u_1u_2 = u_2$, so $\gamma_{121} = 0, \gamma_{122} = 1, \gamma_{12k} = 0, k = 3,4$
$u_1u_3 = 0$ (all the $\gamma$'s are 0)

and so on (there are 13 more products to compute, and thus 52 more multiplication constants).
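If you would rather not grind out the remaining 13 products by hand, the following sketch (assuming numpy; not part of the original post) computes all 64 constants at once:

```python
import numpy as np

# Completing the Mat_2(Q) computation: u_1..u_4 are the four matrix
# units E_11, E_12, E_21, E_22 (0-indexed below).
u = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.array([[0.0, 1.0], [0.0, 0.0]]),
     np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([[0.0, 0.0], [0.0, 1.0]])]

gamma = np.zeros((4, 4, 4))
for i in range(4):
    for j in range(4):
        prod = u[i] @ u[j]                 # ordinary matrix product
        for k in range(4):
            # the u_k have disjoint supports, so the coordinate of prod
            # along u_k is the entry of prod in u_k's nonzero position
            gamma[i, j, k] = np.sum(prod * u[k])

# examples from the text: u_1 u_1 = u_1, u_1 u_2 = u_2, u_1 u_3 = 0
assert gamma[0, 0, 0] == 1 and gamma[0, 1, 1] == 1 and not gamma[0, 2].any()
# of the 16 products, exactly 8 are nonzero (those with E_ij E_jm)
assert gamma.sum() == 8
```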

Another important $k$-algebra is $k[x]$ (polynomials over $k$) which has as one possible basis:

$u_i = x^i$.

Since $u_iu_j = (x^i)(x^j) = x^{i+j} = u_{i+j}$, we see that:

$\gamma_{ijk} = 1$ when $k = i+j$
$\gamma_{ijk} = 0$, otherwise.

Another often-used example: we have $\Bbb R^2$ as an $\Bbb R$-algebra given the basis:

$\{e_1,e_2\} = \{(1,0),(0,1)\}$ with multiplication constants:

$\gamma_{111} = 1$
$\gamma_{112} = 0$
$\gamma_{121} = 0$
$\gamma_{122} = 1$
$\gamma_{211} = 0$
$\gamma_{212} = 1$
$\gamma_{221} = -1$
$\gamma_{222} = 0$.

Since $\gamma_{ijk} = \gamma_{jik}$, this forms a commutative $\Bbb R$-algebra, more commonly known as $\Bbb C$ (this is a DIVISION ALGEBRA since $U(\Bbb C) = \Bbb C - \{0\}$, that is: all non-zero elements are invertible).
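To see how these eight constants determine the whole multiplication, here is a sketch (assuming numpy; the `multiply` helper is hypothetical, not from the post) that extends them bilinearly:

```python
import numpy as np

# The eight constants listed above, assembled into an array
# (indices shifted to 0-based), reproduce complex multiplication.
gamma = np.zeros((2, 2, 2))
gamma[0, 0] = [1, 0]    # e_1 e_1 = e_1
gamma[0, 1] = [0, 1]    # e_1 e_2 = e_2
gamma[1, 0] = [0, 1]    # e_2 e_1 = e_2
gamma[1, 1] = [-1, 0]   # e_2 e_2 = -e_1

def multiply(x, y):
    # bilinear extension: (sum_i x_i e_i)(sum_j y_j e_j) = sum_k (x_i y_j gamma_ijk) e_k
    return np.einsum('i,j,ijk->k', x, y, gamma)

z = multiply(np.array([3.0, 4.0]), np.array([1.0, 2.0]))
assert np.allclose(z, [-5.0, 10.0])   # (3+4i)(1+2i) = -5 + 10i
```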

Interestingly enough, we actually have this as a sub-algebra of $\text{Mat}_2(\Bbb R)$, with basis:

$\{v_1,v_2\} = \{E_{11} + E_{22},E_{21}-E_{12}\}$.

This is because the basis elements when multiplied give a linear combination of these two basis elements:

$v_1v_1 = v_1$
$v_2v_1 = v_1v_2 = v_2$
$v_2v_2 = -v_1$, as can be readily verified.
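These relations are quick to confirm with explicit matrices (a sketch assuming numpy):

```python
import numpy as np

# v_1 = E_11 + E_22 and v_2 = E_21 - E_12 as concrete 2x2 matrices
v1 = np.array([[1.0, 0.0], [0.0, 1.0]])    # the identity matrix
v2 = np.array([[0.0, -1.0], [1.0, 0.0]])

assert np.array_equal(v1 @ v1, v1)          # v_1 v_1 = v_1
assert np.array_equal(v1 @ v2, v2)          # v_1 v_2 = v_2
assert np.array_equal(v2 @ v1, v2)          # v_2 v_1 = v_2
assert np.array_equal(v2 @ v2, -v1)         # v_2 v_2 = -v_1
```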

***************

Given a basis for $k^n$, we can use it to define an isomorphism of $\text{Mat}_n(k)$ with $\text{Hom}_{k}(k^n,k^n)$, so the study of the (particular) linear algebra $\text{Mat}_n(k)$ is typically the focus of a course called "linear algebra".

It is important not to confuse the $k$-action (scalar multiplication) of a $k$-algebra with the ring multiplication, but typically, if our algebra $A$ is unital, we can often consider it as an extension ring of $k$ via the map:

$\alpha \mapsto \alpha\cdot 1_A$

For example, with $A = \text{Mat}_n(k)$ we have the embedding:

$\alpha \mapsto \alpha I_n$

and with $A = k[x]$ we have the natural embedding of $k$ as constant polynomials

(note that $k \cong k[x]/(x)$ which essentially amounts to "evaluating $p(x)$ at 0", for any polynomial $p$).
 
  • #6
Deveno said:
It's somewhat of a trivial example, but it may be helpful to consider the $\Bbb Q$-algebra $\text{Mat}_n(\Bbb Q)$ … <snip>
Thanks so much for the example Deveno … the examples you post are extremely helpful and informative ...

Working through the details of your post very soon ...

Peter
 
  • #7
Deveno said:
It's somewhat of a trivial example, but it may be helpful to consider the $\Bbb Q$-algebra $\text{Mat}_n(\Bbb Q)$ … <snip>

Hi Deveno … just working through your post … and I need help ...

You write:

" … … Another often-used example: we have $\Bbb R^2$ as an $\Bbb R$-algebra given the basis:

$\{e_1,e_2\} = \{(1,0),(0,1)\}$ with multiplication constants:

$\gamma_{111} = 1$
$\gamma_{112} = 0$
$\gamma_{121} = 0$
$\gamma_{122} = 1$
$\gamma_{211} = 0$
$\gamma_{212} = 1$
$\gamma_{221} = -1$
$\gamma_{222} = 0$. … … "

I am assuming (rather tentatively) that multiplication is of the form

\(\displaystyle (x_1, y_1) \cdot (x_2, y_2) = (x_1x_2, y_1y_2) \) (component-wise)

and that

\(\displaystyle e_i \cdot e_j = \sum_{k = 1}^2 \gamma_{ijk} e_k = \gamma_{ij1} e_1 + \gamma_{ij2} e_2\)

Is that right?

BUT … if it is correct then

\(\displaystyle e_1 \cdot e_1 = (1,0) \cdot (1,0) = (1,0) = (1,0) + (0,0) = 1 \cdot e_1 + 0 \cdot e_2 \)

so

\(\displaystyle \gamma_{111} = 1 \text{ and } \gamma_{112} = 0\)

Similarly

\(\displaystyle e_1 \cdot e_2 = (1,0) \cdot (0,1) = (0,0) = (0,0) + (0,0) = 0 \cdot e_1 + 0 \cdot e_2 \)

so

\(\displaystyle \gamma_{121} = 0 \text{ and } \gamma_{122} = 0\)
BUT … in your analysis we have

\(\displaystyle \gamma_{121} = 0\) and \(\displaystyle \gamma_{122} = 1\)

Can you please explain what is wrong in my analysis above?

Peter
 
  • #8
Peter said:
Hi Deveno … just working through your post … and need help ...You write:

" … … Another often-used example: we have $\Bbb R^2$ as an $\Bbb R$-algebra given the basis:

$\{e_1,e_2\} = \{(1,0),(0,1)\}$ with multiplication constants:

$\gamma_{111} = 1$
$\gamma_{112} = 0$
$\gamma_{121} = 0$
$\gamma_{122} = 1$
$\gamma_{211} = 0$
$\gamma_{212} = 1$
$\gamma_{221} = -1$
$\gamma_{222} = 0$. … … "

I am assuming (rather tentatively) that multiplication is of the form

Why?

\(\displaystyle (x_1, y_1) \cdot (x_2, y_2) = (x_1x_2, y_1y_2) \) (component-wise)

and that

\(\displaystyle e_i \cdot e_j = \sum_{k = 1}^2 \gamma_{ijk} e_k = \gamma_{ij1} e_1 + \gamma_{ij2} e_2\)

Is that right?

BUT … if it is correct then

\(\displaystyle e_1 \cdot e_1 = (1,0) \cdot (1,0) = (1,0) = (1,0) + (0,0) = 1 \cdot e_1 + 0 \cdot e_2 \)

so

\(\displaystyle \gamma_{111} = 1 \text{ and } \gamma_{112} = 0\)

Similarly

\(\displaystyle e_1 \cdot e_2 = (1,0) \cdot (0,1) = (0,0) = (0,0) + (0,0) = 0 \cdot e_1 + 0 \cdot e_2 \)

This is incorrect. You are assuming that:

$(a,b)\cdot(c,d) = (ac,bd)$.

We have:

$(a,b)\cdot(c,d) = (ae_1 + be_2)(ce_1 + de_2) = ac(e_1e_1) + ad(e_1e_2) + bc(e_2e_1) + bd(e_2e_2)$

Now:

$e_1e_1 = \gamma_{111}e_1 + \gamma_{112}e_2$

so to evaluate this, we need to know the multiplicative constants BEFOREHAND. Let's find them by looking at the parent algebra these come from:

$e_1e_1 = (E_{11} + E_{22})(E_{11} + E_{22}) = E_{11}E_{11} + E_{11}E_{22} + E_{22}E_{11} + E_{22}E_{22}$

$= \begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}0&0\\0&1\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}\begin{bmatrix}0&0\\0&1\end{bmatrix}$

$= \begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}$

$= \begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}$

$= E_{11} + E_{22} = e_1 = 1e_1 + 0e_2$, so $\gamma_{111} = 1,\gamma_{112} = 0$.

(Basically, in the product $E_{ij}E_{km}$ if $j = k$ we get $E_{im}$, otherwise we get the 0-matrix).

We have to do the same thing for $e_1e_2$:

$e_1e_2 = \gamma_{121}e_1 + \gamma_{122}e_2$

and working through the matrix multiplication we have:

$e_1e_2 = (E_{11} + E_{22})(E_{21} - E_{12}) = E_{11}E_{21} - E_{11}E_{12} +E_{22}E_{21} - E_{22}E_{12}$

$= 0 - E_{12} + E_{21} - 0 = E_{21} - E_{12} = e_2 = 0e_1 + 1e_2$

so that $\gamma_{121} = 0$ and $\gamma_{122} = 1$.

The other multiplicative constants can be verified the same way. It's worthwhile to do this for yourself, once.

\(\displaystyle \gamma_{121} = 0 \text{ and } \gamma_{122} = 0\)
BUT … in your analysis we have

\(\displaystyle \gamma_{121} = 0\) and \(\displaystyle \gamma_{122} = 1\)

Can you please explain what is wrong in my analysis above?

Peter

This product is NOT "the usual component-wise product" of $\Bbb R \times \Bbb R$.

So what we wind up with is:

$(a,b)(c,d) = (ae_1 + be_2)(ce_1 + de_2) = ac(e_1e_1) + ad(e_1e_2) + bc(e_2e_1) + bd(e_2e_2)$

$= ac(\gamma_{111}e_1 + \gamma_{112}e_2) + ad(\gamma_{121}e_1 + \gamma_{122}e_2) + bc(\gamma_{211}e_1 + \gamma_{212}e_2) + bd(\gamma_{221}e_1 + \gamma_{222}e_2)$

$= (ac\gamma_{111} + ad\gamma_{121} + bc\gamma_{211} + bd\gamma_{221})e_1 + (ac\gamma_{112} + ad\gamma_{122} + bc\gamma_{212} + bd\gamma_{222})e_2$

$= (ac + 0 + 0 - bd)e_1 + (0 + ad + bc + 0)e_2 = (ac - bd,ad + bc)$.
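As a quick numerical cross-check (a sketch, not part of the original post): the formula just derived, $(a,b)(c,d) = (ac - bd, ad + bc)$, is exactly Python's built-in complex multiplication:

```python
# The derived product on R^2, compared against Python's complex numbers.
def algebra_product(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

for (p, q) in [((3.0, 4.0), (1.0, 2.0)), ((0.0, 1.0), (0.0, 1.0))]:
    w = complex(*p) * complex(*q)           # built-in complex multiplication
    assert algebra_product(p, q) == (w.real, w.imag)

# in particular (0,1)(0,1) = (-1,0): the element (0,1) squares to -1
assert algebra_product((0.0, 1.0), (0.0, 1.0)) == (-1.0, 0.0)
```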

Note that if $b = d = 0$, everything goes away except the $ac$ term, that is, it IS true that:

$(a,0)(c,0) = (ac,0)$. This gives us an embedding of $\Bbb R$ as: $a \mapsto (a,0)$, which is a ring-homomorphism (indeed a monomorphism).

Recall that in our "parent algebra" this is:

$ae_1 + 0e_2 = a(E_{11} + E_{22}) = \begin{bmatrix}a&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&a\end{bmatrix} = aI$ so that this is the "same embedding" we had before (such matrices are often called "scalar matrices").

What is more interesting, is if $a = c = 0$, and $b = d = 1$, we have:

$(0,1)(0,1) = (-1,0)$ <--note how the 0 "jumps coordinates". Evidently in this algebra, $(0,1)$ is a square root of $(-1,0)$, which we are identifying with $-1$.

I urge you to think about what kind of mapping of $\Bbb R^2$ we get by sending:

$(x,y)\mapsto (0,1)(x,y) = (-y,x)$. Draw some pictures. It may help to think of this map as the composition of two reflections:

$(x,y) \mapsto (y,x)$ (reflecting about the diagonal line $x = y$) <---this one first.

$(a,b) \mapsto (-a,b)$ (reflecting about the $y$ axis) <---this one last (they don't commute).

Use this to convince yourself "dilation-rotations" are a good name for complex numbers (considered as a subalgebra of the algebra of real 2x2 matrices).
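The two-reflection decomposition can be checked directly with matrices (a sketch assuming numpy; the reflection and rotation matrices below are the standard ones):

```python
import numpy as np

# Multiplication by (0,1) sends (x,y) to (-y,x). As linear maps, this
# rotation is the swap (x,y)->(y,x) followed by the flip (a,b)->(-a,b),
# and the two reflections do not commute.
swap = np.array([[0.0, 1.0], [1.0, 0.0]])    # (x,y) -> (y,x)
flip = np.array([[-1.0, 0.0], [0.0, 1.0]])   # (a,b) -> (-a,b)
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])  # (x,y) -> (-y,x)

assert np.array_equal(flip @ swap, rot90)            # flip after swap = rotation
assert not np.array_equal(swap @ flip, flip @ swap)  # they don't commute
```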
 
  • #9
Deveno said:
Why? … This is incorrect. You are assuming that $(a,b)\cdot(c,d) = (ac,bd)$. … <snip>
Thanks Deveno … at first glance this post looks EXTREMELY helpful to my gaining an understanding of k-algebras … …

Obviously I have to be very careful over assuming things regarding the multiplication …

Working through your post now ...

Thanks again,

Peter
 
  • #10
Deveno said:
Why? … This is incorrect. You are assuming that $(a,b)\cdot(c,d) = (ac,bd)$. … <snip>
Hi Deveno … thanks for your extensive help ...

Sorry to be slow, but I need your further assistance ...

In your example of \(\displaystyle \mathbb{R}^2\) as an \(\displaystyle \mathbb{R}\)-algebra, you write:

" … … Given the basis \(\displaystyle \{ e_1, e_2 \} = \{ (1,0), (0,1) \} \) … … "

So we have \(\displaystyle e_1 = (1,0)\) and \(\displaystyle e_2 = (0,1)\) … …

Then you write:

" … … \(\displaystyle e_1e_1 = ( E_{11} + E_{22} ) ( E_{11} + E_{22} )\) … … "

so, that is \(\displaystyle e_1 = ( E_{11} + E_{22} )\)

BUT … …

\(\displaystyle E_{11} + E_{22} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} \)

while, as indicated above, \(\displaystyle e_1 = (1,0)\) ?

Can you clarify?

Peter
 
  • #11
Peter said:
Hi Deveno … thanks for your extensive help ...

Sorry to be slow, but I need your further assistance ...

In your example of \(\displaystyle \mathbb{R}^2\) as an \(\displaystyle \mathbb{R}\)-algebra, you write:

" … … Given the basis \(\displaystyle \{ e_1, e_2 \} = \{ (1,0), (0,1) \} \) … … "

So we have \(\displaystyle e_1 = (1,0)\) and \(\displaystyle e_2 = (0,1)\) … …

Then you write:

" … … \(\displaystyle e_1e_1 = ( E_{11} + E_{22} ) ( E_{11} + E_{22} )\) … … "

so, that is \(\displaystyle e_1 = ( E_{11} + E_{22} )\)

BUT … …

\(\displaystyle E_{11} + E_{22} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} \)

while, as indicated above, \(\displaystyle e_1 = (1,0)\) ?

Can you clarify?

Peter

Given a vector space ($F$-module) $V$, one can realize this vector space many different ways. For example, one can embed an isomorph of $\Bbb R^2$ in $\Bbb R^4$ a LOT of different ways, just send any basis $\{v_1,v_2\}$ to two linearly independent vectors of $\Bbb R^4$.

Here, that embedding of $\Bbb C$, as a 2-dimensional $\Bbb R$-algebra, into $\text{Mat}_2(\Bbb R)$ (a 4-dimensional vector space over $\Bbb R$) is given by:

$e_1 = (1,0) \mapsto E_{11} + E_{22} = \begin{bmatrix}1&0\\0&1\end{bmatrix}$

$e_2 = (0,1) \mapsto E_{21} - E_{12} = \begin{bmatrix}0&-1\\1&0\end{bmatrix}$.

Of course, it might not be obvious that $\{E_{11}+E_{22},E_{21}-E_{12}\}$ is linearly independent, but if:

$c_1(E_{11}+E_{22}) + c_2(E_{21}-E_{12}) = 0$ (the 0-matrix)

then we have a linear combination of the $E_{ij}$ that sums to 0, and since the $E_{ij}$ are linearly independent, it follows that $c_1 = c_2 = 0$.

The important thing about this embedding is not only is it a VECTOR-SPACE isomorphism, but it preserves the RING multiplication as well.

So we have "the same algebraic structure" (complex numbers), as on one hand "two-dimensional things" (2-vectors in the plane), and (equivalently) as "four-dimensional things" (2x2 matrices).

If we extend by linearity, we obtain the algebra monomorphism:

$\phi: \Bbb C \to \text{Mat}_2(\Bbb R)$ given by:

$\phi(a+bi) = \begin{bmatrix}a&-b\\b&a\end{bmatrix}$

Again, it is instructive to work out that $\phi$ is both a ring-homomorphism and a vector space homomorphism with trivial kernel.

In particular, since $\Bbb C$ is isomorphic to $\phi(\Bbb C)$, it follows that for these two basis choices, we have the same multiplication constants, so it really doesn't matter if we talk about $e_j$ or $\phi(e_j)$, they "act the same".
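The claim that $\phi$ preserves both structures can be verified numerically (a sketch assuming numpy; `phi` below is just the map defined above):

```python
import numpy as np

# phi(a+bi) = [[a,-b],[b,a]] preserves both addition and multiplication,
# so it is a ring homomorphism (and R-linear) with trivial kernel.
def phi(z):
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 3 + 4j, 1 + 2j
assert np.allclose(phi(z) + phi(w), phi(z + w))   # additive
assert np.allclose(phi(z) @ phi(w), phi(z * w))   # multiplicative
assert np.allclose(phi(1), np.eye(2))             # phi(1) = I
```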
 

FAQ: Linear Algebras or k-algebras - Cohn p. 53-54 - SIMPLE CLARIFICATION

What is a linear algebra?

In Cohn's usage, a "linear algebra" over a field $k$ is a vector space over $k$ equipped with a bilinear multiplication, so it is simultaneously a ring and a vector space. (The same phrase also names the branch of mathematics concerned with vector spaces, matrices, and linear transformations, but that is not the sense meant here.)

What is a k-algebra?

A k-algebra is a vector space over a field k equipped with a bilinear multiplication operation. Depending on the axioms imposed on the multiplication, this framework covers associative algebras, commutative algebras, and non-associative algebras such as $\Bbb R^3$ with the cross product.

How are linear algebras and k-algebras related?

In Cohn's book the two terms are synonymous: a linear algebra over k is precisely a k-algebra. Some authors reserve "linear algebra" for the case where k is the field of real or complex numbers, but the operations of addition, scalar multiplication, and bilinear multiplication satisfy the same axioms in either setting.

What is the difference between a simple and a semisimple k-algebra?

A simple k-algebra is one that has no non-trivial two-sided ideals, while a semisimple k-algebra is one that can be decomposed into a direct sum of simple k-algebras. In other words, a semisimple k-algebra is a direct sum of simple k-algebras, whereas a simple k-algebra cannot be broken down further.

How does Cohn's book clarify the concept of k-algebras?

Cohn's book provides a clear and concise explanation of the key concepts and properties of k-algebras. It also includes numerous examples and exercises to help readers gain a deeper understanding of the subject. Additionally, the book presents the material in a logical and organized manner, making it easier for readers to follow and comprehend.
