Regular representations of finite dimensional algebras

  • #1
Math Amateur
I am reading "Introduction to Ring Theory" by P. M. Cohn (Springer Undergraduate Mathematics Series)

In Chapter 2: Linear Algebras and Artinian Rings we read the following on page 57:
https://www.physicsforums.com/attachments/3149

I am trying to gain an understanding of representations. I would welcome a simple example of representations of algebras, as this would help a great deal … Further, Exercise 2.1 (4) reads as follows:

"Verify that the regular representation (2.10 - see above text) is a homomorphism of \(\displaystyle A\) as a right \(\displaystyle A\)-module.

How is the matrix \(\displaystyle ( \rho_{ij} (a) )\) affected by a change of basis?Can someone please help me get started on this problem?Peter
 
  • #2
Peter said:
I am trying to gain an understanding of representations. I would welcome a simple example of representations of algebras as this would help a great deal …
One very simple example would be the complex numbers, considered as a two-dimensional algebra over the field of real numbers.

If we take $\{1,i\}$ as a basis for $\mathbb{C}$ then every element of $\mathbb{C}$ can be expressed in the form $x+iy$ with $x,y \in\mathbb{R}$. Given an element $a+ib \in\mathbb{C}$, the right multiplication $\rho_{a+ib}$ is the map $x+iy \mapsto (x+iy)(a+ib) = (xa-yb) + i(xb+ya)$.

Now use the basis $\{1,i\}$ to represent the complex number $x+iy$ by the vector $[x,y]$. Then the equation $\rho_{a+ib}(x+iy) = (xa-yb) + i(xb+ya)$ corresponds to the matrix equation $$[xa-yb,\,xb+ya] = [x,\,y]\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$$ Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$

To verify that the regular representation is a multiplicative mapping in this example, notice that the product of complex numbers $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ corresponds to the matrix equation $\begin{bmatrix}a&b\\ -b&a \end{bmatrix} \begin{bmatrix}c&d\\ -d&c \end{bmatrix} = \begin{bmatrix}ac -bd&ad+bc\\ -(ad+bc)&ac-bd \end{bmatrix} .$
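
If it helps to see this numerically, here is a minimal Python sketch (it assumes NumPy, and the function name `rho` is just for illustration, not Cohn's notation) that checks both matrix identities above with row vectors:

```python
import numpy as np

def rho(a, b):
    """Matrix representing right multiplication by a + ib
    with respect to the basis {1, i} (row-vector convention)."""
    return np.array([[a, b],
                     [-b, a]], dtype=float)

a, b, c, d, x, y = 3.0, 4.0, 1.0, -2.0, 5.0, 7.0

# (x + iy)(a + ib) = (xa - yb) + i(xb + ya)  corresponds to  [x, y] @ rho(a, b)
assert np.allclose(np.array([x, y]) @ rho(a, b),
                   np.array([x*a - y*b, x*b + y*a]))

# (a + ib)(c + id) = (ac - bd) + i(ad + bc)  corresponds to  rho(a, b) @ rho(c, d)
assert np.allclose(rho(a, b) @ rho(c, d),
                   rho(a*c - b*d, a*d + b*c))
```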
 
  • #3
Opalg said:
One very simple example would be the complex numbers, considered as a two-dimensional algebra over the field of real numbers.

If we take $\{1,i\}$ as a basis for $\mathbb{C}$ then every element of $\mathbb{C}$ can be expressed in the form $x+iy$ with $x,y \in\mathbb{R}$. Given an element $a+ib \in\mathbb{C}$, the right multiplication $\rho_{a+ib}$ is the map $x+iy \mapsto (x+iy)(a+ib) = (xa-yb) + i(xb+ya)$.

Now use the basis $\{1,i\}$ to represent the complex number $x+iy$ by the vector $[x,y]$. Then the equation $\rho_{a+ib}(x+iy) = (xa-yb) + i(xb+ya)$ corresponds to the matrix equation $$[xa-yb,\,xb+ya] = [x,\,y]\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$$ Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$

To verify that the regular representation is a multiplicative mapping in this example, notice that the product of complex numbers $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ corresponds to the matrix equation $\begin{bmatrix}a&b\\ -b&a \end{bmatrix} \begin{bmatrix}c&d\\ -d&c \end{bmatrix} = \begin{bmatrix}ac -bd&ad+bc\\ -(ad+bc)&ac-bd \end{bmatrix} .$

Thanks so much for the helpful example, Opalg ...

Just working through the detail now …

Thanks again … examples are so helpful!

Peter
 
  • #4
Peter said:
Thanks so much for the helpful example, Opalg ...

Just working through the detail now …

Thanks again … examples are so helpful!

Peter
A further question I have regarding representations is as follows. In the text above we find the following:

"Given a finite-dimensional algebra \(\displaystyle A\) over a field \(\displaystyle k\), of dimension \(\displaystyle n\), say, consider the right multiplication by an element \(\displaystyle a \in A\):

\(\displaystyle \rho_a \ : \ x \mapsto xa \) where \(\displaystyle x \in A\) … … … (2.10)

This is a linear transformation of \(\displaystyle A\) and so can be represented by an \(\displaystyle n \times n\) matrix over \(\displaystyle k\). Thus we have a mapping \(\displaystyle \rho \ : \ A \to k_n\), and this is easily seen to be a homomorphism. … … "
I cannot see how (2.10) leads to a mapping of the form \(\displaystyle \rho \ : \ A \to k_n\). Can someone please explain how this follows? (I am not sure I really understand the notation!)

Peter
 
  • #5
Peter said:
A further question I have regarding representations is as follows. In the text above we find the following:

"Given a finite-dimensional algebra \(\displaystyle A\) over a field \(\displaystyle k\), of dimension \(\displaystyle n\), say, consider the right multiplication by an element \(\displaystyle a \in A\):

\(\displaystyle \rho_a \ : \ x \mapsto xa \) where \(\displaystyle x \in A\) … … … (2.10)

This is a linear transformation of \(\displaystyle A\) and so can be represented by an \(\displaystyle n \times n\) matrix over \(\displaystyle k\). Thus we have a mapping \(\displaystyle \rho \ : \ A \to k_n\), and this is easily seen to be a homomorphism. … … "
I cannot see how (2.10) leads to a mapping of the form \(\displaystyle \rho \ : \ A \to k_n\). Can someone please explain how this follows? (I am not sure I really understand the notation!)

Peter
I assume that $k_n$ is Cohn's notation for the set of $n\times n$ matrices over $k$. The procedure for representing the map $\rho_a$ by a matrix is just the standard method for associating a matrix with a linear transformation of a finite-dimensional vector space.

Given a linear transformation $T: V\to V$ from an $n$-dimensional vector space $V$ to itself, you first choose a basis $\{e_1,e_2,\ldots,e_n\}$ for $V$. Then you express each of the vectors $Te_1,Te_2,\ldots,Te_n$ as a linear combination of $e_1,\ldots,e_n$, and you use the coefficients in those combinations to form the rows (or is it the columns? – see Note below) of an $n\times n$ matrix. That is the procedure that Cohn is using to construct the regular representation of $A$.

In that extract from Cohn's book, the need to use a basis for $A$ in order to construct the matrix representation $\rho$ is explained in the paragraph leading up to equation (2.12).

[Note. In my previous comment, I initially had the rows and columns of the matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}$ the wrong way round (I have changed it now). My excuse is that as an analyst I am used to using the left regular representation, which in this example leads to the transpose matrix. Algebraists traditionally prefer the right regular representation (as Cohn does), which I find confusing.]
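
To make the recipe completely concrete, here is a minimal Python sketch for $A=\mathbb{C}$ over $\mathbb{R}$ with basis $\{1,i\}$ (the function name, Python's complex type and NumPy are just illustrative assumptions, not Cohn's notation): apply $\rho_a$ to each basis vector and use the resulting coordinates as the rows of the matrix.

```python
import numpy as np

def matrix_of_rho(a):
    """Build the matrix of rho_a : x -> x*a on C, viewed as a 2-dimensional
    real vector space with basis {1, i}.  Row i contains the coordinates of
    rho_a(e_i) with respect to that basis (the row convention used above)."""
    basis = [1, 1j]                       # e_1 = 1, e_2 = i
    rows = []
    for e in basis:
        image = e * a                     # rho_a applied to a basis vector
        rows.append([image.real, image.imag])
    return np.array(rows)

print(matrix_of_rho(3 + 4j))
# [[ 3.  4.]
#  [-4.  3.]]   i.e. [[a, b], [-b, a]] with a = 3, b = 4
```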
 
  • #6
Opalg said:
One very simple example would be the complex numbers, considered as a two-dimensional algebra over the field of real numbers.

If we take $\{1,i\}$ as a basis for $\mathbb{C}$ then every element of $\mathbb{C}$ can be expressed in the form $x+iy$ with $x,y \in\mathbb{R}$. Given an element $a+ib \in\mathbb{C}$, the right multiplication $\rho_{a+ib}$ is the map $x+iy \mapsto (x+iy)(a+ib) = (xa-yb) + i(xb+ya)$.

Now use the basis $\{1,i\}$ to represent the complex number $x+iy$ by the vector $[x,y]$. Then the equation $\rho_{a+ib}(x+iy) = (xa-yb) + i(xb+ya)$ corresponds to the matrix equation $$[xa-yb,\,xb+ya] = [x,\,y]\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$$ Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$

To verify that the regular representation is a multiplicative mapping in this example, notice that the product of complex numbers $(a+ib)(c+id) = (ac-bd) + i(ad+bc)$ corresponds to the matrix equation $\begin{bmatrix}a&b\\ -b&a \end{bmatrix} \begin{bmatrix}c&d\\ -d&c \end{bmatrix} = \begin{bmatrix}ac -bd&ad+bc\\ -(ad+bc)&ac-bd \end{bmatrix} .$

Hi Opalg,

Just a clarification … …

In your post above you write:

" … … Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$ … … … "

However, Cohn writes that the right multiplication \(\displaystyle \rho_a\) is called a regular representation, and \(\displaystyle \rho_a\) maps \(\displaystyle x\) to \(\displaystyle xa\), not \(\displaystyle a\) to a matrix … can you clarify?

Peter
 
  • #7
Opalg said:
I assume that $k_n$ is Cohn's notation for the set of $n\times n$ matrices over $k$. The procedure for representing the map $\rho_a$ by a matrix is just the standard method for associating a matrix with a linear transformation of a finite-dimensional vector space.

Given a linear transformation $T: V\to V$ from an $n$-dimensional vector space $V$ to itself, you first choose a basis $\{e_1,e_2,\ldots,e_n\}$ for $V$. Then you express each of the vectors $Te_1,Te_2,\ldots,Te_n$ as a linear combination of $e_1,\ldots,e_n$, and you use the coefficients in those combinations to form the rows (or is it the columns? – see Note below) of an $n\times n$ matrix. That is the procedure that Cohn is using to construct the regular representation of $A$.

In that extract from Cohn's book, the need to use a basis for $A$ in order to construct the matrix representation $\rho$ is explained in the paragraph leading up to equation (2.12).

[Note. In my previous comment, I initially had the rows and columns of the matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}$ the wrong way round (I have changed it now). My excuse is that as an analyst I am used to using the left regular representation, which in this example leads to the transpose matrix. Algebraists traditionally prefer the right regular representation (as Cohn does), which I find confusing.]
Hi Opalg … thanks for the help … …

You write:

"I assume that $k_n$ is Cohn's notation for the set of $n\times n$ matrices over $k$. The procedure for representing the map $\rho_a$ by a matrix is just the standard method for associating a matrix with a linear transformation of a finite-dimensional vector space."

Indeed … I checked carefully in Cohn's text and you are correct … …

Just reflecting on the rest of your post, now … …

Peter
 
  • #8
Peter said:
Hi Opalg,

Just a clarification … …

In your post above you write:

" … … Thus the right regular representation of $\mathbb{C}$ (with respect to the basis $\{1,i\}$) takes the complex number $a+ib$ to the $2\times2$ matrix $\begin{bmatrix}a&b\\ -b&a \end{bmatrix}.$ … … … "

However, Cohn writes that the right multiplication \(\displaystyle \rho_a\) is called a regular representation, and \(\displaystyle \rho_a\) maps \(\displaystyle x\) to \(\displaystyle xa\), not \(\displaystyle a\) to a matrix … can you clarify?

Peter
It's true that the terminology gets a bit slippery here. The map $a\mapsto \rho_a$ takes $a\in A$ to $\rho_a$, which is a linear transformation of $A$. However, a linear transformation (of a finite-dimensional space) is often described by a matrix, and Cohn goes on to say "Such a homomorphism into a full matrix ring is called a matrix representation or simply a representation of $A$". So at that stage Cohn is identifying the linear transformation $\rho_a$ with its associated matrix. More precisely, he is defining a map $\rho:A\to k_n$ by saying that $\rho(a)$ is the matrix associated with the linear transformation $\rho_a$.
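
To spell out the two stages in the $\mathbb{C}$ example: $\rho_a$ is a linear map, while $\rho(a)$ is the matrix read off from that map. Here is a minimal Python sketch of that distinction (the names `rho_a` and `rho` are illustrative, not Cohn's notation, and NumPy is assumed):

```python
import numpy as np

def rho_a(a):
    """The linear transformation rho_a : x -> x*a of (2.10), as a function."""
    return lambda x: x * a

def rho(a):
    """The matrix rho(a) in k_n: its rows are the coordinates of
    rho_a(1) and rho_a(i) with respect to the basis {1, i}."""
    T = rho_a(a)
    return np.array([[T(e).real, T(e).imag] for e in (1, 1j)])

z, w = 3 + 4j, 1 - 2j
assert np.allclose(rho(z) @ rho(w), rho(z * w))   # rho is multiplicative
assert np.allclose(rho(z) + rho(w), rho(z + w))   # and additive, hence a homomorphism
```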
 

FAQ: Regular representations of finite dimensional algebras

What is a regular representation of a finite dimensional algebra?

A regular representation of a finite dimensional algebra represents the algebra's elements as linear transformations of the algebra itself, regarded as a finite dimensional vector space: each element acts by multiplication (right multiplication in Cohn's convention). Choosing a basis then turns each of these transformations into a matrix over the base field. This representation is useful for studying the algebra's structure and properties.

How is a regular representation constructed?

A regular representation is constructed by letting the algebra act on itself by right (or left) multiplication. Choose a basis of the algebra, express the image of each basis element under multiplication by a given element as a linear combination of the basis, and use the resulting coefficients as the entries of the matrix representing that element.

What is the significance of regular representations in the study of finite dimensional algebras?

Regular representations are significant because they realize an abstract finite dimensional algebra concretely as an algebra of matrices, so its structure and properties can be studied using linear algebra. They also provide a way to compare different algebras and understand their similarities and differences.

Can a finite dimensional algebra have multiple regular representations?

Yes, in the sense that the matrices obtained depend on choices. Although the underlying vector space is always the algebra itself, different choices of basis give different (but equivalent, i.e. conjugate) matrix representations, and one can use left multiplication instead of right multiplication.

How are regular representations related to other representations of finite dimensional algebras?

The regular representation is generally not irreducible. Every simple (irreducible) representation of a finite dimensional algebra arises as a quotient of the regular representation, and when the algebra is semisimple the regular representation decomposes as a direct sum of simple representations, with every simple representation appearing as a summand. In this sense the regular representation contains the building blocks of all other representations.
