Multiplication Maps on Algebras .... Bresar, Lemma 1.25 ....

Summary: I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra", and am currently focused on Chapter 1: Finite Dimensional Division Algebras. Lemma 1.25 asserts that M(A) = End_F(A) for a finite-dimensional central simple algebra A of dimension d. The proof shows that the dimension of the multiplication algebra M(A) is at least d^2; since M(A) is contained in End_F(A) and the endomorphism algebra End_F(A) has dimension exactly d^2, the two must be equal. The lower bound comes from the d^2 maps L_{u_i} R_{u_j} given by left and right multiplications by the basis elements of A, which are linearly independent by Lemma 1.24.
  • #1
Math Amateur
I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra" and am currently focussed on Chapter 1: Finite Dimensional Division Algebras ... ...

I need help with the proof of Lemma 1.25 ...

Lemma 1.25 reads as follows:
[Attachment: Bresar - Lemma 1.25 - PART 1.png]

[Attachment: Bresar - Lemma 1.25 - PART 2.png]


My questions on the proof of Lemma 1.25 are as follows:

Question 1

In the above text from Bresar we read the following:

" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?Question 2

In the above text from Bresar we read the following:

" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##

and so ##M(A) = [ \text{ End}_F (A) \ : \ F ]##. ... ... "Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...

... implies that ... ##M(A) = [ \text{ End}_F (A) \ : \ F ]## ...
Hope someone can help ...

Peter
===========================================================

*** NOTE ***

So that readers will be able to understand the context and notation of the above post ... I am providing Bresar's first two pages on Multiplication Algebras ... as follows:
[Attachment: Bresar - Section 1.5 Multiplication Algebra - PART 1.png]

[Attachment: Bresar - Section 1.5 Multiplication Algebra - PART 2.png]
 

  • #2
Math Amateur said:
Question 1
In the above text from Bresar we read the following:
" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "
Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?
We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle## is, by definition, generated by left and right multiplications.
Lemma 1.24 guarantees that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## is a linearly independent set ##^{*})##. There are ##d^2## of them, so the dimension of ##M(A)## must be at least ##d^2##, because we have already found ##d^2## linearly independent vectors, which can be extended to a basis.

##^{*}) \; 0 = \sum_{i,j} \lambda_{ij}L_{u_i}R_{u_j} = \sum_i L_{u_i} R_{b_i}## with ##b_i := \sum_j \lambda_{ij}u_j \Longrightarrow## (Lemma 1.24) ##b_i = 0 \Longrightarrow \lambda_{ij}=0## because ##\{u_k\}## is a basis; hence the ##L_{u_i}R_{u_j} = L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## are linearly independent.
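As a quick numerical sanity check (my own illustration, not part of Bresar's argument), here is a short Python sketch for the concrete algebra ##A = M_2(\mathbb{R})##, where ##d = 4##: the ##d^2 = 16## operators ##L_{u_i} R_{u_j}## built from the matrix-unit basis are linearly independent, and in fact span all of ##End(A)##.

```python
import numpy as np

# Sanity check for A = M_2(R), d = 4: the 16 maps x |-> u_i x u_j built from
# the matrix-unit basis u_1..u_4 = E_11, E_12, E_21, E_22 are linearly
# independent, hence dim M(A) >= 16 = dim End(A).

basis = []
for r in range(2):
    for c in range(2):
        e = np.zeros((2, 2))
        e[r, c] = 1.0
        basis.append(e)

ops = []
for ui in basis:
    for uj in basis:
        # matrix of x |-> ui x uj in coordinates: its columns are the images
        # of the basis vectors, flattened into length-4 coordinate vectors
        cols = [(ui @ x @ uj).flatten() for x in basis]
        ops.append(np.column_stack(cols).flatten())

# 16 operators, each a vector in the 16-dimensional space End(A)
print(np.linalg.matrix_rank(np.array(ops)))  # prints 16: full rank
```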
Question 2
In the above text from Bresar we read the following:
" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##
and so ##M(A) = [ \text{ End}_F (A) \ : \ F ]##. ... ... "
Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...
... implies that ... ##M(A) = [ \text{ End}_F (A) \ : \ F ]## ...
Well, ##L_a## as well as ##R_b## are endomorphisms of ##A##, i.e. ##\mathbb{F}##-linear mappings ##A \rightarrow A##.
Therefore ##M(A) \subseteq End_\mathbb{F}(A)##, so ##M(A)## is a subspace of dimension at least ##d^2##. On the other hand, ##End_\mathbb{F}(A)## has dimension exactly ##d^2##, so there is no room left between ##M(A)## and ##End_\mathbb{F}(A)##.
That ##\dim End_\mathbb{F}(A)=d^2## is seen most quickly by thinking of matrices: since ##\{u_1,\ldots, u_d\}## is a basis of ##A##, every element of ##End_\mathbb{F}(A)## can be written as a ##(d \times d)##-matrix with respect to this basis.
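To spell the count out (a standard argument, added here for completeness): the maps ##E_{ij} \in End_\mathbb{F}(A)## defined on the basis by ##E_{ij}(u_k) = \delta_{jk} u_i## correspond exactly to the ##(d \times d)##-matrix units, they form a basis of ##End_\mathbb{F}(A)##, and there are precisely ##d^2## of them; hence ##\dim End_\mathbb{F}(A) = d^2##.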
 
  • #3
fresh_42 said:
We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle## is, by definition, generated by left and right multiplications.
Lemma 1.24 guarantees that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## is a linearly independent set ##^{*})##. There are ##d^2## of them, so the dimension of ##M(A)## must be at least ##d^2##, because we have already found ##d^2## linearly independent vectors, which can be extended to a basis.

##^{*}) \; 0 = \sum_{i,j} \lambda_{ij}L_{u_i}R_{u_j} = \sum_i L_{u_i} R_{b_i}## with ##b_i := \sum_j \lambda_{ij}u_j \Longrightarrow## (Lemma 1.24) ##b_i = 0 \Longrightarrow \lambda_{ij}=0## because ##\{u_k\}## is a basis; hence the ##L_{u_i}R_{u_j} = L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## are linearly independent.

Well, ##L_a## as well as ##R_b## are endomorphisms of ##A##, i.e. ##\mathbb{F}##-linear mappings ##A \rightarrow A##.
Therefore ##M(A) \subseteq End_\mathbb{F}(A)##, so ##M(A)## is a subspace of dimension at least ##d^2##. On the other hand, ##End_\mathbb{F}(A)## has dimension exactly ##d^2##, so there is no room left between ##M(A)## and ##End_\mathbb{F}(A)##.
That ##\dim End_\mathbb{F}(A)=d^2## is seen most quickly by thinking of matrices: since ##\{u_1,\ldots, u_d\}## is a basis of ##A##, every element of ##End_\mathbb{F}(A)## can be written as a ##(d \times d)##-matrix with respect to this basis.
Thanks fresh_42 ... that was most helpful in grasping the meaning of Lemma 1.25 ...

But just a clarification ... You write:

" ... ... Lemma 1.24 guarantees us that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linear independent ##^{*})##. "What do you mean when you write " ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## " ...

There appear to be two "multiplications" involved, namely ##\cdot## and ##\circ## ... but what are these ...?

and, further, what is the meaning and significance of the equality " ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## " ...?

Can you help ...

Still reflecting on your post ...

Peter
 
  • #4
I simply wanted to indicate that the multiplication ##"\cdot"## here is the successive application of mappings, i.e. composition ##"\circ"##, no matter how it is written, even without a multiplication sign. In the end, ##(L_{u_i}R_{u_j})(x) = L_{u_i}(R_{u_j}(x))=L_{u_i}(x\cdot u_j) = u_i \cdot x \cdot u_j##.
 
  • #5
I am learning a bit about algebra(s) that I never knew! What exactly does the property of being "central simple" have to do with the conclusions above?

Also: now I want to understand the "Brauer group".
 
  • #6
A ##\mathbb{K}##-algebra ##\mathcal{A}## is central simple if the center ##\mathcal{C}(\mathcal{A})=\{c\in \mathcal{A}\,\vert \,ca=ac \;\forall \,a\in\mathcal{A}\}## of ##\mathcal{A}## equals ##\mathbb{K}## and ##\mathcal{A}## as a ring is simple, i.e. has no two-sided ideals other than ##\{0\}## and ##\mathcal{A}##.
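For instance (a standard example, not taken from the quoted excerpts): the matrix algebra ##\mathbb{M}(n,\mathbb{K})## is central simple over ##\mathbb{K}##; its center consists precisely of the scalar matrices ##\lambda I_n##, ##\lambda \in \mathbb{K}##, and it has no two-sided ideals other than ##\{0\}## and itself.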

It has been a long time since I last saw the Brauer group, so I've read the definition again. Funny that you should bring it up; how did that come about?

According to the Artin-Wedderburn theorem, every central simple ##\mathbb{K}##-algebra is isomorphic to a matrix algebra ##\mathbb{M}(n,\mathcal{D})## over a division algebra ##\mathcal{D}## whose center is ##\mathbb{K}##. Two central simple algebras are considered equivalent if they have isomorphic underlying division algebras; in particular ##\mathbb{M}(n,\mathbb{K}) \sim \mathbb{M}(m,\mathbb{K})## for all ##n,m##, and the elements of the Brauer group (of ##\mathbb{K}##) are the equivalence classes. E.g. ##[1] = [\mathbb{M}(1,\mathbb{K})]=[\mathbb{K}]## is the identity element, and the inverse of a class is given by the opposite algebra ##\mathcal{A}^{op}##, which has the multiplication ##(a,b) \mapsto ba##.
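A concrete example (a standard fact, not from the quoted text): by Frobenius' theorem, the finite-dimensional division algebras over ##\mathbb{R}## are ##\mathbb{R}##, ##\mathbb{C}## and the quaternions ##\mathbb{H}##, and of these only ##\mathbb{R}## and ##\mathbb{H}## have center ##\mathbb{R}##; hence ##\text{Br}(\mathbb{R}) = \{[\mathbb{R}], [\mathbb{H}]\} \cong \mathbb{Z}/2\mathbb{Z}##.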

However, the really interesting question here is: do all Scottish mathematicians (Hamilton, Wedderburn, ...) have a special relationship with strange algebras, and why is that so? :cool:
 
  • #7
I wasn't quite satisfied with this lapidary description of the equivalence relation. Unfortunately, the English and German Wikipedia pages are one-to-one translations of each other, but the French one is a little better. Starting with a central simple algebra ##\mathcal{A}## over a field ##\mathbb{K}##, we have ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{L} \cong \mathbb{M}(n,\mathbb{L})## for a suitable finite field extension ##\mathbb{L} \supseteq \mathbb{K}## (a splitting field).

Now ##\mathcal{A}## and ##\mathcal{B}## are considered equivalent, ##\mathcal{A} \sim \mathcal{B}##, if there are natural numbers ##n,m## and an isomorphism ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{M}(n,\mathbb{K}) \cong \mathcal{B} \otimes_{\mathbb{K}} \mathbb{M}(m,\mathbb{K})##.
The (abelian) Brauer group then consists of the equivalence classes, with the multiplication induced by ##\otimes##.

(At least as far as my bad French allowed me to translate it.)
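A small worked example of this relation (standard, added for illustration): since ##\mathbb{M}(n,\mathbb{K}) \otimes_{\mathbb{K}} \mathbb{M}(m,\mathbb{K}) \cong \mathbb{M}(nm,\mathbb{K})##, taking ##\mathcal{A} = \mathbb{K}## and ##\mathcal{B} = \mathbb{M}(n,\mathbb{K})## gives ##\mathbb{K} \otimes_{\mathbb{K}} \mathbb{M}(nm,\mathbb{K}) \cong \mathbb{M}(n,\mathbb{K}) \otimes_{\mathbb{K}} \mathbb{M}(m,\mathbb{K})##, so ##[\mathbb{K}] = [\mathbb{M}(n,\mathbb{K})]##, i.e. all matrix algebras over ##\mathbb{K}## lie in the identity class.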
 
  • #8
@fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
 
  • #9
jim mcnamara said:
@fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
His research dealt with the development of analytic methods to evaluate certain integrals over matrix spaces.
Hmmm ... I wonder whether they spoke Gaelic ...
 
  • #10
fresh_42 said:
We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle## is, by definition, generated by left and right multiplications.
Lemma 1.24 guarantees that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## is a linearly independent set ##^{*})##. There are ##d^2## of them, so the dimension of ##M(A)## must be at least ##d^2##, because we have already found ##d^2## linearly independent vectors, which can be extended to a basis.

##^{*}) \; 0 = \sum_{i,j} \lambda_{ij}L_{u_i}R_{u_j} = \sum_i L_{u_i} R_{b_i}## with ##b_i := \sum_j \lambda_{ij}u_j \Longrightarrow## (Lemma 1.24) ##b_i = 0 \Longrightarrow \lambda_{ij}=0## because ##\{u_k\}## is a basis; hence the ##L_{u_i}R_{u_j} = L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## are linearly independent.

Well, ##L_a## as well as ##R_b## are endomorphisms of ##A##, i.e. ##\mathbb{F}##-linear mappings ##A \rightarrow A##.
Therefore ##M(A) \subseteq End_\mathbb{F}(A)##, so ##M(A)## is a subspace of dimension at least ##d^2##. On the other hand, ##End_\mathbb{F}(A)## has dimension exactly ##d^2##, so there is no room left between ##M(A)## and ##End_\mathbb{F}(A)##.
That ##\dim End_\mathbb{F}(A)=d^2## is seen most quickly by thinking of matrices: since ##\{u_1,\ldots, u_d\}## is a basis of ##A##, every element of ##End_\mathbb{F}(A)## can be written as a ##(d \times d)##-matrix with respect to this basis.
Hi fresh_42 ...

Just a further clarification ... ...

You write:

" ... ... We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle ## is generated by left- and right-multiplications by definition. ... ... "Now ... if ##M(A)## is generated by ##L_a## and ##R_b## then it should contain elements like ##L_a L_b L_c## and ##L_a^2 R_b^2 R_c## ... and so on ...BUT ... how do elements like these fit with Bresar's definition of ##M(A)## ... as follows:

##M(A) := \{ L_{a_1} R_{b_2} + \ ... \ ... \ + L_{a_1} R_{b_2} \ | \ a_i, b_i \in A, n \in \mathbb{N} \}##

... ...

... ... unless ... we treat ##L_a L_b L_c = L_{abc} R_1 = L_t R_u##

where ##t = abc## and ##u = 1## ... and ##t, u \in A## ... ...

... and ...

we treat ##L_a^2 R_b^2 R_c = L_{aa} R_{cbb} = L_r R_s##

where ##r = aa## and ##s = cbb## ...

Can you help me to clarify this issue ...

Peter
 
  • #11
That's correct. In the lines preceding Definition 1.22, Bresar states the rules by which ##\{L_{a_1}R_{b_1}+\ldots +L_{a_n}R_{b_n}\}## becomes an algebra. Without them, it would simply be a set of endomorphisms.
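For reference, the rules in question (standard identities, easy to check directly from the definitions) are ##L_a L_b = L_{ab}##, ##R_c R_d = R_{dc}## and ##L_a R_b = R_b L_a##; together with linearity, they let any product of left and right multiplications collapse to a single ##L_t R_u##, exactly as in the reductions in #10.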
 
  • #12
But also, #10 is almost immediate from the definitions of ##L_s## and ##R_t##:

##(L_a L_b)\,x = (L_a \circ L_b)\,x = L_a(L_b x) = L_a(bx) = a(bx) = (ab)x = L_{ab}\,x.##

And virtually the same reasoning shows

##(R_c R_d)\,x = R_{dc}\,x.##

(Also note that any ##L_a## and any ##R_b## commute:

##L_a R_b = R_b L_a.##

This can be proved in a similar manner.)
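As a final numerical spot-check (my own illustration, not from the thread), these identities can be tested on random ##2 \times 2## real matrices:

```python
import numpy as np

# Spot-check the identities from #12 for A = M_2(R) with random matrices:
# (L_a L_b)x = L_{ab} x,  (R_c R_d)x = R_{dc} x,  L_a R_b = R_b L_a.
rng = np.random.default_rng(0)
a, b, c, d, x = (rng.standard_normal((2, 2)) for _ in range(5))

assert np.allclose(a @ (b @ x), (a @ b) @ x)  # (L_a L_b)x = L_{ab} x
assert np.allclose((x @ d) @ c, x @ (d @ c))  # (R_c R_d)x = R_{dc} x
assert np.allclose(a @ (x @ b), (a @ x) @ b)  # (L_a R_b)x = (R_b L_a)x
print("all identities hold on this sample")
```

All three are just instances of associativity of matrix multiplication, which is exactly the point of #12.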
 

FAQ: Multiplication Maps on Algebras .... Bresar, Lemma 1.25 ....

What are multiplication maps on algebras?

In the context of Bresar's book, the multiplication maps of an algebra ##A## over a field ##F## are the left multiplications ##L_a : x \mapsto ax## and the right multiplications ##R_b : x \mapsto xb## for ##a, b \in A##. They are ##F##-linear endomorphisms of ##A##, and the algebra they generate inside ##\text{End}_F(A)## is called the multiplication algebra ##M(A)##.

Who is Bresar and what is Lemma 1.25?

Matej Bresar is a mathematician known for his contributions to noncommutative algebra and ring theory. Lemma 1.25 appears in Chapter 1 of his book "Introduction to Noncommutative Algebra"; it is a lemma, i.e. a smaller result used on the way to larger theorems, and it states that ##M(A) = \text{End}_F(A)## for a finite-dimensional central simple algebra ##A##.

How are multiplication maps on algebras used?

Multiplication maps allow one to study an algebra ##A## through linear operators acting on it: questions about ##A## are translated into questions about the subalgebra ##M(A) \subseteq \text{End}_F(A)##. Results such as Lemma 1.25, which identifies ##M(A)## with all of ##\text{End}_F(A)## in the central simple case, are building blocks in the structure theory of rings and algebras.

What is the significance of Lemma 1.25 in Bresar's work?

Lemma 1.25 identifies ##M(A)## with the full endomorphism algebra for a finite-dimensional central simple algebra ##A##: the ##d^2## maps ##L_{u_i} R_{u_j}## obtained from a basis ##u_1, \ldots, u_d## of ##A## are linearly independent (Lemma 1.24), so ##[M(A) : F] \ge d^2 = [\text{End}_F(A) : F]##, and since ##M(A) \subseteq \text{End}_F(A)## this forces ##M(A) = \text{End}_F(A)##.

Are there any real-world applications of multiplication maps on algebras?

Division algebras and central simple algebras do appear in applications; for example, cyclic division algebras are used to construct space-time codes in coding theory, and algebras of operators play a central role in quantum mechanics. The multiplication algebra ##M(A)## itself is primarily a tool within the structure theory of rings and algebras.
