Another Question about Finite Dimensional Division Algebras ....

In summary: the thread concerns two points in the proof of Lemma 1.1 in Matej Bresar's "Introduction to Noncommutative Algebra". First, why a real polynomial ##f(\omega)## splits into linear and quadratic factors in ##\mathbb{R}[\omega]##: this follows from the Fundamental Theorem of Algebra, because the non-real roots of a polynomial with real coefficients come in complex conjugate pairs, and each such pair multiplies out to a quadratic with real coefficients. Second, why, after substituting ##x \in D## and obtaining a product of factors equal to ##0##, one of the factors must itself be ##0##: a division algebra ##D## has no zero divisors, since every nonzero element is invertible.
  • #1
Math Amateur
I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra" and am currently focussed on Chapter 1: Finite Dimensional Division Algebras ... ...

I need help with another aspect of the proof of Lemma 1.1 ... ...

Lemma 1.1 reads as follows:
[Image: Bresar - Lemma 1.1.png]

My questions regarding Bresar's proof above are as follows:

Question 1

In the above text from Bresar, in the proof of Lemma 1.1 we read the following:

"... ... As we know, ##f( \omega )## splits into linear and quadratic factors in ##\mathbb{R} [ \omega ]## ... ..."

My question is ... how exactly do we know that ##f( \omega )## splits into linear and quadratic factors in ##\mathbb{R} [ \omega ]## ... can someone please explain this fact ... ...
Question 2

In the above text from Bresar, in the proof of Lemma 1.1 we read the following:

" ... ... Since ##f(x) = 0## we have

##( x - \alpha_1 ) \ ... \ ... \ ( x - \alpha_r )( x^2 + \lambda_1 x + \mu_1 ) \ ... \ ... \ ( x^2 + \lambda_s x + \mu_s ) = 0##

As ##D## is a division algebra, one of the factors must be ##0##. ... ... "

My question is ... why does ##D## being a division algebra mean that one of the factors must be zero ...?
Help with questions 1 and 2 above will be appreciated ... ...

Peter
==============================================================================

In order for readers of the above post to appreciate the context of the post I am providing pages 1-2 of Bresar ... as follows ...
[Image: Bresar - Page 1.png]

[Image: Bresar - Page 2.png]
 

  • #2
Math Amateur said:
My question is ... how exactly do we know that ##f( \omega )## splits into linear and quadratic factors in ##\mathbb{R} [ \omega ]## ... can someone please explain this fact ... ...
The text apparently assumes the properties of ##\mathbb{R}[\omega]## are prerequisite knowledge. The above fact follows in some way from "The Fundamental Theorem Of Algebra", but I don't know if the result itself has a famous name.

It's difficult to keep track of the various types of multiplication that are going on in Bresar's book. In the ring ##\mathbb{R}[\omega]## (the ring of polynomials with coefficients in ##\mathbb{R}## and one "indeterminate" ##\omega##) there is a multiplication defined for elements of the ring that tells us how to multiply polynomials like ##(4 + 9.8\omega^2)(2\omega - 1.3)## as we do in high school algebra. That type of multiplication already involves two types of multiplication - there is the multiplication of the numerical coefficients and there is the procedure for multiplying non-negative integer powers of ##\omega##. There are no identities in that type of multiplication that would allow something like ##\omega^2 \omega^5 = \omega^2 - 1##. Polynomials of different degrees in ##\omega## are distinct elements of the ring.
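For concreteness, here is that multiplication carried out purely by the ring rules - multiply coefficients in ##\mathbb{R}## and add exponents of ##\omega##:

##(4 + 9.8\omega^2)(2\omega - 1.3) = 19.6\omega^3 - 12.74\omega^2 + 8\omega - 5.2##

No value is ever substituted for ##\omega##; the result is simply another element of ##\mathbb{R}[\omega]##.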

When we begin to say things like "Let ##\omega = a##" or talk about the "roots" of a polynomial, we are stepping beyond the algebraic properties of ##\mathbb{R}[\omega]##. When we implement the notion of "Let ##\omega = a##" we create a mapping from a polynomial ##p(\omega)## in ##\mathbb{R}[\omega]## to an element ##b## in a different algebraic structure ##S##.

For example, the polynomial ##(3\omega^2 - 1)## is mapped to ##b = 3a^2 - 1 \in S##. The polynomial ##(3\omega^2 - 1)## is not the zero element of ##\mathbb{R}[\omega]##. (The zero element of ##\mathbb{R}[\omega]## is the polynomial "0".) If we write the statement ##3\omega^2 - 1 = 0## in the context of ##\mathbb{R}[\omega]##, we have simply written a false statement.

In the structure ##S##, the multiplication operation may result in things like ##3 a^2 - 1 = 0## because the multiplication operation in ##S## and the zero element of ##S## can be completely different than the multiplication and zero element in ##\mathbb{R}[\omega]##.

We are trained so thoroughly in secondary school algebra to think of ##\omega## as an "unknown number" that we don't see any contradiction in equations like ##3\omega^2 - 1 = 0##. We already have in mind that we are using a mapping from "polynomials in one indeterminate" to the real numbers and that mapping makes a polynomial "come out" to be a single real number.
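To make that mapping explicit (my notation, not Bresar's): "Let ##\omega = a##" defines the evaluation map ##\mathrm{ev}_a : \mathbb{R}[\omega] \to S## sending ##p(\omega)## to ##p(a)##, and the key point, used silently below, is that it respects the ring operations:

##\mathrm{ev}_a(p + q) = \mathrm{ev}_a(p) + \mathrm{ev}_a(q) \ , \qquad \mathrm{ev}_a(pq) = \mathrm{ev}_a(p)\,\mathrm{ev}_a(q)##

(The multiplicative identity requires ##a## to commute with the real coefficients; in Bresar's setting this holds because ##\mathbb{R}## sits inside the center of the algebra.)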

Math Amateur said:
My question is ... why does ##D## being a division algebra mean that one of the factors must be zero ...?

If we can say each of the individual factors represents an element ##b_i## of an (abstract!) division ring then we can rely on the theorem that if ##b_1 b_2 \cdots b_n = 0## then at least one of the ##b_i = 0##. (Does the book present a summary of theorems for division rings?)
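That theorem has a one-line proof per factor; a sketch in my own words: if ##b_1 b_2 = 0## and ##b_1 \neq 0##, then ##b_1## has an inverse in the division ring, so

##b_2 = b_1^{-1}(b_1 b_2) = b_1^{-1} \cdot 0 = 0##

and induction on the number of factors gives the general case.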

Finding the right words to show that each of the factors is an element of the abstract division ring under consideration makes my head spin. I'll try.

The theorem that a polynomial in ##\mathbb{R}[\omega]## factors into linear and quadratic factors is technically not a theorem about "roots" of polynomials. In ##\mathbb{R}[\omega]## we can say ## 3\omega^2 - 11\omega -4 = (3\omega+1)(\omega-4) ##, but it is simply false to write ##3\omega^2 - 11 \omega -4 = 0 ##.

However, we are allowed to consider a mapping from ##3\omega^2 - 11\omega - 4## (which is an element of the ring ##\mathbb{R}[\omega]##) to an element ##3a^2 - 11a - 4## in some other algebraic structure ##S##.

In the algebraic structure ##S##, it may be possible that ##3a^2 - 11a - 4 = 0##. Using our knowledge of the factorization in ##\mathbb{R}[\omega]## (and the fact that the mapping "let ##\omega = a##" is a homomorphism) we can deduce that the factorization ##(3a^2 - 11a - 4) = (3a + 1)(a - 4)## works in ##S## and that ##3a^2 - 11a - 4 = 0## implies ##(3a+1)(a-4) = 0##.
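As a sanity check, that factorization can be verified directly in ##S##, using only distributivity and the fact that ##a## commutes with real scalars:

##(3a + 1)(a - 4) = 3a^2 - 12a + a - 4 = 3a^2 - 11a - 4##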

Bresar's proof employs mappings back-and-forth between different algebraic structures.

We begin with elements ##\{1, x, x^2, \ldots, x^n\}## in a finite dimensional division ring and conclude there is a non-trivial linear combination of those elements that is equal to the zero of that division ring. The multiplication and addition operations in the linear combination involve operations on real numbers and operations defined in the division ring.
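Spelling out that step (the dimension count is the whole point): if ##D## has dimension ##n## as a vector space over ##\mathbb{R}##, then the ##n + 1## elements ##1, x, x^2, \ldots, x^n## cannot be linearly independent, so there are reals ##\lambda_0, \ldots, \lambda_n##, not all zero, with

##\lambda_0 1 + \lambda_1 x + \lambda_2 x^2 + \ldots + \lambda_n x^n = 0## in ##D##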

Then we map this linear combination to a polynomial ##p(\omega)## in ##\mathbb{R}[\omega]## by saying "Let ##x = \omega##". (But we do not map the statement that the linear combination is zero in the division ring to the statement that the polynomial is the zero polynomial in ##\mathbb{R}[\omega]##.)

We use the factorization properties of polynomials in ## \mathbb{R}[\omega] ## to specify how ##p(\omega)## factors.

We map the polynomial ##p(\omega)##, expressed as the product of factors, back to the division ring by saying "Let ##\omega = x##". This maps each individual factor to an element in the division ring. In the division ring, we know the product of those elements is 0, because the product of these elements is equal to the linear combination that we began with.
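In the notation of the Lemma, this last step reads: mapping the factored polynomial back into ##D## gives

##( x - \alpha_1 ) \ ... \ ( x - \alpha_r )( x^2 + \lambda_1 x + \mu_1 ) \ ... \ ( x^2 + \lambda_s x + \mu_s ) = f(x) = 0##

and since ##D## has no zero divisors, some ##x - \alpha_i = 0## or some ##x^2 + \lambda_j x + \mu_j = 0##; either way ##x## satisfies a real polynomial of degree at most ##2##.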

A different style of algebra book might present a proof that explicitly mentioned all these back and forth mappings.
 
  • #3
Thanks so much for the help, Stephen ...

Just working through what you have said ... and reflecting on it ...

Thanks again,

Peter
 
  • #4
Math Amateur said:
My question is ... how exactly do we know that ##f( \omega )## splits into linear and quadratic factors in ##\mathbb{R} [ \omega ]## ... can someone please explain this fact ... ...
Each polynomial can be divided into factors (by long polynomial division) as long as there are factors. Those which cannot be divided any further are called irreducible polynomials. In ##\mathbb{R}[\omega]##, i.e. with polynomials over ##\mathbb{R}##, the only irreducible polynomials are those of degree ##1## or ##2##. (Do you know why?)
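A concrete example of why "no real roots" does not mean "irreducible" (my illustration, not from the book):

##\omega^4 + 1 = ( \omega^2 + \sqrt{2}\,\omega + 1 )( \omega^2 - \sqrt{2}\,\omega + 1 )##

so ##\omega^4 + 1## is reducible in ##\mathbb{R}[\omega]## even though it is strictly positive on all of ##\mathbb{R}##; only degree ##1## and degree ##2## polynomials can be irreducible.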
Math Amateur said:
My question is ... why does ##D## being a division algebra mean that one of the factors must be zero ...?
How is a division algebra defined? What would it mean to have two elements ##p,q \neq 0## such that ##p \cdot q = 0##?
 
  • #5
"... how exactly do we know that f(ω) splits into linear and quadratic factors in R[ω] ..."

This is a very interesting fact that follows from the Fundamental Theorem of Algebra (FTA).

FTA states that every polynomial with complex coefficients (any of these can be real) defined by

##P(z) = c_n z^n + c_{n-1} z^{n-1} + \ldots + c_1 z + c_0##

that is not constant (i.e., ##n \geq 1## and ##c_n \neq 0##) must have a root in the complex plane. I.e., there must exist some number

##\xi \in \mathbb{C}##

such that

##P(\xi) = 0##.

For the rest of this post we assume that all the coefficients ##c_j## (##j = 0, \ldots, n##) are real.

Then we also know that for any number

##z \in \mathbb{C}##

we have

(°) conj(P(z)) = P(conj(z))

where conj(z) denotes the complex conjugate: if ##z = x + iy## for real ##x## and ##y##, then conj(z) = ##x - iy##.

(Suggestion: Prove (°) is true if and only if the coefficients of P(z) are all real. Use the facts that

conj(u+v) = conj(u) + conj(v)​

and

conj(uv) = conj(u) conj(v)​

. )

This tells us that if P(z) is a polynomial with real coefficients, then for any root ξ we have

0 = P(ξ) = conj(P(ξ)) = P(conj(ξ)).​

(The last equality is a consequence of all the coefficients being real.)

In other words, if ξ is not real then conj(ξ) is another root of P(z). So: All roots of P(z) are either real or come in complex conjugate pairs.

Finally, we know that if ##\xi_j##, ##1 \leq j \leq n##, are all the roots of P(z) (listed with multiplicity), then we can factor P(z) as

(*) ##P(z) = c_n (z - \xi_1) \ldots (z - \xi_n)##.

But by arranging the complex conjugate pairs next to each other we get expressions like

(**) ##(z - \xi)(z - \mathrm{conj}(\xi)) = z^2 - (\xi + \mathrm{conj}(\xi))z + \xi\,\mathrm{conj}(\xi)##

which we can easily see is a quadratic polynomial with real coefficients.
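To see the realness explicitly: writing ##\xi = u + iv## with ##u, v## real, we get ##\xi + \mathrm{conj}(\xi) = 2u## and ##\xi\,\mathrm{conj}(\xi) = u^2 + v^2##, so the quadratic is

##z^2 - 2uz + (u^2 + v^2)##

whose discriminant ##4u^2 - 4(u^2 + v^2) = -4v^2## is negative whenever ##v \neq 0##, i.e. the quadratic is irreducible over ##\mathbb{R}##.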

So from (*) and (**) we see that P(z) can be factored into expressions that are either of the form

##z - \xi##

for roots ξ that are real, or of the form

##z^2 - (\xi + \mathrm{conj}(\xi))z + \xi\,\mathrm{conj}(\xi)##

for roots ξ that are not real. So P(z) is, up to the real constant ##c_n##, a product of linear and quadratic polynomials having real coefficients. (When we assume P(z) has real coefficients to begin with!)
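A concrete instance of the whole argument (my example): ##P(z) = z^3 - 1## has the real root ##1## and the conjugate pair ##\frac{-1 \pm i\sqrt{3}}{2}##, and pairing the conjugates gives

##z^3 - 1 = (z - 1)(z^2 + z + 1)##

one linear and one quadratic factor, both with real coefficients.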
 
  • #6
Stephen Tashi said:
[post #2 quoted in full]
Thanks once again for the help and the insights, Stephen ...

I must say you've got me thinking that Bresar is a bit slipshod and lacking in precision ... I'm wondering whether to continue reading his book ... may switch to another book ... what do you think?

Peter
 
  • #7
zinq said:
[post #5 quoted in full]
zinq ... thanks for an intriguing and interesting proof ...

That certainly answered my question ...

Thanks again ...

Peter
 
  • #8
fresh_42 said:
[post #4 quoted in full]
Thanks for the help fresh_42 ...

You write:

" ... ... In ##\mathbb{R}[\omega]##, i.e. with polynomials over ##\mathbb{R}##, the only irreducible polynomials are those of degree ##1## or ##2##. (Do you know why?) ... ... "

Well ... thought about your question ... but not sure why this is the case ... can you please help further ...?
You also write:

" ... ... How is a division algebra defined? What would it mean to have two elements ##p,q \neq 0## such that ##p \cdot q = 0##? ... ... "

A division algebra is the same as a field except that multiplication is not necessarily commutative ... ... as such it has no zero divisors (every nonzero element has a multiplicative inverse) and so if ##p \cdot q = 0## then one of ##p, q## must be zero ...

Thanks again for your help ...

Peter
 
  • #9
Hi Peter,

I don't think switching books would help. O.k., I would have expected a clear definition of a division algebra prior to any lemmas, but this is a minor issue, and I don't know whom Bresar wants to address. In any case, my personal opinion is that it would help you a lot if you simply started playing with the concepts and methods. I've always regarded algebra as a kind of play with toy trains: it's a huge and often delightful playground. The first book I read on algebra (van der Waerden) was also written in ordinary language rather than formulas, similar to the pages you inserted. Serge Lang presents the results in his book in what is probably a far more formal way. (But I can only judge this from the fact that he was part of Bourbaki and from what others have told me.) However, whether in ordinary language or in formal language, the point is always to get a feeling for the objects you deal with. Your first question about the Lemma above was about linear (in)dependence. This is something you simply have to practice until using it is like riding a bicycle. Draw pictures, think up your own examples, and fill a lot of scratch paper. I love blackboards, and I once built myself one for this reason; all I needed was some wood and special paint. But this is a matter of taste. Paper will do, too.

The same holds for the way the subject is presented. I like formulas, but my book was in ordinary language - from the pre-Bourbaki era, if you will. So I had to translate the text I read into formulas myself. This, too, is good training, so I recommend you try it as well. I know algebraic thinking differs a lot from the way we learned math at school. But this holds for all of mathematics. So one hurdle to pass is a change of thinking: from applying algorithms to working with concepts. Some people are good at remembering things: they read something and can repeat it. That is probably well suited to subjects like law. In mathematics, it's more a matter of "I don't remember the wording, but I know how to do it." Playing with the material is one way to practice, and the advantage is that playing is a lot more fun than learning by heart.

I remember a test in which a student was asked what a linear mapping is. He answered with the correct definition but couldn't give any examples. I guess that even today he doesn't know why he got only a "C".

So my advice, instead of switching books: write more scratch paper.
 
  • #10
Peter - be aware that in most modern definitions, a division algebra is not only not assumed commutative, but it is also not even assumed *associative*.
 
  • #11
Math Amateur said:
Well ... thought about your question ... but not sure why this is the case ... can you please help further ...?
@zinq did in post #5.
Math Amateur said:
A division algebra is the same as a field except that multiplication is not necessarily commutative ... ... as such it has no zero divisors (every nonzero element has a multiplicative inverse) and so if ##p \cdot q = 0## then one of ##p, q## must be zero ...
Yep.
 
  • #12
Math Amateur said:
I must say you've got me thinking that Bresar is a bit slipshod and lacking in precision ... I'm wondering whether to continue reading his book ... may switch to another book ... what do you think?

I haven't read any books that specialize in finite dimensional division algebras, so I don't know if there is another book that uses more precise language. It's very common to find books, posters on this forum, and myself failing to distinguish between the ring ##\mathbb{R}[\omega]## (which is usually called ##\mathbb{R}[x]##) and the ring of "polynomial functions".
 
  • #13
fresh_42 said:
[post #9 quoted in full]
Thanks for the encouragement and the guidance, fresh_42 ...

I found your words very helpful ...

Peter
 
  • #14
Stephen Tashi said:
[post #12 quoted in full]
Hi Stephen,

Thanks for your post ... ...

I am basically trying to get an understanding of noncommutative algebra ...

The books I have on this topic are as follows:

Noncommutative Algebra by Benson Farb and R. Keith Dennis (Springer-Verlag 1993) [aimed at beginning graduate students]

A First Course in Noncommutative Rings by T.Y. Lam [aimed at students who have done a beginning graduate course in abstract algebra]

Introductory Lectures on Rings and Modules by John A. Beachy [focussed on noncommutative aspects of rings and modules, aimed at advanced undergrads/beginning graduates]

Regarding Matej Bresar's book, Bresar states that "the purpose ... is to give a gentle introduction to noncommutative rings and algebras that requires fewer prerequisites than most other books on the subject. ... ... The necessary background to read this book is a standard knowledge of linear algebra and a basic knowledge about groups, rings and fields. ... ... "

I would be interested to hear of any noncommutative algebra books you have a high opinion of ...

Peter
 
  • #15
Math Amateur said:
I would be interested to hear of any noncommutative algebra books you have a high opinion of ...
This is a vast range! I guess it will be difficult to answer this question without specifying which direction is meant.

Lie algebras alone fill entire books, analytic as well as algebraic ones. Then there are Grassmann algebras, Weyl algebras, Heisenberg algebras, Poincaré algebras, Clifford algebras and so on. Then there are algebras which mainly deal with idempotent elements (##a^2=a##); genetic algebras (although commutative) are one example, and they already come in many types. For every group or groupoid ##G## we can build a formal algebra ##\mathbb{F}[G]## over a field ##\mathbb{F}##, with applications in coding theory or formal language theory.

And last but not least - with some justification probably the most important example - there are the matrix rings ##\mathbb{M}_n(\mathcal{R})##, which are non-commutative and the main subject of any book on linear algebra. (Although the latter mainly deal with ##\mathcal{R}## being a field, often of characteristic zero, or need algebraic closure for some theorems, the basic approach in the general case of an arbitrary ring ##\mathcal{R}## is similar.)
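For instance (a standard illustration), already in ##\mathbb{M}_2(\mathbb{R})## the matrices ##A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}## and ##B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}## satisfy

##AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA##

and ##A^2 = 0## with ##A \neq 0##, so matrix rings are non-commutative and, unlike the division algebras discussed above, full of zero divisors.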

Every distributive binary operation defines a possibly non-commutative structure.

And because I think it is important in this context, although commutative: the field of algebraic geometry alone is worth studying in its own right.
 
