Tensor Algebras - Dummit and Foote, Section 11.5

In summary: The problem is that when you define an abstract tensor algebra, you can define it any way you like (as long as it satisfies certain universal properties). I'm guessing that when you read the definition of the tensor algebra, it was defined in terms of "formal symbols", i.e. elements of the free $R$-module generated by $M$. In that case, the "formal" sum $m_2 \otimes m_3 + m_4 \otimes m_5$ is just a symbol; it hasn't been "reduced" to a single tensor, and in general it simply stays as the sum $m_2 \otimes m_3 + m_4 \otimes m_5$.
  • #1
Math Amateur
I am reading Dummit and Foote: Abstract Algebra (Third Edition) ... and am focused on Section 11.5: Tensor Algebras, Symmetric and Exterior Algebras ...

In particular I am trying to understand Theorem 31 but at present I am very unsure about how to interpret the theorem and need some help in understanding the basic form of the elements involved and the mechanics of computations ... so would appreciate any help however simple ...

Theorem 31 and its proof read as follows:
[Attachment 5555: the statement and proof of Theorem 31]

My (rather simple) questions are as follows:

Question 1

In the above text from D&F we read the following:

" ... ... \(\displaystyle \mathcal{T} (M)\) is an \(\displaystyle R\)-algebra containing \(\displaystyle M\) with multiplication defined by the mapping:

\(\displaystyle ( m_1 \otimes \ ... \ \otimes m_i ) ( m'_1 \otimes \ ... \ \otimes m'_j ) = m_1 \otimes \ ... \ \otimes m_i \otimes m'_1 \otimes \ ... \ \otimes m'_j \)

... ... ... "


... my questions are as follows:

What do the distributive laws look like in this case ... ? Would sums of elements be just formal sums ... or would we be able to add elements in the same sense as in the ring \(\displaystyle \mathbb{Z}\), where the sum \(\displaystyle 2+3\) gives an entirely different element, \(\displaystyle 5\) ... ?
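(To illustrate what I mean, my guess is that the distributive law would give, for example,

\(\displaystyle ( m_1 + m_2 \otimes m_3 ) \ m_4 = m_1 \otimes m_4 + m_2 \otimes m_3 \otimes m_4 \)

... but I am not at all sure this is the right way to read it ... )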

Further, how do we know that with respect to multiplication \(\displaystyle \mathcal{T}^{i} (M) \ \mathcal{T}^{j} (M) \subseteq \mathcal{T}^{i+j} (M)\) ... ... ?

Question 2

In the proof we read the following:

"The map

\(\displaystyle \underbrace{ M \times M \times \ ... \ \times M }_{ i \ factors} \times \underbrace{ M \times M \times \ ... \ \times M }_{ j \ factors} \longrightarrow \mathcal{T}^{i+j} (M) \)

defined by

\(\displaystyle (m_1, \ ... \ , m_i, m'_1, \ ... \ , m'_j) \mapsto m_1 \otimes \ ... \ ... \ \otimes m_i \otimes m'_1 \otimes \ ... \ ... \ \otimes m'_j \)

is \(\displaystyle R\)-multilinear, so induces a bilinear map \(\displaystyle \mathcal{T}^{i} (M) \times \mathcal{T}^{j} (M)\) to \(\displaystyle \mathcal{T}^{i+j} (M)\) ... ...
"My questions are:

... what does the multilinearity of the above map look like ... ?

and

... how do we demonstrate that the above map induces a bilinear map from \(\displaystyle \mathcal{T}^{i} (M) \times \mathcal{T}^{j} (M)\) to \(\displaystyle \mathcal{T}^{i+j} (M)\) ... ... ? How/why is this the case ... ?

Hope someone can help ...

Peter
============================================================

*** EDIT ***

To clarify my basic issue/problem with the Theorem ... it concerns the nature of elements of \(\displaystyle \mathcal{T} (M)\) ...

Regarding this issue ... we have that

\(\displaystyle \mathcal{T} (M) = R \oplus \mathcal{T}^1 (M) \oplus \mathcal{T}^2 (M) \oplus \mathcal{T}^3 (M) \oplus \ ... \ ... \ ... \)

which seems to suggest that an element of \(\displaystyle \mathcal{T} (M)\) is of the form

\(\displaystyle (r, m_1, m_2 \otimes m_3, m_4 \otimes m_5 \otimes m_6, \ ... \ ... \ ... \ ) \)

where only a finite number of terms are different from zero (finite support) ... ...

BUT ... ... ... the definition of multiplication for \(\displaystyle \mathcal{T} (M)\) seems to imply that elements of \(\displaystyle \mathcal{T} (M)\) are of the form

\(\displaystyle m_1 \otimes m_2 \otimes \ ... \ ... \ \otimes m_i \) ... ?

Can someone please clarify ...?

Peter
 
  • #2
It's sort of like polynomials: the tensors of rank $k$ are like $x^k$ terms. When we ADD, they stay separate; when we MULTIPLY (in this case the "multiplication" is tensoring), they "jump ranks".

So, for example:

$[v_1 + (v_2\otimes v_3 + v_4 \otimes v_5)]\otimes [v_6\otimes v_7\otimes v_8]$

$= (v_1 \otimes v_6 \otimes v_7 \otimes v_8) + (v_2\otimes v_3 \otimes v_6 \otimes v_7 \otimes v_8 + v_4\otimes v_5\otimes v_6 \otimes v_7 \otimes v_8)$
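In general (just spelling out what the distributive laws amount to): writing $a = a_0 + a_1 + a_2 + \dots$ and $b = b_0 + b_1 + b_2 + \dots$ with $a_i \in \mathcal{T}^i(M)$, $b_j \in \mathcal{T}^j(M)$ and only finitely many pieces nonzero, the product is

$\displaystyle ab = \sum_{i,j} a_i b_j, \qquad \text{with } a_i b_j \in \mathcal{T}^{i+j}(M),$

so the rank-$k$ piece of $ab$ is $\sum_{i+j=k} a_i b_j$; that is also exactly why $\mathcal{T}^{i}(M)\,\mathcal{T}^{j}(M) \subseteq \mathcal{T}^{i+j}(M)$.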
 
  • #3
Deveno said:
It's sort of like polynomials: the tensors of rank $k$ are like $x^k$ terms. When we ADD, they stay separate; when we MULTIPLY (in this case the "multiplication" is tensoring), they "jump ranks".

So, for example:

$[v_1 + (v_2\otimes v_3 + v_4 \otimes v_5)]\otimes [v_6\otimes v_7\otimes v_8]$

$= (v_1 \otimes v_6 \otimes v_7 \otimes v_8) + (v_2\otimes v_3 \otimes v_6 \otimes v_7 \otimes v_8 + v_4\otimes v_5\otimes v_6 \otimes v_7 \otimes v_8)$
Thanks for the help Deveno ... that clarified things somewhat ...

Are you able to shed light on my issue with the form and nature of elements of \(\displaystyle \mathcal{T} (M)\) ... ?

... my statement regarding this issue is as follows: (repeating previous edit) ..

... ... To clarify my basic issue/problem with the Theorem ... it concerns the nature of elements of \(\displaystyle \mathcal{T} (M)\) ...

Regarding this issue ... we have that

\(\displaystyle \mathcal{T} (M) = R \oplus \mathcal{T}^1 (M) \oplus \mathcal{T}^2 (M) \oplus \mathcal{T}^3 (M) \oplus \ ... \ ... \ ... \)

which seems to suggest that an element of \(\displaystyle \mathcal{T} (M)\) is of the form

\(\displaystyle (r, m_1, m_2 \otimes m_3, m_4 \otimes m_5 \otimes m_6, \ ... \ ... \ ... \ ) \)

where only a finite number of terms are different from zero (finite support) ... ...

BUT ... ... ... the definition of multiplication for \(\displaystyle \mathcal{T} (M)\) seems to imply that elements of \(\displaystyle \mathcal{T} (M)\) are of the form

\(\displaystyle m_1 \otimes m_2 \otimes \ ... \ ... \ \otimes m_i \) ... ... ?

Hope you can clarify ...

Peter
 
  • #4
Peter said:
Thanks for the help Deveno ... that clarified things somewhat ...

Are you able to shed light on my issue with the form and nature of elements of \(\displaystyle \mathcal{T} (M)\) ... ?

... my statement regarding this issue is as follows: (repeating previous edit) ..

... ... To clarify my basic issue/problem with the Theorem ... it concerns the nature of elements of \(\displaystyle \mathcal{T} (M)\) ...

Regarding this issue ... we have that

\(\displaystyle \mathcal{T} (M) = R \oplus \mathcal{T}^1 (M) \oplus \mathcal{T}^2 (M) \oplus \mathcal{T}^3 (M) \oplus \ ... \ ... \ ... \)

which seems to suggest that an element of \(\displaystyle \mathcal{T} (M)\) is of the form

\(\displaystyle (r, m_1, m_2 \otimes m_3, m_4 \otimes m_5 \otimes m_6, \ ... \ ... \ ... \ ) \)

where only a finite number of terms are different from zero (finite support) ... ...

BUT ... ... ... the definition of multiplication for \(\displaystyle \mathcal{T} (M)\) seems to imply that elements of \(\displaystyle \mathcal{T} (M)\) are of the form

\(\displaystyle m_1 \otimes m_2 \otimes \ ... \ ... \ \otimes m_i \) ... ... ?

Hope you can clarify ...

Peter

Well, both are right, and both are wrong.

We work in the direct sum of ALL ranks of tensors, so we can express sums of tensors of differing rank. These are "formal" sums, and as such we could just regard them as (finite) sequences of tensors indexed by rank. The analogy with polynomials is quite apt: we can regard polynomials as sequences of their coefficients.

But we can multiply tensors (using...the tensor product), and this multiplication takes place just as in the second expression. To get a "full" product, we have to "distribute" such a sum in two ways:

Once (within a rank) over the sums of "simple tensors" (tensors of a given rank that involve no formal sums, they just have "one term", like $v_1 \otimes v_2 \otimes v_3$ for a 3-tensor), and once (over all ranks) by collecting together all the sums of tensors of a given rank "in its proper slot" in the direct sum.
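For example (a small worked case, not one taken from D&F):

$(m_1 + m_2 \otimes m_3)(m_4 + m_5 \otimes m_6) = m_1 \otimes m_4 + m_1 \otimes m_5 \otimes m_6 + m_2 \otimes m_3 \otimes m_4 + m_2 \otimes m_3 \otimes m_5 \otimes m_6$

and, collecting by rank, the rank-2 slot of the direct sum holds $m_1 \otimes m_4$, the rank-3 slot holds $m_1 \otimes m_5 \otimes m_6 + m_2 \otimes m_3 \otimes m_4$, and the rank-4 slot holds $m_2 \otimes m_3 \otimes m_5 \otimes m_6$.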

It would be "ugly" to write out a "fully general" element of $\mathcal{T}(M)$ of degree $k$, unless $k$ was quite small, say 2 or 3. The closest analogy I can think of would be polynomials in a (countably) infinite set of variables. For example:

$x^3,x^2z, xyz$ are all homogeneous polynomials of degree 3, and there's really no simplification of a linear combination of them. Similarly, we have a rather large array of homogeneous polynomials of degree 2:

$x_i^2, x_ix_j$ for any $i,j \in \Bbb N$.

So a "fully general" polynomial of say, degree 4, would be possibly tremendously long.
 

FAQ: Tensor Algebras - Dummit and Foote, Section 11.5

What is a Tensor Algebra?

A tensor algebra is a mathematical structure that extends a vector space (or, more generally, a module) to an algebra whose elements are sums of tensors of all ranks. It is a basic tool of multilinear algebra and has many applications in physics, engineering, and computer science.

What does Dummit and Foote's Section 11.5 cover?

Dummit and Foote's Section 11.5 covers the construction and basic properties of the tensor algebra of a module, including its universal property, together with the symmetric and exterior algebras obtained as quotients of it.

How is a Tensor Algebra constructed?

A tensor algebra is constructed by taking a vector space V over a field F (or, more generally, a module over a commutative ring) and forming its tensor powers: the k-th tensor power is the tensor product of k copies of V, with the 0-th power taken to be F itself. The tensor algebra of V is the direct sum of all of these tensor powers, with multiplication given by the tensor product.
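In symbols (with $V$ playing the role of $M$ and $F$ the role of $R$ in the thread above):

$\displaystyle \mathcal{T}(V) \ = \ F \ \oplus \ V \ \oplus \ (V \otimes V) \ \oplus \ (V \otimes V \otimes V) \ \oplus \ \cdots \ = \ \bigoplus_{k \geq 0} \mathcal{T}^k(V).$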

What is the universal property of Tensor Algebras?

The universal property of tensor algebras states that for any vector space V and any linear map f from V into an associative F-algebra A, there exists a unique F-algebra homomorphism from the tensor algebra of V to A that extends f. In other words, the tensor algebra is the most general (the "freest") associative algebra that can be built from V.
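In symbols: given an $F$-linear map $\varphi : V \to A$ into an associative $F$-algebra $A$ with identity, the unique algebra homomorphism $\Phi : \mathcal{T}(V) \to A$ extending $\varphi$ is determined on simple tensors by

$\displaystyle \Phi( v_1 \otimes v_2 \otimes \cdots \otimes v_k ) = \varphi(v_1)\, \varphi(v_2) \cdots \varphi(v_k),$

extended $F$-linearly, with $\Phi(1) = 1$.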

What are some applications of Tensor Algebras?

Tensor Algebras have a wide range of applications in mathematics and other fields. In physics, they are used to describe the properties of physical systems such as elasticity, fluid dynamics, and electromagnetism. In engineering, they are used in structural analysis and control systems. In computer science, they are used in data compression and machine learning. They also have applications in areas such as differential geometry, representation theory, and quantum mechanics.
