How is the Tensor Product Defined and Used in Vector Spaces?

In summary: the tensor product is a mathematical construction that builds a new vector space from two given vector spaces. It works by defining a new bilinear operation "x" and extending the inner products of the original spaces to an inner product on the new space. The construction resembles that of the direct sum, except that the dimensions of the original vector spaces are multiplied rather than added. It is also related to the concept of tensors: the space of type (m,n) tensors is the tensor product of m copies of the original vector space V with n copies of its dual space V*.
  • #1
Eye_in_the_Sky
on the "Tensor Product"

In response to some remarks made in the thread "How do particles become entangled?", as well as a number of private messages I have received, I feel there is some need to post some information on the notion of a "tensor product".

Below, a rather intuitive look at the idea of the "tensor product" is taken. For simplicity, the vector spaces involved are assumed to be finite-dimensional. The infinite-dimensional case can be accommodated with only some minor amendments to the presentation.

(Note: The usual symbol for the tensor product is an "x" with a circle around it, ⊗, but below I will simply use the symbol "x".)

----------------------------------------

Let U and V be finite-dimensional vector spaces over C with bases {u_i} and {v_j}, respectively. For each u_i and v_j, define an object "u_i x v_j", and construe the full collection of these objects to be a basis for a new vector space W. That is,

[tex]W \equiv \Big\{ \sum_{i,j} \alpha_{ij} \, (u_i \times v_j) \;\Big|\; \alpha_{ij} \in \mathbb{C} \Big\} \ ,[/tex]

where, by definition,

if [tex]\sum_{i,j} \alpha_{ij} \, (u_i \times v_j) = 0[/tex] , then [tex]\alpha_{ij} = 0[/tex] for all i,j .

The above then makes W a vector space over C such that

Dim(W) = Dim(U)∙Dim(V) .
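To make this concrete, here is a minimal numerical sketch, assuming NumPy and modelling the abstract objects "u_i x v_j" as Kronecker products of coordinate vectors (this concrete model is just an illustration, not part of the definition):

[code]
# Python/NumPy sketch: realize "u_i x v_j" as np.kron(u_i, v_j) and check
# that the resulting family is a basis of a space of dimension Dim(U)*Dim(V).
import numpy as np

dim_U, dim_V = 2, 3
U_basis = np.eye(dim_U)   # standard basis {u_i} of U = C^2
V_basis = np.eye(dim_V)   # standard basis {v_j} of V = C^3

# One vector "u_i x v_j" per basis pair; each lives in C^(dim_U*dim_V).
W_basis = np.array([np.kron(u, v) for u in U_basis for v in V_basis])

assert np.linalg.matrix_rank(W_basis) == dim_U * dim_V   # linearly independent
print("Dim(W) =", len(W_basis))                          # 6 = 2 * 3
[/code]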

However ... had we chosen a different set of basis vectors for U or V, then the vector space W thereby obtained would be formally distinct from the one obtained above. There would be no way to 'link' the bases for each of the two W's.

Let us now introduce some additional 'structure' on the operation "x", such that all W's obtained by the above construction will be formally identical no matter which bases are chosen for U and V. Specifically, we extend the definition of "x" to be bilinear, thus allowing any vector of U to be placed in the left "slot", and any vector of V to be placed in the right "slot". We do this as follows:

For any u, u' ∈ U , v, v' ∈ V , and α ∈ C , let

(u + u') x v = (u x v) + (u' x v) ,

u x (v + v') = (u x v) + (u x v') ,

α(u x v) = (αu) x v = u x (αv) .

Now all W's are one and the same.
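As a quick sanity check of these three rules, here is the same NumPy model of "x" as np.kron, which is itself bilinear:

[code]
# Verify the defining bilinearity identities on randomly chosen vectors.
import numpy as np

rng = np.random.default_rng(0)
u, up = rng.standard_normal(2), rng.standard_normal(2)   # u, u' in U
v, vp = rng.standard_normal(3), rng.standard_normal(3)   # v, v' in V
a = 2.5                                                  # a scalar alpha

assert np.allclose(np.kron(u + up, v), np.kron(u, v) + np.kron(up, v))
assert np.allclose(np.kron(u, v + vp), np.kron(u, v) + np.kron(u, vp))
assert np.allclose(a * np.kron(u, v), np.kron(a * u, v))
assert np.allclose(a * np.kron(u, v), np.kron(u, a * v))
[/code]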

The next thing we need is an inner product <∙|∙> on W. Let <∙|∙>₁ and <∙|∙>₂ be the inner products on U and V, respectively. Then, for any u x v and u' x v' ∈ W , define

<u x v|u' x v'> ≡ <u|u'>₁ ∙ <v|v'>₂ .

Finally, extend <∙|∙> to the whole of W by "antilinearity" in the first slot and "linearity" in the second slot.

It now follows that <∙|∙> is an inner product on W.

Moreover, if {u_i} and {v_j} are orthonormal bases of U and V respectively, then {u_i x v_j} is an orthonormal basis of W.
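Both claims can be checked numerically in the same NumPy model, using np.vdot (which conjugates its first argument) as the standard inner product:

[code]
# Check that the inner product on W factors over "x", and that Kronecker
# products of orthonormal basis vectors are again orthonormal.
import numpy as np

rng = np.random.default_rng(1)
cplx = lambda n: rng.standard_normal(n) + 1j * rng.standard_normal(n)
u, up, v, vp = cplx(2), cplx(2), cplx(3), cplx(3)

# <u x v | u' x v'>  =  <u|u'>_1 * <v|v'>_2
lhs = np.vdot(np.kron(u, v), np.kron(up, vp))
rhs = np.vdot(u, up) * np.vdot(v, vp)
assert np.allclose(lhs, rhs)

# Orthonormal bases of U and V give an orthonormal basis {u_i x v_j} of W.
W_basis = np.array([np.kron(e, f) for e in np.eye(2) for f in np.eye(3)])
assert np.allclose(W_basis @ W_basis.conj().T, np.eye(6))
[/code]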
 
  • #2
That's rather enlightening, thanks very much. The tensor product appears to have a very similar construction to that of the direct sum.

Is it true that the tensor product [tex]U\otimes V[/tex] has basically the same set of basis vectors as the direct sum [tex]U\oplus V[/tex] except where:

[tex] ((u_1, u_2) | (v_1, v_2))_{\oplus} = (u_1|v_1)_1 + (u_2|v_2)_2 [/tex]​

whereas

[tex] ((u_1, u_2) | (v_1, v_2))_{\otimes} = (u_1|v_1)_{1}\cdot (u_2|v_2)_2[/tex]​

or is there a further difference that I have missed?

Regards,

Kane O'Donnell
 
  • #3
In attempting to approach the "direct sum" in a manner akin to that employed above with regard to the "tensor product" (also called the "direct product"), we would be forced to begin by defining objects like "u_i [+] 0" and "0 [+] v_j", and construe the full set of those objects to be a basis for a new vector space W_Σ.

Specifically,

[tex]W_\Sigma \equiv \Big\{ \sum_i \alpha_i \, (u_i \, [+] \, 0) + \sum_j \beta_j \, (0 \, [+] \, v_j) \;\Big|\; \alpha_i , \beta_j \in \mathbb{C} \Big\} \ ,[/tex]

where, by definition,

if [tex]\sum_i \alpha_i \, (u_i \, [+] \, 0) + \sum_j \beta_j \, (0 \, [+] \, v_j) = 0[/tex] , then [tex]\alpha_i = \beta_j = 0[/tex] for all i,j .

In this way, W_Σ is a vector space over C such that

Dim(W_Σ) = Dim(U) + Dim(V) .


... Clearly, the starting point in this construction differs from that of the "tensor product". The difference is such that for a "direct sum" the dimensions of U and V are added, whereas for a "tensor product" those dimensions are multiplied.

Of course, after the introduction of the appropriate 'structure' on the "[+]" operation, we would then find elements in W_Σ of the form "u_i [+] v_j". However, in contrast to the "tensor product" scenario, these objects would not form a linearly independent set, as the sketch below illustrates.
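The contrast between the two constructions is easy to see numerically. The following sketch, under the same NumPy conventions as above, models "u [+] v" as the concatenation (u, v):

[code]
# Direct sum vs. tensor product: dimensions add vs. multiply, and the
# vectors u_i [+] v_j are dependent while the u_i x v_j are independent.
import numpy as np

dim_U, dim_V = 2, 3
U_basis, V_basis = np.eye(dim_U), np.eye(dim_V)

# Basis {u_i [+] 0} plus {0 [+] v_j} of the direct sum: dimensions ADD.
ds_basis = np.array([np.concatenate([u, np.zeros(dim_V)]) for u in U_basis]
                  + [np.concatenate([np.zeros(dim_U), v]) for v in V_basis])
print(np.linalg.matrix_rank(ds_basis))   # 5 = 2 + 3

# The six vectors u_i [+] v_j are NOT linearly independent ...
sums = np.array([np.concatenate([u, v]) for u in U_basis for v in V_basis])
print(np.linalg.matrix_rank(sums))       # 4, which is less than 6

# ... whereas the six vectors u_i x v_j are: dimensions MULTIPLY.
prods = np.array([np.kron(u, v) for u in U_basis for v in V_basis])
print(np.linalg.matrix_rank(prods))      # 6 = 2 * 3
[/code]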
 
  • #4
To Kane: they are related. In mathematics, the tensor product space is defined by a universal property: it is the space T that makes a certain diagram commute, where the diagram involves the Cartesian product of n vector spaces V_1 × ... × V_n, a target space W, and the tensor space T. (The maps in the diagram are the multilinear maps from the Cartesian product to W, the canonical map from the Cartesian product to T, and the linear maps from T to W.)
And that's pretty much all I know about it... When my professor in classical mechanics started to talk about it and wasn't able to explain it properly, I went to a professor in mathematics (I study both physics and math), and he gave me quite a good definition, something like what I wrote above. What Eye wrote, however, is more intuitive (it is the case n = 2, and therefore bilinear maps).
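For reference, the universal property described above can be written out as follows (for the bilinear case n = 2, matching Eye's construction): for every bilinear map [tex]B : U \times V \to W[/tex] there exists a unique linear map [tex]\tilde{B} : U \otimes V \to W[/tex] such that

[tex]B(u, v) = \tilde{B}(u \otimes v) \quad \text{for all } u \in U,\ v \in V \ ,[/tex]

i.e. every bilinear map out of [tex]U \times V[/tex] factors through the tensor product.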
 
  • #5
That's a nice, clear explanation. Just in case anyone's wondering about the connection between the tensor product of two vector spaces and tensors: given the space V of type (1,0) tensors, the space of type (m,n) tensors is:

[tex]\underbrace{V\otimes V\otimes...\otimes V}_m\otimes\underbrace{V^*\otimes V^*\otimes...\otimes V^*}_n[/tex]
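(A concrete low-rank example: with this convention, the space of type (1,1) tensors is [tex]V \otimes V^*[/tex], which for finite-dimensional V is naturally identified with the space of linear maps [tex]V \to V[/tex].)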
 

Related to How is the Tensor Product Defined and Used in Vector Spaces?

1. What is the tensor product?

The tensor product is a mathematical operation that combines two vector spaces to create a new vector space. It is denoted by the symbol ⊗, and the dimension of the resulting space is the product of the dimensions of the two original spaces.

2. How is the tensor product calculated?

In a chosen basis, the tensor product is calculated by multiplying each coordinate of one vector by every coordinate of the other and arranging the results in a fixed order. For vectors and matrices, this recipe is the Kronecker product, a concrete method for computing tensor products.
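A small illustration of the Kronecker-product recipe, assuming NumPy:

[code]
# np.kron(A, B) replaces each entry a_ij of A with the block a_ij * B,
# so the Kronecker product of a 2x2 with a 2x2 matrix is 4x4.
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(np.kron(A, B))
# [[0 1 0 2]
#  [1 0 2 0]
#  [0 3 0 4]
#  [3 0 4 0]]
[/code]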

3. What is the purpose of exploring the tensor product?

Exploring the tensor product allows us to understand the relationships between different objects and how they interact with each other. It has applications in many areas of science, including physics, engineering, and computer science.

4. What are some applications of the tensor product?

The tensor product is used in quantum mechanics to represent the state of a composite system, in machine learning to express multilinear structure in models and data, and in computer graphics and image processing, where separable operations are built from tensor products of one-dimensional ones. It also appears in signal processing and control systems.
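For instance, in the quantum-mechanics case, the joint state of two independently prepared qubits is the tensor product of the individual state vectors; a minimal NumPy sketch:

[code]
# The two-qubit product state |0> x |1> as a Kronecker product.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

print(np.kron(ket0, ket1))   # |01> = [0, 1, 0, 0]
[/code]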

5. Are there any real-world examples of the tensor product?

Yes, there are many real-world examples of the tensor product. For instance, in physics, the tensor product describes the combined spin state of a system of several particles (the entangled states mentioned at the start of this thread live in just such a tensor product space), and in image processing, two-dimensional separable filters are built as tensor products of one-dimensional filters.
