Computing the components of a tensor

In summary, the conversation discusses how to compute the components of a tensor by inserting basis vectors into its slots and lining up the indices. The solution provided uses this method together with the Einstein summation convention. The question arises whether the proof applies only to simple tensors, and how to tell whether a given tensor is simple. It is clarified that the tensor in question is in fact a sum of tensor products, with the summation sign omitted by convention, and that a basis for the space of tensors can be built from the basis of the underlying vector space.
  • #1
Rasalhague
In the first problem here ( http://elmer.tapir.caltech.edu/ph237/assignments/assignment2.pdf ), we're asked to show, from the duality relation

[tex]\mathbf{e}^{\mu} \cdot \mathbf{e}_{\nu} = \delta^{\mu}_{\nu}[/tex]

and the expansion of a tensor

[tex]\mathbf{T}\left(\underline{\quad},\ \underline{\quad},\ \underline{\quad}\right)[/tex]

in terms of basis vectors:

[tex]\mathbf{T} = T^{\alpha \beta}_{\mu} \mathbf{e}_{\alpha} \otimes \mathbf{e}_{\beta} \otimes \mathbf{e}^{\mu}[/tex]

that the components of a tensor can be computed by inserting basis vectors into its slots and lining up the indices, e.g.

[tex]T^{\alpha \beta}_{\mu} = \mathbf{T}\left(\mathbf{e}^{\alpha},\mathbf{e}^{\beta}, \mathbf{e}_{\mu} \right).[/tex]

The solution ( http://elmer.tapir.caltech.edu/ph237/assignments/solutions/week2/page1.jpg ) does just that.
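
In outline (a sketch of the standard computation, not necessarily word-for-word what the linked solution writes): insert the expansion of T, pull the components out by multilinearity, and apply the duality relation:

[tex]\mathbf{T}\left(\mathbf{e}^{\alpha},\mathbf{e}^{\beta}, \mathbf{e}_{\mu} \right) = T^{\rho \sigma}_{\nu} \left(\mathbf{e}_{\rho} \cdot \mathbf{e}^{\alpha}\right) \left(\mathbf{e}_{\sigma} \cdot \mathbf{e}^{\beta}\right) \left(\mathbf{e}^{\nu} \cdot \mathbf{e}_{\mu}\right) = T^{\rho \sigma}_{\nu} \, \delta^{\alpha}_{\rho} \, \delta^{\beta}_{\sigma} \, \delta^{\nu}_{\mu} = T^{\alpha \beta}_{\mu}.[/tex]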

(Note: in these lectures and the accompanying material, Kip Thorne avoids making an explicit distinction between one-forms and vectors, by associating one-forms with vectors via the metric tensor.)
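
(For concreteness — this is the standard form of that association, not a quote from the notes — the reciprocal basis is built from the metric via

[tex]\mathbf{e}^{\mu} = g^{\mu \nu} \mathbf{e}_{\nu}, \qquad g_{\mu \nu} = \mathbf{e}_{\mu} \cdot \mathbf{e}_{\nu},[/tex]

where [itex]g^{\mu \nu}[/itex] denotes the inverse metric.)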

What puzzles me is that I read (in Schutz: Geometrical Methods of Mathematical Physics) that a tensor isn't always simply a tensor product; it might be a sum of tensor products. So does this proof only apply to simple tensors (those which are tensor products), and if so, what would a general proof look like? Also, is there an easy way to tell whether a given tensor is simple? And is there a way to derive a basis for the vector space a tensor belongs to from the basis of the vector space V in terms of which the tensors are defined?
 
  • #2
The tensor T here is a sum of tensor products. The summation sign is omitted by convention, but there is a sum over alpha, beta and mu. This is called the Einstein summation convention.
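
Written with the sums made explicit, the expansion in #1 reads

[tex]\mathbf{T} = \sum_{\alpha} \sum_{\beta} \sum_{\mu} T^{\alpha \beta}_{\mu} \, \mathbf{e}_{\alpha} \otimes \mathbf{e}_{\beta} \otimes \mathbf{e}^{\mu},[/tex]

i.e. T is a linear combination of the simple tensors [itex]\mathbf{e}_{\alpha} \otimes \mathbf{e}_{\beta} \otimes \mathbf{e}^{\mu}[/itex].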
 
  • #3
Aha, yes, thanks, dx, for once again coming to my rescue! I see it now: T is a sum of all the tensor products of the chosen basis vectors of V and of its dual space V* (or, using this convention whereby vectors are associated with one-forms via the inner product, of the chosen basis of V and the reciprocal basis), each product scalar-multiplied by the appropriate component.
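
For a concrete illustration (a two-dimensional rank-(1,1) example, not taken from the lecture notes):

[tex]\mathbf{T} = T^{1}_{\ 1} \, \mathbf{e}_{1} \otimes \mathbf{e}^{1} + T^{1}_{\ 2} \, \mathbf{e}_{1} \otimes \mathbf{e}^{2} + T^{2}_{\ 1} \, \mathbf{e}_{2} \otimes \mathbf{e}^{1} + T^{2}_{\ 2} \, \mathbf{e}_{2} \otimes \mathbf{e}^{2},[/tex]

so the four products [itex]\mathbf{e}_{\alpha} \otimes \mathbf{e}^{\beta}[/itex] form a basis for the space of such tensors over V; in general, a basis for a tensor space is obtained by taking all tensor products of the basis and reciprocal-basis vectors of V.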
 

FAQ: Computing the components of a tensor

What is a tensor and why is it important in computing?

A tensor is a mathematical object that, once a basis is chosen, can be represented as a multidimensional array of components. This is important in computing because it allows for an efficient and concise representation of data, making tensors useful for tasks such as image and signal processing, machine learning, and computer graphics.

How are the components of a tensor computed?

The components of a tensor are computed by evaluating the tensor on basis vectors (and dual, or reciprocal, basis vectors), one for each slot; numerically this amounts to contracting the tensor's array of components against the basis vectors, which reduces to matrix multiplication and summation over indices. The specific bookkeeping depends on the rank of the tensor (scalar, vector, matrix, higher rank) and on the basis used.
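
As a minimal illustrative sketch (Python with NumPy; the arrays and function names are made up for the example, not part of any particular library's tensor API), the idea of inserting basis vectors into the slots looks like this for a rank-2 tensor:

[code]
import numpy as np

# Illustrative sketch: recover the components of a rank-2 tensor by inserting
# basis vectors into its two slots, T_ij = T(e_i, e_j).

T = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # component matrix T_ij in the chosen basis

def T_map(u, v):
    """Evaluate the bilinear map T(u, v) = T_ij u^i v^j."""
    return np.einsum('ij,i,j->', T, u, v)

basis = np.eye(2)                 # orthonormal basis, so e^i and e_i coincide

# Insert basis vectors into the slots and read off the components:
components = np.array([[T_map(basis[i], basis[j]) for j in range(2)]
                       for i in range(2)])

print(np.allclose(components, T))  # True: T_ij == T(e_i, e_j)
[/code]

The same pattern extends to higher rank: one basis (or reciprocal-basis) vector per slot, one contraction per index.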

What are some common applications of computing the components of a tensor?

Some common applications of computing the components of a tensor include image and speech recognition, natural language processing, and data compression. Tensors are also used in physics, engineering, and other scientific fields to represent physical quantities and equations.

Are there any limitations or challenges in computing the components of a tensor?

One challenge in computing the components of a tensor is the potentially large number of indices, which can make the data difficult to visualize and manipulate. In addition, the number of components grows exponentially with the rank (a rank-n tensor over a d-dimensional space has d^n components), so the computational cost can increase rapidly as the rank and dimension grow.

How can I learn more about computing the components of a tensor?

There are many online resources and textbooks available that provide detailed explanations and examples of computing the components of a tensor. Additionally, taking courses in linear algebra, multivariable calculus, and other related fields can help deepen understanding of tensor operations and applications.
