Rasalhague
In the first problem here (http://elmer.tapir.caltech.edu/ph237/assignments/assignment2.pdf), we're asked to show, from the duality relation
[tex]\mathbf{e}^{\mu} \cdot \mathbf{e}_{\nu} = \delta^{\mu}_{\nu}[/tex]
and the expansion of a tensor
[tex]\mathbf{T}\left(\underline{\quad}, \underline{\quad}, \underline{\quad}\right)[/tex]
in terms of basis vectors:
[tex]\mathbf{T} = T^{\alpha \beta}_{\mu} \mathbf{e}_{\alpha} \otimes \mathbf{e}_{\beta} \otimes \mathbf{e}^{\mu}[/tex]
that the components of a tensor can be computed by inserting basis vectors into its slots and lining up the indices, e.g.
[tex]T^{\alpha \beta}_{\mu} = \mathbf{T}\left(\mathbf{e}^{\alpha},\mathbf{e}^{\beta}, \mathbf{e}_{\mu} \right).[/tex]
The solution (http://elmer.tapir.caltech.edu/ph237/assignments/solutions/week2/page1.jpg) does just that.
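For concreteness, the calculation in the solution, as I read it, is just to expand T in the basis and apply the duality relation slot by slot:

[tex]\mathbf{T}\left(\mathbf{e}^{\alpha},\mathbf{e}^{\beta},\mathbf{e}_{\mu}\right) = T^{\gamma \delta}_{\nu}\,\left(\mathbf{e}_{\gamma} \cdot \mathbf{e}^{\alpha}\right)\left(\mathbf{e}_{\delta} \cdot \mathbf{e}^{\beta}\right)\left(\mathbf{e}^{\nu} \cdot \mathbf{e}_{\mu}\right) = T^{\gamma \delta}_{\nu}\,\delta^{\alpha}_{\gamma}\,\delta^{\beta}_{\delta}\,\delta^{\nu}_{\mu} = T^{\alpha \beta}_{\mu}.[/tex]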
(Note: in these lectures and the accompanying material, Kip Thorne avoids making an explicit distinction between one-forms and vectors, by associating one-forms with vectors via the metric tensor.)
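(Concretely, I take that identification to mean that a vector u is associated with the one-form [tex]\tilde{u}(\cdot) = \mathbf{g}(\mathbf{u}, \cdot)[/tex], so that in components [tex]u_{\alpha} = g_{\alpha \beta} u^{\beta}[/tex], and either kind of object can be fed into a slot.)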
What puzzles me is that I read (in Schutz, Geometrical Methods of Mathematical Physics) that a tensor isn't always a single tensor product; it might be a sum of tensor products. So does this proof only apply to simple tensors (those that are a single tensor product), and if so, what would a general proof look like? Also, is there an easy way to tell whether a given tensor is simple? And is there a way to derive a basis for the vector space a tensor belongs to from a basis of the vector space V in terms of which the tensors are defined?
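(An example of the kind of non-simple tensor I have in mind, if I'm reading Schutz correctly: in two dimensions, [tex]\mathbf{e}_1 \otimes \mathbf{e}_2 + \mathbf{e}_2 \otimes \mathbf{e}_1[/tex] doesn't seem to be expressible as a single product [tex]\mathbf{u} \otimes \mathbf{w}[/tex] of two vectors.)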