What is a Tensor: FAQ Definition/Summary

In summary, a tensor is a linear operation on a tensor product space, defined by its input and output type. It can be represented by a symbol with indices, and its coordinates can be changed by using a change of basis matrix. Tensors are commonly used in physics, particularly in describing displacement, force, stress, angular momentum, and other physical quantities. A tensor field is a different tensor at each point of a manifold, and a pseudotensor behaves like a tensor except for an extra factor of minus-one on a change of basis involving an odd number of reflections.
Definition/Summary

A tensor of type (m,n) on a vector space V is an element of the tensor product space [itex]V\otimes\cdots\text{(m copies)}\cdots\otimes V[/itex] [itex]\otimes V^*\otimes\cdots\text{(n copies)}\cdots\otimes V^*[/itex], [itex]=\ V^{\otimes m}\otimes V^{*\otimes n}[/itex], where [itex]V^*[/itex] is the vector space of linear scalar-valued functions on [itex]V[/itex] (the dual vector space).

More simply, it is a linear operation whose input is an element of the tensor product space [itex]V^{\otimes n}[/itex] and whose output is an element of [itex]V^{\otimes m}[/itex] (or, equivalently, whose input is an element of [itex]V^{*\otimes m}[/itex] and whose output is an element of [itex]V^{*\otimes n}[/itex]).

In coordinates, it is represented by a symbol such as [itex]a^{ij\cdots}_{\ \ \ \ pq\cdots}[/itex] with m upper (contravariant) indices and n lower (covariant) indices.

A scalar is a tensor of type (0,0). A vector (or contravariant vector) is a tensor of type (1,0).

A linear scalar-valued function (or linear functional, or covariant vector) is a tensor of type (0,1).

A linear operation capable of being represented by a square matrix (for example, assigning the angular momentum vector as output when a rigid body has a particular angular velocity vector as input) is a tensor of type (1,1).

The principal combination of tensors is contraction (see below). In coordinates, contraction is represented by summation over all values of a common index, one upper and one lower (in the Einstein summation notation, the summation symbol [itex]\Sigma[/itex] is not written, and a repeated index is assumed to be summed over).

Equations

Scalar: [itex]a[/itex]

Ordinary (contravariant) vector: [itex]a^i[/itex]

Covariant vector: [itex]a_i[/itex]

Mixed tensor of type (m,n): [itex]a^{pq\cdots\text{ (m indices)}}_{\ \ \ ij\cdots\text{ (n indices)}}[/itex]

A metric tensor [itex]g_{ij}[/itex] raises or lowers indices:

[itex]g_{ij}a^j\ =\ a_i\ \ \ g^{ij}a_j\ =\ a^i[/itex]

[itex]g^{ik}g^{jl}a_{jlpqr}\ =\ a^{ij}_{\ \ pqr}[/itex]

[itex]g_{ip}g_{jq}g^{kr}a^{pq}_{\ \ rst}\ =\ a^{\ \ k}_{ij\ \ st}[/itex]
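As a minimal numerical sketch (assuming NumPy; the metric is taken to be the Minkowski metric with signature (+,-,-,-), and the components are made-up values), raising and lowering are just contractions with [itex]g_{ij}[/itex] or [itex]g^{ij}[/itex]:

```python
import numpy as np

# assumed metric: Minkowski, signature (+, -, -, -); this g is its own inverse
g = np.diag([1.0, -1.0, -1.0, -1.0])
g_inv = np.linalg.inv(g)

a_upper = np.array([2.0, 1.0, -1.0, 3.0])   # contravariant components a^i (made up)

# a_i = g_ij a^j : lower the index
a_lower = np.einsum('ij,j->i', g, a_upper)

# a^i = g^ij a_j : raise it back, recovering the original components
a_back = np.einsum('ij,j->i', g_inv, a_lower)
```

Lowering flips the sign of the spatial components, matching the [itex](t,x,y,z)\text{ and }(t,-x,-y,-z)[/itex] example in the extended explanation below.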

Basis representation:

[itex]A\ =\ a^ie_i\ \ \ A\ =\ a^{ij}e_i\otimes e_j\ \ \ A\ =\ a^i_{\ j}e_i\otimes e^j\ \ \ \cdots[/itex]

Change of basis:

if [itex]e_i^{\ '}\ =\ T_i^{\ j}e_j[/itex], then the components transform as

[itex]a^{ijk\ \ \ \,'}_{\ \ \ pq}\ =\ T^i_{\ s}\,T^j_{\ t}\,T^k_{\ u}\,T_p^{\ v}\,T_q^{\ w}\,a^{stu}_{\ \ \ vw}[/itex]

where [itex]T^i_{\ j}[/itex] (upper index first) denotes the inverse of the matrix [itex]T_i^{\ j}[/itex]: each upper index transforms with the inverse, each lower index with the matrix itself.

For a tensor field:

[tex]a^{ijk\ \ \ \,'}_{\ \ \ pq}\ =\ \frac{\partial x^{'i}}{\partial x^s}\frac{\partial x^{'j}}{\partial x^t}\frac{\partial x^{'k}}{\partial x^u}\frac{\partial x^v}{\partial x^{'p}}\frac{\partial x^w}{\partial x^{'q}}\,a^{stu}_{\ \ \ vw}[/tex]
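A quick numerical check of this (a sketch assuming NumPy; the matrix and components are arbitrary made-up values): for a type (1,1) tensor, whose components form a square matrix, the upper index transforms with the inverse of the change-of-basis matrix and the lower index with the matrix itself, so the contraction [itex]a^i_{\ i}[/itex] (the trace) must come out the same in either basis:

```python
import numpy as np

rng = np.random.default_rng(0)

a = rng.standard_normal((3, 3))   # components a^i_j in the old basis (made up)
T = rng.standard_normal((3, 3))   # change-of-basis matrix: e'_i = T[i, j] e_j

# new components, written in matrix form: the upper index transforms with
# the inverse of T, the lower index with T itself
a_new = np.linalg.inv(T).T @ a @ T.T

# the contraction a^i_i is a scalar, so it must be basis-independent
print(np.trace(a), np.trace(a_new))
```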

Extended explanation

Order, and raising or lowering of indices:

The order of a type (m,n) tensor is m+n. In other words: it is the number of indices.

A type (m,n) tensor may be converted to a type (p,q) tensor, where m+n = p+q, by "raising or lowering" indices.

They are essentially the same tensor, but on different tensor product spaces.

In the simplest cases, their coordinates are exactly the same, except that some are multiplied by minus-one: for example, [itex](t,x,y,z)\text{ and }(t,-x,-y,-z)[/itex].

Lowering is achieved by contraction (see next section) with the metric tensor [itex]g_{ij}[/itex], while raising is achieved by contraction with its raised version, [itex]g^{ij}[/itex].

Contraction and the Einstein summation convention:

One or more tensors may be contracted, so that the final total order is less than the original total order.

In coordinate terms, this is achieved by using the same index (a "dummy index") in both tensors, one upper (contravariant), and one lower (covariant), and summing over all possible values of that index: for example, [itex]A_3\ =\ \Sigma_{i=1,2,3,4}F^{\ i}_3B_i\ \ \ C_4\ =\ \Sigma_{i=1,2,3,4}\Sigma_{j=1,2,3,4}D^{ij}_{\ \ 4ij}[/itex]

In the Einstein summation convention (used by everyone), the [itex]\Sigma[/itex] is not written, and it is understood that any repeated index (one upper, one lower) is to be summed over:

[itex]A_3\ =\ F^{\ i}_3B_i\ \ \ C_4\ =\ D^{ij}_{\ \ 4ij}[/itex]

For example, the Ricci curvature tensor (order 2) is a contraction of the Riemann curvature tensor (order 4): [itex]R_{\mu\nu}\ =\ R^{\lambda}_{\ \mu\lambda\nu}[/itex], and the scalar curvature (order 0) is the contraction (or trace) of that: [itex]R\ =\ R^{\mu}_{\ \mu}[/itex]

For example, the inner product [itex]\mathbf{a}\cdot\mathbf{b}[/itex] of two (contravariant) vectors may be written as the contraction of [itex]\mathbf{a}[/itex] with the covariant vector [itex]\mathbf{b}^T[/itex] (in coordinates, [itex]a^ib_i[/itex]), or vice versa.
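NumPy's einsum follows exactly this convention: any index repeated in the subscript string is summed over. A sketch with made-up components (the arrays here are arbitrary illustrations, not physical data):

```python
import numpy as np

F = np.arange(16.0).reshape(4, 4)    # components F_p{}^i (made up)
B = np.array([1.0, 0.0, 2.0, -1.0])  # covariant components B_i (made up)

# A_p = F_p{}^i B_i : sum over the repeated index i
A = np.einsum('pi,i->p', F, B)

# a full contraction C = D^{ij}_{ij} of an order-4 array (type (2,2) tensor)
D = np.arange(81.0).reshape(3, 3, 3, 3)
C = np.einsum('ijij->', D)
```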


Common tensors in physics:

Displacement vector = strain tensor "times" position vector.

Internal force vector = stress tensor "times" position vector.

Stress tensor = elasticity tensor "times" strain tensor.

Angular momentum vector = moment of inertia tensor "times" angular velocity vector.

Polarisation vector = susceptibility tensor "times" electric field vector ("linear optics").

Lorentz force (per charge) 4-vector = electromagnetic tensor "times" velocity 4-vector.

Acceleration-of-displacement 4-vector = Riemann curvature tensor "times" displacement 4-vector and two copies of velocity 4-vector.
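The angular-momentum entry above can be sketched numerically (assuming NumPy; the inertia-tensor values are made up): the moment-of-inertia tensor maps the angular velocity vector to the angular momentum vector, which in general is not parallel to it:

```python
import numpy as np

# moment-of-inertia tensor of some rigid body (made-up values);
# a symmetric matrix representing a type (1,1) tensor
I = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

omega = np.array([0.0, 1.0, 2.0])   # angular velocity (made up)

# L^i = I^i_j w^j : angular momentum, generally NOT parallel to omega
L = I @ omega
```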

"Non-linear" tensors:

The equation [itex]a^i\ =\ T^i_{\ jk}b^jc^k[/itex] is linear in both b and c (meaning that, with c held fixed, a depends linearly on b, and vice versa).

But the equation [itex]a^i\ =\ T^i_{\ jk}b^jb^k[/itex] is not linear in b.

For example, in "non-linear optics", there are "higher-order" susceptibility tensors, each with the electric field vector as the only input (exactly as in the linear case), but with that input repeated one or more times:

[itex]\frac{1}{\varepsilon_0}P^i\ =\ \chi^i_{\ j}E^j\ +\ \chi^{(2)i}_{\ \ \ jk}E^jE^k\ +\ \chi^{(3)i}_{\ \ \ jkl}E^jE^kE^l\ +\ \cdots[/itex]
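A sketch of this truncated series (assuming NumPy; the susceptibility values are made-up toy numbers, not real material data):

```python
import numpy as np

n = 3
chi1 = 0.5 * np.eye(n)      # linear susceptibility chi^i_j (made up)
chi2 = np.zeros((n, n, n))  # second-order susceptibility chi^(2)i_jk (made up)
chi2[2, 0, 0] = 0.1         # e.g. an E_x^2 term contributing to P_z

E = np.array([2.0, 0.0, 0.0])

# P^i / eps_0 = chi^i_j E^j + chi^(2)i_jk E^j E^k  (series truncated here)
P_over_eps0 = (np.einsum('ij,j->i', chi1, E)
               + np.einsum('ijk,j,k->i', chi2, E, E))
```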

Covariant and contravariant:

Basis vectors are type (0,1) tensors: [itex]\{e_1,\cdots ,e_n\}[/itex].

Position coordinates are type (1,0) tensors: [itex](x^1,\cdots ,x^n)[/itex].

On a change of coordinate basis, vectors (or indices) transforming like basis vectors are covariant, while those transforming like position coordinates are contravariant.

Warning: the word "covariant" is sometimes used with a different meaning, to describe an equation whose form is independent of the coordinate system.

Tensor field:

The term "tensor" is often (indeed, usually) used instead of the term "tensor field".

A tensor, as defined above, is an element of a single tensor product space built from one vector space and its dual.

A tensor field assigns a tensor over the tangent space at each point of a manifold (a second-countable Hausdorff space that is locally homeomorphic to a Euclidean space). In other words: a different tensor at each point.

Pseudotensors:

A pseudotensor of type (0,n) behaves exactly like a tensor, except that on a change of basis involving an odd number of reflections, there is an extra factor of minus-one:

if [itex]e_i^{\ '}\ =\ T_i^{\ j}e_j\text{ , for }i,j\,=\,1\cdots n[/itex], then

[itex]a_{i_1\cdots\,i_n}^{\ \ \ \ \ \ \ \,'}\ =\ \mathrm{sign}(\det T)\,T_{i_1}^{\ \ j_1}\cdots T_{i_n}^{\ \ j_n}\,a_{j_1\cdots\,j_n}[/itex]
For a pseudotensor field, the sign is the sign of the determinant of the Jacobian.

The "unit" pseudotensor of type (0,n) is the Levi-Civita pseudotensor, [itex]\varepsilon[/itex], each of whose components [itex]\varepsilon_{\pi_1\cdots \pi_n}[/itex] is defined to be 1 if its indices are an even permutation of [itex]\{1,\cdots ,n\}[/itex], -1 if they are an odd permutation, and 0 if any index is repeated.

Every pseudotensor of type (0,n) is an ordinary tensor multiplied by [itex]\varepsilon[/itex].

A pseudotensor is not a tensor. A pseudotensor and a tensor cannot be added. A "product" of two pseudotensors is a tensor. A "product" of a pseudotensor and a tensor is a pseudotensor. A pseudoscalar is a pseudotensor of type (0,0).
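The classic 3-dimensional example is the cross product, whose output is a pseudovector: [itex]c_i\ =\ \varepsilon_{ijk}a^jb^k[/itex]. A sketch assuming NumPy, building [itex]\varepsilon[/itex] directly from its definition:

```python
import numpy as np
from itertools import permutations

# build the 3-D Levi-Civita symbol: +1 for even permutations of (0, 1, 2),
# -1 for odd ones, 0 whenever an index repeats (entries never assigned stay 0)
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    # parity by counting inversions
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    eps[p] = -1.0 if inv % 2 else 1.0

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# c_i = eps_ijk a^j b^k : the cross product (here x cross y)
c = np.einsum('ijk,j,k->i', eps, a, b)
```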

[itex]\varepsilon[/itex] may be contracted with two or more ordinary tensors to produce something more interesting: for example, with two copies of the electromagnetic tensor, it produces an invariant pseudoscalar: [itex]\varepsilon_{\alpha\beta\gamma\delta} F^{\alpha\beta} F^{\gamma\delta}\ =\ (2/c)\mathbf{B}\cdot\mathbf{E}[/itex], and with the differential operator and an ordinary tensor, it produces a pseudotensor "curl": [itex]\varepsilon_{\alpha\beta\gamma\delta} \partial_{\alpha}T^{\beta\gamma}[/itex]

* This entry is from our old Library feature. If you know who wrote it, please let us know so we can attribute a writer. Thanks!
 

FAQ

What is a tensor?

A tensor is a mathematical object that describes the relationship between different vectors and scalars in a multi-dimensional space. It is commonly used in physics and engineering to represent physical quantities such as force, velocity, and stress.

What are the different types of tensors?

There are several types of tensors, including scalars (zeroth-order tensors), vectors (first-order tensors), matrices (second-order tensors), and higher-order tensors. Tensors can also be classified by their symmetries, such as symmetric tensors, antisymmetric tensors, and mixed tensors.
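The symmetric/antisymmetric classification mentioned above comes from a decomposition that every second-order tensor admits; a sketch assuming NumPy, with made-up components:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))   # a generic second-order tensor (made up)

# every second-order tensor splits uniquely into a symmetric part
# and an antisymmetric part
sym = 0.5 * (a + a.T)
antisym = 0.5 * (a - a.T)

assert np.allclose(sym, sym.T)            # symmetric: s_ij = s_ji
assert np.allclose(antisym, -antisym.T)   # antisymmetric: t_ij = -t_ji
assert np.allclose(sym + antisym, a)      # the parts recombine exactly
```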

How is a tensor different from a matrix?

While both tensors and matrices are multi-dimensional arrays of numbers, they represent different mathematical concepts. Matrices are used to perform linear transformations, while tensors are used to describe the relationships between different quantities in a multi-dimensional space. Additionally, tensors can have any number of dimensions, while matrices are limited to two dimensions.

What is the importance of tensors in machine learning?

Tensors are essential in machine learning because they can efficiently store and manipulate large amounts of data. In deep learning, tensors are used to represent inputs, outputs, and weights of a neural network, allowing for efficient computation and optimization of the model.

How are tensors used in physics?

Tensors play a crucial role in physics, particularly in the fields of general relativity and electromagnetism. In general relativity, tensors are used to describe the curvature of spacetime, while in electromagnetism, tensors are used to describe the electromagnetic field and its interactions with charged particles.
