Intro to Basic Tensor Notation: I'm Confused!

  • Thread starter TromboneNerd
  • Start date
  • Tags
    Tensor
In summary, a tensor is a multilinear, scalar-valued function of some number of vectors and dual vectors, and in a given basis it is represented by an array of components.
  • #1
TromboneNerd
I am a little confused about the notation of tensors. Could someone please explain the indices? For example, what is the difference between [itex]T_{mn}[/itex] and [itex]T_{nm}[/itex]? And what about [itex]T^{m}{}_{n}[/itex] and [itex]T^{n}{}_{m}[/itex]? It would really be helpful if someone could give me an "intro to basic tensor notation" answer. I have no resources other than this, so please help!
 
  • #3
A type (p,q) tensor--also called a valence (p,q) tensor--with respect to a given vector space is a scalar-valued function of some number, q, of vectors of that space and some number, p, of elements of the dual space of that vector space, linear in each of its arguments. The dual space is defined as another vector space over the same field (in the abstract algebraic sense) whose elements are scalar-valued linear functions of one vector of the primary vector space.

Elements of the primary vector space are called vectors, contravariant vectors, tangent vectors or kets, while elements of the dual space are called dual vectors, covectors, covariant vectors, cotangent vectors, linear functionals, linear forms, one-forms, 1-forms or bras. Some of these names are more general, others restricted to certain contexts or applications. The tensors used in relativity are defined with respect to the tangent spaces of a spacetime manifold, whose elements are called tangent vectors. Elements of their dual spaces may be called cotangent vectors.

When a coordinate system is specified, a tensor can be represented by an ordered set of (scalar) components with respect to that coordinate system. For example, a type (0,2) tensor, i.e. a tensor with two vector arguments, is

[tex]\textbf{T}\left ( ; \_ \_ \_ , \_ \_ \_ \right ) = \sum_{\mu = 0, \nu = 0}^{3} T_{\mu \nu} \, \tilde{\boldsymbol{\varepsilon}}^\mu \otimes \tilde{\boldsymbol{\varepsilon}}^\nu[/tex]

where [itex]\{ \tilde{\boldsymbol{\varepsilon}}^\mu \}[/itex] denote members of what’s called a dual basis (with respect to a given basis, [itex]\{ \vec{\textbf{e}}_\nu \}[/itex], for the primary vector space). The dual basis is a maximal ordered set of linearly independent dual vectors such that

[tex] \tilde{\boldsymbol{\varepsilon}}^\mu (\vec{\textbf{e}}_\nu) = \delta^\mu_\nu[/tex]

where [itex] \delta^\mu_\nu[/itex] is the Kronecker delta, equal to one when [itex]\mu = \nu[/itex], otherwise equal to zero. The Kronecker delta can be represented in matrix form as the identity matrix:

[tex] \begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}[/tex]
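To make the dual-basis relation concrete, here is a small numerical sketch (using NumPy, with an arbitrarily chosen basis; the specific matrix is made up for illustration). Given a basis as the columns of a matrix, the components of the dual basis covectors are the rows of its inverse, and applying each covector to each basis vector recovers the Kronecker delta:

```python
import numpy as np

# Columns of E are the (arbitrarily chosen) basis vectors e_0 .. e_3
# of a 4-dimensional vector space, expressed in fixed coordinates.
E = np.array([[1., 1., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 2., 1.],
              [0., 0., 0., 1.]])

# Rows of E_inv are the components of the dual basis covectors
# eps^0 .. eps^3: by construction, eps^mu(e_nu) = delta^mu_nu.
E_inv = np.linalg.inv(E)

# Applying each covector (row) to each basis vector (column) gives
# the Kronecker delta, i.e. the identity matrix.
delta = E_inv @ E
print(np.allclose(delta, np.eye(4)))  # True
```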

By the Einstein summation convention (which just means that an index is summed over if it appears twice in the same term, once up, once down), the above expression is abbreviated to

[tex]T_{\mu \nu} \; \tilde{\boldsymbol{\varepsilon}}^\mu \otimes \tilde{\boldsymbol{\varepsilon}}^\nu[/tex]

which is usually abbreviated further to [itex]T_{\mu \nu}[/itex], with the basis tensors assumed. So the tensor is represented by its components. Similarly, [itex]T^{\mu\nu} = \textbf{T}\left ( \_ \_ \_ , \_ \_ \_ ; \right ) [/itex] is a function of two dual vectors, and [itex] T^\mu\,_\nu = \textbf{T}\left ( \_ \_ \_ ; \_ \_ \_ \right )[/itex] is a function of one dual vector and one vector. More fully,

[tex]\textbf{T} = T^\mu\,_\nu \; \vec{\textbf{e}}_\mu \otimes \tilde{\boldsymbol{\varepsilon}}^\nu[/tex]

The action of this tensor on a dual vector and a vector is denoted

[tex] T^\mu\,_\nu \; \omega_\mu \; v^\nu = T^\mu\,_\nu \; \omega_\rho \; v^\sigma \; \vec{\textbf{e}}_\mu (\tilde{\boldsymbol{\varepsilon}}^\rho) \; \tilde{\boldsymbol{\varepsilon}}^\nu(\vec{\textbf{e}}_\sigma) = \sum_{\mu=0,\nu=0}^{3} T^\mu\,_\nu \; \omega_\mu \; v^\nu[/tex]

the result being a single number. The notation [itex]T_{\mu \nu}[/itex] can either stand for a particular array of components, in this case a matrix:

[tex]\begin{bmatrix}
T_{00} & T_{01} & T_{02} & T_{03}\\
T_{10} & T_{11} & T_{12} & T_{13}\\
T_{20} & T_{21} & T_{22} & T_{23}\\
T_{30} & T_{31} & T_{32} & T_{33}
\end{bmatrix}[/tex]

where each element of the matrix is a number. Or, since tensor equations that describe physical laws don't depend on which coordinate system you choose to represent them with, [itex]T_{\mu \nu}[/itex] can also denote the tensor itself in terms of whatever set of components might happen to be used. This use of the notation is called abstract index notation or slot-naming index notation. Blandford and Thorne liken the tensor to a machine with slots, like a toaster, as many slots as there are indices, so that the indices show how many slots there are--in other words, how many arguments the tensor has, the number of vectors and dual vectors it takes.
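In components, feeding a dual vector and a vector into the slots of a type (1,1) tensor is just the double sum written above. A numerical sketch (NumPy, with randomly generated components, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))   # components T^mu_nu (row = upper index)
omega = rng.standard_normal(4)    # dual vector components omega_mu
v = rng.standard_normal(4)        # vector components v^nu

# Einstein summation: mu is summed with mu, nu with nu.
scalar = np.einsum('mn,m,n->', T, omega, v)

# The same contraction written as explicit loops over both indices:
check = sum(T[m, n] * omega[m] * v[n] for m in range(4) for n in range(4))
print(np.isclose(scalar, check))  # True
```

The result is a single number, as the text says: all indices are paired and summed away, leaving no free slots.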

The number of subscript (down) indices tells you how many vector arguments the tensor has. The number of superscript (up) indices tells you how many dual vector arguments it has. A dual vector is, by definition, a scalar-valued function of one vector and so has one down index. A vector can be viewed as a scalar-valued function of one dual vector, and so has one up index. The action of a dual vector on a vector (which can also be viewed as the action of a vector on a dual vector) is denoted thus:

[tex] \tilde{\boldsymbol{\omega}}(\vec{\textbf{v}})= \vec{\textbf{v}}(\tilde{\boldsymbol{\omega}}) = \omega_\mu \, \tilde{\boldsymbol{\varepsilon}}^\mu(v^\nu \vec{\textbf{e}}_\nu ) = \omega_\mu \, v^\nu \, \tilde{\boldsymbol{\varepsilon}}^\mu( \vec{\textbf{e}}_\nu ) = \omega_\mu \; v^\mu = v^\mu \; \omega_\mu[/tex]
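Numerically, the pairing [itex]\omega_\mu v^\mu[/itex] is just an ordinary dot product of the two component arrays. A sketch with made-up components:

```python
import numpy as np

omega = np.array([1., -2., 0.5, 3.])  # covector components omega_mu
v = np.array([2., 1., 4., -1.])       # vector components v^mu

# omega_mu v^mu: one index up, one down, summed over.
pairing = np.einsum('m,m->', omega, v)

# Identical to the plain dot product of the component arrays.
print(np.isclose(pairing, omega @ v))  # True
```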

The notation [itex]T_{\mu\nu}=T_{\nu\mu}[/itex] says that the tensor T is symmetric. In matrix language, [itex]T=T^T[/itex], T is equal to its own transpose, its indices are interchangeable.
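The symmetry condition [itex]T_{\mu\nu}=T_{\nu\mu}[/itex] can be checked directly on a component matrix. A sketch (the matrix entries are arbitrary):

```python
import numpy as np

# An arbitrary, non-symmetric component array A_{mu nu}.
A = np.array([[0., 1., 2., 3.],
              [9., 0., 4., 5.],
              [9., 9., 0., 6.],
              [9., 9., 9., 0.]])

# Symmetrizing gives T_{mu nu} = (A_{mu nu} + A_{nu mu}) / 2,
# which satisfies T = T^T (interchangeable indices).
T = (A + A.T) / 2

print(np.allclose(T, T.T))  # True: T equals its own transpose
print(np.allclose(A, A.T))  # False: A itself is not symmetric
```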

Index rules:

1. Every unique (non-repeated) index (called a free index) that appears on one side of an equation appears on the other side at the same height. These indices aren’t summed over.

2. No more than two identical indices (called dummy indices, because it doesn’t matter what letter you use, or summation indices) appear in any term, and when a pair of identical indices appear in one term, they must be on different levels, one up and one down.

3. When a tensor with two indices at different heights is represented as a matrix, the upper index is the row number, the lower index the column number. When they’re the same height, the left index is the row number, and the right index stands for the column.

In relativity, some authors follow a convention whereby Greek indices run from 0 to 3 (spacetime dimensions), while Roman indices run from 1 to 3 (purely spatial dimensions).

Some more resources:

http://www.pma.caltech.edu/Courses/ph136/yr2008/

http://preposterousuniverse.com/grnotes/

http://repository.tamu.edu/handle/1969.1/2502

http://repository.tamu.edu/handle/1969.1/3609

Also the other external links at the foot of the Wikipedia article Tensor, although as a beginner I found the article itself quite opaque. I've only dipped into it on Google Books, but I’ve found some helpful explanations in Bernard Schutz's Geometrical Methods of Mathematical Physics.
 

FAQ: Intro to Basic Tensor Notation: I'm Confused!

What is tensor notation?

Tensor notation is a mathematical notation used to represent tensors, which are objects that describe linear relations between vectors, scalars, and other tensors. It is commonly used in physics, engineering, and other fields to describe physical quantities and their relationships.

Why is tensor notation confusing?

Tensor notation can be confusing for several reasons. Firstly, it uses subscripts and superscripts to represent indices, which can be overwhelming for those who are not familiar with mathematical notation. Additionally, it can be difficult to visualize the meaning of tensor equations in terms of physical quantities, making it challenging to apply in real-world situations.

How is tensor notation used in science?

Tensor notation is widely used in science, particularly in physics and engineering. It is used to describe physical quantities such as force, stress, and electromagnetic fields. It is also used in higher-level mathematics to describe more complex relationships between vectors and tensors.

What are some common tensor notation symbols?

Some common symbols used in tensor notation include Greek letters such as μ, ν, and λ, which serve as indices. Other symbols include the tensor product (⊗) and the Kronecker delta, and under the Einstein summation convention a repeated index, once up and once down, signals a sum over that index.

How can I better understand tensor notation?

To better understand tensor notation, it is helpful to have a strong foundation in linear algebra and multivariable calculus. Additionally, practicing with sample problems and visualizing the meaning of tensor equations can also improve understanding. Seeking out additional resources, such as textbooks or online tutorials, can also be beneficial.
