Divergence of a rank-2 tensor in Einstein summation

In summary, the divergence of a rank-2 tensor can be written in Einstein summation notation as \partial_i M_{ij}, where j is the free index labelling the three components of the resulting vector and the repeated index i is summed over the three coordinate directions.
  • #1
Niles

Homework Statement


Hi

When I take the divergence of a rank-2 tensor (matrix), I have to apply the divergence operator to each column. In other words, I get
[tex]
\nabla \cdot M = (\partial_x M_{xx} + \partial_y M_{yx} + \partial_z M_{zx}\,\, ,\,\, \partial_x M_{xy} + \partial_y M_{yy} + \partial_z M_{zy}\,\,,\,\, \partial_x M_{xz} + \partial_y M_{yz} + \partial_z M_{zz})
[/tex]
How would I write this vector in Einstein summation? Is it correct that it would be
[tex]
\partial_i M_{ij}
[/tex]
 
  • #2
Niles said:
How would I write this vector in Einstein summation? Is it correct that it would be
[tex]
\partial_i M_{ij}
[/tex]

IIRC, yes.

You have three entries, each distinguished by a different value of the free index j, since j appears only once, and each entry is a sum over i, since i appears twice.
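
Written out for each value of the free index j, this reproduces the three components in the original post; for example, for j = x:
[tex]
(\nabla \cdot M)_x = \partial_i M_{ix} = \partial_x M_{xx} + \partial_y M_{yx} + \partial_z M_{zx}
[/tex]
and likewise for j = y and j = z.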
 

FAQ: Divergence of a rank-2 tensor in Einstein summation

What is a rank-2 tensor?

A rank-2 tensor is a mathematical object that represents a linear transformation between two vector spaces. It consists of a matrix of numbers that describes how the components of a vector in one space change when transformed into another space.
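
For example, a rank-2 tensor M with components M_{ij} sends a vector v to a new vector w, component by component:

[tex]
w_i = M_{ij} v_j = M_{ix} v_x + M_{iy} v_y + M_{iz} v_z
[/tex]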

What is the Einstein summation convention?

The Einstein summation convention is a notation used in tensor calculus and other branches of mathematics to simplify expressions involving sums over indices: whenever an index appears twice in a single term, it is implicitly summed over all of its possible values. (In general tensor notation the repeated index appears once as a subscript and once as a superscript; for Cartesian tensors, as in this thread, a twice-repeated subscript is summed.)
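
For example, the dot product of two vectors is written with a single repeated index:

[tex]
a_i b_i = a_x b_x + a_y b_y + a_z b_z
[/tex]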

How is the divergence of a rank-2 tensor defined?

The divergence of a rank-2 tensor is a vector whose j-th component is the sum of the partial derivatives of the components T_{ij} with respect to the i-th coordinate. It represents the rate at which the tensor's components change along each direction and can be thought of as a measure of the tensor field's "spreading" or "sinking".

What is the physical significance of the divergence of a rank-2 tensor?

The physical significance of the divergence of a rank-2 tensor depends on the specific context in which it is used. In fluid dynamics, for example, the divergence of the stress tensor represents the force per unit volume acting on a fluid element. In general relativity, it plays a crucial role in Einstein's field equations, describing the curvature of spacetime.

How is the divergence of a rank-2 tensor calculated using Einstein summation notation?

The divergence of a rank-2 tensor is calculated in Einstein summation notation by summing the partial derivatives of its components with respect to the matching coordinate axis. For the j-th component of the resulting vector this reads

[tex]
(\nabla \cdot T)_j = \partial_i T_{ij} = \frac{\partial T_{ij}}{\partial x_i}
[/tex]

where the repeated index i is summed over the coordinate directions and the free index j labels the component of the resulting vector.
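
As a rough numerical illustration (not part of the original thread), the sum \partial_i T_{ij} can be approximated on a grid with finite differences. The array name T, the grid size N, the spacing h, and the test field below are hypothetical choices for this sketch only:

[code]
import numpy as np

# Hypothetical setup: a rank-2 tensor field T_ij sampled on an N^3 grid
# with uniform spacing h.  Array layout: T[i, j] is the (i, j) component,
# each of shape (N, N, N).
N, h = 32, 0.1
x = np.arange(N) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Simple test field: T_ij = delta_ij * (x^2 + y^2 + z^2)
T = np.zeros((3, 3, N, N, N))
for i in range(3):
    T[i, i] = X**2 + Y**2 + Z**2

# (div T)_j = partial_i T_ij, approximated with central differences.
divT = np.zeros((3, N, N, N))
for j in range(3):
    for i in range(3):
        divT[j] += np.gradient(T[i, j], h, axis=i)

# For this test field the exact result is (2x, 2y, 2z); check one interior point.
print(divT[:, N // 2, N // 2, N // 2])
[/code]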
