How Does Tensor Contraction Relate to Summation in Minkowski Space?

In summary: to show that the definition of a tensor as a sum of outer products implies A^{im}{}_{klm} \equiv \sum_{m=0}^{n-1} A^{im}{}_{klm}, expand the vectors and one-forms in a basis and apply the summation convention to the repeated index. The argument generalizes to tensors of higher rank and to linear combinations.
  • #1
quasar_4

Homework Statement



Show that the definition [of tensor contraction]

[tex] A^{ae}{}_{cde} = u^a \nu^e \sigma_c \tau_d \omega_e + w^a x^e \zeta_c \eta_d \xi_e + ... [/tex]

implies

[tex] A^{im}{}_{klm} \equiv \sum_{m=0}^{n-1} A^{im}{}_{klm} [/tex]

first by looking at tensors of the form [tex] u^a \sigma_b [/tex], then at tensors of the form [tex] u^a...\nu^b \sigma_c... \tau_d [/tex], and finally at linear combinations of these.


Homework Equations



We're working in Minkowski space with n basis vectors, so indices run from 0 to n-1.
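For reference, the basis conventions I'm assuming (standard definitions, stated here for completeness): a vector expands in the basis \{e_a\} and a one-form in the dual basis \{\omega^b\},

[tex] u = u^a e_a, \qquad \sigma = \sigma_b \omega^b, \qquad \omega^b(e_a) = \delta^b{}_a [/tex]

and a repeated index, once up and once down, runs over 0, ..., n-1.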

The Attempt at a Solution


I guess I've been thinking I'll just expand into a basis. Does that make sense here? Here's what happens (if I'm getting the notation right):

[tex] A^a_b = u^a \sigma_b = a^a_i e^i a_b^j e_j [/tex]

so if a = b = m, then

[tex] A^m_m = a^m_i e^i a^j_m e_j = a^m_i a^j_m \delta^i_j [/tex]

and then (this is the part I'm not sure about...)

[tex] A^m_m \equiv \sum_m a^m_i a^j_m \delta^i_j = \sum_m A^m_m [/tex]

If that is correct I have no problems generalizing to higher rank tensors, just wasn't sure if the sum was introduced correctly. I guess I'm thinking that this is only non-zero for i = j, and then we can sum over m to get all the components... is that right?
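Writing the step I'm unsure about in full, if I have it right (the \delta^i_j collapses the sum over j first, and then the repeated m is summed):

[tex] a^m{}_i\, a^j{}_m\, \delta^i{}_j = a^m{}_i\, a^i{}_m, \qquad A^m{}_m \equiv \sum_{m=0}^{n-1} a^m{}_i\, a^i{}_m [/tex]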
 
  • #2



Your approach is on the right track, but a few corrections are needed. First, your expansion of A^a{}_b mixes up the component symbols: you wrote a^a_i and a_b^j for the components of two different objects. If the vector is expanded as [tex] u = u^i e_i [/tex] and the one-form as [tex] \sigma = \sigma_j \omega^j [/tex] (with \omega^j the dual basis, \omega^j(e_i) = \delta^j{}_i), then the components of the outer product are simply

[tex] A^a{}_b = u^a \sigma_b [/tex]

Note that the vector and the one-form are expanded in different bases (e_i versus the dual \omega^j), so their expansion indices must be kept distinct, and the components of \sigma should be written \sigma_j, not a^j_b.

Next, setting a = b = m gives

[tex] A^m{}_m = u^m \sigma_m [/tex]

and since m now appears once up and once down, the summation convention says this already denotes

[tex] A^m{}_m \equiv \sum_{m=0}^{n-1} u^m \sigma_m [/tex]

The sum over m is built into the repeated index, so you don't need to introduce it separately. This is true for all tensors, not just those of the form u^a \sigma_b.
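If it helps, here is a quick numerical sanity check of the rank-2 case: a minimal Python/numpy sketch (my own illustration, not part of the exercise; the dimension n = 4 and the random components are arbitrary choices):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # number of basis vectors

u = rng.normal(size=n)       # components u^a of a vector
sigma = rng.normal(size=n)   # components sigma_b of a one-form

A = np.outer(u, sigma)       # A^a_b = u^a sigma_b

# Contraction A^m_m via the trace versus the explicit sum over m.
contraction = np.trace(A)
explicit = sum(u[m] * sigma[m] for m in range(n))

assert np.isclose(contraction, explicit)
[/code]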

To generalize to higher-rank tensors, use the same approach. Relabelling the indices in the given definition, a tensor of the form u^a...\nu^b \sigma_c... \tau_d gives

[tex] A^{im}{}_{klm} = u^i \nu^m \sigma_k \tau_l \omega_m + w^i x^m \zeta_k \eta_l \xi_m + ... [/tex]

In each term the repeated index m sits on a single vector/one-form pair (\nu^m \omega_m, then x^m \xi_m, and so on), so the contraction reduces term by term to the rank-2 calculation above. Because contraction acts term by term, the result extends immediately to linear combinations of such outer products.
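And the same check for the rank-5 case, again a numpy sketch of my own (np.einsum treats a repeated index label exactly like the summation convention; it does not track upper versus lower placement, only the pattern of repeated labels):

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Components of one outer-product term u^i nu^m sigma_k tau_l omega_m.
u, nu, sigma, tau, omega = (rng.normal(size=n) for _ in range(5))

# Build the rank-5 array A^{im}_{klm} (distinct labels m and M keep
# the two slots separate until we contract).
A = np.einsum('i,m,k,l,M->imklM', u, nu, sigma, tau, omega)

# Contract the second index with the last: a repeated label means "sum".
contracted = np.einsum('imklm->ikl', A)

# The same contraction as an explicit sum over m = 0 .. n-1.
explicit = sum(A[:, m, :, :, m] for m in range(n))

assert np.allclose(contracted, explicit)
[/code]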

Overall, your approach is correct; just be careful with the notation and keep track of which indices are expansion indices and which are component indices.
 

FAQ: How Does Tensor Contraction Relate to Summation in Minkowski Space?

What is a tensor contraction?

A tensor contraction is a mathematical operation that sets an upper index equal to a lower index and sums over it, reducing the tensor's rank by two. It is used to simplify higher-order tensors, making them more manageable for calculation and analysis.

Why is tensor contraction important?

Tensor contraction is important because it allows us to manipulate and analyze complex tensors in a more efficient and concise manner. It is commonly used in physics and engineering to model and solve problems involving multidimensional data.

How is tensor contraction performed?

Tensor contraction is performed by pairing one upper index with one lower index and summing over all values of that index; when the indices belong to two different tensors, the paired components are multiplied first and then summed. The process can be repeated to reduce higher-order tensors to lower-order tensors.
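For instance, a minimal numpy sketch (the shapes and index labels here are arbitrary choices for illustration):

[code]
import numpy as np

B = np.arange(8.0).reshape(2, 2, 2)   # a rank-3 tensor B^a_{bc}
C = np.arange(4.0).reshape(2, 2)      # a rank-2 tensor C^b_d

# D^a_{cd} = sum_b B^a_{bc} C^b_d : multiply paired components,
# then sum over the shared index b.
D = np.einsum('abc,bd->acd', B, C)

print(D.shape)   # (2, 2, 2): total rank 3 + 2 = 5, lowered by 2
[/code]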

What is the difference between tensor contraction and tensor product?

The main difference is that the tensor (outer) product multiplies every component of one tensor by every component of the other, producing a higher-order tensor, while tensor contraction sums over a repeated upper-lower index pair, producing a lower-order tensor. The two are often combined: an outer product followed by a contraction.
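In index notation, using the symbols from the thread above, the contrast is immediate: the product keeps both free indices,

[tex] (u \otimes \sigma)^a{}_b = u^a \sigma_b [/tex]

while contracting them leaves a scalar,

[tex] u^m \sigma_m = \sum_{m=0}^{n-1} u^m \sigma_m [/tex]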

Can tensor contraction be applied to tensors of different orders?

Yes, tensor contraction can combine tensors of different orders. Each contraction pairs one upper index with one lower index and lowers the total rank by two. For example, the outer product of a rank-3 tensor and a rank-2 tensor is a rank-5 tensor; contracting one index pair leaves a rank-3 tensor, and contracting two pairs leaves a rank-1 tensor.
