Index algebra questions / order of indices

In summary: abstract index notation is a coordinate-free way of writing tensor equations. The indices label the slots of a tensor rather than its components in a basis, so an expression like ##g_{ab}## denotes the tensor itself, and contractions are indicated by repeated indices; in a coordinate basis the equations look exactly the same with the abstract indices replaced by concrete ones. The thread also confirms that for a general (non-symmetric) rank 2 tensor the order of an upper and a lower index matters, i.e. ##T^{\mu}{}_{\nu} \neq T_{\nu}{}^{\mu}## in general.
  • #1
binbagsss
Hi,

I've somehow gone the past year without paying attention to the order of the indices when one is upper and one is lower, i.e. that in general ##g^{\mu}{}_{\nu} \neq g_{\nu}{}^{\mu}##.

I have a couple of questions:

1)
##g^{u}{}_{v}\, x^{v}=x^{u}## [1]
##g_{v}{}^{u}\, x^{v} = x^{u}## [2]

I believe that both of these are mathematically correct to write, since there is a dummy index being summed over in both cases. However, ##x^{u}## in [1] ##\neq## ##x^{u}## in [2], because ##g^{\mu}{}_{\nu} \neq g_{\nu}{}^{\mu}## in general. Is this correct? (I.e. I am just confirming that the paired indices do not need to be next to each other to be summed over, as they are in [1] - this is probably a stupid question, but the fact that I haven't paid attention to the order of an upper and lower index for so long makes me question it.)

2) Given the matrix ##g^{v}{}_{u}##, am I correct in thinking that we can obtain ##g_{v}{}^{u}## from it using the metric, but not, solely using the metric, ##g^{u}{}_{v}##, because on top of raising and lowering the indices the order needs to be interchanged?

Many thanks.
 
Last edited:
  • #2
So there are a number of issues that need to be addressed here. To start with, it is not clear whether your g denotes a general rank 2 tensor or the metric. If it is a general tensor, the two expressions are indeed different - they are different whenever the tensor with both indices up/down is not symmetric.

Second, if by g you mean the metric tensor, you would essentially never use one index up and one down as this is then always the Kronecker delta by definition.

Third, a tensor is not a matrix. A rank 2 tensor may be represented by a matrix, but that is a matter of bookkeeping. The same goes for transformation coefficients.
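For concreteness, here is a minimal NumPy sketch of the first point. The non-symmetric components ##T_{\mu\nu}##, the Minkowski metric and the vector below are arbitrary made-up examples, just to show that the two contractions disagree:

[CODE=python]
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # example metric g_{mu nu} (Minkowski)
g_inv = np.linalg.inv(g)                 # inverse metric g^{mu nu}

T_dd = np.array([[0., 1., 0., 0.],       # a deliberately NON-symmetric T_{mu nu}
                 [2., 0., 0., 0.],
                 [0., 0., 0., 3.],
                 [0., 0., 4., 0.]])

x_up = np.array([1.0, 2.0, 3.0, 4.0])    # components x^{nu}

T_ud = np.einsum('ma,an->mn', g_inv, T_dd)   # T^{mu}{}_{nu} = g^{mu a} T_{a nu}
T_du = np.einsum('na,am->nm', T_dd, g_inv)   # T_{nu}{}^{mu} = T_{nu a} g^{a mu}

y1 = np.einsum('mn,n->m', T_ud, x_up)    # T^{mu}{}_{nu} x^{nu}
y2 = np.einsum('nm,n->m', T_du, x_up)    # T_{nu}{}^{mu} x^{nu}

print(np.allclose(y1, y2))               # False: the two contractions differ ...
print(np.allclose(T_dd, T_dd.T))         # False: ... precisely because T_{mu nu} is not symmetric
[/CODE]

If you replace T_dd by a symmetric matrix, both checks come out True instead.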
 
  • #3
Strictly speaking an expression like ##g_{\mu \nu}## is not a tensor but tensor components with respect to a basis.
 
  • #4
vanhees71 said:
Strictly speaking an expression like ##g_{\mu \nu}## is not a tensor but tensor components with respect to a basis.

In the abstract index notation, that is a tensor, with no reference to any basis.
 
  • #5
I don't know what the abstract index notation is, but a tensor is a tensor and has no indices (at least no natural ones): it's a multilinear mapping from ##V^j \times (V^{*})^k## to numbers (real or complex, depending on whether you have a real or complex vector space). Tensor components are always with respect to a basis and its dual:
$${V_{\mu_1,\ldots,\mu_j}}^{\nu_1,\ldots,\nu_k}=V(b_{\mu_1},\ldots, b_{\mu_j};b^{\nu_1},\ldots,b^{\nu_k}),$$
where ##b_{\mu}## is a basis of the vector space ##V## and ##b^{\mu}## the corresponding dual basis of its dual space ##V^{*}##.
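As a small, completely arbitrary numerical illustration of this definition (my own sketch, for a (0,2) tensor on ##\mathbb{R}^2## with a made-up bilinear map and basis):

[CODE=python]
import numpy as np

def T(u, v):
    """An abstract bilinear map R^2 x R^2 -> R (no basis singled out)."""
    return 2.0 * u[0] * v[0] - u[0] * v[1] + 3.0 * u[1] * v[0]

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # columns are a basis b_mu: b_0 = (1,0), b_1 = (1,2)
B_dual = np.linalg.inv(B)         # rows are the dual basis covectors b^mu

# components T_{mu nu} = T(b_mu, b_nu)
T_comp = np.array([[T(B[:, m], B[:, n]) for n in range(2)] for m in range(2)])

# the component formula reproduces the basis-free value T(X, Y)
X, Y = np.array([3.0, -1.0]), np.array([0.5, 2.0])
X_comp, Y_comp = B_dual @ X, B_dual @ Y            # X^mu = b^mu(X), Y^nu = b^nu(Y)
print(np.isclose(T(X, Y), np.einsum('mn,m,n->', T_comp, X_comp, Y_comp)))  # True
[/CODE]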
 
  • #6
vanhees71 said:
I don't know what the abstract index notation is, but a tensor is a tensor and has no indices (at least no natural ones): it's a multilinear mapping from ##V^j \times (V^{*})^k## to numbers (real or complex, depending on whether you have a real or complex vector space). Tensor components are always with respect to a basis and its dual:
$${V_{\mu_1,\ldots,\mu_j}}^{\nu_1,\ldots,\nu_k}=V(b_{\mu_1},\ldots, b_{\mu_j};b^{\nu_1},\ldots,b^{\nu_k}),$$
where ##b_{\mu}## is a basis of the vector space ##V## and ##b^{\mu}## the corresponding dual basis of its dual space ##V^{*}##.
In abstract index notation you are denoting the types of linear mappings involved by indices. For example, ##g_{ab}## would denote a multilinear map from ##V \times V## to scalars. Contractions are denoted by repeating indices, just as they would be if you were using ordinary index notation. Many find this convenient since you get a concise, coordinate-free way of writing your equations, and the expressions in a coordinate basis become exactly the same with the abstract indices replaced by actual indices.
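As a concrete illustration of that last point (a minimal example of my own, with ##g## a metric and ##X##, ##Y## vectors), the abstract-index statement and its component counterpart look identical:
$$g_{ab}\,X^{a} Y^{b} = g(X,Y)\,, \qquad g_{\mu\nu}\,X^{\mu} Y^{\nu} = \sum_{\mu,\nu} g_{\mu\nu} X^{\mu} Y^{\nu}\,,$$
where the first expression is a basis-free statement about the tensors themselves and the second is the numerically identical formula in terms of components in a chosen basis (this is why e.g. Wald uses Latin letters for abstract indices and Greek letters for component indices).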
 
  • #7
Hm, for me that sounds confusing ;-).
 
  • #8
vanhees71 said:
Hm, for me that sounds confusing ;-).
Are you sure you do not mean ... (*drumroll*) ... abstract?
 
  • #9
I'm just ignorant. Don't take it seriously :-).
 
  • #10
Yeah - don't get tensor anything.

More seriously - what's the advantage to abstract index notation? Regular index notation basically works by suppressing the basis vectors where sense is not affected, right? So abstract index notation takes it one step further and reasons that if we can just fudge the basis vectors away and everything still works then there ought to be a formalism for it? Or am I way off?
 
  • #11
Last edited:
  • #12
"Hydra taming"!

Thanks - I'll give Penrose a proper read. At first glance it looks as though I'm on roughly the right track, though.
 
  • #13
Orodruin said:
So there are a number of issues that need to be addressed here. To start with, it is not clear whether your g denotes a general rank 2 tensor or the metric. If it is a general tensor, the two expressions are indeed different - they are different whenever the tensor with both indices up/down is not symmetric.

Second, if by g you mean the metric tensor, you would essentially never use one index up and one down as this is then always the Kronecker delta by definition.

Third, a tensor is not a matrix. A rank 2 tensor may be represented by a matrix, but that is a matter of bookkeeping. The same goes for transformation coefficients.

Sorry, ##g_{ab}## was not supposed to be a metric - bad choice of symbol by me.

So back to 1), just to confirm: if I have

##\lambda^{u}{}_{v}\, x^{v} = \psi^{u}## [*],

where matching the free indices tells us that we sum over ##v## (##\psi^{u}## now being a vector different from ##x^{u}##),

and

##\lambda_{v}{}^{u}\, x^{v} = \psi^{u}## [**],

then ##\psi^{u}## in [*] and ##\psi^{u}## in [**] are in general not the same, since ##\lambda## is in general not symmetric?

2) Given a rank 2 tensor ##M^{a}{}_{b}##, using the metric one can obtain ##M_{ab}## and ##M_{a}{}^{b}## from it, but not ##M^{b}{}_{a}## - is this correct? Just to confirm my understanding, thanks.

3) Last expression, to check my understanding again, with ##g_{ab}## the metric here:

##g_{uv}\,g^{vb}=\delta_{u}{}^{b} \neq \delta^{b}{}_{u} = g^{vb}\,g_{uv}##?

Thanks in advance.
 
  • #14
binbagsss said:
2) Given a rank 2 tensor ##M^{a}{}_{b}##, using the metric one can obtain ##M_{ab}## and ##M_{a}{}^{b}## from it, but not ##M^{b}{}_{a}## - is this correct? Just to confirm my understanding, thanks.

You can obtain ##M^b_{\phantom b a}## by using the inverse metric, which is obtainable from knowing the metric and finding its inverse.

3) Last expression, to check my understanding again, with ##g_{ab}## the metric here:

##g_{uv}\,g^{vb}=\delta_{u}{}^{b} \neq \delta^{b}{}_{u} = g^{vb}\,g_{uv}##?

The metric is a symmetric tensor. It holds that ##g_{ab} g^{bc} = g^{bc} g_{ba} = \delta_a{}^{c}##. Note that the order of the indices in the Kronecker delta is irrelevant. There is no way of confusing the indices.
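A quick numerical check of this (with an arbitrary symmetric, invertible matrix of my own standing in for ##g_{\mu\nu}##):

[CODE=python]
import numpy as np

g = np.array([[-1.0, 0.2, 0.0, 0.0],    # an arbitrary symmetric, invertible "metric" g_{uv}
              [ 0.2, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.3],
              [ 0.0, 0.0, 0.3, 1.0]])
g_inv = np.linalg.inv(g)                # inverse metric g^{vb}

# g_{uv} g^{vb} and g^{vb} g_{uv} are one and the same contraction, and both give delta
lhs = np.einsum('uv,vb->ub', g, g_inv)
rhs = np.einsum('vb,uv->ub', g_inv, g)
print(np.allclose(lhs, np.eye(4)), np.allclose(rhs, np.eye(4)))   # True True
[/CODE]

The order in which the two factors are written does not matter; the sum over ##v## is the same either way.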
 
  • #15
Orodruin said:
You can obtain ##M^b_{\phantom b a}## by using the inverse metric, which is obtainable from knowing the metric and finding its inverse.
Isn't ##M^a {}_b## the same thing as ##M^b{}_a##, just with the indices labelled differently? That would cause chaos and meaninglessness if done carelessly as part of an expression, but is fine as written on its own, or if done carefully.
 
  • #16
vanhees71 said:
I don't know what the abstract index notation is

Wald's GR text presents it fairly early on and uses it throughout.
 
  • #17
Ibix said:
Isn't ##M^a {}_b## the same thing as ##M^b{}_a##, just with the indices labelled differently? That would cause chaos and meaninglessness if done carelessly as part of an expression, but is fine as written on its own, or if done carefully.
Right. I guess I did not read carefully enough.
 
  • #18
vanhees71 said:
I don't know what the abstract index notation is, but a tensor is a tensor and has no indices (at least no natural ones): it's a multilinear mapping from ##V^j \times (V^{*})^k## to numbers (real or complex, depending on whether you have a real or complex vector space). Tensor components are always with respect to a basis and its dual:
$${V_{\mu_1,\ldots,\mu_j}}^{\nu_1,\ldots,\nu_k}=V(b_{\mu_1},\ldots, b_{\mu_j};b^{\nu_1},\ldots,b^{\nu_k}),$$
where ##b_{\mu}## is a basis of the vector space ##V## and ##b^{\mu}## the corresponding dual basis of its dual space ##V^{*}##.

The abstract index notation, which I personally dislike, uses an expression such as ##V^\mu## to mean a 4-vector, rather than a component of a 4-vector. The nice thing about that is that it's clear what type of object ##V## is, which in the alternative notation isn't at all clear unless you spell it out in terms of a basis and write ##V## as ##V^\alpha e_\alpha##. That's cumbersome to write, and it's also overkill, in the sense that (usually) nobody cares about the basis.
 
  • #19
Is there any sort of convention whereby, before any manipulation - raising or lowering indices etc. - the upper index is written first? (As a quick example, I'm just looking at the way the Lorentz transformation is written in a couple of textbooks.) Thanks.
 
  • #20
Yes, I don't see any merit in this abstract index notation. To the contrary, it's confusing. At least Wald uses Latin indices for it and Greek ones for the usual components. That may help to distinguish them, but why should I use abstract tensors and still have the cumbersome indices of the Ricci notation, which, however, is very useful for practical calculations? So my usual notation is that a tensor is denoted by ##\boldsymbol{T}## and ##T_{\mu \nu\ldots}## are its (covariant) components with respect to a basis ##\boldsymbol{b}^{\mu}## of the dual space. Then the relation is (Einstein summation convention implied)
$$\boldsymbol{T}=T_{\mu \nu \ldots} \boldsymbol{b}^{\mu} \otimes \boldsymbol{b}^{\nu} \otimes \cdots.$$
The disadvantage of this notation is, of course, that you don't know from just looking at the symbol ##\boldsymbol{T}## which rank the tensor has.
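For what it's worth, here is a small numerical sketch of that expansion (arbitrary made-up components and bases of my own; it just checks that the reconstructed ##\boldsymbol{T}## does not depend on the basis even though the components do):

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))            # defines an abstract bilinear map T(u, v) = u^T A v
T = lambda u, v: u @ A @ v

B1 = np.eye(3)                         # basis 1: standard basis (columns are b_mu)
B2 = np.array([[1., 1., 0.],
               [0., 1., 1.],
               [0., 0., 1.]])          # basis 2: a different basis

def components(B):
    # T_{mu nu} = T(b_mu, b_nu)
    return np.array([[T(B[:, m], B[:, n]) for n in range(3)] for m in range(3)])

def reconstruct(T_comp, B, u, v):
    # (T_{mu nu} b^mu (x) b^nu)(u, v) = T_{mu nu} b^mu(u) b^nu(v)
    Bd = np.linalg.inv(B)              # rows are the dual basis b^mu
    return np.einsum('mn,m,n->', T_comp, Bd @ u, Bd @ v)

u, v = rng.normal(size=3), rng.normal(size=3)
print(np.allclose(components(B1), components(B2)))            # False: the components differ
print(np.isclose(reconstruct(components(B1), B1, u, v),
                 reconstruct(components(B2), B2, u, v)))       # True: same tensor
[/CODE]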
 
  • #21
vanhees71 said:
Yes, I don't see any merit in this abstract index notation. To the contrary, it's confusing. At least Wald uses Latin indices for it and Greek ones for the usual components. That may help to distinguish them, but why should I use abstract tensors and still have the cumbersome indices of the Ricci notation, which, however, is very useful for practical calculations? So my usual notation is that a tensor is denoted by ##\boldsymbol{T}## and ##T_{\mu \nu\ldots}## are its (covariant) components with respect to a basis ##\boldsymbol{b}^{\mu}## of the dual space. Then the relation is (Einstein summation convention implied)
$$\boldsymbol{T}=T_{\mu \nu \ldots} \boldsymbol{b}^{\mu} \otimes \boldsymbol{b}^{\nu} \otimes \cdots.$$
The disadvantage of this notation is, of course, that you don't know from just looking at the symbol ##\boldsymbol{T}## which rank the tensor has.

I generally prefer this notation as well. The real mathematics happens in the text around the equations. There is nothing wrong with saying "Let T be a tensor of type (p,q)", rather than expecting the notation itself to encode all of that information.

However, the abstract index notation becomes useful if one wants to describe tensors of high valences with lots of complicated contractions between different slots. In that case, writing something like ##T^{abc}{}_{de}{}^{fg}{}_h{}^i \, S_b{}^{dh}{}_{fig}{}^e## is much more concise than attempting to define T and S in words and then describe all of the contractions.

However, one can still levy a complaint against the abstract index notation, which is that it concerns itself with being useful in the most general case, but honestly, who needs so many tensor slots on a regular basis? If one uses Cartan's method of moving frames, then one never has to use objects with more than 2 indices and/or tensor slots in all of differential geometry; in particular the curvature data are a torsion 2-form ##T^a##, a curvature 2-form ##R^a{}_b##, and a Ricci tensor defined by the easily-understandable contraction

$$\mathrm{Ric} \equiv e^b \otimes (\iota_{e_a} R^a{}_b).$$
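Spelling out that contraction in components (under the common convention ##R^a{}_b = \tfrac{1}{2} R^a{}_{bcd}\, e^c \wedge e^d##, which I am assuming here, and using the antisymmetry of ##R^a{}_{bcd}## in its last two indices):
$$\iota_{e_a} R^a{}_b = \tfrac{1}{2} R^a{}_{bcd}\left(\delta^{c}_{a}\, e^d - \delta^{d}_{a}\, e^c\right) = R^a{}_{bad}\, e^d \quad\Longrightarrow\quad \mathrm{Ric} = R^a{}_{bad}\; e^b \otimes e^d\,,$$
which is just the familiar Ricci contraction of the Riemann tensor.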
Anyway, my point is two-fold: Firstly, even the Riemann tensor, which people commonly complain about having "4 indices", can in fact be re-packaged into something that is much easier to understand in differential form language. Secondly, one simply does not encounter tensors of arbitrarily high rank and arbitrary symmetry properties (one does have differential p-forms, but there is no need to write out their indices!).

So yes, I think the abstract index notation is best for complicated contractions of high-rank tensors. However, I think that situation mostly doesn't occur (with possibly a few exceptions depending on your field of study).

Edited to add: In general, one should remember that the purpose of mathematical notation is to communicate, not to encode, and one should choose notation which best achieves that purpose. I have occasionally written formulas down in both notations in order to aid in understanding for readers.
 
Last edited:
  • #22
vanhees71 said:
The disadvantage of this notation is, of course, that you don't know from just looking at the symbol ##\boldsymbol{T}## which rank the tensor has.

There is a second disadvantage, in that you can't say which index of one tensor is contracted with which index of another tensor without introducing a basis, even though contraction is a basis-independent operation. With the abstract index notation, the expression ##A^{abc} B_{bde}## makes it clear which index is contracted, without introducing an irrelevant basis. In a certain sense, the abstract indices are like specifying a "signature" for a program: what are its inputs and what are its outputs. Contraction is then program composition: this input of this program comes from that output of that other program.
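The bookkeeping of "which slot contracts with which" even maps directly onto einsum strings (a sketch of my own, with arbitrary random component arrays; of course einsum works on components, so a basis has crept back in, but the slot bookkeeping is the same):

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4, 4))          # components of A^{abc}
B = rng.normal(size=(4, 4, 4))          # components of B_{bde}

# A^{abc} B_{bde}: the repeated label b says exactly which slots are contracted,
# leaving the four free indices a, c, d, e
C = np.einsum('abc,bde->acde', A, B)
print(C.shape)                          # (4, 4, 4, 4)
[/CODE]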
 
  • #23
binbagsss said:
Is there any sort of convention whereby, before any manipulation - raising or lowering indices etc. - the upper index is written first? (As a quick example, I'm just looking at the way the Lorentz transformation is written in a couple of textbooks.) Thanks.

If there is a convention, I don't think it's strictly observed. It's just a matter of making the meanings of the various indices clear.
 
