Understanding the Structure and Transformation of Tensors in Spacetime

In summary: the metric on spacetime is a rank (0,2) tensor, points in spacetime are represented as events, and spacetime carries causal structure in the form of light cones.
  • #71
Incidentally, just what sort of beast is [itex]e_i[/itex] supposed to be anyways? Is it just supposed to be a vector-valued one-form, and thus a rank (1,1) tensor? Is it merely an indexed collection of vectors, and that contracting with that index has a radically different significance than with tensors? Or is it something else entirely?
 
  • #72
Hurkyl said:
Incidentally, just what sort of beast is [itex]e_i[/itex] supposed to be anyways?

I have been using [itex]\left\{ e_{1} , \dots, e_{n} \right\}[/itex] as a basis for an n-dimensional vector space [itex]V[/itex], so the lower index [itex]i[/itex] on [itex]e_{i}[/itex] just specifies which element of the basis, which vector in [itex]V[/itex]. Consequently, [itex]e_{i}[/itex] is a vector, a (1,0) tensor, an element of [itex]V**[/itex], a linear mapping from [itex]V*[/itex] to [itex]\mathbb{R}[/itex], etc.

Any [itex]v[/itex] in [itex]V[/itex] can be expanded as [itex]v = v^{i} e_{i}[/itex]. The upper index on [itex]v^i[/itex] specifies which component, i.e., which real number.

These are the conventions that I have been using. The abstract index approach treats indices differently.

Regards,
George
 
  • #73
I guess I didn't explain my question well enough:

Each of the individual components of [itex]\delta^i_j[/itex] are real numbers, but taken as a whole, they define a rank (1,1) tensor.

When taken as a whole, are the [itex]e_i[/itex] supposed to define a rank (1,1) tensor as well?

(Looking again, maybe you did answer my question, saying that as a whole, it's simply supposed to be an indexed collection of vectors, and not form a tensor at all -- but I still feel compelled to restate myself to make sure you're giving the answer I think I'm getting!)
 
  • #74
Well, it looks like I'm outvoted. I may have some more comments or questions after I've studied some of the critical responses in more detail.

At the top of the list: if raising an index isn't creating a map from a vector to a co-vector, how should the operation be described? (Maybe it's a non-linear map?).

Meanwhile, this has been a very educational (if rather long) thread.
 
  • #75
Hurkyl said:
When taken as a whole, are the [itex]e_i[/itex] supposed to define a rank (1,1) tensor as well?

Ah, I knew there was more to your question than what I saw. I'm not sure, and I have no references here with me. I'd like to look into this later today or tomorrow.

Your question has made me think more about the Kronecker delta. Let [itex]\left\{e_i\right\}[/itex] be a basis for [itex]V[/itex] and [itex]\left\{\omega^i\right\}[/itex] be the associated dual basis of [itex]V*[/itex]. Then the vector-valued one-form [itex]\omega^i \otimes e_i[/itex] (sum over i) has components [itex]\delta^{i}_{j}[/itex]. Letting this act on [itex]v[/itex] in [itex]V[/itex] gives

[tex]\omega^{i} \left(v^{j}e_{j} \right) e_i = v^i e_i = v[/tex]
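
Here's a minimal numerical sketch of that identity (Python with numpy; the particular basis below is an arbitrary choice for illustration). The dual basis covectors show up as the rows of the inverse of the matrix whose columns are the [itex]e_i[/itex], and summing [itex]\omega^i(v) e_i[/itex] reconstructs [itex]v[/itex]:

[code]
import numpy as np

# An arbitrary (non-orthonormal) basis for R^2: the columns of E are e_1, e_2.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# The dual basis covectors omega^i are the rows of E^{-1},
# which is exactly the condition omega^i(e_j) = delta^i_j.
W = np.linalg.inv(E)

v = np.array([3.0, -1.0])

# The vector-valued one-form omega^i (x) e_i acts as the identity:
# sum_i omega^i(v) e_i reconstructs v.
reconstructed = sum((W[i] @ v) * E[:, i] for i in range(2))
print(np.allclose(reconstructed, v))  # True
[/code]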

Regards,
George
 
  • #76
pervect said:
if raising an index isn't creating a map from a vector to a co-vector, how should the operation be described?

Raising the indices of elements of basis sets, i.e., going from [itex]e_i[/itex] to [itex]\omega^i[/itex], is a basis-dependent (change the basis and the mapping changes), metric-independent linear mapping from [itex]V[/itex] to [itex]V*[/itex].

Raising the indices of components, i.e., going from [itex]v_i[/itex] to [itex]v^i[/itex], is a basis-independent (in spite of the fact that I've specified it in terms of components), metric-dependent linear mapping from [itex]V*[/itex] to [itex]V[/itex].

In general, these 2 mappings are not inverses of each other.

Meanwhile, this has been a very educational (if rather long) thread.

Very much so.

Regards,
George
 
  • #77
George Jones said:
Raising the indices of elements of basis sets, i.e., going from [itex]e_i[/itex] to [itex]\omega^i[/itex]
Is it accurate to call passing from a basis to the dual basis "raising indices"? I would have said that raising the indices on [itex]e_i[/itex] produces the collection of vectors [itex]e^i[/itex] (and thus not a collection of covectors, so that it certainly cannot be the dual basis).


pervect: let me try starting over for this whole discussion! :smile: (And writing it in math-speak instead of physics-speak -- I think any sort of theoretical discussion is more clear in math-speak)

Think back to your introduction to linear algebra. You probably talked about bases, and coordinates with respect to those bases. (You also probably said "Yah, yah" and promptly ignored it, much like I did when I was first introduced. :smile:)

The important thing was that selecting a basis B for your vector space V allows you to write vectors in terms of coordinates -- it allows you to write the column-vector [itex][v]_B[/itex].

Continuing on, if you had a linear map [itex]T:V \rightarrow V'[/itex] and you selected bases B and B', it allows you to write the matrix [itex][T]_{B,B'}[/itex].

This is all important because we have the identity

[tex][T(v)]_{B'} = [T]_{B,B'} [v]_B[/tex]

in other words, the column-vector of components of T(v) (with respect to B') is given precisely by multiplying the matrix representation of T (with respect to B and B') by the column-vector of components of v (with respect to B).

This machinery is exactly what permits us to do elementary linear algebra in terms of matrix arithmetic. Without this machinery in place, we wouldn't be able to talk about things like the components of a vector, or of a matrix.
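
(If you want to see this identity in action, here's a small numerical sketch -- Python with numpy, with an arbitrary basis and linear map chosen purely for illustration:)

[code]
import numpy as np

# A basis B for R^2, stored as columns; take V' = V and B' = B for simplicity.
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# A linear map T : R^2 -> R^2, written in the standard basis.
T_std = np.array([[0.0, 1.0],
                  [3.0, 2.0]])

v = np.array([1.0, 4.0])

# Coordinates with respect to B: [v]_B solves B [v]_B = v.
v_B = np.linalg.solve(B, v)

# Matrix of T with respect to B: [T]_{B,B} = B^{-1} T B.
T_B = np.linalg.solve(B, T_std @ B)

# The identity [T(v)]_B = [T]_{B,B} [v]_B:
lhs = np.linalg.solve(B, T_std @ v)  # coordinates of T(v) with respect to B
rhs = T_B @ v_B
print(np.allclose(lhs, rhs))  # True
[/code]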



Matrix arithmetic isn't just good for vectors and linear transformations, though: it is also good for covectors. Just like the vectors of V are naturally modeled as column-vectors and linear transformations [itex]V \rightarrow V[/itex] are naturally modeled as square matrices, we have that covectors in [itex]V^*[/itex] are naturally modeled as row-vectors.


So once we've chosen a basis B for V, we are very strongly compelled to select a basis [itex]B^*[/itex] for [itex]V^*[/itex] that is compatible with matrix arithmetic. In other words, we insist that:

[tex]
\omega(v) = [\omega]_{B^*} [v]_B
[/tex]

Since each basis vector in B is mapped to a standard-basis column-vector, and each basis covector in [itex]B^*[/itex] is mapped to a standard-basis row-vector, we must insist that

[tex]
\epsilon^i(e_j) = \delta^i_j
[/tex]

where [itex]e_j[/itex] ranges over the basis vectors in B, and [itex]\epsilon^i[/itex] ranges over the basis covectors in [itex]B^*[/itex].

It is precisely this choice which allows us to speak about the components of a vector, covector, or in general any tensor, and do our computations in the usual way.

To state this differently, if you do not choose the dual basis in this manner, you absolutely, positively, cannot manipulate tensors in the usual way via their components.
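
Here's a tiny numerical illustration of that point (Python with numpy; the basis, covector, and vector below are arbitrary choices): with the dual-basis components, the identity [itex]\omega(v) = [\omega]_{B^*} [v]_B[/itex] checks out, and a mismatched choice breaks it.

[code]
import numpy as np

# A basis B for R^2 (the columns of E are the basis vectors e_j).
E = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# A covector omega and a vector v, both written in the standard basis.
omega_std = np.array([5.0, -3.0])  # a row vector
v_std = np.array([2.0, 7.0])       # a column vector

# Components of v with respect to B.
v_B = np.linalg.solve(E, v_std)

# Components of omega with respect to the dual basis B*: omega_i = omega(e_i).
omega_Bstar = omega_std @ E

# With the dual-basis choice, omega(v) = [omega]_{B*} [v]_B holds:
print(np.isclose(omega_std @ v_std, omega_Bstar @ v_B))  # True

# Mixing conventions (standard components of omega against B-components of v)
# does not reproduce omega(v):
print(np.isclose(omega_std @ v_std, omega_std @ v_B))    # False
[/code]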
 
  • #78
So... if I've gotten this correctly, basis vectors and basis covectors are mapped to each other by the Kronecker delta.

If that is indeed the case, then how can we normally raise and lower components (which is basically transforming vectors into covectors) using the metric, which is not necessarily the Kronecker delta (e.g. in the case of Minkowski space)?
 
  • #79
masudr said:
So... if I've gotten this correctly, basis vectors and basis covectors are mapped to each other by the Kronecker delta.

If that is indeed the case, then how can we normally raise and lower components (which is basically transforming vectors into covectors) using the metric, which is not necessarily the Kronecker delta (e.g. in the case of Minkowski space)?


You use the metric tensor or its inverse. The Kronecker delta is the metric tensor in Euclidean space (in Cartesian coordinates). In Minkowski space it is [tex]\eta_{\alpha\beta} = \mathrm{diag}(1, -1, -1, -1)[/tex]; in GR it is a general symmetric rank 2 covariant tensor [tex]g_{\alpha\beta}[/tex].

Thus [tex]A_{\mu} = g_{\mu\beta} A^{\beta} [/tex], where you must sum over the repeated index beta; this last is called the Einstein convention. He got tired of writing all those sigmas. When you expand there will be one equation for each value of mu.
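
A small numerical sketch of that contraction (Python with numpy; the sample 4-vector is an arbitrary choice):

[code]
import numpy as np

# Minkowski metric, signature (+, -, -, -).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A contravariant 4-vector A^beta.
A_up = np.array([2.0, 1.0, 0.0, 3.0])

# A_mu = eta_{mu beta} A^beta: the sum over beta is just a matrix product,
# and there is one resulting number for each value of mu.
A_down = eta @ A_up
print(A_down)  # [ 2. -1. -0. -3.]

# Raising again with the inverse metric recovers the original components.
print(np.allclose(np.linalg.inv(eta) @ A_down, A_up))  # True
[/code]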
 
  • #80
selfAdjoint,

Yes, but they've just discussed above that the map between vectors (i.e. the space) and covectors (the dual space) is the Kronecker delta, not the metric tensor.

And you're saying that the map between contravariant vectors (the space) and covariant vectors (the dual space) is the metric tensor, not the Kronecker delta.

So which is correct?
 
  • #81
I've also been saying that the map from vectors to covectors is not the Kronecker delta. :-p
 
  • #82
Hurkyl said:
I've also been saying that the map from vectors to covectors is not the Kronecker delta. :-p

Yeah. I've just had a look at Carroll, and I'm now fairly certain about this. We have a set of basis vectors [itex]\hat{e}_\mu[/itex] for our tangent space. We then use this basis to define a basis [itex]\hat{\theta}^\mu[/itex] for our cotangent space, so that

[tex]\hat{\theta}^\mu\left(\hat{e}_\nu\right)=\delta^\mu_\nu.[/tex]

We can use the metric, however, to lower and raise components of various tensors (by contracting over various indices). The crucial point to bear in mind is that once we've messed around with some tensor by raising/lowering indices, this new tensor doesn't necessarily have a proper geometrical meaning.

Carroll said:
The gradient, and its action on vectors, is perfectly well defined regardless of any metric, whereas the "gradient with upper indices" is not.

EDIT: Furthermore, when lowering or raising indices with the metric, we are talking about equations with actual numbers -- i.e. we have already (and usually implicitly) chosen some convenient basis (usually the co-ordinate basis). This operation is fundamentally different from using the Kronecker delta to define a basis for the cotangent space given a basis for the tangent space (or vice versa).
 
  • #83
Hurkyl said:
Is it accurate to call passing from a basis to the dual basis "raising indices"? I would have said that raising the indices on [itex]e_i[/itex] produces the collection of vectors [itex]e^i[/itex] (and thus not a collection of covectors, so that it certainly cannot be the dual basis).

You're right - it's probably not a good idea to call this operation "raising indices".

masudr said:
So... if I've gotten this correctly, basis vectors and basis covectors are mapped to each other by the Kronecker delta.

If that is indeed the case, then how can we normally raise and lower components (which is basically transforming vectors into covectors) using the metric, which is not necessarily the Kronecker delta (e.g. in the case of Minkowski space)?

The isomorphism between [itex]V[/itex] and [itex]V*[/itex] induced by [itex]e_{i} \mapsto \omega^{i}[/itex] is basis dependent. An arbitrary vector [itex]v[/itex] gets mapped to different covectors, depending on the choice of basis.

The isomorphism between [itex]V[/itex] and [itex]V*[/itex] induced by a metric tensor is natural in the sense that the mapping is completely independent of the choice of basis. An arbitrary vector [itex]v[/itex] gets mapped to the same covector, call it [itex]\tilde{v}[/itex], even if different bases, say [itex] \left\{ e_{i} \right\}[/itex] and [itex] \left\{ e'_{i} \right\}[/itex], are used.

If components of [itex]v[/itex] are defined by introducing a basis [itex] \left\{ e_{i} \right\}[/itex], then [itex]\omega^{i}[/itex] is the basis of [itex]V*[/itex] that makes all the component stuff work out correctly. I tried to show this in post #52. Note also Hurkyl's #77.

In #52 I have made an unfortunate choice of notation, but it is far too late to edit this post. As both Hurkyl and I have pointed out, the notation [itex]e^i[/itex] is best reserved for other vectors that live in [itex]V[/itex]. Whenever you see an [itex]e^i[/itex] in #52, mentally replace it by [itex]\omega^i[/itex].

In #52, I define [itex]\tilde{v}[/itex] before introducing any bases. I then introduce bases [itex] \left\{ e_{i} \right\}[/itex] and [itex] \left\{ \omega^{i} \right\}[/itex] for [itex]V[/itex] and [itex]V*[/itex] respectively. The vector [itex]v[/itex] and the covector [itex]\tilde{v}[/itex] are then expanded in terms of these bases. At this point, the expansion coefficients are seemingly unrelated, but a little use of linearity properties reveals the standard connection.
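
Here's a little numerical sketch of the contrast (Python with numpy; the bases and the metric are arbitrary choices for illustration): the [itex]e_{i} \mapsto \omega^{i}[/itex] map sends the same [itex]v[/itex] to different covectors in different bases, while the metric map doesn't care which basis you like.

[code]
import numpy as np

def delta_map(v, E):
    """Image of v under the basis-dependent map induced by e_i -> omega^i,
    returned as a row vector in standard coordinates.  The omega^i are the
    rows of E^{-1}, and v = v^i e_i is sent to v^i omega^i."""
    W = np.linalg.inv(E)  # rows are the dual basis covectors
    v_comp = W @ v        # components v^i of v in the basis E
    return v_comp @ W     # the covector v^i omega^i

v = np.array([1.0, 2.0])

E1 = np.eye(2)                 # the standard basis
E2 = np.array([[1.0, 1.0],
               [0.0, 1.0]])    # a different basis

# The delta-induced map is basis dependent: same v, different covectors.
print(delta_map(v, E1))  # [1. 2.]
print(delta_map(v, E2))  # [-1. 3.] -- a different covector

# The metric-induced map v -> g(v, .) makes no reference to a basis at all;
# in standard coordinates it is just multiplication by the metric's matrix.
g = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # some fixed metric tensor
print(g @ v)  # the same covector, whichever basis we happened to choose
[/code]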

Regards,
George
 
  • #84
Suppose I have a vector space [itex]V[/itex] and that it has two possible bases: [itex]\{e_i\}[/itex] and [itex]\{e_j'\}[/itex]. Now suppose that every basis element belonging to the first set of basis vectors is related somehow to every basis element belonging to the second. Say the relationship is given by some sort of matrix multiplication:

[tex]e_i = A_i^je'_j[/tex]
[tex]e'_j = A'_j^ke_k[/tex]

So we can in effect take a basis element [itex]e_i[/itex] and 'transform' it to another basis element [itex]e_j'[/itex] simply by multiplying it by [itex]A_i^j[/itex]. And similarly we can go backwards. So in this case would [itex]A_i^j[/itex] and [itex]A'_j^k[/itex] be related via

[tex]A_i^j = [A'_j^k]^{-1}[/tex]

That is, are they necessarily inverses of each other?

The question then is, do we have the following result?

[tex]A'_j^kA_i^j = A_j^kA'_i^j = \delta_i^k[/tex]

assuming I have all the indices right.


Similarly, it should be possible for the dual basis to undergo a similar transformation:

[tex]\epsilon'^j = A_k^j\epsilon^k[/tex]


After all that, let's see what happens when we take some arbitrary vector, [itex]v \in V[/itex], and covector [itex]\omega \in V^*[/itex]. If we multiply [itex]v[/itex] by the same matrix we multiplied our basis element with, [itex]A_i^j[/itex], we should have:

[tex]A^j_iv^i = v'^j[/tex]

Note that [itex]v = v^ie_i[/itex]. And similarly the covector ([itex]\omega = \omega_j\epsilon^j[/itex]) should transform under a corresponding dual transformation:

[tex]\omega_jA'_i^j = \omega'_i[/tex]

This all looks remarkably similar to the idea of change of basis by matrix transformations that I did in linear algebra. Is what I have done here simply a generalization of it? I mean, from the looks of it I have merely transformed a vector into a new vector with respect to a new basis via a matrix multiplication - the matrix being that very matrix which defines the basis elements in the new basis.
 
  • #85
Oxymoron said:
Suppose I have a vector space [itex]V[/itex] and that it has two possible bases: [itex]\{e_i\}[/itex] and [itex]\{e_j'\}[/itex]. Now suppose that every basis element belonging to the first set of basis vectors is related somehow to every basis element belonging to the second. Say the relationship is given by some sort of matrix multiplication:
[tex]e_i = A_i^je'_j[/tex]
[tex]e'_j = A'_j^ke_k[/tex]
So we can in effect take a basis element [itex]e_i[/itex] and 'transform' it to another basis element [itex]e_j'[/itex] simply by multiplying it by [itex]A_i^j[/itex]. And similarly we can go backwards. So in this case would [itex]A_i^j[/itex] and [itex]A'_j^k[/itex] be related via
[tex]A_i^j = [A'_j^k]^{-1}[/tex]
That is, are they necessarily inverses of each other?
The question then is, do we have the following result?
[tex]A'_j^kA_i^j = A_j^kA'_i^j = \delta_i^k[/tex]
assuming I have all the indices right.

Yes. Just use linearity, and substitute the [itex]e_k[/itex] from the first equation into the second:

[tex]
e'_j = A'^k{}_j A^i{}_k e'_i
[/tex]

Since [itex] e'_j = \delta^i{}_j e'_i [/itex] we know that

[tex] A'^k{}_j A^i{}_k = \delta^i{}_j [/tex]
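
Here's a quick numerical check of all of this (Python with numpy; the two bases and the sample vector and covector are arbitrary choices): the two transformation matrices come out as inverses, components transform contravariantly and covariantly as advertised, and [itex]\omega(v)[/itex] is basis independent.

[code]
import numpy as np

# Two bases for R^2, stored as columns: E[:, i] = e_i and Ep[:, j] = e'_j.
E  = np.array([[1.0, 1.0],
               [0.0, 1.0]])
Ep = np.array([[2.0, 0.0],
               [1.0, 1.0]])

# e_i = A^j_i e'_j reads, column by column, E = Ep A, so A = Ep^{-1} E;
# likewise e'_j = A'^k_j e_k gives A' = E^{-1} Ep.
A  = np.linalg.solve(Ep, E)
Ap = np.linalg.solve(E, Ep)

# The two transformation matrices are inverses of each other.
print(np.allclose(Ap @ A, np.eye(2)))  # True

# Vector components transform with A (contravariantly)...
v_std = np.array([3.0, 5.0])
v  = np.linalg.solve(E, v_std)    # v^i
vp = np.linalg.solve(Ep, v_std)   # v'^j
print(np.allclose(vp, A @ v))     # True

# ...covector components transform with A' (covariantly)...
w_std = np.array([1.0, -2.0])     # a covector as a standard-basis row
w  = w_std @ E                    # omega_i = omega(e_i)
wp = w_std @ Ep                   # omega'_j
print(np.allclose(wp, w @ Ap))    # True

# ...and the scalar omega(v) comes out the same in either basis.
print(np.isclose(w @ v, wp @ vp))  # True
[/code]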

This all looks remarkably similar to the idea of change of basis by matrix transformations that I did in linear algebra. Is what I have done here simply a generalization of it? I mean, from the looks of it I have merely transformed a vector into a new vector with respect to a new basis via a matrix multiplication - the matrix being that very matrix which defines the basis elements in the new basis.

It's exactly the same thing that you're used to from linear algebra. There is a standard convention that transformation matrices have indices that run from northwest to southeast. To see how to keep the indices from displaying right below one another, study the following plaintext/LaTeX pair

x^{a'} = \Lambda^{a'}{}_{a} x^a

[tex]
x^{a'} = \Lambda^{a'}{}_{a} x^a
[/tex]

the empty pair of brackets is what's needed to ensure the northwest-southeast lineup on [itex]\Lambda[/itex].

(I think there is a better way to do the primes in latex, though, than what I did).
 
  • #86
Posted By Pervect

There is a standard convention that transformation matrices have indices that run from northwest to southeast.

Really!? I always wondered why textbooks never seem to be able to line up the indices on their matrices. But I suppose it looks neater.

Posted by Pervect

(I think there is a better way to do the primes in latex, though, than what I did).

Yeah, you can type "\prime". But to me, there is no difference. So I prefer " ' ".

"\prime" [tex]A^{\prime}[/tex]
" ' " [tex]A'[/tex]
 
  • #87
Consider the following quote:

"Consider a contravariant tensor [itex]T = T^i(\bold{x}(t))[/itex] defined on the curve [itex]C[/itex]."

Could someone explain what this means? I don't see how a tensor can be defined on a curve; it doesn't make sense to me.

The reason for this is that I want to begin to differentiate tensors.


But first I have some questions on transformations of tensors, which I believe will help me understand differentiation.

Let [itex]C[/itex] be a curve given parametrically by [itex]x^i = x^i(t)[/itex] in an [itex](x^i)[/itex] coordinate system. Consider the tangent vector field [itex]T^i[/itex] defined by

[tex]T^i = \frac{dx^i}{dt}[/tex]

Under a change of coordinates, the same curve is given by [itex]\bar{x}^i = \bar{x}^i(t)[/itex], and the tangent vector field by

[tex]\bar{T}^i = \frac{d\bar{x}^i}{dt}[/tex]

Now we have, via the chain rule

[tex]\frac{d\bar{x}^i}{dt} = \frac{\partial\bar{x}^i}{\partial x^r}\frac{dx^r}{dt}[/tex]

But this is nothing more than

[tex]\frac{d\bar{x}^i}{dt} = \frac{\partial\bar{x}^i}{\partial x^r}T^r[/tex]

where [itex]r[/itex] replaces the index [itex]i[/itex].

But what can we conclude from this? Well, in my opinion, the tangent vector field as defined above is a contravariant tensor of order 1; that is, given any curve and its tangent vector field, under a change of coordinates the new tangent vector components are related to the old by the order-one contravariant transformation law. I am not sure if this is correct though.

The problem comes when I try to differentiate w.r.t. [itex]t[/itex] the transformation law of contravariant tensors:

[tex]\bar{T}^i = T^r\frac{\partial\bar{x}^i}{\partial x^r}[/tex]

I get something like (from Schaum)

[tex]\frac{d\bar{T}^i}{dt} = \frac{dT^r}{dt}\frac{\partial\bar{x}^i}{\partial x^r} + T^r\frac{\partial^2\bar{x}^i}{\partial x^s\partial x^r}\frac{dx^s}{dt}[/tex]

From Schaum's Tensor Calculus

...which shows that the ordinary derivative of T along a curve C is a contravariant tensor if and only if the [itex]\bar{x}^i[/itex] are linear functions of the [itex]x^r[/itex].

Which leads to the following theorem:

The derivative of a tensor is a tensor if and only if coordinate changes are restricted to linear transformations.

Could anyone explain this to me? Apparently, Schaum uses the fact that ordinary differentiation fails in most cases of coordinate transformations, and introduces Christoffel symbols (which I don't understand anyway). Any explanation of this would be helpful (or if you have your own way of introducing this material).
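
For concreteness, here is a small symbolic check of Schaum's claim (Python with sympy; the curve and the polar coordinate change are arbitrary illustrative choices) -- the ordinary derivative of the tangent components fails to transform tensorially precisely because the coordinate change is nonlinear:

[code]
import sympy as sp

t, X, Y = sp.symbols('t X Y')

# Polar coordinates xbar^i = (r, theta) as functions of Cartesian (X, Y):
# a genuinely nonlinear coordinate change.
xbar = sp.Matrix([sp.sqrt(X**2 + Y**2), sp.atan2(Y, X)])

# Jacobian d xbar^i / d x^r.
J = xbar.jacobian([X, Y])

# A concrete curve x^i(t), an arbitrary illustrative choice.
curve = {X: sp.cos(t) + 2, Y: sp.sin(t)}

# Tangent components in each coordinate system: T^i = dx^i/dt.
T    = sp.Matrix([sp.diff(curve[X], t), sp.diff(curve[Y], t)])
Tbar = sp.Matrix([sp.diff(c.subs(curve), t) for c in xbar])

# If dT/dt transformed as a tensor we would have
#   d Tbar^i / dt = (d xbar^i / d x^r) dT^r/dt,
# but the second-derivative term spoils this for nonlinear changes.
lhs = Tbar.diff(t)
rhs = J.subs(curve) * T.diff(t)
print(sp.simplify(lhs - rhs))  # nonzero, so dT/dt is not a tensor here
[/code]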
 
