Clock synchronization for ring-riding observers on rotating disk

In summary: Thinking about it again, I found the following argument in Landau & Lifshitz, "The Classical Theory of Fields", section 99. It seems it should always be possible to choose a reference system (chart) that allows clocks to be synchronized at different points in space. In this specific case (ring-riding Langevin observers on a rotating ring), does the condition in the Landau book basically amount to selecting the Minkowski chart, since flat spacetime is a good reference system?
  • #36
PeterDonis said:
You believe incorrectly, as I have already said. In ordinary calculus, ##dx## just means an infinitesimal interval of the variable ##x##, which is taken to zero in the limit. It is a completely different concept from the exterior derivative of a function.
I don't agree with that. If ##f## is a ##C^{\infty}## function (0-form) then its exterior derivative ##df## really is just the differential of ##f##.
 
  • #37
PeterDonis said:
##x## (or ##g## in the formula ##\omega = f d g##) is just a scalar; there is no requirement that it be a "coordinate function".
Anyway, in the 1-dimensional case the differential one-form ##f\,dg## should be closed:
##d(fdg)=df\wedge dg=\left ( \frac {df} {dx}\right ) dx \wedge \left ( \frac {dg} {dx} \right ) dx = 0##, since ##dx \wedge dx =0##.

From what you said, however, this does not mean there exists a scalar function ##h## such that ##dh=fdg##.
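As a sanity check (a sketch of my own, with illustrative functions not taken from this thread), the identity ##d(f\,dg)=df\wedge dg## can be verified symbolically with sympy in 2 dimensions, where the wedge product is no longer forced to vanish:

```python
import sympy as sp

# Illustrative smooth scalar functions on R^2 (arbitrary choices for this sketch).
x, y = sp.symbols('x y')
f = x**2 * y
g = sp.sin(x) + y**3

# f dg = f g_x dx + f g_y dy; its exterior derivative has the single
# dx ^ dy component  d_x(f g_y) - d_y(f g_x).
lhs = sp.diff(f * sp.diff(g, y), x) - sp.diff(f * sp.diff(g, x), y)

# df ^ dg has dx ^ dy component  f_x g_y - f_y g_x.
rhs = sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)

# The mixed second partials f g_xy cancel, leaving d(f dg) = df ^ dg.
assert sp.simplify(lhs - rhs) == 0
```

In 1 dimension every term in the expansion carries a factor ##dx \wedge dx## and the result is identically zero, as noted above.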
 
  • #38
ergospherical said:
I don't agree with that. If ##f## is a ##C^{\infty}## function (0-form) then its exterior derivative ##df## really is just the differential of ##f##.
No, it isn't. The gradient, which is the 1-form exterior derivative of a scalar function, is not the same as the differential, which is an infinitesimal interval of ##f## that gets taken to zero in a limit. A 1-form is not the same as an infinitesimal interval.
 
  • #39
cianfa72 said:
##d(fdg)=df\wedge dg=\left ( \frac {df} {dx}\right ) dx \wedge \left ( \frac {dg} {dx} \right ) dx = 0##, since ##dx \wedge dx =0##.
Wrong. The first part of what you wrote, ##d (f dg) = df \wedge dg##, is correct (since the other term ##f \wedge ddg## vanishes since ##dd## always vanishes). However, the rest is wrong. It is an example of the error I have already described, thinking that a coordinate differential ##dx## in an expression like ##df / dx## is the same thing as the gradient of a scalar function. It isn't.

You say you are not an expert. That is evidently true based on your posts. I strongly suggest spending some time with a textbook on the subject before posting further.
 
  • #40
PeterDonis said:
Wrong.
Btw, it also seems like you are imagining a 1-dimensional case; but the ring-riding observer scenario cannot be analyzed in just one dimension. It requires at least three (two space dimensions and one time dimension). So even if you discover special cases of some formulas for 1 dimension, they will not generalize to the case that is the topic of this thread.
 
  • #41
PeterDonis said:
it also seems like you are imagining a 1-dimensional case; but the ring-riding observer scenario cannot be analyzed in just one dimension.
Not only that, but any expression like ##df \wedge dg##, or for that matter ##d (f dg)## itself, is only meaningful in at least 2 dimensions (since those expressions are 2-forms and 2-forms require at least 2 dimensions).
 
  • #42
PeterDonis said:
No, it isn't. The gradient, which is the 1-form exterior derivative of a scalar function, is not the same as the differential, which is an infinitesimal interval of ##f## that gets taken to zero in a limit. A 1-form is not the same as an infinitesimal interval.

Why do you say this? The modern approach is indeed to treat differentials as 1-forms [not as numbers greater than zero but less than any standard real number, à la "nonstandard analysis"].

Let ##f## be a ##C^{\infty}## function on ##M##, then there is a 1-form ##df## defined by the relation ##df(X) = Xf##. You can check it is a 1-form by showing it is linear, ##df(X+Y) = (X+Y)f = Xf + Yf = df(X) + df(Y)## and also ##df(gX) = (gX)(f) = gX(f) = g df(X)##. Then ##df## is called the differential of ##f##.

For example, let ##f : \mathbb{R}^n \longrightarrow \mathbb{R}##, and consider a vector field ##X = X^i \partial_i##. Then ##df(X) = Xf = X^i \partial_i f##. Also, ##(\partial_i f\, dx^i)(X) = \partial_i f\, X(x^i) = X^i \partial_i f##. Therefore$$df = \partial_i f\, dx^i$$which is nothing but the chain rule of calculus.
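If it helps, here is a minimal numerical sketch (my own illustrative choices of ##f## and ##X##, using numpy) checking that the contraction ##\partial_i f\, X^i## agrees with the directional derivative ##Xf## computed by finite differences:

```python
import numpy as np

# Illustrative scalar function on R^2 and vector field X = X^i d_i
# (both are arbitrary choices for this sketch).
def f(p):
    return np.exp(p[0]) * np.cos(p[1])

def X(p):
    return np.array([p[1]**2, p[0] * p[1]])

p = np.array([0.3, 0.7])
eps = 1e-6

# Components of df at p via central differences: (d_x f, d_y f).
grad = np.array([
    (f(p + [eps, 0]) - f(p - [eps, 0])) / (2 * eps),
    (f(p + [0, eps]) - f(p - [0, eps])) / (2 * eps),
])
df_X = grad @ X(p)  # df(X) = (d_i f) X^i

# Directional derivative of f along X at p, computed directly.
Xf = (f(p + eps * X(p)) - f(p - eps * X(p))) / (2 * eps)

assert abs(df_X - Xf) < 1e-6
```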
 
  • #43
ergospherical said:
The modern approach is indeed to treat differentials as 1-forms [not as numbers greater than zero but less than any standard real number, à la "nonstandard analysis"].
Infinitesimals in nonstandard analysis are still not the same as 1-forms. If you think they are, please give a reference.
 
  • #44
PeterDonis said:
Infinitesimals in nonstandard analysis are still not the same as 1-forms. If you think they are, please give a reference.
I think you parsed what I wrote differently to what I intended. I meant that it’s more common to define a differential as a 1-form. The hyperreals of Robinson are a different approach and what I thought you had in mind.
 
  • #45
ergospherical said:
I meant that it’s more common to define a differential as a 1-form.
In expressions like line elements ##ds^2##, yes, the coordinate differentials are more properly interpreted as 1-forms, and things like ##dx^2## are more properly interpreted as ##dx \otimes dx##, things like ##dx dy## are more properly interpreted as ##dx \otimes dy##, etc.

I suppose if you are just dealing with one dimension, since there is only one possible 1-form (modulo a scalar factor), one could establish a correspondence between that 1-form and the coordinate differential ##dx## interpreted in the way one usually does in ordinary calculus (see below) (again modulo a scalar factor). But this will break down as soon as you have more than one dimension.

ergospherical said:
The hyperreals of Robinson are a different approach and what I thought you had in mind.
Ah, I see. Yes, those provide a rigorous foundation (though not the only possible one--the old epsilon-delta formulation in terms of limits still works) for infinitesimals as they are usually used in ordinary calculus.
 
  • #46
PeterDonis said:
So even if you discover special cases of some formulas for 1 dimension, they will not generalize to the case that is the topic of this thread.
Yes, it definitely does not apply to the topic of the thread.

Coming back to the topic: I'm not sure I grasp the difference between the Frobenius condition on the timelike congruence's tangent vector field (zero vorticity) and the corresponding condition on the covector field for the congruence's hypersurface-orthogonality property.

Can you help me? Thanks.
 
  • #47
I don't have the time to write anything much, but I think a discussion of the volume 1-form and the volume 3-form would be helpful here.

MTW has a brief mention on pg. 133, which I found via the index, but I think there is a much fuller treatment elsewhere which I don't have time to find. The treatment on pg. 133 is very terse :(.

Both the volume 1-form and the volume 3-form are useful concepts, and both should be distinguished from the manifolds and vector spaces themselves. Spacetime is a 4d manifold, so at every point we can find 4 independent vectors that span the tangent space. In an orthonormal basis, these 4 vectors are mutually orthogonal.

Given a 4d spacetime and a timelike congruence specified by a vector field on this manifold, we can always locally define a 3d subspace with three basis vectors, all of which are orthogonal to the vector field defining the congruence. The space spanned by these 3 vectors is the 3d space.

The purpose of the volume 1-form and 3-form is to obtain a volume element, which allows one to calculate the volume of some region of 3-space.
 
  • #48
ergospherical said:
What you seek is explained in Appendix B.3 of Wald.
While reading Wald B.3 I got stuck on the LHS of equation B.3.4. (As far as I can tell, Wald uses abstract index notation for tensorial equations.)

##\omega_a(Y^b\nabla_bZ^a - Z^b\nabla_bY^a)=-Z^aY^b\nabla_b\omega_a + Y^aZ^b\nabla_b\omega_a##

##\omega_a## is the covector field that annihilates the 3d distribution, i.e. ##\omega_a X^a = 0## for every vector field ##X^a## in the distribution. ##Y^a## and ##Z^a## are generic vector fields belonging to the 3d distribution.

Can you please help me? Thanks.
 
  • #49
@cianfa72 note that ##\omega_a Y^a = \omega_a Z^a = 0##, so for example$$\nabla_b (\omega_a Z^a) = \omega_a \nabla_b Z^a + Z^a \nabla_b \omega_a = 0$$i.e. ##\omega_a \nabla_b Z^a = -Z^a \nabla_b \omega_a##, and the same for ##Y^a##.
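A quick symbolic sanity check of this Leibniz-rule step (a sketch with illustrative fields in flat 2d space, so ##\nabla_b## reduces to ##\partial_b##; the fields are chosen so that ##\omega_a Z^a \equiv 0## everywhere):

```python
import sympy as sp

x, y = sp.symbols('x y')
omega = (-y, x)   # omega = -y dx + x dy  (illustrative choice)
Z = (x, y)        # Z = x d_x + y d_y;  omega_a Z^a = -yx + xy = 0

# For each coordinate direction b, check  omega_a d_b Z^a = -Z^a d_b omega_a,
# which follows from d_b(omega_a Z^a) = 0.
coords = (x, y)
for b in coords:
    lhs = sum(omega[a] * sp.diff(Z[a], b) for a in range(2))
    rhs = -sum(Z[a] * sp.diff(omega[a], b) for a in range(2))
    assert sp.simplify(lhs - rhs) == 0
```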
 
  • #50
ergospherical said:
@cianfa72 note that ##\omega_a Y^a = \omega_a Z^a = 0##, so for example$$\nabla_b (\omega_a Z^a) = \omega_a \nabla_b Z^a + Z^a \nabla_b \omega_a = 0$$i.e. ##\omega_a \nabla_b Z^a = -Z^a \nabla_b \omega_a##, and the same for ##Y^a##.
OK, got it. What about the 'order' of ##Z^a## and ##Y^b##, for example in the first term on the LHS? Can we actually 'reverse' their order while maintaining their abstract index names, i.e. write ##-Y^bZ^a\nabla_b\omega_a## ?
 
  • #51
Yeah that’s fine. I mean, sure, in the abstract index notation I guess ##T^aS^b## is a different tensor than ##S^b T^a## in that the slot order is reversed (##T \otimes S(u, v) = S \otimes T(v, u)##), but the two are in correspondence anyway.
 
  • #52
ergospherical said:
Yeah that’s fine. I mean, sure, in the abstract index notation I guess ##T^aS^b## is a different tensor than ##S^b T^a## in that the slot order is reversed (##T \otimes S(u, v) = S \otimes T(v, u)##), but the two are in correspondence anyway.
OK. Anyway, w.r.t. the contraction with ##\nabla_b\omega_a## to get the scalar, it should be fine, I believe.
 
  • #53
ergospherical said:
in the abstract index notation I guess ##T^aS^b## is a different tensor than ##S^b T^a## in that the slot order is reversed (##T \otimes S(u, v) = S \otimes T(v, u)##), but the two are in correspondence anyway.
We had a specific thread some months ago about tensor index notation.

From the tensor perspective, do you think ##T^aS^b## and ##S^bT^a## are really two different tensors just because the names of their slots, read from the left, have been reversed?

In both cases, if we fill the slots labelled ##a## and ##b## with the same arguments ##u## and ##v## respectively, we get the same answer.
 
  • #54
ergospherical said:
The tensor product’s generally not commutative, no.
I'm not sure I grasp that. We're not talking about commutativity of the tensor product (surely it is not commutative); we are discussing the notation. I think ##TS(-,-)## and ##ST(-,-)## are indeed different tensors; however, if we assign names to their ordered slots starting from the left (using Latin letters such as ##a## and ##b##, as required in abstract index notation), why are they not the same?

For instance, since ##T^aS^b\omega_a \beta_b = S^bT^a\omega_a\beta_b## for any dual vectors ##\omega_a## and ##\beta_b##, ##T^aS^b## and ##S^bT^a## should actually be the same tensor.

From this perspective, ##T^aS^b## and ##S^aT^b## really are different tensors (e.g. ##T^aS^b\omega_a \beta_b \neq S^aT^b\omega_a\beta_b## in general).
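These two claims can be checked numerically with explicit index labels (a sketch using numpy's einsum with random illustrative vectors; the labels in each einsum string play the role of the abstract indices):

```python
import numpy as np

# Random illustrative vectors and dual vectors (treated as plain arrays here).
rng = np.random.default_rng(1)
T, S, w, beta = (rng.normal(size=4) for _ in range(4))

same = np.einsum('a,b,a,b->', T, S, w, beta)       # T^a S^b w_a beta_b
reordered = np.einsum('b,a,a,b->', S, T, w, beta)  # S^b T^a w_a beta_b
swapped = np.einsum('a,b,a,b->', S, T, w, beta)    # S^a T^b w_a beta_b

# Reordering the factors while keeping index labels changes nothing;
# swapping the labels themselves gives a genuinely different scalar.
assert np.isclose(same, reordered)
assert not np.isclose(same, swapped)
```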
 
  • #55
In the abstract index notation ##T^a S^b = T \otimes S## and ##S^b T^a = S \otimes T##.
 
  • #56
ergospherical said:
In the abstract index notation ##T^a S^b = T \otimes S## and ##S^b T^a = S \otimes T##.
So are they really two different tensors? In that case, coming back to post #53, why is ##-Z^aY^b\nabla_b \omega_a = -Y^bZ^a\nabla_b\omega_a## ?

Btw, from this perspective ##T^{ab}## should be the same as ##T^{ba}##, I guess (the same tensor ##T(-,-)##, just with its ordered slots labelled by different names).

Thanks for your time !
 
  • #57
ergospherical said:
In the abstract index notation ##T^a S^b = T \otimes S## and ##S^b T^a = S \otimes T##.
I'm not an expert in the differences between abstract index notation and (non-abstract) Ricci calculus, but I'd be surprised if that really was true.

##T^a S^b = T \otimes S## implies a convention that the first "slot" is denoted by ##a## and the second "slot" is denoted by ##b##. But ##S^b T^a = S \otimes T## is incompatible with that convention.

To my way of thinking, ##T^a S^b = T \otimes S## is ambiguous, and to avoid ambiguity should really be written ##T^a S^b = (T \otimes S)^{ab}##.

And then, if I'm right, you can write $$(T \otimes S)^{ab} = T^a S^b = S^b T^a, $$and you can also write $$(T \otimes S)^{ba} = T^b S^a = S^a T^b. $$

Could someone who is experienced in abstract index notation confirm whether I'm right or not?
 
  • #58
ergospherical said:
IMO the free indices ##a## and ##b## in the abstract index notation are arbitrary letters and I didn’t pay attention to alphabetical ordering to name the slots.
Hence ##T^aS^b## should be the same (2,0) tensor as ##T^bS^a## ?
 
  • #59
cianfa72 said:
Hence ##T^aS^b## should be the same (2,0) tensor as ##T^bS^a## ?
Btw, Wald in section 2.4 makes clear that ##T_{ab} \neq T_{ba}##, except for symmetric tensors of course.

Any thoughts?
 
  • #60
cianfa72 said:
Any thoughts?
I think there's a notational weakness with tensors in general. Take the four index Riemann tensor. It describes the change to a vector if you move it around an infinitesimal loop defined by two infinitesimal displacements. By convention, the "output" is the first index, the "input" is the second, and the loop definitions are the third and fourth. But there is absolutely no way to tell from ##R^a{}_{bcd}## which slot is which - you just have to know which one to contract what tensor with.

So imagine we define some tensor ##W^{ab}## and contract with a one-form ##\omega_a##. It's immediately clear that if ##W^{ab}=S^aT^b## then ##\omega_aW^{ab}=\omega_aS^aT^b##, but if ##W^{ab}=T^aS^b## then ##\omega_aW^{ab}=\omega_aT^aS^b## and these are different. But are the two ##W##s really different? If we keep track of which index relates to ##S## and which to ##T## and make sure we contract with the right one then no. If we just contract with the first index in either case then yes. But I don't know a way to notate which slot means which without referring back to the definition of the tensor or simply memorising a convention.
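This slot bookkeeping can be made concrete with einsum (a sketch with random illustrative vectors): once the indices are named, the order in which the factors are written is irrelevant, but which slot ##\omega_a## is contracted into is not:

```python
import numpy as np

rng = np.random.default_rng(0)
T, S, w = (rng.normal(size=4) for _ in range(3))

# Factor order doesn't matter once indices are named: T^a S^b = S^b T^a.
TS = np.einsum('a,b->ab', T, S)
ST_swapped = np.einsum('b,a->ab', S, T)  # same labels, factors reversed
assert np.allclose(TS, ST_swapped)

# But contracting w into the first vs the second slot gives different results:
first = np.einsum('a,ab->b', w, TS)   # w_a T^a S^b  ->  (w . T) S
second = np.einsum('b,ab->a', w, TS)  # w_b T^a S^b  ->  (w . S) T
assert not np.allclose(first, second)
```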
 
  • #61
At the beginning of this thread we talked about 'stationary' congruences like the Langevin congruence in Minkowski spacetime. The Langevin congruence is defined as stationary just because its worldlines are integral orbits of a timelike Killing vector field (KVF).

If the above is correct then, by definition, given a spacetime, the existence of at least one stationary congruence suffices to define it as a stationary spacetime, right?
 
  • #62
cianfa72 said:
by definition, given a spacetime the existence of at least one stationary congruence suffices to define it as a stationary spacetime, right ?
Yes.
 
  • #63
So in Minkowski spacetime there are several types of timelike KVFs (actually infinite families of them): inertial KVFs, Rindler KVFs, and Langevin KVFs (the corresponding integral orbits indeed define stationary congruences). The first two are also static since they are hypersurface orthogonal, whilst the third is not.
 
  • #64
cianfa72 said:
So in Minkowski spacetime there are several types of timelike KVFs (actually infinite families of them): inertial KVFs, Rindler KVFs, and Langevin KVFs (the corresponding integral orbits indeed define stationary congruences). The first two are also static since they are hypersurface orthogonal, whilst the third is not.
Yes. Note, however, that while the inertial KVFs are timelike everywhere, the others are not; they are only timelike in restricted open regions of the spacetime (in the Rindler case, the appropriate "wedges" where the hyperbolas that are the integral curves of the KVF are timelike; in the Langevin case, an open "tube" where the radius is small enough to make the helical integral curves of the KVF timelike).
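For the Langevin case this is easy to verify explicitly (a sketch in cylindrical coordinates ##(t, r, \phi)## on Minkowski spacetime with signature ##({-}{+}{+})##, the ##z## direction suppressed and ##c = 1##; here ##\Omega## stands for the angular velocity of the congruence, an assumption of this sketch):

```python
import sympy as sp

# Cylindrical coordinates (t, r, phi) on Minkowski spacetime, z suppressed.
r, Omega = sp.symbols('r Omega', positive=True)

g = sp.diag(-1, 1, r**2)      # metric components diag(-1, 1, r^2)
K = sp.Matrix([1, 0, Omega])  # Langevin KVF  K = d/dt + Omega d/dphi

norm = sp.expand((K.T * g * K)[0, 0])

# g(K, K) = -1 + Omega^2 r^2: timelike only inside the tube r < 1/Omega,
# matching the restricted open region described above.
assert sp.simplify(norm - (Omega**2 * r**2 - 1)) == 0
```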
 