Curved Space-time and Relative Velocity

In summary, the conversation discusses the concept of relative velocity between two moving points in curved space-time. The argument is that in order to calculate relative velocity, we need to subtract one velocity vector from another at a distance, bringing them to a common point through parallel transport. However, parallel transport along different routes can leave the second vector pointing in different directions at the final position, making relative velocity at a distance mathematically ill-defined. The discussion also includes examples of parallel transport on curved surfaces and the potential impact of sharp bends on the calculation of relative velocity. One example involves two static observers in Schwarzschild spacetime, whose relative velocity comes out different when calculated using parallel transport along different paths.
  • #281
JDoolin said:
My use of the word reference frame is quite typical
But your use of the word "in" is very atypical. You keep on referring to objects being "in a reference frame" rather than "being at rest in" or "moving in" a reference frame. Your usage doesn't make any sense.

JDoolin said:
Example:
If I am driving down the highway at 55 miles per hour, and a truck is traveling at 55 miles per hour, how fast is the truck going in my reference frame? 110 miles per hour. How fast am I going in the truck's reference frame? 110 miles per hour. How fast are we going in the Earth's reference frame? 55 miles per hour.
This is typical usage, all three objects (you, truck, highway) have a specified velocity with respect to all three reference frames. Each object is "at rest in" or "moving in" every given reference frame. This is the usage that I mentioned in post 237 and you specifically rejected in post 239. If you have changed your mind and adopted the standard usage then it will certainly help communication.

Assuming that you are now indeed using the standard terminology then I must re-emphasize the fact that the first postulate ensures that a measuring device will get the same result for a given measurement regardless of the reference frame. You are never forced to use the reference frame where the device/observer is at rest.
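(As an aside on the highway example above: the 110 mph closing speed is the Galilean sum; the relativistic composition (v1 + v2)/(1 + v1·v2/c²) differs from it only at order v²/c². A quick illustrative check in Python, not part of the original exchange:)

```python
# Galilean vs. relativistic composition of two 55 mph speeds.
# The constant below is the speed of light converted to miles per hour.
C_MPH = 670_616_629.0  # approx. 299,792,458 m/s in mph

def relativistic_sum(v1, v2):
    """Relativistic velocity composition: (v1 + v2) / (1 + v1*v2/c^2)."""
    return (v1 + v2) / (1.0 + v1 * v2 / C_MPH**2)

galilean = 55.0 + 55.0
einstein = relativistic_sum(55.0, 55.0)

print(galilean)             # 110.0
print(galilean - einstein)  # a tiny correction, of order v^2/c^2
```

At highway speeds the correction is around a part in 10¹⁴, which is why the everyday usage in the quoted example is perfectly fine.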
 
  • #282
DaleSpam said:
But your use of the word "in" is very atypical. You keep on referring to objects being "in a reference frame" rather than "being at rest in" or "moving in" a reference frame. Your usage doesn't make any sense.

This is typical usage, all three objects (you, truck, highway) have a specified velocity with respect to all three reference frames. Each object is "at rest in" or "moving in" every given reference frame. This is the usage that I mentioned in post 237 and you specifically rejected in post 239. If you have changed your mind and adopted the standard usage then it will certainly help communication.

Assuming that you are now indeed using the standard terminology then I must re-emphasize the fact that the first postulate ensures that a measuring device will get the same result for a given measurement regardless of the reference frame. You are never forced to use the reference frame where the device/observer is at rest.


I have not been as clear as I thought. For what I am referring to, it is not sufficient just to say "the reference frame I am in," because, indeed, I am in every reference frame. Mea culpa.

You may assume that every time I have said "the reference frame someone is in" I actually meant "the reference frame in which someone is momentarily at rest."

If that helps, I still disagree on the issue of whether an observer is "forced" to use the reference frame where it is momentarily at rest.

Let me try to make my main point in as simple a way as I can. I have asked several people the following question: Imagine you are in a truck, driving in a soft snowfall. To you, it seems that the snow is moving almost horizontally, toward you. Which way is the snow "really" moving?

Everyone I have asked this question answers, "straight down." Of course, this is a good Aristotelian answer, but relativistically speaking there is no correct answer, because there is no ether by which one could determine how the snow is "really" moving.

On the other hand, if you put a camcorder in the front window of the truck and filmed the snow, that camera has no other option than to film the snowfall as it appears in the reference frame where the vehicle (and the camera) is at rest. In the film, it will appear that the snow is traveling almost horizontally, straight toward the camera.

Even if you stop the truck, or throw the camera out the window, the camera still films everything in such a way that the camera is always momentarily at rest in its own reference frame. It is effectively forced to film things this way; not as a matter of convention, but as a matter of physical reality.

It is also the same with Barbara, who on her trip accelerates and turns around--what she sees is not a matter of convention, but a matter of physical fact.

Now, there is also the matter of stellar aberration. In general, the common view is that the actual positions of stars are stationary, but it is only some optical illusion which causes them to move up to 20 arcseconds in the sky over the course of the year. The nature of this question is similar to the snowflake question. Is the light coming from the direction that the light appears to be coming from? If you point toward the image of the star, are you pointing toward the star? Are you pointing toward the event which created the light you are now seeing?

I would say that in the truck and snow example, as far as the truck-driver is concerned, the snow really is coming toward him. And in the stellar aberration case, you really are pointing toward the event which produced the light of the star. In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.
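For reference, the "up to 20 arcseconds" figure for annual stellar aberration mentioned above follows directly from Earth's orbital speed. A quick illustrative check in Python (the orbital speed value is an assumed standard figure, not something stated in the thread):

```python
import math

C_KM_S = 299_792.458   # speed of light, km/s
V_ORBIT_KM_S = 29.78   # Earth's mean orbital speed, km/s (assumed value)

# First-order aberration angle for a star near the ecliptic pole:
# tan(alpha) ~ v/c
alpha_rad = math.atan(V_ORBIT_KM_S / C_KM_S)
alpha_arcsec = math.degrees(alpha_rad) * 3600.0

print(f"{alpha_arcsec:.1f} arcseconds")  # ~20.5, matching the figure in the post
```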
 
  • #283
JDoolin said:
In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.
Could you clarify your meaning here? I also would not characterize them as optical illusions since an optical illusion is due to our eyes and brains and how they interpret images, but instead they are due to the finite speed of light. The coordinates of events in an inertial reference frame are what remains after properly accounting for the finite speed of light. A camera does not account for the finite speed of light, therefore this seems wrong to me:
JDoolin said:
if you put a camcorder in the front window of the truck and filmed the snow, that camera has no other option than to film the snowfall as it appears in the reference frame where the vehicle (and the camera) is at rest.
The film from the camcorder will show Terrell rotation and aberration and other effects due to the finite speed of light which are carefully accounted for and removed by the coordinate system. The film will most definitely not show how things are in the inertial rest frame.

If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame. Light-cone coordinates would directly indicate what the camera would film, but they are not inertial. Of course, using the inertial rest frame you can certainly calculate what the image will look like, but you can do that from any frame, inertial or not.
 
  • #284
DaleSpam said:
If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame.
I am interested, do you have some good references (e.g. books or significant papers) to light cone coordinates DaleSpam?
 
  • #285
I would probably start with this one:
http://ysfine.com/articles/dircone.pdf
 
  • #286
DaleSpam said:
Could you clarify your meaning here? I also would not characterize them as optical illusions since an optical illusion is due to our eyes and brains and how they interpret images, but instead they are due to the finite speed of light. The coordinates of events in an inertial reference frame are what remains after properly accounting for the finite speed of light. A camera does not account for the finite speed of light, therefore this seems wrong to me: The film from the camcorder will show Terrell rotation and aberration and other effects due to the finite speed of light which are carefully accounted for and removed by the coordinate system. The film will most definitely not show how things are in the inertial rest frame.

If you wish to use a coordinate system that directly reflects the effects due to the finite speed of light then you will need to use light-cone coordinates, not the inertial rest frame. Light-cone coordinates would directly indicate what the camera would film, but they are not inertial. Of course, using the inertial rest frame you can certainly calculate what the image will look like, but you can do that from any frame, inertial or not.

Now that I know you call this "light-cone coordinates" I can tell you I have been talking about "light-cone coordinates" the whole time. Now, can you understand this is what Barbara would see?

JDoolin said:
Barbara will say to Alex:

"... What I saw was for half of the trip, your image was contracted, moving away from me at less than half the speed of light and you were moving in slow-motion, then when I turned around your image shot away from me, then as I was coming back, you were moving in fast motion, and the image was elongated, and coming toward me at faster than the speed of light."
 
  • #287
JDoolin said:
Now that I know you call this "light-cone coordinates" I can tell you I have been talking about "light-cone coordinates" the whole time. Now, can you understand this is what Barbara would see?
Light cone coordinates are most definitely not the same as the momentarily co-moving inertial frame (MCIF). However, if you like light cone coordinates then you should really like Dolby and Gull's coordinates. They are very closely related (much more closely related than the MCIF). That is actually one of the things that I find appealing about them.
 
  • #288
DaleSpam said:
Light cone coordinates are most definitely not the same as the momentarily co-moving inertial frame (MCIF). However, if you like light cone coordinates then you should really like Dolby and Gull's coordinates. They are very closely related (much more closely related than the MCIF). That is actually one of the things that I find appealing about them.

Let me first make clear that I do like the article about light cone coordinates, although I think I jumped the gun in saying that I was using light-cone coordinates. (I was not.) What I was doing was considering the locus of events that are in the observer's past light cone. Unfortunately, I went by the name of the article and the context of what I thought we were talking about, and didn't spend the time to grok what the article was actually about.

This "Dirac's Light Cone Coordinates" appears to be a pretty good pedagogical method, as it turns the Lorentz Transform into a scaling and inverse scaling on the u and v axes, simply by rotating 45 degrees, so the x=ct line and x=-ct lines are vertical and horizontal:

This is another way of writing equation (2) from the article you referenced.

[tex]\left(
\begin{array}{c}
u \\
v
\end{array}
\right)
=

\left(
\begin{array}{cc}
\cos (45) & \sin (45) \\
-\sin (45) & \cos (45)
\end{array}
\right)
\left(
\begin{array}{c}
t \\
z
\end{array}
\right)
[/tex]​
I used almost identical reasoning when I derived this (in thread: https://www.physicsforums.com/showthread.php?t=424618).


[tex] \begin{pmatrix} ct' \\ x' \end{pmatrix}= \begin{pmatrix} \gamma & -\beta\gamma \\ -\beta\gamma & \gamma \end{pmatrix} \begin{pmatrix} c t \\ x \end{pmatrix} = \begin{pmatrix} \cosh(\theta) & -\sinh(\theta) \\ -\sinh(\theta) & \cosh(\theta) \end{pmatrix} \begin{pmatrix} c t \\ x \end{pmatrix}= \begin{pmatrix} \frac {1+s}{2} & \frac {1-s}{2} \\ \frac {1-s}{2}& \frac {1+s}{2} \end{pmatrix} \begin{pmatrix} \frac {s^{-1}+1}{2} & \frac {s^{-1} -1}{2} \\ \frac {s^{-1}-1}{2}& \frac {s^{-1}+1}{2} \end{pmatrix} \begin{pmatrix} c t \\ x \end{pmatrix} [/tex]​

It's not immediately clear that the last two matrices represent scaling on the x=c t axis and the x=-c t axis. The article (http://ysfine.com/articles/dircone.pdf) has made the transformation much more elegant (though I may have a sign or two wrong somewhere):

[tex]
\left(
\begin{array}{c}
\text{ct}' \\
z'
\end{array}
\right)
=
\left(
\begin{array}{cc}
\cos (45) & -\sin (45) \\
\sin (45) & \cos (45)
\end{array}
\right)

\left(
\begin{array}{cc}
e^{\eta } & 0 \\
0 & e^{-\eta }
\end{array}
\right)

\left(
\begin{array}{cc}
\cos (45) & \sin (45) \\
-\sin (45) & \cos (45)
\end{array}
\right)

\left(
\begin{array}{c}
\text{ct} \\
z
\end{array}
\right)
[/tex]​
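As a quick numerical check of this decomposition (a sketch in Python, not from the thread): rotating by 45 degrees, scaling the light-cone axes by e^η and e^(−η), and rotating back does reproduce the boost matrix, with η = −θ relative to the rapidity θ in the earlier boost matrix, which accounts for the suspected sign issue.

```python
import math

def boost(theta):
    """Standard Lorentz boost with rapidity theta, as in the earlier post."""
    ch, sh = math.cosh(theta), math.sinh(theta)
    return [[ch, -sh], [-sh, ch]]

def rot(deg):
    """2x2 rotation matrix [[cos, sin], [-sin, cos]] for an angle in degrees."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, s], [-s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7   # arbitrary rapidity
eta = -theta  # sign flipped relative to theta (the "sign or two wrong")
D = [[math.exp(eta), 0.0], [0.0, math.exp(-eta)]]

# R(-45) * diag(e^eta, e^-eta) * R(45), matching the [tex] expression above
M = matmul(matmul(rot(-45), D), rot(45))
B = boost(theta)

print(all(abs(M[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2)))
# True: the decomposition reproduces the boost
```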

I'm not sure how Dolby and Gull's Radar time relates to Dirac's light-cone coordinates. It appears to me that Dirac's light-cone coordinates are simply an aid to performing the Lorentz Transformations. These light-cone coordinates of Dirac's don't claim to show another frame; they simply rotate the Minkowski diagram 45 degrees.

My point is really, whatever coordinate system you use, you should be imagining Barbara, and what she is seeing, and if your predictions match mine--that she sees Alex's image basically lurch away as Barbara is turning around--then you have a good system. If you don't realize that Alex's image lurches away, then you are doing something wrong, or you haven't finished your analysis.
 
  • #289
JDoolin said:
I would say that in the truck and snow example, as far as the truck-driver is concerned, the snow really is coming toward him. And in the stellar aberration case, you really are pointing toward the event which produced the light of the star. In each case, the observed phenomena are results of the observers being at rest in particular reference frames. The phenomena they are seeing are not optical illusions, but are true representations of what is happening in the reference frames where they are momentarily at rest.

The statement has a verb tense problem; should read:

The phenomena they are seeing are not optical illusions, but are true representations of what was happening in the reference frames where they are momentarily at rest.

The past-light cone of an event is the locus of events which are currently being seen by the camera. It is not what is happening, but what was happening.
 
  • #290
JDoolin said:
My point is really, whatever coordinate system you use, you should be imagining Barbara, and what she is seeing, and if your predictions match mine--that she sees Alex's image basically lurch away as Barbara is turning around--then you have a good system. If you don't realize that Alex's image lurches away, then you are doing something wrong, or you haven't finished your analysis.
And my point from the beginning of our conversation is that you can determine what Barbara sees in any coordinate system (inertial or not). There is no reason to choose one frame over another other than convenience. Are you OK with that statement now?
 
  • #291
DaleSpam said:
And my point from the beginning of our conversation is that you can determine what Barbara sees in any coordinate system (inertial or not). There is no reason to choose one frame over another other than convenience. Are you OK with that statement now?

I remain agnostic about the usefulness of accelerated reference frames. I think Rindler coordinates may have some potential. But "radar time" seems rather too arbitrary to me. I found an article by Antony Eagle that has some of my same criticisms:

http://arxiv.org/abs/physics/0411008

I also found the "Debs and Redhead" article referenced:

http://chaos.swarthmore.edu/courses/PDG/AJP000384.pdf

It concludes: "Perhaps the method discussed in this paper, the conventionality of simultaneity applied to depicting the relative progress of two travelers in Minkowski space-time, will settle the issue of the twin paradox, one which has been almost continuously discussed since Langevin's 1911 paper."

If I correctly understand their meaning, the "relative progress" of a traveler in Minkowski spacetime is simulated here:

http://www.wiu.edu/users/jdd109/stuff/relativity/LT.html
 
  • #292
JDoolin said:
But "radar time" seems rather too arbitrary to me.
Radar time for an inertial observer is the Einstein synchronization convention. It is arbitrary, but certainly no more nor less arbitrary than the usual convention. And even more arbitrary conventions will work.

The Debs and Redhead article supports my position that the choice of simultaneity is a matter of convenience (they use the word convention).

The Eagle article explicitly admits in the third paragraph that the Dolby and Gull article is mathematically correct. Eagle's point is not that Dolby and Gull are wrong, just that their approach is not necessary. I fully agree, you can use any coordinate system you choose.
 
  • #293
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equivalently, radar time) is a particularly intuitive operational definition. However, one issue I have with it in a cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of a distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang, anyone?)

I have played with a similarly intuitive operational definition that only requires an observer to pass into the future light cone of a distant event (which they must, to ever be aware of it at all). Conceptually, one imagines that the distant event emits a signal of known intensity and known frequency (e.g. a pattern of hydrogen lines). In this conceptual definition, one ignores any source of attenuation except distance. A receiving observer can then identify the original frequency by the line pattern and compensate for red/blue shift, recovering the intensity that would be received from a hypothetically non-shifted source (whether such a source could actually exist in the cosmology is not relevant to the operational definition). Comparing this normalized received intensity to the assumed original intensity, and applying a standard attenuation model, one gets a conventional distance to the event. Divide by c and you get the time in your current frame that would be considered simultaneous.
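The last two steps of this operational definition (inverse-square attenuation, then dividing by c) can be sketched as follows. The numbers are purely illustrative stand-ins, not values from the thread:

```python
import math

def luminosity_distance(intrinsic_luminosity, corrected_flux):
    """Conventional distance from the inverse-square law: F = L / (4*pi*d^2)."""
    return math.sqrt(intrinsic_luminosity / (4.0 * math.pi * corrected_flux))

# Illustrative "standard candle": solar luminosity seen at the solar constant,
# after the hypothetical red/blue-shift correction described above.
L = 3.828e26  # watts
F = 1361.0    # W/m^2

d = luminosity_distance(L, F)
print(f"{d:.3e} m")  # roughly 1.5e11 m, about one astronomical unit

C = 299_792_458.0  # m/s
lookback_s = d / C  # time offset that this convention labels "simultaneous"
print(f"{lookback_s:.0f} s")
```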

As a simpler stand-in for this model, I have thought about the following, which might be equivalent. Imagine two light rays emitted from a distant event at an infinitesimal angle to each other. Taking the limit, as the angle goes to zero, of their separation in the receiver's frame over the angle in the sender's frame would seem to measure the expected attenuation and directly provide a conventional distance that leads to a conventional simultaneity.

I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?
 
  • #294
I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?
If you skip the "compensate for red/blue shift" part, you get the definition of "luminosity distance" (http://en.wikipedia.org/wiki/Luminosity_distance).
 
  • #295
PAllen said:
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equiv. radar time) is a particularly intuitive operational definition.

Can you give me more detail on just what is involved in the Einstein Convention?

However, one issue I have with it in cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang anyone?)

In the standard model, I gather certain things are impossible that would not be impossible in the Milne model. (See my blog)

I have played with a similarly intuitive operational definition that only requires an observer to pass into the future light cone of a distant event (which they must to ever be aware of it at all). Conceptually, one imagines that the distant event emits a signal of known intensity, and known frequency (e.g. a pattern of hydrogen lines). In this conceptual definition, one ignores any source of attenuation except distance. Then a receiving observer can identify the original frequency by the line pattern, compensate for red/blue shift, getting the intensity that would be received from a hypothetically non-shifted source (whether such could actually exist in the cosmology is not relevant to the operational definition). Then comparing this normalized received intensity to the assumed original intensity, applying a standard attenuation model, one gets a conventional distance to the event. Divide by c and you get the time in your current frame that would be considered simultaneous.

As a simpler stand in for this model, I have thought about the following, which might be equivalent. Imagine a two light rays emitted from a distant event at infinitesimal angle to each other. Taking the limit, as angle goes to zero, of their separation in the receiver's frame over the angle in the sender's would seem to measure the expected attenuation and directly provide a conventional distance that leads to a conventional simultaneity.

I have not actually tried these out for any interesting cases. Has anyone ever heard of any work on similar definitions and how results compare to other simultaneity conventions?

I think that apparent distance can be estimated by apparent size related to actual size in some way. Your method involves an observer that must be in two places at once (to get the end-points of two rays coming from the same point.) An alternative would be to use the positions of two ends of the object; and what angle they would be seen in the position of a point-observer. I like the idea, but I'm not well-read enough to know whether either approach has been published.
 
  • #296
Ich said:
If you skip the "compensate for red/blue shift" part, you get the definition of "luminosity distance" (http://en.wikipedia.org/wiki/Luminosity_distance).

I thought this must be a standard astronomy technique. Actually, the Wikipedia reference says you do try to compensate for redshift, time dilation, and curvature, though they don't say how (and it seems these are very intertwined). So that is the definition I am looking for. So then, I am looking for what sort of coordinate system that imposes on, e.g., a Friedmann model, compared to other coordinate systems.
 
  • #297
JDoolin said:
Can you give me more detail on just what is involved in the Einstein Convention?
It's the same as the radar time you've been discussing with DaleSpam. You imagine a signal sent to a distant event and received back, and take 1/2 your locally measured time difference. To model sending the signal, you need to extend your world line back to the past light cone of the distant event.
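The recipe can be sketched in a few lines (illustrative Python, not from the thread; times in seconds on the observer's own clock, with c = 1 light-second per second):

```python
def radar_time(t_emit, t_receive):
    """Einstein/radar simultaneity: a distant event pinged at proper time t_emit
    and echoed back at t_receive is assigned time (t_emit + t_receive) / 2 and
    radar distance c * (t_receive - t_emit) / 2."""
    t_sim = 0.5 * (t_emit + t_receive)
    radar_dist_lightsec = 0.5 * (t_receive - t_emit)  # c = 1 light-second/second
    return t_sim, radar_dist_lightsec

# An inertial observer pings an event 3 light-seconds away:
t_sim, dist = radar_time(t_emit=0.0, t_receive=6.0)
print(t_sim, dist)  # 3.0 3.0
```

Dolby and Gull's construction applies this same formula along an arbitrary (non-inertial) world line.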
JDoolin said:
In the standard model, I gather certain things are impossible that would not be impossible in the Milne model. (See my blog)
I looked at this and I don't think I understand the applicability. It seemed from your blog that this model imposes a global Minkowski frame. How is that possible for a strongly curved model that may include inflation?
JDoolin said:
I think that apparent distance can be estimated by apparent size related to actual size in some way. Your method involves an observer that must be in two places at once (to get the end-points of two rays coming from the same point.) An alternative would be to use the positions of two ends of the object; and what angle they would be seen in the position of a point-observer. I like the idea, but I'm not well-read enough to know whether either approach has been published.
A measured relation of apparent angular size in my frame to the size of an object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally, I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in a cosmology model to actually possible astronomic measurements.
 
  • #298
Actually, the wikipedia reference says you do try to compensate for redshift, time dilation, and curvature, though they don't say how (and, it seems these are very intertwined).
Yeah, this article claims a lot of strange things. Anyway, from the formula you see that no such corrections are applied. They want to keep the distance as close to the measured data as possible, at the expense of deliberately deviating from the most reasonable definition if there is redshift.
So then, I am looking for what sort of coordinate system that imposes on, e.g., a Friedmann model compared to other coordinate systems.
If you correct time for light travel time and distance for redshift? Minkowskian in the vicinity, and then something like reduced-circumference coordinates, with more or less static slices. Like the Schwarzschild r-coordinate, I guess.
 
  • #299
PAllen said:
It's the same as the rader time you've been discussing with Dalespam. You imagine signal sent to a distant event and received back, and take 1/2 your locally measured time difference. To model sending the signal, you need to extend your world line to the past light cone of the distant event.

I have to say I doubt the wisdom of that technique. It works fine in an inertial frame, but it shouldn't be used while you are accelerating. By the time the signal comes back to you, you will not have the same lines of simultaneity as when you sent the signal.

Say I was trying to determine what the y-coordinate of an object was on a graph as I was rotating. I figure out what the y-coordinate is, and a moment later, after I've rotated 30 degrees, I find what the y-coordinate is again. Would it be valid in ANY way for me to just take the average of those two y-coordinates, and claim it as the "radar y-coordinate?"

Edit: Also unless you are accelerating dead-on straight toward your target, the signal that you send toward it is more-than-likely going to miss (unless you calculate its trajectory in your momentarily comoving frame), and certainly won't reflect straight back at you after you accelerate!

I looked at this and I don't think I understand the applicability. It seemed from your blog that this model imposes a global Minkowski frame. How is that possible for a strongly curved model that may include inflation?

Not sure exactly what you're asking about a strongly curved model, but to get inflation, you just apply a Lorentz Transformation around some event later than the Big Bang event in Minkowski space. The Big Bang gets moved further into the past, and voila... inflation.

A measure relation of apparent angular size in my frame with size of object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally, I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in cosmology model to actually possible astronomic measurements.

Hmmm, there is "your distance from them," which is something I think is philosophically anti-relativistic, and there is "their distance from you," which is philosophically in tune with relativity. The difference is that relativity is based on the view of the observer. (At least in Special Relativity it is. That philosophy may have changed in General Relativity.) Can you clarify which one you are interested in?
 
  • #300
PAllen said:
A measure relation of apparent angular size in my frame with size of object in a distant frame I would take to be a measure of my distance from them. In effect, I am doing the reverse: relating angular size in the distant frame to actual size in my frame, which seems more directly equivalent to signal attenuation. Normally, I would expect these distances to be symmetric, but I don't want to assume that for some extreme case. Since none of these angular size measurements could actually be done in the real world, while luminosity measurements can be done, I was looking for a directly computable simple analog of what I now know is luminosity distance. Then I could relate computations in cosmology model to actually possible astronomic measurements.

I missed the comparison of the "distant frame" and "my frame." I gather you are assuming there is some different spatial scale to the distant objects than the nearby objects. My assumption would be that there is no such spatial scale difference.
 
  • #301
JDoolin said:
Can you give me more detail on just what is involved in the Einstein Convention?
Here is the original paper by Einstein:
http://www.fourmilab.ch/etexts/einstein/specrel/www/

The simultaneity convention is explained in section 1. Just use the same convention for a non-inertial observer and you have Dolby and Gull's radar time.

JDoolin said:
I have to say I doubt the wisdom of that technique. It works fine in an inertial frame, but it shouldn't be used while you are accelerating. By the time the signal comes back to you you will not have the same lines of simultaneity as when you sent the signal.
The point is that you have to define your lines of simultaneity by adopting some convention. Any convention you pick is fine, so why not use the same convention that you use for inertial frames?
 
  • #302
JDoolin said:
I have to say I doubt the wisdom of that technique. It works fine in an inertial frame, but it shouldn't be used while you are accelerating. By the time the signal comes back to you, you will not have the same lines of simultaneity as when you sent the signal.

The papers you mention in discussion with DaleSpam all agree that there is no such thing as objective lines of simultaneity. An infinite number of consistent definitions are possible for a single observer (one of the papers even parameterizes an infinite family of valid definitions of planes of simultaneity). The Lorentz transform embodies one possible definition for inertial frames.

Given this, feel free to have doubts about Einstein's convention, but note it is the one he used throughout his papers on special relativity.

I also have a problem with it, in that it requires the ability to extend an observer's world line to the prior light cone of a distant event. Even where this is possible, I find it inelegant.

JDoolin said:
Say I was trying to determine what the y-coordinate of an object was on a graph as I was rotating. I figure out what the y-coordinate is, and a moment later, after I've rotated 30 degrees, I find what the y-coordinate is again. Would it be valid in ANY way for me to just take the average of those two y-coordinates and claim it as the "radar y-coordinate?"
The aim of this convention is simply to provide one meaningful answer to the question of what time on an observer's worldline corresponds to some distant event. The recipe (where achievable) is simple and intuitive: find a null line connecting some point t1 on the observer's world line to the distant event, and another connecting the event to a later point t2 on the observer's world line; assume the distant event occurred at (t1+t2)/2. This can be done unambiguously in SR (it may have a couple of solutions, I think, in weird GR geometries), doesn't care how wild or rotating the observer's state of motion is, and can be computed in any frame of reference with the same result (for simultaneity by this definition, for a specified observer's world line).
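That recipe is easy to sketch numerically. Below is a minimal illustration (my own, not from any of the papers), assuming units with c = 1, a 1+1 D flat spacetime, and a hypothetical uniformly accelerated observer on the worldline x = sqrt(1 + t²), whose proper time is τ = asinh(t): find the emission and reception points of the two null lines by bisection, then average the proper times.

```python
import math

def x_obs(t):
    # Assumed worldline for illustration: uniform proper acceleration, c = 1
    return math.sqrt(1.0 + t * t)

def tau_obs(t):
    # Proper time along that worldline: d(tau) = dt / sqrt(1 + t^2)
    return math.asinh(t)

def bisect(f, lo, hi, tol=1e-12):
    # Simple bisection root finder; assumes f(lo) and f(hi) differ in sign
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

def radar_time(t_e, x_e, t_lo=-50.0, t_hi=50.0):
    """Dolby-Gull radar time of the event (t_e, x_e), assumed to lie to the
    observer's right and to be reachable by both null lines."""
    # t1: an outgoing light ray leaves the worldline and reaches the event
    t1 = bisect(lambda t: (t_e - t) - (x_e - x_obs(t)), t_lo, t_e)
    # t2: the returning light ray from the event meets the worldline again
    t2 = bisect(lambda t: (t - t_e) - (x_e - x_obs(t)), t_e, t_hi)
    return 0.5 * (tau_obs(t1) + tau_obs(t2))
```

For the event (t, x) = (0, 3) the construction gives t1 = -4/3, t2 = +4/3, and a radar time of 0, matching the symmetry of the worldline; the answer depends only on the worldline and the event, not on the coordinates used for the bookkeeping.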
JDoolin said:
Not sure exactly what you're asking about a strongly curved model, but to get inflation, you just apply a Lorentz Transformation around some event later than the Big Bang event in Minkowski space. The Big Bang gets moved further into the past, and voila... inflation.

So far as I know, it is impossible to impose a global Minkowski coordinate system on a general solution in GR. Also, so far as I understand it, it is not unusual to have a GR solution where event e1 is in the prior light cone of e2, but no event in the prior light cone of e2 is in the prior light cone of e1. In such a situation, the Einstein convention is impossible to apply, as are any global Minkowski coordinates. However, my definition provides a consistent simultaneity definition for such a case.
 
  • #303
Just for the record: there is IMHO a most general definition of "as Minkowski as possible" coordinates. Operationally, it's a chain of observers, starting at the prime observer, each at rest (two-way Doppler = 0) and synchronized with respect to its neighbours. The time coordinate is the proper time of the prime observer; the space coordinate is the distance measured along the chain.
This definition reproduces not only Minkowski distance, but also Rindler distance in the case of an accelerating prime observer.
Mathematically, we're talking about geodesics orthogonal to the prime observer's worldline.
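As a quick numerical check of that last claim (my own sketch, with assumed values: unit proper acceleration and c = 1): for a uniformly accelerated prime observer, the geodesics orthogonal to the worldline are straight lines through the Rindler origin, and walking a proper distance X2 - 1/a along one of them lands on the worldline of the static observer at Rindler coordinate X2, whatever the prime observer's proper time.

```python
import math

a = 1.0        # assumed proper acceleration of the prime observer
X2 = 2.5       # assumed Rindler coordinate of a second static observer

def prime_worldline(tau):
    """Prime observer at Rindler X = 1/a: the hyperbola x^2 - t^2 = 1/a^2."""
    return math.sinh(a * tau) / a, math.cosh(a * tau) / a

def along_orthogonal_geodesic(tau, s):
    """Point at proper distance s along the spacelike geodesic orthogonal to
    the prime worldline at proper time tau: a straight line through the origin."""
    return ((1.0 / a + s) * math.sinh(a * tau),
            (1.0 / a + s) * math.cosh(a * tau))

# Walking the chain a distance s = X2 - 1/a lands exactly on the second
# static observer's worldline x^2 - t^2 = X2^2, at any proper time tau:
tau = 0.7
t, x = along_orthogonal_geodesic(tau, X2 - 1.0 / a)
residual = x * x - t * t - X2 * X2   # should vanish
```

The chain distance X2 - 1/a is independent of tau, which is the sense in which the construction reproduces Rindler distance.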
 
Last edited:
  • #304
Ich said:
Just for the record: there is IMHO a most general definition of "as Minkowski as possible" coordinates. Operationally, it's a chain of observers, starting at the prime observer, each at rest (two-way Doppler = 0) and synchronized with respect to its neighbours. The time coordinate is the proper time of the prime observer; the space coordinate is the distance measured along the chain.
This definition reproduces not only Minkowski distance, but also Rindler distance in the case of an accelerating prime observer.
Mathematically, we're talking about geodesics orthogonal to the prime observer's worldline.

Try applying it to the complete Schwarzschild geometry. Two-way Doppler doesn't exist for events separated by the event horizon. Yet an event outside the horizon can be in the prior light cone of an event inside the horizon, but not vice versa. I am thinking about sufficiently general notions of simultaneity such that if e1 receives a signal from e2, no other information is needed to define a plausible sense of when e2 occurred from the point of view of e1.

Also, note that the Rindler metric includes horizons across which two-way Doppler is impossible.
 
  • #305
Also, note that the Rindler metric includes horizons across which two-way Doppler is impossible.
Of course. Two-way Doppler establishes staticity as far as possible. In static coordinates there are sometimes horizons, and static coordinates necessarily reflect their existence. It's not a bug, it's a feature. :wink:
I am thinking about sufficiently general notions of simultaneity such that if e1 receives a signal from e2, no other information is needed to define a plausible sense of when e2 occurred from the point of view of e1.
Yes, that's rather a revised luminosity distance, if the emitter's luminosity is known. Quite a messy calculation, and vulnerable (think of gravitational lenses), but there's always a price to pay.
 
  • #306
Ich said:
It's not a bug, it's a feature. :wink:
Hehe, only if it is in the documentation!
 
  • #307
Hehe, only if it is in the documentation!
Of course, post #303, first sentence. My lawyer reads it as: the customer has been warned explicitly that using this program will most certainly lead to apocalypse, so in case something goes wrong it is the customer's own fault.
That's how it is in the software business.
 
  • #308
DaleSpam said:
Here is the original paper by Einstein:
http://www.fourmilab.ch/etexts/einstein/specrel/www/

The simultaneity convention is explained in section 1. Just use the same convention for a non-inertial observer and you have Dolby and Gull's radar time.

The point is that you have to define your lines of simultaneity by adopting some convention. Any convention you pick is fine, so why not use the same convention that you use for inertial frames?

It depends on how you define "the same convention."

To me, "the same convention" would be to use the line of simultaneity of the momentarily comoving inertial frame; i.e., "the same convention" is the one that yields the "same result."

You're wanting to use the same technique for accelerated observers, but you will get an entirely different result.

For one thing, simultaneity should be based solely on "now." It should not be an average between some time in the future, depending on what accelerations you plan to make, and some time in the past, based on your acceleration history.
 
  • #309
Ich said:
Just for the record: there is IMHO a most general definition of "as Minkowski as possible" coordinates. Operationally, it's a chain of observers, starting at the prime observer, each at rest (two-way Doppler = 0) and synchronized with respect to its neighbours. The time coordinate is the proper time of the prime observer; the space coordinate is the distance measured along the chain.
This definition reproduces not only Minkowski distance, but also Rindler distance in the case of an accelerating prime observer.
Mathematically, we're talking about geodesics orthogonal to the prime observer's worldline.


I don't get this at all. Minkowski coordinates are x, y, z, t. It's the Cartesian Coordinate system plus time. Operationally it's this way, that way, the other way, and waiting.
 
  • #310
PAllen said:
While distant simultaneity is a matter of convention, I prefer choices that rely on some operational definition. The Einstein convention (equiv. radar time) is a particularly intuitive operational definition. However, one issue I have with it in cosmological (GR) context is that it requires that one be able (at minimum) to extend an observer's worldline back to the past light cone of distant event. In cosmology, for a very distant object, this is simply impossible (before the big bang anyone?)

Ah, now I understand what you meant. Because, for an accelerating observer, calculating simultaneity with the Einstein convention (radar time) requires you to use both your future motion and your past motion, it can require you to use motion from before you even existed; before your particles had burst asunder from stars. And even before that, before the universe existed.

So each particle in your body has a different calculation of radar time for the most distant events.
 
  • #311
JDoolin said:
I don't get this at all. Minkowski coordinates are x, y, z, t. It's the Cartesian Coordinate system plus time. Operationally it's this way, that way, the other way, and waiting.

An observatory took a picture of a comet crashing into Jupiter. Tell me exactly how you assign x,y,z,t to this? You must use several operational definitions to achieve this. Even a magic 200 million mile tape measure is an operational definition. You could say, completely arbitrarily (and validly), that the event of my taking the picture is (0,0,0,1) and the event of the collision is (1,1,1,0). But then, to form a metric (or supply the 'c' in a Lorentz transform) you need operational definitions to relate these coordinates to observable invariants.
 
  • #312
JDoolin said:
It depends on how you define "the same convention."

To me, "the same convention" would be to use the line of simultaneity of the momentarily comoving inertial frame. i.e. "the same convention" is the one that yields the "same result."
The Einstein synchronization convention is an experimental procedure that can be used to determine if events were simultaneous. Different observers disagree on the result of this procedure. That is the whole point of the relativity of simultaneity.

Defining the convention by the result is rather inappropriate in this case and doesn't even work for inertial frames.

JDoolin said:
Ah, now I understand what you meant. Because for an accelerating observer, the Einstein convention/Radar time, calculating simultaneity requires you to use both your future motion and your past motion
The Einstein synchronization convention requires you to use both your future and past motion for an inertial observer too. It is just that inertial motion is particularly easy to describe.

JDoolin, whether it is the same convention or not, if you have a strong preference for one convention over another for some personal reason (or even for no reason at all) that is perfectly fine. You don't have to justify your preference to me nor to anyone else. What is not fine is for you to attempt to elevate your personal preference to the status of a physical requirement. No coordinate system has that status. Do you understand that now?
 
Last edited:
  • #313
PAllen said:
An observatory took a picture of a comet crashing into Jupiter. Tell me exactly how you assign x,y,z,t to this? You must use several operational definitions to achieve this. Even a magic 200 million mile tape measure is an operational definition. You could say, completely arbitrarily (and validly), that the event of my taking the picture is (0,0,0,1) and the event of the collision is (1,1,1,0). But then, to form a metric (or supply the 'c' in a Lorentz transform) you need operational definitions to relate these coordinates to observable invariants.

One needs two eyes with a high enough resolution to see the event, spaced far enough apart that the parallax can be measured at that resolution. In order to find the value of c, techniques from Ole Romer, James Bradley, Louis Fizeau would work. If we had good enough clocks and cameras, even Galileo's method would work.

Once you have the speed of light, and parallax measurements of the distance, you can calculate the time the event happened by dividing the distance by the speed of light.
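With invented numbers (an assumed baseline and parallax angle, purely for illustration), the arithmetic of that last step looks like this:

```python
# Toy numbers (assumed, for illustration only): a baseline and a measured
# parallax angle give a small-angle distance estimate; dividing by c gives
# the light travel time, hence the inferred time of the event.
AU = 1.495978707e11          # astronomical unit, m
C = 299_792_458.0            # speed of light, m/s

baseline = 2 * AU            # assumed parallax baseline (m)
parallax = 4.0e-7            # assumed measured angle (radians)

distance = baseline / parallax            # small-angle approximation
travel_time = distance / C                # light travel time in seconds
t_observed = 0.0                          # observation time (s), arbitrary origin
t_event = t_observed - travel_time        # inferred time of the event
```

The event is assigned a time earlier than the observation by exactly the light travel time, which is the "dividing the distance by the speed of light" step.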
 
  • #314
JDoolin said:
One needs two eyes with a high enough resolution to see the event, spaced far enough apart that the parallax can be measured at that resolution. In order to find the value of c, techniques from Ole Romer, James Bradley, Louis Fizeau would work. If we had good enough clocks and cameras, even Galileo's method would work.

Once you have the speed of light, and parallax measurements of the distance, you can calculate the time the event happened by dividing the distance by the speed of light.

The two-way speed of light can be objectively defined for one observer. The one-way speed of light being constant (even for an inertial observer) is an additional assumption that cannot be directly verified, and is one of many equally possible conventions (read all the papers you listed above carefully). Assuming the one-way speed of light is constant is exactly Einstein's convention. There is no way for one observer to measure the one-way speed of light; you need two separated observers, with distant clock synchronization established.

Parallax requires distant simultaneity between two separate observers (your eyes, or even opposite ends of the Earth, are no good for astronomical events). So you are back to a convention about distant simultaneity to actually measure parallax, which all the papers under discussion here agree is impossible to define objectively.

Trying to be explicit about the operational definitions behind real measurements is exactly what leads to relativity, and to clarifying which parts of it are fundamental features of the universe and which parts are possibly useful conventions. This is also part of what leads to the quantum revolution.
 
  • #315
DaleSpam said:
The Einstein synchronization convention is an experimental procedure that can be used to determine if events were simultaneous. Different observers disagree on the result of this procedure. That is the whole point of the relativity of simultaneity.

Defining the convention by the result is rather inappropriate in this case and doesn't even work for inertial frames.

The Einstein synchronization convention requires you to use both your future and past motion for an inertial observer too. It is just that inertial motion is particularly easy to describe.

JDoolin, whether it is the same convention or not, if you have a strong preference for one convention over another for some personal reason (or even for no reason at all) that is perfectly fine. You don't have to justify your preference to me nor to anyone else. What is not fine is for you to attempt to elevate your personal preference to the status a physical requirement. No coordinate system has that status. Do you understand that now?

We have opinions about the two conventions based on facts; either facts we have wrong, or facts we have right. Even if we differ in opinion, we should agree about the facts.


You are saying that the whole point of relativity is that different observers disagree on the results of this procedure. I don't know if you mean it this way, but it sounds like you are saying that there is some arbitrary or random way in which the results disagree (perhaps that different observers view things differently based on their opinions, or how they weigh the information; or based on whether they decide to use Cartesian or spherical coordinates, or whether they measure in feet or meters). This is NOT the point of Special Relativity.

The point of Special Relativity is to describe exactly how and why different inertial observers disagree on the results of the procedure. The result of that description is the Lorentz Transformation equations.

Changing from one inertial reference frame to another using the Lorentz Transformations is a NON-rigid transformation. In Cartesian space, objects contract or expand, and events move apart or closer together.
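That non-rigidity is easy to exhibit with numbers. A minimal sketch (my own, in units with c = 1 and one space dimension): boosting two simultaneous events changes their coordinate separations, while the spacetime interval between them is unchanged.

```python
import math

def boost(t, x, v):
    """Lorentz boost of event (t, x) to a frame moving at speed v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

# Two simultaneous events one unit apart in the original frame
e1 = (0.0, 0.0)
e2 = (0.0, 1.0)

t1, x1 = boost(*e1, 0.6)
t2, x2 = boost(*e2, 0.6)

dt, dx = t2 - t1, x2 - x1           # separations change: the map is not rigid
interval = dt * dt - dx * dx        # ...but the spacetime interval is invariant
```

For v = 0.6 the unit spatial separation becomes 1.25 and the events are no longer simultaneous (Δt = -0.75), yet Δt² - Δx² stays -1: the mapping of events is one-to-one but not distance-preserving in either coordinate alone.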

It is my "opinion" that the easiest way to determine the observations of Barbara, at any given time, is to apply the Lorentz Transformations until we are looking at the reference frame in which Barbara is currently at rest (then to do further calculation to account for the finite speed of light). It is my opinion that Tom Fontenot's procedure of using the Momentarily Comoving Inertial Reference Frame (MCIRF) to calculate the "Current Age of Distant Objects" (CADO) is a better method for describing simultaneity for accelerating observers than the Einstein convention.

Another fact is that the Lorentz Transformation already provides a one-to-one mapping of the events.

Another fact is that performing the Lorentz Transformation has no effect on the clock-values that you would get using radar time.

I leave it to you to describe your opinion.
 
Last edited:
