Terrell Revisited: The Invisibility of the Lorentz Contraction

In summary, James Terrell's argument in the 1959 Physical Review is not simply that the Lorentz contraction effect "vanishes." Rather, he argues that "the conformality of aberration ensures that, at least over small solid angles, each [co-located observer, regardless of relative motion] will see precisely what the other sees. No Lorentz contractions will be visible, and all objects will appear normal."
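For reference, the aberration relation behind that "conformality" statement can be written (with ##\beta = v/c## and ##\theta## the angle of a received ray from the direction of relative motion; sign conventions vary between sources):

$$\cos\theta' \;=\; \frac{\cos\theta-\beta}{1-\beta\cos\theta},
\qquad
\tan\frac{\theta'}{2} \;=\; \sqrt{\frac{1+\beta}{1-\beta}}\,\tan\frac{\theta}{2}.$$

In the stereographic coordinate ##\zeta = e^{i\phi}\tan(\theta/2)## on the sphere of sky directions this is just ##\zeta' = k\,\zeta## with ##k=\sqrt{(1+\beta)/(1-\beta)}##, a Möbius transformation; Möbius maps are conformal and send circles to circles, which is the technical content behind the "small shapes are preserved" claims debated below.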
  • #71
Ken G said:
Can you contrast a similar picture for Galilean and Lorentzian relativity? I'm wondering if the Lorentz contraction cancels out the Lorentzian modification to the aberration equation.

JDoolin said:
I have been thinking today about an asterisk-shaped model: a set of eight or more tubes that would show the light paths as they came out of it, as well as the Lorentz-contracted moving structure.

What I'd want to show is an animation of the paths of the light following the paths predicted by the aberration equation:

[Attachment 82988: aberration diagram from http://mathpages.com/rr/s2-05/2-05.htm ]

But at the same time as it shows those paths of light, it should also show, overlaid, the simply Lorentz-contracted structure of the object.

Here, I just made this video showing the concept: how the tubes of the source can be pointed one way in the observer's reference frame, while the actual photon paths can point in an entirely different way. It's pretty sloppy, but I think, at least, it gets the idea across.
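A minimal numerical sketch of that idea (not JDoolin's actual animation code; the speed and tube count are illustrative): take tube directions that are isotropic in the asterisk's rest frame, then compute both the Lorentz-contracted tube orientations and the aberrated photon directions in the observer's frame.

```python
import numpy as np

beta = 0.8                      # illustrative source speed (units of c)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# eight tube directions, isotropic in the asterisk's rest frame
phi = np.arange(8) * (2 * np.pi / 8)

# photon propagation directions in the observer's frame
# (relativistic velocity addition applied to the light rays)
ux = (np.cos(phi) + beta) / (1 + beta * np.cos(phi))
uy = np.sin(phi) / (gamma * (1 + beta * np.cos(phi)))

# orientations of the rigid tubes in the observer's frame:
# x-extent Lorentz contracted, y-extent unchanged
tx, ty = np.cos(phi) / gamma, np.sin(phi)

for k in range(8):
    tube_ang = np.degrees(np.arctan2(ty[k], tx[k]))
    ray_ang = np.degrees(np.arctan2(uy[k], ux[k]))
    print(f"rest-frame {np.degrees(phi[k]):6.1f} deg -> "
          f"tube {tube_ang:7.2f} deg, photon path {ray_ang:7.2f} deg")
```

The printout makes the point of the video explicit: in the observer's frame the photon paths are beamed toward the direction of motion, while the contracted tubes point in quite different directions.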

 
  • #72
Ken G said:
if the bullets are photons
Light is not like bullets. If we are in flat spacetime and the rest frame of the camera is inertial, then light is propagating isotropically in all directions from each emission point in the rest frame of the camera, so the wave fronts reaching the static camera are perpendicular to the straight line between the camera and the emission point.

Ken G said:
Is that not what aberration is?
There is no aberration in the rest frame of the camera.
 
  • #73
All right, thanks to everyone for clearing that up for me, I was definitely making aberration too difficult. My bad.

Anyway, I think I see that Baez is right, though Terrell's "invisibility" claim is still a bit of a stretch-- two cameras in relative motion that image the same object at the same place and time will always image the same shape, it will just appear to be in a different direction, and it can also have a different total angular size.

If how that could be is still as unclear to anyone else as it was to me, consider Baez' two pinhole cameras, taking a picture when at the same place and time, but this time let's put the "plus sign" a little ahead of the camera that is tracking its motion, such that when the two cameras coincide, the stationary camera takes an image of the plus sign apparently at its point of closest approach. The plus sign is riding on a string through its horizontal piece, and the moving camera is trailing it on a parallel string that passes through the stationary camera.

So if the plus sign is moving left to right, this of course means the plus sign is really a bit to the right of the point of closest approach at the moment the cameras coincide and snap their photos. We know that the image from the stationary camera will show a length contracted horizontal piece, because we agree that in that frame we can correctly reckon that it is length contracted. The moving camera, on the other hand, sees the plus sign as being a little rotated, because it is trailing it a bit, so the photo in the moving camera will also have a shortened horizontal piece. Baez is saying that the amount it will be shortened in this example is exactly the Lorentz factor, such that the shapes of the plus signs will be the same in the two photos. So to get the moving camera to coincide with the stationary one when the stationary one needs to snap this photo, the moving camera must trail the plus sign by exactly the angle needed to make the plus sign look Lorentz contracted. That the two images look the same is the basis for saying length contraction is "invisible"-- it's an ambiguity between whether the visible length contraction is real as for the stationary camera, or due to rotation as for the moving camera, when just looking at the "literal" images. This would seem to be a special feature of Lorentz contraction, perhaps an equivalent way to assert the postulates of relativity.
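A quick check of the numbers in this scenario, using the aberration relation from the thread summary (run in the direction that takes the stationary camera's frame to the co-moving one) and working to first order in the plus sign's angular size: the stationary camera receives the light from the perpendicular direction, ##\cos\alpha = 0##, and aberration places that same ray in the co-moving camera's frame at

$$\cos\alpha' \;=\; \left.\frac{\cos\alpha+\beta}{1+\beta\cos\alpha}\right|_{\cos\alpha=0} \;=\; \beta ,$$

i.e. the co-moving camera views the plus sign at an angle ##\chi## ahead of the perpendicular with ##\sin\chi=\beta##. A small arm lying along the direction of motion, viewed from that angle, is foreshortened by ##\cos\chi=\sqrt{1-\beta^2}=1/\gamma##, exactly the Lorentz factor described above.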
 
  • #74
A.T. said:
Light is not like bullets. If we are in flat spacetime and the rest frame of the camera is inertial, then light is propagating isotropically in all directions from each emission point in the rest frame of the camera, so the wave fronts reaching the static camera are perpendicular to the straight line between the camera and the emission point.

I would just change the word "point" to "event"

There is no aberration in the rest frame of the camera.

That is, there are no obliquely traveling wave-fronts of light from any event.
 
  • #75
Ken G said:
All right, thanks to everyone for clearing that up for me, I was definitely making aberration too difficult. My bad.

Anyway, I think I see that Baez is right, though Terrell's "invisibility" claim is still a bit of a stretch-- two cameras in relative motion that image the same object at the same place and time will always image the same shape, it will just appear to be in a different direction, and it can also have a different total angular size.

If how that could be is still as unclear to anyone else as it was to me, consider Baez' two pinhole cameras, taking a picture when at the same place and time, but this time let's put the "plus sign" a little ahead of the camera that is tracking its motion, such that when the two cameras coincide, the stationary camera takes an image of the plus sign apparently at its point of closest approach. The plus sign is riding on a string through its horizontal piece, and the moving camera is trailing it on a parallel string that passes through the stationary camera.

So if the plus sign is moving left to right, this of course means the plus sign is really a bit to the right of the point of closest approach at the moment the cameras coincide and snap their photos. We know that the image from the stationary camera will show a length contracted horizontal piece, because we agree that in that frame we can correctly reckon that it is length contracted. The moving camera, on the other hand, sees the plus sign as being a little rotated, because it is trailing it a bit, so the photo in the moving camera will also have a shortened horizontal piece. Baez is saying that the amount it will be shortened in this example is exactly the Lorentz factor, such that the shapes of the plus signs will be the same in the two photos. So to get the moving camera to coincide with the stationary one when the stationary one needs to snap this photo, the moving camera must trail the plus sign by exactly the angle needed to make the plus sign look Lorentz contracted. That the two images look the same is the basis for saying length contraction is "invisible"-- it's an ambiguity between whether the visible length contraction is real as for the stationary camera, or due to rotation as for the moving camera, when just looking at the "literal" images. This would seem to be a special feature of Lorentz contraction, perhaps an equivalent way to assert the postulates of relativity.

Are you really coming back to the conclusion that you can't "see" Lorentz Contraction here? I hope I have helped you to understand aberration a bit better, but it was definitely not my goal to get you to come to that conclusion!
 
  • #76
I was previously going to point out a couple of no-cost programs that illustrate all the relevant effects (aberration/doppler/headlight); I now have the full information so here goes (search the web if you need to know more about what they do):
1. Real-time relativity. Windows/Mac downloads but runs under Wine in Linux. The author's SR primer is here.
2. A slower speed of light, Windows/Mac/Linux. Testbed for MIT's SR game framework, now starting to look like abandonware.
Enjoy!
 
  • #77
JDoolin said:
Are you really coming back to the conclusion that you can't "see" Lorentz Contraction here? I hope I have helped you to understand aberration a bit better, but it was definitely not my goal to get you to come to that conclusion!
No, I agree that the standard meaning of "see" is much broader than the limited meaning applied by Terrell. And I appreciate your efforts to elucidate all the various factors here!

What I'm actually saying is that it is the conclusion of Baez that appears to be correct, and I did not see that before. Baez' claim is that two cameras taking a picture at the same place and time will always photograph small shapes the same. The shapes may appear at different places in the visual field if one of the cameras is subject to aberration, and there can also be some changes in total angular size relating to similar issues, but the two shapes will be the same, i.e., a plus sign seen as having a given contrast in the lengths of its pieces in one photograph will have that same contrast in the other photograph as well, and that would not be true in Galilean relativity it seems. How to express that fact in words is a bit tricky!
 
  • Like
Likes JDoolin
  • #78
m4r35n357 said:
I was previously going to point out a couple of no-cost programs that illustrate all the relevant effects (aberration/doppler/headlight); I now have the full information so here goes (search the web if you need to know more about what they do):
1. Real-time relativity. Windows/Mac downloads but runs under Wine in Linux. The author's SR primer is here.
2. A slower speed of light, Windows/Mac/Linux. Testbed for MIT's SR game framework, now starting to look like abandonware.
Enjoy!

Here is another potential one:

http://www.visus.uni-stuttgart.de/u...vistic_Visualization_by_Local_Ray_Tracing.pdf
As future work, we plan to extend our software to a freely available tool usable for teaching in the context of Special Relativity. We want to allow the user to interactively explore relativistic effects by supporting import of arbitrary 3D models from common file formats and graphical interaction with the relevant visualization parameters, e.g., observer’s position, directions of motion, speed, and the different visual effects shown (geometric only, Doppler shift, and searchlight effect).

Maybe they have made it available already or would if enough people ask them. I think it would be a great tool.
 
  • #79
JDoolin said:
I would just change the word "point" to "event"
I said "point" because I meant a spatial point, not a point in space-time. Light is propagating isotropically in all directions of space, not of space-time.
 
  • #80
A.T. said:
Maybe they have made it available already or would if enough people ask them. I think it would be a great tool.
I'd like to think so too but the paper is from 2010 and they don't even give the program a name to search for. At first glance at least some of the emphasis of their approach is on numerical "fudging" for efficiency. I think the approach taken by Real Time Relativity is "purer" mathematically. Section 10 of the primer deals with rendering via stereographic projection, and builds on the work of Penrose.
[UPDATE]
No sooner had I posted than I found a recent program by one of the authors called GeoVis here. Unfortunately it appears to be unavailable to the general public, and with a license that I can't be bothered to even read. Shame as apparently it's a Linux program, and that's one of my things . . .
[UPDATE 2] Try here.
 
Last edited:
  • #81
A.T. said:
I said "point" because a meant a spatial point, not a point in space-time. Light is propagating isotropically in all directions of space, not of space-time.

By point, did you mean a stationary point in the observer's reference frame, or a point attached to an object which may or may not be moving in the observer's reference frame?

Because I think we've pretty well established that if the light from a point (attached to an object) is isotropic in one reference frame, it is not isotropic if you are moving fast with respect to that object. That's what the diagram in post 66 is showing, and what I tried to explain in more detail in the video in post 71 (how an isotropic arrangement of beams in one reference frame leads to a non-isotropic arrangement of beams in another reference frame).

Maybe I'm misunderstanding your meaning of the word isotropic here. In the diagram in post 66, you see that the intensity of the light must be much greater coming off the front side of the source than from the back end. But the speed of light is the same in all directions. So if by isotropic you mean "the same speed" I'd agree with you, but if by isotropic, you mean "the same intensity" I'd have to disagree with you.
 
Last edited:
  • #82
JDoolin said:
By point, did you mean a stationary point in the observer's reference frame
This.
 
  • Like
Likes JDoolin
  • #83
Sorry about that. I have a tendency to compulsively edit my posts for a few minutes after posting. I may have added about three paragraphs since your response.
 
  • #84
m4r35n357 said:
I think the approach taken by Real Time Relativity is "purer" mathematically.
After reading this:
http://people.physics.anu.edu.au/~cms130/RTR/Physicist.html
"The 2D screen image is created using the computer graphics technique known as environment mapping, which renders the 3D virtual world onto a 2D cube map."

I'm not sure if this accounts for differential signal delays, which are key to the visual effects for closely passing objects discussed here. It depends how it "renders the 3D virtual world onto a 2D cube map". The 4D ray-tracing approach seems to be the most general to me.
 
  • #85
JDoolin said:
So if by isotropic you mean "the same speed"
This.
 
  • Like
Likes JDoolin
  • #86
Ken G said:
No, I agree that the standard meaning of "see" is much broader than the limited meaning applied by Terrell. And I appreciate your efforts to elucidate all the various factors here!

What I'm actually saying is that it is the conclusion of Baez that appears to be correct, and I did not see that before. Baez' claim is that two cameras taking a picture at the same place and time will always photograph small shapes the same. The shapes may appear at different places in the visual field if one of the cameras is subject to aberration, and there can also be some changes in total angular size relating to similar issues, but the two shapes will be the same, i.e., a plus sign seen as having a given contrast in the lengths of its pieces in one photograph will have that same contrast in the other photograph as well, and that would not be true in Galilean relativity it seems. How to express that fact in words is a bit tricky!

And I claim that is false and provable from my detailed analysis of the moving rod. If you had a cross (with equal arms in its rest frame) with one arm in the direction of motion, then there would be one moment where the length of one arm would be shorter by gamma than the other arm. A camera at rest with respect to the cross at the same time would not see this. This is actually in agreement with Terrell (but not Baez if you quote him correctly). Terrell would say that the moving cross looks rotated such that one arm does have shorter angular span than the other. My analysis agrees with rotation (as one possible visual interpretation) in that the markings on the shorter arm would be stretched on one side and compressed on the other in a way that precisely matches rotation. However, if you imagined this arm parallel to motion as hollow, moving along a rigid rod at rest with respect to the camera, you would be forced to re-interpret the same image as contraction with stretching and compression, because rotation could no longer be sustained as a visual interpretation. Penrose actually makes this point in his book "Road to Reality": the rotation interpretation would be interfered with if you introduce other objects moving at different speeds (he mentioned a track rather than the hollowed-out arm I proposed).

[I used brute force ray tracing in my analysis with no prior assumptions. I posted the resulting formulas, but not the derivation. If I have time at some point, I may post the derivation (it is actually only a page long on my hand written sheet). ]
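For concreteness, here is a minimal retarded-time ray-tracing sketch of that kind of check (it is not PAllen's actual calculation; the rod speed, distance, and marking positions are illustrative): markings on a rod moving along x past a stationary pinhole camera are imaged by solving for the emission time of each received ray, and the result is compared with the aberration-transformed image from the momentarily coincident co-moving camera.

```python
import numpy as np

beta = 0.8                            # illustrative rod speed (c = 1)
gamma = 1.0 / np.sqrt(1 - beta**2)
d = 10.0                              # perpendicular distance of the rod's line of motion
x0 = np.linspace(-1.0, 1.0, 11)       # marking positions in the rod's rest frame

# --- stationary camera at the origin, photo snapped at t = 0 ---
# marking x0 follows x(t) = x0/gamma + beta*t in the camera frame
# (rod centre crosses x = 0 at t = 0); solve -t_e = sqrt(x(t_e)^2 + d^2), t_e < 0
A = x0 / gamma
t_e = (A * beta - np.sqrt(A**2 + (1 - beta**2) * d**2)) / (1 - beta**2)
x_app = A + beta * t_e                           # apparent positions at emission
cos_traced = x_app / np.sqrt(x_app**2 + d**2)    # direction cosines from the +x axis

# --- co-moving camera, coincident with the stationary one at that event ---
# in the rod frame the scene is static: marking x0 at height d, camera at x' = 0
cos_rest = x0 / np.sqrt(x0**2 + d**2)
# aberrate the rest-frame directions back into the stationary camera's frame
cos_aberrated = (cos_rest - beta) / (1 - beta * cos_rest)

print(np.allclose(cos_traced, cos_aberrated))    # True: the two photos agree ray by ray
```

Printing True just says that, ray by ray, the stationary camera's photograph is the aberration image of the co-moving camera's photograph, which is the point of contention above; whether one then calls what it shows 'contraction' or 'rotation' is the interpretive question.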

[Edit1: One caveat is that I have not analyzed the image for the camera co-moving with the cross, located at the same place and time as the camera taking the image I described above. It is possible that such an analysis could vindicate Baez as follows: in the frame of this camera co-moving with the cross, the cross is not being viewed head on, but substantially displaced; that is, at rest but far from head on in the direction of one arm. Then that arm would subtend less angle, and show distortion consistent with rotation. If that is the case, then Baez is vindicated (in the sense that both cameras would see a distorted cross). Again, if I have time in the future I will try to check whether this is what occurs. At this moment, I am suspecting that it does, and Baez is right, but in a different sense than Ken G. implies above.]

[Edit2: Further, if I am right about how Baez is correct in a certain sense, then if you start from a camera at rest with respect to the cross looking head on, and ask about a camera passing by at high speed snapping at that moment, what it would see is a non-distorted cross shifted a good distance forward, such that the light delay induced stretching compensates for the length contraction. Thus, a big part of this is simply that what one camera sees as 'head on' the other [momentarily colocated camera] sees as displaced in such a way that the combination of effects (displacement, light delay, contraction) preserves the shape.

So, while this is all interesting, it remains true for watching the cross go by:

1) There will be a time when one of its arms is shorter by gamma (and, at this time, it will look like it is being viewed head on - equidistant on either side of your line of sight). It is really hard for me to accept any definition of 'seeing' that doesn't call this directly seeing length contraction (despite the somewhat perverse way Baez's claim may remain true, as outlined in edit1).

2) At all times, length contraction is visible in the obvious way that if you account for light delays without assuming length contraction, you predict the wrong image. Thus what you see is at all times directly seeing what is expected by length contraction, and not what you would see without it.

]
 
Last edited:
  • Like
Likes mattt and JDoolin
  • #87
m4r35n357 said:
I was previously going to point out a couple of no-cost programs that illustrate all the relevant effects (aberration/doppler/headlight); I now have the full information so here goes (search the web if you need to know more about what they do):
1. Real-time relativity. Windows/Mac downloads but runs under Wine in Linux. The author's SR primer is here.
2. A slower speed of light, Windows/Mac/Linux. Testbed for MIT's SR game framework, now starting to look like abandonware.
Enjoy!

I just now played A Slower Speed of Light on my machine... Although my graphics card is woefully insufficient for smooth graphics, I was able to get through the game at the lowest resolution with some choppy graphics. I thought it was quite nice: artistically done, enjoyable, and probably a lot more fun on a gaming computer. There was a question I had related to the graphics, though.

When I got up to 30, 40, 50 percent of the speed of light, I was happy to find, as I would expect, that when I accelerated toward objects, they immediately receded into the background, and when I backed up, the objects sprang forward. That is totally what I would have expected from aberration.

However, I was trying hard to watch the cross-section of objects against the ground. While the ground in front of me, itself, stretched out significantly, the actual mushroom and hut cross-sections against the ground did not seem to be stretched. So I'm wondering if I'm actually seeing the phenomenon that Penrose, Terrell and/or Baez are talking about? Or did the programmers shortcut the rendering of the huts, and just render them as circular huts and circular mushrooms? If the demonstration represents an accurate rendering, I'll have to eat my words... But I can't imagine how the ground could appear stretched while the objects along the ground do not appear stretched by the same ratio?

I couldn't see any aberration in the shapes of individual objects until the very end, when I collected the final watermelon. Then the aberration in the yz-plane (away and vertical) became plainly visible. However, I still don't think I saw aberration in the xy-plane. (away and horizontal.)
 
Last edited:
  • #88
PAllen said:
And I claim that is false and provable from my detailed analysis of the moving rod. If you had a cross (with equal arms in its rest frame) with one arm in the direction of motion, then there would be one moment where the length of one arm would be shorter by gamma than the other arm.
Yes, I originally thought that had to make Baez wrong. But it's not something he would likely get wrong, so it's very odd. I agree that if you put the moving camera right across from the moving cross, it has to see a symmetric cross at all times. But it certainly doesn't seem like the stationary camera will see a symmetric cross when the two cameras coincide in that case (though I suggested a different case where it seems like they might both see a contracted horizontal arm), because in this case, the image at that moment won't look like it is at the point of closest approach, it will look like it hasn't gotten there yet. But that should still make it look asymmetric. So we seem to have a case where one image looks symmetric, and the other doesn't. But this is in stark contrast to the conclusion of both Terrell and Baez, who cite high-powered mathematics and seem to understand exactly what they are saying.
What Baez says is:
"First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit. "
This seems to also be said by Terrell:
" Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio, irrespective of their velocity relative to the meter stick."
Terrell would say that the moving cross looks rotated such that one arm does have shorter angular span than the other.
Terrell does at one point say it won't look contracted, it will look rotated, but a rotated cross does not look like "precisely the same picture".
My analysis agrees with rotation (as one possible visual interpretation) in that the markings on the shorter arm would be stretched on one side and compressed on the other in a way that precisely matches rotation.
I don't understand that, wouldn't rotation contract all the tickmarks on the horizontal arm? But more importantly, we cannot even compare different tickmark lengths, because both Terrell and Baez are talking about an effect that is first order in smallness (the "infinitesimally small limit"), so no second order terms like a gradient in the stretching.
However, if you imagined this arm parallel to motion as hollow, moving along a rigid rod at rest with respect to the camera, you would be forced to re-interpret the same image as contraction with stretching and compression, because rotation could no longer be sustained as a visual interpretation. Penrose actually makes this point in his book "Road to Reality": the rotation interpretation would be interfered with if you introduce other objects moving at different speeds (he mentioned a track rather than the hollowed-out arm I proposed).
But note that does not say the stationary camera would not also see something it could interpret as a rotation, so this doesn't speak to the issue of differences between the images.
It is possible that such an analysis could vindicate Baez as follows: in the frame of this camera co-moving with the cross, the cross is not being viewed head on, but substantially displaced; that is, at rest but far from head on in the direction of one arm. Then that arm would subtend less angle, and show distortion consistent with rotation. If that is the case, then Baez is vindicated (in the sense that both cameras would see a distorted cross). Again, if I have time in the future I will try to check whether this is what occurs. At this moment, I am suspecting that it does, and Baez is right, but in a different sense than Ken G. implies above.]
Actually, the scenario you describe here is exactly the one I described above. (Ignore the false turn on aberration, you are right that aberration only appears for a camera we regard as moving.) I figured that's what made Baez right, but what about the case where the moving camera is directly across from the cross, where it will always see a symmetric cross-- how could the stationary camera see that when they coincide? It doesn't seem like the stationary camera would ever see that, but if it ever does, then Baez must be right. If not, then I'm confused about what they mean by preserving the "shape of an object"-- if I take a cross in my hand, and rotate it, is that the same shape only rotated, or a different shape?
 
Last edited:
  • #89
JDoolin said:
If the demonstration represents an accurate rendering, I'll have to eat my words... But I can't imagine how the ground could appear stretched while the objects along the ground do not appear stretched by the same ratio?
Could it be the difference between first-order-small effects that don't show any difference, and larger-solid-angle pictures where you start to see the distortions? Terrell only ever claimed that small shapes appeared the same, larger images require some type of cobbling together that might involve bringing in "non-literal" information, analogous to how all local frames in GR are Minkowski but the equivalence principle breaks down on larger scales.
 
Last edited:
  • #90
I'll comment more later, for now, just this.
Ken G said:
Terrell does at one point say it won't look contracted, it will look rotated, but a rotated cross does not look like "precisely the same picture". I don't understand that, wouldn't rotation contract all the tickmarks on the horizontal arm?
Not at all. One side gets closer to you, the other side further away. The angle subtended by markings closer to you will be greater than those further away. The result I got for this effect precisely matches rotation, so I find it very hard to believe this is not what Terrell is referring to. Penrose also describes this effect.
Ken G said:
But more importantly, we cannot even compare different tickmark lengths, because both Terrell and Baez are talking about an effect that is first order in smallness (the "infinitesimally small limit"), so no second order terms like a gradient in the stretching.
I don't necessarily think their results are that limited. A small object can still have markings on it.
Ken G said:
Actually, the scenario you describe here is exactly the one I described above. (Ignore the false turn on aberration, you are right that aberration only appears for a camera we regard as moving.) I figured that's what made Baez right, but what about the case where the moving camera is directly across from the cross, where it will always see a symmetric cross-- how could the stationary camera see that when they coincide? It doesn't seem like the stationary camera would ever see that, but if it ever does, then Baez must be right. If not, then I'm confused about what they mean by preserving the "shape of an object"-- if I take a cross in my hand, and rotate it, is that the same shape only rotated, or a different shape?

When what you call the stationary camera coincides and sees a symmetric cross, in that camera's frame its viewing is NOT head on at that event. Aberration will have changed the incoming light angle such that the image appears to still be approaching, and the light delay stretching will compensate for the length contraction such that it produces a symmetric photograph. The one case I can't quite resolve is that a rapidly approaching cross still far away will have the parallel (to motion) arm greatly stretched by light delay (by much more than length contraction can compensate, when it is far away). I can't find an all stationary analog of this case.

[edit: I think I resolved this last case, so there are no discrepancies between my understanding and Terrell (Baez?), except for describing any of this as not seeing length contraction.

The resolution for approach is to consider a camera stationary relative to the cross, but displaced, that sees the shorter arm subtend some angle, e.g. 3 degrees, at a viewing angle of, say, 40 degrees to the left. Then a moving camera approaching the cross, momentarily coinciding with this camera, sees the same 3 degree subtended angle, but the viewing angle is interpreted as much more than 40 degrees off head on. Thus, compared to a similar cross stationary with respect to this 'moving camera', at the same viewing angle, the moving cross will appear to have one arm very elongated.

Properly accounting for frame dependence of viewing angle appears to resolve all remaining anomalies, as I see it.]
 
Last edited:
  • #91
PAllen said:
Not at all. One side gets closer to you, the other side further away.
Not in the limit of infinitesimally small images; in that limit, a rotation will contract uniformly along the horizontal direction. It has to be a linear transformation.
The angle subtended by markings closer to you will be greater than those further away. The result I got for this effect precisely matches rotation, so I find it very hard to believe this is not what Terrell is referring to. Penrose also describes this effect.
I don't understand how you can get that it exactly matches rotation, does not the scale of the effect you describe depend on the ratio of how wide the cross is, to how far it is away? But that ratio doesn't appear in the analysis, it is a limit.
I don't necessarily think their results are that limited. A small object can still have markings on it.
Yes, but all transformations on those markings must be linear, so no gradients in what happens to them.
When what you call the stationary camera coincides and sees a symmetric cross, in that camera's frame its viewing is NOT head on at that event.
This is the scenario that is not clear to me-- I can't see if there will ever be a time when the stationary camera sees a symmetric cross. But for Terrell and Baez to be right, there must be such a time, and it must be when the moving camera directly opposite the cross coincides with the stationary camera.
Aberration will have changed the incoming light angle such that the image appears to still be approaching, and the light delay stretching will compensate for the length contraction such that it produces a symmetric photograph.
When the moving camera is directly opposite the cross, we can certainly agree the cross will appear symmetric. You are explaining how that is reckoned in the frame of the stationary camera, but in any event we know it must be true.
The one case I can't quite resolve is that a rapidly approaching cross still far away will have the parallel (to motion) arm greatly stretched by light delay (by much more than length contraction can compensate, when it is far away). I can't find an all stationary analog of this case.
But are you also including the rotation effect, not just length contraction? Since none of the locations we could put the moving camera will ever see a horizontal arm that is wider than the vertical arm, it must hold that the stationary camera never sees that either.

But I think your point that the horizontal arm is stretched by time delay effects is the crucial reason that there is a moment when the cross looks symmetric to the stationary camera. I believe that moment will also be when the camera directly opposite from the cross passes the stationary camera. That is, it is the moment when the cross is actually at closest approach. A moment like that would make Baez right. Note that for any orientation of the moving camera, relative to the comoving cross, there is only one moment when the stationary camera needs to see the same thing-- the moment when the two cameras are coincident. If we imagine a whole string of moving cameras, then the stationary camera will always see what the moving camera sees that is at the same place as the stationary camera-- but at no other time do they need to see the same thing.

If so, this means it is very easy to tell what shape the stationary camera will photograph-- simply ask what the moving camera would see that is at the same place when the stationary camera takes its picture, and what the moving camera sees just depends on its location relative to the comoving object. You just have to un-contract the string of moving cameras, and measure their angle to the object, and that's the angle of rotation the stationary observer will see at that moment.
 
Last edited:
  • #92
Ken G said:
Not in the limit of infinitesmally small images, in that limit, a rotation will contract uniformly along the horizontal direction. It has to be a linear transformation.
I disagree on how restrictive the Terrell/Baez conclusion is. It may only be exact in some limit, but it is good for 'reasonably small'.
Ken G said:
I don't understand how you can get that it matches rotation, does not the scale of the effect you describe depend on the ratio of how wide the cross is, to how far it is away? But that ratio doesn't appear in the analysis, it is a limit.
No, I disagree. Moving a tilted ruler further away linearly scales the image, but does not change the ratio of subtended angle at one end compared to the other for e.g. centimeter markings. Per my computation, the effect does match that produced by rotation. [edit: well, as long as you are not too close. Once you are far enough that further distance is linear shrinkage, just imagine the tilted ruler against a non-tilted ruler. The whole image scales, thus preserving the ratio of subtended angle between a closer inch and a further inch on the tilted ruler] [edit 2: OK, I see that if you allow the angular span of a tilted ruler to go to zero with distance, the ratio of angles subtended by ruler lines goes to 1. However, if you fix the angular span of a tilted ruler (e.g. 2 degrees), then distance doesn't matter and the ratio of front and back ruler lines remains constant. This is what I was actually modeling when comparing to rotation - all angles. I remain convinced that the rotation model is quite accurate for small, finite spans, e.g. several degrees.]
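A small geometric sketch of that point about markings on a tilted ruler (ordinary perspective only, no relativity; the ruler length, distance, and tilt are illustrative):

```python
import numpy as np

def mark_angle_ratio(length, distance, tilt_deg, n_marks=100):
    """Ratio of the angles subtended by the nearest and farthest
    markings of a ruler tilted out of the plane of the sky."""
    psi = np.radians(tilt_deg)
    s = np.linspace(-length / 2, length / 2, n_marks + 1)
    x = s * np.cos(psi)                 # across the line of sight
    y = distance + s * np.sin(psi)      # along the line of sight
    theta = np.arctan2(x, y)
    widths = np.diff(theta)
    return widths[0] / widths[-1]

# fixed ruler, growing distance: the near/far ratio tends to 1
for D in (5.0, 50.0, 500.0):
    print(D, mark_angle_ratio(1.0, D, 60.0))

# fixed angular span (ruler scaled with distance): the ratio stays constant
for k in (1.0, 10.0, 100.0):
    print(k, mark_angle_ratio(1.0 * k, 5.0 * k, 60.0))
```

The two loops show the two cases in the edits above: shrink the angular span with distance and the front/back contrast washes out, but hold the angular span fixed and the contrast is independent of distance.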
Ken G said:
Yes, but all transformations on those markings must be linear, so no gradients in what happens to them.
I disagree. I claim the result goes beyond this.
Ken G said:
This is the scenario that is not clear to me-- I can't see if there will ever be a time when the stationary camera sees a symmetric cross. But for Terrell and Baez to be right, there must be such a time, and it must be when the moving camera directly opposite the cross coincides with the stationary camera.
When the moving camera is directly opposite the cross, we can certainly agree the cross will appear symmetric. You are explaining how that is reckoned in the frame of the stationary camera, but of course we know it must be true.
I think this is the crucial issue-- it is that stretching that allows the cross to have a moment when it looks symmetric to the stationary camera, and I believe that moment will also be when the camera directly across from the cross passes the stationary camera. That would make Baez right.

I agree, and I thought that's what I was explaining in my last few posts.

Where I continue to disagree (with you, but I think agree with Terrell and Penrose) is that I think there is more to image rotation than you want to admit. Consider it from another angle, so to speak. Imagine a camera and cross stationary with respect to each other, but with the cross displaced from head on. One arm will subtend less angle than the other, and it would look (exactly) rotated relative to a colocated stationary cross turned head on to the camera. To be concrete, let us imagine the displacement is to the left. Now add a camera moving left to right, past this stationary camera. It will see a viewing angle (for the appropriate set up) of 'head on' due to aberration of viewing angle. The image seen by the stationary camera will have moved to a perpendicular viewing angle, but otherwise essentially unchanged. Thus it will see a rotated image in the head on viewing angle, with the rotation producing the contraction and explaining the distribution of ruler lines on this cross arm. Then, my final comment is that this is only one way to interpret the image. If you introduce another element that establishes rotation could not have occurred, you change your interpretation to contraction and distortion - that happens to match rotation.
 
Last edited:
  • #93
I will attempt another summary, similar to #49, that includes a full understanding of Terrell/Penrose (I haven't looked as much at Baez), explaining my view that, while these results are accurate when properly understood, common statements of them are inaccurate.

1) A common sense definition of 'seeing length contraction' means seeing it with knowledge of the object's rest characteristics. It is only relative to those characteristics that there is any meaning to 'contraction'.

2) There are obvious ways to directly measure/see any changes in cross section implied by the coordinate description. Simply have the object pass very close to a sheet of film, moving along it (not towards or away) and have a bright flash from very far away so you get as close as you want to a plane wave. Then circles [and spheres] becoming ovals, and every other aspect of the coordinate description, will be visible. Note that in a frame co-moving with the object, the plane wave reaching the film will be considered angled, and the exposure non-simultaneous. It is precisely because this method directly measures simultaneity across a surface in a given frame that it directly detects the coordinate description of length contraction.

3) The impact of light delays on idealized camera image formation has nothing to do with SR. However, it combines with SR in such a way that, under a common sense definition of 'see', length contraction is always visible (if it occurs; e.g. not for objects fully embedded in the plane perpendicular to their motion). That is, if you establish what you would see from light delay under the assumption that the object didn't contract, and compare to what you would see given the contraction, they are different. You have thus seen (the effect of, and verified) length contraction.

4) To my mind, a correct description of the Terrell/Penrose result is that they have described a much more computationally elegant way (compared to ray tracing) to arrive at the image detected by any idealized camera, that allows one to qualitatively arrive at a result with often no computation at all.

A) Instead of ray tracing based on a world tube representation of the object, simply represent the image in terms of angles for a camera at rest with respect to the object at the detection event of interest. Then apply aberration to get the angular displacement of all detected rays in a camera moving in any way at this same event. This method is completely general and exact, up to having a frame in which you can ignore the object's motion (e.g. for a swirling gas cloud where you care about the details, there is no small collection of rest frames you can use). Given the static nature of the analysis before applying aberration, this is a huge simplification. (A short code sketch of this step appears at the end of this list.)

B) For objects of smallish size (not just infinitesimal objects; size defined by subtended angle), the result of (A) is (to good approximation) to shift the stationary (with respect to object) camera image to a different viewing position (with some scaling as well). This implies apparent visual rotation in a substantive sense. Viewing a sphere with continents on it from a moving camera, the apparent hemisphere seen will correspond to a different viewing angle than the one you are sighting along. The markings on a rod will appear distorted (relative to what is expected for the viewing angle of the moving camera) as if rotated by the change in viewing angle between the stationary and moving cameras. All of these results can be had, much more laboriously, by direct ray tracing in the frame of the moving camera, with the object properly represented as a world tube.

C) Summarizing A) and B): to describe them as "invisibility of length contraction" is physically absurd, not just because of the logical point made in (3), but also because, if additional elements are introduced into the visual scene that are stationary with respect to the camera considered moving in the 4)A) analysis, you will see that the apparent rotation of the image of the moving object is illusory and must be replaced by an alternate interpretation of the same image - that 'actual' contraction plus light delay is the only interpretation consistent with the whole scene.
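A compact sketch of step 4)A) (the function below is an illustration, not code from this thread; the sign convention assumes the second camera moves toward the ##\theta = 0## direction, and the scale factor is quoted in the matching convention):

```python
import numpy as np

def aberrate_image(theta, phi, beta):
    """Method (A) as a sketch: map ray directions (theta measured from the
    boost axis, phi around it) recorded by a camera at rest with respect to
    the object into the directions recorded by a momentarily coincident
    camera moving with speed beta toward theta = 0.  Also return the local
    angular magnification of the image."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    cos_t = (np.cos(theta) + beta) / (1.0 + beta * np.cos(theta))
    scale = 1.0 / (gamma * (1.0 + beta * np.cos(theta)))   # local image scale factor
    return np.arccos(cos_t), phi, scale
```

The whole static-scene image is pushed to new viewing angles by one formula; the varying local scale is the Doppler-related scale change that Terrell's abstract mentions (which ratio one calls 'the' Doppler factor is a convention), and its variation across a large image is why only smallish shapes come out unchanged.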
 
Last edited:
  • #94
PAllen said:
I will attempt another summary, similar to #49, that includes a full understanding of Terrell/Penrose (I haven't looked as much at Baez), explaining my view that, while these results are accurate when properly understood, common statements of them are inaccurate.

1) A common sense definition of 'seeing length contraction' means seeing it with knowledge of the object's rest characteristics.
...
...
C) Summarizing A) and B): to describe them as "invisibility of length contraction" is physically absurd, not just because of the logical point made in (3), but also because, if additional elements are introduced into the visual scene that are stationary with respect to the camera considered moving in the 4)A) analysis, you will see that the apparent rotation of the image of the moving object is illusory and must be replaced by an alternate interpretation of the same image - that 'actual' contraction plus light delay is the only interpretation consistent with the whole scene.
This seems well argued but I have always had a problem with 'actual contraction'. If you mean what I think you mean, I don't see how the object can have a different 'actual contraction' for different observers. I understand that different observers might 'measure' a contracted length, but it is a frame dependent measurement. In its rest frame the object does not experience contraction.
 
  • #95
PAllen said:
[edit 2: OK, I see that if you allow the angular span of a tilted ruler to go to zero with distance, the ratio of angles subtended by ruler lines goes to 1. However, if you fix the angular span of a tilted ruler (e.g. 2 degrees), then distance doesn't matter and the ratio of front and back ruler lines remains constant. This is what I was actually modeling when comparing to rotation - all angles. I remain convinced that the rotation model is quite accurate for small, finite spans, e.g. several degrees.]
This is the crux of the matter, it is what I find confusing about the language relating to "rotation." A rotation looks different at different angular sizes, because of how it makes some parts get closer, and other parts farther away. Is that being included, or just the first-order foreshortening? And what angular scales count as "sufficiently small"? Baez said:
"Well-known facts from complex analysis now tell us two things. First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit."

I interpreted that to mean the shapes are only preserved in the infinitesimally small limit, i.e., for the Lorentz contracted cross to look like a rotated cross, it has to be infinitesimally small, so this would not include how the forward tilted arm can look longer than the backward tilted arm on a large enough angular scale. You are saying I am overinterpreting Baez here, and what's more, your own investigation shows a connection between that longer forward arm, and what Lorentzian relativity actually does. So perhaps Baez missed that, or did not mean to imply what I thought he implied.

This is what Terrell says in his abstract:
"if the apparent directions of objects are plotted as points on a sphere surrounding the observer, the Lorentz transformation corresponds to a conformal transformation on the surface of this sphere. Thus, for sufficiently small subtended solid angle, an object will appear-- optically-- the same shape to all observers."

The answer must lie in the meaning of having a conformal transformation on the sphere of apparent directions. If we use JDoolin's asterisk, instead of a cross, we can see that a rotation will foreshorten the angles of the diagonal arms, and it is clear that a conformal transformation will keep those angles fixed, so certainly Terrell is saying that the Lorentz contraction will foreshorten the angles in exactly the same way. But what about the contrast in the apparent lengths of the arms tilted toward us and the arms tilted away, is that contrast also preserved in the conformal transformation? You are saying that it is, and that seems to be the key issue. We do have one more clue from Terrell's abstract:
"Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio"
So the word "precisely" says a lot, but what is meant by this change in scale, and is that change in scale uniform or only locally determined? You are saying that it looks precisely like a rotation, including the contrast between the fore and aft distortions, not just the first-order foreshortening effect.

A sphere with continents on it might be a good case to answer this. We all agree the sphere still looks like a sphere, and in some sense it looks rotated because we see different continents than we might have expected. But the key question that remains open is, do the continents in the apparent forward regions of the sphere appear larger than the continents in the most distant parts of the sphere, or is that element not preserved in the conformal transformation between the moving and stationary cameras? I agree Terrell's key result is essentially that it is easier to predict what you will see for small shapes by using the comoving camera at the same place and time as the stationary camera, but what we are wondering about is over what angular scale, and what types of detail, we should expect the two photos to agree on. The mapping between the two cameras is conformal, but it is not the identity mapping, so can we conclude the continents will look the same size in both photos? Certainly distortions on the surfaces of large spheres should look different between the two photos, but even large spheres will still look like spheres.
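One of the open questions here, whether "circles go to circles" survives beyond the infinitesimal limit, is easy to check numerically. A minimal sketch (the cone axis, opening angle, and boost speed are arbitrary illustrative choices): take the circular outline of a sphere on the sky, aberrate every ray, and test whether the result still lies on a circle of the sky, i.e. in a single plane of direction space.

```python
import numpy as np

beta = 0.9
gamma = 1.0 / np.sqrt(1 - beta**2)

# outline of a sphere = a circle of sky directions: a cone of half-angle 20 deg
# about an arbitrary axis tilted in the x-z plane
axis = np.array([np.cos(0.7), 0.0, np.sin(0.7)])
e1 = np.cross(axis, [0.0, 1.0, 0.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(axis, e1)
half = np.radians(20.0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
n = (np.cos(half) * axis[:, None]
     + np.sin(half) * (np.cos(t) * e1[:, None] + np.sin(t) * e2[:, None]))

# aberrate each sky direction into the frame of a camera moving along +x
denom = 1.0 + beta * n[0]
n_ab = np.vstack([(n[0] + beta) / denom,
                  n[1] / (gamma * denom),
                  n[2] / (gamma * denom)])

# the aberrated directions form a circle on the sky iff they lie in one plane,
# i.e. the 4 x N matrix [n; 1] is rank deficient
M = np.vstack([n_ab, np.ones_like(t)])
print(np.linalg.svd(M, compute_uv=False)[-1])    # ~0 (machine precision): still a circle
```

The vanishing singular value says the outline remains exactly a circle even for a large boost and a finite, non-infinitesimal outline; what the conformal map need not preserve is the relative apparent size of features inside that outline, which is exactly the continents question raised above.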
 
Last edited:
  • #96
Mentz114 said:
This seems well argued but I have always had a problem with 'actual contraction'. If you mean what I think you mean, I don't see how the object can have a different 'actual contraction' for different observers. I understand that different observers might 'measure' a contracted length, but it is a frame dependent measurement. In its rest frame the object does not experience contraction.
Substitute "actual contraction per some camera's frame of reference", if you prefer. The contraction is actual up to inclusion of interaction fields, e.g. an EM model of an object moving in some frame will have the EM field represented such that equilibrium distances of moving charges will be closer than modeled in a frame where the charges are not moving.
 
  • #97
Ken G said:
Could it be the difference between first-order-small effects that don't show any difference, and larger-solid-angle pictures where you start to see the distortions? Terrell only ever claimed that small shapes appeared the same, larger images require some type of cobbling together that might involve bringing in "non-literal" information, analogous to how all local frames in GR are Minkowski but the equivalence principle breaks down on larger scales.

Actually, I'm starting to think maybe the game designers rendered some of the objects in the game with the full aberration effect, and other objects in the game without it.

Here are four screen-captures from the promotional video at http://gamelab.mit.edu/games/a-slower-speed-of-light/

2015-05-04-RelativisticAberrationFormula-screenshots.PNG

This is a very short part of the promotional video but it captures several things. For instance, the distance between the two poles increases when the observer moves to the right, and it shrinks when the observer moves to the left (so long as the poles are on the right side of the observer's view).

Looking at the warping of this one structure in the game, it seems like they attempted to get the shapes right. The circles on the ground don't look quite circular, but rather they look like flattened ovals, as I think they should.

2015-05-04-RelativisticAberrationFormula-screenshots02.PNG
 
  • #98
Ken G said:
This is the crux of the matter, it is what I find confusing about the language relating to "rotation." A rotation looks different at different angular sizes, because of how it makes some parts get closer, and other parts farther away. Is that being included, or just the first-order foreshortening? And what angular scales count as "sufficiently small"? Baez said:
"Well-known facts from complex analysis now tell us two things. First, circles go to circles under the pixel mapping, so a sphere will always photograph as a sphere. Second, shapes of objects are preserved in the infinitesimally small limit."

I interpreted that to mean the shapes are only preserved in the infinitesimally small limit, i.e., for the Lorentz contracted cross to look like a rotated cross, it has to be infinitesimally small, so this would not include how the forward tilted arm can look longer than the backward tilted arm on a large enough angular scale. You are saying I am overinterpreting Baez here, and what's more, your own investigation shows a connection between that longer forward arm, and what Lorentzian relativity actually does. So perhaps Baez missed that, or did not mean to imply what I thought he implied.

This is what Terrell says in his abstract:
"if the apparent directions of objects are plotted as points on a sphere surrounding the observer, the Lorentz transformation corresponds to a conformal transformation on the surface of this sphere. Thus, for sufficiently small subtended solid angle, an object will appear-- optically-- the same shape to all observers."

The answer must lie in the meaning of having a conformal transformation on the sphere of apparent directions. If we use JDoolin's asterisk, instead of a cross, we can see that a rotation will foreshorten the angles of the diagonal arms, and it is clear that a conformal transformation will keep those angles fixed, so certainly Terrell is saying that the Lorentz contraction will foreshorten the angles in exactly the same way. But what about the contrast in the apparent lengths of the arms tilted toward us and the arms tilted away, is that contrast also preserved in the conformal transformation? You are saying that it is, and that seems to be the key issue. We do have one more clue from Terrell's abstract:
"Observers photographing the meter stick simultaneously from the same position will obtain precisely the same picture, except for a change in scale given by the Doppler shift ratio"
So the word "precisely" says a lot, but what is meant by this change in scale, and is that change in scale uniform or only locally determined? You are saying that it looks precisely like a rotation, including the contrast between the fore and aft distortions, not just the first-order foreshortening effect.

Focusing on Terrell's statement above, and on my description of the exact method of 4)A) in post #93, the key is that the conformal transform is applied to the image from a different viewing angle - thus it preserves the angular distortion produced by the overall shift in viewing angle of the smallish object. What is conformally mapped is the image from the camera stationary with respect to the object. But this image, even to first order, is rotated by the overall change in viewing angle for the moving camera, compared to what the moving camera would expect at its apparent viewing angle.
 
  • #99
PAllen said:
Focusing on Terrell's statement above, and on my description of the exact method of 4)A) in post #93, the key is that the conformal transform is applied to the image from a different viewing angle - thus it preserves the angular distortion produced by the overall shift in viewing angle of the smallish object. What is conformally mapped is the image from the camera stationary with respect to the object. But this image, even to first order, is rotated by the overall change in viewing angle for the moving camera, compared to what the moving camera would expect at its apparent viewing angle.
Yes, the globe with continents will show different continents from what would be expected if the globe was not in relative motion. The question is, will the continents look larger in the forward parts, as a static image would give, or will their relative sizes be distorted from that? In other words, the conformal transformation maps spheres to spheres, but it need not be the identity mapping on the surfaces of those spheres, so distortions can appear between the two photographs if the spheres are not small. It is not clear that aspect of what a "rotation" does is intended to be taken literally in Terrell-Penrose rotation, it might just be the fact that you see the different continents and no more than that can be relied on in general. It seems to me what is crucial is that a cross seen by a comoving camera directly across from it will look symmetric, so a stationary camera that sees the cross as moving must at the appropriate moment also see the cross as symmetric, that's the first-order "invisibility" of the length contraction. We are wondering if there is also a higher-order effect, where you can take contrasts in the fore and aft parts of the rotated object as part of that "invisibility" as well.
 
  • #100
Ken G said:
Yes, the globe with continents will show different continents from what would be expected if the globe was not in relative motion. The question is, will the continents look larger in the forward parts, as a static image would give, or will their relative sizes be distorted from that? In other words, the conformal transformation maps spheres to spheres, but it need not be the identity mapping on the surfaces of those spheres, so distortions can appear between the two photographs if the spheres are not small. It is not clear that aspect of what a "rotation" does is intended to be taken literally in Terrell-Penrose rotation, it might just be the fact that you see the different continents and no more.
I am not sure how to convince you. The aberration is applied to the rays forming an image at viewing angle x. To first order, for modest subtended angle, it rotates all the rays by the change in viewing angle. This produces a distortion in the positions of ruler lines that I independently verify with direct ray tracing computation. Perhaps I overstated 'precise' - my computational comparison was numerical to 4 significant digits, for a two degree subtended ruler.

Consider, for example, a camera stationary with respect to a ruler viewed off to the left. Suppose the angle between 1 cm markings is .02 degrees on one side and .01 degrees on the other side. If all of these rays are rotated by the overall aberration change in viewing angle, these angles are preserved.
 
  • #101
JDoolin said:
Looking at the warping of this one structure in the game, it seems like they attempted to get the shapes right. The circles on the ground don't look quite circular, but rather they look like flattened ovals, as I think they should.
Yes, flat disks are supposed to look rotated. I think when Terrell says this makes length contraction "invisible", he only means the observer who does not know relativity will not see anything that sets off an alarm in just the nature of that flattened disk. The observer cannot say "hey, wait a minute, that's a length contracted disk," they can just say, "hey, who rotated that disk?" Of course, we agree that "seeing" is allowed to invoke additional information, like, there's no one there to rotate that disk.
 
  • #102
PAllen said:
I am not sure how to convince you. The aberration is applied to the rays forming an image at viewing angle x. To first order, for a modest subtended angle, it rotates all the rays by the change in viewing angle. This produces a distortion in the positions of ruler lines that I independently verify with a direct ray-tracing computation. Perhaps I overstated "precise" - my computational comparison was numerical to 4 significant digits, for a ruler subtending two degrees.
What would convince me is a calculation of the angular size of the forward-tilted arm, contrasted with a calculation of the angular size of the backward-tilted arm, where that contrast is the same in the ray-tracing calculation as in a simple rotation. That would mean that the conformal mapping of the globe does not just map the rotated image into what the aberrated picture looks like in terms of seeing all the same continents, but also that all distortions that a rotation produces in the relative apparent sizes of the continents are also reproduced in the aberrated photo. That's an issue that does not come up to first order, because to first order, there is no difference in the apparent size of a continent that is "closer" to us.
Consider, for example, the camera stationary with respect to a ruler, viewing it off to the left. Suppose the angle between 1 cm markings is 0.02 degrees at one end and 0.01 degrees at the other end. If all of these rays are rotated by the overall aberration change in viewing angle, these angles are preserved.
Yes, but a conformal mapping preserves the angles on the sphere being mapped; it doesn't preserve angles from points on the sphere to the center of the sphere, which are the angles you are talking about. Still, I can see that if the rod is strongly rotated, it could have a significant length yet still be confined to a very small angle, so perhaps in that case the conformal mapping does have to preserve the distortions you are talking about.
 
Last edited:
  • #103
Ken G said:
What would convince me is a calculation of the angular size of the forward-tilted arm, contrasted with a calculation of the angular size of the backward-tilted arm, where that contrast is the same in the ray-tracing calculation as in a simple rotation. That would mean that the conformal mapping of the globe does not just map the rotated image into what the aberrated picture looks like in terms of seeing all the same continents, but also that all distortions that a rotation produces in the relative apparent sizes of the continents are also reproduced in the aberrated photo. That's an issue that does not come up to first order, because to first order, there is no difference in the apparent size of a continent that is "closer" to us.
Yes, but a conformal mapping preserves the angles on the sphere being mapped; it doesn't preserve angles from points on the sphere to the center of the sphere, which are the angles you are talking about. Can you really get a factor of 2 contrast in those angles while still enforcing a small angle between those two sides?
The conformal mapping applies to the image, directly from its derivation via aberration. It does not apply to the object being imaged.

Yes, you can get large front-to-back distortion for a ruler with a small subtended angle. Holding the ruler's subtended angle constant while increasing velocity increases the front-to-back distortion. It can be made very large for a visually small ruler, for v close to c. Think of this as a very long ruler, rotated nearly head-on, at a distance chosen to preserve the subtended angle of the ruler as a whole.

Again, to first order you are shifting all the image rays by the same amount. Thus, if two pairs of rays in the image differ by a factor of 2 in subtended angle, they still will do so after shifting all the rays the same way.
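Differentiating the aberration relation quoted earlier makes both points explicit (my algebra, in the same sign convention as that formula, so worth checking independently):

$$\frac{d\theta'}{d\theta} \;=\; \frac{\sqrt{1-\beta^{2}}}{1-\beta\cos\theta} \;=\; \frac{1}{\gamma\,(1-\beta\cos\theta)},
\qquad
\frac{d}{d\theta}\ln\!\left(\frac{d\theta'}{d\theta}\right) \;=\; -\,\frac{\beta\sin\theta}{1-\beta\cos\theta}.$$

So over an image of small angular extent Δθ the map is, to first order, a uniform shift of all the rays plus a uniform rescaling, and either operation preserves ratios of angular spacings within the image, which is the preservation claim here. The correction to that preservation is of order β sinθ Δθ/(1 - β cosθ), small for a small image; the large front-to-back contrast itself is already present in the comoving photo of a long ruler seen nearly end-on, and is simply carried over.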
 
Last edited:
  • #104
PAllen said:
The conformal mapping applies to the image, directly from its derivation via aberration. It does not apply to the object being imaged.
Sure, it is a 2D transformation on the sphere of apparent angles.
Yes, you can get large front-to-back distortion for a ruler with a small subtended angle. Holding the ruler's subtended angle constant while increasing velocity increases the front-to-back distortion. It can be made very large for a visually small ruler, for v close to c.

Again, to first order you are shifting all the image rays by the same amount. Thus, if two pairs of rays in the image differ by a factor of 2, they still will do so after shifting all the rays the same way.
Yes, I now see that you can pack a long rod into a small angle by rotating it toward you, achieving a lot of fore/aft distortion over a narrow angle, perhaps narrow enough to allow us to apply Terrell's argument. So it seems you are right that not only the foreshortening, but also the fore/aft distortions over small scales, look just like a rotation.

Ironically, that strengthens Terrell's case for claiming the length contraction is not literally visible, because it produces even more similarity between the photos of the moving and nonmoving cameras. I think what Terrell is really saying is that he is imagining two coincident cameras asking "do objects look like they are length contracted", and saying "let's compare our photos to find out-- oh, we can't see any difference." In other words, the implied comparison is not between a moving object and a stationary object in some general sense; it is between the observations of two observers in relative motion made at the same place and time. In that instant, one observer regards the object as stationary and the other as moving, and in that instant, they cannot see any length contraction. It is only if they use more than just that instant, and tell a complete story of the situation (including things like the finite speed of light), that they can infer the length contraction, so it comes down to what contextual information can be included in the act of "seeing." So it's an issue of language, but at least if we can get all the physics ironed out, that's what matters, so thank you for clarifying these points.

Above all, I agree with you and JDoolin that the term "invisible" overstates the implications, and that the real value of what Terrell did is to show a much simpler way to figure out what a movie of an object in motion will look like: imagine a string of cameras comoving with the object, and just borrow the appropriate snapshots as those cameras coincide with the movie camera.
 
Last edited:
  • #105
A.T. said:
Are you sure it's not the other way around? The back (still approaching) should look stretched, and the front (already receding) should look compressed. See Fig.1 here:
http://www.spacetimetravel.org/bewegung/bewegung3.html

PAllen said:
You are right. I did that math a while ago, re-did it this morning. I had remembered it backwards.

You guys have gone into really subtle details that I hope to get into sometime soon. But I just wanted to come back to this simple statement about flat lines. I noticed that the link A.T. posted actually shows videos of straight lines passing:
• left-to-right in the distance (Lorentz Contraction at "small angle")
• right-to-left in the distance (Lorentz Contraction at "small angle")
• back-to-front underfoot (Lorentz + Moving Away)
• front-to-back underfoot (Lorentz + Moving Closer)

Now, what it doesn't show is the "large angle" case, that is, what you would see if the camera panned down to "the feet" as the lines passed underneath the observer. If you imagine "strafing" a fence, you should expect that to one side, the fence should be obviously stretched (in the direction you're moving toward), and to the other side, the fence should be obviously contracted (in the direction you're moving away from), and directly in front of you, the fence should be as it is in the "Lorentz Contraction at 'small angle'" examples.

We're all agreed on this point, right?
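For the record, the standard apparent-length formula behind Fig. 1 at that link can be written as follows (my statement of it, not taken from the thread; L is the proper length of a fence segment, and θ is the angle, at the segment's retarded position, between its velocity relative to the camera and the line of sight to the camera):

$$L_{\text{app}} \;=\; \frac{L}{\gamma\,(1-\beta\cos\theta)} \;=\;
\begin{cases}
L\,\sqrt{\dfrac{1+\beta}{1-\beta}} & \theta = 0 \ \ \text{(directly approaching)},\\[2ex]
L/\gamma & \theta = \pi/2 \ \ \text{(seen exactly abeam)},\\[1ex]
L\,\sqrt{\dfrac{1-\beta}{1+\beta}} & \theta = \pi \ \ \text{(directly receding)},
\end{cases}$$

i.e. stretched on the side you are moving toward, plain Lorentz contraction broadside, and compressed by more than the Lorentz factor on the side you are moving away from, which matches the three regimes described above.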
 
Last edited:
