CERN team claims measurement of neutrino speed >c

In summary, before posting in this thread, readers are asked to read three things: the section on overly speculative posts in the thread "OPERA Confirms Superluminal Neutrinos?" on the Physics Forums website, the paper "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam" published on arXiv, and the previous posts in this thread. The original post discusses the potential implications of a claim by Antonio Ereditato that neutrinos were measured to be moving faster than the speed of light. There is debate about the possible effects on theories such as Special Relativity and General Relativity, about synchronizing the clocks at the two sites, and about measuring the distance over which the neutrinos traveled. The possibility of an unaccounted-for systematic error is also discussed.
  • #141
DevilsAvocado said:
This combined with the fact that the neutrino pulse from supernova 1987A would have shown up years earlier than the exploding star's flash of light (at speeds seen by OPERA). Instead, http://en.wikipedia.org/wiki/SN_1987A#Neutrino_emissions ...

So why are the speeds seen by OPERA not achievable by the SN 1987A neutrinos?

I don’t know...

Neutrinos come in multiple mass eigenstates. Strictly speaking, all we know is that _some_ SN 1987A neutrinos arrived within hours of the flash of light. The Lorentz-violating eigenstate could have arrived during the Middle Ages, for all we know. Or it could have arrived two years in advance, dispersed over a period of six months and therefore undetectable above the background. There is a claim that a burst arrived about 5 hours earlier, but it was only seen by one detector and is generally discounted as a statistical fluke.

Wait, why did the neutrinos from that supernova arrive 3 hours earlier, if we're going to assume this experiment is wrong and c is still the ultimate speed limit? How can neutrinos beat light if they have mass?

The light from the supernova only appears a few hours after the explosion, because it is initially blocked by the cool shock front ejected during the collapse.
 
  • #142
Vanadium 50 said:
No. The energies are too small by a factor of 1000. The neutrinos cannot be efficiently produced nor efficiently detected at these energies.

Okay so:
a) How were the 1987 neutrinos detected in the first place?
b) If we're talking efficiency, then given OPERA's timing accuracy, isn't it safe to say this is mostly a tagging and timing issue?

If we can tag low-energy neutrinos (whether produced naturally or artificially), then that means we can effectively detect them after they travel the linear baseline, no?

So basically: set your initial detector to find low-energy neutrinos that are on flight paths toward the 2nd downstream detector, tag them, perfect the metrology, and then look for them in said 2nd detector. Maybe even piggyback a high-energy beam as a marker? Then filter.
 
  • #143
The detectors that detect MeV neutrinos are very different from the ones that detect GeV neutrinos. They operate on an entirely different technology, one that also happens to have worse timing. You can't make OPERA do this.
 
  • #144
I did the math too and was able to confirm Vanadium's result of a 4 year expected difference if the neutrinos were in fact faster than light. I think I trust that supernova over this experiment. Were there any other differences in the neutrinos themselves (I tried reading the paper but it's too dense for me)?
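A quick back-of-the-envelope version of that calculation, as a sanity check only (the (v-c)/c ≈ 2.5×10^-5 excess and the ~168,000 light-year distance to SN 1987A are round numbers from the paper and this thread, not my own measurements):

Code:
# Sketch: how early would SN 1987A neutrinos have arrived if they were
# faster than light by the fractional amount OPERA reports?
delta = 2.5e-5            # assumed (v - c)/c, OPERA's quoted central value
distance_ly = 168_000     # assumed distance to SN 1987A, in light-years
# Light takes 'distance_ly' years to arrive; a particle moving at
# (1 + delta) * c arrives earlier by roughly delta * (travel time).
early_arrival_yr = delta * distance_ly
print(f"expected early arrival: {early_arrival_yr:.1f} years")
# -> about 4 years, matching the '4 year expected difference' above,
#    whereas the observed neutrinos arrived within hours of the light.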
 
  • #145
What bothers me most about this result is not so much the claim of v > c, as the magnitude of the effect. It is way, way too strong.

It is incompatible with QG-inspired Lorentz-violating dispersion relations (it's too strong, by something like 13 orders of magnitude, compared to what we'd expect). It is incompatible with tachyons (for tachyons, speed goes up as energy goes down, and that would be hard to miss; for starters, MINOS would've seen an arrival-time anomaly of ~2000 ns). The energy scale implied by this value of (v-c)/c is in the MeV range. I could accept a slightly superluminal mass eigenstate with negative m^2 of the same order as the mass-squared differences measured in neutrino oscillations, or even a value like those reported in tritium beta decay experiments (where m^2 values down to ~-100 eV^2 have been seen). But none of these values would come even close to producing a 10^-5 effect in the speed of travel at 17 GeV.
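Here is a rough numerical version of those two scale arguments, as my own sketch (the 17 GeV mean energy and (v-c)/c ≈ 2.5×10^-5 are the values quoted for OPERA; the Planck energy is the usual ~1.2×10^19 GeV; none of this is from the OPERA analysis itself):

Code:
import math
dv_over_c = 2.5e-5   # assumed OPERA excess, (v - c)/c
E = 17.0             # mean neutrino energy, GeV
# 1) Tachyon-like dispersion E^2 = p^2 - |m^2| gives v/c = sqrt(1 + |m^2|/E^2),
#    so (v - c)/c ~ |m^2| / (2 E^2); solve for the |m^2| that would be needed.
m2 = 2.0 * dv_over_c * E**2                      # GeV^2
print(f"|m^2| needed: {m2:.3e} GeV^2 -> |m| ~ {1e3*math.sqrt(m2):.0f} MeV")
# -> |m| of order 100 MeV, i.e. the 'MeV range', vastly larger than the
#    eV^2-scale m^2 values from oscillations or tritium end-point fits.
# 2) Linear QG-inspired dispersion: (v - c)/c ~ E / E_Planck.
E_planck = 1.22e19                               # GeV
qg = E / E_planck
print(f"naive QG expectation: {qg:.1e}, observed/expected: {dv_over_c/qg:.0e}")
# -> the observed effect is ~13 orders of magnitude larger than the naive
#    QG-inspired estimate.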

It has to be an unaccounted-for systematic error, a large-extra-dimensions effect that increases the strength of QG-induced Lorentz violation, or something completely unexpected. I'm leaning towards a systematic error.
 
  • #146
SeventhSigma said:
I did the math too and was able to confirm Vanadium's result of a 4 year expected difference if the neutrinos were in fact faster than light. I think I trust that supernova over this experiment. Were there any other differences in the neutrinos themselves (I tried reading the paper but it's too dense for me)?

For one, the method of acceleration is different, so if any difference is real, the geometry, strength, and dynamical motion of the magnetic field might play a role. This could lead to a different oscillation signature. It is possible that a difference in the speed is real without our having any immediate explanation for why that might be. It is too early to speculate much further than this on Physics Forums, though; answers here (including whether it is a statistically significant difference or not) cannot be complete or valid at this time.
 
  • #147
SeventhSigma said:
I did the math too and was able to confirm Vanadium's result of a 4 year expected difference if the neutrinos were in fact faster than light. I think I trust that supernova over this experiment. Were there any other differences in the neutrinos themselves (I tried reading the paper but it's too dense for me)?
Put yourself in the position of an observational astronomer, and then extrapolate to the position of a theorist in stellar evolution. Do we know that photons and neutrinos are emitted at the same time in a SN? Do we know that if there is a differential in the emission of copious amounts of photons and neutrinos that it can be constrained to minutes, hours, months, years? Are any of these time-frames relevant if we don't know what happens when a star self-destructs?

We have a lot of stars to look at and supernovas of all types get lots of attention. Still, we don't know all that we need to about the birth, life, and death of stars. We have some compelling models, but our lives are very short and the lives of stars are very long, so there is a sampling problem...
 
  • #148
PAllen said:
ghwellsjr said:
I don't see how clocks adjusted by GPS can get around the concern that I have. Consider the atomic clocks at Greenwich and Boulder running at different elevations and therefore running at different rates. If we measured the round-trip speed of light at both locations using their own atomic clocks, we'd get the correct answer of c. But if we used a common time generated by GPS, we will no longer get the correct answer of c at both locations, correct?
Yes, that's true, but not relevant to what they are doing.
Is it not relevant because the errors caused by the different time rates are too small to matter in this experiment or because the experimenters took the different rates into account?
 
  • #149
Just standing back, ignoring the particle physics, and looking at this from a nuts-and-bolts perspective ...

60ns. 18m. This seems too crazy-big to be a systematic error, right? What about this:

GPS-based distance measurements are made at the Earth's surface. Then, most significantly at the OPERA detector, adjustments are made for the detector's position relative to the GPS receiver. So, if the neutrino detector is 1400m underground, and 50m toward CERN, the correction is about -50m. Right? Wrong.

Since the Earth isn't flat like it used to be (sorry, I can't cite a reference for this offhand), two deep holes some distance apart are not parallel. They converge toward the Earth's center. The bottom of the 1400 m deep hole at OPERA is in fact 26 m closer to CERN than the top of the hole where the GPS receiver is, if you work out the numbers. (The extreme case would be a 1400 m hole in New Delhi, India, which is about 1400 m closer to New York. With OPERA and CERN only 730 km apart, the effect is much smaller, but relevant.)

26 metres. That would quite nicely explain the 60ns premature neutrino detection within statistical error.

Of course, the scientists already must have considered this, right? It sure would be embarrassing if they didn't.
 
  • #150
peefer said:
Just standing back, ignoring the particle physics, and looking at this from a nuts-and-bolts perspective ...

60ns. 18m. This seems too crazy-big to be a systematic error, right? What about this:

GPS-based distance measurements are made at the Earth's surface. Then, most significantly at the OPERA detector, adjustments are made for the detector's position relative to the GPS receiver. So, if the neutrino detector is 1400m underground, and 50m toward CERN, the correction is about -50m.

This is logical, but wrong. :) OPERA is not exactly "underground" (as in, "in an abandoned mine".) It sits just off a 10-km highway tunnel through the mountain. They took two GPS units and measured locations of both ends of the tunnel, and then tracked back from the entrances to the facility to determine its exact coordinates.
 
  • #151
peefer said:
Of course, the scientists already must have considered this, right? It sure would be embarrassing if they didn't.

Highly likely. They have GPS receivers at both ends of the tunnel, and they triangulated from both ends. They said the two measurements agree very closely. It seems improbable that they would get close agreement between the two if they had ignored this.
 
  • #152
Nikpav said:
A significant part of that measurement relies on the use of GPS timing distribution.

I propose an experiment to verify that it is correct. It is based on the fact that the current stability of atomic clocks is at the level of 10^-14, or roughly 1 ns of error per day (24 hours). I propose to use a portable atomic clock: first synchronize it at one site (e.g. OPERA) and then physically move it (by car and airplane) to the other site within a few hours. During that period it should not drift by more than a few nanoseconds. Such precision should be adequate to calibrate against possible GPS-related errors.

Thank you for your attention

This experiment has been done many, many times. Note that UTC is based on comparing lots of clocks around the world, meaning there are well-developed methods for time transfer. Time transfer does become tricky with very accurate clocks, due to GR effects that cannot be compensated for (because of uncertainties in position); but that is only an issue for the best optical clocks (at the 10^-17 level or so), which are orders of magnitude better than the cesium clocks used for UTC.
Again, this is NOT a problem with GPS timekeeping; 60 ns is a very long time in modern time metrology.
 
  • #153
keji8341 said:
Neutrino results challenge cornerstone of modern physics --- Sagnac effect?

Within 60 ns, light travels 18 m.

The Earth's velocity around the Sun is 30 km/s.
The time for light to travel 730 km is 730 km / 300,000 km/s ≈ 0.00243 s.
Within 0.00243 s, the Earth moves 30 km/s × 0.00243 s ≈ 73 m along its orbit.
With the geometry of the experiment taken into account, 18 m and 73 m are of the same order.

Not that one: GPS is based on an Earth-centred inertial (ECI) frame, so the Earth's orbital motion drops out. The Sagnac effect that the experimenters could have overlooked is the one due to the rotation speed of the Earth. However, as we discussed before, that effect is still more than an order of magnitude too small to explain this riddle (between 0 and 465* m/s, versus the roughly 7,500 m/s that would be needed).

*v at the equator: 40,000 km / (24 × 60 × 60) s
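To put numbers on that comparison (my own sketch, using the round figures already quoted in this thread: 730 km baseline, 60 ns ↔ 18 m):

Code:
# How fast would the detector have to move toward the source during the
# flight to fake an 18 m / 60 ns effect, versus the speeds available?
c = 3.0e8                          # m/s
baseline = 730e3                   # m, CERN -> Gran Sasso
flight_time = baseline / c         # ~2.43e-3 s
required = 18.0 / flight_time      # m/s needed to shorten the path by 18 m
rotation = 465.0                   # m/s, equatorial rotation (less at 45 N)
orbit = 30e3                       # m/s, orbital speed (absorbed by the ECI frame)
print(f"flight time    : {flight_time*1e3:.2f} ms")
print(f"speed required : {required:.0f} m/s")
print(f"Earth rotation : {rotation:.0f} m/s (a factor ~{required/rotation:.0f} too small)")
print(f"Earth orbit    : {orbit:.0f} m/s (30 km/s x 2.43 ms ~ 73 m, but GPS works in the ECI frame)")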
 
  • #154
Hi everyone, interested newbie.

I was sent a link today to a page that argues the result is statistically flawed (http://johncostella.webs.com/neutrino-blunder.pdf).

Leaving aside any concern about the background of the person involved, I was left unconvinced by the argument put forward, but I don't know enough to be able to be definitive about it.

Could one of the more knowledgeable people (particularly in statistics) have a quick read and post their thoughts?
 
  • #155
According to the Costella paper, if I want to measure the distance between the left end of a piece of wood and a point to its left, it depends on how long the piece of wood extends to the right. That's nonsense.
 
  • #156
Harisankar said:
This might sound stupid, but I can't get it out of my mind, so I'm asking it.
They have VERIFIED that it's neutrinos, haven't they? Or did they just assume it's neutrinos because that is what is expected?

hamster143 said:
The accelerator is expected to produce neutrinos. The detector is expected to detect neutrinos. Timing of collisions seen by the detector matches exactly the timing of protons emitted by the accelerator. Nothing except neutrinos is known to be capable of penetrating through 700 km of rock. Processes inside the accelerator are well understood and it would be extremely surprising to find any unknown particles produced in bulk.

The fact is that, leaving aside a systematic error as the most likely cause of this, this is where the search must be, and surely will be, centered by serious theorists long before they even seriously consider that something is wrong with relativity. (The media are a different matter; all the newspapers I've seen have already decided "Einstein was wrong".)
As hamster143 correctly answered, the "FTL neutrinos" are assumed to be neutrinos because they are expected to be the neutrinos coming from the accelerator: the measured time distributions of neutrino interactions in the detector are compared with the time distributions of the protons for each extraction. These statistical results can't completely rule out for instance that the "arriving" neutrinos' signal is due to some local neutrino-like interaction totally unrelated to the proton accelerator.
 
  • #157
In one of the videos about OPERA, I saw a mention of a fiber carrying light along the path. Assuming that this light is subject to the same mass distributions you mentioned, ghc, couldn't you work backward with this fiber as the calibration point for c, and determine whether the neutrinos traveled faster than the photons in this fiber?
 
  • #158
TrickyDicky said:
These statistical results can't completely rule out for instance that the "arriving" neutrinos' signal is due to some local neutrino-like interaction totally unrelated to the proton accelerator.

When the accelerator is on, they see neutrinos. When it's off, they don't. That's pretty convincing.
 
  • #159
f95toli said:
Again, this is NOT a problem with GPS timekeeping; 60 ns is a very long time in modern time metrology.

Could you please help me out here (because I’m about to lose my mind [almost])...

IF something is moving faster than light, then the Lorentz factor γ (using c as a constant) must be somewhat 'adjusted', right? The Lorentz factor appears in several equations in Special Relativity, including time dilation, right? Time dilation is used in the GPS system to adjust the clocks involved for relativistic effects, right?

[Image: Lorentz factor as a function of velocity]

So, how can we trust the GPS timing if we at the same time are looking for data that will 'overthrow' the scientific foundation the system is built on?? I don’t get it...


Please note: I know that I’m wrong (there are too many extremely smart people around to miss this), I just can’t see it myself...
 
  • #160
Vanadium 50 said:
When the accelerator is on, they see neutrinos. When it's off, they don't. That's pretty convincing.

It is not as simple as that and anyone who has bothered to look up the paper or knows about neutrino detection knows it, so you ought to know.
If no experimental error is found, the very detection of neutrinos 60 ns before they should arrive, if they were coming from the accelerator, should make you consider this possibility, unless you are one of those speculating about the fall of modern physics as we know it.
 
  • #161
TrickyDicky said:
It is not as simple as that and anyone who has bothered to look up the paper or knows about neutrino detection knows it,

That's exactly how it's done. You have the neutrino beam produced for a fraction of a second every SPS cycle, and the detector sees more events - substantially more - in this period than at other times. Furthermore, this tracks the accelerator operation at all time periods. Machine off for a week? No neutrinos that week.

You can see it graphically in the paper; figure 11.
 
  • #162
Buckleymanor said:
Surely the one-way neutrino method of measuring their speed could be adapted to measure the one-way speed of photons.
If it can work for neutrinos, why can't it work for photons?
And if by any chance it isn't possible, though doubtful, you could always send the neutrinos back the other way and measure their velocity in the opposite direction.

It would be more convincing if the experiments for both neutrinos and photons were done with the same clock synchronization. The key point is the clock synchronization.
 
  • #163
So, how can we trust the GPS timing if we at the same time are looking for data that will 'overthrow' the scientific foundation the system is built on?? I don’t get it...

GPS compensates for time dilation, if you read the paper.

In any case, I feel that this may be a statistical anomaly, as there have been no real follow-up observations. And unlike much of the Internet, I do not think that this in any way disproves relativity, just as relativity itself did not disprove Newton.

Relativity may acquire a few amendments, but the theory will only become further refined, rather than disproven entirely. For 99.9999999% of cases, relativity holds true, just as Newtonian mechanics held true for pretty much all of everyday experience. No one considers the advent of relativity a "nail in the coffin" for Newton's ideas.

Furthermore, the news media has ignored one key line in the paper announcing the results:

The time of flight of CNGS neutrinos (TOFν) cannot be precisely measured at the single interaction level since any proton in the 10.5 µs extraction time may produce the neutrino detected by OPERA.

The paper later goes on to say that the measurements were normalized, but the truth remains that no individual neutrino was clocked at FTL velocities. Obviously the lamestream needs to ignore anything which will dampen the sensationalism.
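To make the "no individual neutrino was clocked" point concrete, here is a toy sketch of the kind of maximum-likelihood shift fit the paper describes: the summed proton waveform is treated as the expected time distribution of neutrino events, and a single common offset is fitted to the whole ensemble. Everything below (waveform shape, event count, jitter) is invented for illustration and is not OPERA's data:

Code:
import numpy as np
rng = np.random.default_rng(0)
# Fake "proton extraction waveform": a 10.5 us pulse with sharp edges and
# some internal structure, on a 1 ns grid inside a wider window.
t = np.arange(0.0, 15_000.0)                            # ns
inside = (t >= 1_000) & (t < 11_500)                    # 10.5 us extraction
waveform = np.where(inside, 1.0 + 0.3*np.sin(2*np.pi*t/2_000.0), 1e-9)
pdf = waveform / waveform.sum()
# Fake neutrino events: emission times drawn from the waveform, all shifted
# by a common (true) offset of 60 ns, plus 10 ns of detector jitter.
true_shift, n_events = 60.0, 16_000
emission = rng.choice(t, size=n_events, p=pdf)
arrival = emission + true_shift + rng.normal(0.0, 10.0, size=n_events)
# Scan candidate shifts; no single event is timed, only the ensemble
# likelihood against the shifted waveform constrains the offset.
def log_like(shift):
    idx = np.clip(np.round(arrival - shift).astype(int), 0, len(t) - 1)
    return np.log(pdf[idx]).sum()
shifts = np.arange(0.0, 120.0, 1.0)
best = shifts[np.argmax([log_like(s) for s in shifts])]
print(f"best-fit common shift: {best:.0f} ns (true value: {true_shift:.0f} ns)")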
 
  • #164
keji8341 said:
I wonder in which frame the clock synchronization is done? In the Earth frame or in the Sun frame?

Since the clock synchronization is done using GPS, I would assume that it is in the frame GPS uses, which is an Earth Centered Inertial (ECI) frame:

http://en.wikipedia.org/wiki/Earth-centered_inertial

I don't remember seeing an explicit statement to that effect in the paper, though.
 
  • #165
Vanadium 50 said:
You have the neutrino beam produced for a fraction of a second every SPS cycle, and the detector sees more events - substantially more - in this period than at other times. Furthermore, this tracks the accelerator operation at all time periods.
This is the statistical process I referred to.

Vanadium 50 said:
Machine off for a week? No neutrinos that week.

This might be misleading: neutrinos are detected at a given location at some rate at all times, regardless of whether a beam is directed at that location.
 
  • #166
DevilsAvocado said:
So, how can we trust the GPS timing if we at the same time are looking for data that will 'overthrow' the scientific foundation the system is built on?? I don’t get it...

The reason we trust it is that it has been tested so many times. The GPS system is compensated for both SR and GR effects; UTC is defined on a "normalized" reference surface (the geoid) to compensate for local differences in speed and position.
Note that GPS time is NOT the same thing as UTC, but the former is disciplined to the latter.

Now, there are several methods for time transfer, although the two methods currently in use are (as far as I know) based on transfer via satellites. One method uses GPS; the second uses geostationary satellites that are not part of the GPS system, and the latter is more accurate than GPS.
It is also possible to transfer time using optical fibres etc., but as far as I know that is only done for experiments with optical clocks; those are several orders of magnitude better than cesium clocks (and will one day replace them as time standards), and current time transfer methods are not good enough for them.

The main point here is that UTC and the associated methods are very well established (old, if you like); if you visit a modern NMI you will find that many of them have clocks that are much better than the clocks that contribute to UTC. Hence, comparing two clocks via UTC in the way it was done in this experiment is, if not easy, at least routine.

Also, note that both PTB and METAS were involved, and they certainly know what they are doing; the clocks were properly calibrated and the result checked with a portable time-transfer device.

Hence, it is extremely unlikely that the error (and I agree that it is probably a systematic error) comes from problems with the clocks.

I should point out that I am not involved in time metrology (although in my experiments I use methods from frequency metrology); most of what I know about this I've learned from colleagues who work on clocks and time transfer (talks etc.), so take what I've written in this thread with a pinch of salt.
 
  • #167
keji8341 said:
It would be more convincing if the experiments for both neutrinos and photons were done with the same clock synchronization. The key point is the clock synchronization.

They didn't do a corresponding experiment with photons (as someone mentioned in an earlier post in this thread, that would require cutting a 730 km vacuum tunnel between CERN and OPERA). They calculated what the time of flight for a photon should be based on the GPS-determined positions of the source and detection points. That is subject to a number of uncertainties, but so far I don't think anyone in this thread has found one that is potentially large enough to shorten the actual distance (as compared to the calculated distance) by 18 meters.
 
  • #168
xeryx35 said:
The paper later goes on to say that the measurements were normalized, but the truth remains that no individual neutrino was clocked at FTL velocities.

That's because they didn't clock neutrinos individually at all. Your argument is invalid.
 
  • #170
Read the article, and they were careful. That being said:

a) The 8.3 km fiber optic, including Tx and Rx circuits, has some temperature coefficient of group delay. Since the GD is ROM 30 us, and they desire a couple ns cal error, was the temperature at the cal times close enough to the temperature at pulse measurement times? (Rough numbers are sketched just after this list.)
b) Would like to know more detail on how the digitizer time stamping was done. Concern is with front end latencies. My sense is they probably did fine here, but it would put everybody to sleep actually explaining it.
c) What if the proton pulse shape has a good-size temperature coefficient? Then will the pulse shape statistical treatment they did still work and not lead to errors? Because the pulse, 10 us long, might then give problems with the way they modeled it if it varies quite slowly.
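Rough numbers for point a), as promised above (my own sketch; the ~1.47 group index and the ~40 ps/km/°C delay coefficient are typical figures for standard single-mode fibre, assumed here, not values taken from the OPERA paper):

Code:
# Back-of-the-envelope check of the 8.3 km timing fibre (point a above).
c = 2.998e8                 # m/s
length_km = 8.3
n_group = 1.47              # assumed group index of standard SMF
tcd = 40.0                  # assumed thermal delay coefficient, ps per km per degC
group_delay_us = length_km * 1e3 * n_group / c * 1e6
drift_ns_per_degC = length_km * tcd / 1e3
print(f"one-way group delay   : {group_delay_us:.1f} us")
print(f"thermal sensitivity   : {drift_ns_per_degC:.2f} ns per degC")
print(f"10 degC cal/run offset: {10*drift_ns_per_degC:.1f} ns")
# Order of magnitude: a few ns for a 10 degC swing -- relevant to a couple-ns
# calibration budget, but far too small to produce 60 ns by itself.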
 
  • #171
On a lighter note: according to the Italian Minister of Education, Universities and Research, Mariastella Gelmini, the experiment took place in a tunnel between CERN and the Gran Sasso laboratories - so measuring the distance shouldn't be a problem :smile:

The only source I found in English is a blog here: http://141microseconds.wordpress.com/. Searching for "tunnel Mariastella Gelmini" gives a lot of hits in Italian, the main one being http://www.istruzione.it/web/ministero/cs230911:

Rome, 23 September 2011

Statement by Minister Mariastella Gelmini:
"The discovery by CERN in Geneva and the Istituto Nazionale di Fisica Nucleare is a scientific event of fundamental importance.

I offer my applause and my most heartfelt congratulations to the authors of a historic experiment. I am deeply grateful to all the Italian researchers who contributed to this event, which will change the face of modern physics.
Exceeding the speed of light is an epochal victory for scientific research the world over.

Italy contributed to the construction of the tunnel between CERN and the Gran Sasso laboratories, through which the experiment was carried out, with funding currently estimated at around 45 million euros.

Moreover, Italy today supports CERN with absolute conviction, with a contribution of more than 80 million euros per year, and the events we are now witnessing confirm that this was a right and far-sighted choice."
 
  • #172
dan_b said:
Read the article, and they were careful. That being said:

a) The 8.3 km fiber optic, including Tx and Rx circuits, has some temperature coefficient of group delay. Since the GD is ROM 30 us, and they desire a couple ns cal error, then was the temperature at the cal times close enough to the temperature at pulse measurement times?
b) Would like to know more detail on how the digitizer time stamping was done. Concern is with front end latencies. My sense is they probably did fine here, but it would put everybody to sleep actually explaining it.
c) What if the proton pulse shape has a good-size temperature coefficient? Then will the pulse shape statistical treatment they did still work and not lead to errors? Because the pulse, 10 us long, might then give problems with the way they modeled it if it varies quite slowly.

I'd think that if the problem came from any sort of temperature effect, the experiment would have to see a large seasonal variation; and they seem to have been very careful to demonstrate that, in fact, it does not.
 
  • #173
millitiz said:
That is my point - if they somehow synchronize the neutrino with a gamma ray from the emitter, then as I said, it would be an amazing technique, because the light would be so scattered as to be nearly nonexistent. And if they calculate the speed as distance/duration, then as I said, 60 nanoseconds is on the order of 10 m of difference. From my limited knowledge, it could be an error somewhere. Although in the news (maybe not this one), they did check the result, and it was also said to be statistically significant (I would assume 3 sigma? The news did not say), so they probably did take the measurement errors into account.

I guess my bottom line is that we will have to wait a bit longer and, as you noted, probably dig around. I remember the BBC news said that the team is going to talk about it soon. Although I would imagine it to be a false alarm... maybe.

I hope this is not overly speculative: I was wondering if seasonal temperature variations over large land masses can cause the ground to expand in such a way as to offset the straight line distance between two landmarks 730 kilometers apart by about 10 meters, or so? I had spoken a while back with a person who was familiar with bridge design, who explained that bridges can expand during summer due to the materials in the bridges being heated to higher temperatures during the summer months. So a natural question in my mind was whether the same thing is true for general land masses. I have tried searching around, but have not found any information that states that the ground of land masses in various regions expands during summer months in a way as to significantly change distances between landmarks (but I did not look very hard: it was a quick search, about 5 minutes of googling various links, so if there is an obvious link, I apologize).


I was wondering if the distance between the two facilities could have deviated by plus or minus 10 meters or so as a result of the expansion and contraction of the land mass the facilities and tunnels sit on, due to seasonal variations in the temperature of the ground?

p.s. I have removed this post a couple of times, as I think I am having problems with posting successfully. I am not sure if I did this right, but if this post ends up in more than one place, I apologize; to the moderator, please delete any duplicates. Any duplicate is unintentional and is the result of my having difficulty with posting: I am not sure if I am having problems with my account, or if it is just plain error on my side.
 
  • #174
Edwin said:
I hope this is not overly speculative: I was wondering if seasonal temperature variations over large land masses can cause the ground to expand in such a way as to offset the straight line distance between two landmarks 730 kilometers apart by about 10 meters, or so? I had spoken a while back with a person who was familiar with bridge design, who explained that bridges can expand during summer due to the materials in the bridges being heated to higher temperatures during the summer months. So a natural question in my mind was whether the same thing is true for general land masses. I have tried searching around, but have not found any information that states that the ground of land masses in various regions expands during summer months in a way as to significantly change distances between landmarks (but I did not look very hard: it was a quick search, about 5 minutes of googling various links, so if there is an obvious link, I apologize).


I was wondering if the distance between the two facilities could have deviated by plus or minus 10 meters or so as a result of the expansion and contraction of the land mass the facilities and tunnels sit on, due to seasonal variations in the temperature of the ground?

p.s. I have removed this post a couple of times, as I think I am having problems with posting successfully. I am not sure if I did this right, but if this post ends up in more than one place, I apologize; to the moderator, please delete any duplicates. Any duplicate is unintentional and is the result of my having difficulty with posting: I am not sure if I am having problems with my account, or if it is just plain error on my side.

Were that the case, there would have been seasonal variations in the inferred speed, which there were not. Additionally, they've included data tracking the change in distance over time, and it only comes to centimeters, even with the effects of an earthquake.
 
  • #175
cdux said:
Indeed, and it's such common sense that it would be surprising if it were not checked in all facets of the experiment. Though it's a bit of an oxymoron in the case there is no >c here, since in that case such a mistake would not be a mistake. Then again, if it were, it would shift the result in either a positive, negative or neutral direction.

xeryx35 said:
GPS compensates for time dilation, if you read the paper.

f95toli said:
The reason we trust it is that it has been tested so many times. The GPS system is compensated for both SR and GR effects; UTC is defined on a "normalized" reference surface (the geoid) to compensate for local differences in speed and position.
Note that GPS time is NOT the same thing as UTC, but the former is disciplined to the latter.


Many thanks for your answers guys.

I think I found the true answer; it’s a MBNM (Malfunction in Brain Near Me) :biggrin:

My original (stupid) thought was that if GPS is used to verify SR, how on Earth could it be used in something that (in the worst-case scenario) could look like a possible refutation of SR?? It doesn’t make sense.

But it was built on (extremely) bad assumptions (of course).

Let me try to repair any 'damage' made to the 'casual reader':
1) GPS satellite clocks lose 7,214 nanoseconds per day due to SR/time dilation, and gain 45,850 nanoseconds per day due to GR/gravitational frequency shift, giving a net gain of approximately 38 microseconds per day. (A rough check of these numbers is sketched just after this list.)

2) Relativity is not the only source of error corrections in GPS; there are several others: http://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System

3) Typical accuracy of the GPS system is:
SA activated: ±100 m
SA deactivated: ±15 m
Differential GPS (DGPS): ±3-5 m
WAAS/EGNOS: ±1-3 m
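A rough check of the numbers in point 1), as my own sketch (the GPS orbital radius and the simple circular-orbit formulas are textbook values; eccentricity and the Earth's rotation are ignored):

Code:
# Relativistic clock-rate offsets for a GPS satellite, per day.
G_M = 3.986e14              # Earth's GM, m^3/s^2
c = 2.998e8                 # m/s
R_earth = 6.371e6           # m
r_gps = 2.656e7             # m, GPS orbital radius (~20,200 km altitude)
day = 86_400.0              # s
v_gps = (G_M / r_gps) ** 0.5            # ~3.9 km/s circular orbital speed
sr_loss = v_gps**2 / (2 * c**2) * day   # moving clock runs slow
gr_gain = G_M * (1/R_earth - 1/r_gps) / c**2 * day   # higher clock runs fast
print(f"SR loss : {sr_loss*1e6:6.2f} us/day")              # ~  7 us/day
print(f"GR gain : {gr_gain*1e6:6.2f} us/day")              # ~ 46 us/day
print(f"net     : {(gr_gain - sr_loss)*1e6:6.2f} us/day")  # ~ 38 us/day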

From this we can tell that none of the standard accuracies will do for CNGS. They use PolaRx2e GPS receivers (http://www.ppmgmbh.com/pdf_d/GPS Hardware/Referenzstationen/PolaRx2e_Sensor.pdf), allowing positioning at the high-precision centimeter level.

Of course.

Now my mumbo-jumbo about SR and time dilation doesn’t matter one bit with this precision, fixed ground-based reference, and real-time corrections. Sorry. :redface:
 
