CERN team claims measurement of neutrino speed >c

In summary, before posting in this thread, readers are asked to read three things: the section on overly speculative posts in the thread "OPERA Confirms Superluminal Neutrinos?" on Physics Forums, the paper "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam" on arXiv, and the previous posts in this thread. The original post discusses the potential implications of a claim by Antonio Ereditato that neutrinos were measured to be moving faster than the speed of light. There is debate about the possible effects on theories such as Special Relativity and General Relativity, and about the problems of synchronizing clocks and measuring the distance over which the neutrinos traveled.
  • #316
kisch said:
I get your point.

So I'm confident that the 100MHz clock is shared, and not generated on the front end boards, although I admit that this is not expressly stated in the paper.

Here's a confirmation for my view:

A presentation by Dario Autiero (2006): http://www.lngs.infn.it/lngs_infn/contents/lngs_en/research/experiments_scientific_info/conferences_seminars/conferences/CNGS_LNGS/Autiero.ppt

Slides 8 and 9 describe the clock distribution system, and the master clock signal seems to run at 10 MHz.

The DAQ boards are described in detail in a document by J. Marteau et al. (2002): http://www.docstoc.com/docs/74857549/OPERA-DAQ-march-IPNL-IN-CNRS-UCBL

On page 8, you can see that the boards don't contain any local oscillator.
Page 16 states:
"A fast 100MHz clock is generated by the FPGA using a PLL." (essentially from the 10 MHz master clock signal).
This clock also drives the local CPU (an ETRAX chip - the design was done in 2002).
 
Last edited by a moderator:
  • #317
FlexGunship said:
I read back a couple of pages and didn't see that this article had been shared yet, so here it is:


(Source: http://www.livescience.com/16506-einstein-theory-put-brakes-faster-light-neutrinos.html)

Again, everything is still up in the air; the experimental results haven't been repeated yet, and the attempts to discredit them academically must themselves be verified. But given that the group accounted for continental drift, it's hard to believe they would have forgotten the different time-dilation effects at the two locations.



Could be the answer everyone's looking for. Sure, you'd expect errors to be distributed evenly between positive and negative, but if the GPS system can't even tell whether it's skewed by 60 nanoseconds, then how could you use it to measure sub-60-nanosecond events?

I know how I would do it as an engineer, but is that good enough for "scientific breakthroughs"?

The effect of gravity is of the order of 8E-14.
The time of flight is of the order of TOF = 2.4E6 ns.
Therefore the error from not taking gravity into account is 8E-14 * 2.4E6 ns ≈ 2E-7 ns.
If I am not mistaken, this is negligible.
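The order-of-magnitude estimate above can be checked in a couple of lines. The inputs are the rounded figures from this post, not precise OPERA numbers:

```python
# Checking the estimate above: fractional clock-rate shift times time of flight.
fractional_shift = 8e-14   # gravitational potential difference / c^2, order of magnitude
tof_ns = 2.4e6             # neutrino time of flight (~2.4 ms) in ns

error_ns = fractional_shift * tof_ns
print(f"error from neglecting gravity: {error_ns:.2e} ns")  # ~1.9e-7 ns
```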
 
  • #318
Parlyne said:
If you read the Contaldi paper, you'll see that he's actually discussing the effects of GR on the procedure used to (at least attempt to) allow better synchronization than the 100 ns limit from GPS. His claim is neither that time measurements are necessarily limited to 100 ns precision nor that GR effects on the flight of the neutrino are significant, but that GR effects on the time-transfer device used to improve the synchronization are path-dependent and cumulative, and could easily reach tens of ns of error if sufficient care was not taken to account for such effects.

This appears to be yet another credible point which could be totally irrelevant and is otherwise impossible to evaluate based on the information thus far presented by the OPERA collaboration.

I read http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf , of course.
The difference in potential between the GPS satellites and the ground is taken into account by the GPS system itself. It is well known (see ref. 13 in Contaldi's paper) that these corrections are important and that without them you would never be able to find your way in Paris with a GPS.

The main remark in the paper is about the potential difference between CERN and Gran Sasso. If you use the ΔV/c² ratio given by Contaldi (4 lines after eq. 3), you will see that the effect of the potential difference between CERN and Gran Sasso is much smaller than picoseconds, as I explained in my previous post.

Finally, let me mention that you can get the same number by using the difference in altitude between the CNGS proton switch and the OPERA detector. I have not yet understood why Contaldi needed the geoid potential, when only the difference in altitude matters. After all, sea level is an equipotential!

I am ashamed to say it, but http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf is totally irrelevant, or I am really stupid (which is my right).
 
Last edited by a moderator:
  • #319
Parlyne said:
If you read the Contaldi paper, you'll see that he's actually discussing the effects of GR on the procedure used to (at least attempt to) allow better synchronization than the 100 ns limit from GPS. His claim is neither that time measurements are necessarily limited to 100 ns precision nor that GR effects on the flight of the neutrino are significant, but that GR effects on the time-transfer device used to improve the synchronization are path-dependent and cumulative, and could easily reach tens of ns of error if sufficient care was not taken to account for such effects.

This appears to be yet another credible point which could be totally irrelevant and is otherwise impossible to evaluate based on the information thus far presented by the OPERA collaboration.

Which Contaldi paper are you talking about?
I know only one:

[1] http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf

and another where he is only paraphrased:

[2] http://www.livescience.com/16506-einstein-theory-put-brakes-faster-light-neutrinos.html

I have only been using [1] as reference.
I have no idea about the references used by [2], it is not a first-hand opinion.

Now for the physics.
At the speed of light, 1 ns = 0.3 m.
Would it be possible that GPS achieves a position precision better than 1 m
but could not achieve a clock synchronisation better than 100 ns = 30 m?
Of course this could be possible, but it would at least need an explanation.
Saying that Contaldi said that it might... is just hand-waving.
Even the cheapest GPS on the market can locate a crossroads with 5 m precision.
The OPERA experiment is about a 60 ns = 18 m gap.
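The ns-to-metres conversions quoted in this thread are just light-travel distances; a small sketch to keep them straight:

```python
# Converting the timing figures in this thread into light-travel distances.
C = 299_792_458  # speed of light in vacuum, m/s

def ns_to_m(ns):
    """Distance light covers in `ns` nanoseconds, in metres."""
    return C * ns * 1e-9

for ns in (1, 60, 100):
    # 1 ns ~ 0.3 m, 60 ns ~ 18 m (the OPERA anomaly), 100 ns ~ 30 m (GPS spec)
    print(f"{ns:>3} ns -> {ns_to_m(ns):.2f} m")
```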

I do not give the OPERA results a high likelihood, for reasons I explained (see http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.0239v1.pdf ).
Nevertheless, 100 ns is a very big claim that needs better arguments.
The OPERA team also explained in detail how they proceeded, and that was not a cheap argument.
 
Last edited by a moderator:
  • #320
I'm no expert on GPS; but, my understanding of what Contaldi's saying is that the issue arises with GPS synchronization due to the necessity that both endpoints receive the same signal from the same satellite and be able to extrapolate back to the emission time based on the propagation of that signal through the atmosphere (and the receivers). In the distance measurements, each receiver is using signals from 4 or 5 (or possibly more) different satellites. I assume that this allows some amount of correction for the effects that become important when considering only one satellite. (But, maybe I'm the one misreading.)

Whether or not GPS takes the GR effects in question into account (which it does), it won't, by itself, account for those effects on the direct time transfer, which is what Contaldi's discussing - literally the transportation of a highly stable clock from one site to the other, which is at least mentioned in the OPERA paper.

As I said before, the paper may, in fact, be totally irrelevant; but, it won't be so for the reasons you've mentioned. At least, I don't think so.
 
  • #321
lalbatros said:
Which contaldi paper are you talking about?
I know only one:

[1] http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.6160v2.pdf"

Well, I liked that Contaldi paper. He drew attention to the fact that testing the synchronization of two GPS clocks in an inertial frame via a portable time-transfer device has its own limitations. He does not state that the GPS system is wrong, or that the PTT test was not valuable, but he points out a weakness in the main paper that can be improved upon if deemed necessary.

Much of my thinking is the same. My comments typically appear unsupportive, but I would actually like to see the result stand up. It would be much more fun than if somebody finds a serious error. However, due process dictates that every part of the experiment is properly scrutinized. As such Contaldi is making a valuable contribution.
 
Last edited by a moderator:
  • #322
pnmeadowcroft said:
Well, I liked that Contaldi paper. He drew attention to the fact that testing the synchronization of two GPS clocks in an inertial frame via a portable time-transfer device has its own limitations. He does not state that the GPS system is wrong, or that the PTT test was not valuable, but he points out a weakness in the main paper that can be improved upon if deemed necessary.

Much of my thinking is the same. My comments typically appear unsupportive, but I would actually like to see the result stand up. It would be much more fun than if somebody finds a serious error. However, due process dictates that every part of the experiment is properly scrutinized. As such Contaldi is making a valuable contribution.

I too liked the paper, at first.
However, when going into the details, I could not find a clear message.
In addition, I do not see why he references the geoid Earth potential, when the altitude above sea level of both sites is the only thing that matters. I still don't understand his need for the geoid formulas when two numbers and the constant g = 9.81 m/s² are enough to conclude (plus the GR metric tensor in the weak-field approximation!).
The question of the 3 km journey below the Earth's surface is also strange, since this effect on clocks is totally negligible. Remember that the GPS satellites are 20,000 km above the Earth, which is taken into account by the GPS system.

I agree that clock synchronization is an essential part of this experiment and needs to be scrutinized carefully.
But I do not see why altitude and depth below the Earth's surface would receive this attention, when the GPS satellites are so far from the ground. If there is a drift, it would more likely be caused by the 20,000 km to the satellites.

As I am not an expert, I would probably first go back to basics: what does it mean to measure the speed of neutrinos in this situation, and what does it mean to compare it to the speed of light?

In other words: how can the OPERA experiment be extrapolated to a real race between photons and neutrinos from CERN to Gran Sasso?
It is obviously impossible to build a 730 km long tunnel from CERN to Gran Sasso.
However, how can we be sure that the OPERA experimental data and processing can be extrapolated to this hypothetical experiment?
Actually, starting from this elementary question, we could better understand what synchronization means.

Finally, the interesting point that I take from this paper is about the time between two TTD synchronizations. The paper assumes this synchronization occurs every 4 days and concludes that the clocks could drift by 30 ns. This is correct. However, we are missing information:

- how often are the clocks synchronized: every 4 days, or every few minutes?
- how large is the observed drift when a re-synchronization is performed?

In addition, even if there were such a drift, it would be very easy to correct each event for the observed drift. A simple linear interpolation would be precise enough. Again, no information about that.
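The linear interpolation suggested above can be sketched as follows. All numbers are made up for illustration (30 ns of drift over 4 days, matching the figure discussed in the post):

```python
# A minimal sketch of the linear drift correction: if re-synchronisations
# at times t0 and t1 measure clock offsets d0 and d1, events in between
# can be corrected by interpolating the offset.

def drift_correction(t, t0, d0, t1, d1):
    """Linearly interpolated clock offset at time t (offsets in ns, times in s)."""
    return d0 + (d1 - d0) * (t - t0) / (t1 - t0)

t0, t1 = 0.0, 4 * 86400.0   # re-syncs 4 days apart, in seconds
d0, d1 = 0.0, 30.0          # measured offsets in ns

# an event halfway between the two re-syncs gets half the drift
print(drift_correction(2 * 86400.0, t0, d0, t1, d1))
```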
 
Last edited:
  • #323
lalbatros said:
Remember that the GPS sattelites are at 40000 km above earth, which is taken into account by the GPS system.

Altitude 20,000 km.
http://www.gps.gov/systems/gps/space/
 
  • #324
BertMorrien said:
The error is clear.
Read http://static.arxiv.org/pdf/1109.4897.pdf
They did discard invalid PEWs, i.e. PEWs without an associated event, but they did not discard invalid samples within the valid PEWs, i.e. samples without an associated event. As a result, virtually all samples in the PEWs are invalid.
This can be tolerated if they are ignored in the maximum-likelihood procedure.
However, the MLP assumes a valid PDF, and because the PDF is constructed by summing the PEWs, the PDF is not valid, as explained below.
The effect of the summing is that all valid samples are buried under a massive amount of invalid samples,
which makes the PDF no better than a PDF constructed only from invalid PEWs.
This is a monumental and bizarre error.

Why did none of these scientists notice the missing probability data, or stumble over the flaw in the PDF used by the data analysis?

For a more formal proof:
http://home.tiscali.nl/b.morrien/FT...aCollaborationNeutrinoVelocityMeasurement.txt

Bert Morrien, Eemnes,The Netherlands

I like this post, apart from the fact that it is a little over-enthusiastic :smile:

Nice to see a little original thinking. The summing of the PEWs has bugged me for a while, mainly because I cannot see why it was necessary. Is there anything wrong with replacing

Lk(δtk) = ∏j wk(tj + δtk)

with

Lk(δtk) = ∏j wkj(tj + δtk)

apart from the obvious computational complexity?

If the PEW waveforms are almost all the same, then summing them will not matter much, but it will hardly gain anything either. As Bert points out, the same result could be achieved by summing a random sample of PEWs that did not generate any pulses. Lol, perhaps this could be done as a cross-check.
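For readers unfamiliar with this kind of fit, here is a toy sketch of the maximum-likelihood time-shift estimate with a single common waveform w(t) as the PDF for every event (the first form above). Everything is illustrative: the bump-shaped waveform, the bin width, the event counts, and the names are all made up, not OPERA's actual analysis code:

```python
# Toy maximum-likelihood fit of a timing offset against a common waveform PDF.
import numpy as np

rng = np.random.default_rng(0)

# toy "proton extraction waveform": a smooth bump on [0, 10) in 100 bins
edges = np.linspace(0.0, 10.0, 101)
centers = 0.5 * (edges[:-1] + edges[1:])
w = np.exp(-0.5 * (centers - 5.0) ** 2)
w /= w.sum() * 0.1                       # normalise to a density (bin width 0.1)

def log_like(delta, times):
    """Sum of log w(t_j + delta) over the event times t_j."""
    idx = np.clip(np.searchsorted(edges, times + delta) - 1, 0, len(w) - 1)
    return float(np.sum(np.log(w[idx] + 1e-12)))

# fake events: drawn from the waveform, then shifted earlier by delta_true,
# mimicking a fixed (unknown) timing offset to be recovered
delta_true = 0.5
s = rng.choice(centers, size=2000, p=w * 0.1)
s += rng.uniform(-0.05, 0.05, size=s.size)   # smear within the bins
times = s - delta_true

grid = np.arange(0.0, 1.0001, 0.05)
best = grid[np.argmax([log_like(d, times) for d in grid])]
print(f"fitted shift: {best:.2f}")           # should land near delta_true
```

The per-event variant (wkj instead of wk) would only change `log_like` to look up a different histogram for each event, at the computational cost the post mentions.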

Here are some practical difficulties I run into with the summed PEW.

1) The pulses are presumably of varying length: approx. 10.5 μs, with some standard deviation that might be tiny but is not stated. How was an average of different-length pulses prepared? Are the longest ones just truncated?

2) The possible detection window is about 67 ns longer than the generation window, due to the length of the detector along the z-axis. How are the events from the longer detection window squeezed to match the generation pattern?

3) No indication is given in the paper as to how long events count for after the end of the expected detection window. A few late events right at the back of the detector might be ignored because they were not in the 10.5 μs window, biasing the result forwards.

4) Features like the time to decay cause the neutrino PDF to differ from the proton PDF. My best guess is that it smooths the curve.

5) It seems to be assumed that the number of neutrinos hitting the detector is directly proportional to the number of protons in the PEW. This is a good general assumption, but it does not seem to be proven in the paper. Perhaps the neutrinos start off accurate and then just get sprayed all over the place as the beam intensity increases :devil: In other words, it would be nice to demonstrate that the proton-to-neutrino PDF relationship is stable over the duration of the pulse. I saw an earlier preprint discussing the problems of variation in the PDF, relinked here for convenience:

http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5727v1.pdf
 
Last edited by a moderator:
  • #325
lalbatros said:
In addition, I do not see why he is referencing the geoïd Earth potential, when the altitude above sea level of both sites is the only thing that matters.

The precision position calculations were done in GPS-derived coordinates, which are based on an ellipsoidal Earth surface model (WGS-84). Sea level is a gravitational equipotential surface and differs from WGS-84 by up to 150 m.
Here's a geoid difference calculator:
http://geographiclib.sourceforge.net/cgi-bin/GeoidEval?input=E13d41'59"+N42d27'00"&option=Submit - about 47 m.

So that's not negligible, and it hasn't been neglected in the geodesy campaign:
http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf

Contaldi's point is that the effect on the reference clock travelling between CERN and Gran Sasso has not been taken into account, and in fact even this detailed report by Thomas Feldmann (PTB, Germany) doesn't mention it: http://operaweb.lngs.infn.it/Opera/publicnotes/note134.pdf

But: apparently the PTB did calibrate the time-travel-clock before and after the synchronisation campaign against the German UTC time reference, and found a deviation of only 0.04ns (avg) caused by the journeys. For me this seems to counter at least Contaldi's arguments relating to accelerations experienced while travelling.

lalbatros said:
Finally, the interresting point that I note from this paper is about the time between two TTD synchronizations. The paper assumes this synchronization occurs every 4 days and concludes that the clocks could drift by 30 ns. This is perfectly right. However, we are missing information:

- how often are the clock synchronized: every 4 days, or every minutes ?
- how much is the observed drift when a re-synchronization is performed ?

The synchronisation with the portable time-transfer device has been performed once, with the result that the master clocks at the two sites differ by 2.3 ns; this is then assumed to be a constant deviation (because the clocks are extremely stable) and is folded into the event time differences between the two sites.
 
Last edited by a moderator:
  • #326
Thanks a lot, kisch.
I had read the Feldmann report, but I had missed that missing point in it!
I was also perturbed by the technicalities of this paper, which is a bit difficult to read for non-specialists.
However, I had asked myself several times whether a back-and-forth clock journey had been tested.
Do you know if they tried a back-and-forth travel test over a long period of time?
Or would such a test be meaningless?
Thanks
 
  • #327
kisch said:
The synchronisation with the portable time-transfer device has been performed once, with the result that the master clocks at the two sites differ by 2.3 ns; this is then assumed to be a constant deviation (because the clocks are extremely stable) and is folded into the event time differences between the two sites.

That's funny: those 2.3 ns are in fact the Sagnac correction, which was presumably not taken into account. So, if they already corrected for the difference in synchronization between the two reference systems, then they obviously should not do it twice. :-p
 
  • #328
Graph from http://arxiv.org/abs/1109.5445 Tamburini preprint that PAllen refers to:

tamburini2.jpg



So as not to sound cryptic: what this graph suggests is a preferred frame, that of the vacuum, which would be the one where neutrinos show no superluminality (zero imaginary mass), and a dynamical imaginary mass possibly related to the traversed material.
 
  • #329
Aether said:
No, Gran Sasso is in the same inertial system as CERN because the two places are not in relative motion. Someone objected that the atomic clocks on the GPS satellites are in a different inertial system as CERN/Gran Sasso (Earth), but nobody claims that CERN and Gran Sasso are in different inertial systems.
The peripheral speed at Gran Sasso is a little larger than at CERN. The maximal peripheral speed, at the equator, is 1670 km/h. This is much larger than the CERN/Gran Sasso difference. I admit it is possible to calculate; maybe it is negligible. But these two ARE different inertial systems.

A few years ago, someone put an atomic clock in a plane, and after comparing both clocks they measured the twin paradox.
 
  • #330
PeterDonis said:
... but the basic argument appears to be that clock synchronization using GPS signals, at the level of timing accuracy required for measuring time of flight of the neutrinos, needs to take into account the relative motion of the GPS satellites and the ground-based receivers, because the GPS clock synchronization depends on accurately estimating the time of flight of the GPS signals from satellite to receiver, as well as the GPS timestamps that the signals carry.

Which people who work on time transfer are very well aware of; if they weren't, it would be impossible to synchronize clocks as well as we do. And - again - GPS is only ONE of the systems used to synchronize clocks worldwide; it is even the less accurate of the two satellite-based systems. However, the fact that there is more than one system also means that the accuracy of time transfer via GPS is routinely checked and is known to be of the order of 1 ns if you use a metrology-grade system (which OPERA didn't, but their system was still quite good).

Also, I am not an expert in time metrology, but I know quite a few people who are, and I have a fair idea of which research groups around the world are working on time transfer. What is quite striking is that none of the criticism of the timekeeping that I've seen so far has come from people in that field.
One should of course always be very careful about appealing to authority when it comes to who is right, but you'd think that people who've worked on time transfer their whole careers would be better at spotting errors than someone with no experience beyond what they've read over the past few weeks. Moreover, I can assure you that the people at e.g. NIST would love to show that their competitors from METAS and PTB got it wrong; there is a LOT of - mostly friendly - competition between the US and Europe in time metrology.
 
  • #331
The OPERA result now faces its most serious challenge to date, and it comes from a sister experiment also located at Gran Sasso:

http://arxiv.org/abs/1110.3763

The experiment uses the same neutrino source at CERN, and the neutrinos travelled the same distance. They found that the muons created in the weak interactions of the neutrinos have an energy spectrum consistent with what one would expect if the neutrinos were moving at c, not at the speed found by OPERA.

Tommaso Dorigo has a detailed analysis of this work on his blog, if anyone follows or knows how to find that.

Zz.
 
  • #332
harrylin said:
Thanks - that looks very convincing! :smile:

You’re welcome, but I think Zz deserves the 'credit'. :wink:

Yup, looks like a very big nail in the coffin:
http://www.science20.com/quantum_diaries_survivor/icarus_refutes_operas_superluminal_neutrinos-83684

ICARUS Refutes Opera's Superluminal Neutrinos
...
Given a neutrino moving at a speed v>c as the one measured by Opera, and given the distance traveled to the Gran Sasso cavern, one can relatively easily compute the energy spectrum of observable neutrinos at the cavern, given the production energy spectrum.
...
Neutrinos at CERN are produced with an average energy of 28.2 GeV, and neutrinos at the receiving end - the LNGS where Opera and ICARUS both sit - should have an average energy of only 12.1 GeV for neutrinos detected via charged-current interaction.
...
icarus_mup.jpg
 
  • #334
This http://physicsforme.wordpress.com/2011/10/19/neutrino-watch-speed-claim-baffles-cern-theoryfest/ article seems to confirm what I said before about the Cohen-Glashow/ICARUS hypothesis: "...neutrinos can’t travel faster than light unless electrons do too...".

But, why must super-luminal electrons necessarily "emit a cone of Cerenkov radiation in empty space"? How would momentum be conserved in such a process?

As I understand it, Cerenkov radiation can occur within a refractive medium, where the speed of light is less than c, only because the momentum of a photon does not decrease along with the reduction in the speed of light within the refractive medium.

http://physicsforme.wordpress.com/2011/10/19/neutrino-watch-speed-claim-baffles-cern-theoryfest/ said:
Another strike against the speedy neutrinos comes from the fact that neutrinos are linked to certain other particles – electrons, muons and tau particles – via the weak nuclear force. Because of that link, neutrinos can’t travel faster than light unless electrons do too – although electrons needn’t travel as fast as the neutrinos.

Speedy electrons

CERN physicist Gian Giudice, who spoke at the seminar, and colleagues looked into what would happen if electrons traveled faster than light by one part in 100,000,000, a speed consistent with the OPERA neutrino measurement. Such speedy electrons should emit a cone of Cerenkov radiation in empty space – but previous experiments show that they don’t.

The only way out, theorists at the meeting decided, was to break another supposedly fundamental law of nature – the conservation of energy. But that suggestion seems even more ludicrous than breaking the speed of light.
 
Last edited by a moderator:
  • #335
Islam Hassan said:
What if you tried to "de-statistify" the experiment: can you in practice fire one sole proton at a time from CERN to Gran Sasso? If yes:

i) how often would you be able to do this per second; and
ii) assuming you can fire one proton per second, how long would you need to wait on average to have one neutrino detected at Gran Sasso?

IH
According to this http://news.sciencemag.org/scienceinsider/2011/10/faster-than-light-result-to-be.html?ref=hp article, new experiments will be conducted soon with a proton pulse width of 1 to 2 ns and an interval between pulses of 500 ns. That allows for about 2 million pulses per second, and the OPERA collaboration expects to detect about twelve neutrinos from these pulses over a ten-day period. That's one neutrino detection per 144 billion pulses.
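The arithmetic in that article checks out; a quick sketch with the quoted round numbers (500 ns spacing, ten days, twelve expected detections):

```python
# Back-of-the-envelope check of the figures quoted from the article.
interval_ns = 500
pulses_per_second = 1e9 / interval_ns          # 2 million pulses per second
total_pulses = pulses_per_second * 10 * 86400  # ten days of running
pulses_per_neutrino = total_pulses / 12        # ~12 expected detections

print(f"{pulses_per_second:.0f} pulses/s, one detection per {pulses_per_neutrino:.3g} pulses")
```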

Ha ha, I bet there are many folks in the OPERA collaboration who would have liked to do this new experiment before going public with their first result, but maybe the publicity was necessary to get to this new experiment.
 
Last edited by a moderator:
  • #336
gvk said:
I'd like to ask: what would the OPERA result have been if the proton bunch duration were just 10 ns instead of 10 microseconds?

So, they decided to proceed in this way, but with much less width:
"The new measurements will involve a change in the CERN neutrino beam. CERN makes the particles by colliding proton pulses with a graphite target, with each pulse being about 10,500 nanoseconds long. CERN has now split these pulses up so that each one consists of bunches lasting 1 to 2 nanoseconds; bunches are separated by gaps of 500 nanoseconds. "

I bet that now, in OPERA, Lorentz, Einstein, Poincaré, Minkowski and Co. will hold their ground.
 
  • #337
I have just done a massive cleanup of this thread.

I removed hundreds of messages that were either:

  • Overly speculative
  • Off-topic
  • Repeats of points previously raised.
  • Discussions of the "is not! is too!" variety.
 
  • #338
gvk said:
So, they decided to proceed in this way, but with much less width:
"The new measurements will involve a change in the CERN neutrino beam. CERN makes the particles by colliding proton pulses with a graphite target, with each pulse being about 10,500 nanoseconds long. CERN has now split these pulses up so that each one consists of bunches lasting 1 to 2 nanoseconds; bunches are separated by gaps of 500 nanoseconds. "

I bet that now, in OPERA, Lorentz, Einstein, Poincaré, Minkowski and Co. will hold their ground.

It could be that the high-resolution experiment wipes out the previous result.
There is, however, a serious chance that the result is confirmed, in which case an in-depth scrutiny of the clock synchronization will be needed.
 
  • #339
lalbatros said:
There is, however, a serious chance that the result is confirmed, in which case an in-depth scrutiny of the clock synchronization will be needed.
The common-view GPS method of clock synchronization isn't the same thing as slow clock transport, but we know that slow clock transport is fully equivalent to Einstein clock synchronization using two-way light pulses. So, when you can't send two-way light pulses directly between two points, such as between CERN and Gran Sasso, you could accomplish the same synchronization using slow clock transport.

It would be interesting, and easy, to see if there is a difference between the clock synchronization achieved using common-view GPS and what the result would be using slow clock transport. Does anyone have a link to a tutorial on the common-view GPS clock synchronization method that compares its results with synchronization by slow clock transport?

I'm assuming that the "portable time transfer device" that OPERA used did not accomplish slow clock transport per se, but rather was a part of the implementation of common view GPS.
 
Last edited:
  • #340
Aether said:
This http://physicsforme.wordpress.com/2011/10/19/neutrino-watch-speed-claim-baffles-cern-theoryfest/ article seems to confirm what I said before about the Cohen-Glashow/ICARUS hypothesis: "...neutrinos can’t travel faster than light unless electrons do too...".

I was just thinking... the quote you provided:
"neutrinos are linked to certain other particles – electrons, muons and tau particles – via the weak nuclear force"

I have not yet understood everything in the excellent answers I got from Parlyne (in the fork https://www.physicsforums.com/showthread.php?t=541589), but this is the answer I got on right-handed neutrino interaction with the W and Z bosons (the weak nuclear force):
"Purely right-handed neutrinos will not interact with the W and Z at all. The post-mixing heavy neutrinos of the Type I see-saw will interact with the W and Z; but, the interaction strengths will be tiny."

[my bolding]

Then we have the Type II see-saw mechanisms, and I don’t know if the interaction strength is also tiny in this case...

However, assume it is; how would this affect the link to the leptons? Is this why they state:
"although electrons needn’t travel as fast as the neutrinos"

??
 
Last edited by a moderator:
  • #341
DevilsAvocado said:
Then we have the Type II see-saw mechanisms, and I don’t know if the interaction strength is also tiny in this case...

However assume it is; how would this affect the link to the leptons? Is this why the state:
"although electrons needn’t travel as fast as the neutrinos"

??
I don't think so, but I haven't looked at the thread "Neutrino Oscillations for Dummies" (yet). The energy spectrum that you posted is what seems (to me) to imply, in view of Cohen & Glashow's paper, that the maximum attainable velocity of electrons must be close to that of the muon neutrinos that were detected by ICARUS. The error bars on the ICARUS data, as far as I know, are what would still allow for the possibility that the maximum attainable velocity of electrons could be slightly different than the speed of neutrinos. Also, in general there is nothing to prevent any of the electrons from traveling slower than their maximum attainable velocity, so that could be what they meant by that (in the article that I quoted from) as well.

http://arxiv.org/abs/1109.5682 seems to be a relevant paper by the same physicist who was quoted in that article.
 
Last edited by a moderator:
  • #343
Aether said:
I don't think so ...

Okay, thanks Aether.
 
  • #344
Likely, the explanation lies in the "don't-call-it-Sagnac-effect" effect. In any case, we will see if it gets through the peer-review process. Chances are it won't be published.
 
  • #345
We need D = 18 meters (60 nanoseconds * c).

D = h / c * wR * cos(theta) * cos(beta)

h = 20,000 km
c = 300,000 km/s
wR = 465.1 m/s
cos(theta) = 0.7
cos(beta) = 0.82

Result: D = 17.8 meters

Pretty interesting that a back-of-the-envelope calculation gives the right order of magnitude, with an approximate value close to the required one, and yet the hypothesis is completely ignored.
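The estimate above, reproduced in code with the same rounded inputs quoted in the post (so the result matches to the same precision):

```python
# Back-of-the-envelope estimate from the post, with its rounded inputs.
h = 20_000e3          # GPS orbital altitude, m
c = 3.0e8             # speed of light, m/s (rounded)
wR = 465.1            # equatorial rotation speed of the Earth, m/s
cos_theta = 0.7
cos_beta = 0.82

D = h / c * wR * cos_theta * cos_beta
print(f"D = {D:.1f} m")   # ~17.8 m, vs the 18 m (60 ns) gap
```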
 
Last edited:
  • #346
Have they considered Newton's cradle? We all know the desk toy with 5 balls that clack back and forth. Well, what if those balls were atoms, and what if there were 600 miles of them in a row? A clack at one end would result in an instant movement at the other. And since neutrinos are sized relative to a regular atom as a golf ball is to our universe, they are not individually registered; they are only counted as an electrical impulse. So it's a false assumption that the one registered is the same one that was created. Thoughts?
 
  • #348
deuticomet said:
... the hypothesis is completely ignored.

What makes you think that 160 researchers from 30 institutions and 11 countries working for 5 years would have missed something like this, if it has any value?
 
  • #349
phasta said:
Have they considered Newton's cradle? We all know the desk toy with 5 balls that clack back and forth. Well, what if those balls were atoms, and what if there were 600 miles of them in a row? A clack at one end would result in an instant movement at the other. And since neutrinos are sized relative to a regular atom as a golf ball is to our universe, they are not individually registered; they are only counted as an electrical impulse. So it's a false assumption that the one registered is the same one that was created. Thoughts?

Also, read the following FAQ:

https://www.physicsforums.com/showthread.php?t=536289

which shows one of the many fundamental misconceptions you have.
 
Last edited by a moderator:
  • #350
The UTC times CERN uses are derived from GPS time by the receiver. The official spec allows for ±100 ns accuracy.
In practice the receiver companies claim much better, but it is interesting that they are not bound to a tighter standard.
 
