CERN team claims measurement of neutrino speed >c

In summary, before posting in this thread, readers are asked to read three things: the section on overly speculative posts in the thread "OPERA Confirms Superluminal Neutrinos?" on the Physics Forums website, the paper "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam" published on arXiv, and the previous posts in this thread. The original post discusses the potential implications of a claim by Antonio Ereditato that neutrinos were measured to be moving faster than the speed of light. There is a debate about the possible effects on theories such as Special Relativity and General Relativity, and about the issues of synchronizing clocks and measuring the distance over which the neutrinos traveled.
  • #281
I have a dumb question:

Why is there such a large delay for the BCT? (i.e. 580 ns)

My understanding is that the BCT is a toroidal coil around the beam, and the results are sent along a cable to a digital oscilloscope.

Why would the oscilloscope be so far away? Wouldn't you think that, since the analog accuracy of the BCT is so important to the measurement, they would figure out a way to put the oscilloscope closer? Wouldn't a large distance contribute to a distortion of the actual signal (high-frequency attenuation)?

If I understand it right, different frequency components will travel at different speeds through the medium (cable), thus causing a distortion. If this resulted in the main square-wave data from the BCT being distorted, such that the main DC part of the pulse was shifted slightly further than it would normally be, then it would show a waveform that was "behind" the protons. And if this waveform were taken as gospel as to the actual time the protons left, it would show the neutrinos as arriving early.

Probably I misunderstand the hookup. I would be grateful for someone setting me straight.
 
  • #282
dimensionless said:
I don't know. It also raises the question of what "altitude" means, as the Earth is somewhat elliptical.

Google "geoid"; start with the Wiki hit. Enjoy!
 
  • #283
Another thing I wanted to add.

Distortion of the BCT waveform doesn't necessarily mean that the delays aren't accurate. It just means that different parts of the waveform would get attenuated and the waveform would be distorted (see the picture). So you could accurately measure 580 ns for the delay AND still get a distorted waveform.

Again... why put the digitizer so far away? It just seems like you would be asking for trouble. It seems like it would be a lot better to have a long trigger delay that is always the same and can be accurately compensated for.

Imagine it was distorted like a low-pass filter (blue waveform below). That would move the centroid of the waveform to the RIGHT, which would result in the neutrino time being thought to be early, when in fact the beam measurement was distorted so that parts of it appeared late.

[Image: distorted square and sine waveforms, http://upload.wikimedia.org/wikipedia/en/a/a5/Distorted_waveforms_square_sine.png]

Here's another image showing distortion from a 100 m cable:

[Image: simulated trapezoid pulse after a 100 m coaxial cable]
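To get a feel for how large such a centroid shift could be, here is a minimal sketch (not OPERA code, and with an assumed single-pole filter standing in for the cable, so the time constant is purely hypothetical): it passes an ideal square pulse through a first-order low-pass filter and compares the centroids of input and output. For any linear filter the centroid simply shifts by the filter's mean delay, which only matters for the result if the calibration does not already absorb it.

```python
import numpy as np

# Minimal sketch (not OPERA code): push an ideal square pulse through a
# first-order low-pass filter, a crude stand-in for cable bandwidth loss,
# and compare the centroid (mean time) of the input and output pulses.
dt = 1e-9                                   # 1 ns time step
t = np.arange(0.0, 20e-6, dt)               # 20 us window
pulse = ((t > 2e-6) & (t < 12e-6)).astype(float)   # 10 us square pulse

tau = 100e-9                                # assumed (hypothetical) filter time constant
alpha = dt / (tau + dt)
out = np.zeros_like(pulse)
for i in range(1, len(t)):                  # simple discrete RC low-pass
    out[i] = out[i - 1] + alpha * (pulse[i] - out[i - 1])

def centroid(y):
    return np.sum(t * y) / np.sum(y)

shift_ns = (centroid(out) - centroid(pulse)) * 1e9
print(f"centroid shift: {shift_ns:.0f} ns")  # ~ tau, i.e. about 100 ns here
```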

lwiniarski said:
I have a dumb question:

Why is there such a large delay for the BCT? (i.e. 580 ns)

My understanding is that the BCT is a toroidal coil around the beam, and the results are sent along a cable to a digital oscilloscope.

Why would the oscilloscope be so far away? Wouldn't you think that, since the analog accuracy of the BCT is so important to the measurement, they would figure out a way to put the oscilloscope closer? Wouldn't a large distance contribute to a distortion of the actual signal (high-frequency attenuation)?

If I understand it right, different frequency components will travel at different speeds through the medium (cable), thus causing a distortion. If this resulted in the main square-wave data from the BCT being distorted, such that the main DC part of the pulse was shifted slightly further than it would normally be, then it would show a waveform that was "behind" the protons. And if this waveform were taken as gospel as to the actual time the protons left, it would show the neutrinos as arriving early.

Probably I misunderstand the hookup. I would be grateful for someone setting me straight.
 
  • #284
kikokoko said:
formally you're right, but not substantially
...
certainly the significance is less than 6 sigma,
but it is useless to deny that these numbers are an indicator that something may be abnormal
No, I am right both formally and substantially, and what is useless is to claim that the MINOS numbers show v > c.

Certainly, the MINOS people understood that in their report. It is one of the hallmarks of crackpots and bad science to claim results where there is only noise. The MINOS experiment did not even reach the level of significance traditionally required in the medical or psychological fields, let alone the much more stringent level traditionally required in particle physics. That is why they themselves did not interpret it as v > c; they understand science and statistics.

Suppose I measured time by counting "1-Mississippi, 2-Mississippi, ..." and measured distance by counting off paces; it would not be inconceivable that I could measure some velocity > c. Would that be because my result is "substantially" correct? No. It would be because my measurement is prone to error. In science you do not get points or priority for having noisy measurements.

The MINOS results are consistent with the OPERA measurement of v > c, but the MINOS results are not themselves a measurement of v > c. The OPERA result is the first and only measurement of v > c for neutrinos. To claim anything else is a misunderstanding of science and statistics.

Again, please stop repeating your incorrect statements.
 
  • #285
kikokoko said:
I just did a small calculation:

If the altitude estimate of the emitter or detector is off by about 100 to 300 meters,
the distance will be shortened by 6 to 18 meters.

Please share your calculation... because according to Pythagoras it would require more than a Mont Blanc (5.5 km) of altitude error to add 20 m to the baseline (hypotenuse), assuming latitude and longitude are correct:

[itex]c = \sqrt{a^2 + b^2}[/itex]

[itex]730.020 km = \sqrt{5.5^2 + 730^2}[/itex]
 
  • #286
kikokoko said:
(sorry, my English is not very good, please be patient...)

the law of cosines

Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

[Image: diagram of the 732 km straight-line baseline between CERN and Gran Sasso]
 
  • #287
PAllen said:
Adding errors in quadrature means you compute sqrt(e1^2 + e2^2 + e3^2...). It is generally valid if the errors are independent. It is routinely used for statistical errors. It is much more controversial for systematic errors, and has been questioned by a number of physicists. If the more conservative philosophy is used (you add systematic errors linearly unless you have strong evidence for independence), this alone makes the significance of the result much less, not sufficient to meet minimum criteria for a discovery.

It's quite reasonable for many independent errors, provided one can be sure that the errors really are independent (that looks fine to me).
However, it's not clear to me where they specify whether the uncertainties correspond to 1 or 2 standard deviations - did they indicate it anywhere? For measurement equipment it is common to specify 2 SD (or even 3), but I suspect that here they imply only 1 SD. It's even possible that they unwittingly added differently specified uncertainties.
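For readers unfamiliar with the two combination rules being debated, here is a toy comparison using made-up error values (these are not the actual OPERA systematic terms): quadrature assumes the errors are independent, while linear addition is the fully correlated worst case.

```python
import math

# Toy comparison of error-combination rules (hypothetical values in ns,
# NOT the actual OPERA systematic error budget).
errors_ns = [7.4, 3.9, 2.9, 2.0, 1.0]

quadrature = math.sqrt(sum(e ** 2 for e in errors_ns))  # assumes independent errors
linear = sum(errors_ns)                                 # fully correlated worst case

print(f"quadrature sum: {quadrature:.1f} ns")  # ~9.1 ns
print(f"linear sum:     {linear:.1f} ns")      # 17.2 ns
```

With numbers like these the combined systematic error nearly doubles under linear addition, which is exactly why the choice of rule matters for the quoted significance.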
 
  • #288
DevilsAvocado said:
Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

I've spent almost 5 minutes drawing the sketch below,
I hope you now agree with my calculations (please refer to my previous message)

:smile:
 

Attachments: CERN_by_Kikokoko.jpg
  • #289
kikokoko said:
I agree they measured the Gran Sasso peak well,
but the laboratories are more than 1500 meters underground, inside the mountain,
and maybe the antenna signal has been placed some meters above the detector.

An error of 100-200 meters in altitude estimation would completely invalidate the CERN results

I don't see how they would commit such an error... They even measured the distance to the detector using signals through well-known cables. Even the guy who dug the hole for the original mine, and probably a hole for an elevator, would know if it's 200 m deeper :-)

Remember the Chilean miners? They knew they were about 680 meters deep, if I recall the number correctly.
 
  • #290
but maybe not the idea about cosine

This is what kikokoko means, and as I've explained before: a vertical error (red line at OPERA in the example below) results in a baseline error (yellow line in the example below).

But the team was meticulous in considering this, as well as in transforming the GPS data into ETRF2000 (x, y, z) values. They even (it seems) accounted for the geoid undulation (see http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf ), which basically means that they considered the variation of gravity with position (yes, it varies), and therefore corrected for the systematic error which would otherwise be caused by equipment along the traverse being improperly leveled.

I am truly impressed by the care the geodesy team took to make quality measurements.

[Image: sketch showing a vertical error at OPERA (red) producing a baseline error (yellow)]
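The size of this effect can be estimated with a rough spherical-Earth sketch (an approximation only; the real reduction was done in ETRF2000). A ~730 km chord dips about 3.3° below the local horizontal at each end, so a vertical error dh at one end changes the straight-line baseline by roughly dh·sin(3.3°), a few percent of dh, which is where the "100-300 m gives 6-18 m" figures earlier in the thread come from.

```python
import math

# Rough spherical-Earth sketch of how a vertical (altitude) error at one end
# changes the straight-line CERN-LNGS baseline. Approximation only.
R = 6371e3                        # mean Earth radius, m
L = 730e3                         # approximate baseline chord length, m

theta = math.asin(L / (2 * R))    # half the central angle subtended by the chord
# By the tangent-chord angle, the chord dips below the local horizontal by theta.
print(f"chord dip at each end: {math.degrees(theta):.2f} deg")   # ~3.3 deg

for dh in (100.0, 200.0, 300.0):  # hypothetical vertical errors, m
    dL = dh * math.sin(theta)     # first-order projection onto the chord direction
    print(f"vertical error {dh:5.0f} m  ->  baseline change ~{dL:4.1f} m")
```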
 
  • #291
I agree they measured the Gran Sasso peak well

No.

A tunnel passes through the mountain. They used two GPS measurements at the east end of the tunnel and two at the west end. The OPERA detector is only about 6 m below the western GPS receivers. The lab is basically cut sideways from the road somewhere along the tunnel.
 
  • #292
peefer said:
... A vertical error (red line at OPERA in the example below)

Well... if they made this kind of error... they must be dumber than I am! :smile:

Anyhow, it's kind of interesting... the BIG OPERA detector is mounted at a right angle (90°) to the ground (I assume...?).

[Image: sketch of the OPERA detector mounted perpendicular to the ground]


AFAICT, this would mean that the neutrino beam would hit the detector at some ~30º angle??

[Image: sketch of the neutrino beam hitting the detector at an angle]


How did they cope with that?
 
  • #293
AFAICT, this would mean that the neutrino beam would hit the detector at some ~30º angle??

3.2° is the actual number. Those cartoon sketches are 10x vertical exaggerations.

I imagine they angled the detector correctly. Anyway, the error from doing it wrong is < 1 ns at worst.

(kikokoko, I don't know anything more about OPERA than is available in the publicly available papers.)
 
  • #294
kikokoko said:
Your English is okay, but maybe not the idea about cosine... :smile: The baseline is a 732 km straight line:

I've spent almost 5 minutes drawing the sketch below,
I hope you now agree with my calculations (please refer to my previous message)
:smile:

[Attachment: CERN_by_Kikokoko.jpg]

DevilsAvocado said:
Please share your calculation... because according to Pythagoras it would require more than a Mont Blanc (5.5 km) of altitude error to add 20 m to the baseline (hypotenuse), assuming latitude and longitude are correct:

[itex]c = \sqrt{a^2 + b^2}[/itex]

[itex]730.020 km = \sqrt{5.5^2 + 730^2}[/itex]

lol Devil's, did you just calculate LL'h with Pythagoras :redface: a new Ig Nobel prize winner in the making.

But seriously, it is an interesting post. They certainly will have done the geodesy in three dimensions; however, there was no discussion of the measurement at the CERN end in the presentation.

The angle of the detector from kikokoko's calculation is 3.31°, and it seems probable that the flight path to the bottom of the detector is shorter than to the top; but if the origin point on their slide is at ground level, then a hit at the top of the detector will be a few ns late, and this would strengthen the result.
 
  • #295
hefty said:
Didn't Autiero say in the seminar that they even measured a 7 cm change in the Gran Sasso position (x, y, z) after an earthquake? I recall they measured the altitude very precisely.
I don't see them missing the altitude by 250 m...

He did, but measuring a 7cm change in position is not the same as measuring an absolute distance to 7cm. I gather that the change in position was a measurement by the GPS receivers, as were the tidal changes presented on the chart.
 
  • #296
PAllen said:
Adding errors in quadrature means you compute sqrt(e1^2 + e2^2 + e3^2...). It is generally valid if the errors are independent. It is routinely used for statistical errors. It is much more controversial for systematic errors, and has been questioned by a number of physicists. If the more conservative philosophy is used (you add systematic errors linearly unless you have strong evidence for independence), this alone makes the significance of the result much less, not sufficient to meet minimum criteria for a discovery.

Hi PAllen,

I disagree; your interpretation is too simple. It's not about conservative or liberal: that's for people who are unable to judge the factors due to a lack of directly applicable experience. Use of a quadrature treatment of systematic errors is a judgment call in each case. If there is good reason to think the systematic errors are independent, it's fine. If there is likely to be strong correlation due to an underlying coupling mechanism, then it's not so fine. So, look at the list and (if you're an experienced engineer or knowledgeable experimental physicist) ask yourself the question: "Can I imagine a mechanism which would make many or all of the largest systematic components move in the same direction at the same time?" In this case I think they called that right, even though I think the results are wrong for other reasons.
 
  • #297
Since the GPS uses correction factors to account for propagation delay due to atmospheric refraction, could this cause a systematic problem in comparing the expected TOF of a photon through vacuum to the measured TOF of the neutrinos?

Even with the fancy receivers installed by OPERA, the GPS still has to account for this. I would imagine a GPS installed around the Moon (MPS?) would not need this correction factor, or only a much smaller one since the Moon has a much thinner atmosphere, but it would still have to account for the SR and GR effects and would operate on the same principles.

The Purdue link does talk about a 10^-6 effect in distance measurement error due to the troposphere, so it is at least within an order of magnitude of this problem on the distance side, even before accounting for the ionosphere. But I'm more worried about what this correction factor does to the time stamping in order to make the distance come out right - the 20 cm accuracy over 730 km is not being questioned. The GPS was designed to get distance right, not to measure time of flight for photons and particles.

web.ics.purdue.edu/~ecalais/teaching/.../GPS_signal_propagation.pdf
http://www.kowoma.de/en/gps/errors.htm

Regarding the 11 ns and 14 ns differences in day vs. night and in summer vs. spring or fall: I presume these were looked at in the spirit of Michelson and Morley, but it was then thought the differences could simply be due to atmospheric changes that usually happen at sunset or with the seasons. Expanding on that thought, I wonder if the 60 ns problem would go away if we also took away the atmosphere and the associated GPS correction factor(s).
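For scale, here is the back-of-envelope arithmetic behind that worry (this only applies the quoted 1e-6 fractional figure to the baseline; it is not a model of what the OPERA geodesy or time transfer actually does):

```python
# Back-of-envelope: a 1e-6 fractional (troposphere-scale) effect on the baseline,
# expressed as a distance and as an equivalent light travel time.
c = 299_792_458.0            # speed of light, m/s
baseline_m = 730_534.61      # straight-line CERN-LNGS distance quoted in the thread, m

frac = 1e-6                  # fractional effect cited from the Purdue notes
dist_err_m = frac * baseline_m
time_err_ns = dist_err_m / c * 1e9

print(f"{dist_err_m:.2f} m  ->  {time_err_ns:.2f} ns")   # ~0.73 m, ~2.4 ns
```

That is small compared to the 60 ns anomaly, but not so small that it can be waved away without looking at how the correction feeds into the time stamps.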
 
  • #298
I don't know about the absolute distance measurement, but the OPERA data pretty conclusively shows that the relative position is unbelievably accurate. So that seems to put a damper on any sort of random effect, as such an effect would be expected to change over time and as the satellites changed in orbit.

So any effect would have to be a constant problem with GPS.

I can't prove that this isn't the case, but it just seems very very very very hard to believe millions of surveyors, geologists, planners and other professionals who rely on GPS every day would not have found this mistake.

Let's just look at a simple way to test it over long distances.

If there was an error of 20 m over 730 km, then there would be an error of 1 m over 36.5 km, or an error of 1 cm in 365 meters. I think I could discover that error with a long tape measure or a simple wheel on a road.
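A quick check of that proportional-scaling claim (note it assumes any GPS error grows linearly with distance, which a later reply disputes by pointing out that it is the time transfer, not the distance, that is in question):

```python
# Proportional-scaling check: a fixed fractional error of 20 m over 730 km,
# applied to shorter distances (assumes the error scales linearly with distance).
frac = 20.0 / 730e3
for d_m in (730e3, 36.5e3, 365.0):
    print(f"{d_m:8.0f} m  ->  error {frac * d_m * 100:7.1f} cm")
```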

How the heck could this be missed in the last 10 years? You can theorize all you want about possible problems and conspiracies, but I'd bet 1000:1 that the worldwide GPS system used by millions is not in error here, and that the problem (if there is one) is somewhere else.

Of course I could be wrong, and I guess all the Italians will need to adjust their property boundaries now by 20 meters :smile:
 
  • #299
exponent137 said:
These two links seems reasonable to me, but I do not read them precisely. I am missing comments on them.
Is there any answer from OPERA Group?

I just read http://arxiv.org/abs/1109.6160 again and it is a valuable contribution. I do not have the depth of knowledge of Carlo R. Contaldi, but I was just wondering if the time measurement using TTDs could be improved by having 4 identical clocks, two at each end, and then having two of them travel in opposite directions over the same roads at the same speeds at the same time?

BTW, don't expect direct responses from the OPERA group at this stage. What they put out next is going to be measured and very well considered. They will want to allow due time for all the comments to come in. The one thing you can be sure of is that they are paying close attention to every relevant comment.
 
  • #300
LaurieAG said:
So why would you take 13 bunches and discard the last bunch if you didn't have a cycle miscount issue?
My blue, it was actually the first bunch/cycle that was discarded, not the 13th, and it was a dummy one anyway.

All the OPERA and CNGS delays were accounted for correctly but one.
This takes into account the 10 ns quantization effect due to the clock period.
The 50 ns spacer and the extra 10 ns before the start of the second bunch were ignored in both the blind and final analyses. But how could you argue that there is a discarded cycle?

The accumulated experimental margin of error is equal to ± 60 ns and the individual ΔtBCT margin of error from 2 bunches (1 counted and 1 discarded) is also equal to ± 10 ns.

There is room for counter error but, as the -580 ns is corrected as BCT/WFD lag and the bunch size used was also 580 ns, a phantom first cycle can be introduced and then discarded, leaving the timing error of 60 ns, due to the spacer and the quantization effect, in place.

For the FPGA cycle counter to be capable of hiding this phantom cycle, it would increment when the first part of the first trigger arrives (i.e. the end UTC timestamp) and increment again when the first cycle actually completes loading, so the counter has an extra cycle when the last bunch in the series is completed. The error can be made during analysis if this cycle is not completely removed from the data when the counters are corrected.

The WFD would count 12 full bunches and the FPGA would increment 13 times at the end, including the extra dummy first-arrival count (a theoretical 630 ns). Subtracting the BCT/WFD lag of 580 ns therefore removes only 580 ns of the complete (theoretical) dummy cycle from the theory/statistical analysis, which leaves a high potential for a consistent 60 ns error in the calculations and simulations, within the total experimental margin of error for the FPGA.
 

Attachments: miscount2.jpg
  • #301
lwiniarski said:
I can't prove that this isn't the case, but it just seems very very very very hard to believe millions of surveyors, geologists, planners and other professionals who rely on GPS every day would not have found this mistake.

You might have slipped a couple of orders of magnitude in your argument; it happens sometimes. The distance, (730534.61 ± 0.20) m, is not in question; the time is. They splurged for a special "time transfer" GPS receiver and an atomic clock, items not usually used by millions of surveyors, etc. How many other times do you think customers of the GPS service have asked it to simulate the time of flight of photons between two points not in line of sight?

As an engineer I'm aware of something called "scope creep", which here would sort of be like: "You guys have this great positioning system, can we use it to do time transfers between locations 730 km apart to an accuracy of 2.3 ns?" What happens is the marketing guys say "Sure we can, you betcha" and then tell the engineers the good news.

More later.
 
  • #302
This might be interesting. It's a PDF about Beam Diagnostics.

http://cas.web.cern.ch/cas/Bulgaria-2010/Talks-web/Raich-Add-Text.pdf
 
  • #303
I'm not sure whether this has been discussed already: the neutrino cross section increases with energy. Assume that the energy composition changes during the rising and decaying phases of the beam. Then the beam would interact more and more with the detector, which means that the rising slope of the signal would be slightly steeper than that of the initial beam, and the decaying slope as well. When trying a "best fit" to adjust the signal to the beam, this could produce a slight offset of the time, giving an appearance of v > c, but only statistically. This would also explain the apparent absence of chromaticity: the effect would be of the same order whatever the average energy of the neutrino beam is. How does that sound to you?
 
  • #304
It would seem that one way to test this might be to look at the neutrino energies for the first neutrinos captured and see if these have more energy. I think they can see this, can't they?

FYI, I'm not a particle physicist, so my opinion means nothing, but it sounds like a pretty clever idea!

Gilles said:
I'm not sure whether this has been discussed already: the neutrino cross section increases with energy. Assume that the energy composition changes during the rising and decaying phases of the beam. Then the beam would interact more and more with the detector, which means that the rising slope of the signal would be slightly steeper than that of the initial beam, and the decaying slope as well. When trying a "best fit" to adjust the signal to the beam, this could produce a slight offset of the time, giving an appearance of v > c, but only statistically. This would also explain the apparent absence of chromaticity: the effect would be of the same order whatever the average energy of the neutrino beam is. How does that sound to you?
 
  • #305
About the data analysis in report 1109.4897

For each event the corresponding proton extraction waveforms were taken, summed up and normalised to build two PDFs, one for the first and one for the second SPS extraction; see fig. 9 or fig. 11, red lines.
The events were used to construct an event time distribution (ETD), see fig. 11, black dots, apparently the number of events in 150 ns intervals, starting at a fixed time tA after the kicker magnet signal.

My point is that these PDFs are different from the individual proton extraction waveform (PEW) associated with a particular event.
In my opinion, this makes using such a PDF for a maximum likelihood analysis questionable.
By the same line of reasoning, the grouping of events should also not be done if the PEW amplitude can vary too much within the grouping time interval. Since this amplitude is taken from different PEWs, grouping is not an option, nor is maximum likelihood analysis.

Alternative analysis.
Assuming that the probability of detecting a neutrino is proportional to the neutrino density, and in turn to the PEW, and further assuming that this waveform is delayed by the exact flight time, the amplitude of the delayed waveform is sampled each time an event is detected. The samples are summed; the sum is set to 0 before the first waveform starts, and at the end of the last waveform the sum will reach a value S.
I assume that the sum must be lower than S for all other delays.
Since the exact flight time is not known, one can repeat the above procedure for various flight times and select the delay with the highest sum. I cannot prove that this is the correct flight time, but I think it is.
I presume all relevant raw experimental data is still available, so a new analysis should be entirely feasible.
Moreover, if the raw data were made available (16000 events <= 2e-6 with a resolution of 10 ns, 8 bytes each, plus as many PEWs of 1100 samples of 2 bytes each, totalling less than 40 MB), anyone with a little programming experience could do it.
As mentioned in the report, the supplied data could be modified to enforce a blind analysis.
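The proposed delay scan is easy to prototype. Below is a minimal sketch using synthetic data, since the real proton waveforms and event times are not public; every number in it is a toy value, and with the flat idealised waveform used here essentially all of the delay sensitivity comes from the pulse edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative only, not OPERA data):
dt = 10e-9                                    # 10 ns waveform sampling, as in the report
t = np.arange(0.0, 12e-6, dt)                 # one extraction window
pew = ((t > 0.5e-6) & (t < 11.0e-6)).astype(float)   # idealised flat proton waveform

true_delay = 730534.61 / 299_792_458.0        # "true" flight time used to generate events
p = pew / pew.sum()                           # detection probability ~ proton waveform
events = rng.choice(t, size=16000, p=p) + true_delay

def score(delay):
    """Sum of the delayed waveform's amplitude sampled at every event time."""
    idx = np.round((events - delay) / dt).astype(int)
    ok = (idx >= 0) & (idx < len(pew))
    return pew[idx[ok]].sum()

offsets = np.arange(-500e-9, 500e-9, dt)      # scan +/- 500 ns around the true delay
scores = np.array([score(true_delay + o) for o in offsets])
best = offsets[np.argmax(scores)]
print(f"recovered offset from the true delay: {best * 1e9:+.0f} ns")   # ~0 ns here
```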
 
  • #306
Just keeping this thread complete; I have not looked at the following paper co-authored by Glashow on the OPERA findings:

http://arxiv.org/abs/1109.6562
 
  • #307
lwiniarski said:
It would seem that one way to test this might be to look at the neutrino energies for the first neutrinos captured and see if these have more energy. I think they can see this, can't they?

FYI, I'm not a particle physicist, so my opinion means nothing, but it sounds like a pretty clever idea!


Thanks, I elaborated it a little bit more and have put a paper on arXiv

http://arxiv.org/abs/1110.0239
 
  • #308
jaquecusto said:
Holy shame! :blushing: Sorry, my mistake! Gun and target are in the same frame!
But... it's possible the Coriolis effect delays the neutrinos' travel when this group of particles reaches the Italian target. The Italian target is nearer to the equator than the Swiss gun...

Gun and target are not at rest in the frame that GPS uses as reference. Thus, your approach as I understood it was roughly correct (and the Sagnac effect isn't a Coriolis effect!), but you made a few calculation errors, as I showed in post #913. :-p

Now, someone at CERN has confirmed* that they indeed forgot to correct for it. Taken by itself, this Sagnac effect increases the estimated anomaly to ca. 63 ns. However, I suppose that there will be more corrections to their calculations.

Harald

*according to an email of which I saw a copy; I can't put more here.
 
  • #309
Looking at figure 10 from the Opera paper, there seems to be a periodic pattern of speed variation around the mean 1048ns line. What could that seasonal variation be attributed to?
 
  • #310
TrickyDicky said:
Looking at figure 10 from the Opera paper, there seems to be a periodic pattern of speed variation around the mean 1048ns line. What could that seasonal variation be attributed to?

I did not like that much either. The deviations are quite high, especially Extr 1 in 2009. However, the accuracy seems to improve in recent years, so I put it down to improving experience with the experiment.

Some of the papers at
http://proj-cngs.web.cern.ch/proj-cngs/Publications/Publications_publications_conferences.htm
suggested that the number of protons on target and the number of detection events have increased over time, so the wider variance in 2009 is to be expected.
 
  • #311
pnmeadowcroft said:
I am not even going to attempt to fully understand the paper from Kaonyx, but I am glad it is posted, because I was sorry to see no detailed calculations in the OPERA report. However, I would like to ask a couple of operational questions that have troubled me about the timing.

How often is a time correction uploaded to the satellites from Earth? What is the probability that a new time was uploaded between the time signal used at the CERN end and the time signal used at the OPERA end?

I know that the clocks in the satellites are specially designed to run at a different rate than the ones on Earth, but I also know they are corrected from time to time. I am thinking that the uploaded corrections will generally be in the same direction each time.

Hi pn,

Minor misconception (I think, if I remember this part right): they don't correct the satellites' time directly, they only correct the frequency. The step size is 1e-19. This they call "steering". Each satellite broadcasts a rather large block of data, repeated every 12.5 minutes, which has a lot of information about time error, frequency error, steering, and especially the very precise orbital parameters called the "ephemeris", which are measured and corrected essentially all the time. The receivers see all this data, and that's how they can get pretty good fixes even though the satellites are orbiting a rather lumpy geoid with many kilometers of asymmetry, so their orbits are lumpy too. Even things like the pressure of sunlight are taken into account in GPS satellite orbit determination. I don't remember how often the uplinks (corrections) happen, and I couldn't find it in the books at hand or with a quick Google search, but I'll make a good guess that they happen at least once per orbit (about 12 hours), and probably more often than that.

Since the time is the integral of the frequency plus the starting value (GPS zero time is 6 January 1980), when they make a frequency step the time ramps. Thus there are no time steps, just ramps at an adjustable rate.
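That "integral of frequency" point is easy to picture numerically. A toy sketch, using a deliberately exaggerated fractional-frequency step (1e-13 rather than the 1e-19 steering step quoted above, just so the ramp is visible):

```python
import numpy as np

# Toy sketch: a step in fractional frequency produces a ramp, not a step,
# in the accumulated time offset. The step size here is exaggerated
# (1e-13 instead of the 1e-19 steering step mentioned above).
dt = 1.0                                   # 1 s integration step
t = np.arange(0.0, 86400.0, dt)            # one day
y = np.zeros_like(t)                       # fractional frequency offset y(t)
y[t >= 43200.0] = 1e-13                    # hypothetical steering step at noon

time_offset = np.cumsum(y) * dt            # time offset = integral of y(t)
print(f"time offset at end of day: {time_offset[-1] * 1e9:.2f} ns")  # ramps to ~4.3 ns
```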

Here are two nice references:

1. GPS Time: http://tycho.usno.navy.mil/gpstt.html
2. Relativistic effects in GPS: http://www.phys.lsu.edu/mog/mog9/node9.html

I especially like that second one, which briefly 'answers' some GPS time questions a lot of posters have asked; it's from 1997, so they had already thought of all that stuff more than 14 years ago.

Don't be fooled or alarmed by mentions of a couple hundred ns of time error between UTC and GPS in the USNO link above. That's the absolute error between UTC and GPS for arbitrarily long intervals, neglecting UTC leap seconds. The very-short-term time difference between two locations which can see the same satellite at the same time can be driven down to near 1 ns. Deeper thinking reveals that "at the same time" itself introduces complications, but even with all that, synchronization between two locations can be made very good indeed. After that, it's a matter of how stable the local clocks are, plus relative motion, altitude and geoid shape effects, from both SR and GR. They did call in the time and frequency consulting experts, so I HOPE they were listening to them.
 
  • #312
Aging in the 100 MHz Oscillator Chip

I have been looking at the text from page 13 of the main paper that describes the FPGA latency in Fig. 6.

“. . . The frontend card time-stamp is performed in a FPGA (Field Programmable Gate Arrays) by incrementing a coarse counter every 0.6 s and a fine counter with a frequency of 100 MHz. At the occurrence of a trigger the content of the two counters provides a measure of the arrival time. The fine counter is reset every 0.6 s by the arrival of the master clock signal that also increments the coarse counter. The internal delay of the FPGA processing the master clock signal to reset the fine counter was determined by a parallel measurement of trigger and clock signals with the DAQ and a digital oscilloscope. The measured delay amounts to (24.5 ± 1.0) ns. This takes into account the 10 ns quantization effect due to the clock period.”

The main potential error here seems to be the accuracy of the 100 MHz oscillator. I suspect that this is a standard timing chip similar to the ones in computers and mobile phones, but I hope it is a more accurate version. All such chips have a variety of problems in holding accurate time. For example: if the oscillator is slow by just 0.2 ppm (parts per million), then the fine counter will start at zero and finish at 59,999,987 before being reset to zero when the next time signal comes in 0.6 s later. Without calibration this would mean that a time recorded just after the periodic 0.6 s time signal would have a very accurate fine counter, but a time recorded almost at the end of the period would be out by 120 ns, and the average error would be 60 ns.

However, this effect can be corrected for by calibrating the FPGA clock signal and then redistributing the fine counter value proportionally over the whole 0.6 seconds. I hope this was done and that it was embedded into the (24.5 ± 1.0) ns delay that was reported, but the paper does not say so.
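The arithmetic above is easy to reproduce. Here is a small sketch with toy numbers (the actual oscillator error and calibration procedure are not public, so the 0.2 ppm figure and the rescaling recipe are assumptions): an uncorrected timestamp late in the 0.6 s cycle is off by up to ~120 ns, while rescaling the fine count by the observed ticks-per-cycle removes almost all of it.

```python
# Toy sketch of the fine-counter drift and the proportional redistribution fix.
f_nominal = 100e6             # nominal fine-counter frequency, Hz (10 ns ticks)
period = 0.6                  # master-clock reset period, s
ppm_slow = 0.2                # assumed oscillator error, parts per million

f_actual = f_nominal * (1 - ppm_slow * 1e-6)
ticks_per_cycle = f_actual * period          # ~59,999,988 instead of 60,000,000
f_measured = ticks_per_cycle / period        # what calibration at each reset would infer

def naive_time(elapsed):
    """Timestamp that assumes the nominal 10 ns tick."""
    return int(elapsed * f_actual) / f_nominal

def rescaled_time(elapsed):
    """Redistribute the fine count using the measured ticks-per-cycle."""
    return int(elapsed * f_actual) / f_measured

for elapsed in (0.1, 0.3, 0.599):            # seconds since the last 0.6 s reset
    err_naive = (naive_time(elapsed) - elapsed) * 1e9
    err_fixed = (rescaled_time(elapsed) - elapsed) * 1e9
    print(f"t = {elapsed:5.3f} s   naive error {err_naive:7.1f} ns   rescaled {err_fixed:5.1f} ns")
```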

Ok, so how can this system go wrong?

Here is a link to the specification of a 100MHz HCSL Clock Oscillator.

http://datasheets.maxim-ic.com/en/ds/DS4100H.pdf

The total of all errors for this chip is ±39 ppm, and remember that even 0.1 ppm is not good. Things listed as affecting the accuracy are: initial frequency tolerance, temperature, input voltage, output load, and aging. The first four factors can be compensated for by accurate calibration, but the aging is easily missed. This sample chip can change frequency by ±7 ppm over 10 years, or approximately 0.7 ppm per year on average.

So how to fix it?

Obviously, sending a counter reset more often than once every 0.6 s is the most important thing to do. But also, if it is possible to capture the number of fine counter ticks lost or gained at the clock reset that happens after a specific detection has recorded a time, then the fine counter value can be redistributed retrospectively across the 0.6 s period to get a more precise time. Such a dynamic correction mechanism would largely remove the need for accurate calibration. It may well be something that is already in place, but it is not mentioned.

What other problems might there be in the same subsystem?

Operating conditions that are not the same as the calibration conditions.
An occasional late arrival of the 0.6s clock signal.
Oscilloscopes have all the same problems, so any calibration equipment needs to be very good.
Do magnetic fields also affect accuracy? I have no idea.

This is also a less obvious answer to the Fig. 10 variance :smile:

TrickyDicky said:
Sure, random is the "obvious" answer.
 
  • #313
Regarding the 100 MHz oscillator accuracy: it's hard to imagine they would go to all that trouble getting a high-precision master clock into the FPGA and then somehow not bother calibrating their high-speed clock against it. All it takes is to output the counter every 0.6 seconds just before resetting it; it's kind of an obvious thing to do, really.
 
  • #314
kisch said:
a Vectron OC-050 double-oven temperature stabilised quartz oscillator.

Many thanks for that datasheet. Always nice not to have to find every paper, but you listed the OPERA master clock chip, and in my post I was talking about the chip on the FPGA board. Sorry, I tried to make it as clear as I could. It is slide 38, T10 to Ts.

If you also happen to know a link to the exact specification of the FPGA, please do post that too. I spent 3 hours today on Google looking for more details, but moved on to other things.
 
  • #315
pnmeadowcroft said:
Many thanks for that datasheet. Always nice not to have to find every paper, but you listed the OPERA master clock chip, and in my post I was talking about the chip on the FPGA board. Sorry, I tried to make it as clear as I could. It is slide 38, T10 to Ts.

I get your point.

But wouldn't individual free-running oscillators defeat the whole point of the clock distribution system? (Kind of exactly what you're saying, too.)
M-LVDS is completely fine for distributing 100MHz.

Also, what would be the point of having the Vectron oscillator in the Master Clock Generator "... keep the local time in between two external synchronisations given by the PPmS signals coming from the external GPS" (from the paper, page 13) if only the 0.6 s signal were distributed? You would only need a 1:600 divider to get a pulse every 0.6 s from the 1/ms input, not a fast and super-stable oscillator.

So I'm confident that the 100MHz clock is shared, and not generated on the front end boards, although I admit that this is not expressly stated in the paper.

pnmeadowcroft said:
If you also happen to know a link to the exact specification of the FPGA, please do post that too.

I remember Mr Autiero mentioned "Stratix" in his presentation.
 
