How does 400 sigma compare with 5 sigma?

In summary: they are just there to help you see how different the data points are from the theoretical prediction.
  • #1
Cerenkov
Hello.

On Ethan Siegel's 'Starts With a Bang' blog...

http://scienceblogs.com/startswithabang/2010/06/28/how-do-we-use-the-cmb-to-learn/

...he points out that the FIRAS CMB data has 400 sigma error bars.

Since I've read that a 5 sigma value corresponds to a p-value of about 3 x 10^-7 (approx. 1 in 3.5 million), I was wondering what p-value 400 sigma corresponds to?

Thanks,

Cerenkov.
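For reference, the 5 sigma figure can be checked directly; at 400 sigma the probability underflows ordinary double precision entirely (a quick sketch using scipy):

```python
from scipy.stats import norm

# One-sided tail probability of a 5 sigma deviation
p5 = norm.sf(5)
print(p5)  # ~2.87e-7, i.e. roughly 1 in 3.5 million

# The same quantity at 400 sigma underflows double precision
p400 = norm.sf(400)
print(p400)  # 0.0 -- the true value is far smaller than any float
```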
 
  • Like
Likes atyy
  • #2
The error bars are multiplied by 400 so that you can see them. Nobody is under the illusion that all the errors are under such control that you could claim they actually represent 400 sigma.
 
  • Like
Likes atyy and ohwilleke
  • #3
The sigma of an error bar corresponds roughly to the number of decimal points in the confidence level of results. For example: 1 sigma corresponds roughly to about 1 chance in 10 that the true value exceeds the average value by more than one standard deviation. 2 sigma indicates a probability of about 5/100 that the true average exceeds the measured average by more than 1 standard deviation. For 3 sigma it's about 2.5/1000, and so forth. In particle physics a 5 sigma result is considered sufficient to constitute proof of a hypothesis. A 400 sigma result is absolutely ridiculous and mainly serves to cast doubt upon the validity of the data. It is highly unlikely an instrument can make a measurement with such precision. It's kind of like trying to weigh a car on a gram scale. There is, in fact, a way to quantify measurement uncertainty as well as result uncertainty. This kind of error is often rolled into a number called the systematic uncertainty of an experiment.
 
  • #4
Chronos said:
The sigma of an error bar corresponds roughly to the number of decimal points in the confidence level of results. For example: 1 sigma corresponds roughly to about 1 chance in 10 that the true value exceeds the average value by more than one standard deviation. 2 sigma indicates a probability of about 5/100 that the true average exceeds the measured average by more than 1 standard deviation. For 3 sigma it's about 2.5/1000, and so forth. In particle physics a 5 sigma result is considered sufficient to constitute proof of a hypothesis.
This rule of thumb quickly deteriorates at higher sigma. 1 sigma is also closer to 1/3 than 1/10.

Just to see the level of ridiculousness in claiming to control the Gaussianity of the errors, I plugged the error function into Wolfram Alpha - the result came out to something like 1e-34000, give or take a few hundred orders of magnitude.
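The same number can be reproduced without Wolfram Alpha by using arbitrary-precision arithmetic (a sketch with mpmath; the choice of tool is incidental):

```python
import mpmath

mpmath.mp.dps = 50  # work with 50 decimal digits of precision

# One-sided Gaussian tail probability at 400 sigma:
# p = erfc(400 / sqrt(2)) / 2
p = mpmath.erfc(400 / mpmath.sqrt(2)) / 2
print(mpmath.nstr(p, 5))  # on the order of 1e-34747
```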
 
  • #5
Cerenkov said:
Hello.

On Ethan Siegel's 'Starts With a Bang' blog...

http://scienceblogs.com/startswithabang/2010/06/28/how-do-we-use-the-cmb-to-learn/

...he points out that the FIRAS CMB data has 400 sigma error bars.

Since I've read that a 5 sigma value corresponds to a p-value of about 3 x 10^-7 (approx. 1 in 3.5 million), I was wondering what p-value 400 sigma corresponds to?

Thanks,

Cerenkov.
As others mentioned, it's not really viable to take seriously any error estimates that go much beyond a handful of standard deviations.

The fundamental problem is that almost any real distribution will tend to have broader tails than the normal distribution. Even a five-sigma result, which represents a deviation that would happen by chance only about once in 3 million tries, cannot really be taken seriously for most applications.

The realistic way to interpret errors beyond a few sigma is to just state that it's highly unlikely that deviation is due to chance. Any further numbers applied to the probability really are meaningless, because they're based upon a mathematical model which is known to not be all that accurate in that regime.
 
  • Like
Likes atyy, Orodruin and jim mcnamara
  • #6
Well, this is... unprecedented.
I suppose I'd better ask some further questions.

1. The WMAP and Planck satellites confirmed the COBE data?

2. To the same level of confidence?

3. If 400 sigma FIRAS data is "absolutely ridiculous" and "meaningless" and "cannot be taken seriously"... then what value of sigma did WMAP and Planck record, when it came to measuring the power spectrum of the CMB?

4. Or, putting it another way, did all three science teams independently agree on the same sigma value?

5. This nobody was under the illusion that the FIRAS errors were under sufficient control for the 400 sigma claim to be representative of something meaningful.
Certain popular-level websites and books were responsible for this illusion. Could I therefore please ask for a link to somewhere or something that deals with the COBE, WMAP and Planck data in a meaningful (and not illusory) way that can also be understood at the popular level?

Thanks,

Cerenkov.
 
  • Like
Likes atyy
  • #7
Cerenkov said:
then what value of sigma did WMAP and Planck record, when it came to measuring the power spectrum of the CMB?
This is the wrong question to ask. The error bars just give you the error of each data point. You can do the exercise of comparing this with the theoretical prediction by plotting it in the same graph. The point is that you would not see the error bars if they were plotted at 1 sigma; you would just see points on top of the graph. So what they have done is blow up the 1 sigma error bars by a factor of 400.
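As a toy illustration of that trick (entirely made-up numbers, not the FIRAS data), the magnification is nothing more than this:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# A Planck-like curve in arbitrary units, with error bars far too
# small to see at their true size
x = np.linspace(1.0, 20.0, 40)
y = x**3 / (np.exp(x / 5.0) - 1.0)
err = 0.001 * y.max() * np.ones_like(x)

fig, ax = plt.subplots()
ax.plot(x, y, label="theory")
# Multiply the error bars by 400 purely so they become visible,
# exactly as in the FIRAS figure
ax.errorbar(x, y, yerr=400 * err, fmt=".", label="data (error bars x400)")
ax.legend()
fig.savefig("spectrum.png")
```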

Cerenkov said:
4. Or, putting it another way, did all three science teams independently agree on the same sigma value?
Again, this is the wrong question to ask.
 
  • #8
Hmmm...

Thank you for letting me know that I am asking the wrong questions.

Would it fall within the mission statement of this forum...

"Our mission is to provide a place for people (whether students, professional scientists, or others interested in science) to learn and discuss science as it is currently generally understood and practiced by the professional scientific community."

...for you to help someone interested in science in a proactive, rather than a reactive way?

I ask because, speaking as a nobody, it puts a real dampener on my enthusiasm for science to be told by a somebody that I'm wrong and wrong and wrong again.

Any chance of changing my learning experience in this thread from something negative into something more positive and helpful?

Thanks,

Cerenkov.
 
  • #9
Cerenkov said:
1. The WMAP and Planck satellites confirmed the COBE data?

2. To the same level of confidence?
These satellites were built for different purposes, and don't actually measure the average temperature of the CMB directly. They were constructed to, as accurately as possible, measure the differences in temperature across the sky.

The results of those satellites are consistent with COBE, but they just weren't built to measure the average temperature.

Cerenkov said:
3. If 400 sigma FIRAS data is "absolutely ridiculous" and "meaningless" and "cannot be taken seriously"... then what value of sigma did WMAP and Planck record, when it came to measuring the power spectrum of the CMB?
The power spectrum is a completely different measurement. The 400-sigma error bars were placed on the temperature spectrum of the CMB as measured by COBE. COBE didn't really measure the power spectrum (at least, not very well), which is a statistical description of how the temperature varies from place to place on the sky.

The point of the incredible accuracy of the COBE result is not that there was a precise measurement, but rather that the measurement was precise enough that there is little reason to actually care about how precise it was. The average CMB temperature is known, to a high degree of accuracy. That's the take-away. The experimental team could **** up in a wide variety of ways, getting the answer wrong by, say, 5-sigma, and it would make no difference at all to the meaning of the result.

To state this again in other words, if the CMB temperature were 2.728K rather than the current best-estimate of 2.7255K, would it change our understanding of the CMB in any meaningful way? Almost certainly not.

Cerenkov said:
4. Or, putting it another way, did all three science teams independently agree on the same sigma value?
No, because they're different instruments with different errors. The measurements of all three satellites are consistent with one another to within their respective errors, though.

Cerenkov said:
5. This nobody was under the illusion that the FIRAS errors were under sufficient control for the 400 sigma claim to be representative of something meaningful.
The meaningful conclusions are:
1) We know the temperature of the CMB very, very precisely. The 400 number doesn't clarify how precisely we know it.
2) We know it so precisely because the CMB is extraordinarily bright.
 
  • Like
Likes atyy
  • #10
kimbyd said:
1) We know the temperature of the CMB very, very precisely. The 400 number doesn't clarify how precisely we know it.
More precisely, the 400 number has nothing to do with it. It is how much the error bars in the data have been magnified to be visible in the plot. How this translates to an estimate of the CMB temperature is a matter of data analysis and results in two things: (1) Confidence intervals for the CMB temperature, which can be quoted at varying confidence levels. (2) A best fit temperature for which the predicted spectrum best fits the observation. To see how well this best fit actually fits the data, the common thing to do is to quote the value of the ##\chi^2## per degree of freedom (with ##n## degrees of freedom), which is expected to be 1 with a variance of ##2/n##. This is what can be translated to a p-value telling you how good the fit really is given the data and the model.
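To illustrate that last step with hypothetical numbers (not the actual FIRAS fit, just a sketch of the ##\chi^2## goodness-of-fit recipe):

```python
from scipy.stats import chi2

# Hypothetical fit result: chi^2 = 42.0 over n = 40 degrees of freedom
chisq, n = 42.0, 40

print(chisq / n)  # chi^2 per degree of freedom, 1.05 -- close to the expected 1

# Goodness-of-fit p-value: the probability of a chi^2 at least this large
# if the model is correct (the upper-tail, or survival, probability)
p = chi2.sf(chisq, df=n)
print(p)  # ~0.4, a perfectly acceptable fit
```

A p-value far below, say, 0.01 would instead signal that the model does not describe the data, or that the errors are misestimated.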
 
  • Like
Likes atyy
  • #11
Orodruin said:
More precisely, the 400 number has nothing to do with it. It is how much the error bars in the data have been magnified to be visible in the plot. How this translates to an estimate of the CMB temperature is a matter of data analysis and results in two things: (1) Confidence intervals for the CMB temperature, which can be quoted at varying confidence levels. (2) A best fit temperature for which the predicted spectrum best fits the observation. To see how well this best fit actually fits the data, the common thing to do is to quote the value of the ##\chi^2## per degree of freedom (with ##n## degrees of freedom), which is expected to be 1 with a variance of ##2/n##. This is what can be translated to a p-value telling you how good the fit really is given the data and the model.
Right. The current best-fit estimate of the CMB temperature is:
$$2.72548 \pm 0.00057 \,\mathrm{K}$$
source

If we are comparing against the null hypothesis of no CMB signal at all, then this is a ~4800 sigma detection. But then, we knew the CMB existed long before COBE was launched. So this is rather like measuring the temperature of sunlight hitting the Earth and then comparing it to a null hypothesis of no sunlight. Sure, you'll get an extremely significant detection, because there is no question whatsoever that the Sun is pretty bright. But "the Sun is bright" isn't a question that needed answering.

The benefit of this measurement is less about the fact that the CMB exists, but that it was demonstrated to be highly uniform, with an almost perfect black-body spectrum. That's the point of that graph with the 400-sigma error bars: there's a theoretical prediction that the spectrum of the CMB will be almost a perfect black body. When you fit a black-body spectrum to the CMB intensity across a variety of wavelengths, you get a result that cannot be distinguished from a perfect black body, even though you have to multiply the error bars by a factor of 400 in order to even see that there are error bars at all.
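The ~4800 figure is just the best-fit temperature divided by its uncertainty, which is easy to check (a quick Python sketch using the numbers above):

```python
# Best-fit CMB temperature and its 1-sigma uncertainty, from the fit quoted above
T, sigma_T = 2.72548, 0.00057

# "Detection significance" against the (uninteresting) null hypothesis T = 0
print(T / sigma_T)  # ~4781, i.e. roughly 4800 sigma
```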
 
  • Like
Likes jim mcnamara
  • #12
kimbyd said:
These satellites were built for different purposes, and don't actually measure the average temperature of the CMB directly. They were constructed to, as accurately as possible, measure the differences in temperature across the sky.

The results of those satellites are consistent with COBE, but they just weren't built to measure the average temperature.

The power spectrum is a completely different measurement. The 400-sigma error bars were placed on the temperature spectrum of the CMB as measured by COBE. COBE didn't really measure the power spectrum (at least, not very well), which is a statistical description of how the temperature varies from place to place on the sky.

The point of the incredible accuracy of the COBE result is not that there was a precise measurement, but rather that the measurement was precise enough that there is little reason to actually care about how precise it was. The average CMB temperature is known, to a high degree of accuracy. That's the take-away. The experimental team could **** up in a wide variety of ways, getting the answer wrong by, say, 5-sigma, and it would make no difference at all to the meaning of the result.

To state this again in other words, if the CMB temperature were 2.728K rather than the current best-estimate of 2.7255K, would it change our understanding of the CMB in any meaningful way? Almost certainly not.

No, because they're different instruments with different errors. The measurements of all three satellites are consistent with one another to within their respective errors, though.

The meaningful conclusions are:
1) We know the temperature of the CMB very, very precisely. The 400 number doesn't clarify how precisely we know it.
2) We know it so precisely because the CMB is extraordinarily bright.

Thank you for explaining these complex and difficult-to-understand concepts to me in a clear and easy-to-read way, kimbyd.

Cerenkov.
 
  • #13
Cerenkov said:
Certain popular-level websites and books were responsible for this illusion.
Yes, you will find that that happens a lot. When I got to a certain age in retirement, I decided that I needed to start doing something besides my wood anatomy research to keep the old gray cells from going stale, so I started reading Economics popular literature and Science popular literature. The Economics was very interesting and I read a dozen or so books but what really grabbed me was the physics so after reading probably a couple dozen pop-sci presentations (VERY misleading stuff like Kaku and others) I started delving into somewhat more serious popularizations and watching every "science" video on TV.

I got a lot of information from all that, most of it wrong or misleading in some significant way. Then I came here and spent a bunch of time reading posts and getting a sense of where the pop-science presentations had led me astray. Don't be discouraged if you find yourself coming from a point of view that is based on a misunderstanding that is no fault of your own. Getting your "science" from pop-science presentations is at the very best just a starting place to move towards understanding ACTUAL science.

Terminology, in particular, is often used in very imprecise and downright sloppy ways in pop-science, and you'll find that such usage is quickly pointed out here on PF as wrong. It's never anything personal when people point out such mistakes (I'm not speaking of anything in your posts, I'm just warning you in general).

Although, as others have pointed out, you cannot get anywhere near a full understanding of, for example, General Relativity without knowing a LOT of math, it has been pleasantly surprising to me to find that you CAN move well beyond pop-sci nonsense without the full understanding that requires a lot of math.

Good luck.
 
  • Like
Likes Orodruin, Bystander and PeterDonis
  • #14
Cerenkov said:
Hello.

On Ethan Siegel's 'Starts With a Bang' blog...

http://scienceblogs.com/startswithabang/2010/06/28/how-do-we-use-the-cmb-to-learn/

...he points out that the FIRAS CMB data has 400 sigma error bars.

Since I've read that a 5 sigma value corresponds to a p-value of about 3 x 10^-7 (approx. 1 in 3.5 million), I was wondering what p-value 400 sigma corresponds to?

Thanks,

Cerenkov.
One thing that may help here: It's one thing to compare a measurement to its uncertainty. (A tape measure says my desk is 48 inches wide, and the uncertainty on that is well under 0.1 inch, so this is a "480 sigma" result.) It's quite another to translate that into a probability.

The translation requires a model, and such models are virtually always only good out to a few sigma, at best, as @kimbyd said. In addition, each model yields probabilities of a specific situation. (In this case, the usual model would correspond to the probability that my desk was zero inches wide and that by some fluke, the measurement process yielded 48 inches.) As in the desk example, some models are just inappropriate: They correspond to a possibility that is likely to be of no real interest.
 
  • #15
JMz said:
(A tape measure says my desk is 48 inches wide, and the uncertainty on that is well under 0.1 inch, so this is a "480 sigma" result.)

No, it isn't. The number of "sigmas" is not the measurement result divided by the uncertainty.
 
  • #16
PeterDonis said:
No, it isn't. The number of "sigmas" is not the measurement result divided by the uncertainty.
Please don't go there, Peter. We both know that there is just such a null hypothesis, and the inappropriateness of that one was the very reason I used it.
 
  • #17
JMz said:
Please don't go there, Peter. We both know that there is just such a null hypothesis, and the inappropriateness of that one was the very reason I used it.

I'm sorry, this doesn't make any sense. My statement was very clear and pointed out an obviously incorrect statement by you. The incorrectness of the statement of yours that I quoted had nothing to do with any null hypothesis.
 
  • #18
JMz said:
One thing that may help here: It's one thing to compare a measurement to its uncertainty. (A tape measure says my desk is 48 inches wide, and the uncertainty on that is well under 0.1 inch, so this is a "480 sigma" result.) It's quite another to translate that into a probability.
Without knowing what to compare with, talking about a number of sigmas is not very meaningful. Your intent could have been to find out whether your desk is 48 or 47 inches, which would leave you with a "10 sigma" result instead, even if your actual measurement did not change, so there is no real "x sigma" associated with the measurement itself. One should not confuse error bars in the data with the p-values obtained in hypothesis testing, which is one of the problems with the OP here.

Another issue is that people often mix up p-values with probabilities of a hypothesis actually being true.
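To make the dependence on the null hypothesis concrete, here is the desk example in a few lines (made-up numbers from the posts above):

```python
# Tape-measure reading and its uncertainty, from the desk example
measured, sigma = 48.0, 0.1

# The "number of sigmas" exists only relative to a chosen null hypothesis
sigmas = {null: abs(measured - null) / sigma for null in (0.0, 47.0, 48.0)}
for null, n_sig in sigmas.items():
    print(f"null = {null:4.1f} in -> {n_sig:6.1f} sigma")
# The "no desk" hypothesis (null = 0) gives 480 sigma but answers a
# question nobody asked; null = 47 gives 10 sigma; null = 48 gives 0.
```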
 

FAQ: How does 400 sigma compare with 5 sigma?

1. How does 400 sigma compare with 5 sigma?

Taken at face value, 5 sigma corresponds to a p-value of about 3 x 10^-7 (roughly 1 in 3.5 million), while 400 sigma would correspond to a p-value on the order of 10^-34747. In practice the second number is meaningless: no real error distribution is Gaussian that far into its tails. In the FIRAS plot discussed in this thread, "400 sigma" is not a significance at all - the error bars were simply magnified by a factor of 400 so they would be visible on the graph.

2. What does a 400 sigma difference indicate?

Statistically, nothing that can be taken literally. No experiment controls its errors well enough for such a figure to carry a probabilistic interpretation. Beyond a few sigma, the honest statement is simply that the deviation is extremely unlikely to be due to chance.

3. How reliable is a 400 sigma result?

A claimed significance of 400 sigma should not be taken at face value, because real distributions tend to have broader tails than the normal distribution. The underlying FIRAS temperature measurement is extremely reliable, but that reliability is expressed by its confidence interval, not by the 400 figure.

4. Can a result with 5 sigma be considered significant?

Yes. In particle physics, 5 sigma is the conventional threshold for claiming a discovery, corresponding to a p-value of about 3 x 10^-7.

5. How does the sigma level impact the acceptance of a scientific discovery?

Up to a handful of sigma, higher significance means more confidence that a result is not a statistical fluke. Beyond that, extra digits add nothing: the Gaussian model of the errors can no longer be trusted, and replication and control of systematic uncertainties matter far more than the quoted sigma.
