What's Delaying Fermilab's Muon g-2 Results Release?

In summary: But, if the unblinded data reveals what looks like a goofy problem, like the one that caused the superluminal neutrino speed result from the OPERA experiment a while back, it wouldn't be improper to try to look for an explanation before publishing and to include that investigation in the final product. That is definitely ethical and responsible behavior. However, I don't think it's the right thing to do if the problem turns out to be something small and easily fixed. I would prefer for the collaboration to release the results and then have the community of scientists try to figure out the problem. If the problem is something big and difficult to fix, then they would need to come out and say so.
  • #36
First, this thread started because one member felt that the experiment had the results in January but was withholding them because they were hiding a problem. We know now that was totally untrue. Somebody made it up and then it was used to cast aspersions on the scientific team's competence, integrity, or both.

Second, it is also not the case that all new physics must affect g-2. It's actually quite easy to construct an example: a 2HDM with a light h and H and a heavy A, H+ and H-. One might even say "trivial". I'm not even a theorist, and it took me less time to think of one than to type it. It may be relevant that the electroweak contribution is in the seventh significant digit, so even a W' and Z' only a factor of ~3 heavier than the W and Z (masses long since excluded by direct searches) would be invisible here.

Third, there seems to be the feeling that 4.2 sigma means "new physics". If you go to the theory paper (Ref. [13] in the PRL) you can see in Figure 1 that the calculation is well within the "no new physics" band. Also, the BMW collaboration has a calculation they say is right on the money.

Fourth, as Chris Polly said, this assumes there is no problem with the technique. Such a problem does not need to be large - this is a 460 ppb measurement. There is a long history of different techniques giving different results - two recent ones are the neutron lifetime and the proton radius. This is why the JPARC experiment is so important. It would be important even if it were less sensitive than the FNAL experiment (as it stands, the two have comparable targets).
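As a concrete illustration of where the 4.2 sigma in the third point comes from, here is a minimal sketch using the numbers quoted at the April 2021 announcement, in units of 10^-11 (treat them as illustrative; the naive quadrature also ignores any correlations):

```python
# Naive significance of the muon g-2 tension, in units of 1e-11.
# Values as quoted at the April 2021 announcement (illustrative only; see the
# PRL and the Theory Initiative white paper for the authoritative numbers).
a_exp, err_exp = 116_592_061, 41   # experimental world average (BNL + FNAL Run 1)
a_th,  err_th  = 116_591_810, 43   # Standard Model prediction (Theory Initiative)

delta = a_exp - a_th
err = (err_exp**2 + err_th**2) ** 0.5
print(f"difference = {delta}e-11, combined error = {err:.0f}e-11, "
      f"tension = {delta / err:.1f} sigma")
# -> difference = 251e-11, combined error = 59e-11, tension = 4.2 sigma
```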
 
  • Like
  • Informative
Likes PeroK, weirdoguy, ohwilleke and 2 others
  • #37
ohwilleke said:
In China, it comes from coal fired power plants with very few emissions controls.
BTW this post of mine reminded me of the great and catchy tune of Level 42:
 
  • #40
gmax137 said:
Apparently it depends on who is asked.

Just like the SM prediction for g-2. 😈
 
  • Haha
  • Like
Likes ohwilleke, vanhees71, Demystifier and 2 others
  • #42
mfb said:
Reaching 750 times the energy with the LHC technology would need a ring 750 times as large, ~20,000 km circumference. Even if you double the field strength it's still 10,000 km. Europe doesn't have space for that, but in North America it could just fit between Hudson Bay, Mexico, Washington and Washington DC. At least if we ignore all financial and technical problems.

They should build it in outer space :smile:
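For what it's worth, a quick sketch of the arithmetic in the quoted post; the only assumption is that, at fixed magnet technology, the reachable energy of a synchrotron scales like bending field times radius:

```python
# Synchrotron scaling: E ∝ B * R, so at fixed field the circumference grows
# linearly with the target energy.
lhc_circumference_km = 26.7   # LHC ring circumference
energy_factor = 750           # hypothetical target: 750x the LHC energy

same_field = lhc_circumference_km * energy_factor
double_field = same_field / 2
print(f"same field: ~{same_field:,.0f} km, double field: ~{double_field:,.0f} km")
# -> roughly 20,000 km and 10,000 km, as in the quoted post
```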
 
  • Like
Likes ohwilleke
  • #43
gmax137 said:
Not that it is important to this conversation, but no.
Also recall NAFTA, the "North American Free Trade Agreement", between Canada, the US, and Mexico.
I am always right even when I am not!
 
  • #44
mfb said:
in North America it could just fit between Hudson Bay, Mexico, Washington and Washington DC. At least if we ignore all financial and technical problems.

**** it, let's do it.
 
  • #45
JLowe said:
**** it, let's do it.
It's called dang your curse word... :oldbiggrin:
 
  • #46
I expect JPARC to end up close to the Fermilab value, and eventually most theory predictions to end up at the same value. The BMW prediction matches the experimental result.

At least the accelerators for g-2 experiments are nice and compact. Here are some 3000 km diameter circles. Note the map distortions.
 
  • Like
Likes ohwilleke and vanhees71
  • #47
But it would be so much more fun if E34 gets the g-2 Theory Initiative value, and FNAL continues to match the BMW value.
 
  • Haha
Likes ohwilleke and vanhees71
  • #48
mfb said:
If there are two SM predictions and only one agrees with measurements...
At least we will learn more about SM predictions of hadronic effects.
Do you have any opinion about this second model? Is it real?
https://www.nature.com/articles/s41586-021-03418-1
 
  • #49
That's the above-mentioned lattice-QCD calculation of the leading hadronic contribution to ##(g-2)## by the Wuppertal (BMW) lattice-QCD collaboration. It's at least a hint that the prediction on the theory side needs to be consolidated. If I understand it right, what's compared to the measurement is a theoretical calculation using empirical input for the said hadronic contributions, which relies on dispersion-relation analyses of the data, and afaik that fitting is a tricky business of its own.

Of course the lattice calculation also has to be solidified and maybe checked by other lattice collaborations, since lattice-QCD calculations are a tricky business too (I only remind you of the long debate about the deconfinement and/or chiral-transition temperature, which finally settled at the lower value of around 155 MeV predicted by the Wuppertal group ;-)).

Whether or not the ##(g-2)## results are really hints of "physics beyond the Standard Model" remains an exciting question.
 
  • Like
Likes PeroK and exponent137
  • #50
gwnorth said:
Central America is part of North America.
I hate this.

EDIT: To end the argument once and for all, Mexico is part of central southern north America. Either that or southern central north America. Perhaps both.
 
  • #51
vanhees71 said:
That's the above-mentioned lattice-QCD calculation of the leading hadronic contribution to ##(g-2)## by the Wuppertal (BMW) lattice-QCD collaboration. It's at least a hint that the prediction on the theory side needs to be consolidated. If I understand it right, what's compared to the measurement is a theoretical calculation using empirical input for the said hadronic contributions, which relies on dispersion-relation analyses of the data, and afaik that fitting is a tricky business of its own.

Of course the lattice calculation also has to be solidified and maybe checked by other lattice collaborations, since lattice-QCD calculations are a tricky business too (I only remind you of the long debate about the deconfinement and/or chiral-transition temperature, which finally settled at the lower value of around 155 MeV predicted by the Wuppertal group ;-)).

Whether or not the ##(g-2)## results are really hints of "physics beyond the Standard Model" remains an exciting question.
Can't wait until I learn enough QFT for all that to not sound like complete gibberish to me!
 
  • Haha
Likes Demystifier
  • #52
How does this relate to the LHCb result? I think I get them mixed up. Are they measuring totally separate things that just have to do with muons? Are both sensitive to the same or similar QCD calculations?
 
  • #53
nolxiii said:
Are they measuring totally separate things that just have to do with muons?
Yes.
 
  • Like
Likes vanhees71 and ohwilleke
  • #55
exponent137 said:
What can this article tell us about the g-2 disagreement?
https://www.quantamagazine.org/protons-antimatter-revealed-by-decades-old-experiment-20210224/
At least, it can tell us that the hadrons are not understood well enough?

(Although we are talking about muons, the g-2 disagreement problem comes from the hadrons.)

Not much. The article is about proton structure and the proton parton distribution function (PDF).

The Theory Initiative's white paper basically looks at the propensity of electron-positron collisions to produce pions, and at the properties of the pions produced, in order to avoid having to calculate that contribution from first principles, and then carries that over to the muon g-2 calculation, while the BMW calculation works directly from QCD. The BMW paper argues that the Theory Initiative's transfer of the electron-positron collision data to the muon g-2 calculation was done incorrectly (and an ugly mix of experimental results for parts of the calculation and lattice QCD simulations for other parts of it is certainly an unconventional approach).

In the proton radius puzzle, it turned out that the muonic hydrogen measurement was spot on and that the older, less accurate measurements of the proton radius in ordinary (electronic) hydrogen were the source of the discrepancy. We could be seeing something similar here.
 
  • Like
Likes vanhees71 and exponent137
  • #56
But indeed the largest uncertainty in the theoretical calculation of ##(g-2)## of the muon comes from the radiative corrections due to the strong interactions (in low-energy language, "the hadronic contributions" or "hadronic vacuum polarization", HVP). If I understand it right, what's usually compared as "theory" to the data uses a semiempirical approach to determine these hadronic contributions, calculating the needed matrix elements via dispersion relations from measurements of the ##\mathrm{e}^+ + \mathrm{e}^{-} \rightarrow \text{hadrons}## cross section. This is based on very general theoretical input, i.e., the unitarity of the S-matrix, but the devil is in the details, because it's anything but easy to use the dispersion relations to get the HVP contributions from the data.

So I wouldn't be too surprised if the systematic uncertainty of this procedure turns out to be underestimated. After all, there are hints from the lattice (by the Wuppertal/BMW lQCD group) that the HVP contributions may well be such that the discrepancy between "theory" and "data" is practically gone (only about a 1 sigma discrepancy). Of course, lQCD calculations are a tricky business too. One must not forget that we are talking here about high-accuracy physics, which is never easy to get (neither experimentally nor theoretically).
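For reference, the data-driven determination described above is based on the standard leading-order dispersion formula (schematically; conventions for the kernel and the integration threshold vary between papers)
$$a_\mu^{\mathrm{HVP,LO}} = \frac{\alpha^2}{3\pi^2} \int_{s_{\mathrm{thr}}}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s), \qquad R(s) \equiv \frac{\sigma(\mathrm{e}^+\mathrm{e}^- \to \text{hadrons})}{\sigma(\mathrm{e}^+\mathrm{e}^- \to \mu^+\mu^-)},$$
where ##K(s)## is a known QED kernel behaving roughly like ##m_\mu^2/(3s)##, so the integral is dominated by the low-energy region around the ##\rho## resonance, exactly where the measured ##\mathrm{e}^+\mathrm{e}^-## cross sections are hardest to combine and control.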
 
  • Like
Likes ohwilleke, websterling, PeroK and 1 other person
  • #57
I am not an expert, but I don't believe that any of the theory calculations (of HVP) are pure ab initio calculations. All are ways of relating low energy measurements (done at places like Serpukhov in the 1960's) to g-2.

Had history evolved differently and we had a g-2 measurement first, we would be discussing whether there was an anomaly in the low-energy Serpukhov data.
 
  • Like
Likes Astronuc, vanhees71 and exponent137
  • #58
Vanadium 50 said:
I am not an expert, but I don't believe that any of the theory calculations (of HVP) are pure ab initio calculations. All are ways of relating low energy measurements (done at places like Serpukhov in the 1960's) to g-2.

Had history evolved differently and we had a g-2 measurement first, we would be discussing whether there was an anomaly in the low-energy Serpukhov data.
Are you sure?

I had understood lattice QCD to be akin to N-body simulations in cosmology. You discretize the QCD equations and the particles and time intervals and then iterate it. The description of what they did in their pre-print sounds like this is what they did.
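(For a rough sense of what "discretizing QCD" means in practice, schematically and not specific to what BMW actually did: the gauge part of the standard Wilson lattice action replaces the continuum field strength by products of link variables around elementary plaquettes,
$$S_G = \beta \sum_{x}\sum_{\mu<\nu}\Big[1 - \tfrac{1}{3}\,\mathrm{Re}\,\mathrm{Tr}\,U_{\mu\nu}(x)\Big], \qquad \beta = \frac{6}{g^2},$$
and observables are then obtained as Monte Carlo averages over many gauge-field configurations.)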

Quanta Magazine, interviewing the authors, summarizes what was done by the BMW groups as follows:
They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four [changes] needed a lot of computing power,” said Fodor.

The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.”
 
  • #60
ohwilleke said:
quantamagazine.org said:
The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching...
So I have an off-topic question about this computer time. One-hundred million hours is 11,000 years; split between the seven computers mentioned that would be what, 1600 years each. How does that work?
 
  • #61
Each computer has more than one CPU.
 
  • Like
Likes vanhees71, Demystifier and ohwilleke
  • #62
Vanadium 50 said:
Each computer has more than one CPU.
Thanks. I just looked up the first mentioned, Jülich, and just one of their machines (JUWELS) is said to have 122,768 CPU cores. Amazing.
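A minimal back-of-the-envelope sketch of why the years-long total isn't a contradiction (taking ~300 million core hours as illustrative of "several hundred million"; that number is my assumption, not a figure from the article):

```python
# Core hours vs. wall-clock time: the total is spread over ~1e5 cores.
core_hours = 300e6      # assumed "several hundred million core hours"
cores = 122_768         # JUWELS CPU cores, as quoted above

wall_clock_hours = core_hours / cores
print(f"{wall_clock_hours:,.0f} hours ≈ {wall_clock_hours / 24:,.0f} days")
# -> roughly 2,400 hours, i.e. about 100 days if one such machine ran it alone
```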
 
  • Like
Likes vanhees71, Demystifier and ohwilleke
  • #63
gmax137 said:
122,768 CPU cores. Amazing.

Tiny. ANL's Mira, now retired, had 786,432 cores. Each would run four threads.

A lot of DOE supercomputer use goes to Lattice QCD.
 
  • Like
Likes vanhees71, Demystifier and ohwilleke
  • #64
Great article on the Muon g-2 results posted in Forbes yesterday (just to add to the discussion back on page two of this thread)...

Obviously, what was released a couple of weeks ago are just some of the first results from Muon g-2. It will be interesting to see what else comes out of that campus and the engineers at FNAL.

If anyone else is interested, our organization provided some (or all) of the copper thermal straps (flexible thermal links) that are used by the accelerators at FNAL, SLAC, JLAB, ANL, and CERN, in their cryomodules, as well as the cold boxes, cryocoolers, cryostats, and dilution refrigerators in use at these labs, and we are always looking for university collaboration/partners at physics departments across North America, Europe, and Asia (partnering on articles for journals, collaborative research, ways to more efficiently cool cryocoolers, etc.).

If anyone on this thread would like to discuss how we can work together and even provide your university or lab with free thermal hardware, comment here or reach out to me at any time. You can also take a look at some of our other thermal strap products used by physics departments across the globe (for both terrestrial and spaceflight applications).

Arguments over the data and controversy aside--congrats to the Fermi team for their work...
 
  • Like
Likes ohwilleke
  • #65
Information for the next announcement:
https://physicstoday.scitation.org/doi/10.1063/PT.3.4765
The second and third runs, which incorporated additional improvements informed by the first run, are already complete; their results are expected to be published by next summer. According to Chris Polly, a spokesperson for the collaboration and a physicist at Fermilab, there’s about a 50-50 chance that those results will push the muon anomaly beyond 5 standard deviations.
 
  • Like
Likes gwnorth, ohwilleke and vanhees71
  • #66
exponent137 said:
Information for the next announcement:
https://physicstoday.scitation.org/doi/10.1063/PT.3.4765
The second and third runs, which incorporated additional improvements informed by the first run, are already complete; their results are expected to be published by next summer. According to Chris Polly, a spokesperson for the collaboration and a physicist at Fermilab, there’s about a 50-50 chance that those results will push the muon anomaly beyond 5 standard deviations.
Either it will or it won't.
Didn't need to use any fancy equations for this.
:cool:
 
  • #67
exponent137 said:
Information for the next announcement:
https://physicstoday.scitation.org/doi/10.1063/PT.3.4765
The second and third runs, which incorporated additional improvements informed by the first run, are already complete; their results are expected to be published by next summer. According to Chris Polly, a spokesperson for the collaboration and a physicist at Fermilab, there’s about a 50-50 chance that those results will push the muon anomaly beyond 5 standard deviations.
Of course, all the drama in this story is on the theory side and not the experiment side. If someone determines that the SM prediction really is the BMW one, then this becomes a case of boring, ever more precise confirmation of the SM, and all of the BSM theories proposed to explain the muon g-2 anomaly are wrong because there isn't one.
 
  • Like
Likes vanhees71 and exponent137
  • #68
It's still interesting to figure out why the other prediction is off in that case (and I think that's the most likely case).
 
  • Like
Likes vanhees71, ohwilleke and exponent137
  • #69
mfb said:
It's still interesting to figure out why the other prediction is off in that case (and I think that's the most likely case).
I agree. I'm not sure that the muon g-2 experiment itself will resolve that, however, as opposed to new rounds of the measurements incorporated in the estimate (which BMW didn't use).
 
  • #70
The other prediction is a semiempirical calculation of certain "hadronic contributions" to ##(g-2)_{\mu}##, obtained from ultraprecise measurements of ##\text{e}^+ + \text{e}^- \rightarrow \text{hadrons}## via dispersion relations. There the devil is in the details of how to apply these dispersion relations to the data. It's numerically not trivial, given that it's really high-precision physics. It's of course also important to consolidate the lattice calculations further.
 
  • Like
Likes ohwilleke
