What's Delaying Fermilab's Muon g-2 Results Release?

In summary: If the unblinded data reveal what looks like a goofy problem, like the one that caused the superluminal neutrino speed result from the OPERA experiment a while back, it wouldn't be improper to look for an explanation before publishing and to include that investigation in the final product. That is definitely ethical and responsible behavior. However, I don't think it's the right thing to do if the problem turns out to be something small and easily fixed. I would prefer for the collaboration to release the results and then have the community of scientists try to figure out the problem. If the problem is something big and difficult to fix, then they would need to come out and say so.
  • #71
Just as a reference point: the muonic hydrogen proton radius discrepancy was almost entirely due to weaknesses in the old ordinary hydrogen proton radius measurements, and the experimental data used in the Theory Initiative SM calculation could present similar issues.
 
  • #72
ohwilleke said:
Just as a reference point: the muonic hydrogen proton radius discrepancy was almost entirely due to weaknesses in the old ordinary hydrogen proton radius measurements, and the experimental data used in the Theory Initiative SM calculation could present similar issues.
What do you think about this article?
https://physicsworld.com/a/solving-the-proton-puzzle/

Maybe atom interferometry will also show that those other classical measurements of G had some unknown systematic error.
 
  • #73
exponent137 said:
What do you think about this article?
https://physicsworld.com/a/solving-the-proton-puzzle/

Maybe atom interferometry will also show that those other classical measurements of G had some unknown systematic error.
The article is a well-done analysis.
 
  • #74
The new g-2 measurement results will be released on August 10th.
 
  • #75
The new August 10, 2023 paper and its abstract:

[Screenshot of the paper's abstract]

The new paper doesn't delve into the theoretical prediction issues in depth, even to the level addressed in today's live-streamed presentation. It says only:

A comprehensive prediction for the Standard Model value of the muon magnetic anomaly was compiled most recently by the Muon g−2 Theory Initiative in 2020 [20], using results from [21–31]. The leading order hadronic contribution, known as hadronic vacuum polarization (HVP), was taken from e+e− → hadrons cross section measurements performed by multiple experiments. However, a recent lattice calculation of HVP by the BMW collaboration [30] shows significant tension with the e+e− data. Also, a new preliminary measurement of the e+e− → π+π− cross section from the CMD-3 experiment [32] disagrees significantly with all other e+e− data. There are ongoing efforts to clarify the current theoretical situation [33]. While a comparison between the Fermilab result from Run-1/2/3 presented here, aµ(FNAL), and the 2020 prediction yields a discrepancy of 5.0σ, an updated prediction considering all available data will likely yield a smaller and less significant discrepancy.
The CMD-3 paper is:
F. V. Ignatov et al. (CMD-3 Collaboration), Measurement of the e+e− → π+π− cross section from threshold to 1.2 GeV with the CMD-3 detector (2023), arXiv:2302.08834.

This is 5 sigma from the partially data-based 2020 White Paper Standard Model prediction, but it is much closer to (consistent at the 2 sigma level with) the 2020 BMW lattice QCD prediction, which has been corroborated by essentially all other partial lattice QCD calculations since the last announcement. It is also close to a prediction made using the subset of the data in the data-based prediction that lies closest to the experimental result.

The 2020 White Paper is:

T. Aoyama et al., The anomalous magnetic moment of the muon in the Standard Model, Phys. Rep. 887, 1 (2020).

This is shown in the screenshot from their YouTube presentation this morning (below):

[Screenshot from the August 10 presentation comparing the experimental measurements with the Theory Initiative and BMW predictions]

As the screenshot makes visually very clear, there is now much more uncertainty in the theoretically calculated Standard Model predicted value of muon g-2 than there is in the experimental measurement itself.

For those of you who aren't visual learners (all values in units of ##10^{-11}##; a quick numerical cross-check of the quoted tensions follows the list):

World Experimental Average (2023): 116,592,059(22)
Fermilab Run 1+2+3 data (2023): 116,592,055(24)
Fermilab Run 2+3 data (2023): 116,592,057(25)
Combined measurement (2021): 116,592,061(41)
Fermilab Run 1 data (2021): 116,592,040(54)
Brookhaven E821 (2006): 116,592,089(63)
Theory Initiative calculation (2020): 116,591,810(43)
BMW lattice calculation (2020): 116,591,954(55)
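Here is that cross-check as a minimal Python sketch, using only the values listed above; it treats the experimental and theoretical uncertainties as uncorrelated Gaussians, which is the usual back-of-the-envelope convention rather than anything from the paper:

```python
from math import hypot

# Central values and one-sigma uncertainties in units of 1e-11 (from the list above).
world_avg_2023   = (116_592_059, 22)
fnal_run123      = (116_592_055, 24)
theory_init_2020 = (116_591_810, 43)
bmw_2020         = (116_591_954, 55)

def tension(a, b):
    # Discrepancy in standard deviations, adding the two uncertainties in
    # quadrature and treating the two values as uncorrelated.
    return abs(a[0] - b[0]) / hypot(a[1], b[1])

print(f"FNAL Run-1/2/3 vs 2020 White Paper: {tension(fnal_run123, theory_init_2020):.1f} sigma")  # ~5.0
print(f"World average  vs BMW lattice:      {tension(world_avg_2023, bmw_2020):.2f} sigma")       # ~1.77
```

This reproduces the paper's 5.0 sigma discrepancy against the 2020 White Paper and the roughly 1.8 sigma agreement with BMW quoted later in this thread.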

It is likely that the true uncertainty in the 2020 White Paper result is larger than stated, quite possibly because of understated systematic error in some of the underlying electron-positron collision data upon which it relies.

In short, there is no reason to doubt that the Fermilab measurement of muon g-2 is every bit as solid as claimed, but the various calculations of the QCD part of the Standard Model prediction are in strong tension with each other.

It appears that the correct Standard Model prediction is closer to the experimental result than the 2020 White Paper calculation (which used lattice QCD for parts of the calculation and experimental data in lieu of QCD calculations for other parts), although the exact source of the issue is only starting to be pinned down.

Side Point: The Hadronic Light-by-Light Calculation

The hadronic QCD component is the sum of two parts: the hadronic vacuum polarization (HVP) and the hadronic light-by-light (HLbL) contributions. In the Theory Initiative analysis the total hadronic amount is 6937(44) ##\times 10^{-11}##, broken out as HVP = 6845(40) (a 0.6% relative error) and HLbL = 92(18) (a 20% relative error).

In turn, the e+e− → π+π− cross section portion of the HVP contribution to muon g-2, which is the main piece for which the Theory Initiative relied upon experimental data rather than first-principles calculations, accounts for ##(5060 \pm 34) \times 10^{-11}## out of the total ##a_\mu^{\rm HVP} = (6931 \pm 40) \times 10^{-11}##, and it is the source of most of the uncertainty in the Theory Initiative prediction.
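A minimal sketch checking the arithmetic in the two paragraphs above (it assumes the HVP and HLbL uncertainties combine in simple quadrature, which is only approximately what the White Paper actually does):

```python
from math import hypot

# Theory Initiative hadronic pieces, in units of 1e-11 (numbers from the text above).
hvp, hvp_err   = 6845, 40     # total hadronic vacuum polarization
hlbl, hlbl_err = 92, 18       # hadronic light-by-light
pipi, pipi_err = 5060, 34     # e+e- -> pi+pi- part of the LO HVP total of 6931(40)

print(f"HVP relative error:     {hvp_err / hvp:.1%}")                         # ~0.6%
print(f"HLbL relative error:    {hlbl_err / hlbl:.0%}")                       # ~20%
print(f"Hadronic total:         {hvp + hlbl}({hypot(hvp_err, hlbl_err):.0f})")  # ~6937(44)
print(f"pi+pi- share of LO HVP: {pipi / 6931:.0%}")                           # ~73%
```

So the π+π− channel alone is roughly three quarters of the leading-order HVP term, which is why its data quality matters so much.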

The presentation doesn't note it, but there was also an adjustment in the hadronic light-by-light calculation (the smaller of the two QCD contributions to muon g-2, and one that is separate from the BMW HVP calculation) that brings the prediction closer to the experimental result; it was announced on the same day as the previous data announcement. The new calculation increases the HLbL contribution from ##92(18) \times 10^{-11}## to ##106.8(14.7) \times 10^{-11}##.

As the precision of the measurements and of the Standard Model prediction improves, a ##14.8 \times 10^{-11}## shift in the hadronic light-by-light portion of the calculation becomes more material.

Why Care?

Muon g-2 is an experimental observable that involves all three Standard Model forces and serves as a global test of the consistency of the Standard Model with experiment.

If there really were a five sigma discrepancy between the Standard Model prediction and the experimental result, this would imply new physics at fairly modest energies that could probably be reached at next-generation colliders (since muon g-2 is an observable that is more sensitive to low-energy new physics than to high-energy new physics).
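For a rough sense of why, one common back-of-the-envelope scaling (not from the paper, just the generic dimensional argument) is that a heavy new particle of mass ##\Lambda## coupling to the muon shifts the anomaly by

$$\Delta a_\mu \sim C\,\frac{m_\mu^2}{\Lambda^2},$$

so a ##\Delta a_\mu## of a few times ##10^{-9}## with a coupling factor ##C## of order one points to ##\Lambda## somewhere around a couple of TeV (lower still if ##C## carries a loop suppression), which is the kind of scale next-generation machines could plausibly reach.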

On the other hand, if the Standard Model prediction and the experimental result are actually consistent with each other, then new low-energy, non-gravitational physics is strongly disfavored at foreseeable high-energy physics experiments, except in very specific scenarios whose contributions cancel out in a muon g-2 calculation.
 
  • #76
For a moment, let us forget about the measurements of g-2. Can we say that the BMW assumptions are more logical and correct than those of the Standard Model? Or is this not clear?
 
  • #77
BMW is the SM. As is the Theory Initiative.
 
  • #78
exponent137 said:
For a moment, let us forget about the measurements of g-2. Can we say that the BMW assumptions are more logical and correct than those of the Standard Model? Or is this not clear?
Everybody is trying to make a Standard Model calculation.

BMW does a first-principles lattice QCD calculation relying only on measurements of general physical constants (like the strong force coupling constant) as experimental inputs.

The Theory Initiative took a different approach. It concluded that a big part of the lattice QCD calculation (which is profoundly difficult to do; BMW is the only group that has ever done the entire thing, and only by using multiple supercomputers for a long period of time) is equivalent to experiments that had already been done, although those experiments were somewhat stale.

The QCD calculations are so involved that it is hard to error-check your work, and we haven't yet had a full replication of these calculations by an independent group, which is the only surefire way to confirm that BMW didn't make errors. But key parts of the BMW calculation have been replicated repeatedly, and the Theory Initiative values for those key parts of the calculation (called the "window") are very different from the BMW values. So there is no good reason to doubt the BMW calculation at this point, and there is good reason to doubt the Theory Initiative result.

It is possible that the Theory Initiative merged the experimental results with lattice QCD calculations of a different kind in a manner that was not correct.

But the early CMD-3 experimental data, which redoes the stale experiments that the Theory Initiative relied upon and gets results very close to the BMW lattice QCD calculation and very different from the stale experiments, makes it seem more likely that while the Theory Initiative's method for integrating experiment and lattice QCD calculations may have been sound, the experimental results it relied upon were flawed and had understated systematic error. (Very much like the proton radius puzzle discussed above in this thread.)

I think that the underlying electron-positron data that the Theory Initiative relied upon was from the Large Electron-Positron Collider that operated at CERN from 1989 to 2000, although I haven't definitively pinned this down by going paper by paper back to the original sources, so I may be wrong about that. The introduction to the CMD-3 paper cited above notes that:

The π+π− channel gives the major part of the hadronic contribution to the muon anomaly, ##(506.0 \pm 3.4) \times 10^{-10}## out of the total ##a_\mu^{\rm HVP} = (693.1 \pm 4.0) \times 10^{-10}## value. It also determines (together with the light-by-light contribution) the overall uncertainty ##\Delta a_\mu = \pm 4.3 \times 10^{-10}## of the standard model prediction of muon g−2 [5]. To conform to the ultimate target precision of the ongoing Fermilab experiment [16,17], ##\Delta a_\mu^{\rm exp}[\text{E989}] \approx \pm 1.6 \times 10^{-10}##, and the future J-PARC muon g-2/EDM experiment [18], the π+π− production cross section needs to be known with a relative overall systematic uncertainty of about 0.2%. Several sub-percent precision measurements of the e+e− → π+π− cross section exist. The energy scan measurements were performed at the VEPP-2M collider by the CMD-2 experiment (with a systematic precision of 0.6–0.8%) [19,20,21,22] and by the SND experiment (1.3%) [23]. These results have somewhat limited statistical precision. There are also measurements based on the initial-state radiation (ISR) technique by KLOE (0.8%) [24,25,26,27], BABAR (0.5%) [28] and BES-III (0.9%) [29]. Due to the high luminosities of these e+e− factories, the accuracy of the results from these experiments is less limited by statistics; meanwhile they are not fully consistent with each other within the quoted systematic uncertainties. One of the main goals of the CMD-3 and SND experiments at the new VEPP-2000 e+e− collider at BINP, Novosibirsk, is to perform a new high precision, high statistics measurement of the e+e− → π+π− cross section. Recently, the first SND result based on about 10% of the collected statistics was presented with a systematic uncertainty of about 0.8% [30]. Here we present the first CMD-3 result.
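As a quick sanity check on the quoted 0.2% target (just arithmetic on the numbers in that paragraph):

$$0.002 \times 506.0 \times 10^{-10} \approx 1.0 \times 10^{-10},$$

which sits below the Fermilab target of ##\pm 1.6 \times 10^{-10}##, so at that precision the ##\pi^+\pi^-## cross section would no longer be the limiting factor.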
 
  • #79
AFAIK the method used by the Theory Initiative is to use experimental data to extract spectral functions and then use dispersion relations to get the radiative corrections for g-2. That's also a numerically challenging method.
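For reference, the data-driven leading-order HVP term is obtained (schematically) from the standard dispersion integral over the measured hadronic cross section,

$$a_\mu^{\mathrm{HVP,LO}} = \frac{\alpha^2}{3\pi^2}\int_{m_\pi^2}^{\infty}\frac{\mathrm{d}s}{s}\,K(s)\,R(s), \qquad R(s)=\frac{\sigma(e^+e^-\to\mathrm{hadrons})}{\sigma(e^+e^-\to\mu^+\mu^-)},$$

where ##K(s)## is a known, slowly varying QED kernel. The ##1/s## weighting means the low-energy region, and in particular the ##\pi^+\pi^-## channel, dominates both the value and the uncertainty.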
 
  • #80
vanhees71 said:
AFAIK the method used by the theory initiative
I believe this is correct.

Some history - the "old way" was for people to take the calculations and experimental inputs and combine them out of the box. This had some consistency problems, including a rather embarrassing sign error. The Theory Initiative was a community response to this: instead of a patchwork, let's all Do The Right Thing.

There is no consensus on what the "right thing" is (more on that later), so this evolved into something closer to "Do The Same Thing". The procedure is at least consistent. The problem - or at least a problem - is with the data inputs. Term X might depend on experiments A, B, and C. Term Y might depend on B, C and D, and Term Z on A, E, F and G. How do you get from the errors on A-G to the errors on X, Y and Z? If the errors were Gaussian, and you fully understood the correlations, you'd have a chance, but the errors aren't Gaussian, nor exact, nor are the correlations 100% understood.
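A toy illustration of that bookkeeping (hypothetical numbers, linear error propagation only; the real combination is much messier):

```python
import numpy as np

# Hypothetical input measurements A, B, C, D with their uncertainties, plus an
# assumed correlation between B and C (e.g. a shared systematic).
sigma = np.array([1.0, 2.0, 1.5, 0.8])        # errors on A, B, C, D
corr = np.eye(4)
corr[1, 2] = corr[2, 1] = 0.6                 # assumed B-C correlation
cov = np.outer(sigma, sigma) * corr           # covariance matrix of the inputs

# Suppose term X = A + B + C and term Y = B + C + D (linear, for simplicity).
J = np.array([[1.0, 1.0, 1.0, 0.0],           # dX/d(A, B, C, D)
              [0.0, 1.0, 1.0, 1.0]])          # dY/d(A, B, C, D)
cov_xy = J @ cov @ J.T                        # propagated covariance of (X, Y)

print("sigma_X, sigma_Y:", np.sqrt(np.diag(cov_xy)))
print("X-Y correlation: ", cov_xy[0, 1] / np.sqrt(cov_xy[0, 0] * cov_xy[1, 1]))
```

The point is that X and Y inherit a correlation through the shared inputs B and C; misjudge that 0.6 (or the Gaussian assumption) and every downstream error bar and significance moves with it.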

And some data is just wrong. You can get two measurements that feed in but can't both be right. Do you pick one? How? Do you take the average and inflate the error, thus ensuring that the central value is wrong but hopefully covered by the errors? Something else?
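One common prescription for the "average and inflate" option is a PDG-style scale factor; a minimal sketch with made-up numbers:

```python
import numpy as np

# Two hypothetical, mutually inconsistent measurements of the same quantity.
vals = np.array([506.0, 515.0])
errs = np.array([3.4, 3.0])

w = 1.0 / errs**2
mean = np.sum(w * vals) / np.sum(w)                 # inverse-variance weighted average
err = np.sqrt(1.0 / np.sum(w))
chi2 = np.sum(w * (vals - mean) ** 2)               # chi^2 with N - 1 degrees of freedom
scale = max(1.0, np.sqrt(chi2 / (len(vals) - 1)))   # inflate only if chi^2/dof > 1

print(f"naive average:       {mean:.1f} +/- {err:.1f}")
print(f"with inflated error: {mean:.1f} +/- {err * scale:.1f} (scale factor {scale:.1f})")
```

The central value still splits the difference between two numbers that cannot both be right; the inflation just tries to make the quoted error honest about that.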

A similar issue cropped up with parton densities in the proton. It was twelve years between when they started down a Theory Initiative-like path and when the PDF sets had serious predictive power (i.e. could tell you what you didn't already know). This is not easy, and the fact that two groups get different answers does not mean one is right and one is wrong. Both are wrong to a degree, and will become less wrong as the calculations and input data improve.
 
  • #81
I also have a general quibble with this approach. After all, the main motivation for this high-precision measurement of the muon's g-2 is to test the Standard Model of elementary particle physics (SM) with some hope of finally finding some deviations pointing in the direction of how a better theory might look, which again is motivated by the belief that the SM is incomplete. For me the most convincing argument for this conjecture is that the SM seems not to have "enough CP violation" in it to explain the matter-antimatter asymmetry in the universe, which also rests on the belief that the "initial state" an ##\epsilon## after the big bang was symmetric. Anyway, a test of the SM is always interesting.

Now if you extract some QCD radiative corrections from corresponding experimental data on ##\mathrm{e}^+ + \mathrm{e}^- \rightarrow \text{hadrons}##, you don't compare the g-2 measurement with the prediction of the SM but with parts of the SM prediction that can be calculated perturbatively (mostly the electroweak corrections) plus parts that are extracted from measurements. The latter are not SM predictions but whatever is really going on in Nature for the processes under consideration, like strong-interaction corrections to photon-photon scattering etc. So maybe there's some beyond-the-SM physics involved, i.e., it's indeed not the result you'd get from a calculation of these processes/radiative corrections within the SM.

That's why lattice calculations are so important, because they provide the corresponding radiative corrections from QCD in some approximation, and obviously getting these contributions is computationally very challenging. So there's only one complete calculation, by the BMW group, and interestingly it lowers the discrepancy between the SM prediction and the measured g-2 tremendously (I think it's only around ##2 \sigma##), i.e., it seems as if the SM might after all survive this test too. This is all the more likely since parts of the BMW calculation have been checked and confirmed by other, independent lattice groups.

From history in my own field, it's clear that such independent checks of highly complicated lattice calculations are very important, as the determination of the pseudo-critical chiral as well as confinement-deconfinement transition temperature (even at ##\mu_{\text{B}}=0##!) demonstrates, but that's another story.
 
  • #82
J-PARC is working on its own muon g-2 experiment. At the time of the proposal we didn't have the lattice calculations (at least not with competitive uncertainties) and the experimental uncertainty was larger as well. Now the motivation for this experiment has gotten significantly weaker. It will still be useful as an independent measurement with a different method to cross-check the Fermilab result, and it will improve the world average - but it has become clear that the main issue is on the theory side.
 
  • #83
vanhees71 said:
So there's only one complete calculation, by the BMW group, and interestingly it lowers the discrepancy between the SM prediction and the measured g-2 tremendously (I think it's only around ##2 \sigma##), i.e., it seems as if the SM might after all survive this test too.
The BMW calculation is consistent with the new muon g-2 world average at the 1.77 sigma level. The agreement is even a little better than that (despite a slightly lower combined uncertainty in the theoretical calculation of the SM prediction) when the improvement in the hadronic light-by-light calculation, which is separate from the BMW HVP calculation, is taken into account.
 
  • #84
mfb said:
J-PARC is working on its own muon g-2 experiment. At the time of the proposal we didn't have the lattice calculations (at least not with competitive uncertainties) and the experimental uncertainty was larger as well. Now the motivation for this experiment has gotten significantly weaker. It will still be useful as an independent measurement with a different method to cross-check the Fermilab result, and it will improve the world average - but it has become clear that the main issue is on the theory side.
FWIW, an independent cross-check from J-PARC is desirable because the Brookhaven and Fermilab experiments both use some of the same experimental equipment, which was shipped (in part by barge) from one lab to the other:

Transporting the g-2 ring 900 miles from Brookhaven to Fermilab was a feat of a different sort. While the iron that makes up the magnet yoke comes apart, the three 50-foot-diameter superconducting coils that energize the magnet do not, and therefore had to travel as a single unit. In order to maintain the superb accuracy of the electromagnet, the 50-foot-diameter circular coil shape had to be maintained to within a quarter inch, and flatness to within a tenth of an inch, during transportation.
In the summer of 2013, the Muon g-2 team successfully transported a 50-foot-wide electromagnet from Long Island to the Chicago suburbs in one piece. The move took 35 days and traversed 3,200 miles over land and sea. Thousands of people followed the move of the ring, and thousands were on hand to greet it upon its arrival at Fermilab.

The move began on June 22, 2013, as the ring was transported across the Brookhaven National Laboratory site, using a specially adapted flatbed truck and a 45-ton metal apparatus keeping the electromagnet as flat as possible. On the morning of June 24, the ring was driven down the William Floyd Parkway on Long Island, and then a massive crane was used to move it from the truck onto a waiting barge.

The barge set to sea on June 25, and spent nearly a month traveling down the east coast, around the tip of Florida, into the Gulf of Mexico and then up the Tennessee-Tombigbee Waterway to the Mississippi, Illinois and Des Plaines rivers. The barge arrived in Lemont, Illinois on July 20, and the ring was moved to the truck again on July 21. And then over three consecutive nights — July 23, 24 and 25 — that truck was used to drive the ring to Fermilab in Batavia, Illinois.

The Muon g-2 electromagnet crossed the threshold into Fermilab property at 4:07 a.m. on July 26. That afternoon, Fermilab held a party to welcome it, and about 3,000 of our neighbors attended. The collaboration is grateful for the support, and for the assistance of all the local, county and state agencies who made this move possible.

So any systematic error due to flaws in the design or construction of that equipment wouldn't be caught by the Fermilab replication. But J-PARC would address that issue.
 
  • #85
I don't think I buy that. The biggest recycled part is the magnet yoke, but the coils were all redone and the field remeasured (and remeasured better), so unless you want to argue that the iron is somehow cursed, it's not an equipment problem.

It could be a problem with the "magic momentum" technique, and that would possibly be exposed by a different technique. That still would not solve the problem of the theoretical uncertainties, of course.
 
  • #86
Vanadium 50 said:
That still would not solve the problem of the theoretical uncertainties, of course.
Of course.

And nobody has any good reason to think that Fermilab's measurements are not spot on. It is an expensive and cumbersome measurement to make at that level of precision, but it is a much more straightforward and cleaner measurement than, for example, most of the quantities measured at the Large Hadron Collider.
 
  • #87
The YouTube presentation on August 10 also discussed how much improvement in the precision of the measurement is expected as new data is collected (something that wasn't discussed in the paper that was submitted).

The experimental value is already twice as precise as the best available theoretical prediction of muon g-2 in the Standard Model. The experimental value is expected to ultimately be about four times more precise than the current best available theoretical predictions, as illustrated below:

[Screenshot from the presentation showing the projected experimental uncertainty from future runs]


The completed Runs 4 and 5 and the in-progress Run 6 are anticipated to reduce the uncertainty in the experimental measurement by about 50% over the next two or three years.

But the improvement will come mostly from Run 4, which should release its results sometime around October 2024. The additional experimental precision anticipated from Run 5 and Run 6 after that is expected to be fairly modest.

The chart only shows the reduction in uncertainty due to a larger sample size, but so far reductions in systematic uncertainty and reductions in statistical uncertainty in each new run have been almost exactly proportionate, and there is good reason to think that this trend will continue.
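For a rough illustration of that scaling (assumed event-count factors, not official projections; it just applies the usual ##1/\sqrt{N}## statistics rule to the current roughly ##22 \times 10^{-11}## combined uncertainty):

```python
from math import sqrt

current_uncertainty = 22e-11   # current world-average uncertainty, for scale
for extra_data in (2, 3, 4):   # assumed multiples of the currently analyzed dataset
    projected = current_uncertainty / sqrt(extra_data)
    reduction = 1 - 1 / sqrt(extra_data)
    print(f"{extra_data}x the data -> ~{projected / 1e-11:.0f}e-11 ({reduction:.0%} reduction)")
```

Roughly quadrupling the analyzed statistics is what it takes to halve the uncertainty, which is consistent with the "about 50%" figure, provided the systematic uncertainties keep shrinking in step as noted above.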
 
