CERN Experiment NA62 observes rare Kaon decay mode

  • #1
Astronuc
Geneva, 25 September 2024. At a seminar held at CERN this week, the NA62 collaboration reported the unequivocal confirmation of the ultra-rare decay of a positively charged kaon into a positively charged pion and a neutrino–antineutrino pair. Experiments including NA62 have previously measured and seen evidence of this process, but this is the first time it has been measured with a statistical significance of five standard deviations, crossing the threshold traditionally required to claim a discovery in particle physics.

Denoted by ##K^+ \to \pi^+ \nu\bar\nu##, this decay is among the rarest particle processes ever observed: in the Standard Model of particle physics, less than one in 10 billion positively charged kaons are predicted to decay in this way.

https://home.cern/news/press-release/physics/na62-experiment-cern-observes-ultra-rare-particle-decay

August 2024 - https://home.cern/news/news/physics/na62-announces-its-first-search-long-lived-particles
 
  • #2
I'm not a big fan of "we finally crossed 5σ" results for a number of reasons, but I will say this is a hard, hard measurement. Probably the second-hardest (the hardest being the neutral-kaon equivalent, ##K_L \to \pi^0 \nu\bar\nu##).

While the decay is certainly rare, it's not beyond the pale. If this were the only way the K+ decayed, it would have a lifetime on the order of minutes. Plenty of nuclei decay on that timescale. So this is more "difficult" than "unexpected".
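As a back-of-the-envelope check of that framing (the lifetime and branching-ratio inputs below are approximate published values, quoted from memory, so treat the numbers as illustrative):

```python
# Rough partial-lifetime estimate for K+ -> pi+ nu nubar.
# Inputs are approximate values, used here only for illustration.
import math

tau_kaon = 1.24e-8          # K+ mean lifetime in seconds (approx.)
branching_ratio = 8.4e-11   # approx. SM prediction for K+ -> pi+ nu nubar

# If this were the only way the K+ decayed, its mean lifetime would be the
# partial lifetime tau / BR, and its half-life would be ln(2) times that.
partial_lifetime = tau_kaon / branching_ratio
half_life = math.log(2) * partial_lifetime

print(f"partial mean lifetime ~ {partial_lifetime:.0f} s (~{partial_lifetime / 60:.1f} min)")
print(f"half-life             ~ {half_life:.0f} s (~{half_life / 60:.1f} min)")
```

A couple of minutes is squarely in the range of ordinary radioactive half-lives, which is the point being made.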
 
  • #3
I chalk this up as yet another high-precision vindication of the unmodified Standard Model of particle physics.

The SM predicted, first of all, that this would happen and, second of all, how often it would happen. Despite the fact that this probability is tiny, the prediction was vindicated: the decay was definitely observed, and its measured frequency matches the prediction to the limits of experimental precision. The experimental measurement of this frequency has a relative uncertainty of about ±25%, but that still means it lands spot on the predicted ##10^{-11}##-order-of-magnitude branching fraction, when the frequencies of the various experimentally detectable decays span a full 11+ orders of magnitude.

The basic outlines of the calculation of how frequently this should happen were in place by the late 1970s or early 1980s. The values of the physical constants that go into the calculation have been refined a bit since then, and we've gotten significantly better at doing the calculations efficiently and to high precision. But, basically, a forty- to fifty-year-old prediction that was not used to devise the SM in the first place, one of hundreds of possible calculations of rare decays like this one, has been confirmed.

Solid confirmations of the SM's predicted likelihoods of very rare decays don't happen every day, but they do happen at a steady, relentless pace of perhaps several to a dozen times a year, year in and year out.

On the one hand, this makes this particular result not very newsworthy or remarkable. It is, in some sense, a "dog bites man" story. I didn't bookmark it in my daily scan of new experimental physics results because it doesn't disturb the SM paradigm or show us something qualitatively new. It would have been front-page news in major scientific journals like Nature if the researchers hadn't gotten the result that they did.

But if we step back and see it in the context of all of the other results of the same kind over the decades, we can sleep very comfortably at night knowing that the SM is doing a bang-up job of explaining everything happening at the subatomic-particle level at energies up to and including the highest energies of the Large Hadron Collider.
 
  • #4
Well, yes and no.

A 5σ measurement is no better than 20%, and that means one is sensitive to interactions only slightly smaller than the SM prediction. There is a huge family of models where the effect is much, much smaller. Like millions of times smaller.

There are models that can drive this number much higher, but we already excluded them before this result because we didn't see them with much less data.

It's not clear how many follow-ons make sense. Do we want to measure this to 10%? Probably. 1%? I don't see a good reason to other than "because we can". 0.1%? 0.01%?
 
  • #5
Vanadium 50 said:
A 5σ measurement is no better than 20%, and that means one is sensitive to interactions only slightly smaller than the SM prediction. There is a huge family of models where the effect is much, much smaller. Like millions of times smaller.

There are models that can drive this number much higher, but we already excluded them before this result because we didn't see them with much less data.
True.

But the thing is, it isn't just this result standing alone that rules out alternatives to the SM. It is the collection of hundreds of similar five-sigma results confirming the SM that, taken together, makes it hard to come up with plausible alternatives.

This is why I always viewed the lepton-universality-violation anomaly experiments with skepticism (skepticism that was ultimately vindicated by refined analyses of the data and by more data from the experiments where the anomalies were initially seen). There were many ways to test the same thing, and all of the other tests confirmed to high precision that there was no lepton universality violation. Any LUV would have the same source in any SM-like model, since it is mediated through W and Z boson decays. So finding a model that fit the one anomalous kind of experimental result, while also fitting the many SM-confirming results, was always a stretch that required real contortions and Byzantine mechanisms.
Vanadium 50 said:
It's not clear how many follow-ons make sense. Do we want to measure this to 10%? Probably. 1%? I don't see a good reason to other than "because we can". 0.1%? 0.01%?
To oversimplify only a little, presumably the experiment is just running collision after collision and filing away the results as raw data.

Then, a study like this one says, well, we've done 50 billion collisions and we'd expect to see X of this and Y of that and Z of this other thing. Let's mine the data in a clever way to see if there are Z events of this other thing there.

So, the heavy lifting is pretty much exclusively on the analysis side. The experimental data collection happens whether or not you end up analyzing it for everything that you could look for. You are basically doing dozens or hundreds of searches simultaneously.

Also, even on the analysis side, you have to figure out your analysis method and cuts the first couple of times you look for something. Once you have a 5 sigma result like this one, that process is pretty much refined, so you can just feed the raw data from another run or two of the collider into your existing analysis to reduce the statistical error, although it won't help your systematic error estimate at all. I don't know how much of the uncertainty here is statistical and how much is systematic. But given that this is a rare decay that produces a tiny fraction of the total number of events, the raw number of signal events is probably still pretty small, which probably means the statistical uncertainty is significant. So a "just keep dumping more raw data into my existing analysis until the experiment stops making new collisions" approach ought to buy some meaningful improvement in how precisely this decay's frequency is pinned down experimentally.
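A minimal sketch of why "just add more data" only attacks part of the error budget; the statistical/systematic split below is invented for illustration, not NA62's actual numbers:

```python
# Toy error budget: the statistical error shrinks like 1/sqrt(N) as more data
# is added, while the systematic floor stays put. Numbers are invented.
import math

stat_now = 0.20   # hypothetical current relative statistical uncertainty
syst_now = 0.15   # hypothetical relative systematic uncertainty (fixed)

for data_factor in (1, 2, 4, 10, 100):
    stat = stat_now / math.sqrt(data_factor)   # more events -> smaller stat error
    total = math.hypot(stat, syst_now)         # combine in quadrature
    print(f"{data_factor:>4}x data: stat ~ {stat:.1%}, total ~ {total:.1%}")
```

More running time keeps helping until the total uncertainty flattens out at the systematic floor, which is exactly the trade-off being described.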
 
  • #6
Vanadium 50 said:
A 5σ measurement is no better than 20%, and that means one is sensitive to interactions only slightly smaller than the SM prediction. There is a huge family of models where the effect is much, much smaller. Like millions of times smaller.

There are models that can drive this number much higher, but we already excluded them before this result because we didn't see them with much less data.

It's not clear how many follow-ons make sense. Do we want to measure this to 10%? Probably. 1%? I don't see a good reason to other than "because we can". 0.1%? 0.01%?
I don't follow you here but I know nothing about particle physics. How do you go from ##5\sigma## to 20%? The ##5\sigma## probability for a normal distribution would be 0.00006% if it was two-tailed. This must be 20% of something else.
 
  • #7
FactChecker said:
I don't follow you here but I know nothing about particle physics. How do you go from ##5\sigma## to 20%? The ##5\sigma## probability for a normal distribution would be 0.00006% if it was two-tailed. This must be 20% of something else.
The existence of the decay is what has been confirmed at five sigma. The frequency of the decay, separately, was determined with a precision of about ±25%.

To give a stylized example (not the actual numbers, just to illustrate the concept), suppose that 40 events were observed, and that the odds that those 40 events were just a statistical fluke in the background from other decays, rather than the decay being searched for, were 0.00006%. That would be a 5 sigma observation that this decay is really happening.

But suppose that, due to statistical and systematic uncertainties, an observation of 40 events would be consistent with a long-run expected number of events anywhere from 30 to 50 per 100 billion collisions. Then the likelihood of this decay happening could only be pinned down to about ±25%, which isn't all that precise (although, again, the expected decay frequency is a handful of events per 100 billion, so getting in the right ballpark is still a big deal).
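Sticking with that stylized example, here is a small sketch of how the same 40 events yield two different numbers: a discovery significance and a (much looser) precision on the rate. The expected background is hypothetical, not the NA62 value:

```python
# Stylized discovery significance vs. rate precision for ~40 observed events.
# The expected background of 16 events is invented for illustration.
from scipy.stats import norm, poisson

n_observed = 40
background = 16.0   # hypothetical expected background events

# "Is the decay there at all?": chance that background alone fluctuates up
# to 40 or more events, converted to a one-sided Gaussian significance.
p_value = poisson.sf(n_observed - 1, background)
significance = norm.isf(p_value)
print(f"p-value ~ {p_value:.1e}  ->  roughly {significance:.1f} sigma")

# "How often does it happen?": the precision on the signal rate is limited by
# Poisson counting on the observed events (systematics are ignored here).
signal = n_observed - background
relative_precision = n_observed ** 0.5 / signal
print(f"relative precision on the rate ~ {relative_precision:.0%}")
```

The observation can be rock-solid (the background almost never fakes 40 events) while the rate itself is still only known to a quarter or so, which is the distinction being drawn here.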

What Vanadium 50 is saying is that even if the Standard Model prediction is 40 events, there might be a tweak to the Standard Model in which the expected number of decays of this kind is 50 rather than 40. For example, it might be a variation in which some rule makes semi-leptonic decays like this one a little more common than in the vanilla Standard Model and fully hadronic decays a little less common, or perhaps some sort of TeV-scale supersymmetry theory. This experiment wouldn't be precise enough to distinguish between the null hypothesis of the Standard Model and that alternative hypothesis in which more decays are expected.

The more precise your experimental measurement is, the more strongly your experimental results can rule out subtle alternatives to the Standard Model, based on how common a decay that has definitely been discovered turns out to be.

At that point there is a cost-benefit analysis. How much do you want to spend to get a more precise measurement in order to rule out subtle alternatives to the Standard Model?

Maybe you can take a few times more data at a modest additional cost, with no upgrades to your experiment, and shrink the uncertainty on the long-run frequency of this decay from 30-50 events per 100 billion collisions down to 35-45. That would favor the Standard Model over the subtle alternative that predicts 50 events, but not strongly enough to rule that alternative out definitively. But maybe it would take 100 times as much data, and expensive upgrades to your detectors, to pin the frequency down to something consistent with 39-41 events per 100 billion collisions, which would strongly rule out the subtle modification of the Standard Model that predicts 50 decay events per 100 billion collisions.
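The rough arithmetic behind those two scenarios, assuming the error is dominated by counting statistics (real projections would also fold in backgrounds and systematics):

```python
# Counting statistics: the relative statistical error scales like 1/sqrt(N),
# so reaching a target precision needs (current/target)^2 times the data.
current_precision = 0.25   # roughly where the measurement stands now

for target in (0.125, 0.10, 0.025, 0.01):
    data_factor = (current_precision / target) ** 2
    print(f"to reach about +/-{target:.1%}: ~{data_factor:.0f}x the current data set")
```

Halving the error band (30-50 down to 35-45) takes roughly four times the data; squeezing it to 39-41 takes on the order of a hundred times, and that is before the systematic uncertainties, which don't improve with more data, start to dominate.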

The main point Vanadium 50 is making is that while I tend to see this result as a vindication of the Standard Model (because I think about this particular experimental result in the context of the hundreds of experimental results that have confirmed the Standard Model in many different ways, and because I'm more of an optimist on this particular point), he is dropping the footnote, legitimately and correctly, that we can never 100% vindicate the Standard Model or any other high-energy physics theory with experiments. We can only rule out alternatives by doing experiments that are precise enough to distinguish between the Standard Model and subtle alternatives to it.

Figuring out where to draw that line is hard. It is harder still because there aren't just two alternatives. There are infinitely many theoretically possible tweaks to the Standard Model that could be imagined, some of which are only slightly different in expected outcomes from the Standard Model (e.g. a PeV scale supersymmetry theory).

Vanadium 50 then makes a rough guesstimate of where we ought to draw the line between greater precision and greater cost, based on his experience with what it takes to improve the precision of an experimental HEP result and his evaluation of the scientific value of greater precision in this particular measurement. He thinks that the cost of improving the precision on the frequency of this decay from ±25% to ±10% would probably be small and worth it, but that getting it down to ±1% probably isn't worth it.

Part of his analysis in drawing this line is that this particular rare kaon decay doesn't have any great intrinsic importance in and of itself.

We are looking for it and trying to pin it down basically as part of a long-term, ongoing high-energy physics effort to confirm (or identify flaws in) the Standard Model of Particle Physics generally, and not because there is something special or important about this particular decay (except that, at the moment, it happens to sit right on the edge of what we are able to do experimentally).

It isn't, for example, comparable to the measurement of muon g-2, which is a strong global test of essentially all parts of the Standard Model of Particle Physics at once (at least at relatively low energies) and is particularly amenable to ultra-precise measurement.

So, maybe our limited money for high-energy physics experiments would be better spent on something that has more potential to show us something new than on this measurement, which is already decent and isn't particularly better than other experiments at teasing out possible and plausible flaws in the Standard Model.

One of the links in #1 in this thread gives the number of events expected in the Standard Model (about 80 over two years of data taking) and explains what the people doing the experiment see its purpose as being:

In two years of data taking the experiment is expected to detect about 80 decay candidates if the Standard Model prediction for the rate of charged kaon decays is correct. This data will enable the NA62 team to determine the value of a quantity called |Vtd|, which defines the likelihood that top quarks decay to down quarks.

Understanding with precision the relations between quarks is one constructive way to check the consistency of the Standard Model.

So, in addition to the other points discussed, this experiment is a partial measurement of one of the experimentally determined physical constants of the Standard Model of Particle Physics, |Vtd|, which does tilt the balance a little in favor of making more of an effort to measure the frequency of this decay precisely.

|Vtd|, together with the eight other elements of what is called the CKM matrix, is used to pin down the four degrees of freedom that fully describe all nine elements of this 3 x 3 matrix, which encodes the probability that a quark emitting a W boson turns into a particular other kind of quark via the weak force.
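To make the "four degrees of freedom" point concrete, here is a sketch using the leading-order Wolfenstein parametrization of the CKM matrix; the parameter values are approximate global-fit numbers quoted from memory, so treat them as illustrative rather than authoritative:

```python
# Leading-order Wolfenstein parametrization: four real parameters
# (lambda, A, rho, eta) generate all nine complex CKM elements.
# Parameter values are approximate and for illustration only.
import numpy as np

lam, A, rho, eta = 0.225, 0.82, 0.16, 0.35

ckm = np.array([
    [1 - lam**2 / 2,                    lam,            A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2, A * lam**2                   ],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,    1                            ],
])

V_td = abs(ckm[2, 0])   # |V_td|: strength of the top <-> down transition
print(f"|V_td| ~ {V_td:.4f}")   # comes out around 0.0085 with these inputs
```

The K+ → π+νν̄ rate enters through the top-quark loop, which is why pinning its branching fraction down feeds back into |Vtd| and, through it, into the consistency of this matrix.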

The current global average of the experimentally measured values of |Vtd| is 0.0086 ± 0.0002 (a relative uncertainty of between two and three percent). Because this latest rare kaon decay measurement isn't very precise, it probably won't tweak that world average very much yet; the other measurements that feed into it are, at this point, much more precise.
 
  • #8
FactChecker said:
I don't follow you here but I know nothing about particle physics. How do you go from 5 sigma to 20%?
20% is the uncertainty on the magnitude, not the probability that zero will fluctuate to the observed value.
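In other words, a one-line version of the arithmetic: if a measured quantity sits five of its own standard deviations away from zero, ##x = 5\sigma_x##, then its relative uncertainty is ##\sigma_x / x = 1/5 = 20\%##. "Observed at 5σ" and "known to about 20%" are two readings of the same number.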
 
