A novel explanation of CKM and PMNS matrix parameters

  • #1
ohwilleke
TL;DR Summary
A new paper from two MIT physicists purports to explain the parameters of the CKM and PMNS matrices as the values that minimize quantum entanglement, a novel explanation for these constants.
I've never seen anyone reach this remarkable insight before, and it is indeed very tantalizing. This is huge if true and comes from seemingly credible authors. The modest language in which the claim is made is also encouraging.

The authors note in the body text that:

To our knowledge, this is the first time the differing CKM and PMNS structures have arisen from a common mechanism that does not invoke symmetries or symmetry breaking.

Are there any obvious flaws in their analysis? (The abstract and citation are below).

The Cabibbo-Kobayashi-Maskawa (CKM) matrix, which controls flavor mixing between the three generations of quark fermions, is a key input to the Standard Model of particle physics. In this paper, we identify a surprising connection between quantum entanglement and the degree of quark mixing. Focusing on a specific limit of 2→2 quark scattering mediated by electroweak bosons, we find that the quantum entanglement generated by scattering is minimized when the CKM matrix is almost (but not exactly) diagonal, in qualitative agreement with observation.

With the discovery of neutrino masses and mixings, additional angles are needed to parametrize the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix in the lepton sector. Applying the same logic, we find that quantum entanglement is minimized when the PMNS matrix features two large angles and a smaller one, again in qualitative agreement with observation, plus a hint for suppressed CP violation.

We speculate on the (unlikely but tantalizing) possibility that minimization of quantum entanglement might be a fundamental principle that determines particle physics input parameters.
Jesse Thaler, Sokratis Trifinopoulos, "Flavor Patterns of Fundamental Particles from Quantum Entanglement?" arXiv:2410.23343 (October 30, 2024).
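To make the idea concrete, here is a toy sketch (my own illustration, not the paper's actual calculation) of why "entanglement minimization" can favor small mixing. For a two-qubit pure state parametrized by a single mixing angle θ, the von Neumann entropy of either subsystem vanishes at θ = 0 (no mixing, i.e. a "diagonal" mixing matrix) and peaks at θ = π/4 (maximal mixing):

```python
import numpy as np

def entanglement_entropy(theta):
    """Von Neumann entropy (in bits) of one qubit of the pure state
    cos(theta)|00> + sin(theta)|11>, a toy stand-in for a flavor-entangled
    final state of a 2 -> 2 scattering process."""
    # The reduced density matrix of the first qubit is diag(cos^2, sin^2).
    p = np.cos(theta) ** 2
    probs = np.array([p, 1.0 - p])
    probs = probs[probs > 1e-12]  # drop zero eigenvalues to avoid log(0)
    return float(-(probs * np.log2(probs)).sum())

# theta = 0: product state, entropy 0 bits (entanglement minimized).
# theta = pi/4: maximally entangled, entropy 1 bit.
print(entanglement_entropy(0.0), entanglement_entropy(np.pi / 4))
```

In this toy model the minimum sits at exactly zero mixing; in the actual paper, interference among the electroweak exchange channels shifts the minimum slightly away from diagonal, which is why the authors find an almost (but not exactly) diagonal CKM matrix.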

The paper's literature review does note one prior paper making a similar analysis:

Entanglement is a core phenomenon in quantum mechanics, where measurement outcomes are correlated beyond classical expectations. In particle physics, entanglement is so ubiquitous that we often take it for granted, but every neutral pion decay to two photons (π0 → γγ) is effectively a mini Einstein–Podolsky–Rosen experiment. In the context of SM scattering processes, though, the study and quantification of entanglement in its own right has only begun relatively recently [12–26]. In terms of predicting particle properties from entanglement, the first paper we are aware of is Ref. [27] which showed that maximizing helicity entanglement yields a reasonable prediction for the Weinberg angle θW, which controls the mixing between electroweak bosons.
References 12-26 are from 2012 to 2024.

Ref. [27] is A. Cervera-Lierta, J. I. Latorre, J. Rojo and L. Rottoli, Maximal Entanglement in High Energy Physics, SciPost Phys. 3 (2017) 036, [1703.02989].

Footnote 6 of the main paper is also of interest; it addresses the fact that, using only a one-loop calculation, they get a value of 6° for a parameter whose measured value is 13°.

We happened to notice that in the limit where we neglect photon exchange, the exact value θ_C^min = 13° is recovered. However, we do not have a good reason on quantum field theoretic grounds to neglect the photon contribution. Because of the shallow entanglement minimum in Fig. 2a, a 10% increase in the charged-current process over the neutral-current one would be enough to accomplish this shift, which is roughly of the expected size for higher-order corrections.

A somewhat similar prior analysis that is not cited is Alexandre Alves, Alex G. Dias, Roberto da Silva, "Maximum Entropy Principle and the Higgs boson mass" (2015) (cited 42 times), whose abstract states:

A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information theory inference approach we determine the Higgs boson mass as MH = 125.04 ± 0.25 GeV, a value fully compatible to the LHC measurement.
This is straightforwardly obtained by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Yet, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel within a Higgs portal model.

PF Meta Considerations

I weighed whether it made more sense to make this post in BSM v. HEP forums here, and concluded that BSM was more appropriate because the minimization of quantum entanglement principle is not part of the Standard Model. In the Standard Model the four parameters of the CKM matrix and the four parameters of the PMNS matrix are simply experimentally determined physical constants whose particular values have no explanation. So, this paper goes beyond what the Standard Model sets forth.

I would also argue that none of the three papers linked in this post are mere "numerology" papers, since each suggests a plausible physical mechanism or theoretical principle by which the values of the SM physical constants in question can be determined.
 
  • #2
How does this look from the perspective of old work by @CarlB and @Kea, involving MUBs (mutually unbiased bases)?
 
  • #3
It took me a while to understand how and why "entanglement minimization" could determine anything. The best idea, I believe, is not that this is as fundamental as "extremization of the action", but rather that it is a side effect of the genuinely fundamental dynamics.

The first paper to talk about entanglement minimization in particle physics might be

"Entanglement Suppression and Emergent Symmetries of Strong Interactions" (Beane et al, 2019)

Near the end of their paper they write

"The Pauli exclusion principle’s requirement of antisymmetrization produces a natural tendency for highly entangled states of identical particles... It is somewhat perplexing how to understand the result that the S-matrix for baryon-baryon scattering exhibits screening of entanglement power when the quarks and gluons that form the nucleon are highly entangled. It may be the case that the nonperturbative mechanisms of confinement and chiral symmetry breaking together strongly screen entanglement fluctuations in the low energy sector of QCD..."

A follow-up paper

"Entanglement minimization in hadronic scattering with pions" (Beane et al, 2021)

also says:

"Techniques which make use of entanglement minimization to select out physically relevant states and operators from an exponentially large space have a long history. For instance, tensor methods and DMRG crucially rely on the fact that ground states of reasonable Hamiltonians often exhibit much less entanglement than a typical state."

Whether it's suppression of entangling fluctuations, the result of arriving at a ground state, or some other mechanism, the point is that dynamics can have the consequence of minimizing entanglement. In this case, the implication is that the unknown dynamics responsible for determining the Yukawa couplings of the Standard Model also has this property.

In the previous comment, I mentioned Brannen and Sheppeard's use of mutually unbiased bases (MUBs). The defining property of MUBs (which are vector bases for a Hilbert space) is that a basis vector in one such basis is an equal-probability superposition of all the basis vectors in another basis. Carl Brannen has used MUBs to study the masses of three-particle bound states, in particular as a way to generate Koide mass triplets (thanks to the appearance of circulant matrices in MUB theory). For the leptons, I gather this implies a three-preon model (which was a feature of Koide's original work too).
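The defining property of MUBs is easy to check numerically. A standard example (my own illustration, assuming only textbook linear algebra): in dimension d = 3, the computational basis and the discrete-Fourier basis are mutually unbiased, i.e. every overlap satisfies |⟨e_j|f_k⟩|² = 1/d:

```python
import numpy as np

d = 3  # dimension (e.g. three flavors or three colors in the MUB picture)

# Computational basis: columns of the identity matrix.
B0 = np.eye(d, dtype=complex)

# Fourier basis: columns of the normalized DFT matrix, F[j, k] = w^(j*k) / sqrt(d).
w = np.exp(2j * np.pi / d)
B1 = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)

# Mutual unbiasedness: every squared overlap between the two bases equals 1/d,
# so each Fourier basis vector is an equal-probability superposition of the
# computational basis vectors (and vice versa).
overlaps = np.abs(B0.conj().T @ B1) ** 2
print(np.allclose(overlaps, 1.0 / d))  # True
```

The circulant matrices that Brannen exploits for Koide triplets arise naturally here, since circulants are exactly the matrices diagonalized by this Fourier basis.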

Meanwhile, Carl has also used the MUB formalism to model three-color ground states, specifically baryon masses. Over the years, there have been many speculative classifications of the observed hadrons into new multiplets, and in fact the first paper by Beane et al is one of these, or rather, it's trying to explain a postulated SU(16) symmetry of three-flavor baryons. I think it would be of interest to compare the two arguments, and also to see if @arivero's Koide waterfall is "entanglement minimizing" in any way.
 
