What is new with Koide sum rules?

  • #246
Piotr Zenczykowski has appeared in this thread before (#74, #93). Today he points out that the modified gravitational law of "MOND" can be expressed in terms of the square root of mass, something which also turns up in Koide's formula.
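For context, the square-root behaviour comes from the usual deep-MOND limit of the acceleration law, which can be written as
[tex]
a \simeq \sqrt{a_0\, g_N} = \frac{\sqrt{G M a_0}}{r},
[/tex]
so in that regime the acceleration sourced by a mass M scales as the square root of M.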
 
  • Like
Likes ohwilleke
  • #247
The notion of expressing it in terms of the square root of mass has merit. I must say, however, that I find the inclination of the linked paper to frame the discussion in terms of classical Greek natural philosophers a real blow to the credibility of the overall presentation.
 
  • #248
This YouTube video does not mention Koide, just cubic equations... using the cosine form. Not sure if it is a known trick in algebra.

You can check this in Wolfram Alpha:
https://www.wolframalpha.com/input/?i=(x+-+h+-+2+r+cos+t)(x+-+h+-+2+r+cos+(t+++2+pi/3))(x+-+h+-+2+r+cos+(t+++4+pi/3))
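A quick numeric check of the same identity (a Python/numpy sketch; the variable names just mirror the Wolfram Alpha input), showing that the angle only enters the expanded cubic through cos(3t):

[code]
# Check that
#   (x - h - 2r cos t)(x - h - 2r cos(t + 2pi/3))(x - h - 2r cos(t + 4pi/3))
# equals (x - h)^3 - 3 r^2 (x - h) - 2 r^3 cos(3t) at random points.
import numpy as np

def product_form(x, h, r, t):
    return np.prod([x - h - 2*r*np.cos(t + 2*np.pi*k/3) for k in range(3)])

def depressed_form(x, h, r, t):
    y = x - h
    return y**3 - 3*r**2*y - 2*r**3*np.cos(3*t)

rng = np.random.default_rng(1)
for _ in range(5):
    x, h, r, t = rng.uniform(-2, 2, size=4)
    assert abs(product_form(x, h, r, t) - depressed_form(x, h, r, t)) < 1e-9
print("identity verified at random points")
[/code]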

Check the last minutes for more thoughts on the relation between the cubic equation, conformal maps and the Martin (Morgan? Moivre?) theorem.

The idea of using the roots of a cubic equation was mentioned before,

https://www.physicsforums.com/threads/what-is-new-with-koide-sum-rules.551549/post-5955905

and indeed the equation

[tex]
(\nu - x) (\nu^2 + 4 \nu x - 2 x^2) - \nu^3 \sqrt{2} \cos(3 \delta) = 0
[/tex]

meets, when expanded, the requirement b^2=6ac of the previous post; but note that the constant term

[tex]
\nu^3 (1 - \sqrt{2} \cos(3 \delta) )
[/tex]

is not completely free; of course it is subject to the requirement of producing three real solutions. It is interesting here to note that the tuples for 15 and 45 degrees cancel the sqrt(2), since sqrt(2) cos 45° = 1 and sqrt(2) cos 135° = -1.

Other forms:

[tex]
2 \left({x \over \nu} -1\right) \left({x \over \nu} - (1+\sqrt{3\over 2} )\right) \left( {x \over \nu} - (1-\sqrt{3\over 2})\right) = \sqrt{2} \cos(3 \delta)
[/tex]

[tex]
\left({x \over \nu} -1\right) \left({x \over \nu} - (1+\sqrt{3\over 2} )\right) \left( {x \over \nu} - (1-\sqrt{3\over 2})\right) = \cos(\pi/4) \cos(3 \delta)
[/tex]

[tex]
\left({x \over \nu} -1\right) \left({x \over \nu} - (1+\sqrt{3\over 2} )\right) \left( {x \over \nu} - (1-\sqrt{3\over 2})\right) = \frac 12 (\cos(3 \delta + \frac{\pi}{4} ) + \cos(3 \delta - \frac{\pi}{4} ))
[/tex]

EDIT: of course, a route via characteristic polynomials of matrices makes it very easy to find formulae for the mass alone, without square roots, or for the mass squared. Not sure how the result would differ from Goffinet's https://www.physicsforums.com/threads/what-is-new-with-koide-sum-rules.551549/post-4269684 or from the original derivation of the formula.

EDIT2:
for a general cubic equation,
[tex]a x^3 + b x^2 + c x + d = 0[/tex]
if the three roots are real, then
[tex] { x_1^2 + x_2^2 + x_3^2 \over (x_1+x_2+x_3)^2} = 1 - {2 a c \over b^2}[/tex]
so the Koide value 2/3 corresponds precisely to b^2 = 6ac.
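A quick numerical sanity check of this identity (numpy sketch; the coefficients below are an arbitrary choice satisfying b^2 = 6ac, so the ratio should come out as 2/3):

[code]
# For a cubic a x^3 + b x^2 + c x + d with three real roots,
# sum(x_i^2) / (sum x_i)^2 = 1 - 2ac/b^2.  With b^2 = 6ac this is 2/3.
import numpy as np

a, b, c, d = 2.0, -6.0, 3.0, 0.4           # b^2 = 6ac for this choice
roots = np.roots([a, b, c, d])
assert np.all(np.abs(roots.imag) < 1e-9)   # all three roots are real here
x = roots.real
print((x**2).sum() / x.sum()**2, 1 - 2*a*c/b**2)   # both ~ 0.6667
[/code]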

EDIT3:

I am pondering if the expanded equation

[tex]
2 x^3 - 6 \nu x^2 + 3 \nu^2 x + \nu^3 ( 1 - \sqrt{2} \cos(3 \delta) ) = 0
[/tex]

could be seen as the condition for the extrema (both maxima and minima) of
[tex]
\frac {1}{2} x^4 - 2 \nu x^3 + \frac{3}{2} \nu^2 x^2 + \nu^3 ( 1 - \sqrt{2} \cos(3 \delta) ) x
[/tex]
[/tex]

Very far fetched, but the coefficients are simple.
 
Last edited:
  • #249
My intuition (guess) on the square root in the Koide equation is that it is in the nature of waves that their energies (and therefore their masses) are proportional to the squares of their amplitudes. Here "amplitude" is something about a wave that is convenient for mathematical physicists in that it is linear. Amplitudes are squared to get probabilities and it is probabilities that are proportional to energies and masses. We would do all our calculations in the probability / energy / mass units instead of amplitudes except that we would lose the convenient linearity. So it's natural to use square roots of mass when we're looking for linear equations relating mass / probability wave functions.

Now I also prefer density matrices to state vectors and this is the same relationship. In my view state vectors are not a part of reality, it is the density matrices that are fundamental. The state vectors are just a convenient way of making things linear so that we can use linear algebra to do calculations. But if I want a linear relationship between stuff represented by density matrices it is again natural to think about square roots.

And my paper on the subject of density matrix symmetry (which is a generalization of state vector symmetry) and the Standard Model is still "reviewers assigned" at Foundations of Physics now since May 2021.

***** Now for some fairly incoherent speculations *******

For the square root argument about MOND, I speculated back in 2003 that MOND might be related to the quantum Zeno effect. That effect is basically about the inability of a state vector to give exponential decay in the weak limit. The reason has to do with the square root relationship between the probability and the amplitude. Here's a link:
http://brannenworks.com/PenGrav.html

That references a 2003 write-up of mine titled "Ether, Relativity, Gauges and Quantum Mechanics", which is rather out of date. What I still agree with in it is that position is discrete, that is, spatially the universe is a cubic lattice, and time is also discrete. Maybe this means that velocity is quantized; that is, there is a minimum velocity. It's some tiny fraction of the speed of light.
 
  • Like
Likes ohwilleke
  • #250
I wonder, given the degree-3 equation, whether it could be useful to work out some mass matrices. For instance the matrix

[tex]M^\frac{1}{2}=
\nu \begin{pmatrix}
1 & 1 & 0 \\
0 & 1 +\sqrt \frac 32 & {\sqrt 2 \over 2} \cos (3\delta) \\
1 & 0 & 1 - \sqrt \frac 32
\end{pmatrix}[/tex]

should have eigenvalues meeting the Koide formula. With a symmetric matrix we could square it to produce other equivalent formulae. I am wondering whether one needs to ask for normality of [itex]M^\frac{1}{2}[/itex], symmetry, or some other property, in order to get a valid mass matrix.

At the end of the day, the Koide formula is about the quadratic invariant of a matrix, or rather the quotient between the quadratic and the linear invariants.
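A quick numerical test of the eigenvalue claim (numpy sketch; the values of ν and δ below are arbitrary choices, not physical inputs):

[code]
# The matrix above (with arbitrary nu, delta) should have real eigenvalues
# lambda_i = sqrt(m_i) whose masses m_i = lambda_i^2 satisfy Koide's 2/3 ratio.
import numpy as np

nu, delta = 1.0, 0.222
s = np.sqrt(1.5)
c = (np.sqrt(2) / 2) * np.cos(3 * delta)
M_half = nu * np.array([[1.0, 1.0, 0.0],
                        [0.0, 1.0 + s, c],
                        [1.0, 0.0, 1.0 - s]])
lam = np.linalg.eigvals(M_half)
assert np.all(np.abs(lam.imag) < 1e-9)       # eigenvalues come out real
sqrt_m = lam.real
print((sqrt_m**2).sum() / sqrt_m.sum()**2)   # ~ 0.6666...
[/code]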
 
Last edited:
  • Like
Likes ohwilleke
  • #251
Has someone reviewed this one?

https://arxiv.org/abs/2108.05787 Majorana Neutrinos, Exceptional Jordan Algebra, and Mass Ratios for Charged Fermions by Vivan Bhatt, Rajrupa Mondal, Vatsalya Vaibhav, Tejinder P. Singh

It is obscure, not an easy read. But it uses, or finds, the polynomial-equation form of the Koide formula. According to a blog entry, they will at some point ship a v2 with enhanced readability. Meanwhile T.P. Singh seems to update versions of a similar paper faster here, and has a previous blog post here about it.
 
Last edited:
  • Like
Likes CarlB and ohwilleke
  • #252
They don't aim to produce the Koide formula specifically. They just try to match mass ratios using combinations of eigenvalues of various matrices, and then use the Koide formula as an extra check.

The formulas for the electron, muon, tau ratios are equations 56, 57 (an alternative that doesn't work as well is in equations 62, 63). The numbers they combine to create these formulas are found in figure 1, page 21. They have no explanation for the particular combinations they use (page 23: "a deeper understanding... remains to be found"; page 25: "further work is in progress").
 
  • Like
Likes arivero and ohwilleke
  • #254
One of the issues that has been raised about Koide's rule is that masses run with energy scale, unless you interpret it as applying only to pole masses.

It has also been noted that Koide's rule (and extensions of it) are really just functions of the mass ratios of particles and not their absolute masses.

With that in mind, some observations about the running of masses and mass ratios at high energies in the Standard Model in a recent preprint bear mentioning:
The CKM elements run due to the fact that the Yukawa couplings run. Furthermore, the running of the CKM matrix is related to the fact that the running of the Yukawa couplings is not universal. If all the Yukawa couplings ran in the same way, the matrices that diagonalize them would not run. Thus, it is the nonuniversality of the Yukawa coupling running that results in CKM running.
Since only the Yukawa coupling of the top quark is large, that is, O(1), to a good approximation we can neglect all the other Yukawa couplings. There are three consequences of this approximation:
1. The CKM matrix elements do not run below m(t).
2. The quark mass ratios are constant except for those that involve m(t).
3. The only Wolfenstein parameter that runs is A.
The first two results above are easy to understand, while the third one requires some explanation. A is the parameter that appears in the mixing of the third generation with the first two generations, and thus is sensitive to the running of the top Yukawa coupling. λ mainly encodes 1–2 mixing — that is, between the first and second generations — and is therefore insensitive to the top quark. The last two parameters, η and ρ, separate the 1–3 and 2–3 mixing. Thus they are effectively just a 1–2 mixing on top of the 2–3 mixing that is generated by A. We see that, to a good approximation, it is only A that connects the third generation to the first and second, and thus it is the only one that runs.
The preprint is Yuval Grossman, Ameen Ismail, Joshua T. Ruderman, Tien-Hsueh Tsai, "CKM substructure from the weak to the Planck scale" arXiv:2201.10561 (January 25, 2022).

The preprint also identifies 19 notable relationships between the elements of the CKM matrix at particular energy scales with one in particular that is singled out.

[screenshot from the preprint]


At low energies, A², which is a factor in the probabilities associated with the CKM matrix entries in which it appears, is consistent with being exactly 2/3rds.

The parameter "A" grows by 13% from the weak energy scale to the Planck energy scale, which means that A2 is about 0.846 at the Planck energy scale (about 11/13ths).

FWIW, I'm not convinced that it is appropriate to just ignore the running of the other Wolfenstein parameters, however, since if A increases, then one or more of the other parameters need to compensate downward, at least a bit, to preserve the unitarity of the probabilities implied by the CKM matrix which is one of its theoretically important attributes.

For convenient reference, the Wolfenstein parameterization is as follows:

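In the standard convention, to order λ⁴, it reads:

[tex]
V_{CKM} \simeq \begin{pmatrix}
1-\frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta) \\
-\lambda & 1-\frac{\lambda^2}{2} & A\lambda^2 \\
A\lambda^3(1-\rho - i\eta) & -A\lambda^2 & 1
\end{pmatrix} + \mathcal{O}(\lambda^4)
[/tex]

with the four parameters λ, A, ρ and η.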
 
Last edited:
  • #255
If we take Koide's observation that his equation is perfect only in the low-energy limit, it calls into question the usual scheme of trying to understand the Standard Model as the result of symmetry breaking from something that is simplest in the high-temperature limit.

Contrary to that symmetry-breaking assumption, the history of 100 years of work on the Standard Model is that as energies increase, our model becomes more complicated, not simpler. Maybe Alexander Unzicker is right and we're actually abusing symmetry instead of using it. As you generalize from symmetries to broken symmetries you increase the number of parameters in what is essentially a curve-fitting exercise. People complain about the 10^500 models in string theory, but the number of possible symmetries is also huge, given our ability to choose among an infinite number of symmetries each with an infinite number of representations.

My paper, which shows that the mixed density matrices have more general symmetries, seems to have finally reached the "under review" status at Foundations of Physics. It had been "reviewers assigned" since I sent it in last May, apart from a few days, but it went to "under review" on January 17 and is still there. I suppose they've got reviews back and are arguing about it. That paper's solution to the symmetry problem is to use mixed density matrices, which can cover situations where the symmetry depends on temperature, which is just what is needed for the Standard Model. But density matrices are incompatible with a quantum vacuum; instead of creation and annihilation operators you'd have to use "interaction operators" where, for example, an up-quark is created, a down quark annihilated and simultaneously a W- is created. But you couldn't split these up into the individual operators for the same reason you don't split density matrices into two state vectors (from the density matrix point of view).
 
  • Like
Likes ohwilleke and arivero
  • #256
Indeed SO(32) is huge if interpreted in the usual way, but I was surprised that the idea of the sBootstrap very naturally implies SO(15)xSO(15).

As for the low-energy limit... I wonder if it relates to the QCD mass gap. We have the numerical coincidence of 313 MeV.

(EDIT: https://arxiv.org/abs/2201.09747 calculates 312 (27) MeV, but it compares with the lattice estimate 353.6 (1.1) and with longitudinal Schwinger, which gives 320 (35). And then we have the issue of the renormalization scheme.)
 
Last edited:
  • Like
Likes ohwilleke
  • #257
CarlB said:
density matrices are incompatible with a quantum vacuum; instead of creation and annihilation operators you'd have to use "interaction operators" where, for example, an up-quark is created, a down quark annihilated and simultaneously a W- is created. But you couldn't split these up into the individual operators for the same reason you don't split density matrices into two state vectors (from the density matrix point of view).
Is this formalism of nonfactorizable thermal interaction terms already described somewhere? Or do we have to wait for the paper?
 
  • Like
Likes CarlB
  • #258
Zenczykowski has written a follow-up to the MOND/Koide paper mentioned in #246, "Modified Newtonian dynamics and excited hadrons". This one does not mention Koide relations at all, so it's getting a little off-topic, but so we can understand his paradigm, I'll summarize it.

In hadronic physics, there is a "missing resonance" problem. The ground states of multiquark combinations are there as predicted by QCD, but there are fewer excited states than e.g. a three-quark model of baryons would predict. The conventional explanation of this seems to be diquarks: quarks correlate in pairs and so in effect there are only two degrees of freedom (quark and diquark) rather than three, and therefore there are fewer possible states. (As another explanation, I will also mention Tamar Friedmann's papers, "No radial excitations in low energy QCD", I and II, which propose that radial excitations of hadrons don't exist, and that they shrink rather than expand when you add energy.)

Zenczykowski's idea is that space is partly emergent on hadronic scales, e.g. that there are only two spatial dimensions there, and that this is the reason for the fewer degrees of freedom. This is reminiscent of the "infinite momentum frame", various pancake models of a relativistically flattened proton, and even the idea of a space-time uncertainty relation (in addition to the usual position/momentum uncertainty) that often shows up in quantum gravity.

The weird thing he does in the current paper is to draw a line on a log-log graph of mass vs radius, indicating where the Newtonian regime ends in MOND, and then he extrapolates all the way down to subatomic scales and argues that hadrons lie on this line! So that's how the "square root of mass" would enter into both the MOND acceleration law and the mass formulae of elementary particles.
 
  • Wow
  • Like
Likes ohwilleke and CarlB
  • #259
Michael: What I've been concentrating on is QFT in 0 dimensions, that is, on things that happen at a single point in space or, equivalently, things that happen without any spatial dependence. Without spatial dependence you cannot define momentum and without time dependence you cannot define energy, but I see those as complications that can be included later.

By Fermi statistics, only one or zero of any fermion can exist there, subject to spin. So you could have a spin-up electron and a spin-down electron and a positron of any spin, and that would be 3 particles -- the rest of the leptons and quarks would be absent. In addition to mixing (by superposition) spin-up with spin-down you can also mix colors and generations. But you can't mix particles with different electric charge.

This fits with the density matrix calculations in my papers; superpositions between different electric charges are not just forbidden by a "superselection sector" principle but they are in addition not even elements of the algebra. That is, the particles are the result of superpositions over symmetry. There are just enough degrees of freedom for the observed particles and their superselection sectors; there aren't any degrees of freedom left over to do something weird with it like a superposition of a neutrino and quark.

If you model the particles with state vectors, under this assumption you've got a single state vector with sectors that don't mix. That implies that the raising and lowering operators in a single superselection sector are just square matrices with (at least) an off diagonal 1 (or matrices equivalent to this on transformation so you can have a matrix that converts spin+x to spin-x). I think that handles the "interaction operators" that stay within a superselection sector such as a photon or gluon, but it would be outside of the algebra when considering something that changes the superselection sector such as the weak force. An example of a square matrix that defines an interaction is the gamma^\mu that is used for a photon in QED; but that stays inside a superselection sector so it's easy. Also it comes with a coupling constant that isn't obvious how to calculate.

To get an interaction that changes superselection sector (like the weak force) implies a mathematical object that is outside of the symmetry algebra. For what I'm working on, the symmetry algebra is the octahedral group (with 48 elements). That is a point symmetry and I'm thinking it implies that space is on a cubic lattice. And the group has a mysterious tripling to give 144 elements so that the generation structure appears. Since the weak force changes superselection sector it has to be outside the algebra and cannot obey the symmetry, and so the weak force mixes generations.

Anyway, I haven't figured it out. Every now and then I get an idea and a week of my life disappears in attempts to make calculations but that's slightly better than not having any idea what to calculate.
 
  • Like
Likes ohwilleke
  • #260
Mitchell, that paper Modified Newtonian Dynamics and Excited Hadrons was quite a read; a paper I will read again as I'm sure it has insights I've overlooked. Zenczykowski has quite a lot of fascinating papers.

I like the concept of "linearization". The way I interpret this, Nature is naturally squared as, for example, energy is the square of amplitude. But Man prefers things that can be conveniently computed by linear methods so he linearizes things and this is essentially taking the square root. Thus Nature's density matrices are converted into state vectors.

What I don't quite understand is where x^2 + p^2 comes from. I'm guessing this means the particle is being represented, one way or another, as a harmonic oscillator. Any ideas?

The Zenczykowski paper may not mention the Koide formula but dang it sure seems to come near to it. The idea of there being two sets of Pauli spin matrices (for momentum and position) seems to imply 2x3 = 6 degrees of freedom that are being rearranged so that generations come in pairs of triplets. As in charged leptons + neutrinos, or up-quarks and down-quarks.

Some years ago I was interested in Koide triplets among hadrons. For this I was looking at states with the same quantum numbers and I found quite a number of pairs of triplets. I wrote it up here and never published it. The fits begin on page 27:
http://www.brannenworks.com/koidehadrons.pdf

You can ignore the derivation, but it involves the discrete Fourier transform. My most recent paper uses the non-commutative generalization of the discrete Fourier transform to classify the fermions, so these are related ideas. What's missing from these papers is an explanation of how a discrete lattice gives apparent Lorentz symmetry, which is nicely explained in Bialynicki-Birula's paper: "Dirac and Weyl Equations on a Lattice as Quantum Cellular Automata" https://arxiv.org/abs/hep-th/9304070
 
  • #261
"Zenczykowski's idea is that space is partly emergent on hadronic scales, e.g. that there are only two spatial dimensions there, and that this is the reason for the fewer degrees of freedom."

This is definitely a thing. Alexandre Deur, when working on a MOND-like theory that is heuristically motivated by analogizing a graviton-based quantum gravity theory to QCD (in the gravity-as-QCD-squared paradigm), even though one can get to the same results with a classical GR analysis in a far less intuitive way, talks about dimensional reduction (sometimes from 3D to 2D, and sometimes to 1D flux tubes) in certain quark-gluon systems as a central conclusion of mainstream QCD.

Put another way, emergent dimensional reduction, when the sources of a force carried by a gauge boson have a particular geometry, is a generic property shared by non-Abelian gauge theories in which there is a carrier boson that interacts with other carrier bosons of the same force, although the strength of the interaction, and the mass, if any, of the carrier boson, will determine the scale at which this dimensional reduction arises. This emergent property arises from the self-interactions of the gauge bosons that carry the force in question. The scale is the scale at which the strength of the self-interaction and the strength of the first-order term in the force (basically a Coulomb force term) are close in magnitude.

Generically, this self-interaction effect does not arise (due to symmetry-based cancellations) and the dimensional reduction does not occur if the sources of the force are spherically symmetric in geometry. To the extent that the geometry of the sources of the force approximates a thin disk, there is an effective dimensional reduction from 3D to 2D at the relevant scale. To the extent that the geometry of the sources of the force is well approximated as two point sources isolated from other sources of that force, you get an effective dimensional reduction from 3D to 1D, with a one-dimensional flux tube for which the effective strength of the force between them is basically not dependent upon distance (in the massless carrier boson case).
 
Last edited:
  • Like
Likes CarlB
  • #262
mitchell porter said:
Piotr Zenczykowski has appeared in this thread before (#74, #93). Today he points out that the modified gravitational law of "MOND" can be expressed in terms of the square root of mass, something which also turns up in Koide's formula.
yes, very interesting indeed. Square root of mass plays a decisive role in understanding mass ratios and the Koide formula https://arxiv.org/abs/2209.03205v1 and in fact this is the very reason square root of mass appears in MOND.
 
  • Like
Likes arivero and ohwilleke
  • #263
Tejinder Singh said:
yes, very interesting indeed. Square root of mass plays a decisive role in understanding mass ratios and the Koide formula https://arxiv.org/abs/2209.03205v1 and in fact this is the very reason square root of mass appears in MOND.
But, of course, the square of rest mass also matters.

The sum of the squares of the Standard Model fundamental particle rest masses is consistent with the square of the Higgs vev at the two sigma level, which probably isn't a coincidence and is instead probably a missing piece of electroweak theory (and also makes the Higgs rest mass fall into place very naturally).

In other words, the sum of the respective Yukawa or Yukawa-equivalent parameters in the SM, which quantify the Higgs mechanism's proportionate coupling to each kind of fundamental particle that gives rise to its rest mass in the SM, is equal to exactly 1.
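A rough back-of-the-envelope check of that statement (Python sketch; the masses below are approximate PDG central values in GeV, so the numbers are only illustrative):

[code]
# Sum of squared SM fundamental-particle masses vs. the squared Higgs vev.
# Approximate masses in GeV; the light fermions contribute negligibly.
masses = {
    "top": 172.7, "higgs": 125.25, "Z": 91.19, "W": 80.38,
    "bottom": 4.18, "tau": 1.777, "charm": 1.27,
}
vev = 246.22
sum_sq = sum(m**2 for m in masses.values())
print(sum_sq / vev**2)   # ~ 0.995; closeness to 1 hinges mostly on the top mass
[/code]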

If electroweak unification and the Higgs mechanism were invented today, I'm sure that the people devising it would have included this rule in the overall unification somehow.

If this is a true rule of physics and not just a coincidental relationship (and it certainly feels like a true rule of physics in its form), this is also excellent evidence that the three generations of SM fermions and the four massive SM fundamental bosons (the Higgs, W+, W-, and Z) are a complete set of fundamental particles with rest mass in the universe (although it would accommodate, for example, a massless graviton or a new massless carrier boson of some unknown fifth force), especially when combined with the completeness of the set of SM fundamental particles that follows from observed W boson, Z boson and Higgs boson decays. The W and Z boson data are strongly consistent with the SM particle set being complete up to 45 GeV, and the Higgs boson decays would be vastly different if there were a missing Higgs field rest mass sourced particle with a mass of 45 GeV to 62 GeV. The sum-of-Yukawas-equal-to-one rule and current experimental uncertainties in SM fundamental particle masses leave room for missing Higgs field rest mass sourced SM particles with masses of no more than about 3 GeV at most. This low mass range is firmly ruled out by W and Z boson decays.

These observations are part of why I am strongly with Sabine Hossenfelder in having a Bayesian prior that there is a very great likelihood that there are no new fundamental particles except the graviton, and perhaps something like a fundamental string that could give rise to other particles, of which the SM set plus the graviton is the complete set.

Skeptics, of course, can note that the contributions of the top quark on the fundamental SM fermion side, and the Higgs boson and weak force gauge bosons on the fundamental SM boson side are dominant so that the contributions of the three light quarks, muons, electrons, and neutrinos, as well as the massless photons and gluons, are so negligible as to be completely lost in the uncertainties of the top quark and heavy boson masses, and thus just provide speculative theoretical window dressing until our fundamental particle mass measurements are vastly more precise.

But the big-picture view, as a method to the madness of the Higgs Yukawa values, does reduce the number of SM degrees of freedom by one if true, and is suggestive of a deeper understanding of electroweak unification and the Higgs mechanism that is deeply tied to the same quantities upon which Koide's rule and its extensions act.

The Higgs vev in turn, is commonly expressed as a function of the weak force coupling constant and the W boson mass, suggesting a central weak force connection to mass scale of the fundamental particles, although not necessarily explaining their relative masses (although electroweak unification explains the relative masses of the W and Z bosons to each other).

And, of course, it is notable that the only fundamental SM particles without rest mass (i.e. photons and gluons) are those that don't have a weak force charge, again pointing to the deep connections between the weak force and fundamental particle masses in the SM.

These points are also a hint that the source of neutrino mass may be more like the source of the mass of the other particles than we give it credit for being.

The lack of rest mass of the gluons also presents one heuristic solution to the so called "strong CP problem." The strong force, the EM force, and gravity don't exhibit CP violation because gluons, photons and hypothetical gravitons must all be massless and massless carrier bosons of a force don't experience time in their own reference frame. And, since CP violation is equivalent to T (i.e. time) symmetry violation, forces transmitted by massless carrier bosons shouldn't and don't have CP violation. In contrast, the weak force, which has a massive carrier boson (the W+ and W-) is the only force in which there is CP violation and hence T symmetry violation, since massive carrier bosons can experience time. (Incidentally, this also suggests that if there were a self-interacting dark matter particle with a massive carrier bosons transmitting a Yukawa DM self-interaction force that it would probably show CP violation, not that I think SIDM theories are correct.)

Alexandre Deur's work demonstrates one approach, from first principles in GR, to how the square root of mass can work its way into the phenomenological toy model of MOND. This tends to suggest that there is no really deep connection between MOND and Koide's rule, even if the connection isn't exactly a coincidence. After all, MOND is acting not just on fundamental particle masses arising via the Higgs mechanism, as Koide's rule does. It also acts on all kinds of mass-energy, such as the mass arising from gluon fields in protons, neutrons and other hadrons, which has nothing to do with the Higgs mechanism rest masses that Koide's rule and its extensions relate to. Gluon field masses arise from the magnitude of quark color charges and the strong force coupling constant instead.

Alas, no comparable first-principles source for Koide's rule is widely shared as an explanation for it, although a few proposals have been suggested. My own physics intuition is that Koide's rule and its extensions follow an ansatz based on a dynamic, non-linear balancing of the charged lepton flavors and quark flavors, respectively, via flavor-changing W boson interactions governed by the CKM matrix and lepton universality (with the Higgs mechanism really just setting the overall mass scale for the fundamental SM particles). I can see the bare outlines of how something along those lines might work, but I lack the mathematical physics chops to fully express it.
 
Last edited:
  • #264
The square root of mass is also in equation (2) of the excellent paper "River Model of Black Holes" by Hamilton & Lisle, Am.J.Phys. 76:519-532, 2008: https://arxiv.org/abs/gr-qc/0411060

The paper is about a model of black holes on flat space-time. It's what you conclude a black hole must be if you follow the model of GR using geometric algebra (gamma matrices) from the Cambridge geometry group. The non-rotating version is called "Gullstrand-Painleve" coordinates, so I treated it, along with Schwarzschild coordinates, in my paper: https://arxiv.org/abs/0907.0660

The idea is that black holes act as if space is a river that flows into the hole. The square root of the black hole mass is in the velocity of the river.

Meanwhile, I'm working on a paper that defines a new formulation of quantum mechanics that includes statistical mechanics and the intermediate transitions from wave to particle in wave / particle duality.
 
  • Like
Likes ohwilleke
  • #265
Reviewing the Wikipedia article, I think that we never mentioned Koide's original definition of the formula.

The original derivation in "Quark and lepton masses speculated from a Subquark model" is very nice. It just says to assume that $$m_{e_i} \propto (z_0 + z_i)^2 $$ with the conditions $$z_1+z_2+z_3=0$$ and $$\frac 13(z_1^2+z_2^2+z_3^2)=z_0^2$$

An interesting corollary is that the "two generations version" is simply a pair with a massless particle and a massive one.
 
Last edited:
  • Like
Likes ohwilleke
  • #266
Elaborating on the original paper, it could be interesting to rewrite the Koide equation as a less spectacular "Koide Postulate"
$$ \operatorname {Tr} D^2 = \operatorname {Tr} Z^2 $$
where D and Z are respectively diagonal and traceless matrices that decompose the Yukawa matrix via
$$ A= D+Z$$
$$ Y = AA^+$$
If ##A## is also diagonal, and then so are D and Z, we recover the traditional Koide formula. But it is still interesting to look at the trace equation. Generically we do:
$$A = \pm \sqrt Y ;\; D= {\operatorname {Tr} A \over 3} Id_3 ;\; Z= A- D$$
and then we compare. For the charged leptons, with the tau mass at the old value 1776.86 ± 0.12 MeV, we get

##\operatorname {Tr} D^2=941.52 \pm 0.05 MeV##
##\operatorname {Tr} Z^2 =941.51 \pm 0.07 MeV##
If we use the lepton masses at the charm scale as given in the XZZ paper, then, as you know, the Koide equation is no longer within the error bars; we get 0.667850 ± 0.000011 instead of just two thirds. But it is interesting that in this approach the hit is taken more by the diagonal part, which goes down to 938.27 ± 0.08 MeV, while the traceless part goes only a bit up, to 941.60 ± 0.12 MeV. At the bottom scale, both sides go down proportionally, running until the GUT scale as 890 vs 894 MeV.
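A minimal numeric sketch of the diagonal case (numpy; charged-lepton pole masses in MeV, with the tau at 1776.86 as above):

[code]
# A = sqrt(Y) diagonal, D = (Tr A / 3) * Id, Z = A - D;
# the Koide relation is then Tr D^2 = Tr Z^2 (both ~ 941.5 MeV here).
import numpy as np

m = np.array([0.5110, 105.658, 1776.86])     # m_e, m_mu, m_tau in MeV
A = np.diag(np.sqrt(m))
D = (np.trace(A) / 3) * np.eye(3)
Z = A - D
print(np.trace(D @ D), np.trace(Z @ Z))      # 941.5..., 941.5...
[/code]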

EDIT: It can be worthwhile to remark that the old equation is recovered because:
$$\operatorname {Tr} Z^2 - \operatorname {Tr} D^2 = \operatorname {Tr} A^2 - \operatorname {Tr} 2AD = (m_e+m_\mu+m_\tau) - 2 (\sqrt m_e+\sqrt m_\mu+\sqrt m_\tau)^2/3 $$
So the equation can be also reformulated as
$$ \operatorname {Tr} A^2 = \operatorname {Tr} \{A,D\} $$ with ##[A,D]=0##
 
Last edited:
  • Like
Likes ohwilleke and mitchell porter
  • #267
arivero said:
$$ \operatorname {Tr} D^2 = \operatorname {Tr} Z^2 $$

ADDENDA: A consequence of this line of thought is that it allows us to reformulate R. Foot's observation in the most abstruse algebraic way possible. Consider that being at 45 degrees from (1,1,1) means that the projection onto the diagonal and its orthogonal complement have the same size. Consider the 3x3 matrices as a vector space with the Hilbert-Schmidt inner product ##<A,B>= Tr(A^+ B)## and its associated norm ##||A||_{HS}=\sqrt{Tr(AA^+)}##. The diagonal matrix ##D## above is just the projection ##A^\|## of ##A## onto the line of identity multiples, and it is the projection we visualize in Foot's cone. So

We call the Koide ansatz the postulate that there exists a decomposition $$Y = A A ^+$$ of the Yukawa mass matrix of the charged leptons such that $$\|A^\parallel\|_{HS}=\|A^\perp\|_{HS}$$ for the projection onto the line of multiples of the identity, using the Hilbert-Schmidt inner product ##<U,V>= Tr(U V^+)## and its associated norm.

When ##A## is self-adjoint, the ansatz produces the Koide formula.

A possible pursuit in this approach could be to investigate the normality of ##A##. It is easy to consider non-normal 3x3 matrices, knowing that every non-diagonal triangular matrix is non-normal. And then we have two Yukawa mass matrices ##Y_0= A A^+##, ##Y_1= A^+ A## with the same mass values.

The normal but not self-adjoint case is also interesting. Some of the work of CarlB on circulant matrices could reappear here. Besides, it invites one to consider a generalisation of the square root of a mass that includes complex phases, using ##z=\sqrt m e^{i \phi}## and then ##m=z z^* ##.

Ah, Koide used this format for his formula in this paper: https://arxiv.org/abs/hep-ph/0005137v1
 
Last edited:
  • Like
Likes ohwilleke
  • #268
Sorry for throwing in a very tangential reference. But I tried to find a precedent for caring about the Hilbert-Schmidt norm of the yukawa matrices. The only thing I found was a paper from 1997, part of a research program in which they try to construct quantum field theories whose beta functions vanish. It's weird but tantalizing, because we have to care about beta functions too. Maybe we need to think about the new ansatz, in the context of RG flows in the space of couplings.
 
  • Like
Likes ohwilleke and arivero
  • #269
Yeah, Hilbert-Schmidt norm is very pedantic :smile:. But once one goes to Tr A A^+, a lot of stuff can appear.

Still I am worried that I do not know where to look for fundamental references on this "theory of trace invariants". For instance I was very surprised when I considered the complex generalisation: the most general matrix such that Tr Z Z^+ = 1 and Tr Z = 0. I was expecting it to be just some "unphysical phases".

EDIT: let's ask Stack Exchange too: https://math.stackexchange.com/questions/4734443/more-general-traceless-normalized-matrix Of course they pointed out that any traceless matrix plays the role, up to normalisation... not very helpful about how to parametrise the eigenvalues. Surely if I ask they will tell me "just the dimension of the matrix, minus one".
 
Last edited:
  • Like
Likes ohwilleke
  • #270
This thread was launched by the idea of a "waterfall" of Koide-like relations that relate the masses of all the quarks as well as the charged leptons. An esoteric idea buried in that paper (in part 3), is that the more fundamental version of this "waterfall" starts with a massless up quark, but that instantons add a finite correction to the up quark mass, a correction which then propagates through the waterfall and gives rise to the observed values of the masses.

The idea that the up quark is fundamentally massless was proposed as a solution to the strong CP problem (why the theta angle of QCD is zero), but lattice QCD calculations imply that there must be a nonzero fundamental mass, in addition to any mass coming from QCD instantons. However, this just means that the up yukawa must be nonzero at the QCD scale. It is still possible that the up mass comes from instantons of a larger gauge group for which SU(3) color is just a subgroup.

"Non-Invertible Peccei-Quinn Symmetry and the Massless Quark Solution to the Strong CP Problem" illustrates this for the example of SU(9) color-flavor unification. Actually they talk about a massless down quark, but they state it could work for the up quark as well, and they cite some 2017 papers (references 86-87) which feature a massless up quark in the context of SU(3)^3. Also see their reference 2, which posits a similar origin for neutrino masses, and illustrates that these instantons can be thought of as arising from virtual flavored monopoles.
 
  • Like
Likes ohwilleke
  • #271
mitchell porter said:
This thread was launched by the idea of a "waterfall" of Koide-like relations that relate the masses of all the quarks as well as the charged leptons. An esoteric idea buried in that paper (in part 3), is that the more fundamental version of this "waterfall" starts with a massless up quark, but that instantons add a finite correction to the up quark mass, a correction which then propagates through the waterfall and gives rise to the observed values of the masses.

The idea that the up quark is fundamentally massless was proposed as a solution to the strong CP problem (why the theta angle of QCD is zero), but lattice QCD calculations imply that there must be a nonzero fundamental mass, in addition to any mass coming from QCD instantons. However, this just means that the up yukawa must be nonzero at the QCD scale. It is still possible that the up mass comes from instantons of a larger gauge group for which SU(3) color is just a subgroup.

"Non-Invertible Peccei-Quinn Symmetry and the Massless Quark Solution to the Strong CP Problem" illustrates this for the example of SU(9) color-flavor unification. Actually they talk about a massless down quark, but they state it could work for the up quark as well, and they cite some 2017 papers (references 86-87) which feature a massless up quark in the context of SU(3)^3. Also see their reference 2, which posits a similar origin for neutrino masses, and illustrates that these instantons can be thought of as arising from virtual flavored monopoles.
Thanks for the heads up.

The Many Experimentally Determined Constants Of The SM Belong In The Electroweak Sector - The QCD Sector Isn't The Right Place To Look For Answers To Koide-Like Questions

The idea that the quark masses are related to the SU(3) QCD interactions at all, as opposed to being basically an electroweak phenomenon, however, doesn't seem right.

QCD has nothing to do with the CKM matrix, the PMNS matrix, the charged lepton masses that follow the original Koide's rule, W boson and Z boson decays, or the SM Higgs mechanism which has been demonstrated well at the LHC.

QCD-related hadron mass doesn't even have the same origin (or even a similar origin) as the masses of the charged fundamental fermions and massive fundamental bosons that arise from the Higgs mechanism. QCD is doing its part of the mass-generation thing in composite particles, dynamically, in a way that is already quite feasible to calculate with lattice QCD.

Rather than QCD instantons, starting with a zero mass or small self-interaction origin mass for the up quark (indeed, the up quark, down quark, electron, and lightest neutrino mass eigenstate are all reasonably close to what they should be due to self-interactions), and then modifying it with loop level modifications (much like the electroweak part of the muon g-2 calculation), seems so much more in tune with the way all of the other relevant parts of the Standard Model work.

Similarly, the seductive LC & P relationship (i.e. that the sum of the squares of the masses of the fundamental particles is equal to the square of the Higgs vev), is still true to within about 2 sigma or a hair more for a very slight statistical tension (almost all of the uncertainty arising from uncertainty in the top quark pole mass and the Higgs boson mass, which combined, are a little light at current best fit values). But this relationship really only makes sense in the context of the electroweak part of the Standard Model model ignoring QCD.

Even if the simple LC & P relationship comes into a greater tension with the best fit fundamental particle masses with new data, it doesn't take much of a BSM fix to solve that in a situation where there are a lot more potential moving parts.

A single BSM 3 GeV gauge boson (perhaps serving a similar role for neutrino mass to the role that the Higgs boson serves for all of the other fundamental particle masses), for example, would be enough to bring the current 2 sigma deviation from best fit values to a perfect LC & P fit.

The two sigma range for the top quark pole mass according to the last paper with combined LHC data sets is 171.86-173.18 GeV with a best fit value of 172.52 GeV. This result is essentially the same as the Particle Data Group value, but cuts the uncertainty in half. If the true value is on the high end of this range, and the true value of the Higgs boson is on the high end of the range it is experimentally permitted to have, then either the need for a BSM particle vanishes entirely or the mass needed to make it balance gets much smaller than 3 GeV.

In contrast, there's no way to fix any LC & P (or other) discrepancies between theory and experiment in the QCD part of the SM because it doesn't have enough moving parts.

The strong force coupling constant is really the only experimentally measured physical constant in the SU(3) QCD sector of the SM. What is it?

##\alpha_s^{(n_f=5)}(M_Z)## =

0.1171(7) at the renormalization group summed perturbation theory (RGSPT) value;

0.1179(9) at the Particle Data Group (PDG) value;

0.1184(8) at the 2021 Flavor Lattice Averaging Group (FLAG) value.

These values are consistent with each other at the usual two sigma level and are each precise to a bit under the one percent level. There are lots of deep intrinsic barriers to making that value more precise, because strong force propagator loops converge so much more slowly and with so much more computational effort than electroweak propagator loops do (and reach peak precision at a much lower level before they start to diverge in perturbative QCD). And I'm not aware of any real strong theoretical hint at what value it should have from any theory.

So, there just isn't much to work with there and perhaps unsurprisingly as a result of this simplicity there isn't even any significant amount of BSM variant theorizing about the QCD sector of the SM model. It isn't fruitful because the experimental and lattice data isn't precise enough to confirm or deny any reasonable variant on it.

The beta function of the strong force coupling constant is deterministically set by renormalization theory without any experimental input, and the possible color charges and their relative values are similarly fixed in the theory at small integer or ratio-of-small-integer values, which are confirmed by experiment to high precision and by the need for theoretical consistency.

So why turn to QCD to explain the unexplained values of the fundamental constants or to reduce the degrees of freedom in the model?

In contrast, eight of the SM constants are CKM/PMNS matrix parameters in the electroweak sector of the model that basically describe W boson interactions, twelve are Higgs boson Yukawas in the electroweak sector, and two are electroweak coupling constants. The Higgs vev doesn't have any measurable QCD contributions either. The three neutrino mass eigenstates may not be Higgs Yukawas, but they certainly have nothing to do with QCD with which neutrinos don't even interact at the tree level.

What's one more experimentally measured physical constant in the electroweak sector if you need it to balance the books and make a credible prediction of new physics, in a sector where you already have 25 experimentally measured physical constants (less one or two degrees of freedom because they aren't fully independent of each other)?

The genius of Koide sum rules, if you can make them work, is that it can, in principle, greatly reduce the number of independent degrees of freedom associated with those 25 experimentally measured physical constants in the electroweak sector, eliminating seven or more of them, in addition to the one or two that we can already trim in the existing SM electroweak sector due to related electroweak quantities like the EM and weak force coupling constants, and W and Z boson masses.

The Strong CP Problem Is A Non-Problem

I also continue to be unimpressed with the notion that the Strong CP problem is really a problem at all.

Nature can set its physical constants to anything it wants to in the SM. It is sheer arrogance to impose our expectations on those values, and the quest for "naturalness" driving this "problem" has been perhaps the most fruitless and most effort-consuming scientific program since we tried to explain planetary motions with epicycles.

Also, the fact that gluons are massless in SM QCD (by symmetry), together with the fact that massless particles don't experience time in their own reference frame because they travel at exactly the speed of light, makes any possibility other than zero CP violation in QCD very hard to justify or to consider "natural".

We don't theoretically need a zero mass up quark to get that result. So, finding a cheat by which we can get a zero mass up quark isn't very impressive either.
 
Last edited:
  • #272
mitchell porter said:
Also see their reference 2
I meant reference 3.
ohwilleke said:
QCD-related hadron mass doesn't even have the same origin (or even a similar origin) as the masses of the charged fundamental fermions and massive fundamental bosons that arise from the Higgs mechanism.
It's been noted here for many years that in Carl Brannen's eigenvalue formulation of the Koide formula, the mass scale is determined by a quantity equal to the mass of a "constituent quark", i.e. a quark in the context of a nucleon, dressed with whatever extra stuff is responsible for most of the nucleon mass. (To get this quantity from Brannen's paper, look up μ in equation 14, which has dimensions of sqrt(eV), and square it.)

Now interestingly, one of the theories of confinement in QCD (such as confinement of quarks inside a nucleon), is monopole condensation in the QCD vacuum. These aren't massive persistent monopoles like in grand unified theory, but rather configurations of the gauge field, that can even be gauge-dependent in some versions of this idea.

Meanwhile, the paper referenced in #270 proposes that fundamental fermions get a contribution to their Higgs-generated mass, via virtual monopole loops - see their Figure 2 on page 26.

However, these monopoles are not just SU(3) color monopoles, they are SU(9) color-flavor monopoles. And this could bring us back to the electroweak sector, since electroweak interactions are the flavor-changing interactions in the standard model.
 
  • #273
Recently I have been reviewing Koide 1981, the preon theory of lepton (and quark) masses. Let me review what Koide did:
  • He postulates that a charged lepton is a composite [itex]lh^i[/itex] (i=1,2,3) of a flavour preon and a generation preon
  • The flavour preon [itex]l[/itex] has subcharge [itex]z_0[/itex] and the generation preons [itex]h^i[/itex] have subcharge [itex]z_i[/itex] of whatever interaction that keeps the preons in place.
  • The energy of the composite is the sum of the self-energies of each preon and the energy of interaction. Here Koide claims that [tex] E_i = m_i c^2 = K(a) {z_0^2\over 2} + K(a) {z_i^2\over 2} + K(a) z_0 z_i [/tex] where we can allow the force coupling to depend on a cutoff or scale [itex]a[/itex], with the requirement of having the same dependency in the three pieces. Note that in the model the generation preon is a boson and the flavour preon is a fermion.
  • The sum of generation preon charges is zero, [itex]z_1+z_2+z_3=0[/itex]. This is a typical group-theoretical requisite, also related to anomalies etc. Fine here.
  • The square of the flavour preon charge is the average of the squares of the generation preon charges: [itex]3 z_0^2 = z_1^2 + z_2^2+z_3^2[/itex]. From the usage it looks like Koide imagines this to be a sort of normalisation condition.
And with all of this, the trick is done, let [itex]B(a)=K(a)/2c^2[/itex] and run the math:
[tex]
m_i = B(a) (z_0+z_i)^2
[/tex][tex]
(\sum \sqrt {m_i})^2 = 9 B(a) z_0^2
[/tex][tex]
\sum m_i = 6 B(a) z_0^2
[/tex][tex]
{ \sum m_i \over (\sum \sqrt {m_i})^2 } = {2 \over 3}
[/tex]
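A symbolic check of this derivation (sympy sketch; B, z0, z1, z2 are free symbols, with z3 and z0² eliminated by the two constraints above):

[code]
# With z1 + z2 + z3 = 0 and 3 z0^2 = z1^2 + z2^2 + z3^2, the masses
# m_i = B (z0 + z_i)^2 are forced to satisfy sum(m) / (sum sqrt(m))^2 = 2/3.
from sympy import symbols, expand, simplify

B, z0, z1, z2 = symbols('B z0 z1 z2', real=True)
z3 = -(z1 + z2)                              # first constraint
z0sq = (z1**2 + z2**2 + z3**2) / 3           # second constraint, solved for z0^2
sqrt_m = [z0 + z for z in (z1, z2, z3)]      # sqrt(m_i), up to a factor sqrt(B)
num = expand(B * sum(s**2 for s in sqrt_m)).subs(z0**2, z0sq)
den = expand(B * sum(sqrt_m)**2).subs(z0**2, z0sq)
print(simplify(num / den))                   # -> 2/3
[/code]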

(Edit: a good integer approximation could be z1=-48, z2=-21, z3=69, z0=50, but I do not see how it could come from group theory. Perhaps Eddington or Krolikowski.)

Most of the stuff in the old papers is not about this formula but about one for quark mixing; in order to get it, Koide postulates that the generation bosons themselves are composites of SU(3) generation fermions, that the bosons that go to the leptons are the ones in the antitriplet of [itex]3 \times 3 = 6 + \bar 3[/itex], and that the bosons that go to the quarks are the mix of the singlet and the two octet neutrals that we get from [itex]3 \times \bar 3 =8 + 1[/itex].
 
Last edited:
  • Like
Likes ohwilleke
  • #274
Oh well, so the sBootstrap implies the Koide formula.

1721406948266.png


(if one can fake the SU(2) charge so that it works as z_0)
 
  • #275
arivero said:
Oh well, so the sBootstrap implies koide formula.

View attachment 348581

(if one can fake the SU(2) charge so that it works as z_0)
FYI, the image is very fuzzy and hard to make out, even when you click on it.
 
  • #276
It was a quick note to record that it has taken me more than ten years to notice how to recover Koide from group theory, while it is trivial given SU(3). So I am a bit ashamed.

All the action has been happening in the other thread. There we saw that the sBootstrap solution can be obtained from - or is equivalent to - an SU(15) group that optionally lives inside SO(32). We want to break it down to SU(3) colour times SU(3) flavour times SU(2) flavour, which we label as r,g,b times d,s,b times u,c to keep track of the diquarks and dibosons. To do this, we factor colour first, so we are left with an anticoloured 15 of SU(5), a coloured conjugate 15, and a neutral 24. Now we look at the flavour irreducible representations.

The 15 has a triplet (1,3) with the horrible +4/3 scalars, a sextuple (3,2) with the three families of down squarks, and a sextet (6,1) with the three families of up squarks.

The 24 has a neutral triplet (1,3), a neutral singlet (1,1) and a neutral octet (8,1). So the 12 sneutrinos. And then two sextuples, (3,2) and its conjugate, that are our charged leptons! and pretty organised in SU(3), so now we just do the trick mentioned in #273: we assign to z1, z2, z3 any combination of the coordinates of the SU(3) roots; we ask z0 to take the value of the Koide postulate, which we are free to do as it is the charge along the SU(2) symmetry, and voila, the masses meet the Koide formula.
 
  • #279
arivero said:
View attachment 348854

Updated the section on masses also in arxiv, v3 of https://arxiv.org/abs/2407.05397
A key would be appreciated. Are the entries in the body of the chart in MeV?

What do the columns represent?

Also, I love this illustration from the pdf:

Screenshot 2024-07-25 at 2.13.04 PM.png
 
  • #280
I read the article earlier this week. I like it and have kept a copy for useful references. Haven't looked at the updated tables yet, though.
 