What is new with Koide sum rules?

  • Thread starter arivero
  • #176
Some basic remarks on obtaining the Koide relation, and its generalizations, via string theory.

In the standard model, elementary masses and mixings come from yukawa couplings between two chiral fermions and the Higgs field. This is also the case in string theory realizations of the standard model. A sketch of how it works for intersecting branes may be seen in figure 1 (page 3) here. Each distinct species of elementary particle is found at a specific location in the extra dimensions, at a point where two branes intersect; the particle itself is an open string between the two branes.

The left-handed fermion is at one location, the right-handed fermion at another location, the Higgs boson is at a third location. The yukawa coupling is a three-way interaction in which e.g. a string jumps from the left-handed site to the right-handed site, by stretching out until it joins with a Higgs string. The probability amplitude for this to happen is dominated by the worldsheet with the minimum surface area, which is the triangle in the picture.

To a first approximation, the amplitude equals exp(-area). So if you know the mass matrices you want, this is a way to picture the stringy geometry that you need: the Higgs boson will be localized somewhere in the extra dimensions, the elementary chiral fermions will be scattered around it, and the distances and angles must be such that the areas of these triangles are - ln(yukawa).
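To put rough numbers on this: a minimal sketch (assuming the tree-level relation ##y = \sqrt{2}\,m/v## with ##v = 246## GeV, and PDG-ballpark lepton masses; the variable names are mine), computing the triangle areas the charged leptons would require:

```python
import math

V_HIGGS = 246.0  # GeV, electroweak vev (assumed tree-level value)

# charged lepton masses in GeV (PDG ballpark)
masses = {"e": 0.000511, "mu": 0.10566, "tau": 1.77686}

for name, mass in masses.items():
    y = math.sqrt(2) * mass / V_HIGGS   # yukawa coupling
    area = -math.log(y)                 # required worldsheet area in string units
    print(f"{name}: yukawa ~ {y:.3g}, triangle area ~ {area:.2f}")
```

So the electron's triangle would need roughly three times the area of the tau's, a purely geometric encoding of the mass hierarchy.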

But you can't just say that you want the strings to be located at specific points, and then just place them there. Or at least, you can't do that in any stringy model that I ever heard of. In real string theory, you'll have an overall geometry for the extra dimensions, and then the branes will occupy hypersurfaces in that geometry, and all the geometric parameters (the moduli) are dynamical. They will settle into a state of lowest energy, and that will determine the relative locations of everything... Perhaps this could be avoided if the background geometry were hyperbolic and rigid, or if numerous branes form a dense mesh so that there's always an intersection point near where you want your particles to be located. But I am not aware of any brane model where that can be done.

The masses and mixings present certain patterns or possible patterns, that might guide you in constructing such a brane geometry. But if we take Koide seriously, there's a very special and precise pattern present, specifically in the masses of the charged leptons. In Koide's field-theoretic models, he introduces extra fields, "yukawaons", which enter into the yukawa coupling, in order to produce his relation.

In terms of string theory, it's possible that the Koide relation, if it can be produced at all, might be due solely to a special symmetry of the compact geometry and the location of branes within it - that might be enough to induce the mass relation. Or, there might be extra string states involved - the worldsheet may trace out an n-gon with n>3. A further interesting possibility is that virtual branes may be involved - branes that wrap some closed hypersurface in the compact geometry, with which the strings interact; a kind of vacuum polarization. It would be interesting indeed if yukawaons were associated with such "Euclidean branes".

(I will also mention again that a Koide relation among pole masses seems to require still further interactions that produce special cancellations, like the family gauge bosons introduced by Sumino. All the mechanisms mentioned above are also potentially relevant here.)

How about the generalization of the Koide relation which initiated this thread, the waterfall of quark triplets introduced by @arivero in arXiv:1111.7232? Unlike the original Koide relation, there is still no field-theoretic implementation of the full waterfall, because the triples include quarks with different hypercharges, and that's just more difficult to do. But all my comments still apply, and the paper contains some remarks on the geometry of the mass vectors involved, which, who knows, might be directly relevant to a stringy implementation.

There's one more notable phenomenon, and that is the appearance of mass scales from QCD - 313 MeV, 939 MeV - in some of these Koide triples, when they are expressed using Carl Brannen's method. 939 MeV is the nucleon mass and it has been obtained from lattice QCD, but I am not aware of any simplified heuristic explanation of where that number comes from, that QCD theorists would agree with. In a few places in this thread, I have posted about papers which do purport to give a field-theoretic derivation of these quantities (Schumacher in #134, Gorsky et al in #136). The holographic QCD of Sakai and Sugimoto also gives a framework (from string theory rather than field theory) in which the nucleon mass can be obtained, once all the parameters of the brane geometry have been specified.

If the QCD scales do appear in the extended Koide relations for a reason, and not just by chance, I think it has to be because there is some QCD-like theory underlying the standard model. There have been many proposals for what this could be, as has been documented throughout the thread on "the wrong turn of string theory". Presumably one should then look for a stringy implementation of QCD mechanisms like those just described, and then rerun the previous arguments about yukawa couplings on top of that.
 
  • #177
An anonymous edit in Wikipedia, deleted because it did not provide sources, points out that the Koide equation amounts to saying that the square roots [itex]x_n={\sqrt {m_{n}}}[/itex] are the three solutions of a cubic equation
[tex]ax^{3}+bx^{2}+cx+d=0[/tex]
when [tex]b^{2}=6ac[/tex]

This idea is along the lines of writing the Koide formula as [tex] (x_1^2 + x_2^2 + x _3^2) - 4 (x_1 x_2 + x_2 x_3 + x_3 x_1) =0[/tex] a point that Goffinet already exploited to build his quartic equation.

I was wondering: one can always multiply the cubic by [itex]ax^{3}-bx^{2}+cx-d[/itex], can't one? If so, we should also have
[tex]a^2 m^3+(2 a c-b^2) m^2+(c^2-2 b d) m-d^2 = 0[/tex]
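For what it's worth, the product identity is easy to check symbolically; a minimal sympy sketch (symbol names mine):

```python
import sympy as sp

a, b, c, d, x, m = sp.symbols('a b c d x m')

# multiply the cubic by its sign-flipped partner: the odd powers of x cancel
prod = sp.expand((a*x**3 + b*x**2 + c*x + d) * (a*x**3 - b*x**2 + c*x - d))

# rewrite the surviving even powers in terms of m = x^2
poly_m = prod.subs(x**6, m**3).subs(x**4, m**2).subs(x**2, m)

# matches a^2 m^3 + (2ac - b^2) m^2 + (c^2 - 2bd) m - d^2
print(sp.collect(sp.expand(poly_m), m))
```

By Vieta, ##(x_1+x_2+x_3)^2 = b^2/a^2## and ##x_1x_2+x_2x_3+x_3x_1 = c/a##, so the symmetric-function form of Koide above is exactly the condition ##b^2 = 6ac##.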
 
Last edited:
  • Like
Likes ohwilleke
  • #178
arivero said:
I agree, bare plus corrections seems the best approach, and in fact it is the usual approach to calculate the decay width. But I am intrigued really about the size of phase space, and more particularly about which is the maximum energy that the neutrino pair can carry. In principle it is a measurable quantity. Is it [itex]105.6583668 - 0.510998910[/itex], i.e., [itex]m_\mu(m_\mu) - m_e(m_e)[/itex] (?), or is it [itex]m_\mu(m_\mu) - m_e(m_\mu)[/itex]?
Since they are both free particles, the electron's and the muon's masses are both on-shell masses (pole masses): ##m_e(m_e)## and ##m_\mu(m_\mu)##.

This points to a more serious problem with Koide's mass formula. How well does it hold up at electroweak-unification energy scales or GUT energy scales?
 
  • #179
lpetrich said:
This points to a more serious problem with Koide's mass formula. How well does it hold up at electroweak-unification energy scales or GUT energy scales?
That will depend on what happens at intermediate scales. In the past ten years, Koide and his collaborators have considered many variations on the theme that the mass formula is exact at some high scale, and is somewhat preserved at lower scales by a version of Sumino's mechanism, in which the bosons of a gauged flavor symmetry cancel a major electromagnetic contribution to the running. According to this paradigm, even when the Sumino mechanism is included, one has to regard the precision with which the formula works for the pole masses, as partly coincidental.

To be a little more specific: Sumino said that there would be a unification of electroweak and the flavor symmetry at around 10^3 TeV, and predicted that the next decimal place of the tau lepton pole mass would deviate from the formula. Koide has modified Sumino's theory in ways that imply larger corrections at low scales (and thus the formula's success when applied to the pole masses is more of a coincidence in these theories), but has retained the idea that the new gauge bosons have masses of around 10^3 TeV.

Meanwhile, one could guess that the pole masses are the important quantities after all, but then some wholly new perspective or mechanism is needed. We do have the concept of an infrared fixed point; maybe there's some nonperturbative perspective that mixes UV and IR in which it makes sense; but right now these models by Koide and friends are the only ones that address this problem.
 
  • #180
How compatible could a composite Higgs be with a GUT? One could explain the Koide coincidence, the other could explain the coupling coincidence.
 
  • #181
Hmm, I should avoid typing from the phone. Well, anyway, the point was that perhaps the GUT scale is not relevant for Koide. It is amusing that the main argument that we have (had?) for GUT is another numerical coincidence, the one of the coupling constants, but there was nothing about a coincidence of yukawas... at most, variations on the theme of Jarlskog and Georgi https://en.wikipedia.org/wiki/Georgi–Jarlskog_mass_relation.

Another problem for quarks is that the pole mass is not directly measurable. Worse, the Koide formula seems to work better with MSbar masses. Taking as input 4.18 and 1.28 GeV, the Koide formula predicts 168.9 GeV for the top quark, while taking the pole masses 4.78 and 1.67 GeV the prediction goes off to 203.2 GeV. (We nail it with intermediate mixes, e.g. the input 4.18 and 1.37 predicts 173.3.) Note that we now suspect that the MSbar mass of the top has a very noticeable EW contribution; Jegerlehner says that it actually counterweights the QCD contribution.
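The numbers above just come from solving the Koide relation for the third mass given the other two; a small Python sketch (the function name is mine, masses in GeV):

```python
import math

def koide_third_mass(m1, m2):
    """Larger root m3 of the Koide relation
    (m1 + m2 + m3) = (2/3)(sqrt(m1) + sqrt(m2) + sqrt(m3))^2,
    i.e. the larger root x = sqrt(m3) of x^2 - 4*s*x + 3*(m1+m2) - 2*s^2 = 0."""
    s = math.sqrt(m1) + math.sqrt(m2)
    x = 2*s + math.sqrt(6*s**2 - 3*(m1 + m2))
    return x**2

print(round(koide_third_mass(1.28, 4.18), 1))  # MSbar charm+bottom -> ~168.9
print(round(koide_third_mass(1.67, 4.78), 1))  # pole charm+bottom  -> ~203.2
print(round(koide_third_mass(1.37, 4.18), 1))  # intermediate mix   -> ~173.3
```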
 
Last edited:
  • #182
A new Koide paper:

Yoshio Koide, Hiroyuki Nishiura

(Submitted on 18 May 2018)

Recently, we have proposed a quark mass matrix model based on U(3)×U(3)′ family symmetry, in which the up- and down-quark mass matrices ##M_u## and ##M_d## are described only by complex parameters ##a_u## and ##a_d##, respectively. When we use the charged lepton masses as additional input values, we can successfully obtain predictions for the quark masses and Cabibbo-Kobayashi-Maskawa mixing. Since we have only one complex parameter ##a_q## for each mass matrix ##M_q##, we can obtain a parameter-independent mass relation by using three equations for ##\mathrm{Tr}[H_q]##, ##\mathrm{Tr}[H_q H_q]## and ##\det H_q##, where ##H_q \equiv M_q M_q^\dagger## (##q=u,d##). In this paper, we investigate the parameter-independent feature of this quark mass relation in the model.
 
  • #183
Koide considers the possibility that his charged lepton rule could be a function of SUSY physics. https://arxiv.org/abs/1805.09533

The observed charged lepton masses satisfy the relations ##K \equiv (m_e+m_\mu+m_\tau)/(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2 = 2/3## and ##\kappa \equiv \sqrt{m_e m_\mu m_\tau}/(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^3 = 1/486## with great accuracy. These parameters are given as ##K = \mathrm{Tr}[\Phi\Phi]/(\mathrm{Tr}[\Phi])^2## and ##\kappa = \det\Phi/(\mathrm{Tr}[\Phi])^3## if the charged lepton masses ##m_{ei}## are given by ##m_{ei} \propto \sum_k \Phi_{ki}\Phi_{ik}##, where ##\Phi## is a U(3)-family nonet scalar. Simple scalar potential forms to realize the relations have already been proposed in non-supersymmetric scenarios, but the potential forms are not stable against renormalization group effects. In this paper, we examine supersymmetric scenarios to make the parameters ##K## and ##\kappa## stable against these effects, and show possible simple superpotential forms for the relations.
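As a quick sanity check of the two relations in the abstract (a Python sketch; the input masses are current PDG-style values, my choice of inputs):

```python
import math

# charged lepton masses in MeV (PDG-style values, assumed inputs)
me, mmu, mtau = 0.51099895, 105.6583755, 1776.86

s = math.sqrt(me) + math.sqrt(mmu) + math.sqrt(mtau)
K = (me + mmu + mtau) / s**2
kappa = math.sqrt(me * mmu * mtau) / s**3

print(K - 2/3)      # ~1e-5: the K relation holds to a few parts in 10^5
print(kappa * 486)  # ~1.003: the kappa relation holds only to a few per mil
```

Note the asymmetry: with these inputs ##K = 2/3## holds far more precisely than ##\kappa = 1/486##.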
 
  • #184
While Strings 2018 convened in Okinawa, Koide gave a talk at Osaka University (PDF) reviewing very succinctly the nature of his relation, the contribution of Sumino, and the very latest theoretical ideas.
 
  • #185
mitchell porter said:
While Strings 2018 convened in Okinawa, Koide gave a talk at Osaka University (PDF) reviewing very succinctly the nature of his relation, the contribution of Sumino, and the very latest theoretical ideas.

Thanks. The presentation is a riot! Such humor and humility.
 
  • #186
<Moderator's note: twitter link removed: too much advertising and inappropriate source.>

I didn't know that Twitter links were categorically forbidden; even top-flight newspapers use them now, and a lot of worthwhile discussion among experts in the field also occurs on Twitter before it ends up being published, if it is published at all. Surely there must be some appropriate way to note where other people are discussing an idea. The link isn't being used as a source of authority in this case; it is being used as a link to a discussion elsewhere, in much the same way that someone might link to another Physics Forums thread or to leaked information about an imminent announcement.


A skeptical lot. I don't think they give sufficient credit to the fact that Koide's rule was proposed in 1981, when it was a poor fit to the tau mass, and that the fit has consistently improved over 37 years of increasing measurement precision (even from 2012 to 2018); or to the fact that the number of significant digits of the match is high and consistent with the data to within the margin of error, when the rule wasn't built to match existing data.

But, credit to them for getting to a lot of the key related articles quickly (Descartes' circle and quark mass relations) and hitting on some key points quickly.

-1 for the guy saying that 0.999999... is not equal to 1.

Is there merit to the analytic expression they reference? How accurate is it? How old is it?

Also, the other bit of numerology with the analytical expressions of the lepton masses in terms of the fine structure constant and pi was interesting.
<Moderator's note: twitter link removed: too much advertising and inappropriate source.>

If I knew Twitter links were forbidden across the board, I would have included more direct sourcing by clicking through to the references therein and the references within the referenced material. It is a bit irksome not to know that in advance and to have to recreate a reference. I would also urge the Mods to reconsider a categorical ban on Twitter links as a matter of moderation policy, and to make it more clear if it is to be a policy. Mostly I was simply trying to save myself the tedium of trying to type a formula accurately using LaTeX.

The interesting series of formulas are for the ratio of the muon mass to the electron mass, of the tau mass to the muon mass, and of the tau mass to the electron mass which are compared using 1998 CODATA and PDG sources.

There are three expressions shared by the three formulas:

##A = 1 - 4\pi(\alpha^2)##
##B = 1 + \alpha/2##
##C = 1 + 2\pi(\alpha/2) = 1 + \pi\alpha##

The muon mass/electron mass formula is ##\left(\frac{1}{2\pi\alpha^2}\right)^{2/3}(C/B)##

It purports to have a difference of 1 in the 7th significant digit from the PDG value.

The tau mass/muon mass formula is ##\left(\frac{1}{2\alpha}\right)^{2/3}(B/A)##

It purports to match a 5 significant digit PDG value.

The tau mass/electron mass formula is ##\left(\frac{1}{4\pi\alpha^3}\right)^{2/3}(C/A)##

It purports to have a difference of 1 in the 5th significant digit from the PDG value.

For what it is worth, I haven't confirmed the calculations or the referenced CODATA and PDG constants.


PDG for the tau mass is 1776.82 +/- 0.12 MeV

Koide's prediction for the tau mass is 1776.968921 +/- 0.000158 MeV

This formula predicts a tau mass of 1776.896635 MeV, which is about 0.07 MeV less than the Koide prediction, although there might be some rounding error issues and I don't have a MOE for the formula number. I used the five significant digit estimate of the tau mass to electron mass ratio in the illustration, so a difference in the sixth significant digit could be simply rounding error.
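For reference, the Koide tau prediction quoted above can be reproduced by solving the relation for the third mass (a Python sketch; the function name is mine, and the electron and muon pole masses are PDG-style inputs in MeV):

```python
import math

def koide_partner(m1, m2):
    # third mass completing a Koide triple with m1 and m2:
    # larger root x = sqrt(m3) of x^2 - 4*s*x + 3*(m1 + m2) - 2*s^2 = 0
    s = math.sqrt(m1) + math.sqrt(m2)
    return (2*s + math.sqrt(6*s**2 - 3*(m1 + m2)))**2

# electron and muon pole masses in MeV
m_tau_koide = koide_partner(0.51099895, 105.6583755)
print(round(m_tau_koide, 3))  # ~1776.969 MeV
```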

What to make of Dirac's 1937 Conjecture?

Dirac's conjecture on the electron radius v. size of the universe being roughly the same as the fine structure constant v. Newton's constant is also intriguing.
<Moderator's note: twitter link removed: too much advertising and inappropriate source.>

The conjecture called the Dirac Large Numbers Hypothesis is discussed at Wikipedia here: https://en.wikipedia.org/wiki/Dirac_large_numbers_hypothesis


An analysis that explores the same thing with a bit more clear language is here: http://www.jgiesen.de/astro/stars/diracnumber.htm

A 2017 preprint with eight citations discusses it here: https://arxiv.org/pdf/1707.07563.pdf

A 2013 paper revised in 2015 analyzes it here: http://pragtec.com/physique/download/Large_Numbers_Hypothesis_of_Dirac_de.php

A 2003 paper touches on it at https://www.jstor.org/stable/41134170?seq=1#page_scan_tab_contents

I didn't know that twitter links were categorically forbidden, and I would purge the ads if I knew how. It seemed a convenient way to link to an academically explored idea. Also, without the link, the latest insights of the very notable commentator and mathematical physicist John Baez are harder to present. If the latest commentary of leading scientists on scientific issues isn't acceptable to reference, it should be. Is it permissible to cut and paste a post from a Twitter thread by someone like Baez?

Baez notes that even though this coincidence holds at the moment, we have enough data to know that the magnitude of Newton's constant has not changed that dramatically over the history of the universe.

Neutrino Mass and Koide?

By the way - do you have links to any of the Koide-ish neutrino mass papers? The mass measurements are quite a bit more constrained than they were then (with the normal hierarchy strongly favored, some sense of the CP-violating phase, pretty accurate relative mass differences and a fairly tight cap on the sum of the three neutrino masses), so it would be interesting to compare. Plugging in all of those constraints you get:

Mv1: 0 - 7.6 meV
Mv2: 8.42 - 16.1 meV
Mv3: 56.92 - 66.2 meV

The CP violating phase seems to be centered around -pi.

Which is more information than it seems, because most of the Mv2 and Mv3 mass ranges are perfectly correlated with the Mv1 mass range.

One ought to be able to look at the Koide-ish neutrino mass papers (which flip a +/- sign IIRC) and numerically run through the allowed range for Mv1 to see what the best fit is and use that to make a prediction for all three absolute neutrino masses.

Never mind, found it: http://brannenworks.com/MASSES.pdf It puts a negative sign in front of the square root of Mv1 in the denominator and comes up with:

m1 = 0.000383462480(38) eV
m2 = 0.00891348724(79) eV
m3 = 0.0507118044(45) eV (I think this may be an error in the original, as it doesn't seem to be consistent with the predicted Mv3 squared - Mv2 squared value; I think it should be 0.05962528 . . .).

##m_2^2 - m_1^2 = 7.930321129(141) \times 10^{-5}\ \text{eV}^2## ------ PDG value ##7.53 \pm 0.18## (a 2.22 sigma difference - i.e. a modest tension)
##m_3^2 - m_2^2 = 2.49223685(44) \times 10^{-3}\ \text{eV}^2## ------ PDG value ##2.51 \pm 0.05## (less than 1 sigma different)

There is no value of Mv1 which can make the Koide formula work without a sign flip. I tried to reproduce his calculation and came up with an Mv1 of 0.31 meV using current PDG numbers for the M1-M2 and M2-M3 mass gaps, which isn't far off from Brannen's estimate.
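A quick numeric check of Brannen's sign-flipped relation against the masses quoted above (Python sketch; the function name is mine):

```python
import math

def koide_K(m1, m2, m3, sign1=1):
    """K = (m1 + m2 + m3) / (sign1*sqrt(m1) + sqrt(m2) + sqrt(m3))^2."""
    den = (sign1*math.sqrt(m1) + math.sqrt(m2) + math.sqrt(m3))**2
    return (m1 + m2 + m3) / den

# Brannen's neutrino masses in eV, with the minus sign on sqrt(m1)
m1, m2, m3 = 0.000383462480, 0.00891348724, 0.0507118044
print(koide_K(m1, m2, m3, sign1=-1))  # ~2/3 with the sign flip
print(m2**2 - m1**2)                  # ~7.930e-05 eV^2, the quoted splitting
print(m3**2 - m2**2)                  # ~2.492e-03 eV^2
```

Note that the squared-mass differences computed from these inputs do reproduce the quoted splittings, so at least those two lines are internally consistent.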
 
Last edited:
  • #187
ohwilleke said:
Also, the other bit of numerology with the analytical expressions of the lepton masses in terms of the fine structure constant and pi was interesting.
<Moderator's note: twitter link removed: too much advertising and inappropriate source.>

PDG for the tau mass is 1776.82 +/ 0.12 MeV

Koide's prediction for the tau mass is 1776.968921 +/- 0.000158 MeV

This formula predicts a tau mass of 1776.896635 MeV, which is about 0.07 MeV less than the Koide prediction

I looked closely at Mills and his "hydrino" paper. Mills is a fraudster. I assume a deliberate one. Elaborate one, too - you need to look rather closely to find blatant inconsistencies in his formulas, but when I found a place where he said "this quantity needs to be imaginary, so just insert 'i' multiplier here", it is a dead giveaway. No actual honest scientist would ever do that. If by the logic of your theory something has to be imaginary, it must come out imaginary from the math. Inserting multipliers where you need them is nonsense.

His mass formulas you link to are probably constructed by trying combinations of fine structure constant, pi, and various powers of them until a "match" is "found". E.g. multiplying by (1-alpha) fudges your result by ~0.9% down. Multiplying by sqrt(1-alpha) fudges your result by ~0.3% down. Divisions fudge it up, etc. This way a "formula" for any value may be constructed.
 
Last edited by a moderator:
  • #188
nikkkom said:
I looked closely at Mills and his "hydrino" paper. Mills is a fraudster. I assume a deliberate one. Elaborate one, too - you need to look rather closely to find blatant inconsistencies in his formulas, but when I found a place where he said "this quantity needs to be imaginary, so just insert 'i' multiplier here", it is a dead giveaway. No actual honest scientist would ever do that. If by the logic of your theory something has to be imaginary, it must come out imaginary from the math. Inserting multipliers where you need them is nonsense.

His mass formulas you link to are probably constructed by trying combinations of fine structure constant, pi, and various powers of them until a "match" is "found". E.g. multiplying by (1-alpha) fudges your result by ~0.9% down. Multiplying by sqrt(1-alpha) fudges your result by ~0.3% down. Divisions fudge it up, etc. This way a "formula" for any value may be constructed.

On further review, this is a 1998 formula from a rather disreputable source, but it may very well still hold.

I don't know anything about Mills personally, and honestly don't expect that his GUT theory is right. But, I think his lepton mass formulas are interesting even though they may very well be numerology and no more. Looking at ways that physical quantities can be closely approximated often adds insight, even if the phenomenological formula has no basis in underlying theory that has been established yet.

Even if the formula is nothing more than tinkering, the number of significant digits of agreement achieved with three fairly simple-looking formulas (part of which is a common factor for all three), with only one physical constant and only one common transcendental number, is still an admirable counterfeit.

It is also proof of concept that it is possible that a first principles formula that simple that did explain the quantities from a theoretical basis using only coupling constants could exist, even if it turns out that this isn't the one that is actually supported by a coherent theory. There are a great many quantities for which this is not possible even in principle.

Along the same lines, suppose that MOND is false and that we discover actual dark matter particles tomorrow. Any dark matter theory still needs to explain, by some other means, how it produces the very tight and simple phenomenological relationship between rotation curves and the distribution of baryonic matter in the universe. The counterfeit or trial-and-error hypothesis can shed light on some feature of the true theory that makes it work.
 
Last edited:
  • #189
suppose I give you this formula
proton/electron mass ratio = 3*(9/2)*(1/alpha - 1) - 1/3 = 1836.152655 using CODATA for alpha
= 1836.1526734 using 1/alpha = 137.036005 (very close to the average of CODATA and the neutron Compton wavelength experiments that underpin precision QED tests).

Can you say whether this might have a physical basis, or is it just a fluke? Is it possible to give probabilities for such and similar formulas?
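Spelling the arithmetic out (a Python sketch; the CODATA-style value of 1/alpha is my assumed input):

```python
# CODATA-style inverse fine-structure constant (assumed value)
alpha_inv = 137.035999084

ratio = 3 * (9/2) * (alpha_inv - 1) - 1/3
print(ratio)  # ~1836.152654, vs. measured 1836.15267389(17)
```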
 
Last edited:
  • #191
arivero said:
Hmm, we are going to complete a cycle, are we? Please remember that our interest in the Koide formula arose while examining different combinations of alpha and masses, in the thread https://www.physicsforums.com/threads/all-the-lepton-masses-from-g-pi-e.46055/

Are some of these relationships linked to the Koide formula? I cannot tell. Perhaps the most promising, to me, is the mass of the proton compared with the sum of the electron, muon and tau: three confined quarks vs three free leptons.
 
  • #192
Using numbers from 1 to 9 plus e, pi and alpha, five different operations (+-*/^) and the option to take square roots, we have at least 10 options per operation. Even taking into account that multiple expressions can have the same result you would expect more than one additional significant figure added per operation. I count 7 in the above calculation plus one initial value. We would expect that we can get 8 significant figures just by random chance. And, surprise (?), we get 8 significant figures agreeing with measurements.

##\frac{e^8-10}{\phi} \approx 1836.153015## - 6 significant figures (or 7 if we round) with just 3 operations.
 
  • Like
Likes dextercioby and Vanadium 50
  • #193
mfb said:
6 significant figures (or 7 if we round) with just 3 operations.

Ok, but relating two fundamental constants with simple numbers seems to be much more stringent, doesn't it.
 
  • #194
ftr said:
Ok, but relating two fundamental constants with simple numbers seems to be much more stringent, doesn't it.
You want the fine structure constant in?
##\displaystyle \frac{e^8-10(1+\alpha^2)}{\phi}\approx 1836.152686028##, an 8-digit approximation of 1836.15267389(17).

9 is not simpler than 8 and 10, an exponential is not very unnatural, and the golden ratio is always nice.
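For the record, the arithmetic (a Python sketch; the CODATA-style alpha is my assumed input):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio
alpha = 1 / 137.035999084      # fine-structure constant (assumed CODATA-style value)

approx = (math.e**8 - 10*(1 + alpha**2)) / phi
print(approx)  # ~1836.152686, vs. measured 1836.15267389(17)
```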
 
  • Like
Likes dextercioby
  • #196
BTW, does anybody know the whereabouts of Hans de Vries, or why he dropped out?
 
Last edited:
  • #197
ftr said:
Ok, still some expressions look simpler and more "natural" than others; see post #238

https://www.physicsforums.com/threads/all-the-lepton-masses-from-g-pi-e.46055/page-10

But anyway all this is useless unless backed up by a clear derivation.
I get 31.8 bits for 3*(9/2)*(1/alpha-1) -1/3 counting one bit for the 1 in "-1" and ld(5) for alpha. The approximation is good for 26.5 bits, worse than expected.
I get 33.8 bits for (e^8-10(1+alpha^2))/phi again counting the 1 as one bit and e and phi as ld(5). The approximation is good for 27.2 bits, similarly worse.
I get 20.7 bits for (e^8-10)/phi. The approximation is good for 22.4 bits.

The last one is the only one that beats the algorithm from @Hans de Vries you referenced. phi is too exotic? Okay, give it ld(20), then we are still at 22.7 bits for 22.4 bits, or equality.
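The accuracy side of this accounting can be spelled out (a Python sketch; "accuracy bits" here is just minus log2 of the relative error, which reproduces the numbers quoted above):

```python
import math

def accuracy_bits(approx, target):
    # bits of the target reproduced: -log2 of the relative error
    return -math.log2(abs(approx / target - 1))

target = 1836.15267389          # measured proton/electron mass ratio
alpha = 1 / 137.035999084
phi = (1 + math.sqrt(5)) / 2

bits_ftr = accuracy_bits(3*(9/2)*(1/alpha - 1) - 1/3, target)
bits_mfb = accuracy_bits((math.e**8 - 10)/phi, target)
print(f"{bits_ftr:.1f}")  # ~26.5 bits
print(f"{bits_mfb:.1f}")  # ~22.4 bits
```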
 
  • #198
Although the bit calculations can be close, there are other considerations. For example, the relation between the fundamental constants is very strong in my equation, i.e. one constant makes up the major bulk of the other (indicating possible physics); in yours, the constant only affects digits beyond the accuracy anyway, which is a very weak relation. Moreover, because of this, one constant is very sensitive to the accuracy chosen for the other (which varies somewhat experimentally), hence the accuracy problem in the bit analysis. Also, if you reverse the formula, mine looks good, yours looks like an ugly duckling :-p.
 
Last edited:
  • #199
"Origin of fermion generations from extended noncommutative geometry" by Hefu Yu and Bo-Qiang Ma not only claims to get three generations by extending a noncommutative standard model for one generation, but the Koide relation too.

I have not yet tried to follow their constructions, but here is some of what they say. They extend the usual spectral triple (A,H,D) to (A,H,D,J,gamma) (eqn 25). They also allow fields (?) to be quaternion-valued. They consider two sets of basis quaternions, I, J, K and I', J', K'. They have two conditions on the second set (eqns 87 and 88) which together imply a Koide-like relation (eqn 89). The moduli squared of I', J', K' show up in the mass matrices (eqns 96-99) and this implies Koide relations for each family (eqns 100-101). They acknowledge that the Koide relation is not perfect for the quarks but they think it is close enough.

There is definitely handwaving here. They have probably grafted something akin to the Foot vector condition, onto the noncommutative standard model, in a quite artificial way. But we can't be sure of that, without dissecting their argument more thoroughly.
 
  • Like
Likes arivero
  • #200
ftr said:
BTW, does anybody know the whereabouts of Hans de Vries, or why he dropped out?

Thanks ftr, I'm still there :smile:
 
  • Like
Likes ohwilleke and arivero
  • #201
Hans de Vries said:
Thanks ftr, I'm still there :smile:

Oh good, I hate losing unconventional talents. I see that you have been working very hard behind the scenes. Good luck and be strong :biggrin:
 
  • #202
ftr said:
I see that you have been working very hard behind the scenes .

Thanks, indeed, with many new insights.



Insights on the spinor level:

1) How to calculate all three spin-vectors
How to calculate all three spin-vectors ##s_x,~s_y## and ##s_z## of a spinor, and how to do so with a single matrix multiplication. The sum of the three vectors is the total spin ##s##: the precessing spin-1/2 pointer.

2) A second triplet of spinor rotation generators
These generators rotate the spinor in its local reference frame instead of in world coordinates. This uncovers the (infinitesimal) rigid-body aspect of field theory, with generators that rotate a spinor around its own three principal axes.

Insights on the fermion field level:

1) A single fermion field

The two light-like chiral components ##\xi_L## and ##\xi_R## each get two orthogonal polarization states, with the orientation of the states defined by spinors.
$$\mbox{Dirac field}~~
\left(\!
\begin{array}{c}
\xi_{_L} \\ \xi_{_R}
\end{array}
\!\right)
~~~~\Longrightarrow~~~~
\left(\!\!\!
\begin{array}{rc}
\xi_{_{L}} \\ \pm{\mathbf{\mathsf{i}}}_g\,\xi_{_{L}} \\ \pm~~\,\xi_{_{R}} \\ \pm{\mathbf{\mathsf{i}}}_g\,\xi_{_{R}}
\end{array}
\!\!\right)
~~\mbox{Unified Fermion field}$$

2) A Standard Model fermion generator.
All standard model fermions, three generations of leptons and quarks and their anti-particles, are the eigenvectors of a single generator with only the charge and its sign as input. All fermions obtained this way possess all the right electroweak properties, corresponding to a ##\sin^2\theta_w## of 0.25.

3) A single electroweak fermion Lagrangian.
The many different electroweak-fermion pieces of the Lagrangian can be replaced by:
$$\mathcal{L} ~~=~~ \bar{\psi}\,\check{m}\big(\,\gamma^\mu_{_0}\partial_\mu+\mathbf{U}-\check{m}\,\big)\,\psi,~~~~~~~~
\mathbf{U} ~=~\tfrac{\,g'}{\,2\,}\gamma^\mu_{_o}\gamma^5_{_o}Z_\mu + \tfrac{g}{2}\gamma^\mu_{_1}A_\mu + \tfrac{g}{2}\gamma^\mu_{_2}W^1_\mu + \tfrac{g}{2}\gamma^\mu_{_3}\gamma^5_{_o}W^2_\mu$$

4) A single bilinear field matrix
This matrix contains all bilinear field components as well as all source currents for all electroweak bosons. The matrix is calculated with a single matrix multiplication.

Insights on the electroweak boson level.

1) The fundamental representation of the electromagnetic field.
This representation uses the operator fields acting on the fermion field:
$$\begin{array}{lrcl}
\mbox{mass dimension 1:}~~~~ & \mathbf{A} &=& \gamma^\mu A_\mu \\
\mbox{mass dimension 2:}~~~~ & \mathbf{F} &=& \vec{K}\cdot\vec{E}-\vec{J}\cdot\vec{B} \\
\mbox{mass dimension 3:}~~~~ & \mathbf{J}\, &=& \gamma^\mu~j_\mu \\
\end{array}$$
We now obtain the fundamental covariant description of the electromagnetic field:
$$/\!\!\! \partial\mathbf{A} = \mathbf{F}~~~~~~ ~~~/\!\!\!\partial\mathbf{F} = \mathbf{J}$$
In the first step we have applied the conservation law ##\partial_\mu A^\mu\!=\!0## on the diagonal and the second step involves all four of Maxwell's laws, the inhomogeneous ##\partial_\mu F^{\mu\nu}\!=\!j^\nu## as well as the homogeneous ##~\partial_\mu\! *\!\!F^{\mu\nu}\!=\!0##.

2) A single electroweak boson field
As given in the Lagrangian above. Note that each electroweak boson has its own set of gamma matrices.
$$\mathbf{U} ~=~\tfrac{\,g'}{\,2\,}\gamma^\mu_{_o}\gamma^5_{_o}Z_\mu + \tfrac{g}{2}\gamma^\mu_{_1}A_\mu + \tfrac{g}{2}\gamma^\mu_{_2}W^1_\mu + \tfrac{g}{2}\gamma^\mu_{_3}\gamma^5_{_o}W^2_\mu$$
The documents, Mathematica files and the stand-alone MATLAB executable are available here, but look at the video for the best introduction.
 
Last edited:
  • #203
A paper today on "String Landscape and Fermion Masses". They guess at the statistical distribution of fermion masses in string vacua, and then argue that the standard model fermions satisfy their hypothesis. Normally I don't have much interest in papers like this, since they prove so little. I would much rather see progress in calculating masses for individual vacua.

However, there's an oddity here. They model the distribution of quark masses, and then the distribution of charged lepton masses, using a two-parameter "Weibull distribution". The parameters are a shape parameter k and a (mass) scale parameter l. They find (equation 3.6), "surprisingly", that the two distributions have the same shape parameter, to three decimal places, so differing only by mass scale. Is this circumstantial evidence that a similar mechanism (e.g. @arivero's waterfall) is behind both sets of yukawas?
 
  • #204
mitchell porter said:
using a two-parameter "Weibull distribution". The parameters are a shape parameter k and a (mass) scale parameter l. They find (equation 3.6), "surprisingly", that the two distributions have the same shape parameter, to three decimal places, so differing only by mass scale. Is this circumstantial evidence that a similar mechanism (e.g. @arivero's waterfall) is behind both sets of yukawas?

Hmm, the main property of the Weibull distribution is that you can integrate it, so perhaps they are just seeing some exponential fitting. As for the coincidence of shape... how are they "fitting" the distribution anyway? Maximum likelihood? For a sample of six points?
 
  • #205
Hmm, I cannot reproduce the fit with scipy, perhaps because of precision or rounding errors. I have no idea how the authors are using the chi-square test and p-values in the paper, so I go with the KS test instead.

Code:
Python 3.6.5 (default, Mar 31 2018, 19:45:04) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy.stats as s
>>> import numpy as np
>>> def printStats(data,fit):
...     nnlf=s.weibull_min.nnlf(fit,np.array(data))
...     ks=s.stats.kstest(np.array(data),'weibull_min',fit)
...     print("Fit:",fit)
...     print("negloglikelihood",nnlf)
...     print(ks)
...
>>> data=[2.3,4.8,95,1275,4180,173210]
>>> printStats(data,s.weibull_min.fit(data, floc=0))
Fit: (0.26861598701150763, 0, 2288.475995797873)
negloglikelihood 51.591787735494115
KstestResult(statistic=0.15963622669415056, pvalue=0.9979920390593924)
>>> data=[0.511,106,1777]
>>> printStats(data,s.weibull_min.fit(data, floc=0))
Fit: (0.37366611506161873, 0, 229.48782534013557)
negloglikelihood 19.233771988350043
KstestResult(statistic=0.23629696537671507, pvalue=0.996122995979272)
>>>

Anyway, even if scipy adjusts to 0.373 for leptons, their fit is not bad either; let's fix the shape parameter and see:
Code:
>>> printStats(data,s.weibull_min.fit(data, floc=0,f0=0.26861598701150763))
Fit: (0.26861598701150763, 0, 163.62855309410182)
negloglikelihood 19.44374499168725
KstestResult(statistic=0.25597858377056465, pvalue=0.9893658166203932)

The fit in this case reproduces the scale they found, 194. I wonder if their fitter takes as starting point the value of the previous fit, or something like that. Also, if we add the three leptons to the quark sector, so that
data=[0.511,106,1777,2.3,4.8,95,1275,4180,173210]
the fit is still
Code:
Fit: (0.2698428583536703, 0, 1156.8564935786583)
negloglikelihood 71.49265190220518
KstestResult(statistic=0.14728900912921583, pvalue=0.9897758037009418)

Thus telling us that the same random distribution can of course generate values for the lepton sector. Unsurprising.

Amusingly, we can indeed find the same k parameter in the two fits if we allow the origin of the quark sector to move:
Code:
>>> data=[2.3,4.8,95,1275,4180,173210]
>>> printStats(data,s.weibull_min.fit(data))
Fit: (0.37359275206555403, 2.2999999999999994, 39837.607589227395)
negloglikelihood 30.744667740180212
KstestResult(statistic=0.48342279946216715, pvalue=0.08187510735420012)

but then the same freedom in the lepton sector leads to a different fit there too.
 
  • #206
arivero said:
perhaps they are just seeing some exponential fitting
The paradigm of Tye et al is something like: We consider a landscape of string vacua in which vacua are indexed by fluxes (and other properties), and we suppose that the flux values are sampled from a uniform distribution. But the yukawas depend on the fluxes in an "anti-natural" way (Lubos's word), such that uniformly distributed fluxes translate into Weibull-distributed yukawas (distribution divergently peaked at zero). "Related distributions" at Wikipedia shows how a uniformly distributed variable can be mapped to an exponentially distributed variable, and then to a Weibull distribution.
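The uniform-to-Weibull map mentioned above can be made concrete with inverse-transform sampling: a uniform variable becomes exponential via -ln(u), and raising to the power 1/k gives Weibull. A minimal sketch, where the shape and scale values are just illustrative, loosely in the ballpark of the quark fit discussed earlier:

```python
import math
import random

# Inverse-transform sampling: if u is Uniform(0,1), then -ln(u) is
# exponentially distributed, and x = lam * (-ln u)**(1/k) is
# Weibull(k, lam), since P(X <= x) = 1 - exp(-(x/lam)**k).
def weibull_from_uniform(u, k, lam):
    return lam * (-math.log(u)) ** (1.0 / k)

random.seed(0)
# illustrative shape/scale, roughly like the quark-sector fit
k, lam = 0.27, 2288.0
sample = [weibull_from_uniform(1.0 - random.random(), k, lam)  # u in (0,1]
          for _ in range(100_000)]

# Sanity check against the closed-form CDF: F(median) = 1/2 exactly
median = lam * math.log(2) ** (1.0 / k)
frac_below = sum(x < median for x in sample) / len(sample)
print(round(frac_below, 2))  # close to 0.5
```

With k well below 1 the density diverges at zero, which is exactly the bias towards small yukawas that the paper exploits.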

Optimistically, we could construct a refined version of the paradigm in which we aim to get the sbootstrap from an SO(32) flux mini-landscape, and then the Koide waterfall ansatz from that. In section 3 of Tye et al, they talk about the (unspecified) functional dependence of yukawas on fluxes. One could add an intermediate dependence e.g. on Brannen's Koide parameters (phase and mass scale), and the number of sequentially chained Koide triplets. By treating the Brannen parameters as random variables that depend upon randomly distributed flux values, one can then study how the resulting masses are distributed, and what kind of dependency on the fluxes would make Tye et al's scenario work out.

(It is still mysterious why the lepton "waterfall", consisting of just one triplet, and the quark waterfall, consisting of four triplets, would have the same Weibull shape, but this might be clarified with further study. Since Weibull involves a bias towards low values, one would be looking at how the low end of the waterfall behaves. Is the Weibull fit so loose that a Brannen phase of 2/9, as for e,mu,tau, and a phase of 2/3, as for b,c,s, produce roughly the same behavior? Or maybe there's something about applying that Georgi-Jarlskog-like factor of 3 to both Brannen phase and Brannen mass, at the same time, which preserves Weibull shape? These are concrete questions that could actually be answered.)
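For context, Brannen's parametrization writes the square roots of the three masses of a triplet as r_n = μ(1 + √2 cos(δ + 2πn/3)). The Koide ratio then comes out 2/3 identically, for any phase δ and any scale μ, because the three cosines sum to zero and their squares sum to 3/2. A quick numeric check (2/9 and 2/3 are the lepton and quark-triplet phases mentioned above; μ = 1 is arbitrary since the ratio is scale-invariant):

```python
import math

def brannen_roots(mu, delta):
    # square roots of the three masses in Brannen's parametrization:
    # r_n = mu * (1 + sqrt(2) * cos(delta + 2*pi*n/3)),  n = 0, 1, 2
    return [mu * (1 + math.sqrt(2) * math.cos(delta + 2 * math.pi * n / 3))
            for n in range(3)]

def koide_ratio(roots):
    # Q = (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2,
    # keeping the signs of the square roots
    return sum(r * r for r in roots) / sum(roots) ** 2

# sum of cosines = 0 and sum of squared cosines = 3/2, hence
# sum(r) = 3*mu and sum(r*r) = 6*mu**2, so Q = 2/3 for every delta
for delta in (2 / 9, 2 / 3, 0.1234):
    print(delta, koide_ratio(brannen_roots(1.0, delta)))  # all 0.6666...
```

So the phase only redistributes mass within a triplet; any constraint relating the lepton and quark fits has to come from the masses themselves, not from the Koide relation alone.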
 
Last edited:
  • #207
mitchell porter said:
It is still mysterious why the lepton "waterfall", consisting of just one triplet, and the quark waterfall, consisting of four triplets, would have the same Weibull shape, but this might be clarified with further study

I am disappointed that the fit algorithm in scipy fails to produce the same shape... I wonder how they are doing the fit, whether with R or some manual code of different precision. The use of chi-square points to some ad-hoc code; after all, the point of the Weibull distribution is that it has an exact and very simple cdf, ##F(x) = 1 - e^{-(x/\lambda)^k}##, and then it is very easy to calculate matchings even by hand. On the other hand, that could mean that they have found some analytic result and misinterpreted it as a probabilistic parameter.
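As an illustration of how easy the matching is with the closed-form cdf: the KS statistic that scipy reported for the lepton fit in post #205 can be reproduced by hand in a few lines (fit parameters copied from that post):

```python
import math

# Weibull CDF in closed form: F(x) = 1 - exp(-(x/lam)**k)
def weibull_cdf(x, k, lam):
    return 1.0 - math.exp(-((x / lam) ** k))

# charged lepton masses (MeV) and the scipy fit from post #205
data = sorted([0.511, 106, 1777])
k, lam = 0.37366611506161873, 229.48782534013557

# KS statistic: largest gap between the empirical CDF (a step
# function) and the fitted CDF, checked on both sides of each step
n = len(data)
D = max(max(abs(weibull_cdf(x, k, lam) - i / n),
            abs(weibull_cdf(x, k, lam) - (i + 1) / n))
        for i, x in enumerate(data))
print(D)  # matches the KstestResult statistic, ~0.23630
```

The largest gap sits at the electron mass, where the fitted CDF has only reached ~0.097 while the empirical CDF jumps to 1/3.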

The paper was not designed, I think, to give exact proportions, but to convey the message that even if you claim that yukawas are random, your theory should tell you what the random distribution is, and statistical tests for the likeliness of "living in this vacuum" can incorporate the information of the actual values of the yukawa couplings. And indeed it is a good counter against the naive concept of equating naturalness with likeliness.
 
  • #208
I now suspect that they simply decided a priori that the shape should be the same. In the introduction to part 3, they say "Once dynamics introduces a new scale... it will fix l, while k is unchanged"; and in 3.2 they say colored and colorless particles fit this paradigm. So I think they just did some kind of joint fit, deliberately assuming (or aiming for) a common k value.
 
  • #209
mitchell porter said:
I now suspect that they simply decided a priori that the shape should be the same. In the introduction to part 3, they say "Once dynamics introduces a new scale... it will fix l, while k is unchanged"; and in 3.2 they say colored and colorless particles fit this paradigm. So I think they just did some kind of joint fit, deliberately assuming (or aiming for) a common k value.

That was my suspicion too, as I can at least get the same k if I do the fit with quarks first... but then it is very puzzling that they claim chi^2=1 for leptons in 3.6. Again, I have no idea how they calculate the chi coefficient.
 
  • #210
A remark: for the Anderson-Darling test statistic, the fit fixing k=0.269 seems to have a better p-value in the lepton sector than the direct fit from scipy.
Code:
>>> import scipy.stats as s
>>> data=[0.511,106,1777]
>>> fit=(0.37366611506161873, 0, 229.48782534013557)
>>> from skgof import ks_test, cvm_test, ad_test
>>> w=s.weibull_min(*fit)
>>> ad_test(data,w)
GofResult(statistic=0.25987976933243573, pvalue=0.9716940635456661)
>>> fit=(0.26861598701150763, 0, 163.62855309410182)
>>> w=s.weibull_min(*fit)
>>> ad_test(data,w)
GofResult(statistic=0.22716618686611634, pvalue=0.9893423546344761)
So the question of how the coincidence happened depends on knowing how they are optimizing the parameters.
 