All the lepton masses from G, pi, e

  • Thread starter: arivero
  • Tags: Lepton Pi
In summary, the conversation revolved around using various equations and formulae to approximate the values of fundamental constants such as the Planck Mass and the fine structure constant. The discussion also delved into the possibility of using these equations to predict the masses of leptons and other particles. Some participants raised concerns about the validity of using such numerical relations, while others argued that it could be a useful tool for remembering precise values.

Multiple poll: Check all you agree.

  • Logarithms of lepton mass quotients should be pursued.

    Votes: 21 26.6%
  • Alpha calculation from series expansion should be pursued

    Votes: 19 24.1%
  • We should look for more empirical relationships

    Votes: 24 30.4%
  • Pythagorean triples approach should be pursued.

    Votes: 21 26.6%
  • Quotients from distance radii should be investigated

    Votes: 16 20.3%
  • The estimate of magnetic anomalous moment should be investigated.

    Votes: 24 30.4%
  • The estimate of Weinberg angle should be investigated.

    Votes: 18 22.8%
  • Jay R. Yablon's theory should be investigated.

    Votes: 15 19.0%
  • I support the efforts in this thread.

    Votes: 43 54.4%
  • I think the effort in this thread is not worthwhile.

    Votes: 28 35.4%

  • Total voters
    79
  • #106
Baker and Johnson actually have a whole forest of papers.

Following this review of the classic bibliography, I also came across the formula
[tex]
{M_0^2 \over M_V^2}= {3 \over 2 \pi} \alpha
[/tex]
which is world-famous, but of which I was unaware. Regrettably it concerns a single charged scalar particle, not a fermion, and the quotient against the vector boson mass comes squared. The formula was found in
Radiative Corrections as the Origin of Spontaneous Symmetry Breaking by Sidney Coleman and Erick Weinberg, http://prola.aps.org/abstract/PRD/v7/i6/p1888_1 They even have a generalisation to SU(3)xU(1).

Incidentally, one of these authors was contacted about our preprint 0503104; here is his statement: "Given the current state of knowledge in the field, speculations concerning approximate numerical coincidences such as the ones you discuss do not constitute the degree of substantial new physics that is required for publication."
 
  • #107
Hans, I am impressed. It's too many digits for the accuracy.

If the standard model is an effective field theory from a deeper level, then the fine structure constant should be calculated from a series in that deeper (unified) level.

One supposes that such a unified field theory would be extremely strongly coupled (otherwise it'd be visible), and that our usual perturbation methods would fail, and therefore that calculations would be impossible. However, there is a way out of this.

Our usual experience with bound states is that when two particles are bound together, we expect the bound state to have a higher mass than either of the particles contributing to it. Of course the total mass is a little less than the sum of the masses, [tex]E=mc^2[/tex] and all that. But if the particles are extremely strongly bound, then the mass of the bound state could be negligible compared to the mass of either free particle.

For example, the mass of a free up quark is unknown, but all indications are that it would require a lot of energy to make one, so its mass should be extremely large. Present experimental limits say it should be much larger than the mass of a proton.

Now doing quantum mechanics in such a nonperturbational region might seem impossible, but this is not necessarily the case. In fact, infinite potential wells make for simple quantum mechanics problems. Perturbation theory may not be needed or appropriate.

Looking at QFT from the position eigenstate representation point of view, the creation operators for elementary particles have to work in infinitesimal regions of space. Suppose we want to do physics in that tiny region. The natural thing we'll do is to use a Gaussian centered at the position.

One way of representing the potential energy between two objects bound by extreme energies is to suppose that they each stress space-time (in the general relativistic stress-energy manner), but in ways that are complementary. Thus the sum makes for less stress to space-time than either of the separate particles.

In that case, if we represent the stress of each particle with a Gaussian, we end up deriving a potential energy that, for very low energies, works out as proportional to the square of distance. This is the classic harmonic oscillator problem, and the solution in QM is well known without any need for perturbation theory.
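As a quick numerical illustration of that last claim, here is a minimal sketch, assuming two opposite-sign, unit-width Gaussian stress profiles and taking the potential energy as the integral of the squared total stress; the ratio E(d)/d² should come out roughly constant for small separations d, i.e. a harmonic potential:

[code]
import numpy as np

x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]

def energy(d):
    """Integrated squared stress of two complementary Gaussians separated by d."""
    total = np.exp(-(x - d/2)**2 / 2.0) - np.exp(-(x + d/2)**2 / 2.0)
    return np.sum(total**2) * dx

for d in (0.05, 0.1, 0.2, 0.4):
    print(d, energy(d) / d**2)   # roughly constant, i.e. E(d) ~ k d^2 for small d
[/code]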

Now your series for the fine structure constant used a Gaussian form. Coincidence? I doubt it.

My suspicion is that this is a clue. My guess is that there is a unified field theory with equal coupling constants for everything, and that the strong force is strong because it has fewer coupling constants multiplied together in it. That would have to do with the factor in the exponential. This all has to do with my bizarre belief that even the leptons are composite particles.

Carl Brannen

As an aside, I once decided to see if the sum of inverses of cubes [tex]\zeta(3) = \sum 1/n^3[/tex] could be summed similarly to how the sum of inverses of squares or fourth powers could be summed. I wrote a C++ program that computed the sum out to 5000 decimal places (which requires a lot of elementary mathematics as the series converges very very slowly), and then did various manipulations on it to search for a pattern.

The most useful thing to know, when trying to determine whether a high-precision number is rational, is the sequence obtained by repeatedly taking the fractional part of an approximation and inverting it. It's been over a decade, but I seem to recall that the name for this is the "continued fraction expansion".
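For what it's worth, here is a minimal sketch of that trick in Python (the test values are purely illustrative, not taken from the thread): a very large term in the expansion signals that the number is close to a simple rational.

[code]
import math
from fractions import Fraction

def contfrac(x, nterms=8, tol=1e-12):
    """Repeatedly take the integer part and invert the fractional part."""
    terms = []
    for _ in range(nterms):
        a = math.floor(x)
        terms.append(a)
        x -= a
        if x < tol:          # nothing left: the number is (numerically) rational
            break
        x = 1.0 / x
    return terms

print(contfrac(0.1428571425))        # [0, 7, huge, ...]  -> essentially 1/7
print(contfrac(math.sqrt(2) - 1))    # [0, 2, 2, 2, ...]  -> no huge term, not rational
print(Fraction(0.1428571425).limit_denominator(1000))   # recovers 1/7
[/code]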
 
  • #108
particle spectrum

Attached (!) I have drawn the whole elementary particle spectrum on a logarithmic scale. Honoring Yablon, I have also put a 1/137 line between the tau and the electroweak vacuum.

There are four clearly distinguished zones, usually called the electromagnetic breaking, the chiral breaking, the hadronic scale (or SU(3) gap) and the electroweak breaking scale, so I have encircled them with green rectangles.

(EDITED: If you are using Microsoft Internet Explorer, you will need to expand the jpg to nearly full screen.)
 

Attachments

  • partspectrum.jpg (16 KB)
  • #109
Checking in

Hi Alejandro:

Thanks for the recognition.

While I have stayed off the board for a while, I have not been inactive. I am working on a paper with a well-known nuclear theorist in Europe. I won't get into details yet, but I think you all will find it interesting once we are ready to "go public."



Jay.
 
  • #110
Still more references, if only for a future observer/reader of the thread...

Stephen L. Adler (of anomaly fame) pursued, during the seventies, an eigenvalue condition in order to pinpoint the value of the fine structure constant:

Short-Distance Behavior of Quantum Electrodynamics and an Eigenvalue Condition for alpha, http://prola.aps.org/abstract/PRD/v5/i12/p3021_1 (http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D5,3021 )

PS: especially for new readers, please remember we are attempting an at-a-glance view of this thread (and other results) in the wiki bakery http://www.physcomments.org/wiki/index.php?title=Bakery:HdV
 
  • #111
Koide formula

We missed this one. It is a published formula, its accuracy is one of the best in the thread, and a very short review can be read online as hep-ph/9603369:
[tex]m_e+m_\mu+m_\tau=\frac 23 (\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2[/tex]

Or, if you prefer: the vector whose components are the square roots of the leptonic masses makes an angle of 45 degrees with the vector (1,1,1). If you use experimental input, the angle is 45.0003 ± 0.0012 degrees, according to Esposito and Santorelli.
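A two-line numerical check, assuming present-day charged-lepton masses (slightly different values were in use when this was posted, so the last digits will differ):

[code]
import math

me, mmu, mtau = 0.5109989, 105.65837, 1776.86      # MeV, assumed PDG-style values
roots = [math.sqrt(m) for m in (me, mmu, mtau)]

print("Koide ratio :", (me + mmu + mtau) / sum(roots)**2)      # very close to 2/3
cos_t = sum(roots) / (math.sqrt(3) * math.sqrt(me + mmu + mtau))
print("angle (deg) :", math.degrees(math.acos(cos_t)))         # very close to 45
[/code]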

Jay may be especially interested in this property, because it derives almost trivially from asking for trace preservation of both the mass matrix M and its square root (the 45 degrees being the maximum possible aperture without negative eigenvalues in the square root of M).

EDITED: http://ccdb3fs.kek.jp/cgi-bin/img_index?198912199 is the first paper from Koide (even before the improved values of the tau?). From other references, it seems that the research was framed in the general context of "democratic family symmetry" (a degenerate mass matrix filled with ones, so that when rotating to eigenvectors only the third generation is naturally massive).
 
  • #112
Here's another lepton mass formula, one that gives all the mass ratios, and involves the Cabibbo angle:

Consider the matrix:

[tex]M = \left( \begin{array}{ccc}
\sqrt{2} & e^{i\delta} & e^{-i\delta} \\
e^{-i\delta} & \sqrt{2} & e^{i\delta} \\
e^{i\delta} & e^{-i\delta} & \sqrt{2} \\ \end{array} \right) [/tex]

where [tex]\delta = 0.222[/tex] radians (12.72 degrees) is the Cabibbo angle. Let

[tex] r = e^{2 i \pi/3}, s = 1/r.[/tex]

Then the eigenvectors and eigenvalues of M are approximately:

[tex](1,r,s), \sqrt{m_e / 157}[/tex],
[tex](1,s,r), \sqrt{m_\mu / 157}[/tex],
[tex](1,1,1),\sqrt{m_\tau / 157}[/tex],

with the lepton masses in MeV.

Note that the sum of the eigenvalues of the matrix M is [tex]3 \sqrt{2}[/tex], and the square of this is 18. And [tex]M^2[/tex] has diagonal entries of 4, so a trace of 12. Since 12/18 = 2/3, the relationship between the lepton masses and their square roots already discussed on this thread is automatically provided by the choice of the diagonal value as [tex]\sqrt{2}[/tex].
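A minimal numerical check of this construction, assuming PDG-style lepton masses: with the rounded δ = 0.222 quoted here the ratios land within about 1% of the observed 206.77 and 3477.4, and the refined δ worked out in the following posts reproduces them far more closely.

[code]
import numpy as np

delta = 0.222                                   # radians, as quoted in this post
w = np.exp(1j * delta)
M = np.array([[np.sqrt(2), w,          np.conj(w)],
              [np.conj(w), np.sqrt(2), w         ],
              [w,          np.conj(w), np.sqrt(2)]])

lam = np.linalg.eigvalsh(M)                     # M is Hermitian: real eigenvalues, ascending
print("eigenvalues :", lam)
print("m_mu / m_e  :", (lam[1] / lam[0])**2)    # compare with 206.77
print("m_tau / m_e :", (lam[2] / lam[0])**2)    # compare with 3477.4
[/code]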

The reason for using the Cabibbo angle is that the Cabibbo angle gives the difference between the quarks as mass eigenstates and the quarks as weak force eigenstates. If you believe, as I do, that the quarks and electrons are made from the same subparticles (which I call binons and which correspond to the idempotents of a Clifford algebra), then it is natural that the Cabibbo angle enters into the mass matrix for the leptons. Note that the Cabibbo angle is small enough that its sine is close to the angle.

Carl
 
  • #113
CarlB said:
Here's another lepton mass formula, one that gives all the mass ratios, and involves the Cabibbo angle:

Consider the matrix:

[tex]M = \left( \begin{array}{ccc}
\sqrt{2} & e^{i\delta} & e^{-i\delta} \\
e^{-i\delta} & \sqrt{2} & e^{i\delta} \\
e^{i\delta} & e^{-i\delta} & \sqrt{2} \\ \end{array} \right) [/tex]

where [tex]\delta = 0.222[/tex] radians (12.72 degrees) is the Cabibbo angle. Let

[tex] r = e^{2 i \pi/3}, s = 1/r.[/tex]

Then the eigenvectors and eigenvalues of M are approximately:

[tex](1,r,s), \sqrt{m_e / 157}[/tex],
[tex](1,s,r), \sqrt{m_\mu / 157}[/tex],
[tex](1,1,1),\sqrt{m_\tau / 157}[/tex],

with the lepton masses in MeV.

Carl

Carl, this is really very interesting.

If one takes 0.222222047168(465) for the Cabibbo angle, then you
get the following lepton mass ratios:

[tex]\frac{m_\mu}{m_e}\ =\ 206.7682838 (54)[/tex]

[tex]\frac{m_\tau}{m_e}\ =\ 3477.441653 (83)[/tex]

[tex]\frac{m_\tau}{m_\mu}\ =\ 16.818061210 (38)[/tex]

Which is well within the experimental range:

[tex]\frac{m_\mu}{m_e}\ =\ 206.7682838 (54)[/tex]

[tex]\frac{m_\tau}{m_e}\ =\ 3477.48 (57)[/tex]

[tex]\frac{m_\tau}{m_\mu}\ =\ 16.8183 (27)[/tex]

The first one is no surprise because we solved the Cabibbo angle
to get this result, but the second surely is! This means that your
formula is predictive to 5+ digits!

Furthermore, the values given also are exact for Koide's formula
which is due to the [itex]\sqrt 2[/itex]'s on the diagonal of your matrix as you said:

[tex]m_e+m_\mu+m_\tau=\frac{2}{3} (\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2[/tex]

[tex]1+206.7682838+3477.441653=\frac{2}{3} (\sqrt{1}+\sqrt{206.7682838}+\sqrt{3477.441653})^2[/tex]

Regards, Hans

P.S. The scale factor would be 156.9281952 (123)
 
  • #114
Hans de Vries said:
The first one is no surprise because we solved the Cabibbo angle
to get this result, but the second surely is! This means that your
formula is predictive to 5+ digits!
Hmm? Ah, you mean that you have used Carl's matrix to find a value for the Cabibbo angle, do you? It is actually very surprising to be able to find this angle from leptons; it is the mixing parameter ... of quarks!

Incidentally, the first model that originated Koide's formula also had a prediction for this angle; it was
[tex]
\tan \theta_c={\sqrt 3 (x_\mu - x_e) \over 2 x_\tau - x_\mu -x_e}
[/tex]
with [tex]x_l\equiv \sqrt{m_l}[/tex]. Ref: http://prola.aps.org/abstract/PRL/v47/i18/p1241_1 Note that the context was similar to Carl's argument, i.e. a composite model for quarks and leptons.
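Plugging assumed present-day lepton masses into this relation gives a quick feel for how close it lands to the Cabibbo angle (a sketch, not the original authors' fit):

[code]
import math

me, mmu, mtau = 0.5109989, 105.65837, 1776.86          # MeV, assumed values
xe, xmu, xtau = (math.sqrt(m) for m in (me, mmu, mtau))

tan_tc = math.sqrt(3) * (xmu - xe) / (2*xtau - xmu - xe)
print(tan_tc, math.degrees(math.atan(tan_tc)))          # ~0.226, i.e. ~12.7 degrees
[/code]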

Surely this result can be combined with Carl's matrix to get Koide's democratic mass matrix (http://prola.aps.org/abstract/PRD/v28/i1/p252_1, http://prola.aps.org/abstract/PRD/v39/i5/p1391_1),
which is more or less similar to Carl's but with [tex]\delta=0[/tex].
 
  • #115
CarlB said:
[tex]M = \left( \begin{array}{ccc}
\sqrt{2} & e^{i\delta} & e^{-i\delta} \\
e^{-i\delta} & \sqrt{2} & e^{i\delta} \\
e^{i\delta} & e^{-i\delta} & \sqrt{2} \\ \end{array} \right) [/tex]
As a matter of notation, I'd call such a matrix "[tex]A^\frac 12[/tex]" or something similar, given that its eigenvalues relate to square roots of masses.
 
  • #116
Keep up the good work! This is really fascinating reading.
 
  • #117
arivero said:
Hmm? Ah, you mean that you have used Carl's matrix to find a value for the Cabibbo angle, do you? It is actually very surprising to be able to find this angle from leptons; it is the mixing parameter ... of quarks!

I actually liked the close proximity to 2/9 of [itex]\delta\ = 0.2222220..[/itex], without
the necessity of the latter actually being the angle used to systematize
the quark eigenstate mixing observed in weak hadronic decays.

After all the goal is to reduce the number of arbitrary parameters :^)

Regards, Hans
 
  • #118
Hans, thanks for supplying more accuracy. The fit to the lepton masses can't be any better or worse than the 2/3 formula fit. From there, my guess is that the Cabibbo angle fit is at least partly based on chance. That is, if you look at the CKM matrix, the other numbers aren't showing up. Maybe that's due to suppression from the high quark masses, and the Cabibbo angle being close is due to the SU(3) associated with the up, down and strange quarks having roughly similar masses.

By the way, I wonder what happens to the fit if you use [tex]\theta_C = 2/9[/tex]. One wonders if there is a relationship between the masses of the up, down and strange, and the deviation of the formula from 2/9. If there were, then maybe the same relationship would clear up the rest of the CKM matrix.

Arivero, thanks for the references. I'm not associated with a university, so if it doesn't show up on a google search I have to drive over to the University of Washington to look it up. I'll go by there this afternoon, I can hardly wait. And you're right that the formula isn't really any more predictive than the 2/3 formula, except for the Cabibbo angle coincidence.

If one assumes that the leptons and quarks, along with their various families and colors, are made from various combinations of just two subparticles each of which come in three equivalent colors, which I've been calling the [tex]|e_r>[/tex], [tex]|e_g>[/tex], [tex]|e_b>[/tex], [tex]|\nu_r>[/tex], [tex]|\nu_g>[/tex], [tex]|\nu_b>[/tex], then the mass matrix shown is a result of a branching ratio for interactions of the form:

[tex]|e_r\rangle \to |e_r\rangle[/tex] 50%
[tex]|e_r\rangle \to |e_g\rangle[/tex] 25%
[tex]|e_r\rangle \to |e_b\rangle[/tex] 25% (and cyclic in r, g, b)

with the Cabibbo angle providing the phases for the last two interactions. That is, a particle of a given color has a 50% chance of staying that same color, and 25% chances of switching to one of the other two colors. This has a direct interpretation in terms of angles, if you break it up into left and right handed parts.

I should mention what all this has to do with Higgs-free lepton masses.

Consider the Feynman diagrams (in the momentum representation) where each vertex has only two propagators, a massless electron propagator coming in and a massless electron propagator coming out, and a vertex value of [tex]m_e[/tex]. When you add up this set of diagrams, the result is just the usual propagator for the electron with mass. Feynman's comment on this (a footnote in his book, "QED: The Strange Theory of Light and Matter") is that "nobody knows what this means". Well, the reason that no one knows what it means is that these vertices can't be derived from a Lorentz symmetric Lagrangian.

But what the above comment does show is that it is possible to remove the Higgs from the standard model (along with all those parameters that go with it), if you are willing to assume Feynman diagrams that don't come from energy conservation principles.

You can take the same idea further by breaking the electron into left and right handed parts, and then assuming that the Feynman diagrams always swap a left handed electron traveling in one direction with a right handed electron traveling in the opposite direction. This preserves spin and is similar to the old "zitterbewegung" model of the electron.

If you take the same idea still further, and assume the electron, muon and tau are linear combinations of subparticles, as described above, then the lepton mass formula is natural to associate with the Cabibbo angle. By the way, the standard model includes a Higgs boson to take away the momentum from the left handed electron reversing direction, but it's otherwise the same thing.

Mass is weird because it is only the mass interaction that allows left and right handed particles to interact. That's why the Higgs bosons are supposed to be spin-0, to allow the coupling of left and right handed fermions.

The zitterbewegung model was based on noticing that the only eigenvalues of the electron under the operator that measures electron velocity are +c and -c. So the assumption was that the electron moved always at speed c. Of course having the electron suddenly reverse direction in order for the zitterbewegung to work is a violation of conservation of momentum, but the fact is that you do get the usual electron propagator out of all this.

By the way, to do these calculations, it helps to choose a representation of the gamma matrices that diagonalizes particle/antiparticle and spin-z. That is, the four entries of the spinors will correspond to left-handed electron, right-handed electron, left-handed positron and right-handed positron. When you do this, the mass matrix will no longer be diagonal.

There are a lot of other things that come out of this, and most of them are quite noxious to physicists. For example, in order to have the left handed electron be composed of three subparticles, the subparticles have to carry fractional spin. Like fractional charge, the idea is that fractional spin is hidden from observation by the color force. Another example is that one tends to conclude that the speed of light is only an "effective" speed, and that the binons have to travel faster. Also, the bare binon interaction violates isospin but is also a very strong force.

Convincing the physics world to simultaneously accept so many hard things to swallow is essentially impossible.

Carl
 
  • #119
CarlB said:
By the way, I wonder what happens to the fit if you use [tex]\theta_C = 2/9[/tex].

I did try this yesterday, :^)

[tex]M^\frac{1}{2} = \left( \begin{array}{ccc}
\sqrt{2} & e^{i\delta} & e^{-i\delta} \\
e^{-i\delta} & \sqrt{2} & e^{i\delta} \\
e^{i\delta} & e^{-i\delta} & \sqrt{2} \\ \end{array} \right) [/tex]

With [itex]\delta \ =\ \frac{2}{9}[/itex]

one gets the following eigenvalues:

3.3650337331519900946141875218449
0.82054356652318236464766296162669
0.057063387444112687143215689158409

Squaring the ratios gives the following mass ratios:

3477.4728371045985323130012254729 = [itex]m_\tau /m_e[/itex]
206.77031597272938861255033931149 = [itex]m_\mu /m_e[/itex]
16.818046733377517056533911325581 = [itex]m_\tau /m_\mu[/itex]

Which are accurate to about one part per million.


Effectively two parameters are predicted. The first comes from
Koide's formula, which was brought to our attention by Alejandro
and which you reworked into your matrix. The second comes from
the parameter [itex]\delta[/itex], which one may hope to be either a simple
mathematical constant (2/9) or another SM parameter (Cabibbo).


Regards Hans
 
  • #120
Hans, I just realized that the branching ratios I gave, i.e. [tex]|e_r\rangle \to |e_r\rangle,\ |e_g\rangle,\ |e_b\rangle[/tex] at 50%, 25%, 25%, were probably incorrect. The reason is that when you compute a probability in QM you do it by taking
[tex]P = |\langle a|b\rangle|^2[/tex],
and that means that the mass matrix has to be included four times, not twice. That means that the branching ratios actually are:

[tex]|e_r\rangle \to |e_r\rangle[/tex] 66.67%
[tex]|e_r\rangle \to |e_g\rangle[/tex] 16.67%
[tex]|e_r\rangle \to |e_b\rangle[/tex] 16.67%

I realized that the numbers had to be wrong when I was working out the branching ratio from another point of view. This is somewhat speculative, so bear with me, please. Remember that the unusual thing about SU(3) color is that it appears to be a perfect symmetry...

Suppose that nature has a particle that travels at some fixed speed near the speed of light and exerts an extreme stress on space-time. The stress being very high says that the energy contained in the particle is very high. Nature wishes to reduce this stress by cancelling it.

Suppose that there is a hidden dimension, and the stress has sinusoidal dependence on that hidden dimension. That is, when you average the stress over the hidden dimension, you get zero, but when you integrate the square of the stress over the hidden dimension you get a number that corresponds to a very high energy per unit volume.

One way that nature could arrange to minimize the total energy of the particle is by ganging it up with another particle of the same sort, but arranging for the phase of the other particle, in the hidden dimension, to cancel the first. Thus you would compute the potential energy of the combined particles by first summing their stresses, and then integrating over all space.

Because the cancellation would depend on how far apart the particles were, this would result in a force. It turns out that the force that results from this sort of thing is, to lowest order, compatible with the usual assumptions about the color force. That is, the force is proportional to the distance separated.

The reason the calculation works out this way is quite generic. That is, any force based on minimization of a potential, with the potential having a nice rounded bottom, will be approximately harmonic. So this coincidence really doesn't mean much in and of itself.

If it weren't for the Pauli exclusion principle, nature could cancel the first with another particle traveling in the same direction. So nature instead reduces the stress by combining several particles traveling in somewhat different directions.

It turns out that if you analyze this problem from the point of view of Clifford algebra (that is, you assume that the stress take the form of a Clifford algebra), there is no way to get a low energy bound state out of two particles. Instead, you have to go to three. Details are beyond the scope of this post. So let's assume that nature combines three particles, with their phases (as determined by an angular offset in the hidden dimension) different by 120 degrees. The fact that 360 degrees divides equally into three multiples of 120 degrees gives the explanation for why SU(3) is a perfect symmetry, but the details are beyond the scope of this post.

Let's assume that the "center of mass" of the three particles travels in the +z direction. Let the red particle be offset in the +x direction, with the green and blue offset appropriately around the z-axis. The three particles travel on a cone centered around the z-axis.

Let the opening angle of the cone be [tex]\theta_b[/tex], where b stands for "binding angle". We expect that b will be as small as nature can get away with, but that it will be balanced by Fermi pressure. That is, if [tex]\theta_b[/tex] is too small, the probability of the three particles being near each other goes down, and this raises the total energy.

Then the unit velocity vectors for the three particles are:
[tex]V_R = (s_b , 0,c_b)[/tex]
[tex]V_G = (-s_b/2, s_b\sqrt{3}/2,c_b)[/tex]
[tex]V_B = (-s_b/2,-s_b\sqrt{3}/2,c_b)[/tex],
where [tex]s_b = \sin\theta_b[/tex] and [tex]c_b = \cos\theta_b[/tex].

If the speed of the individual particles is c', then the speed of the bound particle is [tex]c'\cos(\theta_b)[/tex]. If we assume that the bound particle is a handed electron, then this says that c' is faster than the speed of light by a factor of [tex]\sec(\theta_b)[/tex].

In other words, the subparticles would have to be tachyons that travel at some fixed speed faster than the speed of light. There is some experimental evidence for the existence of this sort of thing. It consists of observations of gamma ray bursts. EGRET observed a double gamma ray burst with a delay of about an hour between bursts. It's called 940217 in the literature and there are plenty of theoretical attempts (failures) to explain it.

If the gamma ray burst were caused by a collection of tachyonic particles all traveling in the same direction, then a single burst of tachyons would generate time separated bursts of gamma rays as the tachyons traveled through regular matter if the regular matter was distributed into two lumps.

I made the argument that binons might be an explanation for high energy cosmic rays at the PHENO2005 meeting. There are about a half dozen good reasons for expecting exactly the odd sort of behavior seen in the Centauro events from a binon. The reasoning is beyond the scope of this post. I will soon get around to putting up a copy of the argument on the PHENO2005 website.

Anyway, the above description of a tachyon bound state would also apply to the left handed electron. Assume that the red for the left handed electron is also oriented in the +x direction, but the bound particle is traveling in the -z direction.

Then the unit velocity vectors for the left-handed electron's three particles are:
[tex]V_R' = (s_b , 0,-c_b)[/tex]
[tex]V_G' = (-s_b/2, s_b\sqrt{3}/2,-c_b)[/tex]
[tex]V_B' = (-s_b/2,-s_b\sqrt{3}/2,-c_b)[/tex].

This means we can now compute the overlaps [tex]\langle R'|R\rangle[/tex], [tex]\langle G'|R\rangle[/tex] and [tex]\langle B'|R\rangle[/tex]:

[tex]\langle R'|R\rangle = s_b^2 - c_b^2[/tex],
[tex]\langle G'|R\rangle = -s_b^2/2 - c_b^2[/tex],
[tex]\langle B'|R\rangle = -s_b^2/2 - c_b^2[/tex],

The above give the cosines of the angles between the unit vectors. It's well known that probabilities in QM for things with an angle theta between them follow a law proportional to 1+cos(theta). After normalizing to unit probability, it turns out that the branching ratios do not depend on [tex]\theta_b[/tex]. (This is also what you'd expect from relativistic length contraction in the z direction.) Instead, you get exactly the branching ratios that just happen to fit the fermion mass matrix:

[tex]P_{R'R} = 2/3[/tex],
[tex]P_{G'R} = 1/6[/tex],
[tex]P_{B'R} = 1/6[/tex],
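The θ_b-independence is easy to confirm numerically; a minimal sketch of the computation described above (unit velocity vectors on the two opposite cones, weights proportional to 1 + cos of the angle between them):

[code]
import numpy as np

def branching(theta_b):
    s, c = np.sin(theta_b), np.cos(theta_b)
    R  = np.array([ s,            0.0,            c])
    Rp = np.array([ s,            0.0,           -c])   # primed set: same cone, opposite z
    Gp = np.array([-s/2,  s*np.sqrt(3)/2,        -c])
    Bp = np.array([-s/2, -s*np.sqrt(3)/2,        -c])
    w = np.array([1 + Rp @ R, 1 + Gp @ R, 1 + Bp @ R])  # the 1 + cos(theta) rule
    return w / w.sum()

for theta_b in (0.2, 0.7, 1.2):
    print(theta_b, branching(theta_b))    # always [2/3, 1/6, 1/6]
[/code]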

Carl
 
  • #121
CarlB said:
[tex]M = \left( \begin{array}{ccc}
\sqrt{2} & e^{i\delta} & e^{-i\delta} \\
e^{-i\delta} & \sqrt{2} & e^{i\delta} \\
e^{i\delta} & e^{-i\delta} & \sqrt{2} \\ \end{array} \right) [/tex]
where [tex]\delta = .222 = 12.72[/tex] degrees, is the Cabibbo angle. Let
The use of circulant or retrocirculant mixing matrices in order to implement a permutation symmetry between generations was advocated by Adler in the late nineties. He missed this formula, of course, because he wasn't interested in square roots, and besides he used the antidiagonal version of the matrix. Koide arrived at Carl's version a bit after Adler, in hep-ph/0005137, hoping to relate it to trimaximal mixing. Intriguingly, his parametrisation in this paper does not yield the Cabibbo angle.

A footnote in Weinberg's The Problem of Mass points to old attempts to derive Cabibbo's angle, not only from [tex]m_d/m_s[/tex] but also, in the pre-quark formalism, from [tex]m^2_\pi/2m^2_K[/tex]. Fascinating.

Carl could be interested in the preon models that motivated the formula back in the eighties. Some are available on the KEK preprint server. http://ccdb3fs.kek.jp/cgi-bin/img_index?8208021


Let me note here a personal communication from Dr. Koide.
Koide said:
The most difficult point in my mass formula is that
the charged lepton mass term is [tex]\Delta I =1/2[/tex] if we
consider mass generation by Higgs scalars, while if
we consider that mass term is given by a bilinear form,
it means that the term is [tex]\Delta I =1[/tex].
We need something beyond Standard Model.
 
  • #122
Dear Dr. Rivero, Thanks for the Koide references. I picked up photocopies of the ones you listed yesterday.

Could you please explain your interpretation of Koide's comment on [tex]\Delta I =1/2[/tex]? I have no idea what [tex]\Delta I[/tex] is.

The problem with making the fundamental fermions out of a combination of a fermion and a boson is that it requires a force that isn't described, and it would imply the existence of more particles that haven't been observed. But to make them out of fermions requires either that the spins cancel, which also has problems with too many particles, or, more naturally, that the spin angular momentum of the subparticles be h-bar/6. But that violates the usual understanding of the relation between spin and statistics. On the other hand, the hiding of fractional spin by a subcolor force does remind one of the hiding of fractional charge by the color force.

There is no way that I would have come to these very radical and speculative conclusions if I'd started out by looking for a method of unifying the elementary particles. Instead, I started out by trying to understand Lorentz symmetry. So if what I've written seems insane, please understand that you're not getting the story as it unfolded to me. Instead, I'm including here only conclusions, and only those conclusions that contribute to a connected understanding.

My difficulty is that no one has enough patience to go through the reasoning I went through to get where I am now.

Suppose you discovered the unified field theory, but the theory required a modification of every assumption of both quantum mechanics and relativity. Maybe you can convince a few people to listen to an alternative interpretation of the foundations of one of those two theories, but long before they have heard your alternatives for both theories they will have concluded you are insane.

And to show equivalence to the standard model requires not only rewriting the foundations for both theories, but then to make simple but non trivial calculations in the new theory. There is not a chance in hell that you will convince anyone to sit through this.

So what you do instead is you use your understanding to find relationships between the parameters of the standard model. After I get papers written up to correspond to talks I gave at the PHENO2005 and APSNW meetings (on centauro cosmic rays and binding calculations for binons, respectively), I will start working on the fine structure coupling constant. When I'm done, I hope to have a complete derivation of the standard model from first principles. It will be impossible to get anyone to read it, but it should allow me to derive relationships between standard model parameters, and who knows, maybe then. Or perhaps new observations of the Centauro events will match my predictions.

Carl
 
  • #123
CarlB said:
Could you please explain your interpretation of Koide's comment on [tex]\Delta I =1/2[/tex]? I have no idea what [tex]\Delta I[/tex] is.
I guess it is about Isospin.

which also has problems with too many particles,

Personally I dislike composite models because of this; it is not easy to see how to interpret them in a realistic way without generating excessive particles at the same time.

Instead, I started out by trying to understand Lorentz symmetry

A very honourable quest.

Suppose you discovered the unified field theory, but the theory required a modification of every assumption of both quantum mechanics and relativity.
I would try to make sure that both QM and relativity were recovered in the appropriate limit, as well as QFT. Otherwise I would need to reproduce all the known results, a lifelong work.

long before they have heard your alternatives for both theories they will have concluded you are insane.
This is a different problem, and this thread is long enough already without philosophical discussions. As you can see, we have focused the thread on mass predictions, including analysis of the accuracy and a full description of the input. I'd not like to lose focus. Perhaps in another thread.

So what you do instead is you use your understanding to find relationships between the parameters of the standard model.

Just a question here: are you claiming that your matrix above has been derived from your theory? If so, please give a link to a corresponding webpage here.
 
  • #124
CarlB said:
My difficulty is that no one has enough patience to go through the reasoning I went through to get where I am now.

Hi Carl

That's not for you to say. Maybe someone will have the patience. Could you give us a link, or explain some more of the fundamentals that you have in mind?

All the best
Kea
:smile:
 
  • #125
Dear Kea, Dr. Rivero is right, a long discussion really doesn't belong on this thread. At this time, I'm busily texing up a report for the PHENO2005 meeting. I should be done in a few days, and I'll link it in then.

Basically, the idea is to assume that one must generalize the Dirac propagator to form a Dirac equation that simultaneously contains multiple particles. One does this with a Clifford algebra. There are very few choices that one can make, and the choices pretty much amount to assumptions about the nature of the manifold that the Clifford algebra is defined on. (In other words, this is a "geometric algebra" variety of Clifford algebra as promoted by David Hestenes.)

Having a propagator that contains multiple particles, one derives the symmetries that the propagator must have, and sees whether these symmetries correspond to those observed in the natural world. Fitting the two together takes a lot of work.

Carl
 
  • #126
> I guess it is about Isospin.

I understand now, but I doubt I understand to the extent that Koide meant. From my point of view, the problem with mass is that it is the only thing in QM that couples opposite handedness.

> I would try to make sure that both QM and
> Relativity were got in the adequate limit, as
> well as QFT.

Fortunately, I don't have to worry about relativity, as several other physicists have already worked out the details. I'm not a general relativity expert, but since there are 5 physicists publishing papers on the theory, I expect that at least one of them got it right. The GR theory goes by various names that differ according to the researcher, but a common name that seems to be coming to the fore is "Euclidean relativity". A place to start reading is Hestenes's review article, Almeida's "4DO" version, or Montanus's famous work:
http://modelingnts.la.asu.edu/pdf/GTG.w.GC.FP.pdf
http://arxiv.org/abs/physics/0406026
Hans Montanus, (1997) "Arguments Against the General Theory of Relativity and For a Flat Alternative," Physics Essays, Vol. 10, No. 4, pp 666-679.

It probably doesn't help with the physics community that Ron Hatch (an engineer who worked on GPS, which depends on relativity calculations) supports the underlying modification of relativity (sometimes called "Lorentzian relativity"):
http://www.egtphysics.net/Gravity/Gravity.htm

> On the contrary I would need to reproduce all the
> known results, a lifelong work.

Not really a lifelong work. All you need to be able to reproduce are the underlying assumptions and, to the extent that you can, the parameters. From those, all the rest of the standard model follows. But it certainly isn't the sort of thing that a worker cranks out (or that a crank works out) in 3 months.

> Just a question here: are you claiming that
> your matrix above has been derived from
> your theory?

Yes. I gave a very brief talk on this at the University of Victoria at the APSNW2005 meeting two weeks ago. After I finish typing up the PHENO2005 paper from their meeting of May 1st, I'll type up the paper associated with this.

The mass matrix problem, after one assumes that the leptons are composites of (R,G,B), boils down to the question of how one calculates |<R|G>| as compared to |<R|R>|. If you make the assumption that R, G and B correspond to 120 degree rotations around a direction of propagation of the (handed) particle, then it is required that the mass matrix have the [tex]\sqrt{2}[/tex] down the diagonal (if the off diagonal elements are to have magnitude 1). The 120 degree assumption is not so radical as it might appear at first as it certainly explains why [tex]SU(3)_c[/tex] is a perfect symmetry.

The freedom to apply real world geometry to internal states of quantum particles comes from the use of the alternative version of relativity mentioned above. Within the assumptions of standard relativity, perfect Lorentz symmetry obtains, and this leads to the "no-go" theorems of Coleman-Mandula &c.

Carl
 
  • #127
CarlB said:
There are very few choices that one can make, and the choices pretty much amount to assumptions about the nature of the manifold that the Clifford algebra is defined on. (In other words, this is a "geometric algebra" variety of Clifford algebra as promoted by David Hestenes.)

Ahh. How interesting. My colleagues here are very interested in Hestenes' approach. I look forward to seeing your report.

Cheers
Kea :smile:
 
  • #128
CarlB said:
Dear Kea, Dr. Rivero is right, a long discussion really doesn't belong on this thread.

Perhaps I have exaggerated the point. Of course any direct justification of any of the formulae we are cataloguing is on-topic. It is just that, as I see it, it is better to wait for Carl to write down his note (next month? next couple of months?) and then to announce it here. It is not news to speculate theoretically about preons, because the original paper was already from a preon theory.

On my side, I am trying to think how an [tex]m^\frac 12[/tex] factor can appear in a modern theory. Classical non-relativistic theory needs such factors in order to go from energy to velocity, but in a relativistic theory we have the "c" highway.

On the other hand, the scattering matrix (whose poles are the masses) has two traditional expressions, as a function of energy or as a function of momentum, but we are in the same situation as above: when going relativistic, there are no m^1/2 factors around.
 
  • #129
I probably have another 7 days of texing before I release the first of those papers. It won't have quite enough to get the lepton masses, but that should follow a few weeks later.

By the way, I should note an interesting fact that I don't think has been mentioned here. If you set the angle [tex]\delta[/tex] to zero in that [tex]M^{1/2}[/tex] matrix, you won't get a result that satisfies the square root mass formula unless you conclude that the square root of the electron mass is negative.

In that sense, it's a good thing that [tex]\delta[/tex] is as large as it is, otherwise we'd have missed the square root mass formula completely.

On the other hand, one could also take the point of view that the fundamental mass matrix is [tex]M[/tex], in which case the existence of a convenient square root of it is just a coincidence.

Carl
 
  • #130
I don't know what to make of this.

The problem with the binons that I believe underlie the fundamental particles is that they have their spins aligned with their velocity vectors. That makes it necessary to have fractional spin or something similar. In the search for something similar, I found an unusual coincidence having to do with the structure of the eigenvectors for the fermion square root mass matrix, [tex]M^{1/2}[/tex].

The eigenvectors, with their eigenvalues, were:
[tex](1,1,1),\sqrt{m_\tau / 156.9281952 (123)}[/tex]
[tex](1,s,r), \sqrt{m_\mu / 156.9281952 (123)}[/tex]
[tex](1,r,s), \sqrt{m_e / 156.9281952 (123)}[/tex]

where [tex] r = e^{2 i \pi/3}, s = 1/r.[/tex].

The problem is that I have to have three subparticles whose spins somehow unite to produce [tex]\hbar/2[/tex]. The natural way of doing this would be to use the usual Clebsch-Gordan coefficients. What I'd like to point out here is that the Clebsch-Gordan coefficients have a pattern curiously similar to the pattern of the above eigenvectors.

First, consider three spin-1/2 particles:
[tex]\frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} = \frac{3}{2} + \frac{1}{2} + \frac{1}{2}' [/tex]
It's natural to associate one of the [tex]\frac{1}{2}[/tex]s with the combined spin-1/2, but it turns out that if you look at the states that have [tex]S_z = 1/2[/tex], they have exactly the same coefficients as the above three eigenvectors. That is:

Let
[tex]| R \rangle = |\frac{1}{2}\; \frac{1}{2}\rangle|\frac{1}{2}\; \frac{1}{2}\rangle|\frac{1}{2}\; \frac{-1}{2}\rangle[/tex]

[tex]| G \rangle = |\frac{1}{2}\; \frac{1}{2}\rangle|\frac{1}{2}\; \frac{-1}{2}\rangle|\frac{1}{2}\; \frac{1}{2}\rangle[/tex]

[tex]| B \rangle = |\frac{1}{2}\; \frac{-1}{2}\rangle|\frac{1}{2}\; \frac{1}{2}\rangle|\frac{1}{2}\; \frac{1}{2}\rangle[/tex]

Then

[tex]|\frac{3}{2} \;\frac{1}{2} \rangle = |R\rangle + |G\rangle + |B\rangle[/tex]

[tex]|\frac{1}{2} \;\frac{1}{2} \rangle = |R\rangle + r|G\rangle + s|B\rangle[/tex]

[tex]|\frac{1}{2} \;\frac{1}{2}' \rangle = |R\rangle + s|G\rangle + r|B\rangle[/tex]

In other words, the electron and muon families are alone, but there should be a spin-3/2 version of the tau. That is, according to this logic, there should be a spin-3/2 set of quarks and leptons.

Anyone seen them? I would think that the experimental results having to do with the number of lepton families based on counting neutrinos would apply, but maybe spin-3/2 neutrinos don't buy it. Or maybe it's a candidate for dark matter.
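The group-theory claim itself is easy to verify with a small matrix computation: build the total spin-squared operator for three spin-1/2 particles and apply it to the three combinations above (a sketch; the kets are normalised here, which does not affect the conclusion that (1,1,1) is pure spin 3/2 while the other two are pure spin 1/2):

[code]
import numpy as np

# single-particle spin operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def one_body(op, pos):
    """op acting on particle `pos` of three."""
    factors = [I2, I2, I2]
    factors[pos] = op
    return np.kron(np.kron(factors[0], factors[1]), factors[2])

S = [sum(one_body(op, i) for i in range(3)) for op in (sx, sy, sz)]
S2 = sum(Si @ Si for Si in S)                     # total S^2 on the 8-dim space

up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
ket = lambda a, b, c: np.kron(np.kron(a, b), c)
R, G, B = ket(up, up, dn), ket(up, dn, up), ket(dn, up, up)
r = np.exp(2j * np.pi / 3); s = np.conj(r)

for name, v in [("(1,1,1)", R + G + B), ("(1,r,s)", R + r*G + s*B), ("(1,s,r)", R + s*G + r*B)]:
    v = v / np.linalg.norm(v)
    print(name, np.vdot(v, S2 @ v).real)          # 3.75 = spin 3/2, 0.75 = spin 1/2
[/code]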

Carl

P.S. Part of the problem of quantum mechanics is the huge number of accidental symmetries. It's like navigating in a house of mirrors.
 
  • #131
CarlB said:
P.S. Part of the problem of quantum mechanics is the huge number of accidental symmetries. It's like navigating in a house of mirrors.

I cannot but agree. Take SU(3)_color. It was introduced in order to solve the problem of (anti)symmetrizing protons and pions. But at that time there were only three quark flavours, thus another, approximate, SU(3). So the people involved in colour needed to remark, in their preprints, that SU(3)_c was independent of any SU(3), SU(4), SU(5) or SU(6) of flavour.

About m^1/2: I am surprised nobody mentioned the harmonic oscillator frequency.
 
  • #132
Congratulations on your new arxiv posting!

http://arxiv.org/abs/hep-ph/0505220
The strange formula of Dr. Koide
Alejandro Rivero (Universidad de Zaragoza), Andre Gsponer
10 pages, 1 figure
"We present a short historical and bibliographical review of the lepton mass formula of Yoshio Koide, as well as some speculations on its extensions to quark and neutrino masses, and its possible relations to more recent theoretical developments."
 
  • #133
arivero said:
About m^1/2: I am surprised nobody mentioned harmonic oscillator frequency.

The formula works fine with the square roots of the quark masses as
well. Within the experimental values given by codata it seems.


[tex]M^\frac{1}{2} = \left( \begin{array}{ccc}
D & e^{i\delta} & e^{-i\delta} \\
e^{-i\delta} & D & e^{i\delta} \\
e^{i\delta} & e^{-i\delta} & D \\ \end{array} \right) [/tex]

Leptons:
[tex]D\ = \sqrt{2}[/tex]
[tex]\delta\ \ = 0.2222220471[/tex]

BSD quarks:
[tex]D\ = \sim 1.2987[/tex]
[tex]\delta\ \ = \sim 0.1089[/tex]

TCU quarks:
[tex]D\ = \sim 1.1320[/tex]
[tex]\delta\ \ = \sim 0.0706[/tex]

The angles are different for the various cases. Although we do
define 2 parameters to get 2 mass ratios per set of 3 masses,
this doesn't mean we can find solutions for an arbitrary set of
masses; in general we cannot.



Regards, Hans

P.S. The following values fit with the [itex]\overline{MS}[/itex] running mass of the
bottom quark (~4.25 GeV) rather than the 1S value (~4.75 GeV)
used above.

BSD quarks:
[tex]D\ = \sim 1.3150[/tex]
[tex]\delta\ \ = \sim 0.1149[/tex]
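For a circulant matrix of this form the eigenvalues can be written in closed form as D + 2cos(δ + 2πk/3), k = 0, 1, 2, so the quoted (D, δ) pairs can be checked directly; a minimal sketch (quark masses are scheme and scale dependent, so the implied ratios are only indicative):

[code]
import numpy as np

def squared_ratios(D, delta):
    """Squared eigenvalue ratios (heaviest/lightest, middle/lightest) of the circulant matrix."""
    lam = np.sort([D + 2*np.cos(delta + 2*np.pi*k/3) for k in range(3)])
    return (lam[2]/lam[0])**2, (lam[1]/lam[0])**2

print(squared_ratios(np.sqrt(2), 0.2222220471))   # leptons: ~3477.4, ~206.77
print(squared_ratios(1.2987, 0.1089))             # the b,s,d parameters above
print(squared_ratios(1.1320, 0.0706))             # the t,c,u parameters above
[/code]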
 
  • #134
Yuri Danoyan "18 Degrees"

My last headache comes from a Yuri Danoyan, who wrote to tell me of some hadronic relationships. Basically, it can be noticed that some products of meson masses map approximately into baryon masses. In particular, a pair of products is near the squared mass of the nucleon (proton or neutron):
[tex]
m_B m_\pi \approx m_D m_K \approx m_p^2
[/tex]
So Yuri proceeds by taking the quotient of every (pseudoscalar, non-excited) meson mass against the proton mass, and then plotting the arctan of this quotient. The pattern that emerges is a clustering reflecting the above approximation, but the unexpected phenomenon is that all the intervals are about the same, some 18 degrees, symmetric around 45 degrees, or arctan(1), exhausting the quadrant.

If we believe the Gell-Mann view of chiral perturbation theory, the [square of the] first product should be of order K m_b (m_d+m_u)/2, while the [square of the] second one should be K m_c m_s, so the first equality amounts to [tex]m_b (m_d+m_u)/2 \approx m_c m_s[/tex]. A somewhat poor result because, after all, chiral p.t. does not apply to the bottom mesons (and only very badly to the charmed ones). Worse, chiral p.t. does not predict, afaik, a value for the nucleon mass, so there is not even a way to analyse the second equality. Another problem is that the pure [tex]s\bar s[/tex] meson is not a mass eigenstate, and it should be extracted from the eta-eta' mixing to see how it fits in the patterns.

So I am afraid I will not be able to progress along this line, but anyway I thought it was worthwhile to mention it here.
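For the record, a rough numerical rendering of Danoyan's observation with assumed PDG-style central masses in MeV (which mesons to include, and whether to use charged or neutral states, is a judgment call):

[code]
import math

mp = 938.27                                            # proton mass, MeV
mesons = {"pi0": 134.98, "K0": 497.65, "eta": 547.75,
          "eta'": 957.78, "D0": 1864.8, "B0": 5279.4}  # assumed central values

for name, m in mesons.items():
    print(name, round(math.degrees(math.atan(m / mp)), 1))   # the arctan "ladder"

print("mB*mpi :", mesons["B0"] * mesons["pi0"])
print("mD*mK  :", mesons["D0"] * mesons["K0"])
print("mp^2   :", mp**2)
[/code]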
 
  • #135
Wojciech Krolikowski

Recently I noticed hep-ph/0504256, from W. Krolikowski, and I have read a bit through its bibliography. In the early nineties he proposed a model of "algebraic partons" (yep, more preons) based on a Clifford-guided generalisation of the Dirac-Kaehler equation. It was published in some minor journals, but also reported in http://prola.aps.org/abstract/PRD/v45/i9/p3222_1 (with a sequel, http://prola.aps.org/abstract/PRD/v46/i11/p5188_1, that I haven't read yet).

Well, the point is that in his Phys Rev article, after some summations and normalisation of that extended Dirac formalism, and including some guesswork with a very Barut-like formula, he arrived at a prediction for the mass of the tau:
[tex]m_\tau={6\over 125}(351m_\mu+136m_e) =1783.47 MeV[/tex]

The numbers are ugly, but K. did a good job of justifying them, at least good enough for the Phys Rev D referee... And, well, you may remember that at that time the reported value of the tau mass was 1784 MeV (+2.7, -3.6), and how this value was problematic for Koide's formula. Here, on the contrary, the value was right.

OK for the old times, but now the measured value is 1777. So what of Krolikowski? Well, he reviewed his guesswork for the mass formula and found a sign to change somewhere (EDITED: he only needed to remove a somewhat arbitrary -1, imposed subtly in the Phys Rev article to keep the three masses at positive values).

[tex]m_\tau={6\over 125}(351m_\mu-136m_e) =1776.80 MeV[/tex]

Amusing.

Unfortunately it is not exactly compatible with Koide's 1776.97 prediction, so we cannot straightforwardly intersect both equations. As Koide did in his first papers, Krolikowski has also foreseen some places to tweak his relationship; see e.g. hep-ph/0108157.
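For completeness, the arithmetic (with assumed m_e and m_μ values in MeV):

[code]
me, mmu = 0.5109989, 105.658369          # MeV, assumed values

print(6/125 * (351*mmu + 136*me))        # ~1783.5 MeV, the original sign choice
print(6/125 * (351*mmu - 136*me))        # ~1776.8 MeV, after the sign change
[/code]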
 
  • #136
CarlB said:
where [tex]\delta = 0.222[/tex] radians (12.72 degrees) is the Cabibbo angle.
CarlB said:
Convincing the physics world to simultaneously accept so many hard things to swallow is essentially impossible.

Well, no need to proceed simultaneously.

The Minakata-Smirnov relation,
[tex] \theta_{\mbox{sun}} + \theta_{\mbox{cabibbo}} = {\pi \over 4} [/tex]

(hep-ph/0405088; I noted it some months later at sci.phys.research, and at PF the day after my birthday) has received some attention during the last year. A review by Minakata appeared yesterday as hep-ph/0505262. It has been renamed "quark-lepton complementarity". The point is that, if true, it is evidence of a common origin of leptons and quarks. So it makes a good selling argument for Cabibbo in charged leptons. If it is Cabibbo, and not the 2/9 of Hans!

The formula had been mentioned before in the thread (message #15), but not explicitly. I thought the observation was vox populi, but according to Minakata it was first voiced by Smirnov at a conference in December 2003, hep-ph/0402264. In July 2004 I was somewhat depressed and went on holiday to Benasque, simultaneously with http://benasque.ecm.ub.es/benasque/2004neutrinos/2004neutrinos-talks.htm, so perhaps I overheard it around there.
 
  • #137
Dr. Rivero, On the W Krolikowski papers,

Thanks for the links. The mass formula is stunning in its length, and surely Koide's is more attractive. The funny thing is that W. Krolikowski is touching on a lot of subjects that are similar to mine, that is, Clifford algebra.

And the links for the neutrino mixing angle coincidence with the Cabibbo angle are very helpful.

My paper is coming along. I've recently found a fascinating verification in the Centauro high energy cosmic ray events and I can barely wait to get the paper finished. You will have provided many of the references that I will have to include; unless you request otherwise, I will include a note of appreciation in the paper.

I mentioned that it is difficult to get the physics community to accept too many impossible ideas at the same time. When you see the write-up, you will see that I was, if anything, understating the problem. But the Centauro data are very convincing. I am so ashamed to be still sitting on this, I'll go home now and write some more.

Carl
 
  • #138
Hans de Vries said:
The formula works fine with the square roots of the quark masses as
well. Within the experimental values given by codata it seems.

BSD quarks:
[tex]D\ = \sim 1.2987[/tex]
[tex]\delta\ \ = \sim 0.1089[/tex]

TCU quarks:
[tex]D\ = \sim 1.1320[/tex]
[tex]\delta\ \ = \sim 0.0706[/tex]

The angles are different for the various cases.
Regards, Hans

P.S. The following values fit with the [itex]\overline{MS}[/itex] running mass of the
bottom quark (~4.25 GeV) rather than the 1S value (~4.75 GeV)
used above.

BSD quarks:
[tex]D\ = \sim 1.3150[/tex]
[tex]\delta\ \ = \sim 0.1149[/tex]

Indeed, some researchers have already noticed that the b, s, d quarks seem close to being compatible with Foot's version of Koide's formula, i.e. that the vector [tex](\sqrt{m_b},\sqrt{m_s},\sqrt{m_d})[/tex] is at 45 degrees from (1,1,1).

For the other flavour of unix, :confused: er, no, I mean, for the TCU set, it should be possible to fit a 45-degree rule, but against the vector (1,1,0). This should be
[tex]\sqrt{m_t}+\sqrt{m_u} = \sqrt{ m_t+m_c+m_u} [/tex]
which simplifies to
[tex] 2 \sqrt{m_u m_t} = m_c [/tex]

In fact, decades ago this proportion, without the factor two, was suggested by an expert on textures (H. Fritzsch?) as a clue that the mass of the top would be higher than the then-current expectations. The argument was drawn on a logarithmic plot, causing Feynman to protest that "in a log plot, even Sofia Loren adjusts to a straight line".
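A rough check of both statements, with assumed running masses in MeV (these are strongly scheme and scale dependent, so treat the output as indicative only):

[code]
import math

md, ms, mb = 5.0, 100.0, 4250.0          # assumed down/strange/bottom running masses
mu, mc, mt = 2.5, 1270.0, 172500.0       # assumed up/charm/top masses

# Foot-style angle of (sqrt(md), sqrt(ms), sqrt(mb)) with (1,1,1); compare with 45 degrees
roots = [math.sqrt(m) for m in (md, ms, mb)]
cos_t = sum(roots) / (math.sqrt(3) * math.sqrt(md + ms + mb))
print("d,s,b angle:", math.degrees(math.acos(cos_t)))

# the (1,1,0) variant for t,c,u reduces to 2*sqrt(mu*mt) = mc
print("2*sqrt(mu*mt) =", 2*math.sqrt(mu*mt), " vs  mc =", mc)
[/code]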
 
  • #139
alpha based quantisation.

http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=FIND+A+MACGREGOR%2C+M , Int J of Mod Phys A V. 20, No. 4 (2005) 719-798 & 2893-2894, is mostly about hadrons, but it could be [part of] an answer to the Danoyan headache I mentioned before.

Nice plots.

Hmm, and who is this MacGregor? First time I have heard of him. Let's search SPIRES: it seems (hep-ph/0309197) that he is a senior, now retired, of LLNL. Early in his working life he became interested in a mass systematics for hadrons (http://prola.aps.org/abstract/PRD/v13/i3/p574_1), which did not make its way up to the mainstream (I wonder if because of excessive predictions, just as with Barut's leptons, let's say), and some time after he played with the idea of spinless quarks as well as the concept of a "large electron" with a pointlike charge.

It seems that he became impatient about the low impact of his model; one of the last publications before going "preprint only" was titled "Can 35 Pionic Mass Intervals Among Related Resonances Be Accidental?" (Nuovo Cim. A58:159, 1980). In some sense, he shares this adventure with another insider, Paolo Palazzi.
 
  • #140
This one does not score at the same quality as the rest of the thread, but well... Take [tex]\pi^0[/tex] and [tex]Z^0[/tex] and consider both their masses [tex]m[/tex] and their total decay widths [tex]\Gamma[/tex]. For the 2004 PDG central values we have

[tex] {m_\pi^3 \over \Gamma_\pi} {\Gamma_Z \over m_Z^3} = 1.04...[/tex]

The listed error for the decay width of the pion is relatively large, so the result is compatible with unity. Separately, taking the errors into account, we had
[tex] \sqrt{m_\pi^3 \over \Gamma_\pi}= 541...580 GeV[/tex]
[tex] \sqrt{m_Z^3 \over \Gamma_Z}= 551.0 ... 551.5 GeV[/tex]

Another way to see the same thing is to "calculate" the lifetime of the neutral pion:
[tex]
\Gamma_{\pi^0}= \Big({M_{\pi^0} \over M_{Z^0} }\Big)^3 \; \Gamma_{Z^0}
[/tex]

We get 8.13E-17 s (8.1 eV); compare with a theoretical effective calculation such as hep-ph/0206007, and with the experimental value 8.4E-17 ± 3×0.2E-17 s (7.8 eV) from PDG 2004.
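In code, with assumed PDG-style inputs (masses and the Z width in MeV):

[code]
m_pi0, m_Z, Gamma_Z = 134.9766, 91187.6, 2495.2   # assumed central values, MeV
hbar = 6.582119e-22                                # MeV * s

Gamma_pi0 = (m_pi0 / m_Z)**3 * Gamma_Z             # the scaling rule above
print("Gamma_pi0 :", Gamma_pi0 * 1e6, "eV")        # ~8.1 eV
print("lifetime  :", hbar / Gamma_pi0, "s")        # ~8.1e-17 s
[/code]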
 
