Decay Rates: An Analysis of Dimensional Parameters and Extended Objects

In summary: Sargent's rule as a sum of terms of the form [itex]\Gamma_{M\to m} = f(m/M) * (M-\lambda m)^j * M^k[/itex] with j+k=1, and more generally sums of terms of this kind.
  • #1
arivero
Gold Member
In our exchange previous to my guest post, Dorigo suggested that I speak about decay rates, but I did not have the courage to expand on the topic. Still, it is an interesting one for a whole thread here, to see what ideas the people of PF have.

To start with, I think some dimensional analysis can be interesting: what can we say about the decays of massive particles depending on the number of dimensional parameters in the theory? For instance, consider electromagnetism, where the photon is massless and the fine structure constant is dimensionless.

Another question I was pondering was the decay of extended objects (strings, branes, superstrings, superbranes?, etc.) versus the decay of point-like objects. What differences can we expect?
 
  • #2
dimensional analysis

OK, first thought: the decay rate (or decay width, if you prefer) has units of mass. So if you have no scale, the only dependence you can get is
[tex]\Gamma(M)= g_M * M[/tex]

with g a constant depending only on the combinatorial details of the decay.

In this kind of theory, the only scale you can get -in principle- comes from the decay products, if the decay is to a particle of mass m. Still, in the limit [tex]m\to 0[/tex], one should expect to recover the linear rule. So we can expect rules of the kind
[tex]\Gamma_{M\to m} = f(m/M) * (M- \lambda m)^j * M^k[/tex]
with j+k=1, and more generally sums of terms of this kind.
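As a concrete illustration of the linear rule (a textbook example, if I am remembering it right): a hypothetical scalar of mass M decaying into a pair of massless fermions through a dimensionless Yukawa coupling g has, at tree level,
[tex]\Gamma = \frac{g^2}{8 \pi} M[/tex]
so indeed the width is linear in M, with the prefactor carrying only the coupling and the combinatorics; any deviation from linearity has to enter through the decay-product masses via f(m/M).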
 
  • #3
next installment.

Suppose now you have a theory with a dimensional constant, for instance Fermi's [tex]G_F[/tex], and consider the decay of a massive particle into massless or almost massless particles, so that the mass of these particles is not so relevant... in this way we avoid a lot of pole and analyticity analysis.

We will have
[tex]
\Gamma= M* f(G_F M^2)
[/tex]
and while the naivest idea is to have a cubic dependence on the mass, in the concrete case of the Fermi interaction the coupling appears in the amplitude and is then squared for the decay width, so the result is a quintic dependence on the mass.

We could also have a dependence on [itex](M^2 - \lambda G_F)[/itex], shouldn't we? Another puzzling set of decays are the "long lived strings", where the dimensional coupling is used to reverse the dependence, and the decay width becomes inversely proportional to the mass. It is kind of amusing.
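The canonical example of the quintic dependence is muon decay. As a quick sanity check, with the usual values [itex]G_F \approx 1.166 \times 10^{-5}[/itex] GeV[itex]^{-2}[/itex] and [itex]m_\mu \approx 0.1057[/itex] GeV,
[tex]\Gamma_\mu = \frac{G_F^2 m_\mu^5}{192 \pi^3} \approx 3.0 \times 10^{-19} \text{ GeV} \qquad \Rightarrow \qquad \tau_\mu = \hbar / \Gamma_\mu \approx 2.2 \ \mu\text{s},[/tex]
close to the observed muon lifetime.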
 
Last edited:
  • #4
n dimensional kinematics

One point about decay rates is that there are two components in the calculation: the amplitude for the probability of changing state, and the size of the phase space (the momentum and energy distribution) for the decay products. Pretty obviously this second part of the calculation will depend on the number of dimensions of space (or extra uncompactified dimensions, because the whirling about the compact dimensions is considered part of the final state). Is this dependence calculated explicitly on some webpage, or could it be worthwhile to do the exercise here?
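A minimal sketch of the exercise, assuming standard relativistic normalization and a constant amplitude A: for a particle of mass M at rest decaying into two massless particles in d spatial dimensions,
[tex]
\Gamma = \frac{1}{2M} \int \frac{d^d p_1}{(2\pi)^d 2E_1} \frac{d^d p_2}{(2\pi)^d 2E_2} (2\pi)^{d+1} \delta^{(d)}(\vec p_1 + \vec p_2) \, \delta(M - E_1 - E_2) \, |A|^2
= \frac{\Omega_{d-1}}{16 M (2\pi)^{d-1}} \left( \frac{M}{2} \right)^{d-3} |A|^2
[/tex]
with [itex]\Omega_{d-1} = 2 \pi^{d/2} / \Gamma(d/2)[/itex] the solid angle. For d=3 this reduces to the familiar [itex]|A|^2 / 16 \pi M[/itex]; in general the pure phase-space part contributes [itex]M^{d-4}[/itex], and the rest of the mass dependence must come from the dimensions of the coupling hidden inside A.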
 
  • #5
I'll be reading this with great enthusiasm.
 
  • #6
CarlB said:
I'll be reading this with great enthusiasm.
Me too :-D. Meaning, I would hope someone with more knowledge could mention the calculations of decay rates for strings, or at least for bodies in arbitrary spacetime dimension.

(When I noticed the cubic scaling of electromagnetic decays, someone, I do not remember who, told me that it was "just kinematics". Of course the whole point was that, after removing the kinematics, the remaining factor coincided within the 1-sigma error for most of the known decays. But still, it would be interesting to see how the kinematics arises.)
 
  • #7
detour

It is interesting to consider what happens in an electroweak decay when the mass of the decaying particle evolves from less than M_W to more than M_W: the decay rate changes from the quintic "Sargent's rule" dependence to a cubic one, and while the former decay was into three bodies, the latter is into two bodies, because the W particle can survive on shell.

It is a pretty way to understand the meaning of "unification" and "symmetry breaking" in the electroweak GWS theory.
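The hinge between the two regimes is the usual matching relation (quoted from memory, so take it as a hedged reminder),
[tex]\frac{G_F}{\sqrt 2} = \frac{g^2}{8 M_W^2},[/tex]
below M_W the W propagator collapses into this contact constant and the width has to scale as [itex]G_F^2 m^5[/itex], while above M_W the W goes on shell and, as worked out in the narrow-width limit later in the thread, effectively one power of G_F drops out, leaving the cubic behaviour.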

References I found:
http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D30,947
http://www.slac.stanford.edu/spires/find/hep/www?j=PHLTA,B181,157
http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D37,2676
 
Last edited by a moderator:
  • #8
arivero said:
http://www.slac.stanford.edu/spires/find/hep/www?j=PHLTA,B181,157
http://www.slac.stanford.edu/spires/find/hep/www?j=PHRVA,D37,2676

Hmm, the KEK-archived preprint for Phys.Rev.D37:2676,1988 has some typos, so let me quote the full formula from the published version. They take |V_{ab}| out (or set it equal to unity). On my side, I will call m_t simply m, to lighten the notation, and m_b epsilon, to stress that we are interested in small m_b.

With this notation, the paper also defines [itex]Q_0=(m^2+Q^2-\epsilon^2)/2m[/itex] and [itex]|\vec Q|^2=Q_0^2-Q^2[/itex], and then
[tex]
\Gamma={G^2_F m^5 \over 24 \pi^3} \int_0^{(m-\epsilon)^2} dQ^2
{M^4_W |\vec Q| \over (Q^2-M_W^2)^2 + M^2_W \Gamma_W^2}
(2 |\vec Q|^2 + 3 Q^2 (1-{Q_0\over m}))
[/tex]

I still fail to see how the units match: on one side this [itex]Q[/itex] should be dimensionless, and on the other hand it seems to have dimensions of mass. (EDIT: given that the overall integral has dimension M^5 too, I am inclined to think that the m^5 in the prefactor is a "typo".)

The paper Phys.Lett.B181:157,1986 takes more care with dimensions and you get a formula you could actually plot. First they define a dimensionless function f(a,b,c) and then they give (we allow ourselves to set V_qq equal to unity, as in the previous paper)
[tex]
\Gamma= {G^2_F m^5 \over 192 \pi^3} f(\frac {m^2} {m^2_W}, \frac {\epsilon^2} {m^2}, \frac {\Gamma_W^2} {m^2_W}) [/tex]

The function f appears in the paper, eq. (4), without -to me- noticeable typos. OK, let's copy it for completeness:
[tex]
f(\rho,\mu,\gamma)= 2 \int_0^{(1-\sqrt \mu)^2} {dx \over ((1-x \rho)^2+\gamma^2)^2}
((1-\mu)^2+(1+\mu)x -2x^2)
\sqrt{1 + \mu^2 + x^2 -2 (\mu +\mu x +x)}
[/tex]

You can notice that these formulae do not take the Standard Model for granted, i.e. there is no relationship assumed yet between the W, its mass, its width, and G_F. We can contemplate the change from quintic to cubic dependence on m and still keep some intrigue about the role of the Fermi constant (although I cannot understand how they got to put the W in there in the first place if they are still playing with Fermi beta decay).

PS: note that an overall factor of 9 is needed if you want to account for all the decays of the W and not only a partial width.
 
Last edited by a moderator:
  • #9
Via Gordon Watts, this TV series informs us that V_tb is near unity, as said above:
http://www.tvguide.com/tvshows/big-bang-theory/photos/288041/4
 
Last edited by a moderator:
  • #10
arivero said:
(EDIT: given that the overall integral has dimension of M^5 too, I am inclined to think that the m^5 in the prefactor is a "typo").

Still, there are further mismatches. Let us assume that factor was a typo, and let us take [itex]\epsilon \to 0[/itex] and [itex]\rho, \gamma[/itex] as above. Then Phys.Rev.D37:2676 amounts to
[tex]
\Gamma={G^2_F \over 24 \pi^3} {m^5 \over 4} \int_0^{1}
{dx \over (1 - x \rho )^2 + \gamma} (1 +x -2x^2)
\sqrt {1 - 2x +x^2}
[/tex]
To be compared with the preprint of Phys.Lett.B181:157
[tex]
\Gamma= {G^2_F m^5 \over 192 \pi^3} 2 \int_0^{1} {dx \over ((1-x \rho)^2+\gamma^2)^2}
(1+x -2x^2)
\sqrt{1 + x^2 -2 x}
[/tex]

It almost works! Except for the Breit-Wigner-like denominator, which has a different power. In this case, it seems the mistake is in the second formula; note that I have checked only the preprint, not the published version.
 
Last edited:
  • #11
Well, I think I can confirm that there was also a typo in Phys.Lett.B181:157, and that this second ^2 exponent was most probably a misreading of the instruction "remove this 2", meant to edit out the other ^2 over the gamma. Although in more recent papers the authors prefer to keep the first ^2 and to redefine gamma as the square root of the one we are using here.

So the tree level formula for a weak charged decay into massless particles is

[tex]
\Gamma= 2 {G^2_F m^5 \over 192 \pi^3} \int_0^{1} {dx \over ((1-x \rho)^2+\gamma)}
(1+x -2x^2)
\sqrt{1 + x^2 -2 x}
[/tex]

where [itex]\rho=m^2/M_W^2[/itex], [itex]\gamma=\Gamma^2_W/M_W^2[/itex]
and you can still put in a factor of 9 if you want to account for the nine different decay channels of the W. (EDIT: question: but the [itex]\Gamma_W[/itex] in the formula is the total decay width, isn't it?)

Ah, there are more recent calculations of this formula in the literature, some of them even including the first QCD correction.

http://www.slac.stanford.edu/spires/find/hep/www?eprint=hep-ph/9302295
http://www.slac.stanford.edu/spires/find/hep/www?j=PRLTA,66,3105
http://www.slac.stanford.edu/spires/find/hep/www?j=NUPHA,B314,1
http://www.slac.stanford.edu/spires/find/hep/www?j=NUPHA,B320,20

Well, the next step, also standard in the literature, is to get rid of the integral. There are two different limits where you can do it:
1) consider the "narrow width limit", using the representation of the Dirac delta as a limit of the Cauchy distribution.
2) consider the limit where m is a lot smaller than M_W.
 
Last edited by a moderator:
  • #12
I believe we are all using free software, so we can try maxima or wxmaxima:
Code:
/* integrand: x = Q^2/m^2, r = (m/M_W)^2, g = (Gamma_W/M_W)^2 */
f(x,r,g):=(1+x-2*x^2)*sqrt(1+x^2-2*x)/((1-x*r)^2+g);
GF:1.16637*10^-5;   /* Fermi constant, in GeV^-2 */
/* 9*2: nine W channels times the overall factor 2; masses in GeV, M_W = 80.4, Gamma_W = 2.14 */
decay(m):=9*2*GF^2*m^5/(192*%pi^3)*(quad_qags(f(x,(m/80.4)^2,(2.14/80.4)^2), x, 0, 1)[1]);
Still, I am running into http://www.math.utexas.edu/pipermail/maxima/2007/007063.html :frown:
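A possible workaround, sketched here without being sure which evaluation quirk is biting: bypass the function definition and hand quad_qags a purely numerical integrand. For instance, for m = 173 GeV the parameters are [itex]\rho = (173/80.4)^2 \approx 4.63[/itex] and [itex]\gamma = (2.14/80.4)^2 \approx 7.1 \times 10^{-4}[/itex]:
Code:
/* inline the numbers so quad_qags sees a purely numeric expression in x */
rho: (173.0/80.4)^2$
gam: (2.14/80.4)^2$
intval: quad_qags((1+x-2*x^2)*sqrt(1+x^2-2*x)/((1-x*rho)^2+gam), x, 0, 1)[1];
Gt: 9*2*GF^2*173.0^5/(192*%pi^3)*intval;  /* width estimate in GeV, using GF from above */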
 
Last edited by a moderator:
  • #13
Really! The bad interface between numerics and analytics in Maxima, 20 years later, is disgusting. I first used Macsyma, the VAX VMS version, in 1989.
Anyway:
Code:
/* h(m): the dimensionless integral as a function of the decaying mass m, in GeV */
h(m):=quad_qags(f(x,(m/80.4)^2,(2.14/80.4)^2), x, 0, 1);
/* logarithmic grid of masses, roughly from 0.07 GeV up to 800 GeV */
yy:makelist(float(exp(i/15)),i,-40,100);
yh:makelist(h(z)[1],z,yy)$
plot2d([discrete,yy,yh]);
plot2d([discrete,yy,yh],[gnuplot_preamble,"set logscale x; set logscale y"]);
plot2d([discrete,yy,yh],[gnuplot_term,jpeg],[gnuplot_out_file,"out.jpg"],[gnuplot_preamble,"set logscale x; set logscale y"]);

You are contemplating the mystery of symmetry restoration :bugeye:: as the mass increases beyond M_W, the decay becomes effectively into two bodies instead of three, and the integrand h(m) starts to deliver a dependence on m^-2.

Observe the second attached plot, and consider what should happen if the mass of the W were smaller, while still starting from the top mass scale... the cubic line should be prolonged, with the change to quintic happening at lower energy. Ideally, a limit can be taken where the mass of the W is zero and no change to quintic ever happens; in this limit, isospin symmetry is restored, as the elders told us at school.

For an excursion into the unknown, compare the third plot here with the one discussed in http://dorigo.wordpress.com/2006/09/14/a-mistery-behind-the-z-width/. Note that here we are using GeV, while that plot uses MeV. Note also that we are not including QCD, neither here nor in the other plot, and that here we are including the factor of 9 for the different decays, as if all the masses were zero except the top's. The coincidence with the cubic line discussed in that entry is stronger if this factor of 9 is not used.

EDIT: If you want to pursue the numerical investigation further, quad_qag will be better than quad_qags.
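To make the change of regime explicit, here is a small sketch (assuming the lists yy and yh computed above) that extracts the local log-log slope of h(m); it should sit near 0 well below M_W (so the overall width keeps the quintic m^5) and drift towards -2 well above it (overall cubic):
Code:
/* local slope d(log h)/d(log m) along the mass grid */
slopes: makelist((log(yh[i+1])-log(yh[i]))/(log(yy[i+1])-log(yy[i])), i, 1, length(yy)-1)$
plot2d([discrete, rest(yy,-1), slopes], [gnuplot_preamble,"set logscale x"]);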
 

Attachments

  • outh.jpg
  • dec.jpg
  • dec3.jpg
Last edited:
  • #14
arivero said:
For an excursion into the unknown, compare the third plot here with the one discussed in http://dorigo.wordpress.com/2006/09/14/a-mistery-behind-the-z-width/. Note that here we are using GeV, while that plot uses MeV. Note also that we are not including QCD, neither here nor in the other plot, and that here we are including the factor of 9 for the different decays, as if all the masses were zero except the top's. The coincidence with the cubic line discussed in that entry is stronger if this factor of 9 is not used.

What happens is that the fork here sits, nicely and mysteriously, about a factor of ten inside the fork there. Thus the use of this fork to calculate the breaking of isospin in the decay widths of an isomultiplet works nicely qualitatively (dependence on M^2) but fails by one or two orders of magnitude quantitatively.

The quintic leg would meet the muon cross exactly if the factor of nine were progressively removed as the energy crosses the thresholds for the decay of the virtual W into cs (-3), tau-nu (-1), ud (-3) and the muon itself (-1). The exact calculation becomes lengthy because of the CKM mixing.

But there is no clear way for the cubic leg here to reach the experimental cubic leg. If we enable QCD in the formula here, the decay width decreases (see Jezabek and Kuhn 1993).

In any case, note that the decays of the electrically neutral isospin partners of charged mesons are into photons, so it would have been very surprising to get a match with only weak SU(2) and without the U(1) EM group. Still, I do not know yet how to fit it into the Z0 mystery line, so I guess that the detour stops here and we can go back to the regular schedule.
 
Last edited:
  • #15
I still would like to derive this formula straight from the Feynman rules, step by step. Meanwhile, let's attack the limits:
arivero said:
[tex]
\Gamma=N_f* 2 * {G^2_F m^5 \over 192 \pi^3} \int_0^{1} {dx \over ((1-x \rho)^2+\gamma)}
(1+x -2x^2)
\sqrt{1 + x^2 -2 x}
[/tex]
where [itex]\rho=m^2/M_W^2[/itex], [itex]\gamma=\Gamma^2_W/M_W^2[/itex]
1) consider the "narrow width limit", using the representation of the Dirac delta as a limit of the Cauchy distribution.
2) consider the limit where m is a lot smaller than M_W.

(2) is easy. It is the case [itex]\rho \to 0[/itex], and then
[tex]
\Gamma=N_f* 2 * {G^2_F m^5 \over 192 \pi^3} {1 \over 1+\gamma} \int_0^{1} dx
(1+x -2x^2)
\sqrt{1 + x^2 -2 x} = N_f * {G^2_F m^5 \over 192 \pi^3} * {1 \over 1+{\Gamma^2_W/M_W^2}}
[/tex]
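Filling in the intermediate step: using [itex]\sqrt{1+x^2-2x} = 1-x[/itex] on [0,1],
[tex]2 \int_0^1 (1+x-2x^2)(1-x) \, dx = 2 \int_0^1 (1 - 3x^2 + 2x^3) \, dx = 2 \left(1 - 1 + \tfrac{1}{2}\right) = 1 .[/tex]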
and remember that for the Standard Model [itex]{\Gamma^2_W/M_W^2}=0.00071[/itex], so when the number of available particles for the decay is only [itex]N_f=1[/itex] we practically hit the usual result for muon decay (besides being a correctness check, it is a realistic approximation, though a somewhat inconsistent one, because we have set all the other quark and lepton masses to zero, so really it is N_f=9 always).
 
Last edited:
  • #16
(1) is almost the case [itex]\gamma \to 0[/itex]. We do not want to go all the way, because in that case the integral develops an infinity. But if we are near enough, then we can substitute
[tex]
\Gamma=N_f* 2 * {G^2_F m^5 \over 192 \pi^3} \int_0^{1} dx {\pi \over \sqrt \gamma} \delta(1-x \rho)
(1+x -2x^2)
\sqrt{1 + x^2 -2 x}=
[/tex]
[tex]
=N_f* 2 * {G^2_F m^5 \over 192 \pi^3} \int_0^{1} dx {\pi \over \rho \sqrt \gamma} \delta(\frac 1 \rho -x)
(1+x -2x^2)
\sqrt{1 + x^2 -2 x}=[/tex]
[tex]
=N_f* 2 * {G^2_F m^5 \over 192 \pi^2} { M_W^2 \over m^2 \sqrt {\Gamma^2_W/M_W^2}}
(1+{ M_W^2 \over m^2 } -2{ M_W^4 \over m^4 } )
\sqrt{1 + { M_W^4 \over m^4 } -2 { M_W^2 \over m^2 } }=[/tex]
[tex]
=N_f* 2 * {G^2_F m^3 \over 192 \pi^2} { M_W^3 \over \Gamma_W}
(1+{ M_W^2 \over m^2 } -2{ M_W^4 \over m^4 } )
\sqrt{1 + { M_W^4 \over m^4 } -2 { M_W^2 \over m^2 } }[/tex]
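(Here I am using the representation
[tex]\frac{1}{u^2 + \epsilon^2} \to \frac{\pi}{\epsilon} \, \delta(u) \quad \text{as } \epsilon \to 0,[/tex]
with [itex]u = 1 - x\rho[/itex] and [itex]\epsilon = \sqrt\gamma[/itex].)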

If furthermore (let's call it case 1') we consider that m is larger than the mass of the W (so that the point [itex]x = 1/\rho = M_W^2/m^2[/itex] where the delta fires actually lies inside the integration range), we can drop the correction factors and we get

[tex]
\Gamma=N_f* 2 * {G_F \ m^3 \over 192 \pi^2} { G_F M_W^3 \over \Gamma_W}
[/tex]

And if we want to believe that [tex]\Gamma_W = \frac 32 { G_F M_W^3 \over \pi \sqrt 2 }[/tex] then

[tex]
\Gamma=9* 2 * {G_F \ m^3 \over 192 \pi^2} {2 \pi \sqrt 2 \over 3 } =
12 \sqrt 2 * {G_F \ m^3 \over 192 \pi} = {G_F \ m^3 \over 8 \pi \sqrt 2 }
[/tex]

and yes, it is the standard approximation for the decay width of the top quark.
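As a quick numerical check, taking [itex]m_t \approx 173[/itex] GeV,
[tex]\Gamma_t \approx \frac{G_F \, m_t^3}{8 \pi \sqrt 2} \approx \frac{1.166 \times 10^{-5} \times (173)^3}{8 \pi \sqrt 2} \ \text{GeV} \approx 1.7 \ \text{GeV},[/tex]
in the right ballpark for the top width; if I remember the standard tree-level expression correctly, it carries an extra phase-space factor [itex](1 - M_W^2/m_t^2)^2 (1 + 2 M_W^2/m_t^2)[/itex], which brings this down by a bit over ten percent.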
 
Last edited:
  • #17
An estimate that comes out qualitatively right but still too small quantitatively is the quotient between the decay rates of two isospin partners, say the neutral pion versus the charged pion:

[tex]
(N_f* 2 * {G_F \ m^3 \over 192 \pi^2} { G_F M_W^3 \over \Gamma_W})
/
(N_f * {G^2_F m^5 \over 192 \pi^3} * {1 \over 1+{\Gamma^2_W/M_W^2}})
[/tex]

it simplifies to

[tex]
({ 2 M_W^3 \over \Gamma_W})
/
( { m^2 \over \pi} * {1 \over 1+{\Gamma^2_W/M_W^2}})
[/tex]

Let's call it q:

[tex]
q= {2 \pi \over m^2} * {M_W^3 \over \Gamma_W} * (1+{\Gamma^2_W/M_W^2} )
\approx {4 \pi^2 \over 3 m^2} { \sqrt 2 \over G_F }
[/tex]
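For orientation, a minimal plug-in with [itex]m \approx m_\pi \approx 0.14[/itex] GeV gives
[tex]q \approx \frac{4 \pi^2}{3 \, (0.14)^2} \, \frac{\sqrt 2}{1.166 \times 10^{-5}} \approx 8 \times 10^7 .[/tex]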

The failure is due to the neutral particles, whose decays seem to scale, empirically and unexplainably, with the cubic law of the Z0 decay, and not with the cubic law of the top quark.
 
Last edited:
  • #18
arivero said:
The failure is due to the neutral particles, whose decays seem to scale, empirically and unexplainably, with the cubic law of the Z0 decay, and not with the cubic law of the top quark.

Perhaps you could expound on this.
 
  • #19
CarlB said:
Perhaps you could expound on this.

The quintic law that sparked the theoretical investigation of weak decay was discovered in 1933 by Sargent (and in fact there was some timid attempt at calling it "Sargent's rule"). It is still here as the lower leg of our fork, if you consider that the mass of the decaying particle is the equivalent of the "available energy" in nuclear beta decay.

The upper leg of our fork, corresponding to the situation where the Fermi constant does exist but does not break isospin, has not been searched for. Except that textbooks tell us that if isospin were not broken, then the charged pion and the neutral pion would have the same decay rates, and the same for all the isospin multiplets.

Now, a couple of years ago I found a line parallel to this upper leg, only about a factor of 10 higher. It is described in

http://dorigo.wordpress.com/2006/09/14/a-mistery-behind-the-z-width/

(Dorigo keeps the comments open there, so we can keep this thread on orthodoxy.)

This second line unifies the electromagnetic decays of baryons in the same way that the quintic line unifies the weak decays, but it is even more precise, because we do not have a lot of particles whose main decay is electromagnetic.

The mysterious fact is that it also predicts the decay rate of the Z0.
 
Last edited:
  • #20
Another amusing detail. In the very asymmetric limit where [itex]m_t \gg M_W [/itex] AND [itex]\Gamma_W \ll M_W [/itex], we get the very symmetric formula

[tex]
\Gamma _t \Gamma_W={9* 2 \over 192 \pi^2}* (G_F \ m_t^3) * ( G_F \ M_W^3 )
[/tex]

We were able to obtain it because the starting formula already contained both G_F and [itex]\Gamma_W[/itex], in addition to M_W:

[tex]
\Gamma={N_f * G^2_F \over 24 \pi^3} \int_0^{m^2} dQ^2
{M^4_W \over (Q^2-M_W^2)^2 + M^2_W \Gamma_W^2}|\vec Q|
(2 |\vec Q|^2 + 3 Q^2 (1-{Q_0\over m}))
[/tex]
[itex]Q_0=(m^2+Q^2)/2m[/itex], [itex]|\vec Q|^2=Q_0^2-Q^2[/itex].

I.e., it contains an extra parameter with respect to the usual theories of weak interactions, where [itex]G_F [/itex] and [itex]\Gamma_W [/itex] can be calculated from the dimensionless coupling of SU(2) and the mass of the W.

So I am curious about what kind of Feynman rules, or other arguments, the original authors used to set up this formula. (The obvious guess is that everyone builds G_F out of the mass of the W, but who knows...)
 
Last edited:
  • #21
I really want to get into the issue of kinematics, and how the transition matrix conspires with it in order to get the units of decay rates right. Still, I am taking my time to think about it. In principle, decay rates are a better ground for thinking about this concept than cross-section "rates", because a lot of the conserved quantities start out being zero. Meaning, from Galilean or Poincare invariance we always expect ten conserved quantities: energy, momentum, angular momentum and centre of mass. Only energy and angular momentum are different from zero in the initial state of a decay process, and I am thinking of neglecting the latter, or of considering it integrated away anyhow.

Let me recall from the beginning of the thread the initial point to meditate on regarding kinematics: to see how many bodies we get from the decay, and then the way to measure (to integrate) the phase space, and what units each integration delivers.

One can dispose of one-particle decays; they amount to oscillations and mixing, and I am not even sure whether the resulting decay will produce a decay width [itex]\Gamma[/itex] or simply move the mass poles of the mixed particles a little bit (remember that, after all, a particle is a pole at [itex] M + i \Gamma [/itex] in the S-matrix).
 
  • #22
Last edited by a moderator:
  • #23
After clicking several times on your link, I find it returns me to this thread.
 
  • #24
A thought: can superstring theory imitate the feat of our calculation above, the pivoting from a three-body decay to a two-body decay, as happens in quark decay above and below the mass of the W?

I doubt it is possible purely with open strings, i.e. the topology of a disk. We consider the in and out fermion states to be open strings at plus/minus infinity, the kind of thing they represent, I believe, with vertex operators. We need some way to represent the intermediate particle [itex]W^\pm[/itex] and to control the dependence between the decay rate and the properties of the W, and I do not see how. The same problem (how to locate the [itex]W^\pm[/itex]?) appears in a purely closed string theory; in and out are just four holes in a sphere.

Perhaps the equivalent process would happen with the W boson being a closed string and the four fermions being open strings. In this way, the properties of the W boson could be encoded in the cylinder; the topology is still a sphere, but with two holes instead of four. Do you get the picture? Well, I would hope to find this calculation in some textbook; it has been 30 years since the invention of superstrings.
 
Last edited:
  • #25
This week http://arxiv.org/abs/1112.4809 , from Bernstein and Holstein, brings an interesting review of pi0 decay, on both the theoretical and the experimental side.
 
