RG equation and invariance of the vertex function under scaling (Ryder)

In summary, Ryder writes down an equation, similar to the renormalization group equation, that expresses the invariance of the vertex function under a change of scale. The question is how he arrives at it, and why the scaling form he uses seems not to agree with the expression for the four-point function in equation (9.38).
  • #1
center o bass
Hi. I have trouble understanding an argument in Lewis H. Ryder's QFT (second edition) on page 325, where he wants to write down an equation, similar to the renormalization group equation, that expresses the invariance of the vertex function [itex]\Gamma^{(n)}[/itex] under a change of scale.

The relevant equations in the book are (9.66), (9.67) and (9.68). The argument goes as follows:

Let [itex] p \to tp, \ \ \ m \to tm, \ \ \mu \to t\mu[/itex]. [itex]\Gamma^{(n)}[/itex] has mass dimension D given by

[tex] D = d + n(1- \frac{d}{2}) = 4 - n + \epsilon(\frac{n}{2} -1)[/tex] where [itex] d = 4-\epsilon[/itex]. Then

[tex] \Gamma^{(n)}(tp_i,g,m,\mu) = t^D\Gamma^{(n)}(p_i,g, t^{-1}m, t^{-1}\mu) = \mu^{D} F(g, \frac{t^2 p_i^2}{m \mu})[/tex]

(here I'm not sure what the author means by the function F. Is it that when you factor out [itex]\mu^D[/itex] you get some other function F which depends only on the parameters in the combination stated? If so, this does not seem to agree with equation (9.38) for [itex]\Gamma^{(4)}[/itex].)

so

[tex](t \frac{\partial}{\partial t} + m \frac{\partial}{\partial m} + \mu \frac{\partial}{\partial \mu} - D) \Gamma^{(n)} = 0.[/tex]


Question: how does Ryder arrive at this result from the equations above?

P.S.: I've tried to gain some understanding of the RG equations, running couplings, etc. through several books now, and each book seems to have a different way of explaining them. If you know of a reference that explains them in a similar (and sensible) way to Ryder, I would be glad if you could share it.
 
  • #3
Ah, thanks, I will read it if I can't make sense of Ryder's argument. Does the above argument make sense to you?
 
  • #4
Or to anyone else?
 
  • #5
center o bass said:
[tex] \Gamma^{(n)}(tp_i,g,m,\mu) = t^D\Gamma^{(n)}(p_i,g, t^{-1}m, t^{-1}\mu) = \mu^{D} F(g, \frac{t^2 p_i^2}{m \mu})[/tex]

(here I'm not sure what the author means by the function F. Is it that when you factor out [itex]\mu^D[/itex] you get some other function F which depends only on the parameters in the combination stated? If so, this does not seem to agree with equation (9.38) for [itex]\Gamma^{(4)}[/itex].)

so

[tex](t \frac{\partial}{\partial t} + m \frac{\partial}{\partial m} + \mu \frac{\partial}{\partial \mu} - D) \Gamma^{(n)} = 0.[/tex]

Why do you say that the expression for ##\Gamma## does not agree with this equation? When ##\mu \partial/\partial\mu## acts on the ##\mu^D## prefactor, we get back ##D## to cancel the ##-D## in the equation. When the derivatives act on ##F##, we get a common factor of

$$\left(t \frac{\partial}{\partial t} + m \frac{\partial}{\partial m} + \mu \frac{\partial}{\partial \mu}\right) \frac{t^2 p_i^2}{m \mu} =0.$$
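To spell this out, write ##x \equiv t^2 p_i^2/(m\mu)## and ##F' \equiv \partial F/\partial x## (my shorthand, not Ryder's notation). Then

$$\left(t \frac{\partial}{\partial t} + m \frac{\partial}{\partial m} + \mu \frac{\partial}{\partial \mu} - D\right)\mu^D F(g,x) = D\,\mu^D F - D\,\mu^D F + \mu^D F'\left(t \frac{\partial}{\partial t} + m \frac{\partial}{\partial m} + \mu \frac{\partial}{\partial \mu}\right)x = 0,$$

since ##t\,\partial_t x = 2x## while ##m\,\partial_m x = \mu\,\partial_\mu x = -x##.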
 
  • #6
fzero said:
Why do you say that the expression for ##\Gamma## does not agree with this equation? When ##\mu \partial/\partial\mu## acts on the ##\mu^D## prefactor, we get back ##D## to cancel the ##-D## in the equation. When the derivatives act on ##F##, we get a common factor of

$$\left(t \frac{\partial}{\partial t} + m \frac{\partial}{\partial m} + \mu \frac{\partial}{\partial \mu}\right) \frac{t^2 p_i^2}{m \mu} =0.$$

It seems to suggest that the parameters can only occur in the combination [itex]\frac{t^2 p_i^2}{m \mu}[/itex] after you have factored out [itex]\mu^D[/itex].

However

[tex]i\Gamma^{(4)} = \mu^D \left(g\mu^{-D} - \frac{g^2 \mu^{-2D}}{32\pi^2}\big( F(s,m, \mu) + F(t,m, \mu) + F(u,m, \mu) - 3F(0,m, \mu)\big)\right)[/tex]

where

[tex] F(s,m, \mu) = \int_0^1 dz \ln \frac{s z(1-z) - m^2}{4\pi \mu^2}.[/tex]

So, for example, inside the logarithm the parameters occur in the combinations [itex]\frac{t^2p^2}{\mu^2}[/itex] and [itex]\frac{m^2}{\mu^2}[/itex], where [itex]s = (p_1+p_2)^2[/itex].
 
  • #7
center o bass said:
So, for example, inside the logarithm the parameters occur in the combinations [itex]\frac{t^2p^2}{\mu^2}[/itex] and [itex]\frac{m^2}{\mu^2}[/itex], where [itex]s = (p_1+p_2)^2[/itex].

Yes, I would agree that there are other dimensionless combinations that the function could depend on (##t^2p^2/m^2## is one more). You can also check that they work in the scaling DE. I don't presently have a copy of Ryder, but in my experience, there are other examples where the discussion falls short of being 100% accurate or complete.
 
  • #8
fzero said:
Yes, I would agree that there are other dimensionless combinations that the function could depend on (##t^2p^2/m^2## is one more). You can also check that they work in the scaling DE. I don't presently have a copy of Ryder, but in my experience, there are other examples where the discussion falls short of being 100% accurate or complete.

Well, that's okay then!
Even though I have not had Ryder for long, I think I've had the same experience, but Ryder also seems very readable.

Thanks a lot! :)
 
  • #9
center o bass said:
Well, that's okay then!
Even though I have not had Ryder for long, I think I've had the same experience, but Ryder also seems very readable.

Thanks a lot! :)

Yes, lots of people just starting out with QFT like Ryder for being fairly gentle. I was, to some extent, one of them.

I realized I should give a more complete explanation of the situation with ##\Gamma## and the scaling variables, since I said something misleading earlier. We start with the variables ##(p_i,m,\mu)##. All of these have dimensions of [mass], so they scale the same way, ##(p_i,m,\mu)\rightarrow (tp_i,tm,t\mu)##. We can build 2 dimensionless variables from 3 dimensionful ones. The simplest way would be to just use the ratios ##(p_i/\mu, m/\mu)##, which is what you found when you considered that Feynman integral. Ryder has made another choice, namely

$$\frac{p_i^2}{m\mu} = \frac{\left(\frac{p_i}{\mu} \right)^2 }{\frac{m}{\mu} },$$

but he's forgotten that there's another independent dimensionless combination to choose.

Dimensionless variables like we've introduced here are sometimes referred to as "projective" or "homogeneous" coordinates.
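To make the statement about the scaling DE explicit (a sketch in the notation of this thread, writing ##u_i \equiv tp_i/\mu## and ##v \equiv m/\mu##): for any choice of dimensionless variables,

$$\Gamma^{(n)}(tp_i, g, m, \mu) = \mu^D F(g, u_i, v),$$

the ##\mu\,\partial_\mu## acting on the prefactor returns ##D##, cancelling the ##-D##, while each ratio is annihilated by the full operator, ##\left(t\partial_t + m\partial_m + \mu\partial_\mu\right)u_i = u_i - u_i = 0## and likewise for ##v##. Ryder's combination is just ##u_i^2/v##, so it is annihilated as well; the scaling equation holds whichever complete set of ratios one uses.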
 
  • #10
fzero said:
Yes, lots of people just starting out with QFT like Ryder for being fairly gentle. I was, to some extent, one of them.

I realized I should give a more complete explanation of the situation with ##\Gamma## and the scaling variables, since I said something misleading earlier. We start with the variables ##(p_i,m,\mu)##. All of these have dimensions of [mass], so they scale the same way, ##(p_i,m,\mu)\rightarrow (tp_i,tm,t\mu)##. We can build 2 dimensionless variables from 3 dimensionful ones. The simplest way would be to just use the ratios ##(p_i/\mu, m/\mu)##, which is what you found when you considered that Feynman integral. Ryder has made another choice, namely

$$\frac{p_i^2}{m\mu} = \frac{\left(\frac{p_i}{\mu} \right)^2 }{\frac{m}{\mu} },$$

but he's forgotten that there's another independent dimensionless combination to choose.

Dimensionless variables like we've introduced here are sometimes referred to as "projective" or "homogeneous" coordinates.

Thanks for the elaboration!

So then, when one has 3 dimensionful quantities and one chooses, let's say, ##(p_i/\mu, m/\mu)## as the dimensionless ones, these will in some sense span the space of combinations (other choices can be expressed in terms of them), so that we can confidently write

$$\Gamma(tp_i, g, m,\mu) = \mu^D F(g, tp_i/\mu, m/\mu)?$$

Btw: The g has dimensions after being renormalized. He defines it as

$$ g = g_0 \mu^D - \frac{g_0^2 \mu^D}{32\pi^2}\times (\text{something dimless})$$

to first order. Is it not then wrong to ignore it in this argument? I would think it would be more correct to say something like

$$\Gamma(tp_i, g, m,\mu) = \mu^D F(g/\mu, tp_i/\mu, m/\mu).$$
 
  • #11
center o bass said:
Thanks for the elaboration!

So then, when one has 3 dimensionful quantities and one chooses, let's say, ##(p_i/\mu, m/\mu)## as the dimensionless ones, these will in some sense span the space of combinations (other choices can be expressed in terms of them), so that we can confidently write

$$\Gamma(tp_i, g, m,\mu) = \mu^D F(g, tp_i/\mu, m/\mu)?$$

Yes. We know that ##\Gamma## has a particular mass dimension, so its form is set by that.

Btw: The g has dimensions after being renormalized. He defines it as

$$ g = g_0 \mu^D - \frac{g_0^2 \mu^D}{32\pi^2}\times (\text{something dimless})$$

to first order. Is it not then wrong to ignore it in this argument? I would think it would be more correct to say something like

$$\Gamma(tp_i, g, m,\mu) = \mu^D F(g/\mu, tp_i/\mu, m/\mu).$$

I would first say that we already made a choice between using bare and renormalized quantities to express ##\Gamma##. In order to get the scaling differential equation, it was important to use the renormalized variables, since the bare parameters should be thought of as being independent of ##\mu##.

Second, I thought ##g## was the ##\phi^4## coupling. If so, how can it scale as a power of ##D##, which is defined for each vertex function in terms of ##n##, its degree?
 
  • #12
This is the trouble with dimensional regularization! The scale enters in a quite hidden, not very intuitive way. That's why I prefer BPHZ renormalization, where it becomes very clear why there must be some renormalization scale involved in the renormalization procedure, and why the renormalized (finite) couplings, wave-function normalizations, and masses depend on this scale. See my QFT manuscript for this treatment:

http://fias.uni-frankfurt.de/~hees/publ/lect.pdf

Dimensional regularization is of course very useful from a more practical perspective, because it preserves many symmetries (except for the trouble with chiral symmetries and [itex]\gamma_5[/itex], [itex]\epsilon_{\mu \nu \rho \sigma}[/itex], etc.). Here you get a scale due to the very fact that the dimension of the couplings depends on the dimension of space-time. In [itex]\phi^4[/itex] theory the coupling constant is dimensionless in four dimensions but must carry dimension [itex]\mu^{4-d}[/itex] in [itex]d[/itex] dimensions, where [itex]\mu[/itex] is some arbitrary parameter of dimension energy/momentum. So to keep the coupling dimensionless you have to substitute [itex]\lambda \rightarrow \mu^{4-d} \lambda = \mu^{\epsilon} \lambda[/itex] (with [itex]\epsilon = 4-d[/itex] as above), and that's how the renormalization scale enters the game in dimensional regularization.
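Explicitly, the counting behind this (a quick sketch): requiring the action [itex]\int d^d x\, \mathcal{L}[/itex] to be dimensionless gives [itex][\phi] = (d-2)/2[/itex] from the kinetic term, and then from the quartic term

[tex][\lambda] + 4[\phi] = d \quad\Longrightarrow\quad [\lambda] = d - 4\,\frac{d-2}{2} = 4 - d,[/tex]

so the dimensionful coupling is naturally written as [itex]\mu^{4-d}[/itex] times a dimensionless number, which is exactly the substitution above.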

This is also explained in my QFT script in some detail. I must also say that I don't understand Ryder's treatment, for the very reasons given in this thread: the simple one-loop four-point function (calculated, e.g., in dim. reg.) shows that Ryder's ansatz is not complete, as pointed out above.
 
  • #13
fzero said:
Yes. We know that ##\Gamma## has a particular mass dimension, so its form is set by that.



I would first say that we already made a choice between using bare and renormalized quantities to express ##\Gamma##. In order to get the scaling differential equation, it was important to use the renormalized variables, since the bare parameters should be thought of as being independent of ##\mu##.

Second, I thought ##g## was the ##\phi^4## coupling. If so, how can it scale as a power of ##D##, which is defined for each vertex function in terms of ##n##, its degree?

Sorry. That was wrong indeed. It had dimensions of ##\mu^\epsilon##, where ##\epsilon = 4-d##. So it is dimensionless in the limit ##\epsilon \to 0##. Since we have expressed ##\Gamma## in terms of the renormalized parameters I guess one considers that this limit has already been taken.
 
  • #14
vanhees71 said:
This is the trouble with dimensional regularization! The scale enters in a quite hidden, not very intuitive way. That's why I prefer BPHZ renormalization, where it becomes very clear why there must be some renormalization scale involved in the renormalization procedure, and why the renormalized (finite) couplings, wave-function normalizations, and masses depend on this scale. See my QFT manuscript for this treatment:

http://fias.uni-frankfurt.de/~hees/publ/lect.pdf

Dimensional regularization is of course very useful from a more practical perspective, because it preserves many symmetries (except for the trouble with chiral symmetries and [itex]\gamma_5[/itex], [itex]\epsilon_{\mu \nu \rho \sigma}[/itex], etc.). Here you get a scale due to the very fact that the dimension of the couplings depends on the dimension of space-time. In [itex]\phi^4[/itex] theory the coupling constant is dimensionless in four dimensions but must carry dimension [itex]\mu^{4-d}[/itex] in [itex]d[/itex] dimensions, where [itex]\mu[/itex] is some arbitrary parameter of dimension energy/momentum. So to keep the coupling dimensionless you have to substitute [itex]\lambda \rightarrow \mu^{4-d} \lambda = \mu^{\epsilon} \lambda[/itex] (with [itex]\epsilon = 4-d[/itex] as above), and that's how the renormalization scale enters the game in dimensional regularization.

This is also explained in my QFT script in some detail. I must also say that I don't understand Ryder's treatment, for the very reasons given in this thread: the simple one-loop four-point function (calculated, e.g., in dim. reg.) shows that Ryder's ansatz is not complete, as pointed out above.

Thanks! I will check out your manuscript and BPHZ (what does that stand for?) when time allows. You mention how the scale enters in dimensional regularization, but why is it necessary that ##g## remain dimensionless? It still seems a bit unmotivated to just multiply the interaction term in the Lagrangian by ##\mu^{4-d}##.
 
  • #15
BPHZ stands for Bogoliubov, Parasiuk (renormalization in the space-time domain), Hepp (renormalization in energy-momentum domain, recursive scheme), and Zimmermann (explicit solution of the recursive scheme, known as "forest formula"). It's all in my manuscript (including the treatment with help of dim. reg.!).

The point of keeping the coupling dimensionless is that in the limit [itex]d \rightarrow 4[/itex] it is dimensionless, and eventually you want to go back to this "physical" space-time dimension. You have to introduce the renormalization scale in dim. reg. for quite technical reasons, particularly so that the arguments of logarithms stay dimensionless, as they always must be. As I said, the way the renormalization scale enters in dim. reg. is not very intuitive.
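To see concretely how the logarithms pick up the scale (a rough sketch): a typical one-loop expression comes with powers of [itex]\mu^{\epsilon}[/itex] multiplying a pole, e.g.

[tex]\lambda \mu^{\epsilon}\left(\frac{1}{\epsilon} + \text{finite}\right) = \lambda\left(\frac{1}{\epsilon} + \ln \mu + \text{finite}\right) + \mathcal{O}(\epsilon),[/tex]

using [itex]\mu^{\epsilon} = 1 + \epsilon \ln\mu + \mathcal{O}(\epsilon^2)[/itex]; dimensional analysis then forces these [itex]\ln\mu[/itex] terms to combine with the other scales into logarithms of dimensionless ratios, which is exactly how [itex]\mu[/itex] shows up inside the [itex]F(s,m,\mu)[/itex] written earlier in the thread.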

A very direct brute-force approach is to introduce a momentum cutoff in the loop integrals, but this breaks Lorentz covariance, which makes it somewhat awkward. Alternatively, you can work in Euclidean field theory (after Wick rotation) with a four-momentum cutoff.

The most physical way is, in my opinion, BPHZ, which doesn't regularize at all but defines the UV-divergent pieces (in [itex]\phi^4[/itex] theory the [itex]n[/itex]-point proper vertex functions up to [itex]n=4[/itex]) at a certain spacelike point in four-momentum space (momentum-subtraction scheme). The renormalization scale is given by this spacelike point.

Another scheme is to define the divergent vertex functions at four-momenta set to 0, but then you have to do this at some mass scale [itex]M>0[/itex] (at least for the vertex functions of superficial divergence degree 0; in [itex]\phi^4[/itex] theory only [itex]\delta=4-E=0[/itex] ([itex]E[/itex]: number of external legs) needs this definition, i.e. the four-point vertex, which is dimensionless in 4 dimensions and contains logarithmic divergences). Here this mass scale [itex]M[/itex] serves as the renormalization scale. The advantage is that the physical mass can then also become 0 without the IR-divergence trouble that the original BPHZ momentum-subtraction scheme brings in, since the latter subtracts all divergences at zero four-momenta on the external legs and therefore only works for mass [itex]m>0[/itex]. This renormalization scheme, which defines all counterterms for the model at [itex]m=M>0[/itex], is called the mass-independent renormalization (MIR) scheme, because here only [itex]M[/itex], and not the physical mass [itex]m[/itex], is the dimensionful quantity entering the counterterms, i.e., they become independent of the physical (renormalized) mass.
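As a concrete illustration of the momentum-subtraction idea (my own sketch, not taken from the manuscript): the logarithmically divergent piece of the one-loop four-point function in [itex]\phi^4[/itex] theory is

[tex]I(p^2) = \int \frac{d^4 \ell}{(2\pi)^4}\, \frac{1}{(\ell^2 - m^2 + i 0^+)\,\big((\ell+p)^2 - m^2 + i 0^+\big)},[/tex]

and since its superficial degree of divergence is zero, a single subtraction at a spacelike renormalization point suffices,

[tex]I_{\text{ren}}(p^2) = I(p^2) - I(-M^2),[/tex]

which is finite (the divergent part is independent of [itex]p[/itex]) and trades the [itex]\mu[/itex] of dim. reg. for the subtraction scale [itex]M[/itex]. In BPHZ proper the subtraction is done on the integrand (as a Taylor expansion in the external momenta), so no intermediate regularization is needed at all.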
 

FAQ: RG equation and invariance of the vertex function under scaling (Ryder)

1. What is the RG equation?

The RG (renormalization group) equation is a mathematical framework used in theoretical physics to study the behavior of physical systems at different length scales. It describes how the properties of a system change as we zoom in or out, and how these changes are related to the energy scale of the system.

2. How does the RG equation relate to the invariance of the vertex function under scaling?

The RG equation is based on the concept of scaling: how the properties of a system change, or stay the same, when we change the length or energy scale at which we observe it. For the vertex function, which describes the interaction between particles, this is not literal invariance but homogeneity: under a rescaling of momenta, masses, and the renormalization scale, the vertex function is reproduced up to an overall factor fixed by its mass dimension, and the scaling equation discussed in this thread expresses exactly that behaviour.

3. What is the significance of the invariance of the vertex function under scaling?

The invariance of the vertex function under scaling is significant because it allows us to make predictions about the behavior of physical systems at different length scales. By studying how the properties of the system change under scaling, we can understand how the system behaves at different energies and make predictions about its behavior at other length scales.

4. How is the RG equation used in practical applications?

The RG equation is used in various fields of theoretical physics, including particle physics, condensed matter physics, and statistical mechanics. It is used to study phase transitions, critical phenomena, and the behavior of complex systems. It is also used in the development of effective theories and the renormalization of physical quantities.

5. Are there any limitations to the RG equation and invariance of the vertex function under scaling?

While the RG equation and the invariance of the vertex function under scaling have been successful in explaining and predicting the behavior of many physical systems, they do have limitations. In some cases, these theories may fail to accurately describe the behavior of certain systems, especially at extreme energy scales. Additionally, the calculations involved in applying these theories can be complex and require simplifying assumptions, which may introduce errors.
