General: Questions about Srednicki's QFT

  • Thread starter haushofer
In summary, the conversation discusses questions about QFT and the Srednicki book. The subject of Feynman diagrams and the functional Z is brought up, as well as the lack of tree-level contributions in \phi^3-theory. There is also a discussion about regularization and the "skeleton expansion" method described in chapter 19. Finally, there is a mention of the LSZ formula and the 't Hooft program, with suggestions for further reading.
  • #36
That's very clarifying. Thanks, RedX!
 
  • #37
Hi everyone,

Just out of curiosity, does anyone know why the second line of eqn (5.10) is valid?

The reason I ask is because that form for the creation operator is derived in eqn (3.21) under the assumption of a free-field theory.

Why is the same form still valid in the interacting-field theory? Srednicki took great care later on (e.g., eqns 5.17, 5.18, 5.19) to make the interacting-field theory give the same result as the free-field theory, but seemed a bit careless in not explaining why you can use eqn. (3.21) for the creation operator in eqn (5.10) for the interacting-field theory.
 
  • #38
RedX said:
Hi everyone,

Just out of curiosity, does anyone know why the second line of eqn (5.10) is valid?

The reason I ask is because that form for the creation operator is derived in eqn (3.21) under the assumption of a free-field theory.

Why is the same form still valid in the interacting-field theory? Srednicki took great care later on (e.g., eqns 5.17, 5.18, 5.19) to make the interacting-field theory give the same result as the free-field theory, but seemed a bit careless in not explaining why you can use eqn. (3.21) for the creation operator in eqn (5.10) for the interacting-field theory.

"Let us guess that this still works in the interacting theory as well. One complication is that a^dagger (vec k) becomes time dependent..."

i.e., we define a to act in the same way, but now time dependent due to the interactions.

See Weinberg for further information.
 
  • #39
ansgar said:
"Let us guess that this still works in the interacting theory as well. One complication is that a^dagger (vec k) becomes time dependent..."

i.e., we define a to act in the same way, but now time dependent due to the interactions.

See Weinberg for further information.

So the free-field is given by the Fourier expansion:

[tex]\phi(x)=\int d^3 \tilde k [a(k)e^{ikx}+a^\dagger(k)e^{-ikx}][/tex]

where k is on-shell and [tex]d^3 \tilde k=\frac{d^3k}{2E_k}[/tex].

Adding time dependence to the coefficients leads to:

[tex]\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}][/tex]

However, deriving this:

[tex]a(k)=i\int d^3x\, e^{-ikx}\, \bar \partial_0 \phi(x) [/tex]

where [tex]A\bar \partial_0 B=A\partial_0 B-B\partial_0 A [/tex]

only works when a(k,t)=a(k), i.e., when a(k) is not a function of time.
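As an aside, the free-field case of this inversion can be checked explicitly. A sketch, using Srednicki's measure [tex]\widetilde{dq} = d^3q/[(2\pi)^3 2E_q][/tex] (the shorthand above suppresses the (2π)³) and the (-+++) signature, with both momenta on shell:

```latex
% Free-field check: the creation-operator term drops out because its
% coefficient (E_q - E_k) vanishes on the support of \delta^3(\vec q + \vec k).
\begin{align*}
i\int d^3x\; e^{-ikx}\,\bar\partial_0\,\phi(x)
 &= i\int d^3x\!\int\!\widetilde{dq}\,\Big[-i(E_q+E_k)\,a(q)\,e^{i(q-k)x}
     \;+\; i(E_q-E_k)\,a^\dagger(q)\,e^{-i(q+k)x}\Big] \\
 &= \int\!\widetilde{dq}\;(2\pi)^3\Big[(E_q+E_k)\,a(q)\,\delta^3(\vec q-\vec k)
     \;-\;(E_q-E_k)\,a^\dagger(q)\,\delta^3(\vec q+\vec k)\,e^{2iE_k t}\Big] \\
 &= a(k).
\end{align*}
```

The time-dependent [tex]a^\dagger[/tex] piece survives only at [tex]\vec q = -\vec k[/tex], where its coefficient [tex]E_q - E_k[/tex] vanishes; with time-dependent coefficients a(q,t) the extra [tex]\partial_0 a[/tex] terms are not removed this way, which is exactly the worry raised above.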

Does this mean that in the interacting theory, [tex]\phi(x) [/tex] can't be written as:


[tex]\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}][/tex]
 
  • #40
Yes, you can still write the field phi like that -- it is simply the Fourier transform of the field phi. Remember that the field operator phi satisfies the equations of motion. In the free case these equations are linear in the field. When you take the Fourier transform of the field phi, the components a(k,t) also satisfy an equation of motion. What's nice about a free field theory is that all these equations of motion for the modes a(k,t) decouple and can be solved separately -- this is where the phase factor exp[iw_k t] comes from. So in a free field theory you have truly solved the time dependence of the operator / Fourier component a(k,t).

But in the interacting case the field obeys the interaction version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence the equations of motions of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

But to get back to your question: the a(k,t) are the Fourier components of the field phi at time t. You can always define those. But only in the free field case do you have a simple relation between a(k,t_1) and a(k,t_2). This can be traced back to the decoupling of the equations of motions for the Fourier components. For the interacting case you need perturbation theory.
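The decoupled-vs-coupled point can be illustrated with a purely classical toy model (this is only an analogue of the mode equations, not the operator problem, and the specific cubic coupling below is invented for illustration): with the coupling switched off, each mode just rotates by a phase and its modulus is exactly preserved; switching the coupling on mixes the amplitudes.

```python
# Classical toy analogue of field modes: free modes obey da/dt = -i*w*a
# (solved by a pure phase), while a nonlinear coupling mixes them.
import numpy as np

def evolve(a0, omega, g, dt=1e-3, steps=2000):
    """RK4 integration of the toy system
       da1/dt = -i w1 a1 - i g conj(a2)^2
       da2/dt = -i w2 a2 - i g conj(a1) conj(a2)
       (the coupling terms are made up purely for illustration)."""
    a = np.array(a0, dtype=complex)
    w = np.asarray(omega, dtype=float)

    def rhs(a):
        return np.array([-1j*w[0]*a[0] - 1j*g*np.conj(a[1])**2,
                         -1j*w[1]*a[1] - 1j*g*np.conj(a[0])*np.conj(a[1])])

    for _ in range(steps):
        k1 = rhs(a); k2 = rhs(a + 0.5*dt*k1)
        k3 = rhs(a + 0.5*dt*k2); k4 = rhs(a + dt*k3)
        a = a + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return a

free    = evolve([1.0, 0.5], [1.0, 2.0], g=0.0)  # decoupled: a(t) = a(0) e^{-i w t}
coupled = evolve([1.0, 0.5], [1.0, 2.0], g=0.5)  # coupled: amplitudes mix
print(abs(free[0]), abs(coupled[0]))  # free modulus stays exactly 1.0
```

In the free case the time dependence is solved once and for all by the phase factor; with the coupling on, the amplitude of each mode depends on the history of the others, which is the classical shadow of the operator hierarchy described above.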
 
  • #41
xepma said:
So in a free field theory you have truly solved the time dependence of the operator / Fourier component a(k,t).

But in the interacting case the field obeys the interaction version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence the equations of motions of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

So for the free field:

[tex] a(k,t)e^{i\vec{k}\cdot\vec{x}}=a(k)e^{-i\omega t}e^{i\vec{k}\cdot\vec{x}}=a(k)e^{ik_\mu x^\mu} [/tex]

but in general the field [tex]\phi(x,t)[/tex] is a linear combination of:

[tex] a(k,t)e^{ikx}[/tex] and hermitian conjugate.

This sounds good, and mathematically is correct, but the only problem I have with it is this equation seems no longer true:

[tex]a(k,t)=i\int d^3x\, e^{-ikx}\, \bar \partial_0 \phi(x,t) [/tex]

i.e., solving backwards for a(k,t) in terms of [tex]\phi(x,t) [/tex]

I know you said that solving for a(k,t) is unsolvable in the interacting case, as the equations are nonlinear so a(k,t) depends not only on coefficients at past times but also coefficients with different momenta. But I think you were referring to a simple time dependence like a(k,t)=(sin t)^3 t^2 log(t) a(k) . However, can you write a(k,t) not in terms of a definite function of t, but in terms of the unknown interacting field [tex]\phi(x,t) [/tex]?
According to Srednicki, you can, and the answer is the same as in the free-field case: [tex]a(k,t)=i\int d^3x\, e^{-ikx}\, \bar \partial_0 \phi(x,t) [/tex]

except now the field [tex]\phi(x,t) [/tex] is interacting and not free. I'm not sure how this is true in the interacting case.
 
  • #42
What happens if you actually do the math for the RHS of that equation? What does it become?
Use

[tex]
\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]
[/tex]

which according to xepma is true

where now the a is the annihilation operator for the true vacuum [tex] |\Omega \rangle [/tex]
 
  • #43
ansgar said:
What happens if you actually do the math for the RHS of that equation? What does it become?
Use

[tex]
\phi(x)=\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]
[/tex]

which according to xepma is true

where now the a is the annihilation operator for the true vacuum [tex] |\Omega \rangle [/tex]

Sure. But I should say that I was a bit careless with the notation. In some contexts e^(ikx) is the contraction of 4-vectors, and in others it is the contraction of 3-vectors. The formula that xepma is referring to is, I believe, the 3-vector case. Also, I'm using the (-+++) signature.

[tex]
\int d^3x e^{i\vec{k}\cdot\vec{x}} \bar \partial_0 \phi(x)
[/tex]

The time derivative acting on [tex]e^{i\vec{k}\cdot\vec{x}}[/tex] is zero, since the 3-vector exponential has no time dependence. So this expression becomes

[tex]
\int d^3x e^{i\vec{k}\cdot\vec{x}} \partial_0 \phi(x)
= \int d^3x e^{i\vec{k}\cdot\vec{x}} \partial_0 [\int d^3 \tilde k [a(k,t)e^{ikx}+a^\dagger(k,t)e^{-ikx}]] [/tex]

and I don't see how one can get rid of the time derivative of the creation and annihilation operators to get just the creation and annihilation operators without any derivatives.

So the expression is not equal to just a(k,t).
----------------------------------------------------------------------------------------
correction:

Actually, I got everything mixed up, so ignore everything above this correction. Here's the new post:

[tex]
\int d^3x e^{-ikx} \bar \partial_0 \phi(x)
[/tex]

So taking the time derivatives, this expression becomes

[tex]
i \int d^3x k_0e^{-ikx} \phi(x)
+\int d^3x e^{-ikx} \partial_0 [\int d^3 \tilde k [a(k,t)e^{i\vec{k}\cdot\vec{x}}+a^\dagger(k,t)e^{-i\vec{k}\cdot\vec{x}}]] [/tex]

But this, to me, runs into the same problem: you'll get time derivatives of the creation and annihilation operators, so there is no way to get just the creation and annihilation operators without time derivatives.
 
Last edited:
  • #44
Never mind. I got it. It wasn't exactly pretty, so I probably didn't do it the best way, so I won't write the details here.

Basically you have this:
(1) [tex]

\int d^3x e^{-ikx} \bar \partial_0 \phi(x)

[/tex]
and for the time derivative of [tex]\phi(x)[/tex], use:

[tex]\dot{\phi}=i[H,\phi]=iH\phi-i\phi H [/tex]

Then show that (1) operating on |0> gives zero, (1) operating on [tex] a^\dagger(q)|0>[/tex] is zero unless q=k, in which case you get just |0>.

I think that's enough to prove that (1) = a(k)
 
  • #45
I already posted this in the homework/course section but got no reply, so I'm crossposting here. (Sorry for that.)


Problem with the ordering of integrals in the derivation of the Lehmann-Källén form of the exact propagator in Srednicki's book.

We start with the definition of the exact propagator in terms of the 2-point correlation function and introduce the complete set of momentum eigenstates and then define a certain spectral density in terms of a delta function. But the spectral density is also a function of 'k', so we cannot take the spectral density outside the integral over 'k'. Since that is not possible, the subsequent manipulations fail too.


2. Homework Equations

In Srednicki's book :
Equation 13.11 and 13.12

If that is incorrect, the use of 13.15 to get 13.16 is not possible.

3. The Attempt at a Solution

I don't see how it is possible to derive the equation without that interchange.

I'd appreciate any clarifications on this issue. Am I missing some trivial thing?
 
  • #46
No, the spectral density is only a function of s.

use eq. 13.9

we get

|< k,n | phi(0) | 0 >|^2 which is just a (complex) number.
 
  • #47
Sorry, I still do not get it. Isn't [tex] |<k,n|\phi(0)|0>|^{2} [/tex] dependent on 'k'? Could you please elaborate?
 
  • #48
msid said:
Sorry, I still do not get it. Isn't [tex] |<k,n|\phi(0)|0>|^{2} [/tex] dependent on 'k'? Could you please elaborate?

you might want to go back to basic QM...

it is a number since phi(0) is a number

here is a good review of those particular chapters from srednicki

www.physics.indiana.edu/~dermisek/QFT_09/qft-II-1-4p.pdf
 
  • #49
[tex]\sum_n |\langle k,n|\phi(0)|0\rangle|^{2} [/tex] depends on k, but not on any other 4-vectors. Since it is a scalar, it can depend only on k^2 = -s.
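For reference, the end point of that manipulation (if I'm remembering the chapter correctly; the exact equation number may differ) is the Lehmann-Källén form of the exact propagator:

```latex
% Lorentz invariance forces the multiparticle matrix elements to depend on
% k only through s = -k^2, so they can be traded for a spectral density
% \rho(s) >= 0, giving (Srednicki's (-+++) conventions)
\begin{equation*}
\tilde\Delta(k^2) \;=\; \frac{1}{k^2+m^2-i\epsilon}
 \;+\; \int_{4m^2}^{\infty} ds\;\frac{\rho(s)}{k^2+s-i\epsilon}.
\end{equation*}
```

The isolated pole at [tex]k^2=-m^2[/tex] comes from the one-particle states, and the continuum starting at [tex]s=4m^2[/tex] from the multiparticle states.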
 
  • #50
ansgar said:
you might want to go back to basic QM...

it is a number since phi(0) is a number

here is a good review of those particular chapters from srednicki

www.physics.indiana.edu/~dermisek/QFT_09/qft-II-1-4p.pdf

[tex] \phi(0) [/tex] is not a number; it is an operator at a specified location in spacetime, which in this case is the origin.

Avodyne said:
[tex]\sum_n |\langle k,n|\phi(0)|0\rangle|^{2} [/tex] depends on k, but not on any other 4-vectors. Since it is a scalar, it can depend only on k^2 = -s.

It makes sense that it can only depend on k^2 and k^2 = -M^2, which we are summing over. This is acceptable if the interchange of the summation over 'n' and the integral over 'k' is valid. Thanks a lot for the clarification, Avodyne.
 
  • #51
This is a great thread; I really need to read it thoroughly when I get the chance. I've just got through the spin-zero part of Srednicki and started on the spin-1/2 stuff; however, the group representation stuff is fazing me a bit here: all this stuff about the (1,2) representation, the (2,2) vector rep, etc. I was wondering if anyone could explain what this means, or recommend any good books/online references that go through this stuff?
 
  • #52
1 means singlet, 2 means doublet; it is "just" like adding two spin-1/2 particles, same algebra.
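Slightly expanded (a standard dictionary rather than Srednicki's exact notation): the Lorentz algebra splits into two commuting su(2)'s, so its irreps are labeled by a pair of spins (j₁, j₂), and the labels above are the pair of dimensions (2j₁+1, 2j₂+1):

```latex
\begin{align*}
(1,1) &\;\leftrightarrow\; (0,0): \ \text{scalar} \\
(2,1) &\;\leftrightarrow\; (\tfrac{1}{2},0): \ \text{left-handed Weyl spinor} \\
(1,2) &\;\leftrightarrow\; (0,\tfrac{1}{2}): \ \text{right-handed Weyl spinor} \\
(2,2) &\;\leftrightarrow\; (\tfrac{1}{2},\tfrac{1}{2}): \ \text{4-vector},
\qquad (2,1)\otimes(1,2) = (2,2).
\end{align*}
```

So "(2,2) is the vector rep" is the statement that a 4-vector index is equivalent to one undotted plus one dotted spinor index, exactly like combining two spin-1/2's.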
 
  • #53
xepma said:
But in the interacting case the field obeys the interaction version of the equations of motion, which is non-linear (there's a phi^3 term present in these equations). As a consequence the equations of motions of all the a(k,t) are coupled and highly non-linear. It becomes practically impossible to solve these, so there is no way to tell how a(k,t) at a later time depends on the a(k', t') at an earlier time. In fact, the only way to try to resolve the time-dependent structure of a(k,t) is through perturbation theory.

I have a question about this. Suppose you can solve for a(k,t) in an interacting theory. Does this mean you can calculate scattering amplitudes at finite times? So say I begin at t=-10, and want to figure out the probability amplitude of observing a state at t=129. Then can I say this is equal to:

[tex] <0|a(k_{final},t=129) a^\dagger(k_{initial},t=-10)|0>[/tex]

But I'm having trouble picturing this. Doesn't the Fock space get screwed up, because if you begin with one particle, all sorts of things are happening such as loops involving other particles. In other words, don't you have to extend your Fock space for virtual particles that might be off-shell?

In perturbation theory, is there an assumption that at t=+-infinity, that all interactions are turned off? Otherwise, wouldn't the Fock space have to include off-shell momenta?
 
  • #54
RedX said:
Never mind. I got it. It wasn't exactly pretty, so I probably didn't do it the best way, so I won't write the details here.

Basically you have this:
(1) [tex]

\int d^3x e^{-ikx} \bar \partial_0 \phi(x)

[/tex]
and for the time derivative of [tex]\phi(x)[/tex], use:

[tex]\dot{\phi}=i[H,\phi]=iH\phi-i\phi H [/tex]

Then show that (1) operating on |0> gives zero, (1) operating on [tex] a^\dagger(q)|0>[/tex] is zero unless q=k, in which case you get just |0>.

I think that's enough to prove that (1) = a(k)

I'm not convinced this is enough. You would need to see if it holds for all possible states.

I did, however, find a different way of writing the inverse,

[tex]a(k,t) = \int d^3x\, e^{-ikx+i\omega t}\left[\omega \phi(x,t) + i \Pi(x,t)\right][/tex]

where [tex]\Pi(x,t)[/tex] is the field conjugate to [tex]\phi(x,t)[/tex] (which is just the time-derivative). Now the mode expansion of the conjugate field is

[tex]\Pi(x,t) = -i \int\frac{d^3k}{(2\pi)^3(2\omega)}\, \omega\left[a(k,t) e^{ikx-i\omega t} - a^\dag(k,t) e^{-ikx+i\omega t}\right][/tex]

This is the same expansion as in the non-interacting case, but again the modes pick up a time dependence. The expansion is restricted to this form by the equal-time commutation relations between the field phi and its conjugate. So, in conclusion, yes, the relation should hold in the interacting case.

I probably ran into the same problem as you: acting with the time derivative on the field phi generates time derivatives of a and a^dag as well. But I think the resolution lies in the fact that the basis of modes is complete, so these time derivatives can be written as a linear sum over the modes a and a^\dag as well.
 
  • #55
RedX said:
I have a question about this. Suppose you can solve for a(k,t) in an interacting theory. Does this mean you can calculate scattering amplitudes at finite times? So say I begin at t=-10, and want to figure out the probability amplitude of observing a state at t=129. Then can I say this is equal to:

[tex] <0|a(k_{final},t=129) a^\dagger(k_{initial},t=-10)|0>[/tex]

Yes, you can solve the correlators in that case.

But I'm having trouble picturing this. Doesn't the Fock space get screwed up, because if you begin with one particle, all sorts of things are happening such as loops involving other particles. In other words, don't you have to extend your Fock space for virtual particles that might be off-shell?

You don't have to do anything with your Fock space. The virtual particles are intermediate states that pop up when you perturbatively try to determine the time dependence of the fields / the modes; they are a mathematical construction of perturbation theory. If you could solve the time dependence exactly, you wouldn't need these virtual particles.

Let me give an example by the way of why the time-evolution of the mode operators is problematic. The [tex]\phi^4[/tex] interaction looks like:

[tex]H_I = \int d^3x \phi^4(x) [/tex]

Written in terms of the modes this is something like

[tex]H_I = \int d^3 k_1\, d^3k_2\, d^3k_3\, d^3k_4\, (2\pi)^3\delta^3(k_1+k_2-k_3-k_4)\, a^\dag_{k_1}a^\dag_{k_2} a_{k_3} a_{k_4}[/tex]
This is probably not completely correct, but the point is that the interaction is given by (a sum over) a product of two creation and two annihilation operators, subject to momentum conservation.

Now, the time evolution of the mode operator a is given by (Heisenberg picture)

[tex]-i\hbar \partial_t a = [H_0 + H_I,a][/tex]

Now the commutator of a with the interaction terms H_I will generate (a sum over) a product of three mode operators, [tex]a^\dag a a[/tex]. To solve it we need the time dependence of this product. You guessed it, this is given by:

[tex]-i\hbar \partial_t (a^\dag a a) = [H_0 + H_I,a^\dag a a][/tex]

But this commutator generates terms with an even larger product of mode operators! Which, in turn, also determine the time evolution of the operator a. And so the problem is clear: the time evolution of a product of mode operators is determined by the time dependence of an even larger product of mode operators.

I hope I'm not being too vague here... ;)

In perturbation theory, is there an assumption that at t=+-infinity, that all interactions are turned off? Otherwise, wouldn't the Fock space have to include off-shell momenta?

Well, you're touching on something deep here. The assumption is, indeed, that at t= +/- infinity the theory is non-interacting. At these instances we can construct the Fock space, and assume the Fock space is the same for the interacting case. It's not a pretty assumption at all that these two Fock spaces (interacting vs non-interacting) are the same, and --as far as I know-- it's not clear it should hold. But I don't think there's a way around this at the moment... If you can define the Fock space directly in your interacting theory, that would be great. But I got no clue on how to do that.
 
  • #56
xepma said:
I'm not convinced this is enough. You would need to see if it holds for all possible states.

Using the (-+++) metric and [tex]\dot{\phi}=i[H,\phi]=iH\phi-i\phi H[/tex]:

[tex] i\int d^3x e^{-ikx} \bar \partial_0 \phi(x)=
i[i\int d^3x e^{-ikx}H\phi(x)-i\int d^3x e^{-ikx}\phi(x)H-i\int d^3x e^{-ikx} E_k\phi(x)] [/tex]

Using that [tex]\phi(x)=\int d^3 \tilde q [a(q,t)e^{iqx}+a^\dagger(q,t)e^{-iqx}][/tex] this becomes:

[tex]
=-H[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}]
[/tex]
[tex]
+[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}]H
[/tex]
[tex]
+E_k[\frac{a(k,t)}{2E_k}+\frac{a^\dagger(-k,t)}{2E_k}e^{2iE_kt}]
[/tex]

Now consider the action of this on a state |M> with energy M. The 2nd terms (the creation operators) in each of the lines cancel, because the Hamiltonian in the first line pulls out an energy of -(M+Ek), the second line pulls out an energy of M, and the third line just gives Ek: these add to zero. So far so good, so examine the 1st terms (the destruction operators). Consider two cases: |M> contains the particle |k>, or it does not. If it does not, then all the destruction operators produce zero acting on |M>. If |M> contains |k>, then the Hamiltonian brings out a -(M-Ek) in the 1st line, a +M in the second line, and an Ek in the 3rd line. These add to 2Ek, and multiplying by the factor [tex]\frac{a(k,t)}{2E_k} [/tex], this leaves just the destruction operator.

Anyways, there are some subtleties that I'm bothered by, but I'm convinced that [tex]a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) [/tex] is still true in an interacting theory. What's remarkable is if you plug this expression in for a(k,t) to calculate the commutator of the a's, and use the canonical commutation relations, then you get the standard equal time commutation relations for a(k,t):

[tex][a(k,t),a^\dagger(q,t)]=\delta^3(k-q)(2\pi)^32E_k [/tex]

All that's required is that [tex]\Pi=\frac{\partial \mathcal L}{\partial \dot{\phi}}=\dot{\phi} [/tex], so that you can identify the time-derivative terms in [tex]a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) [/tex] as the canonical momentum, and the commutation relation is then easy to evaluate.

So basically, we're pretty much required to have the Lagrangian be no more than 2nd order in time derivatives, so that the canonical momentum is just the time derivative of the field; otherwise there are no equal-time commutation relations for the creation operators.

So basically the relation [tex]a(k,t)= i\int d^3x e^{-ikx} \bar \partial_0 \phi(x) [/tex] is actually more fundamental than the Lagrangian!
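A compact version of that check, in the thread's conventions and using only the canonical commutator [tex][\phi(\vec x,t),\Pi(\vec y,t)] = i\delta^3(\vec x-\vec y)[/tex] (a sketch, not a complete proof):

```latex
\begin{align*}
a(\vec k,t) &= \int d^3x\; e^{-ikx}\left[E_k\,\phi(x) + i\,\Pi(x)\right],\\
[a(\vec k,t),\,a^\dagger(\vec q,t)]
  &= \int d^3x\,d^3y\; e^{-ikx}\,e^{iqy}\,(E_k+E_q)\,\delta^3(\vec x-\vec y)\\
  &= (2\pi)^3\, 2E_k\,\delta^3(\vec k-\vec q).
\end{align*}
```

Nothing in this computation uses the equations of motion, which is why the equal-time algebra of the a's survives the interactions.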

Anyways, I was reading Srednicki again, and he showed that [tex]a^\dagger(k,t) [/tex] can actually create multiparticle states when acting on the vacuum! This is equation (5.23). However, as t goes to +- infinity, you don't have this happening.

This is interesting stuff, but I heard that getting into more detail on this stuff takes constructive quantum field theory.
 
Last edited:
  • #57
The notion of a (so-called real) particle is ambiguous at "finite" time, basically due to the HUP. Or, to say it the other way around, particles become well-defined after enough time has passed in order to subdue the HUP. The poles of the correlation function are (what should be) interpreted as particles, not the lines on Feynman diagrams. In the spirit of QM, only a superposition of different particle number states, not a definite particle number state, exists at finite time.
 
  • #58
Equation 11.20 of Srednicki's book is the expression for the probability per unit time for the scattering of two particles. It is equal to a Lorentz invariant part times a non-Lorentz invariant part, and the non-Lorentz invariant part is:

[tex]\frac{1}{E_1E_2V}[/tex]

I'm having trouble seeing how aliens on a spaceship observing the LHC will see everything slowed down by [tex]\frac{1}{\sqrt{1-v^2}} [/tex], where v is the velocity of the Earth relative to their spaceship.

In equation 11.48 for the decay rate it's obvious this is true, that aliens will observe a decay time longer by [tex]\frac{1}{\sqrt{1-v^2}} [/tex].

But is there a quick way to verify that [tex]\frac{1}{E_1E_2V}[/tex] divided by [tex]\frac{1}{E_1'E_2'V'}[/tex] is equal to [tex]\frac{1}{\sqrt{1-v^2}} [/tex] in the primed frame?
 
  • #59
Okay, I sort of intuitively derived it, after I read Weinberg's section on cross-sections.

First of all, I'm a bit shocked that if you take the probability per unit time, and divide by the flux, you get something Lorentz invariant. Weinberg on page 138 of Vol. 1, says: "It is conventional to define the cross-section to be a Lorentz-invariant function of 4-momenta."
Seriously, is that how the cross-section is defined? I thought it would be defined by the experimentalist reporting his or her results by dividing by the flux, because that's all that they can do! By sheer luck when you do that, you get something that's Lorentz-invariant!

Anyways, dividing by the flux, you get a now Lorentz-invariant part that looks like this:

[tex]\frac{1}{E_1E_2V}\frac{V}{u} [/tex]

where [tex]\frac{u}{V} [/tex] is the flux, and u is the relative velocity defined as:

[tex]u=\frac{\sqrt{(p_1\cdot p_2)^2-m_{1}^2m_{2}^2}}{E_1E_2} [/tex]
in 3.4.17 of Weinberg.

Now examine the [tex]\frac{V}{u} [/tex] term.

If you are boosting away from the COM frame, V undergoes a length contraction, and also the length numerator of u undergoes a length contraction. These cancel. But the time denominator of u undergoes a time dilation, so overall [tex]\frac{V}{u} [/tex] increases. That means
[tex]\frac{1}{E_1E_2V}[/tex] must decrease, since their product is Lorentz-invariant. So the probability per unit time is smaller in a boosted frame, which is time dilation.

My special relativity is really shoddy, so I just assumed that the COM frame of two particles is like a rest frame of one particle, so that a boost away from this frame results in a length contraction and a time dilation. Does anyone know how to do this more rigorously (legitimately)?
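As a small numeric sanity check of the pieces involved: the flux factor [tex]F=\sqrt{(p_1\cdot p_2)^2-m_1^2m_2^2}=E_1E_2\,u[/tex] from Weinberg's 3.4.17 can be verified to be unchanged by a boost (a short script; the momenta and boost velocity below are arbitrary choices). Since F is built entirely from invariants it is unchanged by any boost; it is the flux interpretation u/V that requires collinear momenta, as discussed below.

```python
# Check (signature (-+++)): F = sqrt((p1.p2)^2 - m1^2 m2^2) is unchanged
# by a longitudinal boost of both incoming momenta.
import math

def dot(p, q):        # Minkowski product, (-+++) signature
    return -p[0]*q[0] + sum(a*b for a, b in zip(p[1:], q[1:]))

def boost_z(p, v):    # boost with velocity v along the z axis
    g = 1.0 / math.sqrt(1.0 - v*v)
    E, px, py, pz = p
    return (g*(E - v*pz), px, py, g*(pz - v*E))

def flux(p1, p2, m1, m2):
    return math.sqrt(dot(p1, p2)**2 - (m1*m2)**2)

m1, m2 = 1.0, 2.0
# head-on collision along z (arbitrarily chosen 3-momenta, on shell)
p1 = (math.sqrt(m1**2 + 9.0), 0.0, 0.0,  3.0)
p2 = (math.sqrt(m2**2 + 4.0), 0.0, 0.0, -2.0)

F_lab     = flux(p1, p2, m1, m2)
F_boosted = flux(boost_z(p1, 0.6), boost_z(p2, 0.6), m1, m2)
print(F_lab, F_boosted)   # agree to rounding error
```

Since F is invariant while the probability per unit time carries the extra 1/(E1 E2 V), the frame dependence of the rate is exactly the frame dependence of 1/(E1 E2 V) times the flux, which is the bookkeeping done in the post above.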
 
  • #60
I think that you must restrict to the case pA.pB = -|pA||pB| (where A and B are the incoming particles), and therefore restrict the set of Lorentz transformations to rotations and only longitudinal boosts. I'm pretty sure that's what Weinberg means, but I don't have his book, so I can't check. The cross section is invariant to longitudinal boosts, but not transverse boosts.
 
Last edited:
  • #61
turin said:
I think that you must restrict to the case pA.pB = -|pA||pB| (where A and B are the incoming particles), and therefore restrict the set of Lorentz transformations to rotations and only longitudinal boosts. I'm pretty sure that's what Weinberg means, but I don't have his book, so I can't check. The cross section is invariant to longitudinal boosts, but not transverse boosts.

I sort of read the same thing on some lecture notes for experimentalists. Basically, they say that since the cross-section is an area, a boost perpendicular to the area results in no change. That would seem to imply that if the boost is not perpendicular to the area, then the dimension of the area in the direction of the boost should get length contracted, decreasing the cross-section.

But Weinberg's expression for the cross section is Lorentz-invariant to boosts in any direction:

[tex]d\sigma=\frac{1}{4\sqrt{(k_1\cdot k_2)^2-m_{1}^2m_{2}^2}}
|\mathcal T|^2dLIPS_n(k_1+k_2) [/tex]

where all vectors in that expression are 4-vectors, and [tex]k_1,k_2 [/tex] are the incoming 4-momenta of the two colliding particles.
 
  • #62
I have a number of basic conceptual questions from chapter 5 on the LSZ formula. Firstly, Srednicki defines [tex] a^{\dag}_1 :=\int d^3k f_1(\vec{k})a^{\dag}(\vec{k})[/tex], where [tex] f_1(\vec{k}) \propto \exp[-(\vec{k}-\vec{k}_1)^2/4\sigma^2] [/tex].

I understand how this creates a particle localized in momentum space near [tex] \vec{k}_1 [/tex], as it is effectively a weighted sum over momenta, but I don't understand how this endows the particle with any kind of position, let alone a position near the origin, as Srednicki states.

Also, Srednicki asks us to consider the state [tex] a^{\dag}_{1}\mid 0 \rangle[/tex]. Why exactly does this state propagate and spread out, and why is it localized far from the origin as [tex] t \rightarrow \pm \infty [/tex]?

I follow Srednicki's mathematics leading to 5.11, but I can't see why the creation ops would no longer be time independent in an interacting theory or why indeed there would be any issue in assuming that free field creation ops would work comparably in an interacting theory.
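The localization question has a concrete numeric answer in a 1-d toy version of Srednicki's wave packet (all parameters below are arbitrary choices, not Srednicki's): a real Gaussian f(k) centered at k1 Fourier-transforms to a packet centered at x = 0 at t = 0 (the transform of a real Gaussian peaks at the origin), and the relativistic dispersion then carries the peak along at the group velocity while it spreads.

```python
# 1-d illustration: phi(x,t) = int dk f(k) exp[i(k x - w(k) t)],
# with f(k) ~ exp[-(k-k1)^2/(4 sigma^2)] and w(k) = sqrt(k^2 + m^2).
import numpy as np

m, k1, sigma = 1.0, 2.0, 0.2
k = np.linspace(k1 - 5*sigma, k1 + 5*sigma, 1001)
dk = k[1] - k[0]
f = np.exp(-(k - k1)**2 / (4*sigma**2))
w = np.sqrt(k**2 + m**2)

def packet_abs(x, t):
    # |phi(x,t)| via a Riemann sum over the k grid
    phase = np.exp(1j*(np.outer(x, k) - w*t))
    return np.abs((f * phase).sum(axis=1) * dk)

x = np.linspace(-5.0, 40.0, 901)
peak0  = x[np.argmax(packet_abs(x, 0.0))]
peak15 = x[np.argmax(packet_abs(x, 15.0))]
v_g = k1 / np.sqrt(k1**2 + m**2)   # group velocity of the packet
print(peak0, peak15, v_g*15.0)     # peak0 near 0; peak15 near v_g * 15
```

This is exactly the behavior Srednicki needs for LSZ: at large |t| the packet is far from the origin, so in/out particles built this way are well separated before and after the collision.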
 
  • #63
I also have a few questions regarding equations from Srednicki's book, though I'm afraid they are all rather trivial.

A draft version of the book can be found at http://www.physics.ucsb.edu/~mark/ms-qft-DRAFT.pdf

First, why is equation 14.33, [itex] {A^{\varepsilon /2}} = 1 + \frac{\varepsilon }{2}\ln A + O({\varepsilon ^2})[/itex] true?

Second, equations 28.19 till 28.21, where it is argued that from
[itex]0 = \Big(1 + \frac{\alpha G_1'(\alpha)}{\varepsilon} + \frac{\alpha G_2'(\alpha)}{\varepsilon^2} + \dots\Big)\frac{d\alpha}{d\ln\mu} + \varepsilon\alpha[/itex]

and some physical reasoning in the paragraph below this equation, we should have [itex]\frac{d\alpha}{d\ln\mu} = -\varepsilon\alpha + \beta(\alpha)[/itex]

Now, I do not understand how the terms on the RHS of this equation are fixed. The text says the beta function is determined by matching the [itex]O(\varepsilon^0)[/itex] terms, which should give [itex]\beta(\alpha) = \alpha^2 G_1'(\alpha)[/itex].

What [itex]O({\varepsilon ^0})[/itex] terms? How to match them?
Also, what are the [itex]O({\varepsilon})[/itex] terms?

thank you
 
Last edited by a moderator:
  • #64
Lapidus said:
why is equation 14.33, [itex] {A^{\varepsilon /2}} = 1 + \frac{\varepsilon }{2}\ln A + O({\varepsilon ^2})[/itex] true?
[tex]A^{\varepsilon /2}=\exp[({\varepsilon /2})\ln A].[/tex]

Now expand the exponential in powers of [itex]\varepsilon[/itex].
Lapidus said:
What [itex]O({\varepsilon ^0})[/itex] terms? How to match them? Also, what are the [itex]O({\varepsilon})[/itex] terms?
Take the equation

[itex]\frac{{d\alpha }}{{d\ln \mu }} = - \varepsilon \alpha + \beta (\alpha )[/itex]

and plug it into

[itex]0 = \Big(1 + \frac{\alpha G_1'(\alpha)}{\varepsilon} + \frac{\alpha G_2'(\alpha)}{\varepsilon^2} + \dots\Big)\frac{d\alpha}{d\ln\mu} + \varepsilon\alpha.[/itex]

This is supposed to be true for all [itex]\varepsilon[/itex], so if we do a Laurent expansion in powers of [itex]\varepsilon[/itex], the coefficient of each power of [itex]\varepsilon[/itex] must be zero. The coefficient of [itex]\varepsilon^0[/itex] (that is, the constant term with no powers of [itex]\varepsilon[/itex]) is

[tex]\alpha^2 G'_1(\alpha)-\beta(\alpha),[/tex]

so this must equal zero.
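Spelled out, the substitution and Laurent expansion look like this (note the O(ε) pieces, -εα and +εα, cancel identically):

```latex
\begin{align*}
0 &= \Big(1 + \frac{\alpha G_1'}{\varepsilon} + \frac{\alpha G_2'}{\varepsilon^2}
      + \dots\Big)\big(-\varepsilon\alpha + \beta\big) + \varepsilon\alpha \\
  &= \underbrace{\big(\beta - \alpha^2 G_1'\big)}_{\varepsilon^0\ \text{coefficient}}
   \;+\; \frac{1}{\varepsilon}\big(\alpha G_1'\,\beta - \alpha^2 G_2'\big) + \dots
\end{align*}
```

Setting the ε⁰ coefficient to zero gives β = α²G₁'; the 1/ε coefficient then relates G₂' to G₁', and so on down the tower.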
 
  • #65
Thanks, Avodyne!

I got two more. (Actually, I have plenty more.)

In post #13 of this thread, https://www.physicsforums.com/showpost.php?p=2331156&postcount=13 , Haushofer asks how we get from 27.11 to 27.12.

I got this far: [itex]\ln m_{ph} = \ln m + \frac{1}{2}\ln\left[\frac{5}{12}\,2\alpha\left(\ln(\mu/m) + c'\right)\right] + \frac{1}{2}\ln O(\alpha^2)[/itex], but what now?

Also, in chapter 9, in the paragraph below equation 9.9 it says that we will see that
[itex]Y = O(g)[/itex] and [itex]{Z_i} = 1 + O({g^2})[/itex]. Where and when do we see it? In equation 9.18? Does 9.18 require Y and Z to be of first and second order in g, respectively?
 
Last edited by a moderator:
  • #66
You are not taking logarithms correctly!

Start with:

[tex]m_{ph}^2 = m^2\left[1 + c\alpha + O(\alpha^2)\right].[/tex]

Take the log:

[tex]\ln(m_{ph}^2) = \ln(m^2)+\ln\left[1 + c \alpha + O(\alpha^2)\right].[/tex]

Now use [itex]\ln(m^2)=2\ln m[/itex] and [itex]\ln\left[1+ c \alpha + O(\alpha^2)\right]= c\alpha + O(\alpha^2).[/itex]

As for Y and the Z's, I would say 9.20 for Y, 14.37 and 14.38 for Z_phi and Z_m, and 16.12 for Z_g.
 
  • #67
Good stuff, Avodyne!

I will be back with more. For now, many thanks!
 
  • #68
I have a question about this part: why can't [tex]\frac{d\alpha}{d\ln\mu}[/tex] be an infinite series in /positive/ powers of epsilon? I see that if it is a /finite/ series in positive powers of epsilon, then it must terminate at the eps^1 term, since if it goes to eps^n with n > 1 then there's no way the eps^n term on the right hand side can be zero. But I could imagine the appropriate cancellations happening if it is an infinite series.
 
  • #69
Because you would end up with an infinite series in [itex]\varepsilon[/itex] that summed to zero. The only such series has all zero coefficients.
 
  • #70
My Srednicki questions for today!

As I already had problems with the rather trivial 14.33, the more daunting equations 14.34 and 14.36 are unfortunately also not clear to me.

Question 1

how to go from 14.32

[tex]\frac{1}{2}\alpha\, \Gamma\!\left(-1 + \frac{\varepsilon}{2}\right)\int_0^1 dx\, D\left(\frac{4\pi\tilde\mu^2}{D}\right)^{\varepsilon/2}[/tex]

to 14.34

[tex] -\frac{1}{2}\alpha\left[\left(\frac{2}{\varepsilon} + 1\right)\left(\frac{1}{6}k^2 + m^2\right) + \int_0^1 dx\, D\ln\left(\frac{4\pi\tilde\mu^2}{e^\gamma D}\right)\right][/tex]

knowing that

[tex]D = x(1-x)k^2 + m^2[/tex] and [tex]\int_0^1 dx\, D = \frac{1}{6}k^2 + m^2[/tex] and [tex]\Gamma\!\left(-1 + \frac{\varepsilon}{2}\right) = -\left(\frac{2}{\varepsilon} - \gamma + 1\right) + O(\varepsilon)[/tex] and [tex]A^{\varepsilon/2} = 1 + \frac{\varepsilon}{2}\ln A + O(\varepsilon^2)[/tex]

My first step would be

[tex] -\left(\frac{2}{\varepsilon} - \gamma + 1\right)\int_0^1 dx\, D\left[1 + \frac{\varepsilon}{2}\ln\left(\frac{4\pi\tilde\mu^2}{D}\right)\right] = -\left(\frac{2}{\varepsilon} - \gamma + 1\right)\left[\left(\frac{1}{6}k^2 + m^2\right) + \frac{\varepsilon}{2}\int_0^1 dx\, D\ln\left(\frac{4\pi\tilde\mu^2}{D}\right)\right][/tex]

Question 2

how to go from 14.34

[tex] -\frac{1}{2}\alpha\left[\left(\frac{2}{\varepsilon} + 1\right)\left(\frac{1}{6}k^2 + m^2\right) + \int_0^1 dx\, D\ln\left(\frac{4\pi\tilde\mu^2}{e^\gamma D}\right)\right] - Ak^2 - Bm^2 + O(\alpha^2)[/tex]

to 14.36

[tex]\frac{1}{2}\alpha\int_0^1 dx\, D\ln(D/m^2) - \left\{\frac{1}{6}\alpha\left[\frac{1}{\varepsilon} + \ln(\mu/m) + \frac{1}{2}\right] + A\right\}k^2 - \left\{\alpha\left[\frac{1}{\varepsilon} + \ln(\mu/m) + \frac{1}{2}\right] + B\right\}m^2 + O(\alpha^2)[/tex]

with the help of redefining
[tex]\mu \equiv \sqrt{4\pi}\, e^{-\gamma/2}\, \tilde\mu[/tex]

Sadly, even after staring at it for half an hour, I have no clue what he does here. Though I assume it is rather simple.

thanks in advance for any hints and help
 
Last edited:
