Feynman Rules for Crossing Lines in Green's Function Diagrams?

In summary, the conversation applies the Feynman rules for Green's functions to a specific diagram in quantum field theory, establishing that the crossing point of the two lines is a vertex and writing down that diagram's contribution to the amplitude. The discussion then works through several follow-up calculations from the same notes: matrix index conventions, a matrix exponential, gamma-matrix identities, and the minus signs that arise from fermionic anticommutation relations.
  • #1
latentcorpse
http://www.damtp.cam.ac.uk/user/tong/qft/qft.pdf

Consider the Feynman rules for Green's Functions given at the top of p79 in these notes.

Now let us consider the diagram given in the example on p78.
Take for example the 2nd diagram in the sum i.e. the cross one where x1 is joined to x4 and x2 is joined to x3 and these two lines cross over each other.

Suppose I wanted to apply the Feynman rules to this diagram:

I am not sure if the point where they cross is a vertex or not? I'm going to assume that it must be otherwise that diagram would be the same as the 1st one in the sum (but with vertices relabelled). So let us label the vertex with the spacetime position y.

The Feynman rules then tell us that this Feynman diagram contributes

[itex]-i \lambda \int d^4y \int \frac{d^4k}{(2 \pi)^4} \frac{i e^{-ik \cdot (x_1-x_4)}}{k^2-m^2+i \epsilon} \int \frac{d^4p}{(2 \pi)^4} \frac{ie^{-ip \cdot (x_2 - x_3)}}{p^2-m^2+i \epsilon}[/itex]

Is this correct? Can it be simplified? It looks pretty messy.

Presumably when we added the other contributions from all the other diagrams, we'd end up with a nice final expression for [itex]G^{(4)}(x_1 , \dots , x_4)[/itex]?

Thanks.
 
  • #2
If the point y is a vertex, then we need to write propagators from each point [tex]x_i[/tex] to [tex]y[/tex]. Defining the momenta to point inwards, we could write

[tex]-i \lambda \int d^4y \prod_i \Delta_F(y,x_i).[/tex]

Note that the role of the integration over y is to enforce momentum conservation through the integral

[tex]\int d^4y\, e^{-iy \cdot \sum_i k_i} \sim \delta^4\left(\sum_i k_i\right).[/tex]

Amplitudes look a bit simpler in momentum space, but as things go, this one is pretty tame.
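As a side note, the way the y-integral generates a delta function can be illustrated numerically in one dimension (a toy check, not from the notes; the grid and cutoff here are arbitrary):

```python
import numpy as np

# 1D toy version of  int d^4y exp(-i y . sum_i k_i) ~ delta^4(sum_i k_i):
# integrating exp(-i k y) over a large y-range is sharply peaked at k = 0.
y = np.linspace(-200.0, 200.0, 400001)
dy = y[1] - y[0]

def integral(k):
    # Riemann-sum approximation to | int dy exp(-i k y) |
    return abs(np.sum(np.exp(-1j * k * y)) * dy)

print(integral(0.0))  # grows linearly with the y cutoff: the delta spike
print(integral(1.0))  # stays O(1): the oscillations cancel
```

As the y-range is taken to infinity, the k = 0 value diverges while every k ≠ 0 value stays bounded, which is the delta-function behaviour the position integral encodes.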
 
  • #3
fzero said:
If the point y is a vertex, then we need to write propagators from each point [tex]x_i[/tex] to [tex]y[/tex]. Defining the momenta to point inwards, we could write

[tex]-i \lambda \int d^4y \prod_i \Delta_F(y,x_i).[/tex]

Note that the role of the integration over y is to enforce momentum conservation through the integral

[tex]\int d^4y\, e^{-iy \cdot \sum_i k_i} \sim \delta^4\left(\sum_i k_i\right).[/tex]

Amplitudes look a bit simpler in momentum space, but as things go, this one is pretty tame.

I see. Thanks. Can you just confirm that for that diagram we were talking about, there is ACTUALLY a vertex at y, yes?
 
  • #4
latentcorpse said:
I see. Thanks. Can you just confirm that for that diagram we were talking about, there is ACTUALLY a vertex at y, yes?

Yes, there's a vertex in that diagram. The diagram without a vertex that connects [tex]x_1\leftrightarrow x_4[/tex] and [tex]x_2\leftrightarrow x_3[/tex] is part of the "2 Similar" diagrams in the preceding term.
 
  • #5
fzero said:
Yes, there's a vertex in that diagram. The diagram without a vertex that connects [tex]x_1\leftrightarrow x_4[/tex] and [tex]x_2\leftrightarrow x_3[/tex] is part of the "2 Similar" diagrams in the preceding term.

Thanks. I do have one or two other questions about later in these notes though.

(i) On p82, he calculates in (4.9) the matrix [itex]M^{12}[/itex] using the formula (4.8).

Now I tried to replicate this basic calculation and got the elements the wrong way round. I have traced this back to me taking the [itex]\mu[/itex] index to label the rows and the [itex]\nu[/itex] index to label the columns. However, to get the right answer, it seems it must be the other way around. I don't know how we were expected to know this though as he doesn't explain what the indices label and I thought that by convention the first index was for rows and the second for columns, no?

Have I made a mistake here or something?

This matrix is then used in (4.29) to get the matrix exponential he has written down. However, I don't think he has put in the factor of 1/2, so I think (4.29) should be
[itex]\text{exp } \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{\phi^3}{2} & 0 \\ 0 & - \frac{\phi^3}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}[/itex]
If we then substitute [itex]\phi^3=2 \pi[/itex]
we get
[itex]\text{exp } \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & \pi & 0 \\ 0 & - \pi & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}[/itex]
Why is that equal to 1?

(ii) Under (4.53) on p90 he says that [itex]\gamma^\mu \gamma^\nu \partial_\mu \partial_\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} \partial_\mu \partial_\nu[/itex]
I cannot see why this is true! If I expand out that anticommutator I get [itex]\frac{1}{2} ( \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu )[/itex], but the [itex]\gamma[/itex] matrices don't commute, so surely that identity is wrong?

Thanks a lot.
 
  • #6
It looks like he's using the opposite convention for the matrix form or just made typos that propagated through the notes.

For the 2nd question, the 1/2 is accounted for by the sum over [tex]\rho,\sigma[/tex]. You get [tex]\phi^3[/tex], not [tex]\phi^3/2[/tex].
 
  • #7
fzero said:
It looks like he's using the opposite convention for the matrix form or just made typos that propagated through the notes.

For the 2nd question, the 1/2 is accounted for by the sum over [tex]\rho,\sigma[/tex]. You get [tex]\phi^3[/tex], not [tex]\phi^3/2[/tex].

Hmmm. Sorry, I don't see it. How does that sum give us a factor of 2 to cancel the 1/2?
 
  • #8
latentcorpse said:
Hmmm. Sorry, I don't see it. How does that sum give us a factor of 2 to cancel the 1/2?

[tex]\Omega_{\rho\sigma} \mathcal{M}^{\rho\sigma} = \Omega_{12} \mathcal{M}^{12} +\Omega_{21} \mathcal{M}^{21} [/tex]
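Since both [tex]\Omega_{\rho\sigma}[/tex] and [tex]\mathcal{M}^{\rho\sigma}[/tex] are antisymmetric, the double sum counts each independent pair twice. A toy numerical check (the entry values below are made up purely for illustration):

```python
import numpy as np

# Toy check: contracting two antisymmetric objects over rho, sigma
# double-counts each independent pair, cancelling the explicit 1/2.
Omega = np.zeros((4, 4))
Omega[1, 2], Omega[2, 1] = 1.0, -1.0   # only the (1,2) "angle" is turned on
M = np.zeros((4, 4))
M[1, 2], M[2, 1] = 3.0, -3.0           # stand-in values for the generator entries

total = np.sum(Omega * M)              # full sum over rho and sigma
print(total)                           # 2 * Omega[1,2] * M[1,2] = 6.0
```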
 
  • #9
fzero said:
[tex]\Omega_{\rho\sigma} \mathcal{M}^{\rho\sigma} = \Omega_{12} \mathcal{M}^{12} +\Omega_{21} \mathcal{M}^{21} [/tex]

Ok. So then we get to the matrix

[itex]\text{exp } \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 2 \pi & 0 \\ 0 & - 2 \pi & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}[/itex]
How can we show that this matrix exponential is 1?

Also, under (4.53) on p90 he says that [itex]\gamma^\mu \gamma^\nu \partial_\mu \partial_\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} \partial_\mu \partial_\nu[/itex]
I cannot see why this is true! If I expand out that anticommutator I get [itex]\frac{1}{2} ( \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu )[/itex], but the [itex]\gamma[/itex] matrices don't commute, so surely that identity is wrong?

And then finally in (4.62) on p91, he calculates the lagrangian. However, I find that
[itex]L=\psi^\dagger \gamma^0 ( i \gamma^\mu \partial_\mu - m) \psi = i \begin{pmatrix} u_+^\dagger \\ u_-^\dagger \end{pmatrix} \gamma^0 \gamma^\mu \partial_\mu \begin{pmatrix} u_+ & u_- \end{pmatrix} - m \begin{pmatrix} u_+^\dagger \\ u_-^\dagger \end{pmatrix} \gamma^0 \begin{pmatrix} u_+ & u_- \end{pmatrix}[/itex]
which doesn't look like it's going to give the right answer. In particular there are factors of [itex]\gamma^0[/itex] floating about in awkward places...

Thanks.
 
  • #10
latentcorpse said:
Ok. So then we get to the matrix

[itex]\text{exp } \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 2 \pi & 0 \\ 0 & - 2 \pi & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}[/itex]
How can we show that this matrix exponential is 1?

[tex] \exp \begin{pmatrix} 0 & \phi \\ -\phi & 0 \end{pmatrix} = \begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix} [/tex]
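To see the [tex]\phi = 2\pi[/tex] case concretely, one can exponentiate the nonzero block numerically (a sketch using a simple Taylor-series exponential; `expm_taylor` is just a helper defined here, not a library function):

```python
import numpy as np

def expm_taylor(A, terms=60):
    # Matrix exponential via truncated Taylor series (adequate for small matrices)
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        result = result + term
    return result

phi = 2.0 * np.pi
G = np.array([[0.0, phi],
              [-phi, 0.0]])            # the nonzero 2x2 block of the generator
R = expm_taylor(G)
# exp of the antisymmetric generator is a rotation by phi; phi = 2*pi gives the identity
print(np.allclose(R, np.eye(2), atol=1e-8))  # True
```

The embedding back into the 4x4 matrix is block-diagonal, so the full exponential is the identity as well.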

Also, under (4.53) on p90 he says that [itex]\gamma^\mu \gamma^\nu \partial_\mu \partial_\nu = \frac{1}{2} \{ \gamma^\mu , \gamma^\nu \} \partial_\mu \partial_\nu[/itex]
I cannot see why this is true! If I expand out that anticommutator I get [itex]\frac{1}{2} ( \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu )[/itex], but the [itex]\gamma[/itex] matrices don't commute, so surely that identity is wrong?

[tex]\partial_\mu \partial_\nu[/tex] is symmetric.
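The point is that only the symmetric part of [tex]\gamma^\mu\gamma^\nu[/tex] survives the contraction with the symmetric [tex]\partial_\mu\partial_\nu[/tex]. A numerical check, with an arbitrary symmetric tensor standing in for the derivatives (the chiral representation below is one common choice of gamma matrices):

```python
import numpy as np

# Gamma matrices in the chiral (Weyl) representation -- one common choice
I2, Z2 = np.eye(2), np.zeros((2, 2))
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[Z2, I2], [I2, Z2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in pauli]

# A random symmetric S_{mu nu} plays the role of d_mu d_nu
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
S = A + A.T

lhs = sum(S[m, n] * gamma[m] @ gamma[n]
          for m in range(4) for n in range(4))
rhs = sum(S[m, n] * 0.5 * (gamma[m] @ gamma[n] + gamma[n] @ gamma[m])
          for m in range(4) for n in range(4))
print(np.allclose(lhs, rhs))  # True
```

The antisymmetric part of [tex]\gamma^\mu\gamma^\nu[/tex] drops out term by term in the double sum, which is exactly the manipulation under (4.53).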

And then finally in (4.62) on p91, he calculates the lagrangian. However, I find that
[itex]L=\psi^\dagger \gamma^0 ( i \gamma^\mu \partial_\mu - m) \psi = i \begin{pmatrix} u_+^\dagger \\ u_-^\dagger \end{pmatrix} \gamma^0 \gamma^\mu \partial_\mu \begin{pmatrix} u_+ & u_- \end{pmatrix} - m \begin{pmatrix} u_+^\dagger \\ u_-^\dagger \end{pmatrix} \gamma^0 \begin{pmatrix} u_+ & u_- \end{pmatrix}[/itex]
which doesn't look like it's going to give the right answer. In particular there are factors of [itex]\gamma^0[/itex] floating about in awkward places...

Thanks.

[tex]\bar{\psi}[/tex] has a factor of [tex]\gamma^0[/tex], see (4.40).
 
  • #11
fzero said:
[tex] \exp \begin{pmatrix} 0 & \phi \\ -\phi & 0 \end{pmatrix} = \begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix} [/tex]



[tex]\partial_\mu \partial_\nu[/tex] is symmetric.



[tex]\bar{\psi}[/tex] has a factor of [tex]\gamma^0[/tex], see (4.40).
Haven't I put the [itex]\gamma^0[/itex] in though in my previous post?
 
  • #12
Actually I have now figured out how that works.

What about in (4.117): why does [itex]\sqrt{ p \cdot \sigma} = \sqrt{ E + p^3}[/itex] here, whereas in (4.115) it was equal to [itex]\sqrt{E - p^3}[/itex]?

And in (4.118), why does this operator give us 1/2 when it acts on (4.116)? Thanks.
 
  • #13
latentcorpse said:
Actually I have now figured out how that works.

What about in (4.117): why does [itex]\sqrt{ p \cdot \sigma} = \sqrt{ E + p^3}[/itex] here, whereas in (4.115) it was equal to [itex]\sqrt{E - p^3}[/itex]?

(4.115) has the spin up state, while (4.117) is spin down. Alternatively, you can just write out the matrices and compute.

And in (4.118), why does this operator give us 1/2 when it acts on (4.116)? Thanks.

I can't find a definition for [tex]\hat{p}_i[/tex], but I assume it's just the unit vector, so [tex]\hat{p}_i = (0,0,1)[/tex]. Then [tex]\sigma^3[/tex] appears in the operator and [tex]h=1/2[/tex] follows by direct computation.
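For momentum along the z-axis the helicity operator reduces, on each two-component block, to [tex]\frac{1}{2}\sigma^3[/tex], so the eigenvalue can be read off directly (a minimal check on a single Weyl block; the full operator acts block-diagonally on the four-spinor):

```python
import numpy as np

sigma3 = np.array([[1, 0],
                   [0, -1]])
h = 0.5 * sigma3                 # helicity operator for momentum along +z
spin_up = np.array([1, 0])
spin_down = np.array([0, 1])
print(h @ spin_up)               # 0.5 * spin_up   -> helicity +1/2
print(h @ spin_down)             # -0.5 * spin_down -> helicity -1/2
```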
 
  • #14
fzero said:
(4.115) has the spin up state, while (4.117) is spin down. Alternatively, you can just write out the matrices and compute.

I still don't follow this.

In (4.115) we have [itex]\sqrt{p \cdot \sigma} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \sqrt{ E-p^3} \begin{pmatrix} 1 \\ 0 \end{pmatrix}[/itex], using the definition of four-vector products in Minkowski space, [itex]p \cdot \sigma = p^0 \sigma_0 - p^i \sigma_i[/itex].

I don't see how this calculation would be any different for (4.117)?
 
  • #15
Because

[itex]
\sigma^3 \begin{pmatrix} 1 \\ 0 \end{pmatrix} \neq \sigma^3 \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
[/itex]
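Writing out the matrices makes the difference explicit. For [tex]p = (E,0,0,p^3)[/tex] the matrix [tex]p\cdot\sigma[/tex] is diagonal, so the two basis spinors pick up different eigenvalues (the numbers here are arbitrary placeholders):

```python
import numpy as np

E, p3 = 5.0, 2.0
I2 = np.eye(2)
sigma3 = np.array([[1, 0],
                   [0, -1]])
# For p = (E, 0, 0, p3):  p . sigma = E*1 - p3*sigma3
p_dot_sigma = E * I2 - p3 * sigma3
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(p_dot_sigma @ up)    # (E - p3) * up   -> eigenvalue 3
print(p_dot_sigma @ down)  # (E + p3) * down -> eigenvalue 7
```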
 
  • #16
fzero said:
Because

[itex]
\sigma^3 \begin{pmatrix} 1 \\ 0 \end{pmatrix} \neq \sigma^3 \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
[/itex]

Ok. That makes sense.

What about the bit in between eqns (4.126) and (4.127)? He claims that [itex](p \cdot \bar{\sigma})(p' \cdot \sigma) = (p_0 + p_i \sigma^i)(p_0-p_i \sigma_i)[/itex]

I disagree. I find:

[itex]p \cdot \bar{\sigma} = p_\mu \bar{\sigma}^\mu = p_0 \bar{\sigma}^0 + p_i \bar{\sigma}^i = p_0 - p_i \sigma^i[/itex]

since [itex]\bar{\sigma}^i=-\sigma^i[/itex]

and then

[itex]p' \cdot \sigma = p'_\mu \sigma^\mu = p'_0 \sigma^0 + p'_i \sigma^i = p_0 - p_i \sigma^i[/itex]

since [itex]p'^i=-p^i[/itex]

Is that right?
 
  • #17
Your calculation is correct. He's made a mistake in carrying out the multiplication in the 2nd line of (4.126). He should have [tex]\sqrt{(p\cdot\sigma)(p'\cdot\sigma)}[/tex] in the first term and [tex]\sqrt{(p\cdot\bar{\sigma})(p'\cdot\bar{\sigma})}[/tex] in the second. Both of those expressions give [tex]p_0^2 - \vec{p}^2[/tex] under the square root.
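The underlying identity, which is what makes both corrected terms give [tex]p_0^2 - \vec{p}^2[/tex] under the square root, is [tex](p\cdot\sigma)(p\cdot\bar{\sigma}) = (p_0^2 - \vec{p}^2)\,\mathbb{1}[/tex] (and [tex]p'\cdot\sigma = p\cdot\bar{\sigma}[/tex] when [tex]p'^i = -p^i[/tex]). A quick numerical check with a random spatial momentum:

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

rng = np.random.default_rng(1)
p = rng.normal(size=3)   # random spatial momentum
p0 = 10.0

p_sigma = p0 * I2 - sum(pi * s for pi, s in zip(p, pauli))   # p . sigma
p_sigbar = p0 * I2 + sum(pi * s for pi, s in zip(p, pauli))  # p . sigma-bar

# (p.sigma)(p.sigma-bar) = (p0^2 - |p|^2) * identity
print(np.allclose(p_sigma @ p_sigbar, (p0 ** 2 - p @ p) * I2))  # True
```

This works because [tex](\vec{p}\cdot\vec{\sigma})^2 = |\vec{p}|^2\,\mathbb{1}[/tex], so the cross terms cancel.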
 
  • #18
fzero said:
Your calculation is correct. He's made a mistake in carrying out the multiplication in the 2nd line of (4.126). He should have [tex]\sqrt{(p\cdot\sigma)(p'\cdot\sigma)}[/tex] in the first term and [tex]\sqrt{(p\cdot\bar{\sigma})(p'\cdot\bar{\sigma})}[/tex] in the second. Both of those expressions give [tex]p_0^2 - \vec{p}^2[/tex] under the square root.

Yes. I was doing it in my head. I guess if I'd written it out, I would have seen that he hadn't carried the factors through properly from the matrix multiplication.

And could you have a look at (5.22)? Here I find that [itex]\vec{\alpha}=\gamma^0 \vec{\gamma}[/itex]
i.e. there should be no minus sign.

Since the Dirac equation gives

[itex]( i \gamma^\mu \partial_\mu - m ) \psi =0[/itex]
[itex](i \gamma^0 \partial_t + i \gamma^i \partial_i - m) \psi=0[/itex]
[itex] i \frac{\partial \psi}{\partial t} = - i \gamma^0 \gamma^i \partial_i \psi+ m \gamma^0 \psi[/itex]
[itex]i \frac{\partial \psi}{\partial t} = - \vec{\alpha} \cdot \vec{\nabla} \psi + m \beta \psi[/itex]

So we would find that [itex]\vec{\alpha}= \gamma^0 \vec{\gamma}[/itex], no?

Also, under (5.34), I don't understand the reasoning for the minus sign being there.

And in the paragraph where he's trying to explain why the minus sign is there, he claims that [itex]\{ \psi(x) , \bar{\psi}(y) \} = 0 [/itex]. But we showed in (5.14) that [itex]\{ \psi_\alpha(x) , \psi_\beta^\dagger(y) \} = \delta_{\alpha \beta} \delta^{(3)}(x-y)[/itex] as a result of fermionic quantisation. Wouldn't this imply there is a non-trivial anticommutation relation for [itex] \{ \psi(x) , \bar{\psi}(y) \}[/itex], since [itex]\bar{\psi} = \psi^\dagger \gamma^0[/itex] after all?
 
  • #19
latentcorpse said:
Yes. I was doing it in my head. I guess if I'd written it out, I would have seen that he hadn't carried the factors through properly from the matrix multiplication.

And could you have a look at (5.22)? Here I find that [itex]\vec{\alpha}=\gamma^0 \vec{\gamma}[/itex]
i.e. there should be no minus sign.

Since the Dirac equation gives

[itex]( i \gamma^\mu \partial_\mu - m ) \psi =0[/itex]
[itex](i \gamma^0 \partial_t + i \gamma^i \partial_i - m) \psi=0[/itex]
[itex] i \frac{\partial \psi}{\partial t} = - i \gamma^0 \gamma^i \partial_i \psi+ m \gamma^0 \psi[/itex]
[itex]i \frac{\partial \psi}{\partial t} = - \vec{\alpha} \cdot \vec{\nabla} \psi + m \beta \psi[/itex]

So we would find that [itex]\vec{\alpha}= \gamma^0 \vec{\gamma}[/itex], no?

[tex] i \gamma^\mu \partial_\mu - m = i \gamma^0 \partial_t - i \gamma^i \partial_i -m[/tex]

Also, under (5.34), I don't understand the reasoning for that minus sign being there?

And in the paragraph where he's trying to explain why the minus sign is there, he claims that [itex]\{ \psi(x) , \bar{\psi}(y) \} = 0 [/itex]. But we showed in (5.14) that [itex]\{ \psi_\alpha(x) , \psi_\beta^\dagger(y) \} = \delta_{\alpha \beta} \delta^{(3)}(x-y)[/itex] as a result of fermionic quantisation. Wouldn't this imply there is a non-trivial anticommutation relation for [itex] \{ \psi(x) , \bar{\psi}(y) \}[/itex], since [itex]\bar{\psi} = \psi^\dagger \gamma^0[/itex] after all?

Those should be equal-time anticommutation relations; note that (5.14) depends on the spatial variables only. Besides, time and normal ordering are definitions of the product that subtract off anything you'd have gotten from the anticommutation relations. Just the overall sign must be kept track of for fermions.
 
  • #20
fzero said:
[tex] i \gamma^\mu \partial_\mu - m = i \gamma^0 \partial_t - i \gamma^i \partial_i -m[/tex]
Surely not. [itex]a^\mu b_\mu = a^0 b_0 + a^i b_i[/itex]
We only get a minus sign when we include the metric as follows:
[itex]a^\mu b_\mu = \eta_{\mu \nu} a^\mu b^\nu = a^0 b^0 - a^i b^i[/itex]
No?
fzero said:
Those should be equal-time anticommutation relations; note that (5.14) depends on the spatial variables only. Besides, time and normal ordering are definitions of the product that subtract off anything you'd have gotten from the anticommutation relations. Just the overall sign must be kept track of for fermions.
So what's the actual reason for the minus sign then?
 
  • #21
latentcorpse said:
Surely not. [itex]a^\mu b_\mu = a^0 b_0 + a^i b_i[/itex]
We only get a minus sign when we include the metric as follows:
[itex]a^\mu b_\mu = \eta_{\mu \nu} a^\mu b^\nu = a^0 b^0 - a^i b^i[/itex]
No?

You're right, I was a bit too fast. [tex]p_\mu = (p_0, - \vec{p})[/tex], but [tex]\partial_\mu = (\partial_t,\partial_i)[/tex]. I don't believe that the minus sign affects anything else said in that section though.

So what's the actual reason for the minus sign then?

The time-ordered product still has to respect the signs of the anticommutation relation.
 
  • #22
fzero said:
You're right, I was a bit too fast. [tex]p_\mu = (p_0, - \vec{p})[/tex], but [tex]\partial_\mu = (\partial_t,\partial_i)[/tex]. I don't believe that the minus sign affects anything else said in that section though.
This won't change what I did, though, since [itex]p_\mu[/itex] doesn't appear in this calculation, does it?

fzero said:
The time-ordered product still has to respect the signs of the anticommutation relation.
Why does [itex]\{ \psi(x) , \bar{\psi}(y) \} = 0[/itex] though? Can't we pull out a factor of [itex]\gamma^0[/itex] and then have [itex]\{ \psi(x) , \psi^\dagger(y) \} \gamma^0[/itex], which would certainly be non-zero?

Also, I noticed something that's bugging me. Take for example, (5.40), he says that he is expanding to second order and so he gets a factor of [itex]\frac{(-i \lambda)^2}{2}[/itex] as you would expect from the exponential in Dyson's formula. But didn't we show earlier in (3.23) that the factor of [itex]\frac{1}{2}[/itex] isn't necessary?

And lastly, on p120, I'm getting very confused trying to source the origin of these minus signs. Take for example the equation beneath (5.48), why does

[itex]\langle 0 | c^{r'}_{\vec{q'}} b^{s'}_{\vec{p'}} {c^m_{\vec{l_1}}}^\dagger {b^n_{\vec{l_2}}}^\dagger | 0 \rangle[/itex]
rearrange to give us the minus sign? I have been playing about with this using the anticommutation relations and cannot for the life of me get it to work out right!

Thanks a lot.
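The sign in the [itex]c\, b\, c^\dagger b^\dagger[/itex] vacuum expectation value can be checked in a toy two-mode model of the fermionic algebra, ignoring the momentum and spin labels (the Jordan-Wigner matrices below are an illustrative construction, not anything from the notes):

```python
import numpy as np

# Two fermionic modes via a Jordan-Wigner construction:
# b acts on the first slot, c on the second with a sign string.
a = np.array([[0, 1],
              [0, 0]], dtype=float)    # single-mode annihilator
Z = np.diag([1.0, -1.0])               # JW string
I2 = np.eye(2)

b = np.kron(a, I2)
c = np.kron(Z, a)
bd, cd = b.T, c.T                      # daggers (matrices are real)

# check the mixed anticommutator vanishes: {b, c-dagger} = 0
print(np.allclose(b @ cd + cd @ b, 0))  # True

vac = np.zeros(4)
vac[0] = 1.0                            # |0>
amp = vac @ (c @ b @ cd @ bd) @ vac     # <0| c b c-dagger b-dagger |0>
print(amp)                              # -1.0
```

Anticommuting b once through c-dagger flips the sign, giving [itex]\langle 0 | c\, b\, c^\dagger b^\dagger | 0 \rangle = -\langle 0 | c\, c^\dagger b\, b^\dagger | 0 \rangle = -1[/itex], which is the kind of sign being chased on p120.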
 
  • #23
May have sorted it all out except the [itex]c\, b\, c^\dagger b^\dagger[/itex] bit at the end there...
 

FAQ: Feynman Rules for Crossing Lines in Green's Function Diagrams?

What are Feynman Rules for Green's Function?

Feynman Rules for Green's Function are a set of mathematical rules used in quantum field theory to calculate the interactions between particles. They are named after physicist Richard Feynman, who developed them in the 1940s.

Why are Feynman Rules for Green's Function important?

Feynman Rules for Green's Function are important because they allow us to calculate the probability amplitudes for particle interactions in quantum field theory. This is essential for understanding the behavior of particles at the fundamental level.

How do Feynman Rules for Green's Function work?

Feynman Rules for Green's Function use diagrams to represent particle interactions and the associated mathematical rules to calculate the probability amplitudes. The diagrams are made up of lines and vertices, each representing different particles and interactions, and the rules dictate how to combine them to get the final amplitude.

What is the relationship between Feynman Rules for Green's Function and Feynman diagrams?

Feynman Rules for Green's Function and Feynman diagrams are closely related. The rules provide a systematic way of translating Feynman diagrams into mathematical expressions, making it easier to calculate the probability amplitudes for particle interactions.

Are Feynman Rules for Green's Function applicable to all interactions?

Yes, Feynman Rules for Green's Function are applicable to all interactions in quantum field theory, including the strong, weak, and electromagnetic interactions. They can also be extended to include interactions with gravity through the use of quantum gravity theories.
