Grassmann Algebra: Derivative of $\theta_j \theta_k \theta_l$

  • Thread starter: latentcorpse
  • Tags: Algebra, Grassmann
AI Thread Summary
The discussion revolves around the differentiation of products of Grassmann numbers, specifically the expression for the derivative of the product $\theta_j \theta_k \theta_l$. It clarifies that the minus sign in the derivative arises from the properties of Grassmann numbers, which anti-commute. The conversation also touches on Grassmann integration, emphasizing that only the last term survives in integrals involving Grassmann variables due to their nilpotent nature. The participants explore how to derive identities involving the antisymmetric tensor and the implications of integration results, including the cancellation of terms when performing integrals. Overall, the thread provides insights into the mathematical framework of Grassmann algebra and its applications in theoretical physics.
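For reference, the identity the title refers to, in the standard left-derivative convention (a sketch of the convention, not necessarily the exact normalization used in the thread):

$$\frac{\partial}{\partial \theta_i} \left( \theta_j \theta_k \theta_l \right) = \delta_{ij}\, \theta_k \theta_l - \delta_{ik}\, \theta_j \theta_l + \delta_{il}\, \theta_j \theta_k ,$$

where each sign records how many Grassmann factors the differentiated variable must anticommute past. The Berezin rules $\int d\theta = 0$ and $\int d\theta\, \theta = 1$ are what make only the top-degree (last) term of an expansion survive under integration.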
  • #51
latentcorpse said:
Got it. Thanks.

Although, one thing is bothering me: when we do the integration by parts,
$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a ,$$
haven't we lost the integral on the first term on the right, so that $\mu$ is left as a free index and the indices aren't balanced in this equation?

As I mentioned back in post #45, that term should be a surface integral. Look up Stokes' theorem.
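Explicitly, the step is just the product rule rearranged (a sketch, treating $\Delta^{\mu \nu} A_\nu^a$ as a single object that $\partial_\mu$ acts on):

$$\partial_\mu \left( \lambda^a \Delta^{\mu \nu} A_\nu^a \right) = \partial_\mu \lambda^a\, \Delta^{\mu \nu} A_\nu^a + \lambda^a\, \partial_\mu \left( \Delta^{\mu \nu} A_\nu^a \right) ,$$

so the original integrand differs from $\lambda^a \partial_\mu ( \Delta^{\mu \nu} A_\nu^a )$ by a total derivative, and it is the integral of that total derivative which becomes the surface term.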
 
  • #52
fzero said:
As I mentioned back in post #45, that term should be a surface integral. Look up Stokes' theorem.

$$\int_\Omega ( \vec{\nabla} \times \vec{F} ) \cdot d\vec{a} = \int_{\partial \Omega} \vec{F} \cdot d\vec{s}$$

But we don't have a curl in our expression, do we?
 
  • #53
latentcorpse said:
$$\int_\Omega ( \vec{\nabla} \times \vec{F} ) \cdot d\vec{a} = \int_{\partial \Omega} \vec{F} \cdot d\vec{s}$$

But we don't have a curl in our expression, do we?

A special case of Stokes' theorem is the divergence theorem, so check that one out.
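In index notation, the statement that applies here is (a sketch, assuming a region $\Omega$ with boundary $\partial \Omega$, outward unit normal $n_\mu$ and surface element $dS$):

$$\int_\Omega d^dx\, \partial_\mu V^\mu = \int_{\partial \Omega} dS\, n_\mu V^\mu ,$$

with $V^\mu = \lambda^a \Delta^{\mu \nu} A_\nu^a$ in the present case.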
 
  • #54
fzero said:
A special case of Stokes' theorem is the divergence theorem, so check that one out.

$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \int_{S^{d-1}} n_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a$$

where $n^\mu$ are the components of the vector normal to the surface $S^{d-1}$.
Is that correct?

Then do we just say that this term vanishes because either $A$ or $\lambda$ vanishes at infinity? Which one is it, though? Am I right in saying it would need to be $\lambda$, since we don't have an $A$ term, only derivatives of $A$?
Even if it is the $\lambda$ term that vanishes, I don't understand physically why it should.

Thanks.
 
  • #55
latentcorpse said:
$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a = - \frac{1}{2} \int_{S^{d-1}} n_\mu \lambda^a \Delta^{\mu \nu} A_\nu^a + \frac{1}{2} \int d^dx\, \lambda^a \partial_\mu \Delta^{\mu \nu} A_\nu^a$$

where $n^\mu$ are the components of the vector normal to the surface $S^{d-1}$.
Is that correct?

Then do we just say that this term vanishes because either $A$ or $\lambda$ vanishes at infinity? Which one is it, though? Am I right in saying it would need to be $\lambda$, since we don't have an $A$ term, only derivatives of $A$?
Even if it is the $\lambda$ term that vanishes, I don't understand physically why it should.

Thanks.

We just require that the variation of the fields vanishes at the boundary. We're only considering infinitesimal variations, so we can do this.
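Concretely (a sketch, assuming it is the parameter $\lambda^a$, and hence the variation of the fields, that is required to vanish on the boundary): with $\lambda^a \to 0$ on $S^{d-1}$, the surface term above drops and

$$-\frac{1}{2} \int d^dx\, \partial_\mu \lambda^a\, \Delta^{\mu \nu} A_\nu^a = \frac{1}{2} \int d^dx\, \lambda^a\, \partial_\mu \left( \Delta^{\mu \nu} A_\nu^a \right) ,$$

which is the identity being used. Restricting to variations that vanish at the boundary is the standard assumption in deriving equations of motion, so nothing extra needs to be imposed on $A$ itself.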
 