- TL;DR Summary: If ##[A(x), j_{\{\lambda\}}(y)]=0## for ##(x-y)^2<0## then ##A(x)## is a local polynomial.
Hi, I'm reading Zavialov's book on QFT and there's a statement there I was interested in finding how to prove it. The statement is as follows:
If ##[A(x), j_{\{\lambda\}}(y)]=0## for ##(x-y)^2<0## then ##A(x)## is a local polynomial.
The relevant definitions are:
$$A = \sum_n \int A_n(x_1,\ldots, x_n) :\phi(x_1)\cdots\phi(x_n): dx_1 \cdots dx_n$$
$$j_{\{\lambda\}}(x)=:\phi_{(\lambda_1)}(x)\cdots \phi_{(\lambda_n)}(x):$$
$$\phi_{(\lambda_i)}(x) = \left(\frac{\partial}{\partial x}\right)^{(\lambda_i)}\phi(x)$$
and a local polynomial is just a linear combination of ##j_{\{\lambda\}}(x)##.
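For concreteness (if I'm reading the notation right), taking ##n=2## with ##\lambda_1=0## and ##\lambda_2=\mu## would give
$$j_{\{\lambda\}}(x)=\,:\phi(x)\,\partial_\mu\phi(x):\,,$$
so a local polynomial is a finite linear combination of such normal-ordered monomials in ##\phi## and its derivatives, all taken at the same point ##x##.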
Does anyone have a proof of this fact?
These are the main thoughts I came up with while trying to prove the statement:
The first thing to note is that the book does not explicitly define ##A(x)##, but I think it is probably given by the natural expression
$$A(x) = \sum_n \int A_n(x,x_1,\ldots, x_n) :\phi(x_1)\cdots\phi(x_n): dx_1 \cdots dx_n$$
From here the first thing I thought was that the commutator ##[A(x), j(y)]## reduces to knowing the value of the commutators
$$[:\phi(x_1)\cdots \phi(x_n):, j(y)]$$
And since the derivatives can be pulled out of the commutator, in the end we need to compute
$$[:\phi(x_1)\cdots \phi(x_n):, :\phi(y_1)\cdots \phi(y_m):]$$
Now, using Wick's theorem we can get rid of the normal ordering, and using the relation ##[A,BC]=B[A,C]+[A,B]C## we should be able to reduce everything to the commutator ##[A(x), \phi(y)]##. So either I'm neglecting something important, or the assumption ##[A,j]=0\ \forall j## seems unnecessary and we only need ##[A,\phi]=0##, no?
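For example, in the simplest case I would expect (assuming a free field, so that ##[\phi(x),\phi(y)]=i\Delta(x-y)## is a c-number and passes through the remaining fields)
$$[:\phi(x_1)\phi(x_2):,\ \phi(y)] \;=\; i\Delta(x_1-y)\,\phi(x_2)\;+\;i\Delta(x_2-y)\,\phi(x_1),$$
since the normal-ordered product differs from the ordinary product only by a c-number, which drops out of the commutator.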
Then we can write the commutators
$$[\phi(x_1)\cdots \phi(x_n), \phi(y)] = \sum_{i=1}^n [\phi(x_i), \phi(y)] \phi(x_1)\cdots\phi(x_{i-1})\phi(x_{i+1})\cdots\phi(x_n)$$
My idea, then, is to prove that this sum cannot vanish unless each individual term vanishes. We know that ##[\phi(x_i), \phi(y)]## vanishes when ##(x_i-y)^2<0##, so the condition ##[A,\phi]=0## would imply that the functions ##A_n(x,x_1,\ldots, x_n)## must vanish whenever ##(x_i-y)^2\geq 0## for some ##i##.
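Writing this out explicitly (under my assumptions that the kernels ##A_n## are symmetric in ##x_1,\ldots,x_n## and that ##[\phi(x),\phi(y)]=i\Delta(x-y)## is a c-number), the condition ##[A(x),\phi(y)]=0## for ##(x-y)^2<0## should read
$$[A(x),\phi(y)]=\sum_n n\int A_n(x,x_1,\ldots,x_{n-1},z)\, i\Delta(z-y)\, :\phi(x_1)\cdots\phi(x_{n-1}):\, dx_1\cdots dx_{n-1}\,dz = 0,$$
and since normal-ordered monomials of different degree are independent, each ##n## would have to vanish separately, i.e.
$$\int A_n(x,x_1,\ldots,x_{n-1},z)\,\Delta(z-y)\,dz = 0 \quad\text{for } (x-y)^2<0,$$
as a distribution in ##x_1,\ldots,x_{n-1}##.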
Since this must be true for all ##y## such that ##(x-y)^2<0##, this implies that ##A_n(x,x_1,\ldots, x_n)## must vanish whenever ##x_i\neq x##, and therefore all the functions ##A_n(x,x_1,\ldots, x_n)## would be linear combinations of ##\delta(x-x_i)## and their derivatives, which would prove that ##A(x)## is a local polynomial.
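Spelling out this last step as I understand it (using the standard fact that a distribution supported at a single point is a finite linear combination of ##\delta## and its derivatives, and absorbing the signs from integration by parts into the coefficients), the kernels would take the form
$$A_n(x,x_1,\ldots,x_n)=\sum_{\{\lambda\}} c_{\{\lambda\}}\,\partial^{(\lambda_1)}\delta(x_1-x)\cdots\partial^{(\lambda_n)}\delta(x_n-x),$$
and inserting this back into ##A(x)## would give
$$A(x)=\sum_n\sum_{\{\lambda\}} \tilde c_{\{\lambda\}}\, :\phi_{(\lambda_1)}(x)\cdots\phi_{(\lambda_n)}(x):\,=\,\sum \tilde c_{\{\lambda\}}\, j_{\{\lambda\}}(x),$$
i.e. exactly a local polynomial.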
I don't know if this is the right approach. The part I feel furthest from actually proving (with some rigour) is that the sum indeed cannot vanish unless each individual term does, which seems plausible to me, but I have no clue how to show it.
If anyone knows the proof or wants to share any thoughts about it, I'd like to hear from you.