Renormalizable quantum field theories

In summary, the energy cut-off in quantum field theory is an artificial limitation imposed to keep loop integrals from diverging. It works well for low-energy calculations but not for high-energy interactions. The cut-off is usually justified in one of two ways: by comparing the resulting calculations with experimental data, or by appealing to comparisons with other theories.
  • #36
Bob_for_short said:
Eugene, you give too much credit to the renormalized S-matrix. It is known (and has been shown) that purely elastic processes are impossible - their S-matrix elements are identically equal to zero! It is the inclusive cross sections that are different from zero. So even after the renormalizations you still have to work hard on the IR problem.

Bob.

Yes, I know that. There are ultraviolet divergences and there are infrared divergences. But let us solve one problem at a time; otherwise the whole thing becomes too confusing. I am interested in solving the ultraviolet problem first. Temporarily, we can assign a small non-zero mass to the photon, and thus avoid the annoying issues with "soft" photons and exclusive/inclusive cross sections.
 
  • #37
meopemuk said:
Yes, I know that. There are ultraviolet divergences and there are infrared divergences. But let us solve one problem at a time; otherwise the whole thing becomes too confusing. I am interested in solving the ultraviolet problem first. Temporarily, we can assign a small non-zero mass to the photon, and thus avoid the annoying issues with "soft" photons and exclusive/inclusive cross sections.

It turns out that both problems can be solved at once: writing a physical Hamiltonian without self-action but with a potential interaction of coupled charges (electroniums) eliminates both mathematical problems and makes the theory physical. The soft radiation is present automatically as excitations of the internal (or relative) degrees of freedom of the compound systems - the electroniums. The inclusive treatment is now obligatory; otherwise the elastic S-matrix contains only zero-valued elements. I would love to work out the Novel QED with you. You have everything needed for that. Just take my ideas and solutions and do the calculations. If anything seems strange or incomprehensible to you, I will stand by - you can always count on me. I have outlined how the Lamb shift and the anomalous magnetic moment can be calculated (straightforwardly, without artificial cut-offs). We can start tomorrow, if you like.

Vladimir Kalitvianski.
 
  • #38
Bob_for_short said:
It turns out that both problems can be solved at once: writing a physical Hamiltonian without self-action but with a potential interaction of coupled charges (electroniums) eliminates both mathematical problems and makes the theory physical... I have outlined how the Lamb shift and the anomalous magnetic moment can be calculated (straightforwardly, without artificial cut-offs). We can start tomorrow, if you like.
Vladimir Kalitvianski.

Sorry. I have not read your paper completely.

Is your theory relativistic or non-relativistic?
Is your theory Lorentz invariant?
 
  • #39
ytuab said:
Sorry. I have not read your paper completely.
Is your theory relativistic or non-relativistic?
Is your theory Lorentz invariant?

Yes, relativistic and Lorentz invariant. If you would like to discuss the details, I invite you to the Independent Research forum, where I have posted a thread on this subject.

Bob.
 
  • #40
meopemuk said:
I see your point, but it doesn't make sense to me.

If we return to Weinberg's Lagrangian, then what you call "the original non-renormalized Hamiltonian" is equivalent to L_0 and L_1 in (11.1.7) and (11.1.8). These are finite operators, and their definition does not involve loop integrals. So, they are independent of the cutoff.
Okay, here is the first difficulty. In Weinberg, [itex]L_{0}[/itex] in (11.1.7) is a well-defined self-adjoint operator; [itex]L_{1}[/itex] in (11.1.8) is not. It isn't self-adjoint and it isn't semi-bounded. Hence [itex]L_{0} + L_{1}[/itex] is not a well-defined operator. I can sketch a proof if you wish; it involves some functional analysis.

meopemuk said:
The cutoff-dependent divergent part is L_2. So, the sum L_0 + L_1 + L_2 is still cutoff-dependent and divergent. I don't see where the cancelation comes from.

What am I missing here?
Let me describe the procedure.
Okay, so [itex]L_{0}[/itex] is a well-behaved operator, so let's leave it alone. [itex]L_{1}[/itex] is not well defined, so let's make it well defined by introducing a cutoff. Removing the cutoff at this stage would also leave [itex]L_{2}[/itex] undefined, so let's keep it cutoff-dependent as well.
Now let's add the cutoff [itex]L^{\Lambda}_{1}[/itex] and [itex]L^{\Lambda}_{2}[/itex] to obtain:
[itex]L^{\Lambda}_{int} = L^{\Lambda}_{1} + L^{\Lambda}_{2}[/itex].

The [itex]\Lambda \rightarrow \infty[/itex] limit of [itex]L^{\Lambda}_{int}[/itex] is a well-defined self-adjoint operator with no cutoff dependence. We can then add it to [itex]L_{0}[/itex] (although by "adding" it you have to deal with certain technical difficulties; I'm being loose here), to obtain a well-defined, cutoff-independent theory with well-defined finite time evolution.
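
This is not QFT, of course - just a toy numerical sketch of the bookkeeping being described (the function names, the logarithmic form and the finite remainder 0.25 are purely illustrative): two cutoff-dependent pieces that each blow up as the cutoff is removed can still have a finite, cutoff-independent sum.

[code]
import math

def L1_cutoff(Lam):
    """Toy stand-in for the cutoff interaction piece: grows without bound as Lam -> infinity."""
    return math.log(Lam)

def L2_cutoff(Lam, finite_part=0.25):
    """Toy stand-in for the cutoff counterterm piece: equal and opposite growth plus a finite remainder."""
    return finite_part - math.log(Lam)

for Lam in (1e2, 1e4, 1e8, 1e16):
    total = L1_cutoff(Lam) + L2_cutoff(Lam)
    print(f"Lambda = {Lam:.0e}:  L1 = {L1_cutoff(Lam):8.3f}  L2 = {L2_cutoff(Lam):9.3f}  sum = {total:.3f}")

# Each piece separately diverges with the cutoff, but the sum equals 0.25 for every value of Lambda,
# which is the sense in which L1 + L2 can be cutoff-independent even though neither term is.
[/code]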
 
  • #41
Avodyne said:
What you mean is that divergences cancel when you calculate observable quantities
I hope my previous post makes things clearer, but what I'm saying is that you can remove all cutoff dependence from the Hamiltonian and have it finite after you have performed renormalization.

Avodyne said:
But if you try to write it down explicitly, it will still have cutoff-dependent coefficients in it.
No it doesn't. It has no cutoff dependence once you take the limit and it is finite in that limit.
 
  • #42
Dear DarMM,

What would you prefer: to get rid of the UV and IR infinities perturbatively, or to work with a physically correct theory that has no physical and mathematical difficulties?

Bob.
 
  • #43
Bob_for_short said:
Yes, relativistic and Lorentz invariant. If you like to discuss details, I invite you to the Independent Research forum where I posted a thread on this subject.

Bob.

I have now read "Atom as a Dressed Nucleus".

The "positive charge cloud" theory is very interesting, and I think your idea is correct.

But I'm sorry if I am making mistakes here.

I think your paper is not relevant to the divergence problems of QFT (it is only relevant to 1/r?).
Isn't your paper nonrelativistic and not Lorentz invariant?
It is almost impossible to solve all the divergence problems of QFT while keeping Lorentz invariance.

What do you think about it?
 
  • #44
DarMM said:
[itex]L_{1}[/itex] is not well defined, so let's make it well defined by introducing a cutoff.

I don't understand why you are saying that L_1 is not well defined and cutoff-dependent. First note that L_1 is interaction "density", so in order to obtain the interaction operator, L_1 must be integrated on d^3x. So, in total there are 4 integrations in this expression: one on d^3x and 3 momentum integrals (which come from definitions of quantum fields). Quantum fields also supply exponential factors like exp(ipx). Their integration on d^3x results in one momentum delta function (which simply expresses the fact that our interaction conserves the total momentum). As a result we obtain a sum of terms. Each term has 3 momentum integrals whose integrands contain 1 momentum delta function and a product of 3 creation/annihilation operators. This is exactly the standard form of any interaction operator as shown in Weinberg's eq. (4.4.1). There is no cutoff in this equation.
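
Schematically (the vertex function g and the momentum labels below are illustrative shorthand, not Weinberg's exact notation), one such term has the form

[tex]
V \;\sim\; \int d^{3}p'\,d^{3}p\,d^{3}k\;\delta^{3}(\mathbf{p}'-\mathbf{p}-\mathbf{k})\;g(\mathbf{p}',\mathbf{p},\mathbf{k})\;a^{\dagger}(\mathbf{p}')\,a(\mathbf{p})\,c(\mathbf{k})\;+\;\mathrm{h.c.},
[/tex]

i.e. three momentum integrations, one momentum-conserving delta function, and a product of three creation/annihilation operators.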
 
  • #45
meopemuk said:
I don't understand why you are saying that L_1 is not well defined
I'm saying it because it's not. It's not self-adjoint or semi-bounded; there are proofs of this fact in a few books. The basic problem is that [itex]\bar\psi\gamma^{\mu}\psi[/itex] is an operator-valued distribution, and in more than two dimensions...
First note that L_1 is interaction "density", so in order to obtain the interaction operator, L_1 must be integrated on d^3x. So, in total there are 4 integrations in this expression: one on d^3x and 3 momentum integrals (which come from definitions of quantum fields). Quantum fields also supply exponential factors like exp(ipx). Their integration on d^3x results in one momentum delta function (which simply expresses the fact that our interaction conserves the total momentum). As a result we obtain a sum of terms. Each term has 3 momentum integrals whose integrands contain 1 momentum delta function and a product of 3 creation/annihilation operators. This is exactly the standard form of any interaction operator as shown in Weinberg's eq. (4.4.1).
...even after performing the integration it is not a well-defined operator. This was proven by Wightman in the 1960s. I can provide you with some references. So [itex]\int{\bar\psi\gamma^{\mu}\psi}A_{\mu}[/itex] is not a well-defined operator; this is a rigorous mathematical fact.

and cutoff-dependent.
It's not cutoff-dependent; I'm only introducing a cutoff so that I can add it to the counterterm part without getting into the rigours of distribution theory. The cutoff turns it into a well-defined operator, which I then add to the cutoff counterterm part, obtaining another operator which remains well-defined when the cutoff is removed.

There is no cutoff in this equation.
Of course, but the problem is that the equation doesn't give you an operator unless it is cut off.
Think about it: if the integral of [itex]L_{1}[/itex] were a well-defined operator, why would you even need renormalization? There would be no loop divergences.
 
  • #46
DarMM said:
...if the integral of [itex]L_{1}[/itex] was an operator why would you even need renormalization? If it was well-defined there would be no loop divergences.

In my opinion, L_1 is a well defined operator. In any case, whatever subtle irregularities were found in it by Wightman, they are no match for the explicitly divergent constants deltam, Z_2 and Z_3 in L_2. So, I don't see how any cancelation of divergences is possible in L_1 + L_2.

My understanding of the origin of loop divergences is different from yours. In my view the major problem is that L_1 (when expressed in terms of creation/annihilation operators) contains "trilinear" terms, like

L_1 = a*ac + a*c*a + ...

where a is annihilation operator for electrons and c is annihilation operator for photons. When you calculate the 2nd order S-operator with interaction L_1 you need to take the product of two copies of L_1

S = (a*ac + a*c*a + ...)(a*ac + a*c*a + ...)

After normal ordering you may notice that there is a non-zero term of the type

(loop integral) a*a

This term describes some kind of "scattering of the electron on itself", i.e., self-interaction. One effect produced by this term is that the electron mass in the interacting theory is different from the electron mass in the free theory. That's why the electron mass renormalization is needed. In QED things are even worse: the momentum dependence of the trilinear interactions is such that the loop integral is divergent, so the electron mass correction is infinite.
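
A schematic sketch of the cross term meant here (g is an illustrative vertex function, and the factors hidden in the dots are omitted): after normal ordering,

[tex]
\big(a^{\dagger}a\,c\big)\big(a^{\dagger}c^{\dagger}a\big)\;\longrightarrow\;\Big[\int d^{3}k\;|g(\mathbf{k})|^{2}\,(\cdots)\Big]\,a^{\dagger}a\;+\;\ldots,
[/tex]

where the bracket is the loop integral; in QED its momentum dependence makes it divergent, which is what forces the mass renormalization.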

As long as you have trilinear interaction terms in your Hamiltonian you'll always have renormalization problems. In the "dressed particle" approach these trilinear interaction terms are called "bad". The idea of this approach is to change the Hamiltonian so that these bad terms are not present. All interactions must be written in terms of "good" operators only. An example of such a "good" operator is

a*a*aa

You may notice that if you take a product of two such operators and normal-order this product, you'll never get terms of the type a*a. So, there can be no "corrections" to the electron mass. No renormalization is needed.

The question is: can we get rid of the "bad" terms in the Hamiltonian and still obtain the same accurate S-matrix as we know it from renormalized QED? The answer is "yes", and the "dressed particle" approach shows how this can be done.
 
  • #47
ytuab said:
I have now read "Atom as a Dressed Nucleus".
The "positive charge cloud" theory is very interesting, and I think your idea is correct.
But I'm sorry if I am making mistakes here.
I think your paper is not relevant to the divergence problems of QFT (it is only relevant to 1/r?).
Isn't your paper nonrelativistic and not Lorentz invariant?
It is almost impossible to solve all the divergence problems of QFT while keeping Lorentz invariance.
What do you think about it?

The Hamiltonian formulation of QED looks non-Lorentz-invariant, but in fact it is invariant; this has been proven many times. I use the Hamiltonian formulation in the gauge-invariant (Dirac's) variables (also known as the Coulomb gauge). I build the relativistic Hamiltonian based on the physics of quantum-mechanical charge smearing outlined in the first part of the article. The resulting Hamiltonian (see also "Reformulation instead of Renormalizations", formula (60)) is relativistic but free from self-action. This new formulation is free from non-physical entities and describes the right physics, analogous to the atomic-scattering description. Preliminary non-relativistic estimates show that it is right. I have not presented the detailed relativistic calculations, but it is clear that they only bring some numerical corrections to the right physics already obtained in the non-relativistic approximation.

Bob.
 
  • #48
meopemuk said:
In my opinion, L_1 is a well defined operator.
It's not, though, and this isn't some subtlety; it is the origin of the divergences in quantum field theories. Operating on any vector in the Hilbert space twice with [itex]L_{1}[/itex] (which is second order in perturbation theory) maps it to a vector outside Fock space, hence the divergences.

meopemuk said:
In any case, whatever subtle irregularities were found in it by Wightman, they are no match for the explicitly divergent constants deltam, Z_2 and Z_3 in L_2.
Yes, in fact they are. You can prove that they are matches for these counterterms; are you contesting the proofs? If you are, I suggest we start with the quartic scalar case.

meopemuk said:
So, I don't see how any cancelation of divergences is possible in L_1 + L_2.
Maybe so, but that doesn't change the fact that these cancellations indeed occur. Take a look at Glimm's "Boson fields with the [itex]\Phi^{4}[/itex] interaction in three dimensions" to see it at work in the case of quartic scalar theory in three dimensions.

meopemuk said:
My understanding of the origin of loop divergences is different from yours. In my view the major problem is that L_1 (when expressed in terms of creation/annihilation operators) contains "trilinear" terms... The question is: can we get rid of the "bad" terms in the Hamiltonian and still obtain the same accurate S-matrix as we know it from renormalized QED? The answer is "yes", and the "dressed particle" approach shows how this can be done.
Remember that by Haag's theorem, interacting theories do not live in Fock space and hence cannot in general be written using creation and annihilation operators.
Anyway, if you look at any of the literature on rigorous field theory you will see that the origin of the ultraviolet divergences has long been known to come down to two facts:
1. Taking products of operator-valued distributions results in ill-defined powers which do not give rise to operators when integrated.
2. The non-Fock representations needed for interacting theories.

These are the origins of the ultraviolet difficulties. They are the reason [itex]L_{1}[/itex] is divergent. In fact [itex]L_{1}[/itex] is a tremendously divergent object; it even has what physicists would call nonperturbative infrared divergences. To say that it is well-defined up to some subtleties is provably wrong.

See the paper of Glimm above for an example of how badly divergent [itex]L_{1}[/itex] is even in the case of a scalar field theory.
 
  • #49
There can be no cancelation between L_1 and L_2 for the simple reason that L_1 is first order in the coupling constant (1st power in e) and L_2 is composed of 2nd, 3rd, 4th etc. order terms.

DarMM said:
Operating on any vector in the Hilbert space twice with [itex]L_{1}[/itex] (which is second order in perturbation theory) maps to a vector outside Fock space, hence the divergences.

This agrees with what I was saying: the (2nd order) product L_1 * L_1 is divergent (due to loop integrals). This divergence is compensated by the divergent 2nd order term in L_2. However, as I said above, there is no cancelation of divergences in the sum L_1 + L_2.
 
  • #50
meopemuk said:
There can be no cancelation between L_1 and L_2 for the simple reason that L_1 is first order in the coupling constant (1st power in e) and L_2 is composed of 2nd, 3rd, 4th etc. order terms.
This stuff is strange, and perhaps I've been explaining it badly. Think of what [itex]L_{1}[/itex] is meant to be a function of: it's a function of the interacting fields [itex]\psi[/itex] and [itex]A_{\mu}[/itex], which are themselves functions of e. So [itex]L_{1}[/itex] does contain terms of higher order in e through its dependence on the interacting fields.
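
Schematically (a purely formal expansion, just to illustrate the statement):

[tex]
\psi(x)\;=\;\psi^{(0)}(x)\;+\;e\,\psi^{(1)}(x)\;+\;e^{2}\,\psi^{(2)}(x)\;+\;\dots,
[/tex]

so an interaction term built from the interacting fields, such as [itex]e\,\bar\psi\gamma^{\mu}\psi A_{\mu}[/itex], contains contributions of every order in e once it is re-expanded in terms of free fields.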

Does this make sense?

However, I must say that the best way of thinking of this probably isn't through the use of perturbative additive renormalizations, but with nonperturbative multiplicative renormalizations, where you can see this whole thing become a problem related to the theory of distributions.
At the end of the day however it has been proven that cancellations do take place between [itex]L_{1}[/itex] and [itex]L_{2}[/itex].
 
  • #51
Bob_for_short said:
The Hamiltonian formulation of QED looks non-Lorentz-invariant, but in fact it is invariant; this has been proven many times. I use the Hamiltonian formulation in the gauge-invariant (Dirac's) variables (also known as the Coulomb gauge). I build the relativistic Hamiltonian based on the physics of quantum-mechanical charge smearing outlined in the first part of the article. The resulting Hamiltonian (see also "Reformulation instead of Renormalizations", formula (60)) is relativistic but free from self-action. This new formulation is free from non-physical entities and describes the right physics, analogous to the atomic-scattering description. Preliminary non-relativistic estimates show that it is right. I have not presented the detailed relativistic calculations, but it is clear that they only bring some numerical corrections to the right physics already obtained in the non-relativistic approximation.

Bob.

I think you misunderstand what "relativistic and Lorentz invariant" means for the divergence problems.

I have not yet read your paper "Reformulation instead of Renormalizations",
but as far as I have read your first paper, there is nothing in it about solving the divergence problems
while keeping Lorentz invariance.

The divergence is caused by the infinite loops (the action of infinitely many photons, particles and antiparticles) and by the divergent 4-momentum integrals (which keep Lorentz invariance).
It is not caused only by 1/r, as you say.

I do not like the idea of "bare mass" or "bare charge".
I think the idea of QFT has reached its limit.
 
  • #52
DarMM said:
This stuff is strange and perhaps I've been explaining badly. Think of what [itex]L_{1}[/itex] is meant to be a function of, it's a function of the interacting fields [itex]\psi[/itex] and [itex]A_{\mu}[/itex] which are themselves functions of e. So [itex]L_{1}[/itex] does contain terms of higher order in e through its dependence on the interacting fields.

Does this make sense?

Not yet. I see that our disagreement about QFT is even deeper than I thought. I always thought that quantum fields present in Weinberg's L_0 + L_1 + L_2 are *free* quantum fields. So, L_1 has only 1st order contributions. Apparently, you disagree with that.
 
  • #53
ytuab said:
I think you misunderstand what "relativistic and Lorentz invariant" means for the divergence problems.

I have not yet read your paper "Reformulation instead of Renormalizations",
but as far as I have read your first paper, there is nothing in it about solving the divergence problems
while keeping Lorentz invariance.

As I said previously, the relativistic theory of interacting particles or fields can be cast in Hamiltonian form, so it is a multi-particle quantum mechanics. Such a form is covariant; this has been proven.

ytuab said:
The divergence is caused by the infinite loops (the action of infinitely many photons, particles and antiparticles) and by the divergent 4-momentum integrals (which keep Lorentz invariance).

Do you understand what you are writing? The divergences are caused by divergences. It is a tautology. There is no physical mechanism behind such statements.

ytuab said:
It is not caused only by 1/r as you say.

Yes, it is. I give an example in "Atom...". If your potential in the integral is, roughly speaking,

1/(r+a) (i.e., "cut off", or finite, at r = 0),

but you try to use a perturbation expansion like this:

1/(r+a) = 1/r - a/r^2 + a^2/r^3 - ...,

your integral will diverge at small r.
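
A minimal worked check of this point (the exponential factor below is just a stand-in for some smooth, short-range weight, not the specific form factor of the paper):

[tex]
\int_{0}^{\infty}\frac{e^{-r}}{r+a}\,4\pi r^{2}\,dr\;<\;\infty,
[/tex]

whereas the second-order term of the expansion gives

[tex]
\int_{0}^{\infty}\frac{a^{2}}{r^{3}}\,e^{-r}\,4\pi r^{2}\,dr\;=\;4\pi a^{2}\int_{0}^{\infty}\frac{e^{-r}}{r}\,dr,
[/tex]

which diverges logarithmically at r = 0, even though the unexpanded integral is perfectly finite.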

As I said previously, I not only use a better initial approximation for interacting fields, but also remove the self-action. So my Hamiltonian is different. It is well defined physically and mathematically, contrary to the standard QED Hamiltonian.

Bob_for_short.
 
  • #54
Bob_for_short said:
As I said previously, the relativistic theory of interacting particles or fields can be cast in Hamiltonian form, so it is a multi-particle quantum mechanics. Such a form is covariant; this has been proven.
Bob_for_short.

I'm sorry to displease you.
But I'm still convinced that your paper is nonrelativistic and doesn't keep Lorentz invariance.

Because if you solved the divergence problems (infinite bare charge and mass ...) while keeping Lorentz invariance,
your paper would immediately be accepted by a top journal such as "Nature" or "Science".
So?

The relativistic particle is a point particle.
If you use a "(natural) cut-off", part of the integral becomes discontinuous and upper and lower limits on the momentum appear. So this state doesn't keep Lorentz invariance.

I don't believe in point particles, so I don't believe in QFT (and QM).
 
  • #55
ytuab said:
I'm sorry to displease you.
But I'm still convinced that your paper is nonrelativistic and doesn't keep Lorentz invariance.

It is a very superficial impression. I had no objections from experienced researchers.

Because if you solved the divergence problems (infinite bare charge and mass ...) while keeping Lorentz invariance, your paper would immediately be accepted by a top journal such as "Nature" or "Science".
So?

Not immediately. They all require the complete relativistic calculations, not only formulation.

The relativistic particle is a point particle.

As I showed in "Atom...", the point-like particle corresponds to the inclusive rather than the "elastic" picture of scattering in QM.

If you use a "(natural) cut-off", part of the integral becomes discontinuous and upper and lower limits on the momentum appear. So this state doesn't keep Lorentz invariance.

Yes, look at the atomic form-factors: they contain characteristic "cloud" sizes a0 or (me/MA)a0, for example. There is nothing wrong with it. On the contrary, it is natural unlike artificial cut-offs in the standard QFT.

I don't believe in point particles, so I don't believe in QFT (and QM).

And I am trying to build a trustworthy, working theory.

Bob_for_short.
 
  • #56
meopemuk said:
Not yet. I see that our disagreement about QFT is even deeper than I thought. I always thought that quantum fields present in Weinberg's L_0 + L_1 + L_2 are *free* quantum fields. So, L_1 has only 1st order contributions. Apparently, you disagree with that.
No, not really, and to be honest we don't have deep disagreements; I've just been vague at times for brevity. Let me try to explain in full. Some of the issues come from me talking in general rather than sticking to Weinberg, and I wasn't clear about the non-Fock nature of the problem.
If you'll allow me, I will take the case of [itex]\phi^{4}[/itex] in three dimensions since it is somewhat easier to deal with.

The first thing I should say is that the cancellations that I'm talking about probably can't be understood best as perturbative cancellations, but rather as direct operator cancellations or operator identities.

Anyway, let's take the Hamiltonian of [itex]\phi^{4}_{3}[/itex]. The interacting part is [itex]\int{\lambda\phi^{4}}[/itex]; this is the analogue of [itex]L_{1}[/itex] in Weinberg. Immediately you can prove this isn't a well-defined operator on Fock space. In my previous post I tried to demonstrate how badly behaved the operator is by showing that it can't act twice on a vector. Let me state the real problem: it's not self-adjoint. This is true even for the [itex]L_{1}[/itex] term in QED. An even bigger problem is that when added to the free part of the Hamiltonian it causes the total Hamiltonian to be unbounded, meaning there is no positivity of energy.
Now I know it seems strange, but it is a proven fact that these [itex]L_{1}[/itex] terms are just as divergent or badly behaved as the counterterms. Even if you can't "see it" and I accept that it may be difficult, it is a fact that they are highly divergent.

In [itex]\int{\lambda\phi^{4}}[/itex] physicists usually get around this with mass renormalization. We add a term [itex]\delta m^{2}\int{\phi^{2}}[/itex] to the Hamiltonian. Now [itex]\delta m^{2}[/itex] contains terms up to order [itex]\lambda^{2}[/itex]. I'm claiming that this results in a well-defined Hamiltonian. However, you rightly ask: how can this be possible if [itex]\int{\lambda\phi^{4}}[/itex] is only first order in [itex]\lambda[/itex] and [itex]\delta m^{2}[/itex] goes up to second order?
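
Written out schematically (signs and the exact form of the coefficients are illustrative here, not Glimm's expressions):

[tex]
H\;=\;H_{0}\;+\;\lambda\int\phi^{4}(\mathbf{x})\,d^{3}x\;+\;\delta m^{2}(\lambda)\int\phi^{2}(\mathbf{x})\,d^{3}x,
\qquad
\delta m^{2}(\lambda)\;=\;c_{1}\lambda\;+\;c_{2}\lambda^{2},
[/tex]

with cutoff-dependent (divergent) coefficients [itex]c_{1}, c_{2}[/itex]; the claim is that it is the combination, not each piece separately, that defines a good operator in the right representation.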

The truth is, it can't in Fock Space, which is the crux of Haag's theorem. If you move to the correct representation of the canonical commutation relations, or in physicist's speak "the interacting Hilbert space", then the cancellations are possible. See the paper by Glimm for details.

So yes, as long as one remains in Fock space, as Weinberg does, these cancellations cannot occur. However, we know that interacting theories can't live in Fock space.

If one wants to stick to Fock space then you'll be presented with an odd situation, you'll have order by order cancellations for the S-matrix, but you'll have a poorly defined Hamiltonian. Not just because of the counterterms, but also because of [itex]L_{1}[/itex].

I also just want to mention that renormalization basically turns out to be Wick ordering in a non-Fock space.

Is that better?
 
  • #57
Bob_for_short said:
Yes, look at the atomic form-factors: they contain characteristic "cloud" sizes a0 or (me/MA)a0, for example. There is nothing wrong with it. On the contrary, it is natural unlike artificial cut-offs in the standard QFT.

I think you probably forgot that the "integral part" of the (QED) Hamiltonian should be Lorentz invariant: this part must be continuous and have no upper or lower limit on the momentum.
In your paper, this is not commented on anywhere. That is strange, I think.

Making only part of the Hamiltonian Lorentz invariant is insufficient.

And I think that neither a natural nor an artificial cut-off keeps Lorentz invariance.
 
  • #58
ytuab said:
I think you probably forgot that the "integral part" of the (QED) Hamiltonian should be Lorentz invariant: this part must be continuous and have no upper or lower limit on the momentum. In your paper, this is not commented on anywhere. That is strange, I think. Making only part of the Hamiltonian Lorentz invariant is insufficient.

There are no limits in the Fourier integral itself. It is the form-factor that "cuts off" certain parts of the integration.

I forgot nothing. You just do not believe what I wrote. There are well-known things that go without saying. The proof that I wrote about is valid for all Hamiltonian terms, including the four-fermion Coulomb term.

And I think that neither a natural nor an artificial cut-off keeps Lorentz invariance.

As I showed in "Atom...", the elastic cross section can be measured. It contains the positive-charge cloud size. Very roughly, the elastic cross section involves a dimensionless ratio of this size to the impact parameter.

The same is valid for inelastic cross sections.

But if you add up all the cross sections, these dependencies smooth out and you obtain the Rutherford cross section, as if the target charge were point-like. That is what is observed in inclusive experiments. The inclusive picture is illusory, not real. That is why starting from assigning 1/r to a charge leads to bad mathematical expressions.

Bob.
 
  • #59
DarMM said:
the crux of Haag's theorem... However we know that interacting theories can't live in Fock space.

I think I know what Haag's theorem is, and in my (perhaps ill-informed) opinion this theorem does not present a significant obstacle for developing QFT in the Fock space. This theorem basically says that "interacting field" cannot have a manifestly covariant Lorentz transformation law. Some people say that this violates the relativistic invariance and, therefore, is unacceptable. However, I would like to disagree.

A quantum theory is relativistically invariant if its ten basic generators (total energy, total momentum, total angular momentum, and boost) satisfy Poincare commutation relations. These commutators have been proven for interacting QFT. For example, in the case of QED the detailed proof is given in Appendix B of

S. Weinberg, "Photons and Gravitons in S-matrix theory: Derivation of charge conservation and equality of gravitational and inertial mass", Phys. Rev. 135 (1964), B1049.

So, relativistic non-invariance is out of question. I think that the absence of the manifestly covariant transformation law of the "interacting field" is not a big problem. Actually, one can perform QFT calculations without even mentioning "interacting field" at all. It is quite sufficient to have a Hamiltonian and obtain the S-operator from it by usual Rules of Quantum Mechanics.

My claim remains that the Hamiltonian L_0 + L_1 is well-defined. However S-matrix divergences appear when products like L_1 * L_1 are calculated. In the renormalization theory these divergences get canceled by the addition of (divergent) counterterms L_2 in the Hamiltonian. So, the full Hamiltonian

H = L_0 + L_1 + L_2

is cutoff-dependent and divergent in the limit of removed cutoff. This divergence is not a big deal in regular QFT, where we are interested only in the S-matrix. However, if one day we decide to study the time evolution of states and observables in QFT, we may hit a difficult problem due to the absence of a well-defined Hamiltonian. Fortunately, this day seems to be quite far away, because experimental information about the time evolution of colliding particles is virtually non-existent.

I think that our disagreement reflects two different philosophies about dealing with interacting QFT. In your approach (which is widely accepted), you seek solution by leaving the Fock space. In my approach (which is less known) I stay in the Fock space and try to change the original Hamiltonian by "dressing". It may well happen that both philosophies are correct (or that both are wrong).
 
  • #60
meopemuk said:
I think I know what Haag's theorem is, and in my ... opinion this theorem does not present a significant obstacle for developing QFT in the Fock space.

I agree with you here. In fact, there may be different QFTs with different interaction Hamiltonians. In my Novel QED I stay within Fock spaces without problem.

I think that our disagreement reflects two different philosophies about dealing with interacting QFT. In your approach (which is widely accepted), you seek solution by leaving the Fock space. In my approach (which is less known) I stay in the Fock space and try to change the original Hamiltonian by "dressing". It may well happen that both philosophies are correct (or that both are wrong).

Both of you perform perturbative renormalizations of the standard QED (i.e., with self-action), however it is named. Eugene's approach keeps the fundamental constants intact and discards the perturbative corrections to them. This is a typical renormalization prescription. Of course, it is also a perturbative dressing. What I propose is a non-perturbative dressing and a physical interaction without the wrong self-action and without the wrong renormalizations.

Bob_for_short.
 
  • #61
meopemuk said:
I think I know what Haag's theorem is, and in my (perhaps ill-informed) opinion this theorem does not present a significant obstacle for developing QFT in the Fock space. This theorem basically says that "interacting field" cannot have a manifestly covariant Lorentz transformation law. Some people say that this violates the relativistic invariance and, therefore, is unacceptable. However, I would like to disagree.
The theorem says that any translationally invariant theory satisfying the Wightman axioms, even a non-manifestly-Lorentz-invariant one, cannot live in Fock space. So you can avoid it only if you drop one of the Wightman axioms, because dropping translation invariance would be a bit much. Maybe your dressing approach drops one of the Wightman axioms?

So, relativistic non-invariance is out of question. I think that the absence of the manifestly covariant transformation law of the "interacting field" is not a big problem. Actually, one can perform QFT calculations without even mentioning "interacting field" at all. It is quite sufficient to have a Hamiltonian and obtain the S-operator from it by usual Rules of Quantum Mechanics.
However, it has been proven that the QED S-operator is not Hilbert-Schmidt on Fock space and hence is not well defined nonperturbatively. This may not be a problem, though, if you only want things to work perturbatively. Maybe you disagree that there should be a nonperturbative QED; it's not necessarily a bad position.

My claim remains that the Hamiltonian L_0 + L_1 is well-defined.
It's not though, I mean it has been proven that it's not well-defined as an operator on Fock space, even in two dimensions. This is really the only thing about your position that I don't understand. It has been proven to not be self-adjoint or semibounded. How can you claim it is well-defined if there are proofs that it is not? This is a genuine question, maybe you mean something specific by "well-defined" which doesn't require the Hamiltonian to be self-adjoint or semi-bounded, or are you contesting the proofs?
meopemuk said:
I think that our disagreement reflects two different philosophies about dealing with interacting QFT. In your approach (which is widely accepted), you seek solution by leaving the Fock space. In my approach (which is less known) I stay in the Fock space and try to change the original Hamiltonian by "dressing". It may well happen that both philosophies are correct (or that both are wrong).
Maybe this is what you mean, that after this "dressing" [itex]L_{0} + L_{1}[/itex] is well-defined as an operator on Fock space. All I'm saying is that [itex]L_{0} + L_{1}[/itex] as it is defined in Weinberg is not well-defined, which is a fact with a rigorous mathematical proof behind it.
 
  • #62
DarMM said:
The theorem says that any translationally invariant theory satisfying the Wightman axioms, even a non-manifestly-Lorentz-invariant one, cannot live in Fock space. So you can avoid it only if you drop one of the Wightman axioms, because dropping translation invariance would be a bit much. Maybe your dressing approach drops one of the Wightman axioms?

There are many different formulations of Haag's theorem. I suspect that we have different things in mind. There is a nice paper, which discusses exactly the relationship between Haag's theorem and dressing (I hope our moderators won't be mad at me for mentioning this reprint)

M.I. Shirokov, "Dressing" and Haag's theorem, http://www.arxiv.org/abs/math-ph/0703021

DarMM said:
All I'm saying is that [itex]L_{0} + L_{1}[/itex] as it is defined in Weinberg is not well-defined, which is a fact with a rigorous mathematical proof behind it.

Could you give me exact reference where this has been proved? I would like to take a look.
 
  • #63
ytuab said:
I think you misunderstand what "relativistic and Lorentz invariant" means for the divergence problems.

It is not caused only by 1/r, as you say.

I do not like the idea of "bare mass" or "bare charge".
I think the idea of QFT has reached its limit.

I am coming from a different direction, but am interested in the same thing - the cut-off.

I would like to say (for a paper I am writing) that below a certain cut-off distance
the universe has no answer because it runs out of 'precision':
it would require too much data to exactly define such a fine-grained system.
That is why (another reason) we must 'cut off' (I want to write this in the paper if possible),
and also why it is legitimate to do so.

So if this length is about a Planck length, then there would be no difference in interactions
found between, let's say, .003456 and .003457 Planck lengths, because that is below the cut-off:
such a small difference will not be definable in terms of interactions -
such a small difference is 'not recognised' and cannot trigger an event; it is below the detectable precision limit.

Why? In this view, data converts algorithmically to length and cannot be infinitely precise:
that would require too many bits, i.e. the universe has not got infinite data, and hence
infinite precision, at its disposal.

I am very interested in collaborating with anyone who can help me towards a more formal
exposition.
 
  • #64
p764rds said:
I am coming from a different direction, but am interested in the same thing - the cut-off. I would like to say (for a paper I am writing) that below a certain cut-off distance the universe has no answer because it runs out of 'precision'... I am very interested in collaborating with anyone who can help me towards a more formal exposition.

Your idea is in fact the very popular idea of coarse graining, and it is very well developed.
W. Heisenberg advanced the idea of a fundamental length many years ago, precisely in order to have a fundamental cut-off. In the statistical physics of phase transitions a similar idea was employed by Kenneth Wilson in his renormalization-group approach. It was then borrowed by QFT physicists to say that QFT, and QED in particular, are perhaps so-called effective field theories.

Unfortunately this is not the case: the standard QED results, after renormalizations, are finite and do not contain any fundamental length or cut-off at all. That means there should be a short-cut to obtain the same finite results directly, without infinite bare parameters and infinite counter-terms to subtract them - and, of course, without appealing to a fundamental-length idea.

I promote such a short-cut. It encounters huge resistance because people just do not believe in its existence. In fact, though, nobody has found a single mathematical or physical error in my articles. It is a problem of prejudice, which is the most difficult one at the moment. The conceptual and mathematical difficulties have already been resolved.

Bob.
 
  • #65
p764rds said:
I am coming from a different direction, but am interested in the same thing - the cut-off. I would like to say (for a paper I am writing) that below a certain cut-off distance the universe has no answer because it runs out of 'precision'... I am very interested in collaborating with anyone who can help me towards a more formal exposition.

It would be a great thing to solve the divergence problems while keeping Lorentz invariance.
Infinite bare mass, infinite bare charge and the divergence problems are inevitable in QFT.

I think the basic idea of QFT needs to be changed.

Bob_for_short said:
There are no limits in the Fourier integral itself. It is the form-factor that "cuts off" certain parts of the integration.

I forgot nothing. You just do not believe what I wrote. There are well-known things that go without saying. The proof that I wrote about is valid for all Hamiltonian terms, including the four-fermion Coulomb term.
Bob.

In your paper (page 15), Equation (23) is not Lorentz invariant. Do you notice that?
You say Eq. (23) is a "relativistic Hamiltonian".

In the second term of Eq. (23), the integral over d^3R must be over d^4R (an integral over space and time).
And R1 and R2 must not have upper and lower limits in space and time.
The first term must also be an integral over d^4P.
So nothing in your Eq. (23) is Lorentz invariant. Do you confirm that?

In your paper you say "the problem of IR and UV divergences is removed in QED".
But if Eq. (23) is not Lorentz invariant, this conclusion is not proper.
 
  • #66
ytuab said:
It would be a great thing to solve the divergence problems while keeping Lorentz invariance. Infinite bare mass, infinite bare charge and the divergence problems are inevitable in QFT. I think the basic idea of QFT needs to be changed.

Before this text you quote the post of p764rds, not mine. The problems you mention are inevitable in QFTs with a self-action term.
In your paper (page 15), Equation (23) is not Lorentz invariant. Do you notice that?
You say Eq. (23) is a "relativistic Hamiltonian".

Have you ever seen a standard QED Hamiltonian in the Coulomb gauge? It is of the same structure but contains in addition a self-action term. My Hamiltonian does not contain it.
In the second term of Eq. (23), the integral over d^3R must be over d^4R (an integral over space and time). And R1 and R2 must not have upper and lower limits in space and time.
The first term must also be an integral over d^4P. So nothing in your Eq. (23) is Lorentz invariant. Do you confirm that?

You are just unfamiliar with the Hamiltonians of QED in the Coulomb gauge. The integrals are correct: d^3R1 d^3R2. Read S. Weinberg or any other textbook on this particular subject to make sure I am right.
In your paper you say "the problem of IR and UV divergences is removed in QED".
But if Eq. (23) is not Lorentz invariant, this conclusion is not proper.

And if it is invariant, this conclusion is correct.

Read also "Reformulation instead of renormalizations" for another motivation to construct formula (60).

Bob.
 
  • #67
meopemuk said:
There are many different formulations of Haag's theorem. I suspect that we have different things in mind. There is a nice paper, which discusses exactly the relationship between Haag's theorem and dressing (I hope our moderators won't be mad at me for mentioning this reprint)

M.I. Shirokov, "Dressing" and Haag's theorem, http://www.arxiv.org/abs/math-ph/0703021
Actually we're talking about the same thing. If you look at the reference it states Haag's theorem requires only translational and rotational invariance, not Lorentz invariance, which is why it affects some Galilean/non-relativistic field theories. However the reference also explains how the dressed approach gets around this. As I suspected you drop one of the Wightman axioms, namely that the interacting field operators transform covariantly. This allows you to remain in Fock space. Thanks for the references.

Could you give me exact reference where this has been proved? I would like to take a look.
To get an idea of the issues involved in the d = 2 case, take a look at:
Fermion currents in 1+1 dimensions
Carey, Hurst, O'Brien
J. Math. Phys. 24, p. 2212


For general problems related to only integrating fields over space see:
A.S. Wightman and L. Gårding,
Fields as operator valued distributions in relativistic quantum field theory.
Ark. f Fys., t. 28, 1965, p. 129
 
  • #68
Bob_for_short said:
Have you ever seen a standard QED Hamiltonian in the Coulomb gauge? It is of the same structure but contains in addition a self-action term. My Hamiltonian does not contain it.

You are just unfamiliar with the Hamiltonians of QED in the Coulomb gauge. The integrals are correct: d^3R1 d^3R2. Read S. Weinberg or any other textbook on this particular subject to make sure I am right.

And if it is invariant, this conclusion is correct.
Bob.

Do you mean a charge which is almost at rest (k^2 << m^2), or something like that?
I think what you describe is probably an approximation.

For example, in the calculation of the Lamb shift this approximation is used (using d^3k d^3x integrals instead of d^4k d^4x integrals).

But due to this approximation, it doesn't keep Lorentz invariance.

And the Coulomb gauge doesn't keep Lorentz invariance (the Lorentz gauge does).
And the Coulomb gauge violates causality.

See http://en.wikipedia.org/wiki/Gauge_fixing
 
  • #69
DarMM said:
As I suspected you drop one of the Wightman axioms, namely that the interacting field operators transform covariantly. This allows you to remain in Fock space.

That's exactly right. I mentioned the non-covariance in an earlier post. I don't see a good reason for the "interacting field" to be covariant. It might sound counter-intuitive, but the full interacting theory is still relativistically invariant (in the sense described in Weinberg's vol. 1).

Thank you for the references.
 
  • #70
Re Haag's thm, the unitary "dressing" approach, etc...

(I know I should probably stay quiet, but I'll offer my
$0.02 worth. BTW, some related stuff was discussed a
while back in this thread:
https://www.physicsforums.com/showthread.php?t=177865
which also explained some of the differences between
orthodox QFT and Meopemuk's approach.)

Anyway...

The widely-known formulations of Haag's thm tend to be based
on having an irreducible set of operators parameterized by
Minkowski spacetime coordinates. Covariance under a Lorentz
boost is then formulated with reference to these spacetime
coords.

The point of Shirokov's paper:

M.I. Shirokov, "Dressing" and Haag's theorem,
Available as: http://www.arxiv.org/abs/math-ph/0703021

is that such a view of "spacetime covariance" under Lorentz
boosts is untenable in an interacting QFT. (But the
incompatibilities between relativistic interactions and
naive Lorentz transformation of spacetime trajectories have
already been known for a long time in other guises.)

Another perspective on Haag's thm was given in Barton's
little book:

G. Barton, "Introduction to Advanced Field Theory",
Interscience 1963,

(It might be possible to access a copy via
http://depositfiles.com/en/files/4816818 , or at
http://www.ebookee.com.cn/Introduction-to-advanced-field-theory_166416.html
but I haven't actually tried these out.)

Barton explains and emphasizes the role of unitarily
inequivalent representations of the CCRs, (which Weinberg
doesn't even mention), and concludes his analysis of Haag's
thm by saying (p157) "...the correspondence between
vector space in which the auxiliary (in) and (out) fields
are defined, and that in which the [interacting field(s)
are] defined, is necessarily mediated by an improper
[unitary] transformation." Here, "improper" means a
transformation between inequivalent representations, i.e.,
between disjoint Fock spaces.

(For any readers unfamiliar with unitarily inequivalent
representations, the Bogoliubov transformations of condensed
matter theory are a simple example.)

So, previously in this thread where "the Fock space" has
been mentioned, one must understand that there is not one
Fock space mathematically, but rather an uncountably
infinite number of disjoint Fock-like spaces. The unitary
dressing transformations form part of a technique to find
which one is physically correct.

A related approach of Shebeko and Shirokov, complementary to
Meopemuk's, can be found in

Shebeko, Shirokov,
"Unitary Transformations in QFT and Bound States"
Available as: nucl-th/0102037

My take on both approaches is this:

Starting from a Fock space corresponding to the free theory,
and an initial assumption about the form of the interaction,
one investigates the Hamiltonian and S-matrix, finds they're
ill-behaved in terms of high energy and infinite numbers of
particles, then performs an (improper) unitary
transformation at a particular order of perturbation, then
performs something similar to the usual mass and charge
renormalization (since even improper unitary transformations
alone seem unable to cure this kind of divergence), then
(at the next perturbation order) performs another improper
unitary transformation, and so on. All of this is aimed at
finding an S-matrix, a Hamiltonian, and a space in which
both are physically sensible (stable vacuum and 1-particle
states, finite operators, etc, etc).
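
Schematically, each such step is a similarity transformation of the Hamiltonian (this is only my shorthand for the procedure sketched above, not a formula taken from the cited papers):

[tex]
H\;\longrightarrow\;H'\;=\;e^{iW}\,H\,e^{-iW},
[/tex]

with the (generally improper) unitary generator W chosen order by order in the coupling so that H' is free of the "bad" terms at that order, interleaved with the usual mass and charge renormalizations.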

HTH.
 
