Haag's Theorem & Wightman Axioms: Solving Problems in QFT?

In summary, Haag's theorem and the Wightman axioms show that the vacuum sector of a relativistic QFT in 4 space-time dimensions must look different from a Fock space. Recently, a colleague pointed me to a review of Haag's theorem and related works which supports this claim. However, it is a pity that teachers of QFT do not discuss asymptotic states of interacting particles more often, as this would help to clear up some of the uncertainty surrounding the theory.
  • #36
Demystifier said:
The Haag's theorem is an artifact of taking the infinite-volume limit too seriously. It is brilliantly explained in
A. Duncan, The Conceptual Framework of Quantum Field Theory (2012)
https://www.amazon.com/dp/0199573263/?tag=pfamazon01-20
Sec. 10.5 How to stop worrying about Haag's theorem

But even if we take the infinite-volume limit too seriously, Haag's theorem is not a problem. It turns out that the results from the wrong derivations can be rigorously derived at the level of formal perturbation theory, which makes sense if the theory is also shown to exist.
 
Last edited by a moderator:
  • #37
A. Neumaier said:
But the standard model, on which all particle physics is based, is completely independent of general relativity.

Drop Poincare invariance, and you have essentially no constraints on the kind of action to consider. The terms kept in the standard model are distinguished solely by Poincare invariance, renormalizability, and the assumed internal symmetry group, all of which are extracted from experimental data and very well verified.

Couldn't you put the system on a torus?
 
  • #38
Demystifier said:
You claimed that "only the infinite volume limit gives physically correct results". Do you still stand with this claim?
Yes. This limit defines the theory, though of course every sufficiently accurate approximation to the limit is experimentally indistinguishable from it. But without having the limiting theory there is no way to tell what is sufficiently accurate...

In particular, current lattice calculations don't match experiment if they don't extrapolate to the limit. This is typically done by performing computations for a number of lattice sizes, applying a finite renormalization to match the defining physical parameters, and then extrapolating. On a fixed lattice, results are generally poor.
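
For illustration, here is a minimal sketch (Python, with invented numbers, not from the post) of the extrapolation step described here: compute an observable at several lattice spacings and extrapolate to zero spacing by fitting the assumed leading discretization error, taken here to be quadratic in the spacing. The finite renormalization that matches physical parameters at each spacing is not modeled; only the extrapolation is.

```python
import numpy as np

# Hypothetical observable measured at several lattice spacings a.
# The numbers are invented for illustration; they are not real lattice data.
a = np.array([0.12, 0.09, 0.06, 0.045])      # lattice spacings (arbitrary units)
obs = np.array([0.912, 0.948, 0.974, 0.983])  # "measured" values at each spacing

# Assume the leading cutoff effect is quadratic, obs(a) = obs_cont + c * a^2,
# and fit a straight line in a^2; the intercept is the continuum estimate.
slope, obs_cont = np.polyfit(a**2, obs, 1)

print(f"finest-lattice value:    {obs[-1]:.4f}")
print(f"continuum extrapolation: {obs_cont:.4f}")
```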
 
  • #39
atyy said:
if the theory is also shown to exist.
Though that is at present not the case in 4D. Moreover, even the formal perturbation theory needs infinite renormalization, which destroys the Fock space structure. No Hilbert space is left, except the free, asymptotic one at times ##\pm\infty##. Finite time calculations cannot be justified in this way.

atyy said:
Couldn't you put the system on a torus?
One can, but this preserves only translations, breaks Lorentz symmetry, and does not get rid of the infinities that characterize the perturbative approach based on Fock spaces.
 
  • #40
A. Neumaier said:
One can, but this preserves only translations, breaks Lorentz symmetry, and does not get rid of the infinities that characterize the perturbative approach based on Fock spaces.

But if we put the system on the torus, do all the formal reasons you gave for constraining the form of the standard model Lagrangian remain (with Haag's theorem not applying)?
 
  • #41
The fact that QFTs require renormalization almost certainly leads to inequivalent Hilbert spaces. Haag's theorem is just an independent argument for this fact, but even if it doesn't hold, the renormalized Hilbert space will usually be different. A crucial difference between QM and QFT is that in QM we have the Stone-von Neumann uniqueness theorem, while in QFT there are infinitely many inequivalent representations of the field Weyl algebra and none of them is physically preferred.
 
  • #42
atyy said:
But if we put the system on the torus, do all the formal reasons you gave for constraining the form of the standard model Lagrangian remain (with Haag's theorem not applying)?
It depends. If you first derive the model in Minkowski space, use the restrictions there, and then put the result on the torus, you get of course just the same limited number of terms as in Minkowski space, since this is determined by the still unregulated UV behavior, and not by the IR behavior regulated by the compactness of the torus.

But if you start on the torus from scratch, without assuming a relation to Minkowski space, you have far too many possibilities, since there is no constraint from Lorentz invariance! Thus the number of possible parameters grows tremendously, as there is no longer a (symmetry) reason why many of them should be equal.
 
  • #43
A. Neumaier said:
Yes. This limit defines the theory, though of course every sufficiently accurate approximation to the limit is experimentally indistinguishable from it. But without having the limiting theory there is no way to tell what is sufficiently accurate...

In particular, current lattice calculations don't match experiment if they don't extrapolate to the limit. This is typically done by performing computations for a number of lattice sizes, applying a finite renormalization to match the defining physical parameters, and then extrapolating. On a fixed lattice, results are generally poor.
Well, it obviously does not define the theory we use, because we get well-defined answers with renormalized perturbation-theory techniques (sometimes you have to resum to get rid of IR problems or problems along the light cone, particularly in many-body QFT at finite temperature and/or density; see, e.g., the AMY photon rates in a quark-gluon plasma).

To calculate physical observables in perturbation theory you first have to regularize the theory. One way to get rid of the problems related to Haag's theorem is to put the system in a finite box (the example of a mass term as perturbation in Duncan is just great for understanding the principle behind it). Another way is the more standard set of regularization techniques in momentum space (cutoff, Pauli-Villars, dimensional regularization, heat-kernel/theta-function regularization), after which one takes the limit after renormalization, or renormalizes directly with BPHZ subtraction techniques. The results are the known successes of the Standard Model.
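
As a toy illustration of "regularize, then subtract" (a one-dimensional analogue invented for this purpose, not a QFT calculation): a logarithmically divergent integral grows without bound as the cutoff is removed, but a suitably subtracted combination approaches a finite, cutoff-independent answer.

```python
import numpy as np

# Toy analogue of a log-divergent loop integral (not a QFT calculation):
#   I(m, Lambda) = \int_0^Lambda dk  k / (k^2 + m^2) = (1/2) ln(1 + Lambda^2 / m^2)
def I(m, cutoff):
    return 0.5 * np.log(1.0 + (cutoff / m) ** 2)

for cutoff in (1e2, 1e4, 1e6):
    raw = I(1.0, cutoff)                           # diverges as the cutoff grows
    subtracted = I(1.0, cutoff) - I(2.0, cutoff)   # -> ln(2), cutoff-independent
    print(f"Lambda = {cutoff:.0e}:  I = {raw:8.3f},  I(m=1) - I(m=2) = {subtracted:.6f}")
```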

Another way is the lattice approach, which has its merits particularly in QCD in the vacuum (e.g., mass spectrum of hadrons) and at finite temperature (zero baryo-chemical potential but even beyond that to study, e.g., the Equation of State); I'd not call these results "poor".
 
  • #44
vanhees71 said:
it obviously does not define the theory we use
"Defines" was not meant here in the mathematical sense. It identifies the lattice theory with the corresponding Poincare invariant continuum theory; renormalized perturbation theory is another approach to approximately defining the latter in the mathematical sense.

vanhees71 said:
the lattice approach, which has its merits particularly in QCD in the vacuum (e.g., mass spectrum of hadrons) and at finite temperature (zero baryo-chemical potential but even beyond that to study, e.g., the Equation of State); I'd not call these results "poor".
They are poor if evaluated on a fixed lattice, and give good results only when extrapolated to the limit, as I mentioned. You need to consider how the results change with the lattice spacing, and because convergence is extremely slow one must numerically extrapolate to the limit of zero lattice spacing to get the physically relevant approximations from the lattice calculations. These numerical limits can be quite different from the values obtained at the computationally feasible lattice spacings. The difference is like the difference between computing ##\sum_{k=1}^\infty 1/k^2## from the first few terms of the series (very poor approximation) or from a Pade-accelerated numerical limiting procedure based on the same number of coefficients (which gives a good approximation of the sum).
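
To make the ##\sum_{k=1}^\infty 1/k^2## analogy concrete, here is a small sketch (Python, not from the post). Instead of the Pade acceleration mentioned above, it uses Richardson (Neville) extrapolation in ##1/n## as a stand-in, which exploits the known form of the truncation error of the partial sums; the accelerated value obtained from a handful of terms is far closer to ##\pi^2/6## than the raw partial sum.

```python
import math

def partial_sums(n_terms):
    """Partial sums S_n = sum_{k=1}^n 1/k^2."""
    s, out = 0.0, []
    for k in range(1, n_terms + 1):
        s += 1.0 / k**2
        out.append(s)
    return out

def extrapolate(seq, start=1):
    """Neville polynomial extrapolation of the points (1/n, S_n) to 1/n -> 0,
    assuming the truncation error of S_n has an expansion in powers of 1/n."""
    x = [1.0 / (start + i) for i in range(len(seq))]
    t = list(seq)
    for k in range(1, len(t)):
        for j in range(len(t) - k):
            t[j] = (x[j + k] * t[j] - x[j] * t[j + 1]) / (x[j + k] - x[j])
    return t[0]

sums = partial_sums(10)
exact = math.pi**2 / 6
print("last partial sum:", sums[-1], "error:", abs(sums[-1] - exact))
print("extrapolated    :", extrapolate(sums), "error:", abs(extrapolate(sums) - exact))
```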
 
Last edited:
  • #45
Sure, with a few lattice points you don't get the results I mentioned, and continuum extrapolation is mandatory.
 
  • #46
A. Neumaier said:
The cure is the renormalization program. It sacrifices the Hilbert space (at finite times) to restore finiteness and predictability.
I disagree. If one uses a lattice with periodic boundary conditions as the regularization, the regularized theory lives in a standard Hilbert space. And, of course, at finite times.
 
  • #47
A. Neumaier said:
The cure is the renormalization program. It sacrifices the Hilbert space (at finite times) to restore finiteness and predictability.
Denis said:
If one uses a lattice with periodic boundary conditions as the regularization, the regularized theory lives in a standard Hilbert space. And, of course, at finite times.
True but this does not invalidate my statement.

First, your recipe requires a splitting into space and time, sacrificing instead covariance.

Second, the identification with the observables in these approximations is different at different lattice spacings and periods. Thus to identify the observables in the physical limit where the spacing goes to zero and the period to infinity, one needs renormalization. And in 4 dimensions nobody so far has been able to define a proper Hilbert space in the corresponding limit. Thus the physical Hilbert space is lost and replaced by an approximate one, different for each particular approximation scheme (which reflects its lack of physicality).

Third, the real time lattice formulation proposed by you is numerically extremely poorly behaved, not really suitable for prediction. It cannot even be used for scattering calculations since in a compact space, scattering is absent.

Lattice QFT calculations are almost universally done in a Euclidean framework where both space and imaginary time are discretized. This not only sacrifices covariance, but also time in favor of its imaginary version!
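
For orientation, a brief reminder (standard textbook material, not from the post) of why the lattice formulation is Euclidean: replacing ##t \to -\mathrm{i}\tau## turns the oscillatory weight ##e^{\mathrm{i}S}## of the real-time path integral into a damped one,
$$\langle O \rangle = \frac{1}{Z} \int \mathcal{D}\phi \; O[\phi]\, e^{-S_E[\phi]}, \qquad Z = \int \mathcal{D}\phi \; e^{-S_E[\phi]},$$
which can be sampled by Monte Carlo methods; the price is that real-time quantities must then be recovered by analytic continuation.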
 
  • #48
PeterDonis said:
You're leaving out at least three key physical phenomena that QFT can predict and ordinary non-relativistic QM can't:

(1) The existence of processes where particles are created or destroyed;

(2) The existence of antiparticles;

(3) The connection between spin and statistics.
Relativistic QM (e.g. S-matrix theory) gives you all those. Field theory is not required.
 
  • #49
A. Neumaier said:
True but this does not invalidate my statement.
First, your recipe requires a splitting into space and time, sacrificing instead covariance.
So what? Your statement was not about covariance, but about Hilbert space and finite times.
A. Neumaier said:
Second, the identification with the observables in these approximations is different at different lattice spacings and periods. Thus to identify the observables in the physical limit where the spacing goes to zero and the period to infinity, one needs renormalization. And in 4 dimensions nobody so far has been able to define a proper Hilbert space in the corresponding limit.
That this limit is not well-defined is also not a problem. In any case, one has to expect that near the Planck length all this has to be replaced by a different theory. Since QFT is no more than a large-distance approximation, there is no need to have a valid limit.
A. Neumaier said:
Thus the physical Hilbert space is lost and replaced by an approximate one, different for each particular approximation scheme (which reflects its lack of physicality).
Anyway everything you do is only approximate. So, approximation is not at all a lack of physicality.
A. Neumaier said:
Third, lattice QFT calculations are almost universally done in a Euclidean framework where both space and imaginary time are discretized. This not only sacrifices covariance, but also time in favor of its imaginary version! (The real time lattice formulation is numerically extremely poorly behaved, not really suitable for prediction.)
Of course, imaginary time makes no sense in a Schroedinger equation. But it is fine as a numerical method to compute low-energy states. Probably it also makes sense for other things, I'm not sure. Whatever the case, it does not matter at all if something is numerically poor if it is conceptually clean. That one sometimes has to do some dirty things for numerical success, ok, such is life.
 
  • #50
Denis said:
Since QFT is no more than a large-distance approximation, there is no need to have a valid limit.
One needs the limiting concept including renormalization already to know how to extrapolate from the otherwise poor lattice calculations.

Denis said:
it does not matter at all if something is numerically poor if it is conceptually clean.
Your lattice Hilbert space allows no scattering, hence is far from being conceptually clean.
 
  • #51
A. Neumaier said:
One needs the limiting concept including renormalization already to know how to extrapolate from the otherwise poor lattice calculations.
If one wants to extrapolate - ok, then extrapolate. Conceptually one does not have to extrapolate.
A. Neumaier said:
Your lattice Hilbert space allows no scattering, hence is far from being conceptually clean.
What does "allows no scattering" mean? That it does not allow one to handle some limits where space and time go to infinity? In real physics there are no infinities, so conceptually one does not need them. Conceptually the infinite limits are only an approximation, and not sufficient to define the theory. Conceptually you need results for finite distances.

Of course, FAPP all one needs is the scattering matrix, any finite results are essentially useless, and the difference between the limit and our finite reality is completely irrelevant. But conceptually the situation is the reverse. A theory which computes only a scattering matrix is simply and obviously incomplete, not even completely defined. And whether one can somehow nicely compute a limit, like the scattering matrix, is conceptually irrelevant.
 
  • #52
Denis said:
If one wants to extrapolate - ok, then extrapolate. Conceptually one does not have to extrapolate.
Then one does not get numerical results agreeing with experiments.
Denis said:
What does "allows no scattering" mean?
It means that the concept of scattering cannot even be defined, hence the conceptual basis of all elementary particle experiments is missing. Just the opposite of being conceptually clean - namely conceptually defective.
 
  • #53
A. Neumaier said:
Then one does not get numerical results agreeing with experiments.
But the inability to get numerical results is not a conceptual problem, only a numerical one.
A. Neumaier said:
It means that the concept of scattering cannot even be defined, hence the conceptual basis of all elementary particle experiments is missing. Just the opposite of being conceptually clean - namely conceptually defective.
The scattering matrix is simply some infinite limit, which may be useful to approximate real physics, which is not infinite. But conceptually you don't need any scattering. All what we can measure is localized in a finite region around some small planet known as Earth.

A conceptually well-defined theory is something completely different from one which allows one to compute efficiently what we observe. I have no problem recognizing that for such approximate computations even computations with dimensional regularization may be useful, and give much better approximations than a well-defined lattice computation. But this does not make dimensional regularization in any way a meaningful theoretical concept, and it in no way invalidates the lattice conceptually.
 
  • #54
Denis said:
But the inability to get numerical results is not a conceptual problem, only a numerical one.
How do you conceptually relate the numerical results to the theory? For this you need renormalization and a limit.
Denis said:
The scattering matrix is simply some infinite limit, which may be useful to approximate real physics, which is not infinite. But conceptually you don't need any scattering.
How do you define scattering in a compact space conceptually? Real physics is always based on the asymptotic concept, and the experiments are approximate realizations of the concept. Without considering the asymptotics the whole conceptual clarity gained by quantum physics is lost. One has no longer well-defined asymptotic states that define what a particle is, nor has one sensible concepts of scattering angle and transition probabilities; everything becomes murky.

But since you consider this to be conceptually clean I guess I have nothing more to say.
 
  • #55
Conceptually, indeed the limits we take in mathematics are idealizations and simplifications. E.g., in classical continuum mechanics we work with continuous quantities like the mass density, i.e., you take a fluid, mathematically take an infinitely small volume out of it, determine its mass, and call the mass within the volume divided by the volume the density of the matter at this point in space at the given time. In fact, what's described by this idealized quantity is a macroscopically small volume (i.e., a volume within which the spatial changes of the relevant quantities can be considered negligibly small) but a microscopically large one (i.e., there must be a large number of particles within this volume element, and the fluctuations (quantum and/or thermal) should be small on average over this volume).

The same holds true for QFT. You (try to) define it as a Poincare covariant theory in Minkowski ##\mathbb{R}^4##, but that fails for all physically relevant models, and it's likely that at this level of rigor it's doomed to fail for fundamental reasons related with Haag's theorem and all that. On the other hand, of course, relativistic QFT in the way it is treated by physicists as an effective theory is very successful, and the way to cure Haag's disastrous theorem is indeed to regularize it somehow so as to make space and energy-momentum finite in some sense. E.g., you can put it in a box, impose convenient periodic boundary conditions to have well-defined momentum operators, etc., and then also make a momentum cutoff to get rid of UV trouble. That lets you at least define something like scattering matrix elements within this regularized model and then take appropriate limits to get S-matrix elements from appropriately renormalized perturbative N-point functions comparable with experiment.

Of course, with about 70 years of experience in such regularization procedures nobody would do such a brute-force regularization but rather use more convenient prescriptions, working, e.g., in a manifestly covariant way using dimensional regularization or the heat-kernel ##\zeta##-function method, because that simplifies the practical calculation. At the end you have an effective theory defined by renormalized perturbation theory.

Lattice regularization is of course another route, used in lattice-QCD calculations, and here too you have to employ continuum extrapolations using scaling laws and other mathematical tricks to extract the numbers of the continuum theory beyond the perturbative approach.

Of course each of these approximation schemes has its limitations, but what the underlying theory is that we approximate with this practitioners' version of relativistic QFT is not known today (nor do we know whether such a theory really exists).
 
  • #56
A. Neumaier said:
How do you conceptually relate the numerical results to the theory? For this you need renormalization and a limit.

How do you define scattering in a compact space conceptually? Real physics is always based on the asymptotic concept, and the experiments are approximate realizations of the concept. Without considering the asymptotics the whole conceptual clarity gained by quantum physics is lost. One has no longer well-defined asymptotic states that define what a particle is, nor has one sensible concepts of scattering angle and transition probabilities; everything becomes murky.

But since you consider this to be conceptually clean I guess I have nothing more to say.
Well, the opposite is true in practice. Putting your fields in a finite volume with periodic boundary conditions makes the definition of free particles way easier in this "model universe".

Take only the apparently simple task of evaluating the propagator of a free field with mass ##m_1## in the interaction picture, where the free Lagrangian is the one for a field with mass ##m_2## and the interaction Lagrangian takes care of the mass shift, ##\mathcal{L}_I=-(m_1^2-m_2^2) \phi^2/2##. In the finite spacetime with periodic boundary conditions, it's a no-brainer. In infinite Minkowski space it's a little bit more complicated. Another way out is to take the propagators and self-energies as limits of appropriately regularized complex functions in the sense of distributions. Here the question is what the appropriate regularization is, and the answer in my opinion is to use the causality structure of the various Green's functions with their appropriate "##\mathrm{i} \epsilon## prescriptions" to tackle this problem directly in the infinite-four-volume limit, but indeed the approach with the finite spacetime with periodic boundary conditions is much simpler.
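
For reference, the formal result of that exercise (a standard textbook manipulation, sketched here rather than quoted from the post): each insertion of the mass-shift vertex ##-\mathrm{i}(m_1^2-m_2^2)## combines with an adjacent free propagator ##\mathrm{i}/(p^2-m_2^2+\mathrm{i}\epsilon)## into the ratio below, and the geometric series reproduces the propagator of mass ##m_1##,
$$\frac{\mathrm{i}}{p^2-m_2^2+\mathrm{i}\epsilon}\sum_{n=0}^{\infty}\left(\frac{m_1^2-m_2^2}{p^2-m_2^2+\mathrm{i}\epsilon}\right)^{n} = \frac{\mathrm{i}}{p^2-m_1^2+\mathrm{i}\epsilon}.$$
The subtleties discussed above concern how this series and the associated momentum sums or integrals are to be defined (finite volume with discrete momenta versus distributional limits with ##\mathrm{i}\epsilon## prescriptions), not the formal algebra itself.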

Another apparently simple example is the Bose gas of non-interacting particles. Put it in a box, and it's no trouble at all. Everything is smooth and finite, and it's clear what happens in the zero-temperature limit. The infinite-volume limit is tricky, as everybody knows who has thought about this calculation carefully. In real life we always have finite volumes, or cold quantum gases in traps of many kinds, where you also have effectively a finite volume. Thermodynamics is also defined in finite volumes. So again, the infinite-volume limit has to be understood physically as the regime where surface effects are negligible, and the limit itself as formal rather than physical.
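
A minimal numerical sketch of the finite-box statement (Python, not from the post; assumptions: non-relativistic ideal Bose gas in a cubic box of side ##L## with periodic boundary conditions, units ##\hbar=m=k_B=1##, chemical potential below the lowest single-particle level): the mode sum is manifestly finite and smooth, with no limit required.

```python
import numpy as np

def total_occupation(mu, T, L, n_max=20):
    """Sum of Bose-Einstein occupations over the discrete momenta k = 2*pi*n/L
    of a cubic periodic box (epsilon_k = k^2/2, units hbar = m = k_B = 1).
    Finite and smooth as long as mu < 0, i.e. below the lowest level; modes
    beyond n_max are exponentially suppressed for these parameters."""
    n = np.arange(-n_max, n_max + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    eps = 0.5 * (2.0 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)
    return float(np.sum(1.0 / np.expm1((eps - mu) / T)))

# Mean particle number in the box for a few chemical potentials.
for mu in (-1.0, -0.1, -0.01):
    print(mu, total_occupation(mu, T=1.0, L=10.0))
```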
 
  • #57
A. Neumaier said:
How do you conceptually relate the numerical results to the theory? For this you need renormalization and a limit.
No. I can have a well-defined theory which is defined as a lattice theory, with some hypothetical critical length, say, the Planck length. Granted, if the actual lattice computations used a much larger critical distance for the lattice, I would need some renormalization to connect the two lattice theories. But I would not need any limit.
A. Neumaier said:
How do you define scattering in a compact space conceptually?
I don't have to, except as an approximation for experiments, which are all done at finite distances.
A. Neumaier said:
Real physics is always based on the asymptotic concept, and the experiments are approximate realizations of the concept. Without considering the asymptotics the whole conceptual clarity gained by quantum physics is lost.
There is no clarity at all in such a limit; there is only some simplification of the computations, at the cost of a loss in clarity.
A. Neumaier said:
One has no longer well-defined asymptotic states that define what a particle is, nor has one sensible concepts of scattering angle and transition probabilities; everything becomes murky.
Nobody prevents you from doing approximations. But such approximations are in no way "sensible concepts". They are murky approximations.

vanhees71 said:
Lattice regularization is of course another route, used in lattice-QCD calculations, and here too you have to employ continuum extrapolations using scaling laws and other mathematical tricks to extract the numbers of the continuum theory beyond the perturbative approach.

Of course each of these approximation schemes has its limitations, but what the underlying theory is that we approximate with this practitioners' version of relativistic QFT is not known today (nor do we know whether such a theory really exists).
I agree with most of this, but with this part I have to disagree. The point is that we know such a theory really exists - namely, we know that the lattice theory for a sufficiently small lattice spacing exists and is well-defined. And such a lattice theory is a theory which, in its large distance limit, will be described by some continuous field theory. So, the lattice theory is well-defined, and so its large distance limit is as well-defined as possible for such a limit. Say, as well-defined as phonon theory is for condensed matter theory.
 
  • #58
Denis said:
, I would need some renormalization to connect the two lattice theories. But I would not need any limit.
So you now agree that you need renormalization. Thus you agree that
A. Neumaier said:
The cure is the renormalization program.
But in addition you need to be sure that your lattice at Planck length agrees with conventional (Poincare invariant, continuum) quantum field theory - otherwise you have no reason to believe that your lattice at Planck length reproduces (for QED, say) the experimental results whose most accurate predictions are at present only available from conventional (Poincare invariant, continuum) quantum field theory. Without that you have nothing in hand except a hypothesis. But agreement with the latter is postulating the limit without any conceptual guarantees that this limit exists. Or, without the limit, you'd need to do the same hard analysis that currently is too hard for handling the continuum case - since from a mathematical point of view, extrapolation to the Planck scale and getting things reasonably bounded there is essentially as difficult as doing it for any scale (and hence in the limit).
Denis said:
And such a lattice theory is a theory which, in its large distance limit, will be described by some continuous field theory.
This is not known but just a belief, and in the absence of a belief in the existence of the limiting theory it is pure wishful thinking. To prove your belief correct you'd have to construct the continuum theory, and then there is no longer a point in having the lattice, except as a scaffolding for the construction. But the construction of the continuum theory from the lattice seems to be a dead end; no significant progress has been made in the last 15 years.
vanhees71 said:
You (try to) define it as a Poincare covariant theory in Minkowski ##\mathbb{R}^4##, but that fails for all physically relevant models, and it's likely that at this level of rigor it's doomed to fail for fundamental reasons related with Haag's theorem and all that.
No, it is completely unrelated to Haag's theorem. The latter is also valid in 1- and 2-dimensional space (plus time), but there it has not been an obstacle to constructing interacting Poincare invariant QFTs. The difficulty in 3-dimensional space is not Haag's theorem but the difficulty of establishing (or disproving) the necessary global estimates for the approximation errors.
 
  • #59
A. Neumaier said:
So you now agree that you need renormalization. Thus you agree that
We need a quite unproblematic variant of renormalization - between two conceptually well-defined finite theories.
A. Neumaier said:
But in addition you need to be sure that your lattice at Planck length agrees with conventional (Poincare invariant, continuum) quantum field theory - otherwise you have no reason to believe that your lattice at Planck length reproduces (for QED, say) the experimental results whose most accurate predictions are at present only available from conventional (Poincare invariant, continuum) quantum field theory.
It has to agree with it only in the large distance approximation.
A. Neumaier said:
But agreement with the latter is postulating the limit without any conceptual guarantees that this limit exists.
No. All I need is to compute the large distance approximation of my well-defined (guaranteed to exist) lattice theory.
A. Neumaier said:
Or, without the limit, you'd need to do the same hard analysis that currently is too hard for handling the continuum case - since from a mathematical point of view, extrapolation to the Planck scale and getting things reasonably bounded there is essentially as difficult as doing it for any scale (and hence in the limit).
You may have numerical problems doing the renormalization - that is, connecting the free fundamental parameters of the lattice theory with predictions of some large-distance observables that you can compare with real observations.
A. Neumaier said:
This is not known but just a belief, and in the absence of a belief in the existence of the limiting theory it is pure wishful thinking. To prove your belief correct you'd have to construct the continuum theory, and then there is no longer a point in having the lattice, except as a scaffolding for the construction.
I do not need any wishful thinking. The lattice theory is well-defined. If there are numerical problems in computing something out of it, that is not nice, but it is a numerical problem, not a conceptual one. To solve such a numerical problem there are a lot of approaches, one of them being the development of a large distance approximation.
The conceptual requirements for such approximations are much lower. They should allow one to make some computations and reach plausible results, as approximations, that's all. So, they do not even have to define some consistent theories. Say, there may be some nice classical large distance limit, but somehow the corresponding quantum large distance limit ends up in a numerical mess. So what? Or we can obtain some semiclassical approximation. The semiclassical approximation is known to be inconsistent as a theory. But as an approximation it is fine.
 
  • #60
Denis said:
It has to agree with it only in the large distance approximation.
but in an approximation much finer than what can be calculated numerically. Thus a theory is needed for how to match these widely differing scales.

Denis said:
they do not even have to define some consistent theories. Say, there may be some nice classical large distance limit, but somehow the corresponding quantum large distance limit ends up in a numerical mess. So what? Or we can obtain some semiclassical approximation. The semiclassical approximation is known to be inconsistent as a theory. But as an approximation it is fine.
If you call this mess conceptually clean, we are light years apart in our use of such terms.
 
  • #61
A. Neumaier said:
but in an approximation much finer than what can be calculated numerically. Thus a theory is needed for how to match these widely differing scales. If you call this mess conceptually clean, we are light years apart in our use of such terms.
No. We do not need a theory for this; all we need is computations, approximate computations. There is no need for different theories; one well-defined theory is sufficient, and for this well-defined theory a lattice theory with periodic boundary conditions is a good candidate.

You need a theory to match a theory with some lattice approximation of that theory?

What I call conceptually clean is lattice theory with periodic boundary conditions. This is a well-defined theory, everything finite. This characterization of the lattice theory as conceptually clean does not depend on the lattice spacing and the size. Because "conceptually clean" is about concepts, not about our ability to compute something.

Approximate computations may be messy; in fact, they are always messy.

What makes the difference is whether there is a well-defined theory which one attempts to approximate - which is the case if that theory is a lattice theory - or whether one attempts to approximate something which is not even well-defined - which is the case in continuous Lorentz-covariant field theory, even in the nice renormalizable case.
 
  • #62
Denis said:
What I call conceptually clean is lattice theory with periodic boundary conditions. This is a well-defined theory, everything finite.
But in this conceptually clean theory everything of interest has been cleaned away.

There is neither a notion of bound states nor a notion of scattering matrix, nor a notion of resonances, nor a notion of particles, nor a notion of angular momentum, nor a notion of canonical commutation relations.

Nothing is left of all the stuff needed to make conceptual sense of experimental data.
 
Last edited:
