Haag's Theorem: Importance & Implications in QFT

In summary, Haag's theorem states that the interaction picture does not exist in QFT: the free and interacting field representations are unitarily inequivalent, so the interaction-picture S operator, naively assumed to be unitary, cannot actually be defined as a unitary operator on the free-field Hilbert space. The theorem matters most for rigorous QFT, where the aim is to construct Lorentz-invariant theories that exist at all energies. In 3+1 dimensions, however, most QFTs are believed not to exist at all unless they are asymptotically free or asymptotically safe, which makes constructing a non-asymptotically-free theory difficult. The theorem also has implications for defining the Hamiltonian as a self-adjoint operator.
  • #36
Demystifier said:
In particular, at the top of page 180 the author writes:
"This is an ultraviolet "regularization" in the usual terminology. It should be stressed, however, that here this is a consequence of the causal distribution splitting and not an ad hoc recipe".
Therefore, I do not think it is correct to say that there is no regularization in the Epstein-Glaser approach. It's only that the regularization is mathematically better justified.

More technically, the Epstein-Glaser approach starts from the observation that time ordering of field operators introduces step functions ##\theta(t-t')##, which are ill-defined at ##t = t'##. The theta functions are therefore replaced by certain better-defined regularized functions, which avoid the problematic UV divergences.

Sorry, I should have explained this better. There is still a "regularisation" of sorts, but no physical cutoff.
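To be concrete about where those step functions appear, recall the standard textbook expression for the time-ordered two-point function of a scalar field,

$$\langle 0|T\{\phi(x)\phi(y)\}|0\rangle \;=\; \theta(x^0-y^0)\,\langle 0|\phi(x)\phi(y)|0\rangle \;+\; \theta(y^0-x^0)\,\langle 0|\phi(y)\phi(x)|0\rangle .$$

The product of the discontinuous ##\theta## with the singular Wightman two-point distribution is exactly what is a priori ill-defined at ##x^0 = y^0##.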
That splitting of the theta functions is the original form of the Epstein-Glaser approach. If you are interested, the modern form of the approach is as follows.

Let's say the propagator of a scalar field theory is ##D(x-y)##; then the second-order bubble diagram in ##\phi^4## theory in position space is:

##\int{D(x_1 - x_3)D(x_2 - x_3)D^{2}(x_3 - x_4)D(x_4 - x_5)D(x_4 - x_6)d^{4}x_{3}d^{4}x_{4}}##

Of course ##S(x_1,x_2,x_3,x_4,x_5,x_6) = D(x_1 - x_3)D(x_2 - x_3)D^{2}(x_3 - x_4)D(x_4 - x_5)D(x_4 - x_6)## is not a well-defined distribution on ##\mathbb{R}^{24}##, in the sense that there are some test functions that when integrated against it have a divergent result.

So you restrict the space of test functions from ##\mathcal{D}(\mathbb{R}^{24})## to some subspace ##\mathcal{A}## on which ##S(x_1,x_2,x_3,x_4,x_5,x_6)## is a sensible linear functional, typically the space of test functions which vanish on the ##x_i = x_j## hyperplanes. This is the regularisation in the Epstein-Glaser approach, but hopefully you can see why it's not really a physical cutoff. I don't know what it would correspond to physically.

You then prove that there is essentially a unique distribution ##\tilde{S}## defined on all test functions in ##\mathcal{D}(\mathbb{R}^{24})## which:
1. Agrees with ##S## on ##\mathcal{A}##.
2. Obeys relativity and causality.

Essentially ##S## has extensions thanks to the Hahn-Banach theorem to all of ##\mathcal{D}(\mathbb{R}^{24})##, but only one (up to a constant) obeys relativity and causality.
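A toy model of the same phenomenon: ##1/|x|## in one dimension is a perfectly good linear functional on test functions vanishing at the origin, and it can be extended to all of ##\mathcal{D}(\mathbb{R})##, for instance by

$$\langle u, f\rangle \;=\; \int_{|x|\le 1}\frac{f(x)-f(0)}{|x|}\,dx \;+\; \int_{|x|> 1}\frac{f(x)}{|x|}\,dx ,$$

and any two such extensions differ only by a multiple of ##\delta(x)##. In the field-theory case the analogue of that ##c\,\delta(x)## ambiguity is the finite renormalization freedom, and it is the extra physical requirements that pin it down.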

Hence we have renormalization via Hahn-Banach + Relativity.

However, really the only way to work with a QFT without any regulator at all is to know in advance which Hilbert space it is defined on. On that Hilbert space there will be no divergences. However, we aren't really able to do this at the moment, except with Algebraic QFT, which dispenses with a fixed Hilbert space but at the cost of being so general that you can't work with a specific QFT.
 
  • #37
Another, much more physical, way is not to regularize at all, but to take the Feynman rules to provide the integrands of the divergent integrals and then directly subtract the divergences, choosing a renormalization scheme (and, in the case of anomalies, choosing which currents should stay conserved in the quantum version of the considered field theory). This is known as the BPHZ approach and works for the UV divergences.
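For example, the logarithmically divergent one-loop bubble contributing to the four-point function in ##\phi^4## theory reads, in Euclidean momentum space,

$$I(p) \;=\; \int \frac{d^4k}{(2\pi)^4}\,\frac{1}{(k^2+m^2)\big((k+p)^2+m^2\big)},$$

and the BPHZ prescription simply subtracts the Taylor term of the integrand at ##p=0## before integrating,

$$I_R(p) \;=\; \int \frac{d^4k}{(2\pi)^4}\left[\frac{1}{(k^2+m^2)\big((k+p)^2+m^2\big)}-\frac{1}{(k^2+m^2)^2}\right],$$

which is a convergent integral; the subtracted constant is then fixed by the chosen renormalization scheme.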

You still have to regularize the IR divergences somehow, which occur when massless particles are present (as IR and collinear divergences of the loop integrals) and then do the appropriate resummations of diagrams (Bloch-Nordsieck/Kinoshita-Lee-Nauenberg).

While the UV divergences are a "mathematical disease", i.e., they stem from multiplying distributions, leading to problems with the definition of the products, the IR divergences are a "physical disease". The Epstein-Glaser approach cures the UV problem in a mathematical way, which is somewhat complicated but very reassuring in the sense that it shows that the usual renormalization theory provides the unique solution of the problem, obeying all physical constraints on the S matrix (in a perturbative sense).

The IR divergences come into the game because at the very beginning we evaluate unphysical S-matrix elements by using the wrong asymptotic free states. E.g., it doesn't make sense to look at elastic electron-electron scattering in the sense of just two free electrons in the initial and the final state, because any accelerated charge radiates, and thus you always have soft photons around the electrons. So the correct asymptotic free states of interacting electrons are always states accompanied by an indefinite number of (soft) photons (coherent states).

This problem already occurs in non-relativistic quantum theory for the much simpler problem of Coulomb scattering. There too the plane waves are not the correct asymptotic states, but the so-called "Coulomb distorted waves". There it becomes clear that this problem is due to the long-range nature of the Coulomb potential, which vanishes at infinity only like ##1/r##.
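For the record, in the usual conventions the distorted waves have the asymptotic form

$$\psi(\mathbf{r}) \;\sim\; e^{i\left[kz+\eta\ln k(r-z)\right]} \;+\; f(\vartheta)\,\frac{e^{i\left[kr-\eta\ln 2kr\right]}}{r}, \qquad \eta=\frac{Z_1 Z_2 e^2 m}{\hbar^2 k},$$

i.e., even arbitrarily far from the scattering center both the "plane" wave and the outgoing spherical wave carry logarithmic phase distortions, which is precisely the footprint of the ##1/r## tail.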
 
  • #38
DarMM said:
This is the regularisation in the Epstein-Glaser approach, but hopefully you can see why it's not really a physical cutoff. I don't know what it would correspond to physically.

Essentially ##S## has extensions thanks to the Hahn-Banach theorem to all of ##\mathcal{D}(\mathbb{R}^{24})##, but only one (up to a constant) obeys relativity and causality.
Just for an analogy: Dimensional regularisation is also not really a physical cutoff and I don't know what it would correspond to physically. It is based on analytic continuation, which is more-or-less unique due to the (Cauchy or whoever) theorem.
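To make the analogy concrete with the standard one-loop example: the Euclidean integral

$$\int\frac{d^dk}{(2\pi)^d}\,\frac{1}{(k^2+m^2)^2} \;=\; \frac{\Gamma\!\left(2-\tfrac{d}{2}\right)}{(4\pi)^{d/2}}\,(m^2)^{\frac{d}{2}-2}$$

converges only for ##d<4##, but the right-hand side is an analytic function of ##d## with just a pole at ##d=4## (from ##\Gamma(\varepsilon)\sim 1/\varepsilon## with ##d=4-2\varepsilon##), and that analytic continuation is what dimensional regularization uses.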
 
  • #39
Does the EG method really avoid the UV problem?

For example, http://arxiv.org/abs/hep-th/0403246 says "In general the series does not converge (in any norm), but in a few cases it is at least Borel summable, cf. [45]."

Also http://arxiv.org/abs/0810.2173 says "One should note that no statements about the convergence of the full series eq. (3.2) can be made in general."
 
  • #40
Demystifier said:
Just for an analogy: Dimensional regularisation is also not really a physical cutoff and I don't know what it would correspond to physically. It is based on analytic continuation, which is more-or-less unique due to the (Cauchy or whoever) theorem.
That's true actually, I don't know what I was thinking! They also both share the weakness that there is no obvious non-perturbative version of either of them.

By the way, dimensional regularisation is essentially part of the theory of hyperfunctions, at least I find that a clearer way of understanding it. There's an old paper by Yasunori Fujii on this.
 
  • #41
atyy said:
Does the EG method really avoid the UV problem?

For example, http://arxiv.org/abs/hep-th/0403246 says "In general the series does not converge (in any norm), but in a few cases it is at least Borel summable, cf. [45]."

Also http://arxiv.org/abs/0810.2173 says "One should note that no statements about the convergence of the full series eq. (3.2) can be made in general."
That's not the UV problem. That's the problem of convergence of the perturbative series, which occurs even in quantum mechanics where there are no divergent integrals. Renormalisation methods like Epstein-Glaser, Dim Reg, hard cutoffs, etc. have nothing to say about this.
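The standard quantum-mechanical example is the quartic anharmonic oscillator ##H = \tfrac{1}{2}p^2 + \tfrac{1}{2}x^2 + g\,x^4##, whose ground-state energy has the Rayleigh-Schrödinger expansion

$$E_0(g) \;=\; \frac{1}{2} + \frac{3}{4}\,g - \frac{21}{8}\,g^2 + \dots ,$$

where the coefficients grow factorially (Bender and Wu), so the series has zero radius of convergence even though every single term is finite and no regularization was ever needed.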
 
  • #42
DarMM said:
That's not the UV problem. That's the problem of convergence of the perturbative series, which occurs even in quantum mechanics where there are no divergent integrals. Renormalisation methods like Epstein-Glaser, Dim Reg, hard cutoffs, etc. have nothing to say about this.

I guess the more standard term might be the problem of "UV completeness". At the non-rigorous level, QED and Einstein gravity are usually thought not to be UV complete: there is truly a cut-off energy above which the theories don't exist, and some other more complete theory has to be used. Usually only asymptotically free or asymptotically safe theories are considered to have a chance of being UV complete. So either QED and Einstein gravity are asymptotically safe (since they are not asymptotically free), or they are not UV complete. Does the Epstein-Glaser method (or other points of view from rigorous QFT) challenge this heuristic thinking?
 
  • #43
atyy said:
I guess the more standard term might be the problem of "UV completeness". At the non-rigorous level, QED and Einstein gravity are usually thought not to be UV complete: there is truly a cut-off energy above which the theories don't exist, and some other more complete theory has to be used. Usually only asymptotically free or asymptotically safe theories are considered to have a chance of being UV complete. So either QED and Einstein gravity are asymptotically safe (since they are not asymptotically free), or they are not UV complete. Does the Epstein-Glaser method (or other points of view from rigorous QFT) challenge this heuristic thinking?
A few points:

1. Epstein-Glaser is not really rigorous QFT. It's a way of handling divergences in perturbation theory that's more rigorous than some others, although it isn't really more rigorous than the precise version of dimensional regularization. In perturbing an interacting quantum field theory, all of these methods assume there is something to perturb, i.e. that the theory exists, a non-rigorous assumption. Rigorous QFT is really about proving that the theories do exist.
Even if the theory did exist, you'd still have to prove its correlation functions and scattering cross-sections were smooth, in order for a perturbative series to exist. All perturbative methods like Epstein-Glaser assume the QFT exists and has smooth scattering cross-sections.

2. UV completeness is a non-perturbative phenomenon, so perturbative methods like Epstein-Glaser can't say anything about it, unfortunately.

3. UV completeness is a separate issue from convergence of the series. UV completeness is related to the UV divergences, but both issues are unconnected with series summation discussed in your links.
 
  • #44
DarMM said:
A few points:

1. Epstein-Glaser is not really rigorous QFT. It's a way of handling divergences in perturbation theory that's more rigorous than some others, although it isn't really more rigorous than the precise version of dimensional regularization. In perturbing an interacting quantum field theory, all of these methods assume there is something to perturb, i.e. that the theory exists, a non-rigorous assumption. Rigorous QFT is really about proving that the theories do exist.
Even if the theory did exist, you'd still have to prove its correlation functions and scattering cross-sections were smooth, in order for a perturbative series to exist. All perturbative methods like Epstein-Glaser assume the QFT exists and has smooth scattering cross-sections.

2. UV completeness is a non-perturbative phenomenon, so perturbative methods like Epstein-Glaser can't say anything about it, unfortunately.

3. UV completeness is a separate issue from convergence of the series. UV completeness is related to the UV divergences, but both issues are unconnected with series summation discussed in your links.

Is rigorous existence of a theory equivalent to "UV completeness"?

Also, is there any way for the perturbative series to make sense if the theory doesn't exist? For example, Scharf's book seems to develop the perturbation theory for QED, which at the heuristic level is usually thought not to exist (in the sense of not being UV complete), since it is not asymptotically free and is suspected not to be asymptotically safe. Heuristically, the perturbation series is thought to make sense as a low energy effective theory. Does this have a counterpart in more rigorous views of perturbation theory, or do they require that what they are perturbing does indeed exist?
 
  • #45
atyy said:
Is rigorous existence of a theory equivalent to "UV completeness"?
Yes, basically. One way you could see it is that a field theory exists rigorously, if you can prove that it is its own UV completion.

atyy said:
Also, is there any way for the perturbative series to make sense if the theory doesn't exist?
The perturbative method of dealing with a QFT constructs a formal series in the interaction strength. This construction is well-defined even if the supposed QFT doesn't exist. If it does exist then (under assumptions) the formal perturbative series is equal to the Taylor expansion of the QFT. If the quantum field theory doesn't exist, then it's just a meaningless formal construction.

You'd have to read Connes and others, but the basic picture is that the perturbative series is just something you can build from a Lagrangian, a formal construction (technically it's a map to a certain complete symmetric algebra, but that's not important). Whether that series actually means anything depends on whether the theory described by the Lagrangian actually exists or not. However, you can always construct it; it is a mathematically well-defined operation, even if it is physically meaningless.
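A useful caricature is Euler's example: the formal series

$$\sum_{n=0}^{\infty}(-1)^n\,n!\,g^n$$

can always be written down term by term, but it converges for no ##g\neq 0##; it only "means" something because it happens to be the asymptotic (and Borel-resummable) expansion of the perfectly well-defined function ##\int_0^\infty \frac{e^{-t}}{1+gt}\,dt##. Whether a QFT perturbation series has an analogous function behind it is exactly the existence question.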
 
  • #46
DarMM said:
The perturbative method of dealing with a QFT constructs a formal series in the interaction strength. This construction is well-defined even if the supposed QFT doesn't exist. If it does exist then (under assumptions) the formal perturbative series is equal to the Taylor expansion of the QFT. If the quantum field theory doesn't exist, then it's just a meaningless formal construction.

You'd have to read Connes and others, but the basic picture is that the perturbative series is just something you can build from a Lagrangian, a formal construction (technically it's a map to a certain complete symmetric algebra, but that's not important). Whether that series actually means anything depends on whether the theory described by the Lagrangian actually exists or not. However, you can always construct it; it is a mathematically well-defined operation, even if it is physically meaningless.

Hmmm, this does seem different from the Wilsonian heuristic, in which the theory need not be UV complete in order for the perturbation series to be meaningful as a low energy effective theory.

If the formal series requires the rigorous existence of the quantum field theory in order to be physically meaningful, and given that experiments indicate that the perturbative series in QED seems physically meaningful, does this then suggest that QED may be rigorously constructed, possibly through asymptotic safety?
 
  • #47
atyy said:
Hmmm, this does seem different from the Wilsonian heuristic, in which the theory need not be UV complete in order for the perturbation series to be meaningful as a low energy effective theory.
Well, if you make a field theory effective with an explicit cutoff, then the theory is mathematically well-defined and the perturbative series is equal to the Taylor series of the theory. So the perturbative series does make sense in the presence of a cutoff.

atyy said:
If the formal series requires the rigorous existence of the quantum field theory in order to be physically meaningful, and given that experiments indicate that the perturbative series in QED seems physically meaningful, does this then suggest that QED may be rigorously constructed, possibly through asymptotic safety?
No. QED on its own and QED + Weak theory both give the same diagrams at low energy, so these results are not really a test of just QED. QED + Weak + Strong probably rigorously exists.

However I should say, there are some suggestions that QED may be constructable through asymptotic safety, but they're not very conclusive. I would suggest reading Montvay and Munster's book on lattice field theory, their chapter on gauge theories contains a lot of references about the continuum limit of QED.
 
  • #48
Actually atyy, please keep asking questions; the Wilsonian and rigorous points of view on QFT often seem to contradict each other, but that is simply because they come at it from completely different angles.

The easiest way I could explain it is that the Wilsonian viewpoint is concerned with the relations between various effective field theories. The main idea is the renormalization group flow, which maps an action at scale ##\Lambda## to the action at scale ##\Lambda^{'}## which has the same physics/produces the same expectation values.
In this framework there is the critical point in the space of actions, the action which corresponds to physics at ##\Lambda \rightarrow \infty##. The space of pure QED actions probably possesses no critical point except the free theory. However, if we extend the space of actions to include QED + Yang-Mills, then there are non-trivial critical points. We say that we have to "ultraviolet complete" QED, i.e. extend the space of actions.
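To put one formula on this: the flow is generated by the beta functions,

$$\Lambda\,\frac{d g_i(\Lambda)}{d\Lambda} \;=\; \beta_i\big(g(\Lambda)\big),$$

and a critical point in the sense above is a fixed point ##g^*## with ##\beta_i(g^*)=0## into which the couplings can flow as ##\Lambda\to\infty##; asymptotic freedom is the special case ##g^*=0##.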

The rigorous point of view dispenses with the cutoff actions and how they relate to each other; the physics at ##\Lambda\rightarrow\infty## isn't built up from a renormalization group flow. Instead we attempt to define the theory directly at ##\Lambda = \infty##, or in lattice terms ##a = 0##.

Really there is currently no proof that anything exists in the ##\Lambda\rightarrow\infty## limit. Rigorous QFT attempts to show that there actually is something in that limit. That there really is a continuum theory.
 
  • #49
DarMM, thanks a lot! The last two posts, and your comments throughout the thread, have been very helpful. (I tried clicking on "Thanks" for both your posts, but must have done something wrong, since I'm getting a message about "negative thanks".)
 
  • #50
DarMM said:
A QFT doesn't require a regulator, in the sense that it can exist mathematically without one. For practical calculations however one almost always needs a cutoff, since we currently aren't capable of directly solving a QFT without help from a free field theory, which is singular with respect to the interacting theory, and so we see infinities.

However, there is the Epstein-Glaser formalism, which requires no cutoffs (infrared or ultraviolet) and no subtraction of infinite quantities; instead it uses the Hahn-Banach theorem.

The Epstein-Glaser formalism has an intrinsic IR cutoff, since the ##x##-dependent coupling function must have compact support. One can take the IR limit ##g(x) \rightarrow 1## only for selected results.
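Explicitly, the Epstein-Glaser S-matrix is constructed as a formal power series in a test function ##g##,

$$S(g) \;=\; 1+\sum_{n=1}^{\infty}\frac{1}{n!}\int d^4x_1\cdots d^4x_n\; T_n(x_1,\dots,x_n)\,g(x_1)\cdots g(x_n),$$

with ##g\in\mathcal{D}(\mathbb{R}^4)## of compact support switching the interaction on and off; physical quantities then require the adiabatic limit ##g\to 1##, and that is where the infrared issues reappear.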
 
  • #51
I have a proposed resolution to Haag's Theorem, to appear shortly in International Journal of Quantum Foundations (ijqf.org). Preprint version is here:
http://arxiv.org/abs/1502.03814
 
  • #52
You tackle this ever-intriguing subject through a highly controversial theoretical proposition. It is somewhat reminiscent of E. T. Jaynes' refutation of standard (mathematically non-rigorous) QED through his (in)famous 'neoclassical radiation theory'. I wonder what your reviewers said, but there's a password behind that report.
 
  • #53
Well, note that Wheeler thought this theoretical approach (i.e. direct action) was a perfectly fine way to go, as reflected by his 2003 comments that I quoted in the paper:

"[Wheeler-Feynman theory] swept the electromagnetic field from between the charged particles and replaced it with “half-retarded, half advanced direct interaction” between particle and particle. It was the high point of this work to show that the standard and well-tested force of reaction of radiation on an accelerated charge is accounted for as the sum of the direct actions on that charge by all the charges of any distant complete absorber. Such a formulation enforces global physical laws, and results in a quantitatively correct description of radiative phenomena, without assigning stress-energy to the electromagnetic field. ([9], p. 427)".

And he comments further,

"One is reminded of an argument against quantum theory advanced by Einstein, Podolsky and Rosen in a well-known paper (1931) …The implicit nonlocality of [the EPR entanglement experiment], they argue, is at odds with the idea that physics should be fundamentally local…As has been evidenced by many experimental tests, the view of nature espoused by Einstein et al is not quite correct. Various experiments have shown that distant measurements can affect local phenomena. That is, nature is not described by physical laws that are entirely local. Effect from distant objects can influence local physics…this example from quantum theory serves to illustrate that it may be useful to expand our notions regarding what types of physical laws are ‘allowed’. "([9], pp 426-7; emphasis in original text)

The reviewer wanted clarification of a number of points, but they were open-minded about the direct action approach, as I hope you will be. And the fact that it was accepted for publication means that it was technically sound/correct according to the reviewer. Let's not let the specter of 'controversy' prevent us from considering a legitimate idea, especially when it's suggested by John Wheeler who was one of the originators of the approach and who still thought it was a good idea decades later. Boltzmann's ideas about atoms were initially "controversial" too.
 
  • #54
rkastner said:
I have a proposed resolution to Haag's Theorem, to appear shortly in International Journal of Quantum Foundations (ijqf.org). Preprint version is here:
http://arxiv.org/abs/1502.03814

But Haag's theorem presents no problem for formulating QFT rigorously. There are already rigorous QFTs in 2+1 spacetime dimensions.
 
  • #55
Well we need relativistic quantum theory to work in 3+1 spacetime.
That has not been resolved in the QFT picture. I address that in my paper.
 
  • #56
rkastner said:
Well we need relativistic quantum theory to work in 3+1 spacetime.
That has not been resolved in the QFT picture. I address that in my paper.

In your proposal, does one have to use the Transactional Interpretation, or does Copenhagen still work?
 
  • #57
I resolve the problems presented by Haag's theorem by using the direct-action picture of fields. This is what the Transactional Interpretation is based on, so yes, TI involves a formulation of quantum theory, including relativistic quantum theory, that is immune to the consequences of Haag's theorem.

The Copenhagen Interpretation (CI) is not something that can deal with Haag's theorem, which reveals logical inconsistencies in the formulation of quantum field theory.
CI is basically an instrumentalist interpretation of quantum theory--i.e. it views the theory as just an instrument for making predictions about empirical phenomena rather than a theory that tells us about reality itself. This doesn't address the issues raised by Haag's theorem, so it can't be used to resolve them.
 
  • #58
Rkastner, despite Haag's theorem, the standard regularized and renormalized QFT leads to non-trivial finite measurable predictions which are in excellent agreement with experiments. Does your theory lead to the same measurable predictions as the standard theory? (Let me guess: you haven't checked this out yet.)

In addition, let me note that I have found an error in your paper. In Sec. 1 you require that the vacuum should be annihilated by the Hamiltonian (either the free Hamiltonian or the interacting one). But this is wrong. The vacuum is not defined as a state with zero energy. The vacuum is defined as the state with lowest energy (ground state), but lowest energy does not need to be zero. For example, the ground state of a single quantum harmonic oscillator has energy larger than zero.
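In formulas: for ##H=\omega\left(a^{\dagger}a+\tfrac{1}{2}\right)## the ground state obeys ##H|0\rangle=\tfrac{\omega}{2}|0\rangle\neq 0##; only the normal-ordered Hamiltonian ##:\!H\!:\,=\omega\, a^{\dagger}a## annihilates it. So "lowest energy" and "zero energy" coincide only for that particular, normal-ordered choice of Hamiltonian.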
 
  • #59
rkastner said:
CI is basically an instrumentalist interpretation of quantum theory--i.e. it views the theory as just an instrument for making predictions about empirical phenomena rather than a theory that tells us about reality itself. This doesn't address the issues raised by Haag's theorem, so it can't be used to resolve them.
I disagree with that too. CI, as an instrumentalist interpretation, does address issues raised by Haag's theorem. Haag's theorem is a consequence of the infinite number of degrees of freedom in QFT, especially the IR ones. CI has developed practical instrumentalist methods of dealing with such systems, namely the methods of regularization and renormalization. In this way, from a practical instrumentalist point of view, the problems raised by Haag's theorem are avoided.
 
  • #60
Demystifier said:
Rkastner, despite Haag's theorem, the standard regularized and renormalized QFT leads to non-trivial finite measurable predictions which are in excellent agreement with experiments. Does your theory lead to the same measurable predictions as the standard theory? (Let me guess: you haven't checked this out yet.)

In addition, let me note that I have found an error in your paper. In Sec. 1 you require that the vacuum should be annihilated by the Hamiltonian (either the free Hamiltonian or the interacting one). But this is wrong. The vacuum is not defined as a state with zero energy. The vacuum is defined as the state with lowest energy (ground state), but lowest energy does not need to be zero. For example, the ground state of a single quantum harmonic oscillator has energy larger than zero.

Concerning the alleged error, I think you misunderstand. The term "vacuum" in this context is the state with zero quanta, ##|0\rangle##, not zero energy. The ground state is indeed annihilated by the Hamiltonian defined as proportional to the number operator ##a^{\dagger}a##, since the eigenvalue of the number operator for ##|0\rangle## is zero. (See Wiki, http://en.wikipedia.org/wiki/Canonical_quantization for details on this definition of the Hamiltonian). Here's a relevant passage from Earman and Fraser (2005):

"...And suppose that the vacuum state is the ground state in that it is an eigenstate of the Hamiltonian with eigenvalue 0.."

And concerning your first question, it has long been known that the direct action theory is empirically equivalent to QFT given the appropriate boundary conditions. This fact is discussed at some length in my paper.
 
  • #61
Demystifier said:
I disagree with that too. CI, as an instrumentalist interpretation, does address issues raised by Haag's theorem. Haag's theorem is a consequence of the infinite number of degrees of freedom in QFT, especially the IR ones. CI has developed practical instrumentalist methods of dealing with such systems, namely the methods of regularization and renormalization. In this way, from a practical instrumentalist point of view, the problems raised by Haag's theorem are avoided.

Good point. If one thinks that the quantum world doesn't exist, then I suppose it doesn't matter if a theorem shows that the interaction picture of fields doesn't exist. :) Evasion of fundamental questions about reality is a good pragmatic tactic for getting on with one's life I suppose. But in my view it is inconsistent with the spirit of science. And I argue in both my books that this sort of evasion is wholly unnecessary -- indeed it is based on specific metaphysical and epistemological assumptions which are not necessarily true at all. Just as Kant's view that Euclidean spacetime had to be a basic feature of knowable reality was shown to be wrong.
 
  • #62
rkastner said:
And concerning your first question, it has long been known that the direct action theory is empirically equivalent to QFT given the appropriate boundary conditions. This fact is discussed at some length in my paper.

At some point the direct action theory, if successful, should probably diverge from standard QED. If standard QED is not asymptotically safe, then it will blow up at high energies (Landau pole) and fail to make predictions for those experiments. Another way to see it is that, rigorously, QED is so far only defined as a lattice theory in finite volume and at finite lattice spacing. If the direct action theory is successful, then it has to work in infinite volume and at arbitrarily high energy, so it should diverge from standard QED. Does the direct action theory do this?
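(The blow-up can be seen already at one loop: with only the electron in the loop the running coupling satisfies

$$\frac{1}{\alpha(Q)} \;=\; \frac{1}{\alpha(\mu)}-\frac{2}{3\pi}\ln\frac{Q}{\mu},$$

so, taken at face value, ##\alpha(Q)## diverges at the finite scale ##Q=\mu\, e^{3\pi/2\alpha(\mu)}##, the Landau pole, unless higher orders or new physics change the picture.)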
 
  • #63
These sorts of infinities in QFT are artifacts of the need to renormalize, which is another aspect of the consistency problems inherent in QFT. They only appear because of the assumed infinite degrees of freedom of the putative mediating fields, which are denied in the direct action picture. The direct action theory does not require renormalization, so it's immune to these problems. It is empirically equivalent to QFT to the extent that the latter makes non-divergent empirical predictions. (See p. 7 of my preprint which discusses the Rohrlich theory). Caveat: there may be a slight deviation from QED in exotic systems such as heavy He-like ions which I've briefly explored in qualitative terms (see http://arxiv.org/abs/1312.4007)
 
  • #64
rkastner said:
These sorts of infinities in QFT are artifacts of the need to renormalize, which is another aspect of the consistency problems inherent in QFT. They only appear because of the assumed infinite degrees of freedom of the putative mediating fields, which are denied in the direct action picture. The direct action theory does not require renormalization, so it's immune to these problems. It is empirically equivalent to QFT to the extent that the latter makes non-divergent empirical predictions. (See p. 7 of my preprint which discusses the Rohrlich theory). Caveat: there may be a slight deviation from QED in exotic systems such as heavy He-like ions which I've briefly explored in qualitative terms (see http://arxiv.org/abs/1312.4007)

So direct action theory holds in infinite volume and at arbitrary energy? In other words, direct action theory is claimed to be a UV completion of standard QED?
 
  • #65
I have not seen it stated in those terms, but there is no self-energy divergence in the direct action theory. Rohrlich makes this point on p. 351 of this paper: http://philpapers.org/rec/ROHTED (It's a chapter in an edited collection by J. Mehra. You may be able to find the book excerpt online.)
 
  • #66
rkastner said:
I have not seen it stated in those terms, but there is no self-energy divergence in the direct action theory. Rohrlich makes this point on p. 351 of this paper: http://philpapers.org/rec/ROHTED (It's a chapter in an edited collection by J. Mehra. You may be able to find the book excerpt online.)

I think the self-energy divergence usually means a high energy cut-off is needed, so that doesn't seem to address the infinite volume requirement. Is the direct action theory also claimed to formulate QED in infinite volume?
 
  • #67
I am not aware of any problem facing the direct-action theory for the case of infinite volume. Let me know if you see anything that might suggest otherwise.
And thanks for your interest in this topic.
 
  • #68
rkastner said:
Concerning the alleged error, I think you misunderstand. The term "vacuum" in this context is the state with zero quanta, ##|0\rangle##, not zero energy. The ground state is indeed annihilated by the Hamiltonian defined as proportional to the number operator ##a^{\dagger}a##, since the eigenvalue of the number operator for ##|0\rangle## is zero. (See Wiki, http://en.wikipedia.org/wiki/Canonical_quantization for details on this definition of the Hamiltonian). Here's a relevant passage from Earman and Fraser (2005):

"...And suppose that the vacuum state is the ground state in that it is an eigenstate of the Hamiltonian with eigenvalue 0.."

And concerning your first question, it has long been known that the direct action theory is empirically equivalent to QFT given the appropriate boundary conditions. This fact is discussed at some length in my paper.
Thanks for the reply, now I am becoming more interested. :smile:

First something trivial. I have noted a typo in your Ref. [12]; the volume should be 4, not 6.

Now some non-trivial questions:
1. If the two formulations are empirically equivalent, then why is the Wheeler-Feynman (WF) one much less popular?
2. In particular, why did Feynman himself abandon it?
3. Is perhaps WF more complicated in practical applications?
4. Can WF be generalized to Yang-Mills theory?
5. Do infinities appear in a similar way as in standard formulation, and can they be cured by an appropriate renormalization theory?
6. How would you comment on the following statement at Wikipedia?
http://en.wikipedia.org/wiki/Wheeler–Feynman_absorber_theory
"Finally, the main drawback of the theory turned out to be the result that particles are not self-interacting. Indeed, as demonstrated by Hans Bethe, the Lamb shift necessitated a self-energy term to be explained. Feynman and Bethe had an intense discussion over that issue and eventually Feynman himself stated that self-interaction is needed to correctly account for this effect."
 
  • #69
rkastner said:
I am not aware of any problem facing the direct-action theory for the case of infinite volume. Let me know if you see anything that might suggest otherwise.
And thanks for your interest in this topic.

I took a quick look at the Davies papers in J Phys A, and he mentions that the system has to be in a light-tight box. At least naively, that seems to require finite volume.
 
  • #70
Demystifier said:
Thanks for the reply, now I am becoming more interested. :smile:

Now some non-trivial questions:
1. If the two formulations are empirically equivalent, then why is the Wheeler-Feynman (WF) one much less popular?
2. In particular, why did Feynman himself abandon it?
3. Is perhaps WF more complicated in practical applications?
4. Can WF be generalized to Yang-Mills theory?
5. Do infinities appear in a similar way as in standard formulation, and can they be cured by an appropriate renormalization theory?
6. How would you comment on the following statement at Wikipedia?
http://en.wikipedia.org/wiki/Wheeler–Feynman_absorber_theory
"Finally, the main drawback of the theory turned out to be the result that particles are not self-interacting. Indeed, as demonstrated by Hans Bethe, the Lamb shift necessitated a self-energy term to be explained. Feynman and Bethe had an intense discussion over that issue and eventually Feynman himself stated that self-interaction is needed to correctly account for this effect."

Thanks, I'll check into the typo.
1. I can't find a good reason for the general lack of interest in the direct-action picture, especially given that Wheeler was still advocating it in 2005, as noted in my paper. I'm trying to remedy that with my current work.
2. This is related to your #6--see below.
3. It does seem easier to use quantized fields as stand-ins for unknown charge configurations, so probably yes, although Wheeler didn't seem to think so, as I note in my paper on Haag's theorem.
4. I don't see why not. Worth exploring.
5. No, since infinities result from the assumption that there are Fock space states for all interactions, which is denied in direct-action picture.
6. Apparently Feynman was mistaken. You don't need to omit all self-interaction to use the direct action picture successfully. Davies showed how to do this in QED (Davies 1971 and 1972 papers). I think Feynman was overly committed to the assumption of zero self-interaction when that is not necessary. Perhaps he didn't notice that once you include quantum indistinguishability of currents, there is no real fact of the matter as to what constitutes self-interaction and what doesn't. So of course at the quantum level you are naturally going to have to allow for some self-interaction, which is just the right kind to explain such things as the Lamb shift. As I've noted, Wheeler in 2005 saw the direct-action picture as perfectly viable. So apparently he disagreed with Feynman's abandonment of it.
 
