How to derive Born's rule for arbitrary observables from Bohmian mechanics?

In summary, the conversation discusses the derivation of Born's rule for arbitrary observables from Bohmian mechanics. Three possible sources for this derivation are mentioned, including a paper by the author of the conversation. Further discussion includes the mathematical equivalence of the three derivations, the assumption of tensor product structure, and the role of unitarity in the derivation. The conversation also touches on the reduction of perceptibles to macroscopic positions and the measurement of angular momentum in practice.
  • #141
A. Neumaier said:
Well, how to get the anomalous magnetic moment of the electron from a lattice calculation to the known accuracy? You cannot get even close with present lattice technology!
I do not care about getting high accuracy first. Initially, I care about having a well-defined theory. Once one has a well-defined theory, one can start to improve the approximation methods. So, for theories with low interaction constants like 1/137 or so, it makes sense to look for approximation methods which make use of it, say, using some variant of a power series. Don't forget that this would be quite irrelevant for defining dBB trajectories - it is about methods to compute something well-defined in QT as well as dBB.
 
  • Like
Likes Demystifier
  • #142
A. Neumaier said:
Well, all Lorentz covariant QFT looks problematic to me from the Bohmian point of view, because nothing Bohmian survives renormalization.
It's good to know what really bothers you, so that we don't need to discuss all other technicalities that are not directly related to renormalization.

A. Neumaier said:
You cannot get even close with present lattice technology!
Present is the key word. If we had much much stronger computers which could handle lattices with a much much bigger number of vertices, then there would be little doubt that ##g-2## could be computed on the lattice with great accuracy.
 
Last edited:
  • #143
Elias1960 said:
I do not care about getting high accuracy first. Initially, I care about having a well-defined theory.
Standard renormalized QED at 6 loops is a perfectly well-defined covariant quantum field theory that gives excellent predictions. Its only defect is that it (extremely slightly) violates the axioms of Wightman. Since you discard Wightman's axioms as well, you have no reason left to consider QED as ill-defined. Thus you should care about standard QED.
Elias1960 said:
Once one has a well-defined theory, one can start to improve the approximation methods.
These are already well developed, to the point of giving results with 12 decimals of relative accuracy!
Elias1960 said:
So, for theories with low interaction constants like 1/137 or so, it makes sense to look for approximation methods which make use of it, say, using some variant of a power series. Don't forget that this would be quite irrelevant for defining dBB trajectories - it is about methods to compute something well-defined in QT as well as dBB.
There are lots of well-defined theories completely unrelated to experiment. They are completely irrelevant. To claim physical content for a theory you need to show that you can reproduce the experimental results!

Thus to make a Bohmian version of QED based on a lattice you need to spell out which precise lattice field theory (at which lattice spacing, with which interaction constants) you want to consider. For lack of computational evidence you would have to prove theoretically (not just say some handwaving words!) - which is probably impossible in the face of QED triviality and the Fermion doubling problem - that you can accurately approximate this lattice theory in some way that reproduces the standard low energy results of QED. Only then you have a substantiated claim.
 
  • Like
Likes weirdoguy and dextercioby
  • #144
Demystifier said:
Present is the key word. If we had much much stronger computers which could handle lattices with a much much bigger number of vertices,
Until this is the case (most likely never, since the computers would need more memory than the size of the universe allows) you only have a dream full of wishful thinking.
Demystifier said:
then there would be little doubt that ##g-2## could be computed on the lattice with great accuracy.
According to the studies on triviality, there is even less doubt that ##g-2## would come out to be zero to whatever great accuracy your imagined supersupercomputer will be able to muster.
 
  • #145
A. Neumaier said:
According to the studies on triviality, there is even less doubt that ##g-2## would come out to be zero to whatever great accuracy your imagined supersupercomputer will be able to muster.
We discussed that in another thread and in fact didn't agree on this.
 
  • #146
Demystifier said:
We discussed that in another thread and in fact didn't agree on this.
Well, you didn't demonstrate the truth of your conjecture, it is just a belief. Beliefs don't count in physics, thus there is at present no Bohmian version of QED making contact with experiment, only a hope.
 
  • Like
Likes weirdoguy
  • #147
A. Neumaier said:
Well, you didn't demonstrate the truth of your conjecture, it is just a belief. Beliefs don't count in physics, thus there is at present no Bohmian version of QED making contact with experiment, only a hope.
Fine, but the problem is not in Bohmian mechanics itself. Instead, the problem is in the lattice formulation of QED, irrespective of the interpretation (Copenhagen, Bohmian, thermal, or whatever). The standard practice is to work with a non-lattice type of regularization, which gives numbers that agree with experiments, but has its own mathematical problems because such non-lattice regularizations are not mathematically rigorous.
 
  • Like
Likes atyy
  • #148
Demystifier said:
Fine, but the problem is not in Bohmian mechanics itself. Instead, the problem is in the lattice formulation of QED, irrespective of the interpretation (Copenhagen, Bohmian, thermal, or whatever). The standard practice is to work with a non-lattice type of regularization, which gives numbers that agree with experiments, but has its own mathematical problems because such non-lattice regularizations are not mathematically rigorous.
Yes, and the reason is that QED is not defined on the lattice but on the continuum. It is to any fixed loop order Lorentz covariant and mathematically well-defined (in causal perturbation theory, which constructs everything, the S-matrix, the Hilbert space and the field operators). Already loop order 1 gives an excellent match with experiment, though for very high accuracy one needs orders up to six.
The level of rigor is the same as for lattice theories that break the covariance.

If you want to latticise QED for a subsequent Bohmian treatment you need to specify how to do it in a way that preserves its predictive properties at the same level of rigor. Simply speculating that it can be done is not enough.
Demystifier said:
That was not the deal. The deal was that you solve everything within standard quantum theory (including the relativistic covariant version of standard quantum theory; it's up to you whether you will use relativistic QM, relativistic QFT, or whatever you want) and make a relativistic covariant measurable prediction (e.g. some probability distribution of measurement outcomes). After you do all this (you can use existing results from the literature), I explain how the same measurable results can be obtained from the point of view of Bohmian mechanics.

If you complain that it's unfair because you must do the hard part while my part is easy, that's exactly my point.
All the hard work had already been done in 1948 and was rewarded in 1965 by a Nobel prize. The results of the hard work can be found in any textbook treating QED; many thousands of students learn it every year.
So there is no need for me to do any additional work.
Demystifier said:
I think you took this example because you don't need to do any work, while my job would be hard
I don't understand how you can call your job hard given that you said before that
Demystifier said:
Bohmian mechanics is easy, once one understands how standard quantum theory works.
How standard QED works is understood very well. If you don't like the anomalous magnetic moment, pick instead your preferred scattering amplitude.
 
Last edited:
  • #149
A. Neumaier said:
Standard renormalized QED at 6 loops is a perfectly well-defined covariant quantum field theory that gives excellent predictions.
Is it a theory at all? It is nothing but an approximation for a particular experiment, namely scattering of particles which start and end with free particles far away.
A. Neumaier said:
Its only defect is that it (extremely slightly) violates the axioms of Wightman. Since you discard Wightman's axioms as well, you have no reason left to consider QED as ill-defined. Thus you should care about standard QED.
No, it is not even a consistent theory. And I do not care about accuracy of an approximation of a not even well-defined theory, I care first about having a well-defined theory.
A. Neumaier said:
These are already well developed, to the point of giving results with 12 decimals of relative accuracy!
There are lots of well-defined theories completely unrelated to experiment. They are completely irrelevant. To claim physical content for a theory you need to show that you can reproduce the experimental results!
Once I have a well-defined theory, which I have if I use a lattice regularization, then I can start using your renormalized QED at 6 loops to compute approximations. So, no problem. Nobody forbids me to use such not-even-theories as approximations for particular situations like scattering.
A. Neumaier said:
Thus to make a Bohmian version of QED based on a lattice you need to spell out which precise lattice field theory (at which lattice spacing, with which interaction constants) you want to consider.
I can consider a particular lattice theory in general, using unspecified constants. Who was it who referenced that paper where lattice computations were used to compute the renormalization down to the place where the Landau pole should appear, but it did not appear on the lattice? So, computing the renormalization is possible and has already been done, and in this case all lattice approximations are well-defined theories. All one has to do is to compute with this program the resulting large distance limit of the constants and to compare them with observation.
A. Neumaier said:
For lack of computational evidence you would have to prove theoretically (not just say some handwaving words!) - which is probably impossible in the face of QED triviality and the Fermion doubling problem - that you can accurately approximate this lattice theory in some way that reproduces the standard low energy results of QED. Only then you have a substantiated claim.
QED triviality is not a problem of lattice theory, it is a problem which appears only in the limit of the lattice distance going to zero. Which I propose explicitly not to do. To go with the lattice distance below Planck length simply makes no sense at all. Don't forget that a lattice theory remains well-defined if the interaction constant is greater than 1, while you will fail completely with your Feynman diagrams.

Then, fermion doubling is first of all a problem of accuracy. It appears if you approximate the first derivative at a node ##n## with ##\frac{f(n+1)-f(n-1)}{2h}##, but not if you use the less accurate ##\frac{f(n+1)-f(n)}{h}##. Just to clarify that it is not unsolvable in principle. But, ok, even if we prefer higher accuracy, we can get rid of unnecessary doublers. In this case, we can use staggered fermions, which reduce the doublers to four. Then, to regularize the theory, we need discretization in space only, not in time. If one uses the original Dirac equation with the ##\alpha_i,\beta##, this gives a staggered evolution equation on a 3D lattice. This reduces the doubling problem by another factor of two, thus gives two Dirac fermions. Completely sufficient for the SM, where fermions appear only in electroweak doublets.
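A minimal numerical sketch of this point (my own illustration, not from the thread or the cited paper): on a 1D lattice the Fourier symbol of the symmetric difference vanishes a second time at the edge of the Brillouin zone, ##k = \pi/h## - that second zero is the doubler - while the one-sided difference has no such zero, at the price of being only first-order accurate.

```python
# Sketch (illustrative only): compare the Fourier symbols of the two lattice
# derivatives discussed above, acting on plane waves exp(i k x) with spacing h.
import numpy as np

h = 1.0
k = np.linspace(0.0, np.pi / h, 7)               # momenta up to the Brillouin-zone edge

central = np.sin(k * h) / h                      # symbol of (f(n+1) - f(n-1)) / (2h)
forward = np.abs(np.exp(1j * k * h) - 1.0) / h   # |symbol| of (f(n+1) - f(n)) / h

for ki, c, f in zip(k, central, forward):
    print(f"k = {ki:5.3f}:  central = {c:6.3f}   forward = {f:6.3f}")

# The central symbol returns to zero at k = pi/h, mimicking a second low-energy
# fermion (the doubler); the forward symbol stays of order 2/h there.
```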

For details, with the explicit 3D lattice, see arxiv:0908.0591
 
  • Like
Likes atyy
  • #150
Elias1960 said:
Is it a theory at all? It is nothing but an approximation for a particular experiment, namely scattering of particles which start and end with free particles far away.

No, it is not even a consistent theory. And I do not care about accuracy of an approximation of a not even well-defined theory, I care first about having a well-defined theory.
Of course QED at a fixed number of loops is a theory, an established part of theoretical physics. It is mathematically as well-defined and as consistent as lattice field theory, and gives far superior results.
Scharf's book on QED (did you ever try to read it?) defines everything (not only the S-matrix but the Hilbert space and the field operators) in completely rigorous terms.

Its only defect is that we know nothing rigorous about the limit when the number of loops grows indefinitely, but this is no worse than that we know nothing rigorous about lattice QFTs when the lattice spacing goes to zero.
Elias1960 said:
I use a lattice regularization, then I can start using your renormalized QED at 6 loops to compute approximations. So, no problem. Nobody forbids me to use such not-even-theories as approximations for particular situations like scattering.
The problem is that you need to show that renormalized QED at 6 loops is actually a valid approximation - which is dubious in the light of triviality results!
Elias1960 said:
QED triviality is not a problem of lattice theory, it is a problem which appears only in the limit of the lattice distance going to zero.
QED triviality is a problem of relating the lattice QED to the continuum QED. Lacking this relation means lacking support for the claim that one approximates the other at the physical values of the parameters defining the specific theory.
Elias1960 said:
For details, with the explicit 3D lattice, see arxiv:0908.0591
This says nothing about how well the successful continuum theory for the standard model approximates the proposed lattice theory, hence does not do what you want it to do.
 
  • #151
A. Neumaier said:
Of course QED at a fixed number of loops is a theory, an established part of theoretical physics. It is mathematically as well-defined and as consistent as lattice field theory, and gives far superior results.
Scharf's book on QED (did you ever try to read it?) defines everything (not only the S-matrix but the Hilbert space and the field operators) in completely rigorous terms.
Really, completely rigorous? Scharf writes:
After the discussion of free fields it is high time to come to interactions. We will introduce interactions in the framework of scattering theory. Here the basic object is the scattering matrix (S-matrix). It maps the asymptotically incoming free fields on the outgoing ones; the interaction is switched off at large times and large distances. Then the S-matrix can be expressed by free fields as a formal power series.
If I tell a mathematician that I would like to define a theory using a formal power series, and that this is completely rigorous, the only answer will be laughter.
A. Neumaier said:
Its only defect is that we know nothing rigorous about the limit when the number of loops grows indefinitely, but this is no worse than that we know nothing rigorous about lattice QFTs when the lattice spacing goes to zero.
No, it is much worse, because the first terms of some formal power series are even less than your formal power series itself. And, no, a power series for some particular problem named scattering is not yet a theory.
A. Neumaier said:
The problem is that you need to show that renormalized QED at 6 loops is actually a valid approximation - which is dubious in the light of triviality results!
The assumption that your 6 loop QED is a theory at all is much more dubious. Does it have a Hamilton operator? It cannot be the QED Hamilton operator, because it would give that formal power series instead.
A. Neumaier said:
QED triviality is a problem of relating the lattice QED to the continuum QED.
Lacking this relation means lacking support for the claim that one approximates the other at the physical values of the parameters defining the specific theory.
This relation is well-defined in the other direction - QED as the large distance approximation of lattice QED. And the impossibility to define QED in a reasonable way for arbitrarily small distances is not a problem for this relation at all.
A. Neumaier said:
This says nothing about how well the successful continuum theory for the standard model approximates the proposed lattice theory, hence does not do what you want it to do.
You seem to see problems where there are none - computing continuous large distance approximations for discrete theories was a standard job of condensed matter physics even before atoms had actually been observed. On the other hand, you sell me a formal power series as a rigorous definition of a theory.
 
  • Like
  • Sad
Likes Tendex, weirdoguy and atyy
  • #152
Elias1960 said:
If I tell a mathematician that I would like to define a theory using a formal power series, and that this is completely rigorous, the only answer will be laughter.
Your response makes me laugh about your views on rigor. You should not try to educate an expert mathematician like me...

Formal power series are completely rigorous mathematical objects, studied by many pure mathematicians. The only thing one cannot do rigorously with a formal power series is an evaluation at a nonzero argument. But for 6 loop QED not even that is needed, since all power series are truncated at degree 6, and this gives polynomials with perfectly well-defined values.
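A minimal sketch of what "truncation gives a polynomial with a well-defined value" means in practice (my own illustration; only the leading Schwinger coefficient 1/2 is exact here, the second coefficient is quoted to a few digits, and everything beyond is simply dropped by the truncation):

```python
# Sketch (illustrative only): a formal power series truncated at fixed order is an
# ordinary polynomial, so its value at a given coupling is perfectly well defined.
alpha = 1.0 / 137.035999          # fine-structure constant (approximate)
x = alpha / 3.141592653589793     # expansion parameter alpha/pi

# a_e ≈ (1/2) x - 0.328478965 x^2 + ...   (coefficients beyond order 2 omitted)
coeffs = [0.0, 0.5, -0.328478965]

a_e = sum(c * x**n for n, c in enumerate(coeffs))
print(f"truncated value: a_e ≈ {a_e:.9f}")   # prints roughly 0.0011596, a definite number
```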
Elias1960 said:
The assumption that your 6 loop QED is a theory at all is much more dubious. Does it have a Hamilton operator? It cannot be the QED Hamilton operator, because it would give that formal power series instead.
It is a 6-loop approximation to the QED Hamilton operator, as well-defined as a lattice field Hamiltonian.

Elias1960 said:
This relation is well-defined in the other direction - QED as the large distance approximation of lattice QED.
No, this is your unproved conjecture, at present wishful thinking without a shred of evidence.

There is a large body of theory of QED at few loops, taught to every physicist. But there is no theory at all about lattice QED, and nobody has ever taught this nonexistent theory. Nevertheless, @Elias1960 claims the opposite - that continuum QED is not a theory but lattice QED is one. Why is nobody teaching this revolutionary insight into what makes a theory?

Even lattice QCD, which (unlike lattice QED, which is a caricature of a field theory) produces numerically sensible results, happens on Euclidean spacetime. Unlike 6 loop QED, it is not based on a Schrödinger equation (which is the basis of quantum mechanics) but uses instead a dynamics in discrete imaginary time. To get information about application to quantum physics one needs to do analytic continuation to real time - the possibility of which depends on the Osterwalder Schrader theorem, a result from continuum field theory borrowed even by the lattice QCD community!
Elias1960 said:
computing continuous large distance approximations for discrete theories was a standard job of condensed matter physics even before atoms had actually been observed.
What a high quality argument that cannot even distinguish between classical mechanics and quantum mechanics!
 
Last edited:
  • Like
Likes dextercioby
  • #153
A. Neumaier said:
Yes, and the reason is that QED is not defined on the lattice but on the continuum. It is to any fixed loop order Lorentz covariant and mathematically well-defined (in causal perturbation theory, which constructs everything, the S-matrix, the Hilbert space and the field operators). Already loop order 1 gives an excellent match with experiment, though for very high accuracy one needs orders up to six.
The level of rigor is the same as for lattice theories that break the covariance.
No. A lattice theory on a big cube with periodic boundary conditions is a well-defined quantum theory with a finite number of degrees of freedom. Even if you restrict yourself to loop order 6 or so you have infinities and have to use renormalization and so on, which makes the whole theory ill-defined.
A. Neumaier said:
If you want to latticise QED for a subsequent Bohmian treatment you need to specify how to do it in a way that preserves its predictive properties at the same level of rigor. Simply speculating that it can be done is not enough.
No. Nobody wants to "latticise" QED. QED as a continuum theory is not well-defined. Instead, lattice QED is a well-defined theory from the start. This makes a big difference.

And, note: To show that a certain well-defined lattice theory has some well-defined continuum approximation is easy. Even if this continuous approximation makes sense only as an approximation, as long as one avoids considering it below the critical length, it works nicely.
A. Neumaier said:
Your response makes me laugh about your views on rigor. You should not try to educate an expert mathematician like me...
Formal power series are completely rigorous mathematical objects, studied by many pure mathematicians.
Of course, there are a lot of things one can do with formal power series which are completely rigorous, like multiplying them or so. Math works with everything which has a precise definition. But you have to care about the context: We talk here about a theory which makes sense only if that formal power series converges at least for some non-zero values.
A. Neumaier said:
The only thing one cannot do rigorously with a formal power series is an evaluation at a nonzero argument. But for 6 loop QED not even that is needed, since all power series are truncated at degree 6, and this gives polynomials with perfectly well-defined values.
But without any rigorous relation to reality, if that reality is defined by some formal power series that may not even converge.
A. Neumaier said:
It is a 6-loop approximation to the QED Hamilton operator, as well-defined as a lattice field Hamiltonian.
Really? Can you give me a source where this approximation is given? AFAIK, these approximations are approximations for particular scattering amplitudes, and presuppose a scattering problem, you know, with free particles coming from minus infinity and moving toward plus infinity as free particles. This is, of course, the most important problem if the only devices you have to test the theory in question are particle accelerators. But how to work with this if you have, say, a finite more or less stable many-particle configuration is completely ignored.

Instead, the lattice Hamiltonian is defined for everything, for every physical situation.

A. Neumaier said:
No, this is your unproved conjecture, at present wishful thinking without a shred of evidence.
I would think that to prove such a triviality would not even be worth a publication. Because too trivial.
A. Neumaier said:
There is a large body of theory of QED at few loops, taught to every physicist. But there is no theory at all about lattice QED, and nobody has ever taught this nonexistent theory. Nevertheless, @Elias1960 claims the opposite - that continuum QED is not a theory but lattice QED is one. Why is nobody teaching this revolutionary insight into what makes a theory?
Because physicists simply don't care about having a well-defined theory. At least not those working in particle physics. They care about getting, with whatever dirty tricks, numerical results. Those who care end up in AQFT, which is IMHO an impasse because it insists on having a field theory and exact relativistic symmetry.

Again, how often do I have to repeat it, I have different priorities. I don't work in particle physics, and so I don't have to compute any such coefficients. I work in fundamental physics, and my priority is that a theory worth considering has to be, first of all, well-defined. In the mathematical sense, not in your sloppy physical sense of well-defined. So, if I thought relativistic field theories were a nice idea for a fundamental theory, I would care about AQFT. I don't think so. On a fundamental level, we have to think about incorporating gravity, which is nonrenormalizable, thus can make sense only as an effective field theory, as an approximation. Instead, lattice theories don't have this problem. They can be rigorously defined.

BTW, that lattice theories can be used as a regularization of QFT is part of reasonable standard courses, not?
A. Neumaier said:
Even lattice QCD, which (unlike lattice QED, which is a caricature of a field theory) produces numerically sensible results, happens on Euclidean spacetime. Unlike 6 loop QED, it is not based on a Schrödinger equation (which is the basis of quantum mechanics) but uses instead a dynamics in discrete imaginary time. To get information about application to quantum physics one needs to do analytic continuation to real time - the possibility of which depends on the Osterwalder Schrader theorem, a result from continuum field theory borrowed even by the lattice QCD community!
First, nobody cares where to borrow theorems if they are theorems. I would not even resist to take a theorem from string theory or even MWI if it would be useful.

Then, I do not care about the computations made as long as there is no well-defined theory at all.

Then, lattice QED cannot be a caricature of field theory because it is not even a field theory and has no intention to become a field theory.

Last but not least, the evolution on Euclidean spacetime is a nice method to find the lowest energy states. Instead of evolving like ##e^{i E t}## they evolve like ##e^{-E t}##, which heavily suppresses the high energy parts. So, it defines a reasonable method for computations even without any analytic continuation. Whether it is appropriate to use it for computations depends on the context.
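A minimal illustration of this mechanism (my own sketch, nothing specific to lattice QCD): acting with ##e^{-Ht}## on a generic state damps each energy eigencomponent by ##e^{-E_n t}##, so the state is projected onto the lowest-energy eigenstate without any analytic continuation.

```python
# Sketch (illustrative only): imaginary-time evolution exp(-H t) projects a generic
# state onto the ground state, since each eigencomponent is damped by exp(-E_n t).
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))
H = 0.5 * (H + H.T)                       # small random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)                  # exact spectrum, used only to check

psi = rng.standard_normal(6)
for t in (0.0, 2.0, 8.0):
    evolved = V @ (np.exp(-E * t) * (V.T @ psi))   # exp(-H t) |psi>
    evolved /= np.linalg.norm(evolved)
    print(f"t = {t:4.1f}: overlap with ground state = {abs(evolved @ V[:, 0]):.6f}")
```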
A. Neumaier said:
What a high quality argument that cannot even distinguish between classical mechanics and quantum mechanics!
What a high quality argument which does not even mention where the difference between quantum and classical mechanics makes a difference.

Renormalization techniques have been useful in classical theory too; Wilson gained a Nobel prize for such work, which led to a much better understanding of some phase transitions. The most important step is to identify, and get rid of in the limit, all the effects which play a role only for small distances and can be ignored in the large distance limit. In modern language following Wilson this appears as suppression of the higher order non-renormalizable contributions, but this is what has been done in classical condensed matter theory too.

The problems related to doubling are completely classical. The problem of obtaining exact gauge invariance on a lattice is also completely classical.
 
Last edited:
  • #154
Elias1960 said:
if you restrict yourself to loop order 6 or so you have infinities and have to use renormalization and so on, which makes the whole theory ill-defined.
No. In Scharf's approach there is not a single infinity, but he gets precisely the same renormalized results as the standard textbook recipes, which indeed sound somewhat ill-defined.
Elias1960 said:
Nobody wants to "latticise" QED. QED as a continuum theory is not well-defined. Instead, lattice QED is a well-defined theory from the start.
This is not true. The lattice QED action is not given from the start but is determined by discretizing the continuum QED action, which motivates everything because it is well-known to give excellent results. In lattice QED, one has - in place of the 2-parameter family of standard QED whose parameters (the renormalized electron mass and charge) have a clear physical meaning - a huge, messy family of theories with different predictions, depending on the details of the discretization, including which discrete point set and associated triangulation you choose. Different people would find it hard to agree on a single choice.
Elias1960 said:
We talk here about a theory which makes sense only if that formal power series converges at least for some non-zero values.
Only you talk about such a theory. Scharf talks about a theory defined by causality axioms (which are likely to be mathematically consistent though that hasn't been proved) and then constructs it approximately order by order, always respecting covariance and avoiding ill-defined items. This is no different in spirit from approximating the QED action by lattice actions to avoid ill-defined items. But Scharf actually reproduces the traditional results while lattice QED reproduces nothing and only has a dream that it possibly could reproduce these results.
Elias1960 said:
Really? Can you give me a source where this approximation is given?
The second edition of Scharf's book constructs the field operators to fixed loop order acting on the asymptotic Fock space, which has its natural Hilbert space structure. Their Heisenberg dynamics is given by the time shift operator. It defines a 1-parameter group which has an infinitesimal generator, and this is the Hamiltonian.
It is well-defined by general theorems of functional analysis, though Scharf doesn't express it explicitly.
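For readers who want the functional-analysis step spelled out, the general fact being invoked here is Stone's theorem (my paraphrase, not a quotation from Scharf): a strongly continuous one-parameter unitary group can always be written as ##U(t) = e^{-iHt}##, with a self-adjoint generator ##H\psi = i\,\lim_{t\to 0}\frac{U(t)\psi - \psi}{t}## defined on the dense domain where this limit exists; that generator is the Hamiltonian referred to above.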
Elias1960 said:
AFAIK, these approximations are approximations for particular scattering amplitudes, and presuppose a scattering problem, you know, with free particles coming from minus infinity and moving toward plus infinity as free particles.
You obviously didn't read Scharf's book in depth. He constructs much more than scattering amplitudes; the latter are just the check that everything agrees with the tradition.
Elias1960 said:
I would think that to prove such a triviality would not even be worth a publication. Because too trivial.
Your thoughts don't count, only your proofs. The devil is in the details.
Elias1960 said:
Because physicists simply don't care about having a well-defined theory. At least not those working in particle physics. They care about getting, with whatever dirty tricks, numerical results.
Those who do completely ignore lattice QED since it presently doesn't deliver any results that can be compared with experiment, and never will.

But mathematical physicists care, and they also ignore lattice QED, as an irrelevant theory.
Elias1960 said:
Then, I do not care about the computations made as long as there is no well-defined theory at all.
Indeed, you don't care at all about computations! But this is a vice, not a virtue.
Elias1960 said:
Renormalization techniques have been useful in classical theory too; Wilson gained a Nobel prize for such work, which led to a much better understanding of some phase transitions.
From https://www.nobelprize.org/prizes/physics/1982/press-release/ :
The Nobel committee said:
Wilson’s theory for critical phenomena gave a complete theoretical description of the behaviour close to the critical point and gave also methods to calculate numerically the crucial quantities.
He got his Nobel prize for making quantitative calculations that agreed with experiment.

While you make wishful claims about methods for QED that don't calculate anything even remotely resembling experiment!
 
Last edited:
  • Like
Likes weirdoguy and Motore
  • #155
It is generally thought that there is not yet any rigorously constructed example of an interacting 3+1D relativistic QFT, in either finite or infinite volume.

https://www.claymath.org/sites/default/files/yangmills.pdf
"In fact, at present we do not know any non-trivial relativistic field theory that satisfies the Wightman (or any other reasonable) axioms in four dimensions. So even having a detailed mathematical construction of Yang–Mills theory on a compact space would represent a major breakthrough. Yet, even if this were accomplished, no present ideas point the direction to establish the existence of a mass gap that isuniform in the volume. Nor do present methods suggest how to obtain the existence of the infinite volume limit T4→R4."

At the informal physics level, there are arguments that QCD or Yang Mills exists as a continuum relativistic QFT, but this is generally not thought to apply to QED, because of the Landau pole. Perhaps the Landau pole is misleading, and QED could be asymptotically safe, but that is generally regarded as extremely speculative. Hence at both the informal or formal levels, there is no consensus that QED exists as a continuum relativistic QFT.

Certainly it would be nice to have more work in how to derive QED as a low energy effective field theory from lattice QED, but at the physics level the probability of this being possible is generally placed well above the probability of QED as a continuum relativistic QFT. For example, Capitani states in his article on Lattice Perturbation Theory that "in principle all known perturbative results of continuum QED and QCD can also be reproduced using a lattice regularization instead of the more popular ones. However, calculating in such a way the correction to the magnetic moment of the muon (to make an example) would be quite laborious."

Yes, I do remember that we've talked about this before and disagreed.
 
Last edited:
  • Like
Likes Elias1960
  • #156
A. Neumaier said:
No. In Scharf's approach there is not a single infinity, but he gets precisely the same renormalized results as the standard textbook recipes, which indeed sound somewhat ill-defined.
Simply circumventing the places where the infinities appear, by considering a finite 6-loop part of a formal power series, where the important question is whether that power series converges to anything at all, is not a big deal.
A. Neumaier said:
This is not true. The lattice QED action is not given from the start but is determined by discretizing the continuum QED action, which motivates everything because it is well-known to give excellent results.
Kepler has motivated his research by astrological considerations. Who cares? Should we care about astrology given that considerations based on astrology have given such important results?
A. Neumaier said:
In lattice QED, one has - in place of the 2-parameter family of standard QED whose parameters (the renormalized electron mass and charge) have a clear physical meaning - a huge, messy family of theories with different predictions, depending on the details of the discretization, including which discrete point set and associated triangulation you choose. Different people would find it hard to agree on a single choice.
The point being? For the spatial discretization itself, there is the straightforward candidate ##\mathbb{Z}_N^3##, which has two additional parameters: lattice distance and size of the cube. Given that both are not expected to influence the physical parameters, but only the connection between bare parameters and renormalized parameters, this is not a big deal. Ok, one can as well use other lattices, there are a lot of 3D lattices, look at any book about crystals for a classification. But what would be the point of considering more complex models? Ockham suggests not to do such things. The two additional parameters mentioned above are clearly motivated by real problems - they cut the UV and IR infinities.
A. Neumaier said:
Only you talk about such a theory. Scharf talks about a theory defined by causality axioms (which are likely to be mathematically consistent though that hasn't been proved) and then constructs it approximately order by order, always respecting covariance and avoiding ill-defined items. This is no different in spirit from approximating the QED action by lattice actions to avoid ill-defined items.
No, the first key difference is your "likely to be mathematically consistent though that hasn't been proved".

And the other one is the central one of our discussion, namely that I do not aim at approximating some QED action by a lattice action. The aim is to have a rigorously well-defined QT - and a lattice theory on ##\mathbb{Z}_N^3## with ##n## field degrees of freedom is simply a standard quantum theory on ##Q \cong \mathbb{R}^{N^3 \cdot n}##, so that there is no doubt that it is well-defined - which is viable. To prove the viability, one can use the results which are known for the large distance continuous approximation, which is QED.
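To make the "standard quantum theory on ##Q \cong \mathbb{R}^{N^3 \cdot n}##" claim concrete, here is a minimal sketch (my own illustration, for a free scalar field with one degree of freedom per site, ##n=1##): on a periodic ##\mathbb{Z}_N^3## lattice the Hamiltonian decouples into ##N^3## harmonic oscillators, i.e. ordinary quantum mechanics with finitely many degrees of freedom.

```python
# Sketch (illustrative only): a free scalar field on a periodic Z_N^3 lattice with
# spacing h and mass m is N^3 decoupled oscillators with frequencies
#   omega(k)^2 = m^2 + (2/h^2) * sum_i (1 - cos(k_i h)),  k_i = 2*pi*n_i/(N*h).
import numpy as np

N, h, m = 4, 1.0, 0.5
k = 2.0 * np.pi * np.arange(N) / (N * h)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
omega = np.sqrt(m**2 + (2.0 / h**2)
                * (3.0 - np.cos(kx * h) - np.cos(ky * h) - np.cos(kz * h)))

E0 = 0.5 * omega.sum()            # ground-state energy of the finite system
print(f"{N**3} oscillator modes, E0 = {E0:.4f} in lattice units")
```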
A. Neumaier said:
But Scharf actually reproduces the traditional results while lattice QED reproduces nothing and only has a dream that it possibly could reproduce these results.
To get the continuous large distance limit of a lattice theory is a standard procedure. And this standard procedure is sufficient to realize this "dream". Remember, mathematical results have no copyright which forbids their use in other theories. So, all that has been found about QFT by your beloved Scharf can be used too for lattice QED.
A. Neumaier said:
Your thoughts don't count, only your proofs. The devil is in the details.
The devil in the details of large distance limits of lattice equations is well-known by those who have made lattice computations to model our real world. Such details are, in particular the doublers.
A. Neumaier said:
Those who do completely ignore lattice QED since it presently doesn't deliver any results that can be compared with experiment, and never will.
And you continue to ignore that numerical computations are, for fundamental considerations, only the second step, after having defined a consistent theory. Moreover, you don't get the point that those who think that a lattice theory is fundamental can use all your different approximation methods too. And they have a much better base for doing this, namely, a rigorously well-defined theory.
A. Neumaier said:
But mathematical physicists care, and they also ignore lattice QED, as an irrelevant theory.
As I said, they have essentially chosen AQFT, in one variant or another. AFAIU your description, Scharf is doing something in the same direction: he dreams up some axioms which, in his opinion, a reasonable RQFT should fulfill, and then derives something.
A. Neumaier said:
Indeed, you don't care at all about computations! But this is a vice, not a virtue.
No, it is neither a vice nor a virtue, but a natural subdivision of labor. Both things are necessary - approximate computations as well as rigorous foundations. Some people care about the most accurate computations, others about rigorous foundational theories.

Compare the investments into various directions of research in particle physics. The large majority is doing computations either for the SM or for the various variants of QFT beyond the SM. In comparison, those who care about rigorous foundations are already a minority, and those who think that AQFT is an impasse, and that other approaches have to be used to find candidates for fundamental rigorously defined theories, are even fewer.

Given that in fundamental (that means necessarily speculative) research it makes no sense to concentrate everything into a single approach, reducing fundamental research to AQFT would be stupid. So, my specialization is a reasonable choice. Even if, as every other direction in fundamental physics, a risky one.
A. Neumaier said:
He got his Nobel prize for making quantitative calculations that agreed with experiment.
He started a revolution in our understanding of renormalization, the main problem in QFT. Feel free to reduce him to computing some numbers for phase transitions - it was simply a consequence, an application of what he had achieved.
A. Neumaier said:
While you make wishful claims about methods for QED that don't calculate anything even remotely resembling experiment!
I don't. To obtain the continuous limit of a lattice theory is not that difficult and has been done many times, starting from the early development of condensed matter theory. The problems which may appear there, like doublers, are well known and sufficiently well understood. I would not even start to write a proof that a quite straightforward lattice QED, using the well-known methods to suppress doublers and Wilson's proposal to put gauge fields on the lattice, gives QED in the limit. The only reason for doing this would be the guess that physicists don't care about proving theorems rigorously, so that one can hope that simply nobody has done that thing, and get a cheap publication that way.
 
Last edited:
  • Sad
  • Skeptical
Likes weirdoguy and Motore
  • #157
atyy said:
It is generally thought that there is not yet any rigorously constructed example of an interacting 3+1D relativistic QFT, in either finite or infinite volume.
Only by those who regard ''relativistic QFT'' as synonymous with ''satisfying the Wightman (or any other reasonable) axioms''. But they also exclude lattice QFT.

My only claim was that 6-loop QED is (in the version given in the second edition of Scharf's QED book) a mathematically well-defined covariant quantum theory, complete with a Hilbert space and a unitary representation of the Poincare group, hence including a Hamiltonian. Thus it has everything a lattice QFT has. (Since locality holds only approximately in 6-loop QED, it does not quite satisfy the Wightman axioms, so there is no conflict with your quote.)

But in addition, 6-loop QED is covariant and in excellent computational agreement with experiment, while lattice QED is based on a nonphysical finite spacetime and has so far not produced computationally a single experimentally verified prediction. The only predictions Elias1960 was able to muster are those where he approximates lattice QED by few loop covariant QED. By Ockham's razor, one should discard lattice QED since it needs extra structure (a discrete spacetime and cutoffs) but produces no predictive benefit for it. (The only claimed benefit, mathematical consistency, is shared by 6-loop QED.)

atyy said:
Capitani states in his article on Lattice Perturbation Theory that "in principle all known perturbative results of continuum QED and QCD can also be reproduced using a lattice regularization instead of the more popular ones. However, calculating in such a way the correction to the magnetic moment of the muon (to make an example) would be quite laborious."
He neither proves this statement nor does he give a single reference to a proof. Thus this is a claim without any substantiation, just reflecting his belief. In view of the known triviality results, strong evidence would be needed to make it believable to a skeptic.

By the way, why do you quote my whole long post when you address only a tiny part of it? It is clearer and saves display time on the smartphone when you cut it down to what you really address.
 
Last edited:
  • #158
Elias1960 said:
For the spatial discretization itself, there is the straightforward candidate ##\mathbb{Z}_N^3##, which has two additional parameters: lattice distance and size of the cube. [...] The two additional parameters mentioned above are clearly motivated by real problems - they cut the UV and IR infinities.
But they don't have a physical meaning since you claim them to be essentially arbitrary.
Elias1960 said:
I would not even start to write a proof that a quite straightforward lattice QED, using the well-known methods to suppress doublers and Wilson's proposal to put gauge fields on the lattice, gives QED in the limit. The only reason for doing this would be the guess that physicists don't care about proving theorems rigorously, so that one can hope that simply nobody has done that thing, and get a cheap publication that way.
If you started, you would start to see the problems. This is not just a simple one page exercise, otherwise a proof would have appeared in papers such as Capitani's article on Lattice Perturbation Theory that atyy quoted from.

Our philosophies are worlds apart, and I have nothing new to say, so I stop here this exchange.
 
Last edited:
  • Like
Likes weirdoguy and dextercioby
  • #159
A. Neumaier said:
But they don't have a physical meaning since you claim them to be essentially arbitrary.
They have a clear physical meaning, which you can find out if you study the theory of crystals. Theories with different lattices will be physically different theories. But their large distance limit will be the same continuous QFT. So, these differences will have no easily observable large distance effects.
A. Neumaier said:
If you started, you would start to see the problems. This is not just a simple one page exercise, otherwise a proof would have appeared in papers such as Capitani's article on Lattice Perturbation Theory that atyy quoted from.
If I were Capitani, I would not see any reason to include such a proof, irrelevant for the study of Lattice Perturbation Theory, in his paper. He writes:
Details of the lattice formulation that are only relevant at the nonperturbative level will not be discussed in this review.
But the large distance limit of the SM is non-perturbative because the SM contains QCD.

A. Neumaier said:
Only by those who regard ''relativistic QFT'' as synonymous with ''satisfying the Wightman (or any other reasonable) axioms''. But they also exclude lattice QFT.
As they should, given that a lattice theory is not a QFT at all.
A. Neumaier said:
But in addition, 6-loop QED is covariant and in excellent computational agreement with experiment, while lattice QED is based on a nonphysical finite spacetime and has so far not produced computationally a single experimentally verified prediction.
Nonsense. In such a lattice theory, the physical space is defined by the lattice. Don't use the word "physical" where you want to express only your personal metaphysical prejudice for a different theory with a different spacetime model.

Again, "produced computationally" is not a valid criterion. Once the corresponding QFT has been identified as the continuous large distance limit of a particular lattice theory, all predictions computed by whatever methods for that QFT are predictions of the lattice theory too.
A. Neumaier said:
By Ockham's razor, one should discard lattice QED since it needs extra structure (a discrete spacetime and cutoffs) but produces no predictive benefit for it. (The only claimed benefit, mathematical consistency, is shared by 6-loop QED.)
Except that it is a rigorously well-defined theory, which your theory is not. Sorry, I simply don't believe your claim that those finite loop approximations are really completely well-defined theories. That claim was not simply a reference to Scharf, but you combine claims about Scharf (which I would have to check; the first check I made did not end in your favor) with some of your own arguments, something I would have to check in detail myself before accepting. But the claim is simply not plausible because, if true, it would have been proven explicitly and would be widely known. It is, last but not least, a completely different thing to have a series of rigorously well-defined nontrivial QFTs, so that one can hope that in the limit they give the QFT one likes, than the actual state of affairs: a formal power series of approximations where the restriction to a finite order gives nothing.
A. Neumaier said:
He neither proves this statement nor does he give a single reference to a proof. Thus this is a claim without any substantiation, just reflecting his belief. In view of the known triviality results, strong evidence would be needed to make it believable to a skeptic.
As explained many times, the triviality results are completely unrelated.
We have the claim that the large distance limit ##l=nh \to \infty## of lattice QED is continuous QED - a result which, say, could be proven by considering only lattices with lattice distance ##h \gg l_{Planck}##.
Triviality concerns the limit ##h \to 0## of QED, which you can prove restricting yourself completely to distances ##\ll l_{Planck}##. So, conflating the two shows that you completely miss what has to be proved. Moreover, triviality does not relate to lattice theory at all but is a failure of QED as a continuous theory.

BTW, the conceptual advantages of lattice theories even if considered only as a method of regularization have been mentioned by Capitani at the very beginning:
A spacetime lattice can be viewed as a regularization which is nonperturbative. Since the other known regularizations, like dimensional regularization or Pauli-Villars, can be defined only order by order in perturbation theory, the lattice regularization has this unique advantage over them. It is a regularization which is not tied to any specific approximation method, and which allows calculations from first principles employing various numerical and analytical methods, without any need to introduce models for the physics or additional parameters
You completely ignore the large distance strong coupling results, in particular confinement, a key issue for the viability of the SM, where your QED methods give nothing at all:
Confinement means that quarks, the fundamental fields of the QCD Lagrangian, are not the states observed in experiments, where one can see only hadrons, and the free theory has no resemblance to the observed physical world.

A. Neumaier said:
Our philosophies are worlds apart, and I have nothing new to say, so I stop here this exchange.
Your choice. But I agree, our philosophies are indeed worlds apart, and you had nothing new to say.
 
  • Sad
Likes weirdoguy
  • #160
Elias1960 said:
They have a clear physical meaning, which you can find out if you study the theory of crystals.
Spacetime is not a crystal.
Elias1960 said:
You completely ignore the large distance strong coupling results, in particular confinement
This is irrelevant for the present discussion because confinement is not a property of QED, and we were discussing the latter.
 
  • #161
A. Neumaier said:
Spacetime is not a crystal.
The theory of crystals shows you that atomic lattices with different crystal structures have different physical properties. So, this is also to be expected from theories which replace continuous QFT with a lattice QFT.

Your preferred philosophy about space and time is irrelevant for this question. There are, in fact, two straightforward interpretations of such a lattice theory. In the first one, space itself is a lattice, and the variables which define the state of a particular lattice point change in time. In the second one, the lattice itself is similar to an atomic lattice, and the lattice nodes move, described by trajectories in some more fundamental continuous space. In this case, the lattice has nothing to do with spacetime at all.
A. Neumaier said:
This is irrelevant for the present discussion because confinement is not a property of QED, and we were discussing the latter.
It is not irrelevant that what you propose as a big advantage is essentially worthless if we look at the bigger picture.

You propose to define the theory by developing an approximation which is completely hopeless to use even as an approximation for QCD. So, this is at best a strange mathematical exercise, without any hope to be applicable even to existing established theories like QCD.

I propose to define the theory as a lattice theory. Which is known to be unproblematic everywhere, for every field theory. The definition is simple and mathematically rigorous, and trivial. Not only for the Euclidean approach (a useful method to approximate lowest energy states), but also for the Schrödinger equation. The first nontrivial thing is, then, the derivation of the large distance approximation, which is a continuous field theory. The techniques for this are well understood. They always give some field theory, even if naively unexpected things sometimes happen, like the appearance of fermion doublers. There is renormalization, but the effects of renormalization are always finite effects - relations between the lattice theories of two different lattice sizes, say, ##h## and ##h'=2^k h##.
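A minimal sketch of what such a finite relation between the theories at spacings ##h## and ##2h## looks like (my own illustration with the simplest possible example, the 1D Ising chain, where decimating every second spin gives the exact recursion ##K' = \tfrac{1}{2}\ln\cosh(2K)##):

```python
# Sketch (illustrative only): one exact blocking step for the 1D Ising chain, a
# finite, well-defined map from the coupling at spacing h to the coupling at 2h.
import math

def decimate(K: float) -> float:
    """Sum out every second spin: K' = (1/2) ln cosh(2K)."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.0
for step in range(5):
    print(f"spacing 2^{step} h: K = {K:.6f}")
    K = decimate(K)   # coarser lattice, coupling flows toward the K = 0 fixed point
```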

Nobody has a problem if one uses strange and physically completely meaningless approaches to compute approximate results. Dimensional regularization gives good approximations? Fine. Does this suggest that it makes physical sense to consider physical theories in spacetime dimension ##4-\varepsilon##? No. Dimensional regularization remains a method for computing approximately some expressions in some well-defined theory in dimension ##3+1##, and this method can be used even by those who think physical theories in ##4-\varepsilon##-dimensional spacetimes are complete nonsense.

And the same holds for QED too. Feynman diagrams are nice tools for computing approximations, but nothing more.

Let's note that such a different evaluation of methods for computation and theories has a good methodological base. For computations, neither Ockham's razor applies nor other principles of simplicity and beauty - all that matters is the correct mathematical result. This is because math delivers formulas which, once proven, are known to be true, and whether the proof is ugly, long, complicated, and uses methods inapplicable in similar situations does not matter at all. Proven means proven, period. Our certainty about the result does not depend on the properties of the proof. Similar things can be said about approximate computations. They may be ugly and dirty, it does not matter.

For fundamental theories, we have, instead, to apply some completely different principles. Simplicity and beauty of the theory are key ingredients. The physical predictions of the theory are another one. But, note: Nothing depends on the human ability to compute these predictions accurately, or, if humans can compute them only approximately, how accurately they can compute them and how many resources they have to spend to compute them. A prediction of a theory, as far as it is relevant for the application of Popper's criterion of empirical content, does not depend on the ability of humans to compute the correct numbers predicted by the theory. It depends only on the question whether that number is uniquely defined, given the parameters of the theory.

The same holds, in principle, for the human ability to make experiments too. A fundamental theory will possibly make non-trivial predictions only for distances below Planck length, leaving humanity without a chance to test it seriously. This is, in fact, what has to be expected. But this would not be a conceptual problem of the theory. What matters is that it makes well-defined predictions for everything. Well-defined, not well-computed or well-tested.
 
  • #162
Elias1960 said:
In the second one, the lattice itself is similar to an atomic lattice, and the lattice nodes move, described by trajectories in some more fundamental continuous space. In this case, the lattice has nothing to do with spacetime at all.
That's indeed the idea used in my "Bohmian mechanics for instrumentalists".
 
  • #163
Elias1960 said:
The theory of crystals shows you that atomic lattices with different crystal structures have different physical properties. So, this is also to be expected from theories which replace continuous QFT with a lattice QFT.
Yes, and there are a huge number of possibilities for a lattice QED, while there is just one continuum QED.
Elias1960 said:
Your preferred philosophy about space and time is irrelevant for this question.
It's the philosophy of all physics textbooks, without exception. While your philosophy is a minority point of view.
Elias1960 said:
It is not irrelevant that what you propose as a big advantage is essentially worthless if we look at the bigger picture.
Only in the same meaningless sense that the standard model is essentially worthless if we look at the bigger picture which must include gravity.
Elias1960 said:
The definition is simple and mathematically rigorous, and trivial.
... and for QED completely useless. Nobody ever used it.
 
  • #164
A. Neumaier said:
It's the philosophy of all physics textbooks, without exception. While your philosophy is a minority point of view.
... and for QED completely useless. Nobody ever used it.
And a 51% majority in physics is always right; we live in a physical democracy, SCNR.
A. Neumaier said:
Yes, and there are a huge number of possibilities for a lattice QED, while there is just one continuum QED.
Which will probably start to matter if we reach the region where the lattice spacing becomes important.
A. Neumaier said:
Only in the same meaningless sense that the standard model is essentially worthless if we look at the bigger picture which must include gravity.
In part. It, indeed, makes not much sense to start to consider approaches which would have to be thrown away in any reasonable approach to the inclusion of gravity. Are there reasonable approaches to QG which use lattices? Yes, there are, e.g. LQG; thus, this is not a problem for lattice theory either.

The difference is that there is no agreement about those plausible approaches, so that this restriction does not give us anything. But there is essentially consensus about the SM, so in this case this gives a lot of additional information.
 
  • #165
A. Neumaier said:
... and for QED completely useless. Nobody ever used it.
In this thread, and in fact in the whole subforum, we do not discuss what is useful and used in practice. For the sake of being useful in practice, shut-up-and-calculate is enough. But here we are discussing foundations. In foundations it can be very important whether something is mathematically precisely defined, even when it is not very useful for making testable predictions.
 
  • #166
Elias1960 said:
And a 51% majority in physics is always right
A 100% majority of textbooks asserts that QED is based on continuous spacetime. That's called scientific consensus.
Elias1960 said:
But there is essentially consensus about the SM
Yes, and the consensus is that the standard model is based on continuous fields on a locally Minkowski spacetime, and that lattice QCD is only an approximation.
 
Last edited:
