How to derive Born's rule for arbitrary observables from Bohmian mechanics?

In summary, the conversation discusses the derivation of Born's rule for arbitrary observables from Bohmian mechanics. Three possible sources for this derivation are mentioned, including a paper by one of the participants. Further discussion covers the mathematical equivalence of the three derivations, the assumption of a tensor product structure, and the role of unitarity in the derivation. The conversation also touches on the reduction of perceptibles to macroscopic positions and on how angular momentum is measured in practice.
  • #71
A. Neumaier said:
Thanks. But p. 180 has no proof at all, and the outline at the top of the next page makes the (in general unwarranted, see post #42 above) assumption that one can neglect both the system Hamiltonian and the detector Hamiltonian and that one may therefore only consider the interaction term.
I don't understand why this would be a problem. It is, as mentioned, a physical-level proof: many things a mathematician would have to specify and prove are simply dismissed as trivialities. Here the critical part is clearly the interaction term, not the Hamiltonians of the parts.
A. Neumaier said:
Well, we are experimentally reaching smaller and smaller distances. Thus preferred-frame effects should at some point become observable. When depends on the actual model you propose for an effective QED. None exists in the literature; there are only toy theories that deviate significantly from QED even in the large-distance limit.
At some point. This point may be far away. To propose models beyond effective field theory (QED is itself an effective theory and, as an effective theory, does not need a model of itself) which would give QED in the large-distance limit smells like ether theory and is thus essentially a no-go today. So even if such models existed in the literature, they would be ignored and possibly could not even be discussed here.
 
  • #72
Demystifier said:
OK, now we more or less agree. You are right that actually proving the existence of appropriate decoherence is nontrivial. I have assumed it, not proved it. But I hope you will agree that, from what is already known about decoherence (analytically solved toy models and numerically solved more complicated models), the assumption of existence of appropriate decoherence is rather plausible and reasonable. If a more rigorous analysis turned out to show nonexistence of appropriate decoherence, it would be very surprising. So, strictly speaking, I have not rigorously proved my claim, but I have given a very plausible argument for it.
Well, in (the arXiv version of) your paper you don't mention the assumption of existence of appropriate spatial decoherence; instead you give a shallow argument ''proving'' decoherence (15) without stating the required nondemolition assumption.

I don't know whether the assumption of existence of appropriate spatial decoherence is plausible for the measurement of an arbitrary observable. It is plausible for some observables, but needs an argument in the general case.

Elias1960 said:
I don't understand why this would be a problem. It is, as mentioned, physical level proof, many things which a mathematician would have to specify and prove will be simply ignored as trivialities. Here, the critical part is clearly the interaction term and not the Hamiltonians of the parts.
Well, that neglecting the unperturbed part is a serious assumption - even on the level of physics - is pointed out in the measurement paper by Wigner quoted earlier; the neglect is valid only for nondemolition measurements (which are rare).
 
Last edited:
  • #73
Elias1960 said:
At some point. This point may be far away. To propose models beyond effective theory (QED is itself an effective theory, and as an effective theory does not need a model of itself) which would give QED in the large distance limit smells like ether theory, thus, is essentially a no go today.
It is unreasonable to argue from questions of principle yourself while criticizing the use of questions of principle in my arguments.

The point where there is an effective QED tractable by Bohmian mechanics may also be far away. Lattice theories don't do it at present.
 
  • #74
A. Neumaier said:
It is unreasonable to argue from questions of principle yourself while criticizing the use of questions of principle in my arguments.
I don't get the point. My point that one cannot use the requirement of fundamentality of some particular symmetry as a decisive argument, given that most symmetries in physics are only approximate symmetries, is indeed a question of principle. But your remark that "preferred frame effects should at some point become observable" is nothing but an optimistic side remark, and essentially irrelevant, since it cannot be used as an argument that alternatives in which the symmetry is only approximate should not be studied.
A. Neumaier said:
The point where there is an effective QED tractable by Bohmian mechanics may also be far away. Lattice theories don't do it at present.
Why would a lattice theory not do it? Do you have in mind problems with constructing appropriate lattice models, like putting chiral gauge fields on the lattice as exact gauge symmetries, or fermion doubling? On the other hand, I see none. The lattice theory on a large cube is finite-dimensional, which already removes the main technical problem; what remains is standard BM theory.

For chiral gauge theories there is no need for exact gauge symmetry; they are massive anyway. With staggered fermions, combined with the point that time can be left continuous, which reduces the problem by yet another factor of 2, we end up with two Dirac fermions - which is all that is needed for the SM, where the fermions appear only in electroweak doublets.
 
Last edited:
  • #75
Elias1960 said:
Why would a lattice theory not do it?
Lattice QED suffers from the triviality problem. In a continuum limit, it does not converge to covariant QED but to a free theory. Thus there is no cutoff that would result in a good approximation to QED. One would need an approximation accurate to 12 decimal digits...
 
  • #76
A. Neumaier said:
Lattice QED suffers from the triviality problem. In a continuum limit, it does not converge to covariant QED but to a free theory. Thus there is no cutoff that would result in a good approximation to QED. One would need an approximation accurate to 12 decimal digits...
According to what Wikipedia writes about the triviality problem, it is a problem of ##\Lambda \to \infty##, thus not a problem of effective field theory.

If you cannot get 12 decimal digits with lattice theories given modern computers, so what? I do not object if those who do the real computations use even conceptually completely meaningless things like dimensional regularization. I'm interested in lattice theory for conceptual reasons: it is a conceptually meaningful theory in itself and, at least in principle, also a candidate for a theory beyond QFT.
 
  • Like
Likes Tendex and Demystifier
  • #77
Elias1960 said:
According to what Wikipedia writes about the triviality problem, it is a problem of ##\Lambda \to \infty##, thus not a problem of effective field theory.
You wanted to replace QED by a version with finite cutoff, hence it applies. It also applies to lattice approximations, since a lattice approximation corresponds to ##\Lambda=s^{-1}##, where ##s## is the lattice spacing. (It does not apply to lattice QCD since QCD is asymptotically free and hence has no triviality problem.)
Elias1960 said:
If you cannot get 12 decimal digits with lattice theories given modern computers, so what? I do not object if those who do the real computations use even conceptually completely meaningless things like dimensional regularization. I'm interested in lattice theory for conceptual reasons: it is a conceptually meaningful theory in itself and, at least in principle, also a candidate for a theory beyond QFT.
It is not a matter of today's computers. No lattice approximation, even when solved with exact arithmetic and arbitrary lattice size, will be close to QED since for large lattice spacing the error is huge and for (not very) tiny lattice spacing triviality sets in. This happens already at currently feasible lattice sizes!

Thus one cannot use lattice approximations to make arguments of principle.
 
  • #78
A. Neumaier said:
You wanted to replace QED by a version with finite cutoff, hence it applies. It also applies to lattice approximations, since a lattice approximation corresponds to ##\Lambda=s^{-1}##, where ##s## is the lattice spacing.
The Wiki version describes it quite clearly as a problem which does not exist for finite cutoffs. They give the formula
$$ g_{obs}=\frac{g_{0}}{1+\beta_{2}g_{0}\ln(\Lambda /m)}$$
This gives the problematic zero for ##g_{obs}## only in the limit ##\Lambda\to\infty##. But for finite ##\Lambda## everything is fine, except when ##\Lambda## reaches the Landau pole, where the bare coupling diverges:
$$ g_{0}=\frac{g_{obs}}{1-\beta_{2}g_{obs}\ln(\Lambda /m)}$$
The Landau pole is beyond anything imaginable, ##10^{286}\,\text{eV}##, and we are interested here only in ##\Lambda## of at most the Planck scale, ##10^{28}\,\text{eV}##.
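As a rough numerical sketch of these orders of magnitude (my own back-of-the-envelope check, assuming the coupling here is the fine-structure constant ##\alpha## with one-loop coefficient ##\beta_2 = 2/(3\pi)## and ##m## the electron mass):

```python
import math

alpha = 1 / 137.036        # observed coupling g_obs (fine-structure constant)
beta2 = 2 / (3 * math.pi)  # one-loop QED coefficient (assumed convention)
m_e = 0.511e6              # electron mass in eV

# Landau pole: scale where 1 - beta2 * alpha * ln(Lambda/m) vanishes
landau_pole = m_e * math.exp(1 / (beta2 * alpha))
print(f"Landau pole ~ 10^{math.log10(landau_pole):.0f} eV")      # ~ 10^286 eV

# bare coupling g_0 needed for a Planck-scale cutoff, Lambda ~ 1e28 eV: still finite
Lambda = 1e28
g0 = alpha / (1 - beta2 * alpha * math.log(Lambda / m_e))
print(f"g_0 at Planck-scale cutoff: {g0:.5f} (vs. alpha = {alpha:.5f})")
```

So the denominator only becomes problematic at scales absurdly far beyond the Planck scale, which is the point being made above.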

Maybe you mean something like what Wiki describes in the following words:
Lattice gauge theory provides a means to address questions in quantum field theory beyond the realm of perturbation theory, and thus has been used to attempt to resolve this question.
Numerical computations performed in this framework seem to confirm Landau's conclusion that QED charge is completely screened for an infinite cutoff.
But even here the problem is claimed to be one of the infinite cutoff. I have taken a look into arxiv:hep-th/9712244 cited as support there and it suggests something different from
A. Neumaier said:
No lattice approximation, even when solved with exact arithmetic and arbitrary lattice size, will be close to QED since for large lattice spacing the error is huge and for (not very) tiny lattice spacing triviality sets in. This happens already at currently feasible lattice sizes!
First, the very point of lattice theory combined with renormalization is that one can compute the renormalized parameters out of the original ones on quite small lattices. They use a ##16^4## lattice. Once one can iterate this from anywhere down to the large-distance limit, the limits for ##\Lambda## in this method have nothing to do with the limits for ##\Lambda## in lattice computations for, say, scattering coefficients.

Then, again, the point is simply that QED cannot be used down to arbitrary distances, because the limit fails. As an effective field theory, it is fine. For finite cutoffs below the Landau pole region, a finite ##g_{0}## will give a nonzero ##g_{obs}##. The result of this paper, rather, removes the problem that could be created by a too-low Landau pole: if there were, say, a Landau pole below the Planck scale, then the lattice theory at Planck-length spacing could possibly fail to give the originally intended large-distance limit. But, according to the paper, there is no such danger at all for QED. And even if this were not similar for the SM, its Landau pole, at ##10^{34}\,\text{GeV}## according to the paper, is still much greater than the Planck scale of ##10^{19}\,\text{GeV}##.
A. Neumaier said:
Thus one cannot use lattice approximations to make arguments of principle.
One certainly can make such arguments - for example, that such a lattice theory is a well-defined theory, and that there is no problem in defining a Bohmian version of it.
 
  • #79
A. Neumaier said:
Lattice QED suffers from the triviality problem. In a continuum limit, it does not converge to covariant QED but to a free theory. Thus there is no cutoff that would result in a good approximation to QED. One would need an approximation accurate to 12 decimal digits...
I think it is a problem of the continuum (covariant) QED, not a problem of lattice QED. QED in a continuum is trivial, while QED on a lattice is not.
 
  • #80
Elias1960 said:
for finite ##\Lambda## everything is fine, except when ##\Lambda## reaches the Landau pole, where the bare coupling diverges:
$$ g_{0}=\frac{g_{obs}}{1-\beta_{2}g_{obs}\ln(\Lambda /m)}$$
But for finite ##\Lambda## one is always far away from the covariant formulas that are used to make the predictions!
Elias1960 said:
I have taken a look into arxiv:hep-th/9712244 cited as support there [...] They use a ##16^4## lattice.
and they get near triviality already at this crude resolution, not only in the continuum limit! At higher resolution it will be even closer to triviality, not closer to covariant QED!
Elias1960 said:
First, the very point of lattice theory combined with renormalization is that one can compute the renormalized parameters out of the original ones on quite small lattices.
Only for asymptotically free theories such as QCD. It cannot be done for QED, hence there is no good lattice approximation for QED. If it could have been done it would have been done already!
Elias1960 said:
Then, again, the point is simply that QED cannot be used down to arbitrary distances, because the limit fails.
The limit only fails for lattice theories and other approximations with a fixed cutoff. This proves that lattice theories cannot approximate QED.

In causal perturbation theory there is no need for a cutoff. The formulas derived there can be used at any reasonable renormalization scale and are covariant from the start.
Demystifier said:
I think it is a problem of the continuum (covariant) QED, not a problem of lattice QED. QED in a continuum is trivial, while QED on a lattice is not.
No. On the contrary:

All experimentally verified predictions have been made with covariant QED, i.e., using the nontrivial, covariant renormalized continuum QED without cutoff (e.g., in causal perturbation theory, https://www.physicsforums.com/insights/causal-perturbation-theory/, expanded in powers of the fine structure constant).

On the other hand, no experimentally verified predictions have ever been made with lattice QED. Indeed, QED on a lattice, no matter how crude or fine its spacing, cannot give correct predictions, since its continuum limit is trivial, and triviality sets in already at lattice spacings that can be tested computationally.

This has been discussed already in other threads, e.g., here and here (and followups).
 
Last edited:
  • Like
Likes mattt
  • #81
Demystifier said:
I think it is a problem of the continuum (covariant) QED, not a problem of lattice QED. QED in a continuum is trivial, while QED on a lattice is not.
Like @A. Neumaier I'm not sure this argument is very solid, even though it is presented in many texts. We have examples of theories which have Landau poles perturbatively and whose lattice formulation tends to the free theory as the continuum limit is approached, but which are actually perfectly well defined in the continuum limit with non-trivial interactions. An example is the Gross-Neveu model.

The case for triviality is even weaker for the Standard Model, since there we have indications, both perturbative and numerical, that it is not trivial. See Callaway's well-known paper:
https://www.sciencedirect.com/science/article/abs/pii/0370157388900087
 
  • Like
Likes Demystifier and mattt
  • #82
A. Neumaier said:
Thus arriving at decoherence is quite nontrivial - it is indeed the only real difficulty of decoherence theory. Not that it cannot be done in particular settings, but it is done with sophisticated machinery, not with the tools of the 1930s that you employ. I haven't seen a decoherence analysis for measuring arbitrary system operators. Should you know one, it might solve the problem, and I'd be very interested in a reference.
On a related note: Have you seen a decoherence analysis of something like a Bell test?

In the cases which have been studied a lot, we have a system in a superposition state which gets turned into a mixed state by interacting with the environment. What I haven't seen yet is an analysis where the initial superposition state is an entangled state and the interaction happens only between one of the subsystems and its environment.
 
  • #83
A. Neumaier said:
But for finite ##\Lambda## one is always far away from the covariant formulas that are used to make the predictions!
This is a claim based on nothing. You have no basis at all for claims about the size of the error of a lattice computation with, say, the Planck length as lattice spacing.
A. Neumaier said:
and they get near triviality already at this crude resolution not only in the continuum limit! At higher resolution it will be even closer to triviality, not closer to covariant QED!
Looks like, first, you have not understood what they have computed on this ##16^4## lattice, and, second, you have not understood the problem with triviality.

The ##16^4## lattice has been used to compute the best approximation, on the ##8^4## sublattice, of the lattice equations defined on the ##16^4## lattice. This is one step in the computation of the renormalized coefficients of the equations, and something completely different from a computation of some physical prediction of QED on a particular lattice. They can use, say, a ##16^4## lattice with Planck-length spacing to compute the coefficients of the renormalized lattice theory on the ##8^4## sublattice with a spacing of twice the Planck length. Then, in the next step, they can use a ##16^4## lattice with a spacing of twice the Planck length to compute the coefficients of the renormalized lattice theory on the ##8^4## sublattice with a spacing of four times the Planck length, and so on. Or they could start this business with ##10^{-100}## Planck lengths as the spacing. This is a completely different type of lattice computation, which has nothing to do with lattice computations of any observable effects in QED.
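Just to make the blocking step concrete, here is a toy sketch of the coarse-graining operation itself (my own illustration with a random scalar field; it shows only the ##16^4 \to 8^4## averaging step, not the matching of effective couplings that the cited paper actually performs):

```python
import numpy as np

def block_spin(phi):
    """One Kadanoff blocking step: average the field over 2^4 hypercubic blocks,
    halving the number of sites per direction and doubling the lattice spacing."""
    n = phi.shape[0]
    return phi.reshape(n // 2, 2, n // 2, 2, n // 2, 2, n // 2, 2).mean(axis=(1, 3, 5, 7))

rng = np.random.default_rng(0)
phi = rng.normal(size=(16, 16, 16, 16))  # toy field configuration on a 16^4 lattice

spacing = 1.0  # in units of the starting (e.g. Planck-length) spacing
while phi.shape[0] > 1:
    phi = block_spin(phi)
    spacing *= 2
    print(f"{phi.shape[0]}^4 lattice, spacing {spacing:g}, field variance {phi.var():.4f}")
```

Iterating such steps, each time re-expressing the effective couplings on the coarser lattice, is what is meant above by carrying the coefficients from a very fine spacing up to the large-distance limit.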

And the problem with triviality does not even exist in the lattice theory; it exists only for the limit. All one has to do in the lattice theory is to compute, by the renormalization techniques, the bare coefficients on the finest lattice that give the appropriate observable values in the large-distance limit.
A. Neumaier said:
Only for asymptotically free theories such as QCD. It cannot be done for QED, hence there is no good lattice approximation for QED. If it could have been done it would have been done already!
Here you conflate two completely different problems: the problem of how to define a reasonable, meaningful lattice theory which gives QED in the limit - which is quite trivial - and the problem of finding a lattice theory which is good for making real computations. This second problem may be unsolvable, in the sense that conceptually meaningless methods like dimensional regularization will always give higher accuracy with the same computational effort than a lattice computation.
A. Neumaier said:
The limit only fails for lattice theories and other approximations with a fixed cutoff. This proves that lattice theories cannot approximate QED.
It shows only that QED is not a well-defined theory in the limit, and other methods have also failed to show that QED is a well-defined theory in the continuum limit. That means there is nothing mathematically well-defined to approximate. QED itself is meaningful only as an approximation of some more fundamental theory.
A. Neumaier said:
In causal perturbation theory there is no need for a cutoff. The formulas derived there can be used at any reasonable renormalization scale and are covariant from the start.
Your beloved causal method gives only some asymptotic series, which is nothing. Asymptotic series are something comparable to computing sums like 1 - 2 + 3 - 4 + ...
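To illustrate what "asymptotic" means in practice (a toy example of my own, using Euler's series for ##\int_0^\infty e^{-t}/(1+xt)\,dt## as a stand-in for a QFT expansion): the partial sums first approach the exact value, reach a best accuracy at a truncation order of roughly ##1/x##, and then diverge.

```python
import math
from scipy.integrate import quad

x = 0.1
exact, _ = quad(lambda t: math.exp(-t) / (1 + x * t), 0, math.inf)

partial = 0.0
best_err, best_n = math.inf, 0
for n in range(31):
    partial += (-1) ** n * math.factorial(n) * x ** n   # Euler's divergent series
    err = abs(partial - exact)
    if err < best_err:
        best_err, best_n = err, n
    if n in (5, 10, 20, 30):
        print(f"N={n:2d}: partial sum = {partial: .4f}, |error| = {err:.1e}")

print(f"exact = {exact:.6f}; best accuracy {best_err:.1e} at order N={best_n}")
```

Whether such a finite, limited accuracy counts as "nothing" or as good enough is of course exactly what is being disputed here.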
A. Neumaier said:
All experimentally verified predictions have been made with covariant QED, i.e., using the nontrivial, covariant renormalized continuum QED without cutoff (e.g., in causal perturbation theory, https://www.physicsforums.com/insights/causal-perturbation-theory/, expanded in powers of the fine structure constant).
On the other hand, no experimentally verified predictions have ever been made with lattice QED.
Don't cry, these claims remain irrelevant even when boldfaced. Again, I have no problem acknowledging that the most efficient way to compute predictions is to use such an asymptotic series or other ill-defined things like dimensional regularization. It would be an accident if the simple, boring and straightforward lattice theory were also the most efficient way to make computations.
A. Neumaier said:
Indeed, QED on a lattice, no matter how crude or fine its spacing, cannot give correct predictions, since its continuum limit is trivial, and triviality sets in already at lattice spacings that can be tested computationally.
Wrong. The straightforward limit would simply have an infinite interaction constant, which makes no sense. But for every finite ##\Lambda## everything else is fine: there would be some (very large, but so what) bare interaction constant which gives the correct interaction constant in the large-distance limit. Your "triviality sets in" makes no sense. The only fact behind it is that the bare interaction constant increases with ##\Lambda## in an unbounded way. This indeed starts immediately, but it proves nothing.
 
  • Like
Likes Tendex
  • #84
Elias1960 said:
Your beloved causal method gives only some asymptotic series, which is nothing
In QM and QFT perturbation theory is always asymptotic; it's a bit extreme to say it is "nothing". @A. Neumaier's point is that one has no Landau pole in this method. This is similar to the Gross-Neveu model, where Landau poles show up in one method of perturbation theory and not in another expansion, such as the ##\frac{1}{N}## expansion.

It shows we can't trust a Landau pole in one perturbative method to be a conclusive argument for triviality.
 
  • #85
DarMM said:
In QM and QFT perturbation theory is always asymptotic; it's a bit extreme to say it is "nothing".
In QFT, there is serious doubt that the continuum theory is even well defined. As a proof that the continuum theory is well defined, an asymptotic series is nothing. For finding reasonable empirical predictions, asymptotic series are fine. So it depends on what one wants to achieve.
DarMM said:
@A. Neumaier's point is that one has no Landau pole in this method.
This is also what the paper that used lattice computations claimed to have shown. So if you are right, there is nothing to argue about.
DarMM said:
It shows we can't trust a Landau pole in one perturbative method to be a conclusive argument for triviality.
Agreed here too.

My point is a quite different one, namely that triviality is an issue only if one cares about having a well-defined continuum limit of QFT. In the effective field theory approach one does not care about this, and so the whole problem is non-existent. For a finite cutoff, there will be parameters which give the correct QED at large distances; the triviality problem appears only because the infinite-cutoff limit would require an infinite interaction constant (and if one instead fixes the bare constant at a finite value, the large-distance coupling goes to zero).
 
  • Like
Likes Tendex and Demystifier
  • #86
DarMM said:
We have examples of theories ... whose lattice formulation tends to the free theory as the continuum limit is approached, but which are actually perfectly well defined in the continuum limit with non-trivial interactions. An example is the Gross-Neveu model.
I would like to learn more about this. Can you give a reference for the claims above on the lattice and continuum versions of the Gross-Neveu model?
 
  • #87
Demystifier said:
I would like to learn more about this. Can you give a reference for the claims above on the lattice and continuum versions of the Gross-Neveu model?
Just to warn you, it's not easy stuff to read.

I'd start with Vincent Rivasseau's "From Perturbative to Constructive Renormalization".
 
  • Like
Likes mattt and Demystifier
  • #88
Elias1960 said:
In QFT, there is serious doubt that the continuum theory is even well defined
I wouldn't say this. We have several examples of continuum theories in 2D and 3D which are well-defined. Balaban also demonstrated the existence of a continuum limit for Yang-Mills in 4D, so I don't think any serious doubt remains. It's the infinite volume limit that is more difficult.
 
  • Like
Likes mattt, dextercioby, weirdoguy and 1 other person
  • #89
Elias1960 said:
For a finite cutoff, there will be parameters which give the correct QED at large distances
No. Indeed, this cannot be proved without taking a continuum limit, since only then does the Lorentz invariance characteristic of QED appear. But the continuum limit is obstructed by the Landau pole.
Thus your claim is wishful thinking.
 
  • Like
Likes weirdoguy
  • #90
A. Neumaier said:
No. Indeed, this cannot be proved without taking a continuum limit, since only then does the Lorentz invariance characteristic of QED appear. But the continuum limit is obstructed by the Landau pole.
Thus your claim is wishful thinking.
Sorry, but Lorentz invariance is not something characteristic of QED but a quite general property of wave equations. If the lattice equation gives a wave equation in the large-distance limit, it also has Lorentz invariance.
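As a standard illustration of that last claim (textbook dispersion-relation analysis, not specific to any model discussed here): the naive spatial discretization ##\ddot\phi_n = \frac{c^2}{s^2}(\phi_{n+1}-2\phi_n+\phi_{n-1})## of the wave equation has plane-wave solutions with
$$\omega^2(k)=\frac{4c^2}{s^2}\sin^2\!\left(\frac{ks}{2}\right)=c^2k^2\left(1-\frac{(ks)^2}{12}+O\!\big((ks)^4\big)\right),$$
so at wavelengths much larger than the lattice spacing ##s## the Lorentz-invariant dispersion ##\omega^2=c^2k^2## is recovered, with corrections suppressed by powers of ##ks##.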

That the continuum limit does not exist in a meaningful way is clear; with or without a Landau pole we have the triviality problem. Note also that minor distortions of Lorentz covariance are unproblematic as long as they cannot be observed at the distances accessible now.

DarMM said:
I wouldn't say this. We have several examples of continuum theories in 2D and 3D which are well-defined. Balaban also demonstrated the existence of a continuum limit for Yang-Mills in 4D, so I don't think any serious doubt remains. It's the infinite volume limit that is more difficult.
If you think so, your choice. The formulation "demonstrated the existence of" sounds dubious, not like "has constructed an example of". Whatever, if he gets the prize for solving the Millennium problem I will no longer use this claim.

In fact, even if one can somehow define them, it will not be worth much, given that non-renormalizable gravity is anyway only an effective theory.
 
  • Sad
Likes weirdoguy
  • #91
Elias1960 said:
If you think so, your choice. The formulation "demonstrated the existence of" sounds dubious, not like "has constructed an example of". Whatever, if he gets the prize for solving the Millennium problem I will no longer use this claim.
"Demonstrate the existence of" is completely normal language. Do you just disagree with everything?
He has constructed the continuum limit. What hasn't been shown is that the infinite-volume limit exists with a mass gap, which is required for the Millennium problem.

You were saying there is serious doubt over the existence of the continuum limit. There isn't, due to completely well defined 2D and 3D theories, as well as existence results for the 4D continuum limit. People working in constructive QFT don't have doubts over continuum QFT existing.

Elias1960 said:
In fact, even if one can somehow define them, it will not be worth much, given that non-renormalizable gravity is anyway only an effective theory.
This is again a non sequitur. We were talking about whether QED and other QFTs have continuum limits. You said there were serious doubts; there aren't. Now what - there's a problem with this because nobody has formulated Quantum Gravity or something?
 
  • Like
Likes mattt, dextercioby and weirdoguy
  • #92
DarMM said:
He has constructed the continuum limit. What hasn't been shown is that the infinite-volume limit exists with a mass gap, which is required for the Millennium problem.
I have tried to find the relevant paper and found this:
Balaban, T. (1987). Renormalization Group Approach to Lattice Gauge Field Theories I. Commun. Math. Phys. 109, 249-301
Balaban, T. (1988). Renormalization Group Approach to Lattice Gauge Field Theories II. Commun. Math. Phys. 116, 1-22
Are these the relevant papers?
(It would be funny if the best results about the very existence of continuum theories had been reached by the same lattice methods which Neumaier thinks cannot be applied to QED.)
DarMM said:
You were saying there is serious doubt over the existence of the continuum limit. There isn't, due to completely well defined 2D and 3D theories, as well as existence results for the 4D continuum limit. People working in constructive QFT don't have doubts over continuum QFT existing.
Ok, I will take this into account and formulate my position differently in the future.
DarMM said:
This is again a non sequitur. We were talking about whether QED and other QFTs have continuum limits. You said there were serious doubts; there aren't. Now what - there's a problem with this because nobody has formulated Quantum Gravity or something?
There is no problem with this. I based my statement about the serious problems on what I have heard about this question from people less optimistic than you. I have not checked it myself, since for me it was an irrelevant side issue. If the situation is better, fine, I will remember this. But I don't have to change anything else, and the point of my remark about gravity was to explain why it is, in my opinion, only a quite irrelevant side issue.
 
  • #93
DarMM said:
I wouldn't say this. We have several examples of continuum theories in 2D and 3D which are well-defined. Balaban also demonstrated the existence of a continuum limit for Yang-Mills in 4D, so I don't think any serious doubt remains. It's the infinite volume limit that is more difficult.

Is it the case that the 4D limit has been established, but not the 3D limit? In describing Balaban's 3D work, http://www.claymath.org/sites/default/files/yangmills.pdf says that the continuum limit has not been established: "This is an important step toward establishing the existence of the continuum limit on a compactified space-time. These results need to be extended to the study of expectations of gauge-invariant functions of the fields."

That article also seems to indicate that the 4D finite-volume problem is open, and it is not just the infinite-volume problem that remains: "These steps toward understanding quantum Yang–Mills theory lead to a vision of extending the present methods to establish a complete construction of the Yang–Mills quantum field theory on a compact, four-dimensional space-time. One presumably needs to revisit known results at a deep level, simplify the methods, and extend them."
 
  • #94
Balaban has established that there is a continuum limit of the action, i.e. there is a well defined theory in the continuum. He never established that expectation values of gauge invariant operators are unique, nor did he prove certain analyticity properties for them.

These are usually necessary to solve what is called the finite-volume case in constructive field theory, but they're not really the issues the average physicist has in mind when speaking of the continuum limit; they usually mean that there is something well defined and nontrivial in the continuum limit.

Thus Balaban has shown that the continuum limit exists, but has not demonstrated that it has certain uniqueness and analyticity properties for Wilson loops.
 
  • Like
Likes vanhees71
  • #95
DarMM said:
Thus Balaban has shown that the continuum limit exists, but has not demonstrated that it has certain uniqueness and analyticity properties for Wilson loops.

Have you heard the story about why he gave up working on Yang Mills? He moved house, and the movers lost the box with his notes on Yang Mills.
 
  • #97
atyy said:
Have you heard the story about why he gave up working on Yang Mills? He moved house, and the movers lost the box with his notes on Yang Mills.
Yes from yourself years ago! Makes one want to cry! :cry:
 
  • #98
Demystifier said:
$$\rho(\vec{x},\vec{y}) =|\Psi(\vec{x},\vec{y})|^2
\simeq \sum_k|c_k|^2 |\Psi_k(\vec{x},\vec{y})|^2 ~~~~~(1)$$
In the second equality we have assumed that the ##A_{kq}(\vec{x})## are macroscopically distinct for different ##k##, which we must assume if we want to have a system that can be interpreted as a measurement of ##K##.
In (1) [equation label added by me] you assume without justification that the ##\Psi_k(\vec{x},\vec{y})## with different ##k## have approximately disjoint support. This is unwarranted without a convincing analysis.
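To spell out the step in question (my paraphrase of what (1) needs, not an argument for or against it): with ##\Psi=\sum_k c_k\Psi_k##, the exact density is
$$|\Psi(\vec{x},\vec{y})|^2=\sum_k|c_k|^2|\Psi_k(\vec{x},\vec{y})|^2+\sum_{k\neq l}c_kc_l^*\,\Psi_k(\vec{x},\vec{y})\Psi_l^*(\vec{x},\vec{y}),$$
so (1) holds precisely to the extent that the cross terms are negligible, i.e. that the ##\Psi_k## with different ##k## have approximately disjoint supports - which is exactly the assumption whose justification is being asked for.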
 
  • #99
A. Neumaier said:
In (1) [equation label added by me] you assume without justification that the ##\Psi_k(\vec{x},\vec{y})## with different ##k## have approximately disjoint support. This is unwarranted without a convincing analysis.
So you want to see an explicit calculation based on the theory of decoherence, right? If this is what would satisfy you, I will try to find one in the literature.
 
  • #100
Demystifier said:
So you want to see an explicit calculation based on the theory of decoherence, right? If this is what would satisfy you, I will try to find one in the literature.
Whatever you need to justify it without assuming nondemolition. Wigner's analysis indicates to me that this is impossible.
 
  • #101
A. Neumaier said:
Wigner's analysis indicates to me that this is impossible.
I don't understand that claim. Can you explain how Wigner's analysis indicates that it is impossible?
 
  • #102
Demystifier said:
I don't understand that claim. Can you explain how Wigner's analysis indicates that it is impossible?
I had discussed this in post #42.
 
  • #103
A. Neumaier said:
I had discussed this in post #42.
It's basically the objection that non-demolition is not a reasonable assumption. But I don't see how that is related to the assumption that the detector wave functions are approximately separated in position space.
 
  • #104
Demystifier said:
It's basically the objection that non-demolition is not a reasonable assumption. But I don't see how that is related to the assumption that the detector wave functions are approximately separated in position space.
No. Wigner's statement (quoted at the end of my post #42) essentially says that nondemolition is a necessary condition to get the wanted decomposition. Of course you assume only that the decomposition is approximate, so the argument by Wigner is not watertight in your case.

But your argument is completely absent - you just write some formulas and then jump without further justification to the desired conclusion [namely to (1) in post #98]!
 
  • #105
A. Neumaier said:
But your argument is completely absent - you just write some formulas and then jump without further justification to the desired conclusion [namely to (1) in post #98]!
I totally disagree, but if you think so I don't know what argument to offer without repeating myself.
 
