Local realism ruled out? (was: Photon entanglement and )

In summary: the conversation discussed starting a new thread on a physics forum to present evidence for a specific perspective, related to the Bell theorem and its potential flaws on both theoretical and experimental levels. The original poster noted that their previous posts on this topic had been criticized, but that their factual basis had not been challenged until recently. They also noted that the measurement problem in quantum mechanics is a well-known issue and cited a paper they believed supports the claim that local realism has not been ruled out by existing experiments. The other participant disagreed with this reading of the paper and provided additional quotes from experts in the field. Ultimately, the conversation concluded with both parties holding differing views.
  • #561
RUTA said:
For example, we work through "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," D. Dehlinger & M.W. Mitchell, Am. J. Phys. 70, Sep 2002, 903-910, in detail.

RUTA, do you ever run that experiment in your lab?
 
  • #562
DrChinese said:
RUTA, do you ever run that experiment in your lab?

I'm a theorist. I was told as an undergrad to avoid the lab -- I destroyed too much equipment :-)
 
  • #563
RUTA, it would be great if you could explain one thing to me regarding photon entanglement (polarization superposition):

Is the entangled superposition (of the two photons) described by one single wavefunction?
 
  • #564
DevilsAvocado said:
RUTA, it would be great if you could explain one thing to me regarding photon entanglement (polarization superposition):

Is the entangled superposition (of the two photons) described by one single wavefunction?

Yes, |psi> ~ |HH> + |VV> is what Dehlinger created (well, close thereto, see eqns 1 and 6). |psi> ~ |HV> - |VH>, called the "singlet state," also gives results consistent with the Mermin device.
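A quick numerical check of the correlations this state predicts (a sketch in Python/numpy, not code from the Dehlinger-Mitchell paper itself):

```python
import numpy as np

# Bell state (|HH> + |VV>)/sqrt(2) in the basis {HH, HV, VH, VV}
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def pol(theta):
    """Projector onto linear polarization at angle theta (one photon)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def p_both_pass(a, b):
    """Probability that both photons pass polarizers at angles a and b."""
    return float(psi @ np.kron(pol(a), pol(b)) @ psi)

# Quantum prediction for this state: P(both pass) = (1/2) cos^2(a - b)
assert np.isclose(p_both_pass(0.0, 0.0), 0.5)
assert np.isclose(p_both_pass(0.0, np.pi / 2), 0.0)
```

The (1/2)cos²(a−b) dependence on the relative analyzer angle is what the undergraduate experiment measures via coincidence counts.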
 
  • #565
RUTA said:
Yes, |psi> ~ |HH> + |VV> is what Dehlinger created (well, close thereto, see eqns 1 and 6). |psi> ~ |HV> - |VH>, called the "singlet state," also gives results consistent with the Mermin device.

WOW! Just great! Many thanks RUTA!

I’m working on a "personal surprise" that’s going to cause "some trouble" in the "EPR-FTL-Department". :wink:

Will post it in https://www.physicsforums.com/showthread.php?t=395509 in a couple of days...


Just a small follow-up: A measurement on any of these two photons will collapse/decohere the wavefunction/"singlet state", right?


EDIT: I think I found the answer in http://www.optics.rochester.edu/workgroups/lukishova/QuantumOpticsLab/homepage/mitchel1.pdf :
Despite the randomness, the choice of α clearly has an effect on the state of the idler photon: it gives it a definite polarization in the |Vα>_i, |Hα>_i basis, which it did not have before the measurement.
 
  • #566
DevilsAvocado said:
Just a small follow-up: A measurement on any of these two photons will collapse/decohere the wavefunction/"singlet state", right?

It will collapse the wavefunction, but neither party knows whether the other has made a measurement: both see what look like totally random results (50-50 H/V outcomes, regardless of setting) whether or not the other party is doing anything at their end. You only see the "weirdness" in the correlations, and the data needed to compute those correlations are exchanged between observers at light speed or slower.
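The "each side alone sees pure noise" point can be verified from the reduced density matrix (a sketch, assuming the |HH>+|VV> state discussed above):

```python
import numpy as np

# Two-photon Bell state (|HH> + |VV>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)  # 4x4 two-photon density matrix

# Trace out the distant photon: everything one observer alone can see
rho_local = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(rho_local, np.eye(2) / 2)  # maximally mixed

# So a polarizer at ANY angle passes the photon with probability 1/2,
# whatever the other side does -- no signal can be sent this way
theta = 0.3
v = np.array([np.cos(theta), np.sin(theta)])
assert np.isclose(v @ rho_local @ v, 0.5)
```

Because the local state is I/2 no matter what measurement (or none) is made far away, the marginal statistics carry no usable message.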
 
  • #567
RUTA said:
It will collapse the wavefunction, ...

This is just marvelous! Thanks again!

This is going to be very interesting and fun, as soon as I have everything ready for posting. Watch out! :smile:
 
  • #568
DevilsAvocado said:
This is just marvelous! Thanks again!

This is going to be very interesting and fun, as soon as I have everything ready for posting. Watch out! :smile:

Looking forward to it... :smile:
 
  • #569
DrChinese said:
Looking forward to it... :smile:

Me too. (why am I suddenly getting 'nervous'... ?:bugeye:?)

:wink:
 
  • #570
DevilsAvocado said:
Me too. (why am I suddenly getting 'nervous'... ?:bugeye:?)

:wink:

Because you're about to learn something via one of DrC's painful lessons. No pain, no gain :-)
 
  • #571
RUTA said:
Because you're about to learn something via one of DrC's painful lessons. No pain, no gain :-)

Aw, I promise to be gentle.

Actually RUTA, I am quite in the same boat right now. I just completed a draft of a paper which is available for comments - and yours would be welcome. It has nothing to do with this thread, but check it out if anyone wants to skewer me:

DrC's New Paper and opportunity to bash me with your comments

Here's your chance! Email me (I'm not ready for a new thread quite yet as I submitted to PF Independent Research for review).
 
  • #572
RUTA said:
DrC's painful lessons. No pain, no gain :-)
DrChinese said:
I promise to be gentle.


There seems to be some "entangled discrepancy" here... flip side of the coin...? :biggrin:

I better keep my big mouth shut until there’s something more substantial for the "wolf" to tear apart. :eek:
 
  • #573
Demystifier said:
That's interesting, because my explicit Bohmian model of relativistic nonlocal reality does involve a "meta time".
Now I have a better understanding of the physical meaning of this "meta time". It can be viewed as a generalization of the notion of proper time. It is also formally analogous to Newton's absolute time (even though it is fully relativistically covariant). More details can be found in
http://xxx.lanl.gov/abs/1006.1986
 
  • #574
I have not posted here for quite some time as I did not feel I could add anything new. I am posting now because, on the one hand, the thread has apparently drawn a lot of interest, on the other hand, my paper has just been accepted for publication in the International Journal of Quantum Information (there is a preprint at http://www.akhmeteli.org/akh-prepr-ws-ijqi2.pdf ), so I guess it would be appropriate to summarize its results here, as they are quite relevant to this discussion.

So the article starts with the equations of (non-second-quantized) scalar electrodynamics. They describe a Klein-Gordon particle (a scalar particle obeying the Klein-Gordon equation) interacting with the electromagnetic field (described by the Maxwell equations). It is shown that this model is equivalent (at least locally) to a local realistic model: modified electrodynamics without particles, as the matter (particle) field can be naturally eliminated from the equations of scalar electrodynamics, and the resulting equations describe independent evolution of the electromagnetic field (the electromagnetic 4-potential). Furthermore, this evolution is shown to be equivalent to the unitary evolution of a certain (second-quantized) quantum field theory.
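For reference, the starting equations of (non-second-quantized) scalar electrodynamics in one common textbook convention (the paper's own sign and gauge conventions may differ):

```latex
% Klein-Gordon matter field coupled to the Maxwell field,
% with covariant derivative D_\mu = \partial_\mu - i e A_\mu:
\begin{align}
  \left( D^\mu D_\mu + m^2 \right)\varphi &= 0, \\
  \partial_\nu F^{\nu\mu} &= j^\mu, \qquad
  F^{\nu\mu} = \partial^\nu A^\mu - \partial^\mu A^\nu, \\
  j^\mu &= i e \left( \varphi^* D^\mu \varphi - \varphi \,(D^\mu \varphi)^* \right).
\end{align}
```

Eliminating the matter field φ from this coupled system, as the paper describes, leaves equations for the 4-potential A^μ alone.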

This is clearly relevant to the topic of this thread: indeed, it turns out that unitary evolution of a quantum field theory can be reproduced in a local realistic (LR) model, so it is impossible to rule out the LR model without using some additional postulates, such as the projection postulate of the quantum theory of measurement. On the other hand, as I argued repeatedly, this postulate directly contradicts the unitary evolution.
 
  • #575
akhmeteli said:
This is clearly relevant to the topic of this thread: indeed, it turns out that unitary evolution of a quantum field theory can be reproduced in a local realistic (LR) model, so it is impossible to rule out the LR model without using some additional postulates, such as the projection postulate of the quantum theory of measurement.
Can this quantum field theory be used to describe entangled particles and predict the results of Aspect-type experiments? Are you claiming that the LR model can violate any Bell inequalities?
 
  • #576
JesseM said:
Can this quantum field theory be used to describe entangled particles

Yes, this quantum field theory (QFT) can definitely be used to describe entangled particles.

However, this answer, while correct, can be misleading, because another question is relevant here: "Can this local realistic model (LRM) be used to describe entangled particles?" These two questions are not equivalent, as QFT and LRM are not equivalent, they just have the same evolution. One can say that LRM is a subset of QFT.

So what is the answer to the second question? The short answer is "yes". However, it depends on how you would answer the following question: "Can a 3-dimensional body be used to describe its 2-dimensional projections?" If you believe it can, then this LRM can definitely be used to describe entangled particles. If you believe it cannot, then the answer to this question is negative.

Let me explain. The states of the LRM are so-called generalized coherent states, which are superpositions of an infinite number of states of definite particle number, including a state with, say, 2 particles, so an entangled two-particle state is a projection of a state of the LRM.

JesseM said:
and predict the results of Aspect-type experiments?

I think so. As I argued here, quoting the leading experts in the field, the genuine Bell inequalities have never been violated in Aspect-type experiments so far.

However, a caveat is required here as well. I don't claim that the QFT or the LRM correctly describes all of Nature: for example, being based on scalar electrodynamics, they do not describe electron spin. However, scalar electrodynamics is a decent theory, successfully describing a very wide range of phenomena.

JesseM said:
Are you claiming that the LR model can violate any Bell inequalities?

No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

So I don't think one can demand that the LRM faithfully reproduce the relevant mutually contradicting predictions of quantum theory. On the other hand, this LRM has exactly the same evolution as the QFT.

By the way, this also suggests that one needs more than unitary evolution to prove the violations in quantum theory.
 
  • #577
akhmeteli said:
I think so. As I argued here, quoting the leading experts in the field, the genuine Bell inequalities have never been violated in Aspect-type experiments so far.
I haven't read the whole thread, are you just talking about experimental loopholes like the ones discussed here? There have been experiments that closed the detector efficiency loophole and experiments that closed the locality loophole, but no experiment that closed both loopholes simultaneously--still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.
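For concreteness, the CHSH form of the Bell inequality these experiments test can be evaluated against the QM prediction (a sketch, using the correlation for the |HH>+|VV> state at the standard optimal angles):

```python
import numpy as np

def E(a, b):
    # QM correlation for the Phi+ polarization state at analyzer angles a, b
    return np.cos(2 * (a - b))

# Standard CHSH settings: a = 0, a' = pi/4, b = pi/8, b' = 3*pi/8
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Local realism bounds |S| <= 2; QM predicts 2*sqrt(2) ~ 2.83
assert np.isclose(abs(S), 2 * np.sqrt(2))
assert abs(S) > 2
```

The experimental question in the loophole debate is whether the measured S genuinely exceeds 2 without auxiliary assumptions such as fair sampling.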
akhmeteli said:
No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.
What do you mean by "mutually contradicting postulates"? Remember, in its basic form QM is nothing more than a recipe for making predictions about experimental results; it doesn't come with any built-in interpretation of the "meaning" of this recipe...the fact that the recipe involves calculating the evolution of the wavefunction between measurements and then using the projection postulate to get the probabilities of different measurement results doesn't imply that either the wavefunction or the "collapse of the wavefunction" have any independent reality outside the fact that when we use this recipe we do get correct statistical predictions. Indeed, the example of Bohmian mechanics proves that we are free to believe there is some underlying model that explains the origin of the probabilities given in the recipe without the need to assume anything special really happens during the measurement process. And the only assumption about ordinary QM used in Bell's proof that QM is incompatible with local realism is the assumption that the recipe does indeed give correct statistical predictions about experimental results, regardless of the underlying explanation for the predicted statistics.
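The two-step recipe being described (unitary evolution between measurements, then the projection/Born rule at measurement) can be sketched in a few lines for a single qubit:

```python
import numpy as np

# Step 1: evolve the state unitarily between measurements
psi0 = np.array([1.0, 0.0])                      # start in |0>
theta = np.pi / 3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # some unitary evolution
psi = U @ psi0

# Step 2: projection/Born rule gives outcome probabilities at measurement
p0, p1 = abs(psi[0])**2, abs(psi[1])**2
assert np.isclose(p0 + p1, 1.0)
assert np.isclose(p0, np.cos(theta)**2)
```

Nothing in this recipe commits one to a view about whether the wavefunction or its collapse is "real"; it only outputs statistics to compare with experiment.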

Quantum field theory is also just a recipe for making predictions, and although I haven't studied QFT I'm pretty sure that known quantum field theories like quantum electrodynamics do mirror nonrelativistic QM in predicting violations of Bell inequalities. Does the simplified quantum field theory you are considering differ from known quantum field theories in this respect?
 
  • #578
akhmeteli said:
No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

So I don't think one can demand that the LRM faithfully reproduce the relevant mutually contradicting predictions of quantum theory. On the other hand, this LRM has exactly the same evolution as the QFT.

By the way, this also suggests that one needs more than unitary evolution to prove the violations in quantum theory.

So, are you claiming that QM's prediction of the violation of Bell inequalities is wrong?
 
  • #579
JesseM said:
I haven't read the whole thread, are you just talking about experimental loopholes like the ones discussed here?

Yes, that's what I am talking about.

JesseM said:
There have been experiments that closed the detector efficiency loophole and experiments that closed the locality loophole, but no experiment that closed both loopholes simultaneously

I agree. Some people think that closing separate loopholes in separate experiments is good enough though. In post 34 of this thread I asked one of them:

"what’s wrong [then] with the following reasoning: planar Euclidean geometry is wrong because it predicts that the sum of angles of any triangle is 180 degrees, whereas experiments demonstrate with confidence of 300 sigmas or more that the sums of angles of a quadrangle on a plane and a triangle on a sphere are not equal to 180 degrees."

I have never heard an answer from anybody.

JesseM said:
--still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.

I agree, "most experts would agree" on that. But what conclusions am I supposed to draw from that? That the model I offer is "very contrived"? I cannot agree with that, as it's essentially good old scalar electrodynamics (non-second-quantized). That the model does not "get correct predictions (agreeing with those of QM) for the experiments that have already been performed"? But it has the same evolution as the relevant QFT. I agree that the QFT is not the same as standard quantum electrodynamics (QED), but it is pretty close, so I guess the predictions will be close to those of QED in many cases, although, as I admitted, the QFT fails to describe electron spin, for example. So while I cannot state that the LRM gives correct predictions for all experiments performed so far, I would say it suggests that a local realistic theory giving correct predictions for the past experiments and failing in an ideal experiment need not be "very contrived".

JesseM said:
What do you mean by "mutually contradicting postulates"? Remember, in its basic form QM is nothing more than a recipe for making predictions about experimental results; it doesn't come with any built-in interpretation of the "meaning" of this recipe...the fact that the recipe involves calculating the evolution of the wavefunction between measurements and then using the projection postulate to get the probabilities of different measurement results doesn't imply that either the wavefunction or the "collapse of the wavefunction" have any independent reality outside the fact that when we use this recipe we do get correct statistical predictions.

I think the postulates are indeed "mutually contradicting", as the projection postulate predicts transformation of a pure wavefunction into a mixture and it predicts irreversibility. Neither is true for unitary evolution. Of course, you can indeed avoid a contradiction by saying (following von Neumann) that unitary evolution is correct between measurements and the projection postulate is correct during measurements. But I think it is rather difficult to cling to that position now, 80 years after von Neumann. Are you ready to say that if you call something "an instrument", it evolves in one way, and if you don't call it that, it evolves differently? Do you think that unitary evolution is wrong for instruments? Or for observers? I quoted Schlosshauer in post 41 in this thread; he reviewed modern experiments and concluded, among other things (please see the exact wording in post 41), that unitary dynamics has been confirmed everywhere it was tested and that there is no positive evidence of collapse.
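The pure-state-to-mixture point can be made quantitative with the purity Tr(ρ²): unitary maps preserve it, while a non-selective projective measurement lowers it (a sketch for a single qubit):

```python
import numpy as np

# Pure superposition |+> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

purity = lambda r: np.trace(r @ r).real  # Tr(rho^2): 1 iff pure

# Unitary evolution preserves purity
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(purity(U @ rho @ U.T), 1.0)

# Non-selective projection in the {|0>,|1>} basis: pure -> proper mixture
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1
assert np.isclose(purity(rho_after), 0.5)
```

No unitary map can take the initial ρ to rho_after, which is the formal content of the "contradiction" being argued about; whether that matters physically depends on whether one treats collapse as real or as bookkeeping.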


JesseM said:
Indeed, the example of Bohmian mechanics proves that we are free to believe there is some underlying model that explains the origin of the probabilities given in the recipe without the need to assume anything special really happens during the measurement process.

No, this is not quite so. If I understand Demystifier (https://www.physicsforums.com/showpost.php?p=2167542&postcount=19) correctly (and he has written maybe dozens of articles on Bohmian mechanics), although the projection postulate can be derived in Bohmian mechanics, it can only be derived as an approximation, maybe a very good approximation, but an approximation.

JesseM said:
And the only assumption about ordinary QM used in Bell's proof that QM is incompatible with local realism is the assumption that the recipe does indeed give correct statistical predictions about experimental results, regardless of the underlying explanation for the predicted statistics.

The problem is this recipe includes mutually contradictory components, so it cannot be always correct.

JesseM said:
Quantum field theory is also just a recipe for making predictions, and although I haven't studied QFT I'm pretty sure that known quantum field theories like quantum electrodynamics do mirror nonrelativistic QM in predicting violations of Bell inequalities.

I think this is correct, but they still have the same mutually contradictory components as the standard quantum theory (SQM), so what I said about SQM is true about quantum field theories, such as QED.


JesseM said:
Does the simplified quantum field theory you are considering differ from known quantum field theories in this respect?

I don't think it differs in this respect, if you include the standard measurement theory in it. But I did not say the LRM reproduces both unitary evolution and the measurement theory of this QFT, it just reproduces its unitary evolution. As unitary evolution and measurement theory are mutually contradictory, I don't think the failure to reproduce the measurement theory is a weak point of the LRM.
 
  • #580
RUTA said:
So, are you claiming that QM's prediction of the violation of Bell inequalities is wrong?

Not exactly. I suspect that this prediction may be wrong, but I cannot claim that it is wrong. Indeed, I do understand that the violations could be found in a loophole-free experiment, say, tomorrow. Following other people, I am just saying (right now, not tomorrow) that 1) there has been no evidence of violations of the genuine Bell inequalities so far, and that 2) mutually contradictory assumptions are required to derive QM's prediction of the violation of Bell inequalities. Therefore, local realism has not been ruled out so far.
 
  • #581
akhmeteli said:
I agree. Some people think that closing separate loopholes in separate experiments is good enough though. In post 34 of this thread I asked one of them:

"what’s wrong [then] with the following reasoning: planar Euclidean geometry is wrong because it predicts that the sum of angles of any triangle is 180 degrees, whereas experiments demonstrate with confidence of 300 sigmas or more that the sums of angles of a quadrangle on a plane and a triangle on a sphere are not equal to 180 degrees."

I have never heard an answer from anybody.
This is kind of a strawman, no one is asking you to adopt a general principle along the lines of "if X is true when condition Y but not condition Z holds, and X is also true when condition Z but not condition Y holds, then we can assume X is true when both conditions Y and Z hold simultaneously". Rather, the reason physicists think we can be pretty confident that Bell inequalities would be violated in an experiment where both loopholes were closed simultaneously has to do with specific considerations about the physical situation we're looking at, like the idea I already mentioned that it would require a very contrived local theory that would exploit both loopholes in just the right way that it would perfectly agree with QM in all experiments done to date.
JesseM said:
still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.
akhmeteli said:
I agree, "most experts would agree" on that. But what conclusions am I supposed to draw from that? That the model I offer is "very contrived"?
Are you claiming that your model gives correct statistical predictions about the empirical results of all the Aspect-type experiments that have been done to date?
akhmeteli said:
That the model does not "get correct predictions (agreeing with those of QM) for the experiments that have already been performed"? But it has the same evolution as the relevant QFT.
That seems like a slightly evasive answer, since you later say that you distinguish the unitary evolution aspect of QM/QFT from the projection postulate, and only claim that your model reproduces the unitary evolution, but isn't the projection postulate the only way to get actual predictions about empirical experiments from QM/QFT? Do you claim that your model can correctly predict actual empirical experimental results in the types of experiments that have been done to date, yes or no?
akhmeteli said:
I think the postulates are indeed "mutually contradicting", as the projection postulate predicts transformation of a pure wavefunction into a mixture and it predicts irreversibility.
Why is this a "contradiction", if we don't assume that either the wavefunction or its collapse on measurement are in any sense "real", but just treat them as parts of a pragmatic recipe for making quantitative predictions about experimental results? Do you claim there are any situations where the two postulates don't lead to a unique prediction about the statistics we should expect to see in some empirical experiment? If so, what situation would that be?
akhmeteli said:
Neither is true for unitary evolution. Of course, you can indeed avoid a contradiction, saying (following von Neumann) that unitary evolution is correct between measurements, and the projection postulate is correct during measurements.
Yes, this is just what the pragmatic recipe says we should do.
akhmeteli said:
But I think it is rather difficult to cling to that position now, 80 years after von Neumann. Are you ready to say that if you call something "an instrument", it evolves in one way, and if you don't call it that, it evolves differently?
Personally I believe there are some true set of laws that describe what's "really" going on (I'd favor some type of many-worlds type view) and which work exactly the same for interactions between quantum systems and "instruments" as they do for interactions between individual particles. But again, if QM is treated just as a pragmatic recipe for making predictions which says nothing about the underlying "reality" one way or another, then in practice I don't think there is much ambiguity about what constitutes a "measurement", my understanding is that it's basically synonymous with interactions that involve environmental decoherence. And the types of experiments that physicists do are typically carefully controlled to prevent environmental decoherence from any other system besides the assigned "measuring device" (for example, a double-slit experiment with an electron will be done in a vacuum to prevent decoherence from interactions between the electrons and air molecules).
akhmeteli said:
Do you think that unitary evolution is wrong for instruments? Or for observers?
I don't think it's likely to be wrong in reality, since I favor some sort of variant of the many-worlds interpretation, but I do think it's hard to get concrete predictions about empirical results using unitary evolution alone.
akhmeteli said:
I quoted Schlosshauer in post 41 in this thread, he reviewed modern experiments and concluded, among other things (please see the exact wording in post 41), that unitary dynamics has been confirmed everywhere it was tested and that there is no positive evidence of collapse.
You didn't actually give a link to the paper, but you seem to be talking about this one. Anyway, Schlosshauer seems to be just arguing for the many-worlds interpretation (see the discussion beginning with 'The basic idea was introduced in Everett’s proposal of a relative-state view of quantum mechanics' on p. 1) and against any sort of objective collapse theory (see p. 13 where he talks about 'physical collapse models'--note that such models would actually be empirically distinguishable from ordinary QM in certain situations, like if information could be recorded and then 'erased' in a sufficiently large system completely isolated from environmental decoherence), but this is not the same as arguing that on a pragmatic level there's anything wrong with using the projection postulate to get quantitative predictions about experimental results. And it typically requires a lot of sophisticated argument to show how any many-worlds type interpretation can give concrete predictions in the form of probabilities (see the preferred basis problem), with no complete agreement among many-worlds advocates on how to do this (Schlosshauer discusses the problem on p. 14 of the paper, in the section 'Emergence of probabilities in a relative-state framework'); I think they all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate, though. Indeed, Schlosshauer says at the beginning of that section that "The question of the origin and meaning of probabilities in a relative state–type interpretation that is based solely on a deterministically evolving global quantum state, and the problem of how to consistently derive Born’s rule in such a framework, has been the subject of much discussion and criticism aimed at this type of interpretation." 
And a bit later he says "The solution to the problem of understanding the meaning of probabilities and of deriving Born’s rule in a relative-state framework must therefore be sought on a much more fundamental level of quantum mechanics."
JesseM said:
Indeed, the example of Bohmian mechanics proves that we are free to believe there is some underlying model that explains the origin of the probabilities given in the recipe without the need to assume anything special really happens during the measurement process.
akhmeteli said:
No, this is not quite so. If I understand Demystifier (https://www.physicsforums.com/showpost.php?p=2167542&postcount=19) correctly (and he has written maybe dozens of articles on Bohmian mechanics), although the projection postulate can be derived in Bohmian mechanics, it can only be derived as an approximation, maybe a very good approximation, but an approximation.
I don't think Demystifier was actually saying that there'd be situations where Bohmian mechanics would give different predictions about empirical results than the normal QM recipe involving the Born rule; I think he was just saying that in Bohmian mechanics the collapse is not "real" (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield. In section 4 of the Stanford article on Bohmian mechanics, they say:
However, the form given above has two advantages: First, it makes sense for particles with spin — and all the apparently paradoxical quantum phenomena associated with spin are, in fact, thereby accounted for by Bohmian mechanics without further ado. Secondly, and this is crucial to the fact that Bohmian mechanics is empirically equivalent to orthodox quantum theory, the right hand side of the guiding equation is J/ρ, the ratio of the quantum probability current to the quantum probability density. This shows first of all that it should require no imagination whatsoever to guess the guiding equation from Schrödinger's equation, provided one is looking for one, since the classical formula for current is density times velocity.

...

This demonstrates that all claims to the effect that the predictions of quantum theory are incompatible with the existence of hidden variables, with an underlying deterministic model in which quantum randomness arises from averaging over ignorance, are wrong. For Bohmian mechanics provides us with just such a model: For any quantum experiment we merely take as the relevant Bohmian system the combined system that includes the system upon which the experiment is performed as well as all the measuring instruments and other devices used in performing the experiment (together with all other systems with which these have significant interaction over the course of the experiment). The "hidden variables" model is then obtained by regarding the initial configuration of this big system as random in the usual quantum mechanical way, with distribution given by |ψ|2. The initial configuration is then transformed, via the guiding equation for the big system, into the final configuration at the conclusion of the experiment. It then follows that this final configuration of the big system, including in particular the orientation of instrument pointers, will also be distributed in the quantum mechanical way, so that this deterministic Bohmian model yields the usual quantum predictions for the results of the experiment.
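The guiding equation quoted above, v = J/ρ, can be checked numerically for the simplest case, a plane wave, where the Bohmian velocity reduces to the classical value (a sketch; units with hbar = m = 1):

```python
import numpy as np

hbar = m = 1.0
x = np.linspace(-5, 5, 2001)
k = 2.0
psi = np.exp(1j * k * x)  # plane wave of momentum hbar*k

# Guiding equation: v = J/rho = (hbar/m) * Im(psi' / psi)
dpsi = np.gradient(psi, x)
v = (hbar / m) * np.imag(dpsi / psi)

# For a plane wave the Bohmian velocity is hbar*k/m everywhere
assert np.allclose(v[1:-1], hbar * k / m, atol=1e-3)
```

For more general wavefunctions v varies with position, but the construction is the same: read the velocity field off the current-to-density ratio of the Schrödinger wavefunction.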
akhmeteli said:
I don't think it differs in this respect, if you include the standard measurement theory in it. But I did not say the LRM reproduces both unitary evolution and the measurement theory of this QFT, it just reproduces its unitary evolution. As unitary evolution and measurement theory are mutually contradictory, I don't think the failure to reproduce the measurement theory is a weak point of the LRM.
But if it only reproduces unitary evolution, can it reproduce any of the empirical predictions about probabilities made by the standard pragmatic recipe which includes the Born rule? Or can it only predict complex amplitudes, which can't directly be compared to empirical probabilities without making use of the Born rule or some subtle many-worlds type argument?

One last thing: note that Bell's proof strictly speaking showed that QM was incompatible with local realism if we assume that part of the definition of "realism" is that each measurement has a unique outcome, rather than each experiment splitting the experimenter into multiple copies who observe different outcomes. See the simple toy model I provided in post #11 of this thread showing how, if two experimenters Alice and Bob split into multiple copies on measurement and the universe doesn't have to decide which copy of Alice is matched to which copy of Bob until there's been time for a signal to pass between them, then we can get a situation where a randomly selected Alice-Bob pair will see statistics that violate Bell inequalities in a purely local model. Likewise, see my post #8 on this thread for links to various many-worlds advocates arguing that the interpretation is a purely local model.
 
Last edited:
  • #582
JesseM said:
This is kind of a strawman, no one is asking you to adopt a general principle along the lines of "if X is true when condition Y but not condition Z holds, and X is also true when condition Z but not condition Y holds, then we can assume X is true when both conditions Y and Z hold simultaneously".
I am happy that you don’t use this argument. But it does not look like a strawman to me. See, e.g., post 7 in this thread. Furthermore, Aspelmeyer and Zeilinger wrote as follows (see the reference in post 385 in this thread):
"But the ultimate test of Bell’s theorem is still missing: a single experiment that closes all the loopholes at once. It is very unlikely that such an experiment will disagree with the prediction of quantum mechanics, since this would imply that nature makes use of both the detection loophole in the Innsbruck experiment and of the locality loophole in the NIST experiment. Nevertheless, nature could be vicious, and such an experiment is desirable if we are to finally close the book on local realism."
While they are careful enough to avoid saying anything that is factually incorrect, they do use this argument. So this argument is indeed widely used.
JesseM said:
Rather, the reason physicists think we can be pretty confident that Bell inequalities would be violated in an experiment where both loopholes were closed simultaneously has to do with specific considerations about the physical situation we're looking at, like the idea I already mentioned that it would require a very contrived local theory that would exploit both loopholes in just the right way that it would perfectly agree with QM in all experiments done to date.
I believe I addressed this statement in my previous post and I am not sure I have anything to add.

JesseM said:
Are you claiming that your model gives correct statistical predictions about the empirical results of all the Aspect-type experiments that have been done to date?

That seems like a slightly evasive answer, since you later say that you distinguish the unitary evolution aspect of QM/QFT from the projection postulate, and only claim that your model reproduces the unitary evolution, but isn't the projection postulate the only way to get actual predictions about empirical experiments from QM/QFT? Do you claim that your model can correctly predict actual empirical experimental results in the types of experiments that have been done to date, yes or no?
I appreciate that my answer may look evasive, but I was not trying to sweep anything under the carpet, so maybe the question is not quite appropriate? Let me give you an example. Suppose I asked you whether the Schroedinger equation correctly describes all experiments performed so far. Yes or no? Strictly speaking, the correct answer is “no”, because the equation is not relativistic and does not describe the electronic spin. But perhaps you’ll agree that this “correct” answer is somewhat misleading, because this is a damn good equation :-) So if you want a yes or no answer, then no, the model I offer cannot describe all experiments performed so far, e.g., because it does not describe the electronic spin, and I said so in my previous post. However, this is quite a decent model, as it includes the entire scalar electrodynamics, a well-established theory.
JesseM said:
Why is this a "contradiction", if we don't assume that either the wavefunction or its collapse on measurement are in any sense "real", but just treat them as parts of a pragmatic recipe for making quantitative predictions about experimental results? Do you claim there are any situations where the two postulates don't lead to a unique prediction about the statistics we should expect to see in some empirical experiment? If so, what situation would that be?
According to the projection postulate, after a measurement, the system is in an eigenstate, so another measurement will produce the same result (say, if the relevant operator commutes with the Hamiltonian). According to unitary evolution, though, a measurement cannot turn a superposition of states into a mixture, so there is a probability that the next measurement will return a different result. If this is not a contradiction, what is? Another situation where the two postulates don’t lead to a unique prediction is, I believe, a loophole-free Bell experiment. You cannot get a violation using just unitary evolution.
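The repeatability claim at issue here can be stated concretely. A minimal numpy sketch (a generic two-outcome example of my own, not tied to any particular experiment in this thread):

```python
import numpy as np

# System prepared in the equal superposition (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: probability of getting outcome 0 on the first measurement.
p_first = abs(psi[0]) ** 2              # ≈ 0.5

# Projection postulate: given outcome 0, the state collapses to |0>, so an
# immediately repeated measurement returns 0 with certainty.
psi_after = np.array([1.0, 0.0])
p_repeat = abs(psi_after[0]) ** 2       # 1.0

# No unitary can implement the map psi -> psi_after for every input state:
# a rank-one projection does not preserve norms, which is the tension
# between the two postulates being pointed at here.
print(p_first, p_repeat)
```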

JesseM said:
Yes, this is just what the pragmatic recipe says we should do.

Personally I believe there are some true set of laws that describe what's "really" going on (I'd favor some type of many-worlds type view) and which work exactly the same for interactions between quantum systems and "instruments" as they do for interactions between individual particles.
This is just great, so we pretty much agree with each other. Then what seems to be the problem?:-)
JesseM said:
But again, if QM is treated just as a pragmatic recipe for making predictions which says nothing about the underlying "reality" one way or another, then in practice I don't think there is much ambiguity about what constitutes a "measurement", my understanding is that it's basically synonymous with interactions that involve environmental decoherence. And the types of experiments that physicists do are typically carefully controlled to prevent environmental decoherence from any other system besides the assigned "measuring device" (for example, a double-slit experiment with an electron will be done in a vacuum to prevent decoherence from interactions between the electrons and air molecules).
JesseM, again, it looks like we pretty much agree. I could agree, say, that the difference between unitary evolution and the projection postulate can be explained by environmental decoherence, but let us first agree on what we are talking about. This thread is not about quantum theory being good or bad; everybody agrees that it is extremely good. The question of this thread is whether local realism has been ruled out or not. You see, you are talking about something “pragmatic”, but the question of this thread is not exactly pragmatic. As I said earlier in this thread, Nature cannot be “approximately local” or “approximately nonlocal”; it is either precisely local or precisely nonlocal. Or, if you disagree, then please explain what “approximate locality” could possibly be, because I don’t have the slightest idea:-) So yes, quantum theory is extremely good, but this is not relevant to the issue at hand.

JesseM said:
I don't think it's likely to be wrong in reality since I favor some sort of variant of the many-worlds interpretation, but I do think it's hard to get concrete predictions about empirical results using unitary evolution alone
Again, I agree, but, as I noted in our previous discussion (https://www.physicsforums.com/showpost.php?p=1706652&postcount=78), you may just complement unitary evolution with the Born rule as an operational principle.
JesseM said:
You didn't actually give a link to the paper, but you seem to be talking about this one.
That’s correct. Though I did not give a direct link, post 41 referenced post 31, where there is a reference to the article:-) Sorry for the inconvenience:-)
JesseM said:
Anyway, Schlosshauer seems to be just arguing for the many-worlds interpretation (see the discussion beginning with 'The basic idea was introduced in Everett’s proposal of a relative-state view of quantum mechanics' on p. 1) and against any sort of objective collapse theory (see p. 13 where he talks about 'physical collapse models'--note that such models would actually be empirically distinguishable from ordinary QM in certain situations, like if information could be recorded and then 'erased' in a sufficiently large system completely isolated from environmental decoherence), but this is not the same as arguing that on a pragmatic level there's anything wrong with using the projection postulate to get quantitative predictions about experimental results. And it typically requires a lot of sophisticated argument to show how any many-worlds type interpretation can give concrete predictions in the form of probabilities (see the preferred basis problem), with no complete agreement among many-worlds advocates on how to do this (Schlosshauer discusses the problem on p. 14 of the paper, in the section 'Emergence of probabilities in a relative-state framework'); I think they all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate, though. Indeed, Schlosshauer says at the beginning of that section that "The question of the origin and meaning of probabilities in a relative state–type interpretation that is based solely on a deterministically evolving global quantum state, and the problem of how to consistently derive Born’s rule in such a framework, has been the subject of much discussion and criticism aimed at this type of interpretation." And a bit later he says "The solution to the problem of understanding the meaning of probabilities and of deriving Born’s rule in a relative-state framework must therefore be sought on a much more fundamental level of quantum mechanics."
Again, I agree that quantum theory has great practical value, but we are not discussing practicality. It seems we both agree that unitary evolution is always correct. However, it is worth mentioning that you are telling me both that you favor many-worlds interpretation(s) and that there is no “complete agreement” on how “any many-worlds type interpretation can give concrete predictions in the form of probabilities”. This means that “many-worlds” people can actually live without the projection postulate. They may “all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate”, but, strictly speaking, they are just unable to derive these probabilities. And it is good for them that they cannot derive those probabilities, because if they derived them from unitary evolution, that would mean they had made a mistake somewhere, as you cannot derive from unitary evolution something that directly contradicts it – the projection postulate. Let me emphasize that for all practical purposes you don’t need the Born rule or the projection postulate as precise principles – if they are approximately correct, they may be good enough for practice, but not when you’re trying to understand whether Nature is local or not.

JesseM said:
I don't think Demystifier was actually saying that there'd be situations where Bohmian mechanics would give different predictions about empirical results than the normal QM recipe involving the Born rule; I think he was just saying that in Bohmian mechanics the collapse is not "real" (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield.
There is no need to guess what he said, as I gave you the reference to what he actually said. He said that the projection postulate is an approximation in Bohmian mechanics. Of course, you are free to disagree with him, with me or anybody else, but if you do, just say so. Do you believe that the projection postulate can be derived in Bohmian mechanics as a precise principle? With all due respect, I strongly doubt that it can (for reasons I explained), so could you give me a reference to such a result? The Born rule is one thing, the projection postulate is something different.

JesseM said:
In section 4 of the Stanford article on Bohmian mechanics, they say:
Again, the Born rule is one thing, the projection postulate is something different. In the quote from Stanford encyclopedia (SE), I’d say, the Born rule is an operational principle. Furthermore, everything they say can be applied to the model I offer. Moreover, one can say that this model is a variant of Bohmian mechanics, which just happens to be local.

JesseM said:
But if it only reproduces unitary evolution, can it reproduce any of the empirical predictions about probabilities made by the standard pragmatic recipe which includes the Born rule? Or can it only predict complex amplitudes, which can't directly be compared to empirical probabilities without making use of the Born rule or some subtle many-worlds type argument?
As I said, your SE quote above applies to this model. If you believe the Bohmian mechanics can reproduce “any of the empirical predictions about probabilities”, then why should you have a problem with this model? If you don’t believe that, well, at least this model is in good company:-)

JesseM said:
One last thing: note that Bell's proof strictly speaking showed that QM was incompatible with local realism if we assume that part of the definition of "realism" is that each measurement has a unique outcome, rather than each experiment splitting the experimenter into multiple copies who observe different outcomes. See the simple toy model I provided in post #11 of this thread showing how, if two experimenters Alice and Bob split into multiple copies on measurement and the universe doesn't have to decide which copy of Alice is matched to which copy of Bob until there's been time for a signal to pass between them, then we can get a situation where a randomly selected Alice-Bob pair will see statistics that violate Bell inequalities in a purely local model. Likewise, see my post #8 on this thread for links to various many-worlds advocates arguing that the interpretation is a purely local model.
I see. I am just not sure such radical ideas as many worlds are really necessary. Furthermore, as I said in our previous discussion, I believe unitary evolution implies that no measurement is ever final, so, strictly speaking, there are never any definite outcomes, but they may seem definite, as transitions between different states of a macroscopic instrument can take an eternity.

In general, I would say our positions have a lot in common.
 
  • #583
With all due respect akhmeteli, to a layman like me, this looks like "beating around the bush"...?

The title of your paper is: "IS NO DRAMA QUANTUM THEORY POSSIBLE?"

I could be wrong, but I interpret "NO DRAMA QUANTUM THEORY" as no "spooky action at a distance", i.e. local realism. But then you say:
Is it possible to offer a "no drama" quantum theory? Something as simple (in principle) as classical electrodynamics - a local realistic theory described by a system of partial differential equations in 3+1 dimensions, but reproducing unitary evolution of quantum theory in the configuration space?

Of course, the Bell inequalities cannot be violated in such a theory. This author has little, if anything, new to say about the Bell theorem, and this article is not about the Bell theorem. However, this issue cannot be "swept under the carpet" and will be discussed in Section 5 using other people's arguments.
(My emphasis)

In Section 5, you state:
In Section 3, it was shown that a theory similar to quantum field theory (QFT) can be built that is basically equivalent to non-second-quantized scalar electrodynamics on the set of solutions of the latter. However, the local realistic theory violates the Bell inequalities, so this issue is discussed below using other people's arguments.
I take it for granted that this is a (calamitous) typo??
While the Bell inequalities cannot be violated in local realistic theories, there are some reasons to believe these inequalities cannot be violated either in experiments or in quantum theory. Indeed, there seems to be a consensus among experts that "a conclusive experiment falsifying in an absolutely uncontroversial way local realism is still missing".
(My emphasis)

To me this looks like a not very fair 'mixture' of: personal speculations + bogus statements + others' statements concerning the current status of EPR-Bell experiments, resulting in the stupendous conclusion that the Bell "inequalities cannot be violated either in experiments or in quantum theory" ...!:bugeye:?

And how on Earth is this 'compatible' with your initial statement:
This author has little, if anything, new to say about the Bell theorem, and this article is not about the Bell theorem.
?:confused:?

I trust in RUTA (Mark Stuckey). He’s a working PhD Professor of Physics:
RUTA said:
When I first entered the foundations community (1994), there were still a few conference presentations arguing that the statistical and/or experimental analyses of EPR-Bell experiments were flawed. Such talks have gone the way of the dinosaurs. Virtually everyone agrees that the EPR-Bell experiments and QM are legit, so we need a significant change in our worldview. There is a proper subset who believe this change will be related to the unification of QM and GR :-)
(My emphasis)

I looked at http://www.akhmeteli.org/ and there are no references at all...?

To me, this looks like "personal speculations", and not mainstream physics:
akhmeteli said:
... This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

And to be frank, your reasoning also looks dim. You are claiming a Local Realistic Model (LRM) that is not capable of violating Bell's Inequality, but that doesn’t matter, because – "these inequalities cannot be violated either in experiments or in quantum theory".

Exactly how do you derive "cannot" from your previous statements ...?:eek:?
 
Last edited by a moderator:
  • #584
akhmeteli said:
Not exactly. I suspect that this prediction may be wrong, but I cannot claim that it is wrong. Indeed, I do understand that the violations can be found in a loophole-free experiment, say, tomorrow.

If the prediction is wrong, then QM is wrong. That's the bold assertion I'm fishing for :-)


akhmeteli said:
Following other people, I am just saying (right now, not tomorrow) that 1) there has been no evidence of violations of the genuine Bell inequalities so far,

Given the preponderance of experimental evidence and the highly contrived nature by which loopholes must exist to explain away violations of Bell inequalities, the foundations community long ago abandoned any attempt to save local realism. But, you're right, there are no truly "loophole-free" experiments, so die-hard local realists can cling to hope.

akhmeteli said:
and that 2) mutually contradictory assumptions are required to derive the QM's prediction of the violation of Bell inequalities. Therefore, local realism has not been ruled out so far.

Are you talking about the measurement problem? That applies to all QM predictions, not just those that violate Bell inequalities.
 
  • #585
JesseM said:
There have been experiments that closed the detector efficiency loophole and experiments that closed the locality loophole, but no experiment that closed both loopholes simultaneously--still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.
It does not require a contrived model to spot the likely source of systematic error in the NIST experiment (if that's the one you have in mind as the efficient-detection experiment).
In this experiment a single measurement is performed for both particles, and in that way the detection photons are subject to interference.
As the authors of that paper say: "Also, the detection solid angle is large enough that Young's interference fringes, if present are averaged out."
First, this interference effect for photons scattered from two ions has been experimentally verified, so there should be no reason to say that there are no interference fringes ("negligible" might be a better word).
Second, the assumption that the interference effect of the detection photons is averaged out even when they are conditioned on different ion configurations is the same fair-sampling assumption as is used in the various photon experiments.
 
  • #586
RUTA said:
If the prediction is wrong, then QM is wrong. That's the bold assertion I'm fishing for :-)
If a prediction of some green alternative theory is found to be wrong, then the theory is wrong.
If a prediction of a well-established theory with proven usefulness is found to be wrong, then the domain of its applicability is established instead. :wink:
 
  • #587
P.S. akhmeteli
... these inequalities cannot be violated either in experiments or in quantum theory ...

It would be interesting to hear your view on this:
http://plato.stanford.edu/entries/bell-theorem/
...
The incompatibility of Local Realistic Theories with Quantum Mechanics permits adjudication by experiments, some of which are described here. Most of the dozens of experiments performed so far have favored Quantum Mechanics, but not decisively because of the “detection loophole” or the “communication loophole.” The latter has been nearly decisively blocked by a recent experiment and there is a good prospect for blocking the former. The refutation of the family of Local Realistic Theories would imply that certain peculiarities of Quantum Mechanics will remain part of our physical worldview: notably, the objective indefiniteness of properties, the indeterminacy of measurement results, and the tension between quantum nonlocality and the locality of Relativity Theory.


And while you’re at it: Could you please explain why not one (1) EPR-Bell experiment so far has clearly favored Local Realistic Theories? Not one (1).

And, if you have some extra spare time: Could you also explain how nature is providing the "detection loophole", which is regarded as the most 'severe'. I mean, if you look at this slide from Alain Aspect, it’s clear that this "magic LRM function" must be wobbling between "too much" and "too little" to provide the measured data. And last but not least, this "magic LRM function" must KNOW which photons are entangled or not?? (Looks like a very "spooky function" to me... :bugeye:)
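Incidentally, the "wobbling" can be made concrete with a toy model of my own devising (an illustration only, not anyone's actual proposal in this thread): each station's outcome is a deterministic local function of a shared hidden variable λ, and a particle goes "undetected" whenever λ falls too close to that station's decision boundary. On the full ensemble the CHSH quantity respects the local bound of 2, but on the detected coincidences alone it is grossly violated:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
lam = rng.uniform(0.0, 2.0 * np.pi, N)   # shared local hidden variable

delta = np.pi / 6            # half-width of the "no detection" band
eta = np.sin(delta)          # |cos| threshold implementing that band

def correlations(a, b):
    """Full-ensemble and detected-subensemble correlation E(a, b)."""
    A = np.sign(np.cos(a - lam))     # deterministic local outcome, station 1
    B = -np.sign(np.cos(b - lam))    # deterministic local outcome, station 2
    detected = (np.abs(np.cos(a - lam)) > eta) & (np.abs(np.cos(b - lam)) > eta)
    return np.mean(A * B), np.mean(A[detected] * B[detected])

# Standard CHSH settings: a, a' = 0, pi/2 and b, b' = pi/4, 3*pi/4.
pairs = [(0.0, np.pi / 4), (np.pi / 2, np.pi / 4),
         (np.pi / 2, 3 * np.pi / 4), (0.0, 3 * np.pi / 4)]
E_full, E_det = zip(*(correlations(a, b) for a, b in pairs))

S_full = abs(E_full[0] + E_full[1] + E_full[2] - E_full[3])
S_det = abs(E_det[0] + E_det[1] + E_det[2] - E_det[3])
print(S_full, S_det)   # ~2.0 on the full sample, ~4.0 on coincidences only
```

Note the price: each station detects only about two-thirds of its particles, and only about 42% of pairs give a coincidence. That is exactly why this sort of model is killed by a high-efficiency experiment, and the "knowledge" the detection rule needs (the local analyzer angle and λ) is all locally available, so no spooky signalling is involved.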

2wr1cgm.jpg
 
Last edited by a moderator:
  • #588
zonde said:
If a prediction of some green alternative theory is found to be wrong, then the theory is wrong.
If a prediction of a well-established theory with proven usefulness is found to be wrong, then the domain of its applicability is established instead. :wink:

So, QM is alright as long as you don't have entangled states? Restrictions on applicability are acceptable when a theory is superseded, e.g., Newtonian dynamics is ok when v << c and was superseded by SR to account for v ~ c, but no one has a theory superseding QM that gets rid of its entangled states. And, unlike v ~ c prior to SR, we have the means to create and explore entangled states, and all such experiments vindicate QM.

No, zonde, this is not a mere restriction on the applicability of QM.
 
  • #589
akhmeteli said:
I am happy that you don’t use this argument. But it does not look like a strawman to me. See, e.g., post 7 in this thread. Furthermore, Aspelmeyer and Zeilinger wrote as follows (see the reference in post 385 in this thread):
"But the ultimate test of Bell’s theorem is still missing: a single experiment that closes all the loopholes at once. It is very unlikely that such an experiment will disagree with the prediction of quantum mechanics, since this would imply that nature makes use of both the detection loophole in the Innsbruck experiment and of the locality loophole in the NIST experiment. Nevertheless, nature could be vicious, and such an experiment is desirable if we are to finally close the book on local realism."
While they are careful enough to avoid saying anything that is factually incorrect, they do use this argument. So this argument is indeed widely used.
Nowhere in that quote do they imply it is true in general that "if X is true when condition Y but not condition Z holds, and X is also true when condition Z but not condition Y holds, then we can assume X is true when both conditions Y and Z hold simultaneously". Rather, they refer to the specific conditions of the experiment when they say "It is very unlikely that such an experiment will disagree with the prediction of quantum mechanics, since this would imply that nature makes use of both the detection loophole in the Innsbruck experiment and of the locality loophole in the NIST experiment." It's quite possible (and I think likely) that the reason they consider it "unlikely" is that a theory making use of both loopholes would be very contrived-looking.
JesseM said:
Rather, the reason physicists think we can be pretty confident that Bell inequalities would be violated in an experiment where both loopholes were closed simultaneously has to do with specific considerations about the physical situation we're looking at, like the idea I already mentioned that it would require a very contrived local theory that would exploit both loopholes in just the right way that it would perfectly agree with QM in all experiments done to date.
akhmeteli said:
I believe I addressed this statement in my previous post and I am not sure I have anything to add.
You addressed it by suggesting your own model was non-contrived, but you didn't give a clear answer to my question about whether it can actually give statistical predictions about experiments so far like the Innsbruck experiment and the NIST experiment (or any experiments whatsoever, see below)--if it can't, then it obviously doesn't disprove the claim that any local realist theory consistent with experiments so far would have to be very contrived!
JesseM said:
Are you claiming that your model gives correct statistical predictions about the empirical results of all the Aspect-type experiments that have been done to date?

That seems like a slightly evasive answer, since you later say that you distinguish the unitary evolution aspect of QM/QFT from the projection postulate, and only claim that your model reproduces the unitary evolution, but isn't the projection postulate the only way to get actual predictions about empirical experiments from QM/QFT? Do you claim that your model can correctly predict actual empirical experimental results in the types of experiments that have been done to date, yes or no?
akhmeteli said:
I appreciate that my answer may look evasive, but I was not trying to sweep anything under the carpet, so maybe the question is not quite appropriate? Let me give you an example. Suppose I asked you whether the Schroedinger equation correctly describes all experiments performed so far. Yes or no? Strictly speaking, the correct answer is “no”, because the equation is not relativistic and does not describe the electronic spin. But perhaps you’ll agree that this “correct” answer is somewhat misleading, because this is a damn good equation :-) So if you want a yes or no answer, then no, the model I offer cannot describe all experiments performed so far, e.g., because it does not describe the electronic spin, and I said so in my previous post. However, this is quite a decent model, as it includes the entire scalar electrodynamics, a well-established theory.
OK, but can your model actually give "correct predictions about statistical results" for any actual experiments, or does it only reproduce the unitary evolution? If it can't predict actual real-valued statistics that are measured empirically, as opposed to complex amplitudes, then it isn't a local realist model that can explain any existing experiments (you may be able to derive probabilities from amplitudes using many-worlds type arguments, but as I said, part of the meaning of 'local realism' is that each measurement yields a unique outcome).
akhmeteli said:
According to the projection postulate, after a measurement, the system is in an eigenstate, so another measurement will produce the same result (say, if the relevant operator commutes with the Hamiltonian). According to unitary evolution, though, a measurement cannot turn a superposition of states into a mixture, so there is a probability that the next measurement will return a different result.
Suppose we do a Wigner's friend type thought-experiment where we imagine a small quantum system that's first measured by an experimenter in an isolated box, and from our point of view this just causes the experimenter to become entangled with the system rather than any collapse occurring. Then we open the box and measure both the system and the record of the previous measurement taken by the experimenter who was inside, and we model this second measurement as collapsing the wavefunction. If the two measurements on the small system were of a type that according to the projection postulate should yield a time-independent eigenstate, are you claiming that in this situation where we model the first measurement as just creating entanglement rather than collapsing the wavefunction, there is some nonzero possibility that the second measurement will find that the record of the first measurement will be of a different state than the one we find on the second measurement? I'm not sure but I don't think that would be the case--even if we assume unitary evolution, as long as there is some record of previous measurements then the statistics seen when comparing the records to the current measurement should be the same as the statistics you'd have if you assumed the earlier measurements (the ones which resulted in the records) collapsed the wavefunction of the system being measured according to the projection postulate.

In any case, the projection postulate does not actually specify that each "measurement" must collapse the wavefunction onto an eigenstate in cases where you're performing a sequence of different measurements. The "pragmatic recipe" is entirely compatible with the notion that in a problem like this, the projection postulate should only be used once at the very end of the complete experiment, when you make a measurement of all the records that resulted from earlier measurements.
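The record-consistency claim above can be checked directly in the smallest case: one system qubit plus a one-qubit "record", with the first measurement modeled as a CNOT-style entangling interaction instead of a collapse (a toy illustration of my own, not anyone's published model):

```python
import numpy as np

# System qubit in (|0> + |1>)/sqrt(2); the "record" qubit starts in |0>.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
joint = np.kron(psi, np.array([1.0, 0.0]))    # basis order |system, record>

# First "measurement" modeled unitarily: a CNOT copies the system's basis
# value into the record, with no collapse anywhere.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = cnot @ joint                          # -> (|00> + |11>)/sqrt(2)

# Projection applied only once, at the very end, to BOTH system and record:
# Born probabilities for the joint outcomes |00>, |01>, |10>, |11>.
probs = np.abs(joint) ** 2                    # ≈ [0.5, 0.0, 0.0, 0.5]
print(probs)
```

The mismatch outcomes |01> and |10> have probability exactly zero: the record always agrees with the later measurement, giving the same statistics as if the first measurement had collapsed the state.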
akhmeteli said:
JesseM, again, it looks like we pretty much agree. I could agree, say, that the difference between unitary evolution and the projection postulate can be explained by environmental decoherence, but let us agree first what we are talking about. This thread is not about quantum theory being good or bad, everybody agrees that it is extremely good. The question of this thread is whether local realism has been ruled out or not.
But there are two aspects of this question--the first is whether local realism can be ruled out given experiments done so far, the second is whether local realism is consistent with the statistics predicted theoretically by QM. Even if you don't use the projection postulate to generate predictions about statistics, you need some real-valued probabilities for different outcomes; you can't use complex amplitudes alone, since those are never directly measured empirically. And if we understand local realism to include the condition that each measurement has a unique outcome, then it is impossible to get QM's Bell-inequality-violating statistics from a local realist model.
akhmeteli said:
You see, you are talking about something “pragmatic”, but the question of this thread is not exactly pragmatic. As I said earlier in this thread, Nature cannot be “approximately local” or “approximately nonlocal”, it is either precisely local or precisely nonlocal.
No idea where you got the idea that I would be talking about "approximate" locality from anything in my posts. I was just talking about QM being a "pragmatic" recipe for generating statistical predictions, I didn't say that Bell's theorem and the definition of local realism were approximate or pragmatic. Remember, Bell's theorem is about any black-box experiment where two experimenters at a spacelike separation each have a random choice of detector setting, and each measurement must yield one of two binary results--nothing about the proof specifically assumes they are measuring anything "quantum", they might be choosing to ask one of three questions with yes-or-no answers to a messenger sent to them or something. Bell's theorem proves that according to local realism, any experiment of this type must obey some Bell inequalities. So then if you want to show that QM is incompatible with local realism, the only aspect of QM you should be interested in is its statistical predictions about some experiment of this type, all other theoretical aspects of QM are completely irrelevant to you. Unless you claim that the "pragmatic recipe" I described would actually make different statistical predictions about this type of experiment than some other interpretation of QM like Bohmian mechanics or the many-worlds-interpretation, then it's pointless to quibble with the pragmatic recipe in this context.
akhmeteli said:
Again, I agree, but, as I noted in our previous discussion (https://www.physicsforums.com/showpost.php?p=1706652&postcount=78), you may just complement unitary evolution with the Born rule as an operational principle.
But that won't produce a local realist theory where each measurement has a unique outcome. Suppose you have two separate computers: one modeling the amplitudes for various measurements which could be performed in the local region of one simulated experimenter, "Alice," the other modeling the amplitudes for measurements in the local region of another simulated experimenter, "Bob," with the understanding that these amplitudes concern measurements on a pair of entangled particles sent to Alice and Bob (who make their measurements at a spacelike separation). Suppose also that you want to simulate Alice and Bob making actual measurements, that each measurement must yield a unique outcome (i.e. Alice and Bob don't each split into multiple copies as in the toy model I linked to at the end of my last post), and that the computers running the simulation are cut off from communicating with one another, so neither computer knows in advance what measurement will be performed by the simulated experimenter on the other computer. Then there is no way such a simulation can yield the same Bell-inequality-violating statistics predicted by QM, even if you program the Born rule into each computer to convert amplitudes into probabilities which are used to generate the simulated outcome of each measurement. Do you disagree that there is no way to get the correct statistics predicted by any interpretation of QM in a setup like this, where the computers simulating each experimenter are cut off from communicating? (This corresponds to the locality condition that events in regions with a spacelike separation can have no causal effect on one another.)
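(A short sketch of why the isolated-computers setup is bounded: any shared hidden variable distributed to both computers in advance can only mix over deterministic local strategies, so it suffices to enumerate every such strategy and check the CHSH combination. This is a generic illustration I'm adding, not code from any actual simulation.)

```python
import itertools

settings = (0, 1)
outcomes = (-1, 1)
max_S = 0.0
# Enumerate every deterministic local strategy: Alice's two possible outputs
# (a0, a1) and Bob's (b0, b1) are fixed before the computers are separated,
# and each output depends only on the local setting -- no communication.
for a0, a1, b0, b1 in itertools.product(outcomes, repeat=4):
    E = {(x, y): [a0, a1][x] * [b0, b1][y] for x in settings for y in settings}
    S = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]
    max_S = max(max_S, abs(S))
# Mixing strategies with a shared random variable is a convex combination,
# so it cannot exceed the deterministic maximum.
print(max_S)  # 2 -- the CHSH bound for any local realist setup
```

QM's prediction for suitably chosen measurements on an entangled pair reaches 2*sqrt(2), which is why no such pair of isolated computers can reproduce the quantum statistics.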
akhmeteli said:
Again, I agree that quantum theory is of great practical value, but we are not discussing practicality. Again, it seems we both agree that unitary evolution is always correct. However, it is worth mentioning that you are both telling me that you favor many worlds interpretation(s) and that there is no “complete agreement” on how “any many-worlds type interpretation can give concrete predictions in the form of probabilities”. This means that “many-worlds” people can actually live without the projection postulate. They may “all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate”, but, strictly speaking, they are just unable to derive these probabilities.
The problem is that there is no agreement on how the many-worlds interpretation can be used to derive any probabilities. If we're not convinced it can do so, then we might not view it as a full "interpretation" of QM yet; rather, it'd be more like an incomplete idea for how one might go about constructing an interpretation of QM in which measurement just causes the measuring system to become entangled with the system being measured.
akhmeteli said:
And it is good for them that they cannot derive those probabilities, because if they derived them from unitary evolution, that would mean that they made a mistake somewhere, as you cannot derive from unitary evolution something that directly contradicts it – the projection postulate.
See my comments above about the Wigner's friend type thought experiment. I am not convinced that you can actually find a situation where a series of measurements are made that each yield records of the result, such that using the projection postulate for each measurement gives different statistical predictions than if we just treat this as a giant entangled system which evolves in a unitary way, and then at the very end use the Born rule to find statistical expectations for the state of all the records of prior measurements. And as I said there as well, the projection postulate does not actually specify whether in a situation like this you should treat each successive measurement as collapsing the wavefunction onto an eigenstate or whether you should save the "projection" for the very last measurement.
JesseM said:
I don't think Demystifier was actually saying that there'd be situations where Bohmian mechanics would give different predictions about empirical results than the normal QM recipe involving the Born rule; I think he was just saying that in Bohmian mechanics the collapse is not "real" (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield.
akhmeteli said:
There is no need to guess what he said, as I gave you the reference to what he actually said.
I wasn't guessing what he said, I was guessing what he meant by what he said. What he said was only the very short statement "Yes, it is an approximation. However, due to decoherence, this is an extremely good approximation. Essentially, this approximation is as good as the second law of thermodynamics is a good approximation." I think this statement is compatible with my interpretation of what he may have meant, namely "in Bohmian mechanics the collapse is not 'real' (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield." Nowhere did he say that using the projection postulate will yield different statistical predictions about observed results than those predicted by Bohmian mechanics.
akhmeteli said:
The Born rule is one thing, the projection postulate is something different.
I think they are different only if you assume multiple successive measurements, understand "the projection postulate" to imply that each measurement collapses the wavefunction onto an eigenstate, and assume that for some of the measurements the records of the results are "erased" so that it cannot be known later what the earlier result was. If you are dealing with a situation where none of the measurement records are erased, I'm pretty sure that the statistics for the measurement results you get using the projection postulate will be exactly the same as the statistics you get if you model the whole thing as a giant entangled system and then use the Born rule at the very end to find the probabilities of different combinations of recorded measurement results. And once again, the "projection postulate" does not precisely define when projection should occur anyway; you are free to interpret it to mean that only the final measurement of the records at the end of the entire experiment actually collapses the wavefunction.
 
  • #590
(continued from previous post)
JesseM said:
But if it only reproduces unitary evolution, can it reproduce any of the empirical predictions about probabilities made by the standard pragmatic recipe which includes the Born rule? Or can it only predict complex amplitudes, which can't directly be compared to empirical probabilities without making use of the Born rule or some subtle many-worlds type argument?
akhmeteli said:
As I said, your SE quote above applies to this model. If you believe Bohmian mechanics can reproduce “any of the empirical predictions about probabilities”, then why should you have a problem with this model?
I think you misunderstood what I meant by "any" above, I wasn't asking if your model could reproduce any arbitrary prediction made by the "standard pragmatic recipe" (i.e. whether it would agree with the standard pragmatic recipe in every possible case, as I think Bohmian mechanics does). Rather, I was using "any" in the same sense as it's used in the question priests used to ask at weddings, "If any person can show just cause why they may not be joined together, let them speak now or forever hold their peace"--in other words, I was asking if there was even a single instance of a case where your model reproduces the probabilistic predictions of standard QM, or whether your model only deals with complex amplitudes that result from unitary evolution. The reason I asked this is that the statement of yours I was responding to was rather ambiguous on this point:
I don't think it differs in this respect, if you include the standard measurement theory in it. But I did not say the LRM reproduces both unitary evolution and the measurement theory of this QFT, it just reproduces its unitary evolution. As unitary evolution and measurement theory are mutually contradictory, I don't think the failure to reproduce the measurement theory is a weak point of the LRM.
If your model does predict actual measurement results, then if the model was applied to an experiment intended to test some Bell inequality, would it in fact predict an apparent violation of the inequalities both in experiments where the locality loophole was closed but not the detector-efficiency loophole, and in experiments where the efficiency loophole was closed but not the locality loophole? I think you said your model would not predict violations of Bell inequalities in experiments with all loopholes closed--would you agree that if we model such experiments using unitary evolution plus the Born rule (perhaps applied to the records at the very end of the full experiment, after many trials have been performed, so we don't have to worry about whether applying the Born rule means we have to invoke the projection postulate), then we will predict violations of Bell inequalities even in loophole-free experiments? Likewise, would you agree that Bohmian mechanics also predicts violations in loophole-free experiments, and that many-worlds advocates would expect the same prediction even if there is disagreement on how to derive it?
 
  • #591
zonde said:
If a prediction of some green alternate theory is found to be wrong, then the theory is wrong.

If a prediction of a well-established theory with proven usefulness is found to be wrong, then the domain of its applicability is established instead. :wink:

I agree with this. Technically, theories should not be seen as "Proven" or "Wrong" or whatever; rather as "More Useful" or "Useless". And the scope/domain of a theory may need to be modified from time to time as new information arises. So a theory could remain useful in a narrowed domain if new information is acquired. Newtonian gravity after GR is an example. Still quite useful. I would definitely not call Newtonian gravity a wrong theory.
 
  • #592
JesseM said:
... OK, but can your model actually give "correct predictions about statistical results" for any actual experiments, or does it only reproduce the unitary evolution?
akhmeteli said:
... No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.


This is how I get it, and I apologize in advance if it’s wrong:

According to akhmeteli, the LRM does not violate the Bell inequalities, but that doesn’t matter much, because according to akhmeteli, there is no experimental evidence of violations of Bell's inequalities so far, and Bell's theorem is faulty in using two mutually contradicting postulates of QM.​

If I’m right, it doesn’t impress me... all he’s saying is that EPR-Bell experiments & Bell's theorem are wrong, without delivering any proof of that claim.
 
  • #593
DrChinese said:
I agree with this. Technically, theories should not be seen as "Proven" or "Wrong" or whatever; rather as "More Useful" or "Useless". And the scope/domain of a theory may need to be modified from time to time as new information arises. So a theory could remain useful in a narrowed domain if new information is acquired. Newtonian gravity after GR is an example. Still quite useful. I would definitely not call Newtonian gravity a wrong theory.

Whether you choose to call Newtonian gravity "wrong," given that it has been superseded by GR, is semantics. The claim that QM should be restricted to use with non-entangled states is not at all consistent with this type of "wrong," b/c there is no theory superseding QM that clearly shows why QM's treatment of entangled states is wrong -- no semantics here: GR says clearly that Newtonian gravity fails in certain regimes, and tests of this claim vindicate GR. We have no such theory, claims, or vindication against QM's predictions for entangled states. Quite the contrary, we have many experiments consistent with QM's predictions for entangled states. Thus, there is a huge burden of proof on anyone claiming QM's prediction of Bell inequality violations is wrong and, in my opinion, this burden is nowhere near being fulfilled by the proponents of local realism.
 
  • #594
RUTA & DrC, this is interesting. If we assume that one day all EPR-Bell loopholes are closed simultaneously, and we all (maybe even ThomasT :wink:) agree that nonlocality and/or nonseparability is a fact; would that mean that Quantum Mechanics has proven Relativity Theory wrong (or slightly "useless")?
 
  • #595
RUTA said:
Whether you choose to call Newtonian gravity "wrong," given that it has been superseded by GR, is semantics. The claim that QM should be restricted to use with non-entangled states is not at all consistent with this type of "wrong," b/c there is no theory superseding QM that clearly shows why QM's treatment of entangled states is wrong -- no semantics here: GR says clearly that Newtonian gravity fails in certain regimes, and tests of this claim vindicate GR. We have no such theory, claims, or vindication against QM's predictions for entangled states. Quite the contrary, we have many experiments consistent with QM's predictions for entangled states. Thus, there is a huge burden of proof on anyone claiming QM's prediction of Bell inequality violations is wrong and, in my opinion, this burden is nowhere near being fulfilled by the proponents of local realism.

I agree with what you are saying, and note that I missed a big new chunk of the thread regarding akhmeteli's claims. So my bad for chiming in irrelevantly, as I do sometimes. akhmeteli's "suspicion" that QM makes a wrong prediction is strange, given that every experiment performed to date is clearly within the range predicted by QM (but not by any prior LR model).

akhmeteli: My big question for your model is a familiar one. If it is local realistic, can you tell me what the correct statistical predictions (if QM is wrong in this regard) are for coincidences at a, b, c = 0, 120, 240 degrees? Can you supply a dataset which is indicative of the rules of your model?

Alice:
a b c
+ - +
- + +
- - +
+ - -

... or whatever you imagine a batch of Alices to be, independent of Bob. A local realistic model should be able to provide this. If not, it does not fulfill the claim of being realistic. And please, do not point me to your paper as proof. The proof is in the pudding, and I am looking to taste some.
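(A brute-force sketch of why any such dataset is constrained, added for illustration only: for the |HH> + |VV> state, QM's coincidence rate at settings 120 degrees apart is cos^2(120) = 1/4, while every possible row of definite +/- values for a, b, c must match on at least one of the three unequal-setting pairs, so the average match rate over those pairs is at least 1/3.)

```python
import itertools
import math

# QM prediction for matched outcomes at different settings on |HH> + |VV>:
qm_match = math.cos(math.radians(120)) ** 2   # = 0.25

# A local realistic dataset row assigns a definite +/- to each of a, b, c.
# Check the lowest match rate any row can achieve when Alice and Bob
# independently pick *different* settings:
pairs = [(0, 1), (0, 2), (1, 2)]
min_match = min(
    sum(row[i] == row[j] for i, j in pairs) / len(pairs)
    for row in itertools.product('+-', repeat=3)
)
print(min_match)  # 1/3: with three binary values, at least two must agree
print(qm_match)   # 0.25: below what any mixture of rows can reach
```

Since every row gives a match rate of at least 1/3, no mixture of rows can average down to QM's 1/4, which is exactly why the requested dataset cannot reproduce the quantum predictions.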
 