Is an experiment planned to discern determinism and randomness in QM

  • #1
Prishon
I can remember reading something about a future experiment which allegedly could decide if there is an underlying deterministic layer governing quantum phenomena or if pure, empty chance rules supreme (which I can't imagine).

It had something to do with arrival times but I can't imagine how that can be used.

Anyone?
 
  • #2
Was that the Vacuum Anomaly Gauge Uncertainty Experiment?
 
  • Haha
Likes Vanadium 50 and Dale
  • #3
PeroK said:
Was that the Vacuum Anomaly Gauge Uncertainty Experiment?
Can you repeat that again? :)
I have never heard of it but will surely look into it!
 
  • #4
It's only an acrostic!
 
  • #5
PeroK said:
It's only an acrostic!
I had to look up that one...Acrostically...
 
  • #6
PeroK said:
It's only an acrostic!
Ah! VAGUE...
 
  • Like
Likes PeroK
  • #7
Prishon said:
I can remember reading something about a future experiment which allegedly could decide if there is an underlying deterministic layer governing quantum phenomena or if pure, empty chance rules supreme (which I can't imagine).

It had something to do with arrival times but I can't imagine how that can be used.

Anyone?
It could be arrival times computed from deterministic Bohmian mechanics, without taking into account the measuring apparatus. See my paper https://arxiv.org/abs/2010.07575, Sec. 3.5 and references [54-56] cited therein.
 
  • Like
  • Informative
Likes Delta2, DennisN, vanhees71 and 2 others
  • #9
Prishon said:
I can remember reading something about a future experiment which allegedly could decide if there is an underlying deterministic layer governing quantum phenomena or if pure, empty chance rules supreme (which I can't imagine).

Such an experiment has already been done, it's the EPR-Bohm experiment (using spin instead of position/momentum). The experiment shows that similar measurements of entangled pairs are always perfectly correlated.

Assuming locality (which is a very well established principle) this experiment proves that the measurement results cannot be random. They have to be predetermined.

So, we have a very good reason to believe that there is an "underlying deterministic layer governing quantum phenomena"
 
  • Skeptical
Likes Motore and PeroK
  • #10
The complete opposite is true: The successful and highly significant violation of Bell's inequality shows that the measurement results are not predetermined but truly random. So far there's not the slightest hint at an "underlying deterministic layer governing quantum phenomena".
 
  • Like
Likes gentzen and DrChinese
  • #11
The opposite of both is true. One argument (AndreiB's) says that the assumption of locality implies determinism; another argument (vanhees71's) says that the assumption of locality implies non-determinism. Both arguments are logically valid, yet we have a contradiction. This implies that the assumption of locality is wrong, from which we cannot conclude anything about determinism. :-p
 
  • #13
vanhees71 said:
WHAT?!?
Have you never seen a claim that the Bell theorem, combined with EPR, is a proof of nonlocality by a reductio ad absurdum?
 
  • Haha
Likes vanhees71
  • #14
AndreiB said:
Such an experiment has already been done, it's the EPR-Bohm experiment (using spin instead of position/momentum). The experiment shows that similar measurements of entangled pairs are always perfectly correlated.

Assuming locality (which is a very well established principle) this experiment proves that the measurement results cannot be random. They have to be predetermined.

So, we have a very good reason to believe that there is an "underlying deterministic layer governing quantum phenomena"
This is somewhat correct, but also misleading. The EPR-Bohm thought experiment (1951) could have supported the EPR argument (1935, asserting predetermination of measurement results) IF it had actually been run prior to Bell (1964). But that never happened. The earliest experiments were done circa 1972.

Instead, Bell brought an important nuance to the situation in which he assumed the EPR-Bohm thought experiment would succeed. Of course, Bell showed that if you assume locality: you CANNOT have predetermination of measurement results, and instead you would have measurement contextuality. So the first experiments tested this, and supported the quantum predictions (in the process excluding local deterministic explanations). Later experiments strongly improved on the early experiments.

NOTE: Under either regime, there is still no hint of true determinism - unless you assume that the EPR predetermination (which is random, without any hint of a precursor cause as to outcomes) is the same thing as determinism. It isn't. There could still be determinism (as the Bohmian interpretation demonstrates) even with contextuality.
 
  • Like
Likes Delta2, Demystifier, PeroK and 1 other person
  • #15
BTW: To lift the confusion about Bell with spin/polarization, read the corresponding chapter in either Sakurai or Weinberg (Lectures on quantum mechanics). There (particularly in Weinberg's book) it's explained in a no-nonsense way, without all the philosophical erudition that Einstein himself already lamented concerning the famous (or rather infamous ;-)) EPR paper.
 
  • Like
Likes PeroK
  • #16
vanhees71 said:
The complete opposite is true: The successful and highly significant violation of Bell's inequality shows that the measurement results are not predetermined but truly random.

Let's stick with the EPR-Bohm experiment first:

At two distant locations (A and B) you can measure the X-spin of an entangled pair. QM predicts that if you measure the X-spin at A, the B particle "collapses" to an X-spin eigenstate. Let's say that we get UP at A. After the A measurement, B is DOWN.

Since the measurements can be arranged to be space-like separated, and assuming locality, we can conclude that the A measurement did not disturb B. So the state of B after the A measurement should be the same as before the A measurement. So we can conclude that B was DOWN even before the A measurement took place. This also implies that A was UP even before the A measurement took place.

So, we have proven, with no other assumption than locality, that both measurements were predetermined. Local non-determinism has been proven to be impossible.
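
A minimal numerical sketch of the argument above (an illustrative toy of my own, not drawn from this thread): if every pair carries predetermined, opposite X-spin values, the same-axis measurements are perfectly anti-correlated while each side alone looks random.

```python
import random

# Toy model of the same-axis EPR-Bohm case: each pair carries a
# predetermined X-spin value, and the two particles are always opposite.
def make_pair():
    a = random.choice([+1, -1])  # predetermined outcome at A
    return a, -a                 # B carries the opposite predetermined value

pairs = [make_pair() for _ in range(10_000)]

# Same-axis measurements are perfectly anti-correlated...
assert all(a == -b for a, b in pairs)

# ...while each side alone looks like a fair coin:
mean_a = sum(a for a, _ in pairs) / len(pairs)
print(f"mean outcome at A: {mean_a:+.3f}")  # close to 0
```

This only reproduces the redundant, same-axis case; it says nothing about non-commuting settings, which is where Bell's theorem enters.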

Bell's theorem does not exclude all deterministic theories. Bell's theorem requires that the hidden variables should be independent of detectors' settings. Local deterministic models that violate that assumption have been proposed, like:

Quantum mechanics from classical statistics
C. Wetterich
Annals Phys. 325 (2010) 852
DOI: 10.1016/j.aop.2009.12.006

Abstract:

"Quantum mechanics can emerge from classical statistics. A typical quantum system describes an isolated subsystem of a classical statistical ensemble with infinitely many classical states. The state of this subsystem can be characterized by only a few probabilistic observables. Their expectation values define a density matrix if they obey a "purity constraint". Then all the usual laws of quantum mechanics follow, including Heisenberg's uncertainty relation, entanglement and a violation of Bell's inequalities. No concepts beyond classical statistics are needed for quantum physics - the differences are only apparent and result from the particularities of those classical statistical systems which admit a quantum mechanical description. Born's rule for quantum mechanical probabilities follows from the probability concept for a classical statistical ensemble. In particular, we show how the non-commuting properties of quantum operators are associated to the use of conditional probabilities within the classical system, and how a unitary time evolution reflects the isolation of the subsystem. As an illustration, we discuss a classical statistical implementation of a quantum computer."

Fast Vacuum Fluctuations and the Emergence of Quantum Mechanics
Gerard 't Hooft
Found.Phys. 51 (2021) 3, 63
DOI: 10.1007/s10701-021-00464-7

"Fast moving classical variables can generate quantum mechanical behavior. We demonstrate how this can happen in a model. The key point is that in classically (ontologically) evolving systems one can still define a conserved quantum energy. For the fast variables, the energy levels are far separated, such that one may assume these variables to stay in their ground state. This forces them to be entangled, so that, consequently, the slow variables are entangled as well. The fast variables could be the vacuum fluctuations caused by unknown super heavy particles. The emerging quantum effects in the light particles are expressed by a Hamiltonian that can have almost any form. The entire system is ontological, and yet allows one to generate interference effects in computer models. This seemed to lead to an inexplicable paradox, which is now resolved: exactly what happens in our models if we run a quantum interference experiment in a classical computer is explained. The restriction that very fast variables stay predominantly in their ground state appears to be due to smearing of the physical states in the time direction, preventing their direct detection. Discussions are added of the emergence of quantum mechanics, and the ontology of an EPR/Bell Gedanken experiment."

vanhees71 said:
So far there's not the slightest hint at an "underlying deterministic layer governing quantum phenomena".

Given that only determinism can provide a local description of the EPR experiment, and given the very high prior probability one should ascribe to locality, we can conclude that we have very good reasons to believe in the existence of that deterministic layer.
 
  • Skeptical
Likes PeroK
  • #17
DrChinese said:
This is somewhat correct, but also misleading. The EPR-Bohm thought experiment (1951) could have supported the EPR argument (1935, asserting predetermination of measurement results) IF it had actually been run prior to Bell (1964). But that never happened. The earliest experiments were done circa 1972.

What exactly is misleading about what I said? The time when an experiment is performed has nothing to do with the validity of the argument.

DrChinese said:
Instead, Bell brought an important nuance to the situation in which he assumed the EPR-Bohm thought experiment would succeed.

Bell only focused on hidden variables theories since they were the last hope for locality. Whatever you think Bell proved, local non-determinism cannot be resurrected.

DrChinese said:
Of course, Bell showed that if you assume locality: you CANNOT have predetermination of measurement results, and instead you would have measurement contextuality.

You seem to imply that contextuality contradicts predetermination of measurement results. Why?

DrChinese said:
Under either regime, there is still no hint of true determinism - unless you assume that the EPR predetermination (which is random, without any hint of a precursor cause as to outcomes) is the same thing as determinism.

Predetermination is the same thing as determinism (in regard to the measured properties, of course). There might be non-deterministic phenomena in nature, but quantum measurements are not among them.

Also, "predetermination" implies a cause. The "hint" is that you can predict the B measurement without disturbing B. If B were genuinely random you could not predict it.
 
  • Skeptical
Likes PeroK
  • #18
vanhees71 said:
To lift the confusion about Bell with spin/polarization, read the corresponding chapter in either Sakurai or Weinberg (Lectures on quantum mechanics). There (particularly in Weinberg's book) it's explained in a no-nonsense way
Weinberg says clearly that hidden variables ##\lambda## (which turn out to contradict QM and experiments) are local and random. So the Bell theorem, according to Weinberg, rules out local random hidden variables. So if you think that Weinberg lifts the confusion, why do you still think that the Bell theorem rules out determinism?
 
  • #19
The assumption is that all observables have determined values, and that it is only due to our lack of knowledge of the hidden variables that we have probabilities for the outcomes of measurements. The result, Bell's inequality, contradicts quantum mechanics for certain entangled states and certain measurements. Since relativistic local QFT is local, and the predictions of Bell's class of HV theories turn out to be wrong while the predictions of QT are right, the only conclusion I can draw is that I have to give up determinism, because locality is fulfilled within both theories.
 
  • #20
vanhees71 said:
BTW: To lift the confusion about Bell with spin/polarization, read the corresponding chapter in either Sakurai or Weinberg (Lectures on quantum mechanics). There (particularly in Weinberg's book) it's explained in a no-nonsense way, without all the philosophical erudition that Einstein himself already lamented concerning the famous (or rather infamous ;-)) EPR paper.

I don't have Weinberg's book, but I looked in Sakurai's "Modern Quantum Mechanics". At page 230 we find:

"The fact that the quantum-mechanical predictions have been verified does not mean that the whole subject is now a triviality. Despite the experimental verdict we may still feel psychologically uncomfortable about many aspects of measurements of this kind. Consider in particular the following point: Right after observer A performs a measurement on particle 1, how does particle 2 – which may, in principle, be many light years away from particle 1 – get to “know” how to orient its spin so that the remarkable correlations apparent in Table 3.1 are realized?"

And then we read:

"We conclude this section by showing that despite these peculiarities, we cannot use spin-correlation measurements to transmit any useful information between two macroscopically separated points."

So, Sakurai does not provide any explanation for the EPR perfect anti-correlations. He does not say whether information has been transferred from A to B or not, only that "useful" information cannot be transferred.

Needless to say this is a non-explanation. Relativity does not distinguish between useful and non-useful information. It says that if A and B are space-like A cannot cause B. If A causes B you have a conflict with relativity regardless of how useful that communication may appear to you.

Either a string like 0010101101 was instantly sent from A to B or it wasn't. If such a transfer occurred, relativity must be wrong. If it didn't, that string had to be at B before the measurement (hidden variables). There is no way around that and inventing new "unnecessary adjectives" (as Tim Maudlin would say) like "useful" and "non-useful" is just a desperate attempt to avoid an unpleasant logical conclusion.

Hopefully, Weinberg's explanation is better. Could you post the relevant part here so we could see it?
 
  • Like
  • Skeptical
Likes morrobay and PeroK
  • #21
vanhees71 said:
The assumption is that all observables have determined values,
The assumption by whom? Most people who worked on Bell-like theorems, including Bell himself, did not assume by "hidden variables" that all observables have deterministically determined values.

vanhees71 said:
the only conclusion I can draw is that I have to give up determinism,
That's because you have a limited understanding of the notion of hidden variables.

vanhees71 said:
because locality is fulfilled within both theories.
Those are two different kinds of "locality", so it's possible that one locality (that one typically associates with QFT) is true, while the other (Bell locality) is false.
 
  • #22
I don't know how exactly Bell phrased it, but the calculation leading to Bell's inequalities within his "local realistic hidden-variable theory" clearly shows that he does.

Can you explain in clear mathematical terms, what kind of "locality" Bell has in mind? I thought he meant locality in the sense that there are no space-like causally connected events.
 
  • #23
vanhees71 said:
Can you explain in clear mathematical terms, what kind of "locality" Bell has in mind?
Clear to whom? I can explain it in terms that would be clear to some experts in quantum foundations, but not in a form that would be clear to all of them.

vanhees71 said:
I thought he meant locality in the sense that there are no space-like causally connected events.
Loosely speaking, yes. But the devil is in the details, because it depends on what one means by "causally connected" and by "events".
 
  • #24
Ok, then tell me what's wrong with Weinberg's statement of Bell's model:
[attached image: weinberg-bell.png, Weinberg's statement of Bell's hidden-variable model]

He discusses the Bohm version of EPR with spin 1/2, and (12.1.2) is simply the spin-singlet state of the two spins.

So here it's clearly said: The spin component in a direction ##\hat{a}## (it's a unit vector, not an operator here) is a "definite function" ##\hbar/2 S(\hat{a},\lambda)##, i.e., it's determined when ##\hat{a}## and the HV(s) ##\lambda## are determined. I think that's what's behind the word "realistic".

Locality is stated even more weakly than I said: it's simply that the parameter ##\lambda## "is fixed before the two particles separate from each other". In other words, locality means that the system was prepared by a local event (e.g., the decay of a particle with 0 total angular momentum into two spin-1/2 particles).

The rest of the section is a very straightforward and easily understandable derivation of Bell's inequality, and a short citation of the early experiments by Aspect et al. confirming QT against this version of Bell's "local hidden-variable theory". Weinberg doesn't conclude anything more from this than simply stating that QT is confirmed and local hidden-variable models disproven, nothing else.

My simple conclusion is that one has to give up determinism only, because in QT all that's different is precisely determinism: even though the two-particle system is prepared in a spin-singlet state (which can be understood as "local" in the same sense as described above, at least within local relativistic QFT), the single-particle spin components in any direction are clearly not in any way determined before they are measured. So indeterminism, or the "irreducible randomness" of QT, is the difference between the so-formulated local HV theories and QT. The experimentally confirmed correlations due to entanglement are imposed on the particles by their preparation in the very beginning and not through mutual influence of the (again local!) measurements on the single particles.
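
As a hedged numerical aside (the specific hidden-variable rule below is an illustrative choice of mine, not Weinberg's): one can compare the singlet prediction E(a,b) = -cos(a-b) with a Bell-type local model S(a, lambda) = sign(cos(a - lambda)) via the CHSH combination.

```python
import numpy as np

def E_qm(a, b):
    # Quantum singlet-state correlation for measurement angles a, b.
    return -np.cos(a - b)

def E_hv(a, b, n=200_000):
    # Monte Carlo estimate for an illustrative local HV model:
    # each side outputs sign(cos(setting - lambda)), anti-correlated.
    rng = np.random.default_rng(0)
    lam = rng.uniform(0, 2 * np.pi, n)  # shared hidden variable
    s = lambda angle: np.sign(np.cos(angle - lam))
    return np.mean(s(a) * -s(b))

def chsh(E):
    # CHSH combination at the standard optimal angles.
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(round(chsh(E_qm), 3))  # 2.828, i.e. 2*sqrt(2): violates the bound of 2
print(round(chsh(E_hv), 2))  # 2.0: this (and any) local HV model stays <= 2
```

The HV model reproduces the same-axis anti-correlation exactly but saturates the CHSH bound of 2, while the quantum prediction reaches 2*sqrt(2); that gap is the content of the experiments cited above.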
 
  • Like
Likes DrChinese and PeroK
  • #25
vanhees71 said:
Ok, then tell me what's wrong with Weinberg's statement of Bell's model:
Nothing is wrong with his statement.

vanhees71 said:
So here it's clearly said: The spin component in a direction ##\hat{a}## (it's a unit vector, not an operator here) is a "definite function" ##\hbar/2 S(\hat{a},\lambda)##, i.e., it's determined when ##\hat{a}## and the HV(s) ##\lambda## are determined.
Yes, it's clear.

vanhees71 said:
I think that's what's behind the word "realistic".
No, "realistic" means something else.

vanhees71 said:
Weinberg doesn't conclude anything more from this than simply stating that QT is confirmed and local hidden-variable models disproven, nothing else.
Yes, Weinberg is careful to not say that it proves something that it doesn't.

vanhees71 said:
My simple conclusion is that one has to give up determinism only,
It's simple, but wrong.

vanhees71 said:
The experimentally confirmed correlations due to entanglement are imposed on the particles by their preparation in the very beginning
I think nobody in the community thinks so. Why does nobody think so? I've tried several times to explain it to you, but without success.
 
  • Like
Likes DrChinese
  • #26
Then I'm simply too stupid to understand it. One last try:

The state is given by the preparation. What's wrong with that?

The prepared state implies the correlations of the single-particle spins and at the same time it implies that they are completely undetermined before the measurement. What's wrong with that?
 
  • #27
vanhees71 said:
Then I'm simply too stupid to understand it. One last try:

The state is given by the preparation. What's wrong with that?

The prepared state implies the correlations of the single-particle spins and at the same time it implies that they are completely undetermined before the measurement. What's wrong with that?
It's not wrong, it's incomplete. Why? Because you don't specify what you mean by state. Do you mean the state of an ensemble, or the state of an individual system? If you mean the ensemble, then it's incomplete because the individual system can be prepared too, about which you say nothing. If you mean the single system, then it's incomplete because you need something beyond the state in the Hilbert space (because you adopt the Ballentine interpretation, according to which the state in the Hilbert space describes only the ensemble, not the individual system).
 
  • Like
Likes gentzen
  • #28
I mean the quantum state, mathematically described by a statistical operator. It provides probabilities for the outcomes of measurements. As such it refers to an ensemble, because you can only test the predicted probabilities by repeating the experiment several times, so as to gain enough statistics for a test at some given statistical significance. On the other hand it also refers to a single system, because it is determined by the preparation procedure of the single system.

This point of view seems to me pretty complete, at least FAPP, because otherwise one couldn't make the necessary link between the mathematical formalism and real-lab experiments and nobody would talk about QT anymore.
 
  • #29
Demystifier said:
Because you don't specify what you mean by state. Do you mean the state of an ensemble, or the state of an individual system?
The interpretation @vanhees71 seems to be (implicitly) using is the one in which the state corresponds to the preparation process. That is not the same as either of the alternatives you describe here.
 
  • Like
Likes vanhees71
  • #30
AndreiB said:
1. What exactly is misleading about what I said? The time when an experiment is performed has nothing to do with the validity of the argument.

2. Bell only focused on hidden variables theories since they were the last hope for locality. Whatever you think Bell proved, local non-determinism cannot be resurrected.

3. You seem to imply that contextuality contradicts predetermination of measurement results. Why?

4. Also, "predetermination" implies a cause. The "hint" is that you can predict the B measurement without disturbing B. If B were genuinely random you could not predict it.
1. Certainly it does. The EPR-Bohm thought experiment - which tended to support the EPR position - was performed after Bell and was instead used to contradict the EPR position. The EPR paper itself was not bolstered by the advent of EPR-B.

2. There are plenty of local non-deterministic interpretations still on the table. For example, time symmetric/retrocausal/acausal interpretations. In these, the future observables of a system cannot be explained solely by prior states.

3. Contextuality contradicts predetermination of measurement results. Basically, that is exactly what Bell tells us. While Bohmian Mechanics (a viable interpretation) also implies predetermination of measurement results, it is considered to be contextual as well. I am not familiar with any viable interpretation which is not contextual in some form.

4. Obviously, you cannot predict the outcome of a measurement of an entangled property of a system - it's random. Note that once you make the initial measurement on that system (say Alice's), a later similar measurement (say Bob's) would be redundant - and would itself be predictable (as any redundant measurement is).

EPR said that any such redundant measurement implied that ALL possible such measurements would similarly be predictable. And in a way, that is accurate - for the special case of redundant measurements. However, EPR did not account for measurements that were NOT redundant - i.e. those in which the measurements were, to some degree, orthogonal (non-commuting). Most/many entangled spin pair measurements fit that profile.

The QM correlation prediction for entangled spin pair measurements is explicitly contextual, dictated only by a function of the angle (theta) between the two measurement settings. There are no other input parameters. Bell showed that this contextual formula was incompatible with the basic EPR model (which they claimed must be assumed to be non-contextual). It should be obvious that a contextual formula would be incompatible with a non-contextual model, but this took decades to be better understood. In summary:

a. EPR showed that in the special case of redundant measurements (with locality assumed), there were no obvious contradictions between QM and a non-contextual model in which outcomes must be predetermined prior to measurement. They SPECULATED (not proved) that this precluded contextual models from being viable. In their view: a measurement "here" should NOT affect the outcome of a measurement at spacelike separated "there".

b. Bell showed that in the general case of measurements on partially non-commuting observables (and locality assumed), there were obvious contradictions between QM and any non-contextual model (i.e. in which outcomes are independent of choice of measurement bases). In Bell's view: a measurement choice "here" COULD affect the statistical outcome of a measurement at spacelike separated "there".

EPR's was a special case, and is not in contradiction with Bell's general case. I would conclude, per b., that no non-contextual model is viable.
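
To make the contextuality point concrete, here is a small sketch (assuming the standard singlet correlation with angles in radians): the QM prediction depends only on the relative angle between the two settings.

```python
import math

def E(alpha, beta):
    # Singlet correlation: a function of the relative angle only.
    return -math.cos(alpha - beta)

# Same relative angle, different absolute settings -> same correlation.
assert E(0.0, 0.5) == E(1.0, 1.5)

# The redundant (same-setting) case EPR relied on: perfect anti-correlation.
assert E(0.7, 0.7) == -1.0

# Non-redundant settings: the prediction shifts with the chosen bases,
# which is the dependence a non-contextual model cannot reproduce in general.
for theta in (0.0, math.pi / 3, math.pi / 2):
    print(f"theta = {theta:.2f}, E = {E(0.0, theta):+.3f}")
```

The same-setting assertion is EPR's special case (point a.); the loop over theta is the general case where Bell's argument (point b.) applies.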
 
  • Like
Likes PeroK, vanhees71 and mattt
  • #31
vanhees71 said:
Since relativistic local QFT is local ...
QFT is just as local as ordinary QT with a Hamiltonian generating only local interactions. The non-locality of quantum theories is facilitated by the non-local construction of the state space that allows remotely entangled states. QFT has spatially separated Bell-states, so it's non-local in that sense. This is in part what makes non-locality in quantum theory so difficult to grasp: It's not caused by non-local interactions.
 
  • Like
Likes PeroK and Demystifier
  • #32
AndreiB said:
Bell's theorem does not exclude all deterministic theories. Bell's theorem requires that the hidden variables should be independent of detectors' settings. Local deterministic models that violate that assumption have been proposed, like:

Quantum mechanics from classical statistics
C. Wetterich
Annals Phys. 325 (2010) 852
DOI: 10.1016/j.aop.2009.12.006
Have you read this? Is it useful? Is it even intended as a local deterministic model? I know Wetterich is a big name, but there are certain small words in the abstract that hint at a certain arbitrariness: "Quantum mechanics can emerge from classical statistics. ... Their expectation values define a density matrix if they obey a "purity constraint". ... No concepts beyond classical statistics are needed for quantum physics - the differences are only apparent and result from the particularities of those classical statistical systems which admit a quantum mechanical description. ..."
AndreiB said:
And then we read:

"We conclude this section by showing that despite these peculiarities, we cannot use spin-correlation measurements to transmit any useful information between two macroscopically separated points."

So, Sakurai does not provide any explanation for the EPR perfect anti-correlations. He does not say whether information has been transferred from A to B or not, only that "useful" information cannot be transferred.

Needless to say this is a non-explanation. Relativity does not distinguish between useful and non-useful information. It says that if A and B are space-like A cannot cause B. If A causes B you have a conflict with relativity regardless of how useful that communication may appear to you.
The point is that the non-random part which can be influenced, namely the measurement settings, does not allow any information to be transmitted from A to B. The random outcomes of the measurements are not independent, but they don't contain any "useful" information, because they are random. Gisin even inverted that argument: the measurement results must be truly random, because otherwise they would allow "useful" information to be transferred.
 
  • #33
gentzen said:
Have you read this? Is it useful? Is it even intended as a local deterministic model? I know Wetterich is a big name, but there are certain small words in the abstract that hint at a certain arbitrariness: "Quantum mechanics can emerge from classical statistics. ... Their expectation values define a density matrix if they obey a "purity constraint". ... No concepts beyond classical statistics are needed for quantum physics - the differences are only apparent and result from the particularities of those classical statistical systems which admit a quantum mechanical description. ..."

You are right, it's not the best paper to help my point, although it says some important things about Bell's theorem. I think a better example would be this paper, also from Wetterich:

Quantum fermions from classical bits:

https://arxiv.org/pdf/2106.15517.pdf

The model is a probabilistic cellular automaton. The evolution of the automaton is deterministic, but the initial state is given in terms of probabilities. As far as I understand, one could view such a probabilistic automaton as an ensemble of automata, each with a sharp value in each cell. The model is both local and realistic.
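
A hedged toy illustration of that reading (my own minimal sketch, NOT Wetterich's actual model): a cellular automaton whose update rule is strictly deterministic, while only the initial state is drawn probabilistically, so the "probabilistic automaton" is an ensemble of sharp-valued automata.

```python
import random

def step(cells):
    # Deterministic local rule (illustrative choice): each cell becomes
    # the XOR of itself and its right neighbour, with periodic boundary.
    n = len(cells)
    return [cells[i] ^ cells[(i + 1) % n] for i in range(n)]

def run(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

rng = random.Random(0)
# Probabilistic initial condition = ensemble of sharp initial states.
ensemble = [[rng.randint(0, 1) for _ in range(8)] for _ in range(1000)]
evolved = [run(c, 5) for c in ensemble]

# Determinism: identical initial states always evolve identically.
assert run(ensemble[0], 5) == evolved[0]

# All statistics live in the ensemble, e.g. the mean value of cell 0:
print(sum(c[0] for c in evolved) / len(evolved))
```

All randomness here sits in the initial ensemble; the dynamics itself never makes a probabilistic choice, which is the sense in which such a model is deterministic.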

gentzen said:
The point is that the non-random part which can be influenced, namely the measurement settings, does not allow any information to be transmitted from A to B. The random outcomes of the measurements are not independent, but they don't contain any "useful" information, because they are random. Gisin even inverted that argument: the measurement results must be truly random, because otherwise they would allow "useful" information to be transferred.

I agree with what you are saying, but I don't agree that labeling the information as "non-useful" makes its instantaneous transfer compatible with relativity, so it's a red herring.

If a judge asks a suspect if he spoke with the victim and the suspect answers that he didn't speak with the victim in the last week, would the judge be satisfied with that answer? Probably not, since the question was not about "the last week". Similarly, Sakurai should have said either "no information was transferred" or "information has been transferred". He avoids giving a clear answer by inventing an adjective, "useful", that no one asked him about. This makes me suspect that he actually believes some non-useful information was in fact sent from A to B but is not willing to admit it.
 
  • #34
DrChinese said:
1. Certainly it does. The EPR-Bohm thought experiment - which tended to support the EPR position - was performed after Bell and was instead used to contradict the EPR position. The EPR paper itself was not bolstered by the advent of EPR-B.

Unless the experiment refuted QM's prediction (it didn't), it couldn't have changed anything about the argument. All its premises (locality + existence of correlations) remain valid. The reason why Bell's theorem is seen as a refutation of EPR is a logical fallacy. Bell did not assume such a position, and his conclusion, clearly presented in his paper, is not that the world is non-deterministic but that the world is non-local. He also embraced a non-local deterministic theory, the de Broglie-Bohm interpretation, which further confirms my understanding.

DrChinese said:
2. There are plenty of local non-deterministic interpretations still on the table. For example, time symmetric/retrocausal/ascausal interpretations. In these, the future observables of a system cannot be explained solely by prior states.

Those are not local. Relativity does not allow future-to-past transfer.

DrChinese said:
3. Contextuality contradicts predetermination of measurement results.

DrChinese said:
While Bohmian Mechanics (a viable interpretation) also implies predetermination of measurement results, it is considered to be contextual as well.

Can you spot a contradiction between the above statements?

DrChinese said:
4. Obviously, you cannot predict the outcome of a measurement of an entangled property of a system - it's random.
I didn't make such a claim.

DrChinese said:
Note that once you make the initial measurement on that system (say Alice's), a later similar measurement (say Bob's) would be redundant
Yes, but why is it redundant? If Bob's measurement is not predetermined, and is thus a genuine "act of creation" on Bob's part, it should not be redundant. But since it is, we must conclude that Bob didn't "create" anything new; he merely discovered what was already there.
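The redundancy being discussed follows directly from the standard QM prediction for the spin singlet, E(a, b) = -cos(theta), where theta is the angle between the two analyzer axes. A minimal numeric sketch (the function name `singlet_correlation` is just an illustrative choice):

```python
import math

def singlet_correlation(theta):
    """QM prediction for the spin-singlet correlation E(a, b)
    when the two analyzer axes differ by angle theta (radians)."""
    return -math.cos(theta)

# Same-axis measurements (theta = 0) are perfectly anticorrelated:
# once Alice's outcome is known, Bob's same-axis outcome is fixed,
# which is exactly the "redundant" case discussed above.
print(singlet_correlation(0.0))  # -1.0
```

For theta = 0 the correlation is exactly -1, so Bob's same-axis result adds nothing beyond Alice's.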

DrChinese said:
EPR said that any such redundant measurement implied that ALL possible such measurements would similarly be predictable. And in a way, that is accurate - for the special case of redundant measurements. However, EPR did not account for measurements that were NOT redundant - i.e. those in which the measurements were, to some degree, orthogonal (non-commuting). Most/many entangled spin pair measurements fit that profile.

I agree with this part. The EPR argument works for the measured properties. If the X-spin was measured, we can conclude that the X-spins were predetermined. We cannot say anything about the Y-spins.

DrChinese said:
The QM correlation prediction for entangled spin pair measurements is explicitly contextual, dictated only by a function of the angle between (theta). There are no other input parameters. Bell showed that this contextual formula was incompatible with the basic EPR model (which they claimed must be assumed to be non-contextual). It should be obvious that a contextual formula would be incompatible with a non-contextual model, but this took decades to be better understood.

I agree.
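The incompatibility Bell exposed can be checked numerically. A short sketch, using the same singlet correlation E(theta) = -cos(theta) and the standard CHSH combination at its optimal angles (the variable names are illustrative):

```python
import math

def E(theta):
    # Singlet-state correlation for analyzer angle difference theta.
    return -math.cos(theta)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
# at the standard optimal settings a=0, a'=pi/2, b=pi/4, b'=3*pi/4.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a - b) - E(a - bp) + E(ap - b) + E(ap - bp)

print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local-model bound of 2
```

Any non-contextual (local hidden variable) model obeys |S| <= 2, while the contextual QM formula reaches 2*sqrt(2), which is the quantitative content of the incompatibility described above.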

DrChinese said:
In summary:

a. EPR showed that in the special case of redundant measurements (and locality assumed), there were no obvious contradictions between QM and a non-contextual model in which outcomes must be predetermined prior to measurement. They SPECULATED (not proved) that this precluded contextual models from being viable. In their view: a measurement "here" should NOT affect the outcome of a measurement at spacelike separated "there".

b. Bell showed that in the general case of measurements on partially non-commuting observables (and locality assumed), there were obvious contradictions between QM and any non-contextual model (i.e. in which outcomes are independent of choice of measurement bases). In Bell's view: a measurement choice "here" COULD affect the statistical outcome of a measurement at spacelike separated "there".

EPR's was a special case, and is not in contradiction with Bell's general case. I would conclude per b., that no non-contextual model is viable.

I agree with that conclusion. Only contextual models are allowed. Our disagreement comes only from your assumption that contextual models need to be non-deterministic.
 
  • Like
Likes gentzen and Demystifier
  • #35
vanhees71 said:
This point of view seems to me pretty complete, at least FAPP
I absolutely agree that it's complete FAPP. Bell himself (who coined the FAPP acronym) also often emphasized that. But the whole point of quantum foundations is to say something beyond FAPP; one cannot discuss quantum foundations while at the same time discarding every aspect that goes beyond FAPP. A very dishonest thing to do is to accept only those beyond-FAPP aspects which fit your own philosophical prejudices and discard all the others by claiming that you only care about FAPP.
 
  • Like
Likes gentzen
