Entanglement: spooky action at a distance

In summary, entanglement is often described as "spooky action at a distance" because measurements on entangled particles yield correlated outcomes, which is inconsistent with the idea that the outcomes are independent and random. The correlation follows a mathematical formula derived from quantum mechanics. While it may seem that this could be used for faster-than-light communication, there is no evidence to support this, and the correlation is likely just a byproduct of the experimental design.
  • #36


peter0302 said:
Mathematically, Bell's theorem assumes that the probability of detection at A is independent of the probability of detection at B. QM violates his theorem, and therefore violates this assumption. Any interpretive framework of QM must therefore account for the fact that the probability of detection at A is, in fact, dependent on detection at B, and vice versa. One explanation is a common cause, another is non-local communication. Both are equally plausible at this juncture.
In the EPR-Bell experiments, the individual probabilities at A and B aren't being considered, are they? That is, it's not the relationship between A and B that's being considered, but the relationship between (A,B) and Theta (the angular difference between the crossed polarizers) that's being considered. So, any assumptions about the relationship between A and B are irrelevant in the global experimental context.

The rate of individual detection at A and at B remains the same, and the data sequences are always random. So, viewed individually, the probability of detection at A is always independent of the probability of detection at B.

Bell's theorem assumes statistical independence between A and B. Viewed individually this is correct. Viewed globally it's incorrect, because a detection at A affects the sample space at B.

We know that the assumption of statistical independence in the EPR-Bell global experimental context is incorrect -- whether it's A wrt B, or (A,B) wrt Theta.

There are two common causes for the correlated data, (1) the global experimental design, and (2) whatever is happening in the submicroscopic quantum world (common cause interactions or superluminal transmissions?).

I think that the assumption of common cause interactions in the submicroscopic quantum world as the deep cause of quantum entanglement and EPR-Bell correlations makes more sense because there's simply no physical evidence for superluminal transmissions in the history of quantum or classical experimentation.

Classical entanglement can be used as a basis for understanding quantum entanglement. Otherwise, there's no real understanding -- just some preparations and some data and how they're related.

If the deep cause of the correlations is due to common cause interactions, then what's wrong with Bell's ansatz?

If there's nothing wrong with Bell's ansatz, then I don't see any alternative but to accept superluminal transmissions as a fact of nature. The problem with this is that it's a fact that we'll never be able to physically detect or verify.
 
  • #37


vanesch said:
Not at all. Bell's theorem states that a set of correlations between measurements of which the correlations are due to a common origin, must satisfy certain arithmetic relationships.
Bell's theorem doesn't state anything about photons, polarizations, optics, quantum mechanics or whatever. It simply states something about the possible correlations that can be caused by a common cause. It could have been stated even if there were never any quantum mechanics. Only, with classical physics, it would have sounded almost trivial.

In other words, it is a property of a set of correlations, wherever they come from, if they are assumed to come from a common cause. Bell's theorem is hence something that applies to sets of correlations.
OK, I've got to think about this some more. :smile:
 
  • #38


In the EPR-Bell experiments, the individual probabilities at A and B aren't being considered, are they? That is, it's not the relationship between A and B that's being considered, but the relationship between (A,B) and Theta (the angular difference between the crossed polarizers) that's being considered. So, any assumptions about the relationship between A and B are irrelevant in the global experimental context.

The rate of individual detection at A and at B remains the same, and the data sequences are always random. So, viewed individually, the probability of detection at A is always independent of the probability of detection at B.

Bell's theorem assumes statistical independence between A and B. Viewed individually this is correct. Viewed globally it's incorrect, because a detection at A affects the sample space at B.
It doesn't just assume statistical independence. It assumes causal independence as well.

It's in the _derivation_ of Bell's theorem that this is obvious (not the application).

Here is a very crude derivation.

Let A, B, and C represent the probabilities of three respective spatially separated events. Let a, b, and c mean the probability that that event does not happen.

If local realism is true, then the outcome of A, B, and C are causally independent. They may indeed share a common cause, but strictly speaking, they do not depend on one another.

Mathematically, we write this as:

A = AB + Ab

In other words, the odds of A happening are the odds of A and B happening, plus the odds of A and not B happening. A does not depend on B. A cannot be a non-trivial function of B. They might of course both be a function of some third event - the root cause - but there should be no non-trivial way to write A in terms of just B.

In other words:
A = AB + Ab
b = 1-B
A = AB + A(1-B)
A = AB + A - AB
A = A

Ok?

So if local realism holds, that general axiom (A = AB + Ab) also holds. This is the key assumption of Bell's theorem. Really it's the only important one.

Now, let's add a third event and we can make some statements:

aB = aBC + aBc
aC = aBC + abC
bC = AbC + abC <-- see what we're doing? We're taking any two arbitrary events and using the local realism assumption to make additional statements

Now we can write these as:
aBC = aB - aBc
abC = bC - AbC

aC = aB - aBc + abC
aC = aB - aBc + bC - AbC

or

aC <= aB + bC


That's the easiest way to derive the inequality. So make A, B, and C the odds of a photon being detected at different polarizer angles. Run through the experiment several times and note independently the number of hits. They'll violate the inequality.
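To see the violation numerically, here's a minimal sketch (my own illustration, assuming the standard QM prediction for polarization-entangled pairs, where the probability that one photon passes at angle α while its twin fails at angle β is ½sin²(α − β)):

```python
import math

def p_pass_fail(alpha, beta):
    # Assumed QM rule for the (|HH> + |VV>)/sqrt(2) state:
    # P(one photon passes at alpha AND its twin fails at beta).
    return 0.5 * math.sin(alpha - beta) ** 2

# Three polarizer angles: 0, 30, 60 degrees.
a, b, c = 0.0, math.radians(30), math.radians(60)

# In the notation above, aB = P(not-A and B), etc. The entangled twin
# measured at one angle stands in for "what this photon would do there".
aB = p_pass_fail(b, a)  # passes at b, fails at a
bC = p_pass_fail(c, b)  # passes at c, fails at b
aC = p_pass_fail(c, a)  # passes at c, fails at a

print(f"aB + bC = {aB + bC:.3f}")  # 0.250
print(f"aC      = {aC:.3f}")       # 0.375 -- so aC <= aB + bC fails
```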

Ok? So in order to understand what Bell's theorem actually says qualitatively, you need to understand what it says mathematically, or rather, what it _assumes_ mathematically. So the assumption that:

A = AB + Ab

Does not hold true for QM. That assumption was that the outcome of B is mathematically independent of the outcome of A. Again, they can have a common root cause, but what happens at A should have no bearing on what happens at B. You should not _need_ to know what happened at A in order to guess the odds of something happening at B.

Clear now? :) Now interpret away.

Incidentally, does anyone else think it's cool how Bell's theorem mirrors the triangle theorem that says no side can be greater than the sum of the other two sides?
 
  • #39


peter0302 said:
Mathematically, Bell's theorem assumes that the probability of detection at A is independent of the probability of detection at B. QM violates his theorem, and therefore violates this assumption. Any interpretive framework of QM must therefore account for the fact that the probability of detection at A is, in fact, dependent on detection at B, and vice versa.

I want to add to what Vanesch and ThomasT have said on this. Primarily, and not meaning to be blunt, the statement is wrong. What Bell's Theorem says is that:

IF you have any theory that respects both locality and realism, THEN you cannot end up with predictions identical to QM.

There is nothing WHATSOEVER that Bell's Theorem states about QM itself. Therefore, there is no added burden on an interpretive framework for QM. This is a common error in the understanding of Bell's Theorem.

In addition, there is no experimental evidence whatsoever that the detection at A is in any way dependent on the detection at B. In a sufficiently large sample, the probability of coincidence matching is related to the relative alignment of polarizing apparati. There is not much more you can deduce from experiment, and this exactly matches what QM predicts. It is not possible to determine if detection of A alters the results at B, or vice versa.
 
  • #40


peter0302 said:
You should not _need_ to know what happened at A in order to guess the odds of something happening at B.


You don't. It is constant at 50%. :)
 
  • #41


I'm not sure what you're trying to argue here.

You don't. It is constant at 50%. :)
No, it's not constant. All we know is that the average is 50%. I'm talking about the probability of a particular photon being detected. There's a difference.

Despite the fact that the joint correlations can be written as a function of the difference in the polarizer angles - itself a sign of mutual dependence - the detection probability for an individual photon can be written as a non-trivial function of whether or not the entangled twin passed its polarizer. And so Bell's main assumption (A = AB + Ab) doesn't hold for QM.
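
As a rough illustration of that dependence (my numbers, assuming the usual QM joint probabilities for a polarization-entangled pair measured at polarizers 30 degrees apart):

```python
import math

theta = math.radians(30)  # relative angle between the two polarizers

# Assumed joint outcome probabilities for the (|HH> + |VV>)/sqrt(2) state:
pp = 0.5 * math.cos(theta) ** 2  # both photons pass
ff = 0.5 * math.cos(theta) ** 2  # both fail
pf = 0.5 * math.sin(theta) ** 2  # A passes, B fails
fp = 0.5 * math.sin(theta) ** 2  # A fails, B passes

marginal_B = pp + fp               # 0.5 -- the unconditioned average
p_B_given_A_pass = pp / (pp + pf)  # cos^2(30 deg) = 0.75
p_B_given_A_fail = fp / (ff + fp)  # sin^2(30 deg) = 0.25

print(marginal_B, p_B_given_A_pass, p_B_given_A_fail)
```

The marginal stays at 50% no matter what, but the conditional probability is a non-trivial function of the twin's outcome.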

I want to add to what Vanesch and ThomasT have said on this. Primarily, and not meaning to be blunt, the statement is wrong. What Bell's Theorem says is that:

IF you have any theory that respects both locality and realism, THEN you cannot end up with predictions identical to QM.
How is that different from what I said?

If local realism is true then you get certain results.
If you don't get those results then you don't have local realism.

It's 9th grade logic. I'm not sure what your disagreement is.
 
  • #42


peter0302 said:
I'm not sure what you're trying to argue here.


No, it's not constant. All we know is that the average is 50%. I'm talking about the probability of a particular photon being detected. There's a difference.

Despite the fact that the joint correlations can be written as a function of the difference in the polarizer angles - itself a sign of mutual dependence - the detection probability for an individual photon can be written as a non-trivial function of whether or not the entangled twin passed its polarizer. And so Bell's main assumption (A = AB + Ab) doesn't hold for QM.


How is that different from what I said?

If local realism is true then you get certain results.
If you don't get those results then you don't have local realism.

It's 9th grade logic. I'm not sure what your disagreement is.

Not trying to argue or get into semantics. The point is that Bell's Theorem does not put any burden on QM. So QM has nothing requiring explanation due to Bell.

Actually, the logic of the assumption (A = AB + Ab) does seem to hold for QM, at least on the surface. But it cannot be generalized to include simultaneous C, D, E, etc. However, there is a problem when you detect the AB case and try to infer that each particle is in an identical or symmetric state at that time. Clearly, they are no longer in an entangled or symmetric state (as Alice is only in state A and Bob is only in state B). This definitely calls into question the idea that the measurement of one changes the other. Which is the point I think you were trying to make and I objected to.

The QM mystery comes back to the collapse of the wavefunction. What is that? Is it physical? That is the only thing which I believe can be truly said to have a non-local component.
 
  • #43


Ah, ok we do agree then.

The QM mystery comes back to the collapse of the wavefunction. What is that? Is it physical? That is the only thing which I believe can be truly said to have a non-local component.
Yep.

This definitely calls into question the idea that the measurement of one changes the other. Which is the point I think you were trying to make and I objected to.
Right. I wouldn't say the measurement of one changes the other. What I would say is that the measurement of one changes the odds of detecting the other - and that's exactly what QM says should happen.
 
  • #44


peter0302 said:
It doesn't just assume statistical independence. It assumes causal independence as well.

... assumption was that the outcome of B is mathematically independent of the outcome of A. Again, they can have a common root cause, but what happens at A should have no bearing on what happens at B.
Thanks for trying to help me understand Bell's theorem (also thanks to vanesch and Dr. Chinese et al for their efforts) -- but I must say that I still don't understand its meaning.

You say that A and B can have a common root cause. (Does this include the idea that the attributes assigned at A and B for a given coincidence interval are associated with optical disturbances that were emitted by the same atom at the same time -- as in the 1984 Aspect et al. experiment -- so that during that interval what's incident on the polarizer at A is the same as what's incident on the polarizer at B?)

vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

And yet, in the hallmark 1984 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (i.e., FTL transmissions) can't be what is producing the correlations.

So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:
 
  • #45


Here's what I mean by common cause:

Let's say A and B are events, and c is their common cause - i.e. the entire history of the universe.

Let A = f1(c) and B = f2(c).

If they share a common cause, but do not depend on one another, we can write each only in terms of -c- without having to reference the other.

In QM, though, we cannot do that. Instead, we get results like:

A = f3(c,B)
B = f4(c,A)

which, if B and A are spacelike separated, are not consistent with local realism.

Now, the thought was perhaps statements like
A = f3(c,B)
could be simplified back down to
A = f1(c)
and thus would turn out to be trivial if we understood exactly what f1 and c really were (here they're just gigantic oversimplifications).

What Bell showed was that if that were true - if A=f3(c,B) really was a trivial restatement of A = f1(c) - certain patterns would emerge in the correlations. Those predicted patterns are violated by QM. Therefore we conclude that A = f3(c,B) is not trivial, and A really does somehow depend on B.
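
To make that concrete with a toy model (the hidden "c" and the response rule here are my own, purely for illustration): give all three outcomes the factorized form A = f1(c), B = f2(c), C = f3(c), and the inequality from post #38 holds on every run:

```python
import math, random

angles = [0.0, math.radians(30), math.radians(60)]
n_aB = n_bC = n_aC = 0
trials = 200_000

for _ in range(trials):
    lam = random.uniform(0, math.pi)  # the shared cause "c"
    # Local response f(c): pass iff the hidden polarization lies
    # within 45 degrees (mod 180) of the polarizer setting.
    A, B, C = (abs(((lam - t + math.pi / 2) % math.pi) - math.pi / 2) < math.pi / 4
               for t in angles)
    n_aB += (not A) and B
    n_bC += (not B) and C
    n_aC += (not A) and C

# Whenever (not-A and C) occurs, either (not-A and B) or (not-B and C)
# must also occur, so the inequality holds for any factorized model.
print(n_aC <= n_aB + n_bC)  # True
```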
 
  • #46


peter0302 said:
Ah, ok we do agree then.

Yes, I think we did all along.
 
  • #47


ThomasT said:
So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:

Yes, it is confusing. The first thing to do is to go back to traditional QM. Don't try to escape it by positing that a classical explanation will be discovered that saves us. According to Bell's Theorem, that won't happen.

That leaves us with such "paradoxes" as: the Heisenberg Uncertainty Principle (which denies reality to non-commuting operators); wavefunction collapse (which appears to be non-local); virtual particles (where do they come from, and where do they go); and conservation laws (which apply to "real" particles, even space-like separated entangled ones).

Clearly, trying to get a common sense picture of these is essentially impossible as we are no closer after 80 years of trying. So we must be content, for now, with the mathematical apparatus. And that remains a solid victory for physical science.
 
  • #48


ThomasT said:
Thanks for trying to help me understand Bell's theorem (also thanks to vanesch and Dr. Chinese et al for their efforts) -- but I must say that I still don't understand its meaning.

You say that A and B can have a common root cause. (Does this include the idea that the attributes assigned at A and B for a given coincidence interval are associated with optical disturbances that were emitted by the same atom at the same time -- as in the 1984 Aspect et al. experiment -- so that during that interval what's incident on the polarizer at A is the same as what's incident on the polarizer at B?)

vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

And yet, in the hallmark 1984 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (i.e., FTL transmissions) can't be what is producing the correlations.

So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:


<< I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (i.e., FTL transmissions) can't be what is producing the correlations. >>

What? Pagels is certainly wrong about that. There already exists a formulation of QM that utilizes FTL transmissions to get the nonlocal correlations (pilot wave theory) and without wavefunction collapse. Bell even shows this with generic nonlocal HV models in his original papers.


<< So, they can't be produced by common cause at emission...That doesn't seem to leave much to consider. >>

Well recall that the assumptions in Bell's theorem are that

1) Kolmogorov classical probability axioms are valid.
2) locality is valid (no causal influences can propagate faster than c between two events).
3) causality is valid (future measurement settings are "free" or random variables).

One could reject only locality, as is often done, and get a nonlocal HV theory such as the pilot wave theory of de Broglie and Bohm. One could also reject only causality, and get a causally symmetric HV model that does the trick (Huw Price and Rod Sutherland are among the researchers who have successfully done this), or, even more implausibly, posit a common past before the emission and detection events. One could also get more esoteric and reject or add axioms to Kolmogorov's classical probability theory, and therefore construct a fully local account of EPRB, as Itamar Pitowsky has done.

Notice that "realism" is not at all the issue in Bell's theorem, despite the common claim that it is.
 
  • #49


ThomasT said:
Thanks for trying to help me understand Bell's theorem (also thanks to vanesch and Dr. Chinese et al for their efforts) -- but I must say that I still don't understand its meaning.

You say that A and B can have a common root cause. (Does this include the idea that the attributes assigned at A and B for a given coincidence interval are associated with optical disturbances that were emitted by the same atom at the same time -- as in the 1984 Aspect et al. experiment -- so that during that interval what's incident on the polarizer at A is the same as what's incident on the polarizer at B?)

vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

And yet, in the hallmark 1984 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (i.e., FTL transmissions) can't be what is producing the correlations.

So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:



The easiest thing you can do is read Bell's original papers, namely, "On the Einstein-Podolsky-Rosen Paradox", "La Nouvelle Cuisine", and "Free Variables and Local Causality".
 
  • #50


Maaneli said:
<< I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (i.e., FTL transmissions) can't be what is producing the correlations. >>

What? Pagels is certainly wrong about that. There already exists a formulation of QM that utilizes FTL transmissions to get the nonlocal correlations (pilot wave theory) and without wavefunction collapse. Bell even shows this with generic nonlocal HV models in his original papers.


<< So, they can't be produced by common cause at emission...That doesn't seem to leave much to consider. >>

Well recall that the assumptions in Bell's theorem are that

1) Kolmogorov classical probability axioms are valid.
2) locality is valid (no causal influences can propagate faster than c between two events).
3) causality is valid (future measurement settings are "free" or random variables).

One could reject only locality, as is often done, and get a nonlocal HV theory such as the pilot wave theory of de Broglie and Bohm. One could also reject only causality, and get a causally symmetric HV model that does the trick (Huw Price and Rod Sutherland are among the researchers who have successfully done this), or, even more implausibly, posit a common past before the emission and detection events. One could also get more esoteric and reject or add axioms to Kolmogorov's classical probability theory, and therefore construct a fully local account of EPRB, as Itamar Pitowsky has done.

Notice that "realism" is not at all the issue in Bell's theorem, despite the common claim that it is.

Thanks for the input. I paraphrased Pagels incorrectly, I think. Here's what he actually concluded:

We conclude that even if we accept the objectivity [realism, etc.] of the microworld then Bell's experiment does not imply actual nonlocal influences. It does imply that one can instantaneously change the cross-correlation of two random sequences of events on other sides of the galaxy. But the cross-correlation of two sets of widely separated events is not a local object and the information it may contain cannot be used to violate the principle of local causality.

So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests in terms of a common cause produced at emission possible? Also, what do you think of the analogy with the simplest optical Bell tests with a polariscope? Of course, if the deep physical origin of Malus' Law is a mystery, then quantum entanglement is still a mystery, but at least we'd have a classical analog.
 
  • #51


DrChinese said:
Yes, it is confusing. The first thing to do is to go back to traditional QM. Don't try to escape it by positing that a classical explanation will be discovered that saves us. According to Bell's Theorem, that won't happen.

That leaves us with such "paradoxes" as: the Heisenberg Uncertainty Principle (which denies reality to non-commuting operators); wavefunction collapse (which appears to be non-local); virtual particles (where do they come from, and where do they go); and conservation laws (which apply to "real" particles, even space-like separated entangled ones).

Clearly, trying to get a common sense picture of these is essentially impossible as we are no closer after 80 years of trying. So we must be content, for now, with the mathematical apparatus. And that remains a solid victory for physical science.
Thanks DrChinese -- I don't view the uncertainty relations, or wavefunction collapse, or virtual particles, or the application of the law of conservation of angular momentum in certain Bell tests as paradoxical.

I think I should reread what's been written in these forums, Bell's papers, lots of other papers I've been putting off, your page, etc. and then get my thoughts in order. By the way, I'm still hoping for some sort of classically analogous way of understanding quantum entanglement and the EPR-Bell correlations. :smile:
 
  • #52


ThomasT said:
Thanks DrChinese -- I don't view the uncertainty relations, or wavefunction collapse, or virtual particles, or the application of the law of conservation of angular momentum in certain Bell tests as paradoxical.

I think I should reread what's been written in these forums, Bell's papers, lots of other papers I've been putting off, your page, etc. and then get my thoughts in order. By the way, I'm still hoping for some sort of classically analogous way of understanding quantum entanglement and the EPR-Bell correlations. :smile:

Thomas,

There are plenty of misleading accounts of Bell's theorem and the current state of affairs out there. I have spent several years going through all of them and finding the diamonds in the rough. So, from my experiences, I also strongly recommend, in addition to those specific Bell papers, these two books to you on QM nonlocality - they are by far the best around:

"Quantum Nonlocality and Relativity"
Tim Maudlin
https://www.amazon.com/dp/0631232214/?tag=pfamazon01-20

"Time's Arrow and Archimedes Point"
Huw Price
https://www.amazon.com/dp/0195117980/?tag=pfamazon01-20

~M
 
  • #53


ThomasT said:
Thanks for the input. I paraphrased Pagels incorrectly, I think. Here's what he actually concluded:

We conclude that even if we accept the objectivity [realism, etc.] of the microworld then Bell's experiment does not imply actual nonlocal influences. It does imply that one can instantaneously change the cross-correlation of two random sequences of events on other sides of the galaxy. But the cross-correlation of two sets of widely separated events is not a local object and the information it may contain cannot be used to violate the principle of local causality.

So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests in terms of a common cause produced at emission possible? Also, what do you think of the analogy with the simplest optical Bell tests with a polariscope? Of course, if the deep physical origin of Malus' Law is a mystery, then quantum entanglement is still a mystery, but at least we'd have a classical analog.


I have a hard time understanding how Pagels could possibly have reached that conclusion. Indeed it even contradicts Bell's own conclusions. It looks confused. But can you give me the reference?

<< So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests in terms of a common cause produced at emission possible? >>

No. But, as I said earlier, there is the common past hypothesis (that the emission and detection events share a common past) that is logically possible, although extremely implausible. Bell talks about this in his paper "Free Variables and Local Causality". More plausible and successful have been the nonlocal explanations, as well as the causally symmetric explanations.

If you would like a classical analogue of Bell's inequality and theorem, read the first chapter of Tim Maudlin's book. He gives a perfectly clear and accurate classical analogue.
 
  • #54


Maaneli said:
Notice that "realism" is not at all the issue in Bell's theorem, despite the common claim that it is.

I claim it is. When Bell says that there is a simultaneous A, B and C (around his [14] in the original), he is invoking realism. He says "It follows that c is another unit vector...". His meaning is that if there are an a, b and c simultaneously, then there must be internal consistency, and there must be an outcome table that yields non-negative probabilities for all permutations of outcomes a, b and c.

Bell's is a reference to Einstein's realism condition, which Einstein claimed was a reasonable assumption. Bell saw this would not work and that there could not be internal consistency if there were pre-determined outcomes at all possible measurement settings. Of course, that would violate the HUP anyway but Einstein believed the HUP was not a description of reality. He said so in EPR. He assumed that at the most, the HUP was a limitation on our observational powers but not representative of reality. He said that the moon was there even when it was not being observed...
 
  • #55


DrChinese said:
I claim it is. When Bell says that there is a simultaneous A, B and C (around his [14] in the original), he is invoking realism. He says "It follows that c is another unit vector...". His meaning is that if there are an a, b and c simultaneously, then there must be internal consistency, and there must be an outcome table that yields non-negative probabilities for all permutations of outcomes a, b and c.

Bell's is a reference to Einstein's realism condition, which Einstein claimed was a reasonable assumption. Bell saw this would not work and that there could not be internal consistency if there were pre-determined outcomes at all possible measurement settings. Of course, that would violate the HUP anyway but Einstein believed the HUP was not a description of reality. He said so in EPR. He assumed that at the most, the HUP was a limitation on our observational powers but not representative of reality. He said that the moon was there even when it was not being observed...


I know you claim it is, but it contradicts Bell's understanding of his own theorem (which should give you pause). Let me challenge you to try and come up with a logically coherent prediction in terms of an inequality, without the realism assumption. My claim is that the whole theorem falls apart into an incoherent mess if you remove realism. Whereas if you remove locality or causality, or modify the Kolmogorov probability axioms, you can still construct a well-defined inequality that can be empirically tested. Let me also recommend having a look at Bell's paper "La Nouvelle Cuisine" in Speakable and Unspeakable in QM and ttn's paper "Against Realism":

Against `Realism'
Authors: Travis Norsen
Foundations of Physics, Vol. 37 No. 3, 311-340 (March 2007)
http://arxiv.org/abs/quant-ph/0607057
 
  • #56


<< Bell's is a reference to Einstein's realism condition, which Einstein claimed was a reasonable assumption. Bell saw this would not work and that there could not be internal consistency if there were pre-determined outcomes at all possible measurement settings. >>

No that's completely incorrect (if I correctly understand what you're trying to say). Realism is just fine even if you give up locality or causality or Kolmogorov axioms of probability. Seriously, have a look at La Nouvelle Cuisine and Travis' paper.

<< Of course, that would violate the HUP anyway but Einstein believed the HUP was not a description of reality. He said so in EPR. He assumed that at the most, the HUP was a limitation on our observational powers but not representative of reality. >>

Dude that's the point. Einstein's generic notion of realism was tested against Heisenberg's (quite frankly incoherent) positivist interpretation of the UP in QM, and was shown to be perfectly OK so long as you gave up either locality or causality. By the way, the UP was actually discovered first by Fourier in relation to classical waves, so I would prefer to call it the FUP (Fourier Uncertainty Principle).
 
  • #57


Maaneli said:
I know you claim it is, but it contradicts Bell's understanding of his own theorem (which should give you pause). Let me challenge you to try and come up with a logically coherent prediction in terms of an inequality, without the realism assumption. My claim is that the whole theorem falls apart into an incoherent mess if you remove realism. Whereas if you remove locality or causality, or modify the Kolmogorov probability axioms, you can still construct a well-defined inequality that can be empirically tested. Let me also recommend having a look at Bell's paper "La Nouvelle Cuisine" in Speakable and Unspeakable in QM and ttn's paper "Against Realism":

Against `Realism'
Authors: Travis Norsen
Foundations of Physics, Vol. 37 No. 3, 311-340 (March 2007)
http://arxiv.org/abs/quant-ph/0607057

Well, Travis and I have had a long-standing disagreement on this subject in these forums - and I am well aware of his paper (and the others like it). Norsen bends the history of EPR and Bell to suit his objective, which is obviously to push non-locality as the only viable possibility. He also bends semantics, as far as I am concerned.

You do not need Bell's additional editorial comment either (he said a lot of things afterwards), when his original paper stands fine as is. So no, it does not give me pause. Einstein was not always right, either, and if he were alive today I think he would acknowledge Bell's insight for what it was.

The situation is quite simple really:

a) If particles have no simultaneous A, B and C polarizations independent of the act of observation (as is implied, but not required, by the HUP), then there is no Bell's Theorem (per Bell's [14]). This is the realism requirement as I mentioned, and this is NECESSARY to construct the inequality. Without it, there is nothing - so your challenge is impossible as far as I am concerned.

b) Separately from Bell, the GHZ Theorem comes to an anti-realistic conclusion which does not require the locality condition. As I see it, this is fully consistent with Bell while non-local explanations are not. However, many reject GHZ and other anti-realism proofs (I'm sure you know the ones) for philosophical reasons.

c) Bell's paper was a brilliant answer to EPR's "conclusion" (completely unjustified) that realism was reasonable as an assumption. Bell showed that either Einstein's realism or his beloved locality (or both) would need to be rejected. Bell was obviously aware of Bohmian Mechanics at the time (since he mentions it), but I would hardly call that part of Bell's paper's conclusion itself.

I happen to believe that there is a causality condition implied in the Bell proof. In other words: if the future can influence the past, then that should allow a mechanism for Bell test results to be explained without resorting to a non-local or a non-realistic solution. If time is symmetric (as theory seems to suggest), then this should be possible. On the other hand, a lot of people would probably equate such a possibility to either a non-local or non-realistic solution anyway.

At any rate, failure to explicitly acknowledge the anti-realism viewpoint does a great disservice to the readers of this board. My viewpoint is mainstream opinion and Norsen's is not. As best I recall, most of the influential researchers in the area - Zeilinger, Aspect, etc. - all adopt this position: namely, that realism and locality assumptions are embedded in the Bell paper, and (given experimental results) at least one must be rejected.
 
  • #58


Maaneli said:
By the way, the UP was actually discovered first by Fourier in relation to classical waves, so I would prefer to call it the FUP (Fourier Uncertainty Principle).

That is a fairly strange way of thinking, and certainly puts you in a very small group. Even Fourier would have been surprised to find that he was the true discoverer of the HUP a hundred years before Heisenberg (and long before the existence of atoms was consensus). Do you just not like Heisenberg for some reason?
 
  • #59


Agree with everything Dr. Chinese said. I'd also add that the interpretive difficulties always arise when we stray from the math and delve into philosophy using cushy terms like realism, determinism, superdeterminism, and the like. Of course I'm guilty of it too. :)

We have to see Bell's inequality for what it is: the consequence of an assumption which Aspect and others have proven wrong. While we all agree on what that assumption is mathematically, we can't agree on what it means physically. But at the very least, we should be focusing on the assumption, and not any author's (including Bell's own) editorial comments or beliefs regarding it.
 
  • #60
DrChinese said:
Well, Travis and I have had a long-standing disagreement on this subject in these forums - and I am well aware of his paper (and the others like it). Norsen bends the history of EPR and Bell to suit his objective, which is obviously to push non-locality as the only viable possibility. He also bends semantics, as far as I am concerned.


Well, I disagree with your assessment of his work. Travis is quite accurate in his characterization of Bell's theorem, even though I have some disagreements with him about what conclusions we can draw from it today. Also, he doesn't bend semantics - he's just very meticulous and high on philosophical and logical rigor, which is something everyone should strive for in discussing QM foundations.




DrChinese said:
You do not need Bell's additional editorial comment either (he said a lot of things afterwards), when his original paper stands fine as is. So no, it does not give me pause.


Yes, you do need Bell's additional commentaries from his other papers. There are lots of subtle and implicit assumptions in his original paper that he made much more explicit and tried to justify in other papers like "La Nouvelle Cuisine", where he clarifies his definition of local causality, and "Free Variables and Local Causality", where he justifies his assumption of causality but also emphasizes the additional possibilities involved in giving up the causality assumption.




DrChinese said:
Einstein was not always right, either, and if he were alive today I think he would acknowledge Bell's insight for what it was.


I agree Einstein was not always right and that he would probably acknowledge Bell's theorem; but I suspect we have different opinions about what exactly Bell's insight is.



DrChinese said:
The situation is quite simple really:

a) If particles have no simultaneous A, B and C polarizations independent of the act of observation (as is implied, but not required, by the HUP), then there is no Bell's Theorem (per Bell's [14]). This is the realism requirement as I mentioned, and this is NECESSARY to construct the inequality. Without it, there is nothing - so your challenge is impossible as far as I am concerned.


Yes, this was exactly my point. I think you misunderstood me before. Indeed the form of realism you generally suggest is an absolutely necessary pin in the logic of the theorem (or any physics theorem for that matter; in fact, that realism assumption is no different than the realism assumptions in, say, the fluctuation-dissipation theorem or Earnshaw's theorem, both of which are theorems in classical physics). But it is completely false to say that realism is necessarily falsified by a violation of the Bell inequalities. There are other assumptions in Bell's theorem, if you recall, which can be varied without making the general mathematical logic of the inequality derivation inconsistent. They are, once again,

1) Kolmogorov classical probability axioms are valid.
2) locality is valid (the propagation speed for causal influences between two events is bounded by the speed of light, c).
3) causality is valid ("future" or final measurement settings are "free" or random variables).

One can drop any one of these assumptions and it wouldn't falsify realism. Well, if you drop 3) and replace it with a common past hypothesis or a form of backwards causation as Huw Price and others have suggested, then you just have to modify your notion of realism in a particular way (there is a literature on this, you know). That's not the same, however, as saying that realism gets falsified.




DrChinese said:
b) Separately from Bell, the GHZ Theorem comes to an anti-realistic conclusion which does not require the locality condition. As I see it, this is fully consistent with Bell while non-local explanations are not. However, many reject GHZ and other anti-realism proofs (I'm sure you know the ones) for philosophical reasons.


What are you talking about? Of course the GHZ theorem assumes a locality condition, just as Bell does. And no it doesn't come to any anti-realistic conclusion whatsoever. That's a very serious error. If you don't understand any of that, then you have to return to some basics. In particular, have a read of this recent article by Zeilinger and Aspelmeyer.

http://physicsworld.com/cws/article/print/34774;jsessionid=B55E9395A8ED10334930389C70494F9B

So far, all tests of both Bell’s inequalities and on three entangled particles (known as GHZ experiments) (see “GHZ experiments”) confirm the predictions of quantum theory, and hence are in conflict with the joint assumption of locality and realism as underlying working hypotheses for any physical theory that wants to explain the features of entangled particles.

Yes, they do talk about GHZ as if it puts constraints on "local realism"; but, again, I have shown that realism is a complete red herring in the context of Bell or GHZ. And of course I am not the only person with this view. It is quite well understood by the top philosophers of physics and physicists in QM foundations like David Albert, Tim Maudlin, Huw Price, Sheldon Goldstein, Guido Bacciagaluppi, Jeff Bub, David Wallace, Harvey Brown, Simon Saunders, etc. Zeilinger and Aspelmeyer are quite in the minority in that understanding among QM foundations specialists, and that should give you pause for concern on that particular issue. But to make this even more clear to you, the deBB theory (a nonlocal realist contextual HV theory) perfectly explains the results of GHZ, which Zeilinger also acknowledges himself (because he understands deBB involves a joint assumption of realism and nonlocality). So there is no refutation of realism on its own at all in GHZ.

Also, it just occurred to me that you might be confusing the Leggett inequality (which that article also discusses) with the GHZ inequality. I highly recommend getting clear on those differences.



DrChinese said:
c) Bell's paper was a brilliant answer to EPR's "conclusion" (completely unjustified) that realism was reasonable as an assumption. Bell showed that either Einstein's realism or his beloved locality (or both) would need to be rejected. Bell was obviously aware of Bohmian Mechanics at the time (since he mentions it), but I would hardly call that part of Bell's paper's conclusion itself.


That's a total mischaracterization of the EPRB conclusion and of Bell's theorem. Bell showed that either locality or causality would need to be rejected. By the way, even though deBB was not a part of Bell's original paper, in his other papers he mentions it as a counterexample to the flawed misunderstandings physicists had (and still have) that his theorem refutes the possibility of Einsteinian realism in QM.




DrChinese said:
I happen to believe that there is a causality condition implied in the Bell proof. In other words: if the future can influence the past, then that should allow a mechanism for Bell test results to be explained without resorting to a non-local or a non-realistic solution. If time is symmetric (as theory seems to suggest), then this should be possible. On the other hand, a lot of people would probably equate such a possibility to either a non-local or non-realistic solution anyway.


Yes of course the causality condition is in Bell's theorem. That's not controversial or new. He discusses it in more detail in "La Nouvelle Cuisine" and "Free Variables and Local Causality" (see why it's a good idea to read his other papers?) and leaves open the possibility of some form of "superdeterminism", even though he himself regards it as very implausible. Later people like O. Costa de Beauregard, Huw Price, and others since have advanced the idea of using backwards causation to save locality and show how Bell and GHZ inequalities could be violated. Price discusses this at length in his book

"Time's Arrow and Archimedes Point"
http://www.usyd.edu.au/time/price/TAAP.html

and his papers:

Backward causation, hidden variables, and the meaning of completeness. PRAMANA - Journal of Physics (Indian Academy of Sciences), 56(2001) 199—209.
http://www.usyd.edu.au/time/price/preprints/QT7.pdf

Time symmetry in microphysics. Philosophy of Science 64(1997) S235-244.
http://www.usyd.edu.au/time/price/preprints/PSA96.html

Toy models for retrocausality. Forthcoming in Studies in History and Philosophy of Modern Physics, 39(2008).
http://arxiv.org/abs/0802.3230

You may also be interested to know that there exists a deBB model developed by Sutherland that implements backwards causation, is completely local, and reproduces the empirical predictions of standard QM:

Causally Symmetric Bohm Model
Authors: Rod Sutherland
http://arxiv.org/abs/quant-ph/0601095
http://www.usyd.edu.au/time/conferences/qm2005.htm#sutherland
http://www.usyd.edu.au/time/people/sutherland.htm

and his older work:

Sutherland R.I., 'A Corollary to Bell's Theorem', Il Nuovo Cimento B 88, 114-18 (1985).

Sutherland R.I., 'Bell's Theorem and Backwards-in-Time Causality', International Journal of Theoretical Physics 22, 377-84 (1983).

And just to emphasize, all these backwards causation models involve some form of realism.



DrChinese said:
At any rate, failure to explicitly acknowledge the anti-realism viewpoint does a great disservice to the readers of this board. My viewpoint is mainstream opinion and Norsen's is not. As best I recall, most of the influential researchers in the area - Zeilinger, Aspect, etc. - all adopt this position: namely, that realism and locality assumptions are embedded in the Bell paper, and (given experimental results) at least one must be rejected.


Whether your viewpoint is "mainstream" (and you still have to define what "mainstream" means to make it meaningful) or not is completely irrelevant. All that is relevant is the logical validity and factual accuracy of your understanding of these issues. But I can tell you that among QM foundations specialists, such as people who participate in the annual APS conference on foundations of physics (which I have done for the past 3 consecutive years):

New Directions in the Foundations of Physics
American Center for Physics, College Park, April 25 - 27, 2008
http://carnap.umd.edu/philphysics/conference.html

your opinion is quite the minority. Furthermore, I didn't imply that locality isn't embedded in Bell's theorem or that realism isn't embedded in Bell's theorem. I just said that the crucial conclusion of Bell's theorem (and Bell's own explicitly stated conclusion) is that QM is not a locally causal theory, not that it is not a locally real theory, whatever that would mean.

Let me also emphasize that unlike what you seem to be doing in characterizing Bell's theorem as a refutation of realism, Zeilinger acknowledges that nonlocal hidden variable theories like deBB are compatible with experiments, even if he himself is an 'anti-realist'. By the way, anti-realists such as yourself or Zeilinger still have the challenge to come up with a solution to the measurement problem and derive the quantum-classical limit. Please don't try to invoke decoherence, since the major developers and proponents of decoherence theory like Zurek, Zeh, Joos, etc., are actually realists themselves - and even they admit that decoherence theory has not solved, and probably never will on its own solve, the measurement problem or account for the quantum-classical limit. On the other hand, it is well acknowledged that nonlocal realist theories like deBB plus decoherence do already solve the problem of measurement and already accurately (even if not yet perfectly) describe the quantum-classical limit. So by my assessment, it is the anti-realist crowd that is in the minority and has much to prove.
 
  • #61


In order to reject a theory based on Bell's theorem alone, that theory should have the property that the events in one part of the experimental setup (source, detector 1, detector 2) should not depend on the other parts (the statistical independence assumption).

The only theories that satisfy this assumption (and are therefore unable to reproduce QM's predictions) are the "billiard ball" type (no long-range forces, interactions only at direct collisions). Incidentally, Maxwell's theory of electromagnetism, Newtonian gravity, and Einstein's GR all have long-range forces, so the statistical independence assumption fails for them. Therefore, a modification of Maxwell's theory, while remaining local and realistic, could in principle reproduce QM's predictions.

So, the claim that local realism is excluded by Bell's theorem is patently false.
 
  • #62


Maaneli said:
1. Yes, this was exactly my point. I think you misunderstood me before. Indeed the form of realism you generally suggest is an absolutely necessary pin in the logic of the theorem

2. Later people like O. Costa de Beauregard, Huw Price, and others since have advanced the idea of using backwards causation to save locality and show how Bell and GHZ inequalities could be violated. Price discusses this at length in his book

"Time's Arrow and Archimedes Point"
http://www.usyd.edu.au/time/price/TAAP.html

and his papers:

Backward causation, hidden variables, and the meaning of completeness. PRAMANA - Journal of Physics (Indian Academy of Sciences), 56(2001) 199—209.
http://www.usyd.edu.au/time/price/preprints/QT7.pdf

Time symmetry in microphysics. Philosophy of Science 64(1997) S235-244.
http://www.usyd.edu.au/time/price/preprints/PSA96.html

Toy models for retrocausality. Forthcoming in Studies in History and Philosophy of Modern Physics, 39(2008).
http://arxiv.org/abs/0802.3230

You may also be interested to know that there exists a deBB model developed by Sutherland that implements backwards causation, is completely local, and reproduces the empirical predictions of standard QM:

Causally Symmetric Bohm Model
Authors: Rod Sutherland
http://arxiv.org/abs/quant-ph/0601095
http://www.usyd.edu.au/time/conferences/qm2005.htm#sutherland
http://www.usyd.edu.au/time/people/sutherland.htm

and his older work:

Sutherland R.I., 'A Corollary to Bell's Theorem', Il Nuovo Cimento B 88, 114-18 (1985).

Sutherland R.I., 'Bell's Theorem and Backwards-in-Time Causality', International Journal of Theoretical Physics 22, 377-84 (1983).

And just to emphasize, all these backwards causation models involve some form of realism.

3. Whether your viewpoint is "mainstream" (and you still have to define what "mainstream" means to make it meaningful) or not is completely irrelevant. All that is relevant is the logical validity and factual accuracy of your understanding of these issues. But I can tell you that among QM foundations specialists, such as people who participate in the annual APS conference on foundations of physics (which I have done for the past 3 consecutive years):

New Directions in the Foundations of Physics
American Center for Physics, College Park, April 25 - 27, 2008
http://carnap.umd.edu/philphysics/conference.html

your opinion is quite the minority. Furthermore, I didn't imply that locality isn't embedded in Bell's theorem or that realism isn't embedded in Bell's theorem. I just said that the crucial conclusion of Bell's theorem (and Bell's own explicitly stated conclusion) is that QM is not a locally causal theory, not that it is not a locally real theory, whatever that would mean.

4. Let me also emphasize that unlike what you seem to be doing in characterizing Bell's theorem as a refutation of realism, Zeilinger acknowledges that nonlocal hidden variable theories like deBB are compatible with experiments, even if he himself is an 'anti-realist'. By the way, anti-realists such as yourself or Zeilinger still have the challenge to come up with a solution to the measurement problem and derive the quantum-classical limit. Please don't try to invoke decoherence, since the major developers and proponents of decoherence theory like Zurek, Zeh, Joos, etc., are actually realists themselves - and even they admit that decoherence theory has not solved, and probably never will on its own solve, the measurement problem or account for the quantum-classical limit. On the other hand, it is well acknowledged that nonlocal realist theories like deBB plus decoherence do already solve the problem of measurement and already accurately (even if not yet perfectly) describe the quantum-classical limit. So by my assessment, it is the anti-realist crowd that is in the minority and has much to prove.

1. We agree on this point, and that was my issue.

2. Thank you for these references, there are a couple I am not familiar with and would like to study.

3. The issue with "mainstream" is that mainstream theory can be wrong - of course - but I think it is helpful for most folks to learn the mainstream before they reject it.

I see your point that there is a more diverse group out there, and so maybe the idea of "mainstream" is too broad to be so easily characterized. At any rate, I was not trying to say that the "anti-realism" view was the mainstream. I was trying to say that the mainstream view is that local realistic theory is not viable.

4. Repeating that I was not trying to advance the cause of "non-realism" other than showing it is one possibility. I agree that non-local solutions should be viable. In a lot of ways, they make more intuitive sense than non-realism anyway.

BTW, my point about GHZ was not that it proved non-realism over non-locality. It is another of the no-go proofs - of which there are several - which focus on the realism assumption. These proofs are taken in different ways by the community. Since we don't disagree on the main point, we can drop this particular sidebar.
 
  • #63


ueit said:
In order to reject a theory based on Bell's theorem alone, that theory should have the property that the events in one part of the experimental setup (source, detector 1, detector 2) should not depend on the other parts (the statistical independence assumption).

The only theories that satisfy this assumption (and are therefore unable to reproduce QM's predictions) are the "billiard ball" type (no long-range forces, interactions only at direct collisions). Incidentally, Maxwell's theory of electromagnetism, Newtonian gravity, and Einstein's GR all have long-range forces, so the statistical independence assumption fails for them. Therefore, a modification of Maxwell's theory, while remaining local and realistic, could in principle reproduce QM's predictions.

So, the claim that local realism is excluded by Bell's theorem is patently false.

I seem to be stuck in the middle again... :)

There is no viable local realistic theory on the table to discuss at this point. You don't have one to offer, and "heroic" efforts by Santos and others (with variations on stochastic ideas) have so far fallen well short of convincing much of anyone. Bell's Theorem shows us how to dissect and attack such attempts. So I strongly disagree.
 
  • #64


Maaneli said:
Against `Realism'
Authors: Travis Norsen
Foundations of Physics, Vol. 37 No. 3, 311-340 (March 2007)
http://arxiv.org/abs/quant-ph/0607057

By the way, it might surprise you (and Travis for that matter) to learn that I have had a link to another of his earlier papers - somewhat similar to your citation - on my website for nearly 3 years:

Travis Norsen: EPR and Bell Locality, arXiv (2005)

...So please don't think that I limit my viewpoints. I respect differences of opinion and think they are healthy. But I also think that on this board, opinions should be distinguished from mainstream thought for the sake of those who don't follow things to the Nth degree.
 
  • #65


Thought experiment. Suppose we simulate on a classical computer (the bits are manipulated using local deterministic rules) a world described by quantum mechanics. In this world, there are observers who can do experiments and verify that Bell's inequality is violated in exactly the way predicted by QM. Nevertheless, the world they live in is ultimately described by the rules according to which the bits in the computer are manipulated.
 
  • #66


Maaneli said:
I have a hard time understanding how Pagels could possibly have reached that conclusion.
"The Cosmic Code: quantum physics as the language of nature"

Maaneli said:
<< So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests in terms of a common cause produced at emission possible?>>
No. But, as I said earlier, there is the common past hypothesis (that the emission and detection events share a common past) that is logically possible, although extremely implausible. Bell talks about this in his paper "Free Variables and Local Causality". More plausible and successful have been the nonlocal explanations, as well as the causally symmetric explanations.
Since the experimental designs seem to have (in my view anyway) common emission cause written all over them (and since the whole topic is open to speculation), I would rank that as more plausible than any of the other more exotic explanations for the correlations.

Maaneli said:
If you would like a classical analogue of Bell's inequality and theorem, read the first chapter of Tim Maudlin's book. He gives a perfectly clear and accurate classical analogue.
I take it you didn't like my polariscope analogy? (I really thought I had something there. :smile:)
I read what I could of Maudlin's first chapter at Google books. Nothing new or especially insightful there. I've read Price's book -- didn't like it. But thanks for the references and nice discussion with DrChinese et al.
I don't like the nonlocal explanations. Too easy. I'll continue, for the time being, working under the assumption that something (or things) about the physical meaning of Bell's theorem and Bell inequalities is being misinterpreted or missed.
 
  • #67


Interesting idea, Count. I did something like this (I was a computer science guy in a previous life).

It is impossible, using a standard - non-quantum - computer, to simulate the results of EPRB experiments without utilizing both polarizer settings in calculating the odds of any particular photon passing.

The "Does Photon Pass Polarizer x" function simply cannot be written without reference to the other polarizer while still obtaining the quantum results.

If you try to do something elaborate - say, in the "generate entangled photons" function, you pre-program both of them for every conceivable polarizer angle - you come close to the quantum results, but not perfectly.

In order to reproduce the quantum results, you have to either:
1) allow the two photons to "know" the polarizer settings before they've reached them (some kind of superdeterminism) and agree ahead of time on how they're going to behave; or
2) check to see whether the twin has reached its polarizer yet; if not, just go with 50/50. If it has, behave in a complementary way (non-locality).

The third option would be some kind of many-worlds simulation where we let objects continue to evolve in superposition until someone observes both, but I thought that a little too complicated to code.
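
In skeletal form, here's roughly what I mean (a sketch with my own toy response rule, not the actual code): a pre-programmed local strategy that gets equal-angle agreement exactly right comes in around 67% matching at 30 degrees, short of the quantum 75%:

```python
import math, random

def local_pair(theta_a, theta_b):
    # Both photons carry a shared random polarization fixed at emission
    # and answer locally -- neither sees the other polarizer's setting.
    lam = random.uniform(0, math.pi)
    def passes(setting):
        # Pass iff the hidden polarization is within 45 degrees
        # (mod 180) of this polarizer's setting.
        return abs(((lam - setting + math.pi / 2) % math.pi) - math.pi / 2) < math.pi / 4
    return passes(theta_a), passes(theta_b)

theta = math.radians(30)
trials = 100_000
matches = sum(a == b for a, b in (local_pair(0.0, theta) for _ in range(trials)))

print("local model match rate:", matches / trials)        # ~0.667
print("QM prediction, cos^2(30):", math.cos(theta) ** 2)  # 0.75
```

Tweaking the rule moves the error to other angles, but per Bell no local rule removes it at every angle at once.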
 
  • #68
Bell's Inequality

peter0302 said:
Each time Alice eats a tomato, Bob is more likely to eat a cucumber. Each time Alice can't finish her broccoli, Bob eats his carrots more often.

I have read this thread with great interest and marvelled at the logic above - why should Bob eat a cucumber when Alice eats a tomato?

So I have done a 'plastic balls' version of Bell's Inequality in what I hope is the simplest possible depiction.
I need to add a jpg of the quantum violation of Bell's Inequality, but cannot quite see how to do it.

Can someone offer advice? How could I devise a plastic balls jpg of the QM version of events?
http://www.ronsit.co.uk/weird_at_Heart.asp
 
  • #69


Maaneli said:
I have a hard time understanding how Pagels could possibly have reached that conclusion.
In case you haven't had a chance to check out Pagels' book, I can summarize his argument.

Nonlocality has to do with the spatially separated setups producing changes in each other via spacelike separated events.

Pagels' argument against nonlocality (wrt EPR-Bell tests at least) hinges on the randomness of the individual results. Quantitatively, we know that A is not producing changes in B and vice versa. Qualitatively, there's no way to know. The individual probabilities at one end remain the same no matter what happens at the other end. Speaking of the conditional probability at B given a detection at A is meaningless. The probabilities only have physical meaning wrt the accumulation of results of large numbers of individual trials. Because of the randomness of individual data sequences, nonlocality in EPR-Bell tests can't be conclusively established.

If the sequences from A and B are matched appropriately, then information about changes in the settings of the separated polarizers is in the cross-correlation. In effect, the global experimental design yields the quantum correlations -- which, in my view, is what would be expected if the deep cause of the entanglement is via common emission, or interaction, or transmission of a common torque, etc. (My polariscope analogy comes in handy here I think.)

Apparently, the only thing preventing a consensus wrt the common origin explanation for the correlations is that Bell inequalities are interpreted to exclude the possibility that the filters at A and B might have been filtering identical incident disturbances for any given pair of detection attributes.
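
The "individual probabilities stay the same" part is easy to check against the usual QM joint probabilities (a quick sketch of mine):

```python
import math

def p_A_passes(theta_a, theta_b):
    # Marginal at A: sum the assumed QM joint probabilities over both
    # possible outcomes at the far detector B.
    both_pass = 0.5 * math.cos(theta_a - theta_b) ** 2
    a_pass_b_fail = 0.5 * math.sin(theta_a - theta_b) ** 2
    return both_pass + a_pass_b_fail

# Whatever B's polarizer setting is, A's detection rate never budges:
for tb_deg in (0, 20, 45, 77):
    print(tb_deg, p_A_passes(0.0, math.radians(tb_deg)))  # always 0.5
```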
 
  • #70


peter0302 said:
We have to see Bell's inequality for what it is: the consequence of an assumption which Aspect and others have proven wrong. While we all agree on what that assumption is mathematically, we can't agree on what it means physically. But at the very least, we should be focusing on the assumption, and not any author's (including Bell's own) editorial comments or beliefs regarding it.
This makes sense to me. Nevertheless, it would be nice to know if experimental violations of Bell inequalities have any physical meaning -- and, if so, what. Might it be that there's no way to ascertain what the physical meaning of an EPR-Bell experiment is? I think this is possible, maybe even likely, and, if so, it would seem to reinforce the Copenhagen approach to interpreting the formalism and application of the quantum theory. (i.e., we can't possibly know the truth of a deep quantum reality, so there's no scientific point in talking about it)
 
