Entanglement "spooky action at a distance"

  • #1
Dragonfall
Entanglement "spooky action at a distance"

Why can't we think of entanglement as simply committing (without knowledge) to a random outcome, instead of "spooky action at a distance"?
 
  • #2


Dragonfall said:
Why can't we think of entanglement as simply committing (without knowledge) to a random outcome, instead of "spooky action at a distance"?


"Spooky Action at a Distance" (nonlocality) is not the only alternative consistent with the facts. But it is probably the more popular one.

The answer to your question is that Bell's Theorem demonstrates a mathematical relationship among the outcomes of measurements on entangled particles that is inconsistent with the idea that they are independent and random. Of course, the actual outcomes themselves are random when looked at separately. But when the outcome streams are correlated, the pattern becomes clear.

Specifically: the correlation of the outcomes follows the formula C = cos^2(theta), where theta is the relative angle between the measurement apparatuses. On the other hand, the formula associated with your hypothesis is C = 0.25 + cos^2(theta)/2. Experiments support the first formula, the one derived from Quantum Mechanics, and unambiguously reject the second.
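For concreteness, the two formulas can be compared numerically (a plain Python sketch; the function names are mine, and angles are in degrees):

```python
import math

def qm_correlation(theta_deg):
    """Match probability predicted by QM: C = cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

def preset_random_correlation(theta_deg):
    """Match probability if each particle simply carried a preset random
    outcome fixed at the source: C = 0.25 + cos^2(theta)/2."""
    return 0.25 + math.cos(math.radians(theta_deg)) ** 2 / 2

for angle in (0, 22.5, 45, 67.5, 90):
    print(f"theta={angle:5.1f}  QM={qm_correlation(angle):.3f}  "
          f"preset={preset_random_correlation(angle):.3f}")
```

The two predictions agree only at 45 degrees; everywhere else they differ, which is what makes the experiments decisive.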
 
  • #3


Dragonfall said:
Why can't we think of entanglement as simply committing (without knowledge) to a random outcome ...
Because in the global experimental design(s) characteristic of EPR-Bell tests the pairing of individual results (A,B) isn't done randomly. There's a very narrow (nanosecond scale) window within which the coincidence circuitry operates to produce pairs. The effect of such synchronization is that for an individual detection in, say, A's datastream, there should be, at most, one candidate (either a detection or a nondetection attribute) for pairing in B's datastream.

This interdependency between paired detection events at A and B is a function of the experimental designs necessary to produce EPR-Bell type entanglements and has, as far as I can tell, nothing to do with instantaneous or FTL transmissions.

If FTL transmissions really aren't involved, then any symbolic locality condition becomes simply a statistical independence condition, and this is just a byproduct of the experimental design.

For this reason, and also simply because there's no physical evidence for FTL transmissions, the best assumption is that FTL transmissions aren't involved in the production of quantum entanglement.
Dragonfall said:
... instead of "spooky action at a distance"?
As Dr. Chinese has pointed out, one doesn't have to attribute the observed correlation between the angular difference of the spatially separated polarizer settings and the rate of coincidental detection to "spooky action at a distance" -- or even to FTL transmissions.

For example, if, for any given coincidence interval, it's assumed that the polarizers at A and B interacted with the same incident disturbance, then it isn't difficult to understand the cos^2 angular dependence.
 
  • #4


Hi! I'm new here and interested in physics, as all of you are :) I'm speaking from the point of view of an amateur (hoping to change this in the future :)) so I hope you won't laugh too much at my contributions :D

Quantum entanglement is a process binding two or more particles together through space and time (if I've understood the definition correctly), and every change in the quantum state of the first particle leads simultaneously to the same change in the paired one, regardless of space and time.

Now I was wondering: does this mean transmitting information (by quantum states) at superluminal velocity? Can this one day be used for transmitting information more efficiently (actually, instantaneously) over greater distances? And what does a quantum state represent, and which features does it have?

best regards, Marin
 
  • #5


Marin said:
Hi! I'm new here and interested in physics, as all of you are :) I'm speaking from the point of view of an amateur (hoping to change this in the future :)) so I hope you won't laugh too much at my contributions :D

Quantum entanglement is a process binding two or more particles together through space and time (if I've understood the definition correctly), and every change in the quantum state of the first particle leads simultaneously to the same change in the paired one, regardless of space and time.

Now I was wondering: does this mean transmitting information (by quantum states) at superluminal velocity? Can this one day be used for transmitting information more efficiently (actually, instantaneously) over greater distances? And what does a quantum state represent, and which features does it have?

best regards, Marin

No it does not mean FTL communications or signals. One can only say that there is an FTL "influence".
 
  • #6


Because in the global experimental design(s) characteristic of EPR-Bell tests the pairing of individual results (A,B) isn't done randomly. There's a very narrow (nanosecond scale) window within which the coincidence circuitry operates to produce pairs. The effect of such synchronization is that for an individual detection in, say, A's datastream, there should be, at most, one candidate (either a detection or a nondetection attribute) for pairing in B's datastream.

This interdependency between paired detection events at A and B is a function of the experimental designs necessary to produce EPR-Bell type entanglements and has, as far as I can tell, nothing to do with instantaneous or FTL transmissions.

If FTL transmissions really aren't involved, then any symbolic locality condition becomes simply a statistical independence condition, and this is just a byproduct of the experimental design.

For this reason, and also simply because there's no physical evidence for FTL transmissions, the best assumption is that FTL transmissions aren't involved in the production of quantum entanglement.
I'm not sure I follow your argument perfectly, but it sounds as though you're saying that the coincidence circuitry may be the source of the correlations, and not anything occurring with or between the entangled particles. If that were true, then any two particles would produce the correlations, not just entangled particles.

Perhaps I'm misunderstanding your argument.
 
  • #7


peter0302 said:
I'm not sure I follow your argument perfectly, but it sounds as though you're saying that the coincidence circuitry may be the source of the correlations, and not anything occurring with or between the entangled particles. If that were true, then any two particles would produce the correlations, not just entangled particles.

Perhaps I'm misunderstanding your argument.

I was in a hurry, as I am now. :smile: Sorry for any misunderstanding.

My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The data correlations themselves are indeed produced by the experimenters via the experimental design. But yes, it's presumed that the deep cause of the correlations is whatever is happening at the quantum level.
 
  • #8


ThomasT said:
I was in a hurry, as I am now. :smile: Sorry for any misunderstanding.

My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The data correlations themselves are indeed produced by the experimenters via the experimental design. But yes, it's presumed that the deep cause of the correlations is whatever is happening at the quantum level.

It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever). In other words, that in EPR correlations, one simply finds back the correlation of the common properties the source of the two particles has induced in them. A bit like the source is a vegetable chopper, and it cuts vegetables in two pieces to send them off to two different locations. It randomly picks vegetables (say, a salad, a tomato, a cucumber), but then it takes, say, a salad, cuts it in two pieces, and sends off half a salad to Alice and to Bob. Alice by herself sees randomly the arrival of half a salad, half a tomato, half a cucumber, ... and Bob too, but of course when we compare their results, each time Alice had half a salad, Bob also had half a salad etc...
Is that what you mean ?
 
  • #9


I don't know if that's what he meant, but that's what I meant.
 
  • #10


My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.
Yes.


The data correlations themselves are indeed produced by the experimenters via the experimental design. But yes, it's presumed that the deep cause of the correlations is whatever is happening at the quantum level.
Ok. On one level I agree that the correlations are not evident until the results of measurements are compared using the coincidence circuitry. But the coincidence circuitry merely compares two measurements that have already been made - it does not fabricate the results. If we extrapolate back in time to attempt to discern what happened, we cannot account for the fact that the two measurement events are outside one another's light cones. How then did the correlation occur?

The contenders have always been:
- Superdeterminism: the entire system, including the experimental components, was pre-ordained to act the way it did, and all conspired to produce the results we see.

- Hidden variables: there was something hidden in the particles that we couldn't detect that determined the outcome. Bell disproved naive hidden variable theories but more sophisticated ones such as Bohm still are popular among some.

- Many Worlds: Photon A splits in two at the polarizer, and Photon B splits in two at the other polarizer, and when both reach the coincidence counter, a total of four worlds are created; the odds of being in any one of those four are governed by Malus' law, depending on the difference in angles between the polarizers.

- Copenhagen: the two photons' passage through the polarizers isn't actually a measurement for the experimenter, because it hasn't been observed yet by him. So the wave function hasn't collapsed, and the system continues to evolve in the superposed state until both measurements have been observed by the same observer. (Unfortunately this doesn't account for the fact that two experimenters could independently view the results of their respective photons, meet, and then compare notes - each believes that he caused the other's wavefunction to collapse. Who's right?) The fact that wavefunction collapse has no objective and logically self-consistent definition is CI's greatest failing IMO.

If I understand you right, you're arguing for superdeterminism?
 
  • #11


vanesch said:
It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever). In other words, that in EPR correlations, one simply finds back the correlation of the common properties the source of the two particles has induced in them. A bit like the source is a vegetable chopper, and it cuts vegetables in two pieces to send them off to two different locations. It randomly picks vegetables (say, a salad, a tomato, a cucumber), but then it takes, say, a salad, cuts it in two pieces, and sends off half a salad to Alice and to Bob. Alice by herself sees randomly the arrival of half a salad, half a tomato, half a cucumber, ... and Bob too, but of course when we compare their results, each time Alice had half a salad, Bob also had half a salad etc...
Is that what you mean ?
Alice and Bob's Salad instead of Bertlmann's socks, right? :smile:
 
  • #12


peter0302 said:
Alice and Bob's Salad instead of Bertlmann's socks, right? :smile:

Yup :approve: o:)
 
  • #13


vanesch said:
It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever).
Is that what you mean ?
I thought my statement was pretty clear. :smile: Maybe not. I'm in a hurry just now, but will return to reply to your and peter's questions in an hour or so.
 
  • #14


vanesch said:
It sounds like you are saying that EPR correlations are obtained by a common property of the two emitted particles ("torque", or "disturbance" or whatever). In other words, that in EPR correlations, one simply finds back the correlation of the common properties the source of the two particles has induced in them. A bit like the source is a vegetable chopper, and it cuts vegetables in two pieces to send them off to two different locations. It randomly picks vegetables (say, a salad, a tomato, a cucumber), but then it takes, say, a salad, cuts it in two pieces, and sends off half a salad to Alice and to Bob. Alice by herself sees randomly the arrival of half a salad, half a tomato, half a cucumber, ... and Bob too, but of course when we compare their results, each time Alice had half a salad, Bob also had half a salad etc...
Is that what you mean ?
I'd say it's more like a Caesar salad without the anchovies. Just kidding. :rolleyes:

Here's what I said:
My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The above statement pertains to the models and experimental designs (that I've seen) involved in producing quantum entanglements.

This is, and will likely forever remain, an assumption regarding what is actually happening at the quantum level. Nevertheless, this assumption of "common property [or properties] of the two emitted particles [or spatially separated groups of particles]" is an integral part of the designs of the experiments that produce entangled data (eg., correlations of the type gotten via typical optical Bell tests).

This is my conceptual understanding of the nature of quantum entanglement, and it's the way that at least some of the people who do the experiments that produce quantum entanglement think about it. And there's simply no reason in the first place to entertain the idea that quantum entanglement has anything to do with FTL propagation of anything.
 
  • #15


peter0302 said:
Ok. On one level I agree that the correlations are not evident until the results of measurements are compared using the coincidence circuitry. But the coincidence circuitry merely compares two measurements that have already been made - it does not fabricate the results. If we extrapolate back in time to attempt to discern what happened, we cannot account for the fact that the two measurement events are outside one another's light cones. How then did the correlation occur?
Zap two spatially separated groups of atoms in an identical way, and the two groups of atoms are entangled with respect to the common motional properties induced by the common zapping.

Two opposite-moving optical disturbances emitted at the same time by the same atom are entangled with respect to the motion of the atom at the time of emission.

The experimental correlations are produced by analyzing the entangled properties with identical instruments via a global experimental design.

peter0302 said:
The contenders have always been:
- Superdeterminism: the entire system, including the experimental components, was pre-ordained to act the way it did, and all conspired to produce the results we see.

- Hidden variables: there was something hidden in the particles that we couldn't detect that determined the outcome. Bell disproved naive hidden variable theories but more sophisticated ones such as Bohm still are popular among some.

- Many Worlds: Photon A splits in two at the polarizer, and Photon B splits in two at the other polarizer, and when both reach the coincidence counter, a total of four worlds are created; the odds of being in any one of those four are governed by Malus' law, depending on the difference in angles between the polarizers.

- Copenhagen: the two photons' passage through the polarizers isn't actually a measurement for the experimenter, because it hasn't been observed yet by him. So the wave function hasn't collapsed, and the system continues to evolve in the superposed state until both measurements have been observed by the same observer. (Unfortunately this doesn't account for the fact that two experimenters could independently view the results of their respective photons, meet, and then compare notes - each believes that he caused the other's wavefunction to collapse. Who's right?) The fact that wavefunction collapse has no objective and logically self-consistent definition is CI's greatest failing IMO.

If I understand you right, you're arguing for superdeterminism?

I think the assumption of some sort of determinism underlies all science.

My understanding of wave function collapse via the CI is that once an individual qualitative result is recorded, then all of the terms of the wave function that don't pertain to that result are discarded. In this sense, the wave function describes, quantitatively, the behavior of the experimental instruments.

I don't see what isn't objective or self-consistent about the CI.

The essence of the CI, as I see it, is that statements regarding events and behavior that haven't been observed are speculative. Objective science begins at the instrumental level.
Hence a fundamental quantum of action, and limitations on what we can ever possibly know.
 
  • #16


No, you're trapped in a classical understanding of entanglement.

Bell's theorem proves that there is no actual property that can be common to the entangled particles before their detection that can account for the correlations that we see. It can get close, but not all the way there.

It turns out that the probability of joint detection depends solely on the difference in angle between the two polarizers. Moreover, it works even if the polarizer angles are set a nanosecond before detection. There's nothing about the experimental setup that could cause that. There's no conceivable hidden variable scheme that could cause the photons to behave that way. It is as though they "know" what the other polarizer angle was.

I think the assumption of some sort of determinism underlies all science.
ACK. No, that's the whole point! It underlies all of *your* *common* *sense*.

My understanding of wave function collapse via the CI is that once an individual qualitative result is recorded, then all of the terms of the wave function that don't pertain to that result are discarded. In this sense, the wave function describes, quantitatively, the behavior of the experimental instruments.

I don't see what isn't objective or self-consistent about the CI.
Define a qualitative result being recorded. By whom? By a computer? A person? A cat? It's subjective. There is no consistent definition of observer or observation. It all depends on the experiment. Heisenberg even said this. He said the quantum/classical divide depends on the experiment. It's not an objective process. And it's not well understood (in CI). The only thing that is well understood is how to calculate the odds.
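The Bell-type conflict described above can be made concrete with a short numerical sketch (assumptions: the standard CHSH angles, the usual +/-1 outcome convention, and one illustrative preset-property model of my own choosing; none of these names come from the thread):

```python
import math
import random

def chsh(E):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')| for the
    standard polarizer angles (degrees)."""
    a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

def qm_E(a, b):
    """QM correlation of +/-1 outcomes for polarization-entangled photons."""
    return math.cos(2 * math.radians(a - b))

def lhv_E(a, b, trials=200_000):
    """One illustrative local hidden variable model: each photon pair carries
    a preset polarization angle lam; a polarizer at angle x deterministically
    outputs +1 when lam is within 45 degrees of x (mod 180), else -1."""
    rng = random.Random(0)  # same hidden variable ensemble for every setting pair
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0.0, 180.0)
        out_a = 1 if math.cos(2 * math.radians(lam - a)) >= 0 else -1
        out_b = 1 if math.cos(2 * math.radians(lam - b)) >= 0 else -1
        total += out_a * out_b
    return total / trials

print("QM  S =", chsh(qm_E))   # 2*sqrt(2) = 2.828...
print("LHV S =", chsh(lhv_E))  # about 2, and provably never above 2
```

QM gives S = 2*sqrt(2) at these angles, while the preset-property model lands at 2, the most any local hidden variable scheme can reach.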
 
  • #17


ThomasT said:
I'd say it's more like a Caesar salad without the anchovies. Just kidding. :rolleyes:

Here's what I said:
My understanding is that it's assumed, at least tacitly, that in EPR-Bell experiments the source of the entanglement at the quantum level is, eg., emission from the same atom of two opposite-moving disturbances, or transmission of an identical torque to two spatially separated particles or groups of particles, or direct interaction of two particles, or however else one might produce spatially separated, yet identical, properties to correlate with respect to one global operation or another.

The above statement pertains to the models and experimental designs (that I've seen) involved in producing quantum entanglements.

I'm trying to find out whether you understood the difficulty presented by Bell's theorem or not. If you think that the correlations found in the outcomes in EPR experiments are due to a common property to the two particles, in other words, because the two particles are, say, identical copies of one another, determined by the fact that they have been emitted by the same source (the same atom or so), and hence have random, but identical spin each time, or something else, then:
1) that wouldn't have surprised anybody
2) you have not understood Bell's theorem.

The surprise resides in the fact that the correlations found cannot be explained that way: numerically they don't fit. With the half-a-vegetable emitter, you cannot obtain the same correlations as those of an EPR experiment. That's exactly the content of Bell's theorem. Of course you can find the perfect correlations in the case of identical analysis. That's no surprise. That's like "each time bob finds half a salad, alice finds half a salad too". Easy. That's because they came from the same source: the chopper.
The crazy thing about EPR results is something like: AT THE SAME TIME, we also have: "each time Bob finds a salad, the color of Alice's vegetable is random".
That's kind of impossible with our chopper: each time Bob finds a salad, Alice was supposed to find a salad too, so if she decided not to look at the kind of vegetable, but rather at its color, she should have found systematically "green". Well, no. She finds red or dark green/blue also.

Now, maybe you know this, but then I don't understand your statements, which then sound tautological to me: "particles show entangled behavior because they became entangled at their source". Sure. But that doesn't explain the "paradoxical" correlations AND lack of correlation at the same time.
 
  • #18


Yes exactly. Each time Alice eats a tomato, Bob is more likely to eat a cucumber. Each time Alice can't finish her broccoli, Bob eats his carrots more often.

Those are the types of wacky correlations that entanglement produces. Yes, yes I suppose you could construct very elaborate explanations for all that. Fortunately Bell proved mathematically that NO explanation can work.
 
  • #19


There are some loopholes in Bell's theorem. The most obvious one is the assumption that even though the theory is assumed to be deterministic, we can assume that the observer can choose the experimental set up at will. This is impossible, because if the observer had "free will" that would violate determinism.

This is discussed in detail in http://arxiv.org/abs/quant-ph/0701097
 
  • #20


Count Iblis said:
There are some loopholes in Bell's theorem. The most obvious one is the assumption that even though the theory is assumed to be deterministic, we can assume that the observer can choose the experimental set up at will. This is impossible, because if the observer had "free will" that would violate determinism.

This is discussed in detail in http://arxiv.org/abs/quant-ph/0701097

Because it is 't Hooft saying it, it gets more visibility than an article like this otherwise would. But there is plenty to criticize, and the idea of "superdeterminism" is not considered a loophole in Bell's Theorem. Keep in mind that the essential question is whether a local deterministic theory can yield predictions equivalent to QM. A superdeterministic theory comes no closer!

Keep in mind that such a theory comes with its own rather substantial baggage. It would be somewhat like saying that Bell's Theorem is flawed because you believe in God, and Bell's Theorem tacitly assumes there is no God. I think we can all acknowledge that if there is some unseen force that changes the results of only the experiments we perform to have different values than they really are - then we will be blissfully unaware of this and have incorrect scientific theories. Except that these "incorrect" theories will still work and be useful "as if" they were correct all along.

(Side comment: I guess the Pythagorean Theorem is wrong similarly.)

The fact is that even if our choice of measurements is pre-determined because we don't have free will, that in no ways explains why the results match QM's predictions and not those of local realistic theories.
 
  • #21


Question about the derivation of Bell's theorem. Bell assumes, does he not, not only the freedom to choose experimental conditions, but also the freedom to choose any continuous real value for the parameters of those conditions. In other words, in his derivations he clearly uses integral calculus, and so naturally he's assuming continuity in the possible values. (And I've seen dumbed-down derivations too, but they still inherently assume arbitrary freedom to choose parameters.)

But we all know that quantum values are not continuous; they are quantized - integer or half-integer multiples of hbar, for example. So we can't actually choose any arbitrary value for our polarizer measurement; we are slightly restricted (we don't have *totally* free will). If Bell's theorem is re-derived using discrete sums instead of continuous integration, do the inequalities still come out the same?

Does the question even make any sense?
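For what it's worth, the CHSH form of the inequality can be checked with no integrals at all: any local hidden variable model is a probabilistic mixture of deterministic strategies, and the 16 deterministic strategies for two settings per side can simply be enumerated (a sketch; variable names are mine):

```python
from itertools import product

# A deterministic local strategy pre-assigns an outcome of +1 or -1 to
# each of Alice's two settings (A1, A2) and each of Bob's (B1, B2).
# Any local hidden variable model is a mixture of these 16 strategies,
# so the CHSH value can never exceed the deterministic maximum.
best = 0
for A1, A2, B1, B2 in product((-1, 1), repeat=4):
    S = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
    best = max(best, abs(S))
print(best)  # 2 -- the same bound, with no continuity assumption
```

A mixture over deterministic strategies can only average these values, so, at least for the CHSH form, the bound of 2 is unchanged whether the hidden variable is continuous or discrete; the integrals in the usual derivation are a convenience, not a load-bearing assumption.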

On a side note, I have read some of the papers on scale relativity, which, for those of you who are unfamiliar with it, states that no length measurement, regardless of reference frame or scale, will ever be less than the Planck Length (similar to the invariance of the speed of light). The author claims to derive the schrodinger equation and other postulates of QM using this theory. Some have dismissed it as "numerology" but I find it remarkable how much of QM drops into place once you quantize spacetime. Thus I have to wonder if EPR bell experiments can likewise be accounted for by quantizing polarization.
 
  • #22


From 't Hooft's paper:
It is easy to argue that, even the best conceivable computer, cannot compute ahead of time what Mr. Conway will do, simply because Nature does her own calculations much faster than any man-made construction, made out of parts existing in Nature, can ever do. There is no need to demand for more free will than that.
This, to me, is the coup de grace. I've said this many times myself - you need more than the entire universe in order to observe/predict the entire universe (whether in terms of speed or size or accuracy). If you're using billiard balls to measure the location of billiard balls, you can never know the location of ALL the billiard balls or know their location more precisely than the diameter of a billiard ball!

But that's just an intuitive derivation of the HUP. And even given that uncertainty in billiard ball positions, they don't behave like waves, nor do they exhibit entanglement.
 
  • #23


peter0302 said:
No, you're trapped in a classical understanding of entanglement.
Yes, my understanding of entanglement has a classical basis.

peter0302 said:
Bell's theorem proves that there is no actual property that can be common to the entangled particles before their detection that can account for the correlations that we see.
I don't think Bell's theorem proves anything about nature. Bell inequalities are simply arithmetic expressions. (with respect to N properties of the members of some population a certain numerical relationship will always hold).

peter0302 said:
It turns out that the probability of joint detection is dependent solely on the difference in angle between the two polarizers.
Yes, but only if the experimental design matches up the data sequences at A and B according to the assumption of common (prior to filtration) cause -- in other words, the assumption that what is getting analyzed at A during a certain interval is identical to what is getting analyzed at B during that same interval.

peter0302 said:
Moreover, it works even if the polarizer angles are set a nanosecond before detection.
Of course, why shouldn't it? There's always one, and only one, angular difference associated with any given pair of detection attributes.

peter0302 said:
There's nothing about the experimental set up that could cause that.
Apparently there is something about the experimental setup that causes it, because it's been reproduced at least hundreds of times.

peter0302 said:
There's no conceivable hidden variable scheme that could cause the photons to behave that way. It is as though they "know" what the other polarizer angle was.
If A and B are analyzing the same thing, then it's easily understandable. If they aren't, then it's a complete mystery.
Note that we can't say anything more about what's being analyzed except in accordance with the experimental design. So, if you're doing optical Bell tests using polarizers, then nothing can be said about the polarization of photons prior to production at the detectors. But the working assumption is that opposite-moving, polarizer-incident disturbances associated with paired attributes are identical.

peter0302 said:
Define a qualitative result being recorded. By whom? By a computer? A person? A cat? It's subjective. There is no consistent definition of observer or observation. It all depends on the experiment. Heisenberg even said this. He said the quantum/classical divide depends on the experiment. It's not an objective process. And it's not well understood (in CI). The only thing that is well understood is how to calculate the odds.
Just because the decision about where to draw the line can seem a bit arbitrary at times doesn't mean that, once the line is drawn, or once a qualitative result is recorded, it's not objective.

The CI is the super-realistic, instrumentalist interpretation of quantum theory -- and therefore the most objective way to look at it.
 
  • #24


ThomasT said:
I don't think Bell's theorem proves anything about nature. Bell inequalities are simply arithmetic expressions. (with respect to N properties of the members of some population a certain numerical relationship will always hold).

Well, here's the surprise: those inequalities are violated by:
1) quantum mechanical predictions
2) experimental results of an ideal EPR experiment.

As you said, they should normally hold. They don't. That means that you CANNOT find a set of properties that "predict" the results, because such properties would, as you correctly point out, have to satisfy numerical relationships that always hold.

This is as shocking as the following: there's a theorem that says that if you have two sets of objects, and you count the objects in the first set and find m, and you count the objects in the second set and find n, then if you count the objects in both sets together, you should find n + m. You now take EPR-marbles in two bags. You count the marbles in the first bag and you find 5. You count the marbles in the second bag and you find 3. You count the marbles in the first bag and then in the second bag, and you find 6. Huh? THAT's the surprise: an "obvious" arithmetic property simply doesn't hold.

Yes, but only if the experimental design matches up the data sequences at A and B according to the assumption of common (prior to filtration) cause -- in other words, the assumption that what is getting analyzed at A during a certain interval is identical to what is getting analyzed at B during that same interval.

It would be even more surprising if the correlations held between data that were NOT matched up!

Of course, why shouldn't it? There's always one, and only one, angular difference associated with any given pair of detection attributes.

The point is that each detection phenomenon individually doesn't "know" what the setting was at the other side. So the only way to "correlate" with this difference is that we are measuring a common property of the two objects. Well, it turns out that correlations due to common properties have to obey the arithmetic inequalities that Bell found out, and lo and behold, the actual correlations that are observed, and that are predicted by quantum mechanics, do NOT satisfy those arithmetic inequalities.

Apparently there is something about the experimental setup that causes it, because it's been reproduced at least hundreds of times.

Point is that it can't be something that comes from the source! That's the difficult part.

If A and B are analyzing the same thing, then it's easily understandable.

No, it isn't. If they were analysing the same thing, Bell's arithmetic inequalities should hold. And they don't in this case.


Note that we can't say anything more about what's being analyzed except in accordance with the experimental design. So, if you're doing optical Bell tests using polarizers, then nothing can be said about the polarization of photons prior to production at the detectors. But the working assumption is that opposite-moving, polarizer-incident disturbances associated with paired attributes are identical.

No, not even that. No property emitted from the source could produce the observed correlations. Again, because if they did, they should follow the Bell inequalities, which are, as you correctly point out, nothing else but arithmetic expressions which (should) hold for any set of N properties (emitted from the source). And they don't.
 
  • #25


peter0302 said:
Question about the derivation of Bell's theorem. He assumes, does he not, not only the freedom to choose experimental conditions, but also the freedom to choose any continuous real value for the parameters of those conditions. In other words in his derivations he clearly uses integral calculus, and so naturally he's assuming continuity in the possible values. (And I've seen dumbed down derivations too but they still inherently assume arbitrary freedom to choose parameters).

But we all know that quantum values are not continuous, they are quantized - multiples or half multiples of hbar for example. So we can't actually choose any arbitrary value for our polarizer measurement; we are slightly restricted (we don't have *totally* free will). If Bell's theorem is re-derived using discontinuous summations instead of continuous integration, do the inequalities still come out the same?

Does the question even make any sense?

On a side note, I have read some of the papers on scale relativity, which, for those of you who are unfamiliar with it, states that no length measurement, regardless of reference frame or scale, will ever be less than the Planck Length (similar to the invariance of the speed of light). The author claims to derive the Schrödinger equation and other postulates of QM using this theory. Some have dismissed it as "numerology" but I find it remarkable how much of QM drops into place once you quantize spacetime. Thus I have to wonder if EPR bell experiments can likewise be accounted for by quantizing polarization.

It is not really necessary for there to be free will for the inequality to be violated. You don't really need to make last minute polarizer setting choices. It just makes it easier to see that there is no signal influence between observers that accounts for the results.

If you set Alice & Bob's polarizers at 120 degrees apart (say Alice at 0 and Bob at 120 degrees) or -120 degrees apart (Alice at 0 and Bob at 240 degrees), you get the same coincidence results, 25% match. Classically, it cannot be less than 33.3% if you expect internal consistency at those angle settings. That is Bell's Theorem (Mermin variation).

If you try to put together a possible set of results for angles A, B and C (all 120 degrees apart) you will quickly see that it cannot be made to yield results consistent with experiment. So you will see that free will is NOT an assumption of Bell's Theorem.
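The counting argument above can be checked by brute force. Here is a minimal sketch (my own check of the arithmetic, not code from the thread): enumerate every possible pre-assigned outcome set for the three angles and compare the worst-case classical match rate against the quantum prediction for settings 120 degrees apart.

```python
from itertools import product
from math import cos, radians

# A local hidden-variable model pre-assigns an outcome (+1 or -1) to
# each of the three angles A = 0, B = 120, C = 240 degrees; both
# photons of a pair carry the same assignment.
pairs = [(0, 1), (0, 2), (1, 2)]  # the three distinct setting pairs

worst = 1.0
for assignment in product([+1, -1], repeat=3):
    # fraction of setting pairs whose pre-assigned outcomes agree
    match = sum(assignment[i] == assignment[j] for i, j in pairs) / len(pairs)
    worst = min(worst, match)

print(worst)                       # classical minimum match rate: 1/3
qm = cos(radians(120)) ** 2        # quantum prediction: 0.25
print(qm)
```

The classical floor of 1/3 is just the pigeonhole principle: among three binary answers, at least one of the three pairs must agree. The quantum 25% sits below that floor, which is the Mermin-style contradiction.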

A better question is to ask whether Fair Sampling is a valid assumption. And the answer to that is similar to the answer for any scientific experiment. All scientific theory is essentially an extrapolation of the results of a finite series of scientific experiments. Were those experiments biased in some way, it is possible we could find that relativity is wrong, as are all theories. But that is part and parcel of the scientific method and has nothing to do with Bell's Theorem. So I consider such discussion more of a philosophical topic rather than a topic specific to entanglement.

If you are interested in that philosophical topic, I might recommend "How the Laws of Physics Lie" by Nancy Cartwright. It is excellent.
 
  • #26


Ok, let me put it a different way. Forget about "free will." Let's just focus on the quantization of spin. Does Bell's inequality hold just as easily for discontinuous functions as for continuous ones?
 
  • #27


peter0302 said:
Ok, let me put it a different way. Forget about "free will." Let's just focus on the quantization of spin. Does Bell's inequality hold just as easily for discontinuous functions as for continuous ones?

It relates to the QM formula, which is continuous. If you want to posit a discontinuous function or discontinuous underlying reality, then you will want some experimental support for it. There isn't anything like that at this time.

However, spin is quantized in one critical sense. It is basically -1 or +1 (can be scaled from 0 to 1 as well) at any point in time that you choose to measure it. Once it is measured, it then takes on an angular value and all non-commuting components (relative to that angular value) are now reset to some random value (-1 or +1). For spin 1 particles, non-commuting components are offset 45 degrees. For spin 1/2 particles, non-commuting components are offset 90 degrees.
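As an illustration only (a toy model of my own, not DrChinese's formulation): the "reset on measurement" picture for photon polarization can be sketched as a projective measurement with a cos^2 (Malus-type) probability rule. The function name and structure here are invented for the sketch.

```python
import random
from math import cos, radians

def measure(photon_angle, polarizer_angle, rng=random.random):
    """Toy projective polarization measurement (cos^2 rule).

    Returns (outcome, new_angle): the photon passes (+1) with
    probability cos^2 of the relative angle and collapses onto the
    polarizer axis, or is blocked (-1) and collapses onto the
    orthogonal axis; its prior angle is forgotten either way.
    """
    p_pass = cos(radians(polarizer_angle - photon_angle)) ** 2
    if rng() < p_pass:
        return +1, polarizer_angle
    return -1, polarizer_angle + 90
```

Repeating the same measurement then always repeats the result, while an intervening measurement at 45 degrees re-randomizes the original component, matching the "reset to some random value" description above.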
 
  • #28


It relates to the QM formula, which is continuous. If you want to posit a discontinuous function or discontinuous underlying reality, then you will want some experimental support for it. There isn't anything like that at this time.
Perhaps not, but if postulating one solves all of the interpretive difficulties, such as the way scale relativity claims to, then it might be quite useful.
 
  • #29


vanesch said:
Now, maybe you know this, but then I don't understand your statements, which then sound tautological to me: "particles show entangled behavior because they became entangled at their source". Sure. But that doesn't explain the "paradoxial" correlations AND lack of correlation at the same time.
Bell's theorem would have us expect that the intensity of light transmitted via crossed polarizers should vary as a linear function of the angular difference between the polarizers.
On the other hand we see in EPR-Bell tests as well as in classical polariscopic setups that the intensity of light transmitted via crossed polarizers varies as cos^2 the angular difference.

This says to me that there is a relationship, a connection between what is happening in optical Bell tests to produce the observed correlations and what is happening in a classical polariscopic setup to produce the observed Malus Law angular dependency.

Granted that the numbers are a bit different, but I'm trying for a conceptual understanding of what is happening to produce the EPR-Bell correlations.

The intensity of the light transmitted by the analyzing polarizer in a polariscopic setup corresponds to the rate of coincidental detection in an EPR-Bell setup. Propagating between the polarizers in both setups is an identical optical disturbance. In a polariscopic setup, the light that is transmitted by the first polarizer is identical to the light that is incident on the analyzing polarizer. In an EPR-Bell setup, the light incident on the polarizer at A is identical to the light incident on the polarizer at B during any given coincidence interval.
 
  • #30


vanesch said:
THAT's the surprise. An "obvious" arithmetic property simply doesn't hold.
I would expect the correlation between angular difference and rate of coincidental detection to be linear only if (1) my formulation manifested the assumption of common emission cause and (2) my "theorem" also assumed that common emission cause must produce a linear functional relationship between the correlated variables.

vanesch said:
It would be even more surprising if the correlations held between data that were NOT matched up!
Yes, but that doesn't happen, does it? And the fact that the data does have to be very carefully matched according to the assumption of common emission cause is further support for the assumption that there is a common emission cause.

vanesch said:
Point is that it can't be something that comes from the source! That's the difficult part.
It might seem that it's even more difficult to see why Bell's theorem doesn't rule out emission produced or pre-polarization entanglement. But it isn't, and it doesn't.

vanesch said:
If they were analysing the same thing, Bell's arithmetic inequalities should hold.
Not necessarily.

vanesch said:
No property emitted from the source could produce the observed correlations.
And yet this is what's generally assumed to be happening. The models are based on common properties emitted from the source. As far as I know, there aren't any working models based on FTL or instantaneous action at a distance assumptions that are used by experimenters in designing and doing EPR-Bell experiments.
 
  • #31


vanesch said:
If they were analysing the same thing, Bell's arithmetic inequalities should hold.
The inequalities are violated because they require specific (pre-filtration) values for the properties that are being related. However, such specific values aren't supplied by the quantum theory because they aren't supplied via experiment. Anyway, such specific values are superfluous and irrelevant with regard to calculating the probable results of EPR-Bell tests, since the values of these common properties are assumed to be varying randomly from emitted pair to emitted pair.

So, a common emission cause can be assumed (and is of course necessary for a local understanding of the results) as long as no specific value is given to the identical properties (the hidden variables) incident on the polarizer.

To reiterate, it doesn't matter what the value of the common property is, just that it be the same at both ends wrt any given coincidence interval.
 
  • #32


Thomas, recently certain non-local models have been ruled out as well...
 
  • #33


"Superdeterminism" has been mentioned a few times, also called "conspiracy", particularly by Bell in "Free variables and local causality". Note that both are pejorative terms. Note also, however, that only deterministic evolution of probabilities and correlations, etc. is required, not deterministic evolution of particle properties. The trouble with ruling out classical models with this sort of property is that quantum theory applied to a whole experimental apparatus --- instead of only to the two particle quantum state that is supposed to cause the apparatus to register synchronized discrete events --- predicts the same type of deterministic evolution of probabilities and correlations of the whole experimental apparatus as a classical model requires. So, if superdeterminism or conspiracy is not acceptable, quantum theory is not acceptable (but of course quantum theory is acceptable, ...). See my "Bell inequalities for random fields", arXiv:cond-mat/0403692, published as J. Phys. A: Math. Gen. 39 (2006) 7441-7455, for a discussion.

What we have to let go of is the idea that the individual events are caused by two particles and their (pre-existing) properties. The no-go theorems require assumptions that are very hard to let go of for classical particles but do not hold at all for a classical field theory (more is needed, however; probabilities over field theories are mathematically not easily well-defined, requiring us to use what are called continuous random fields; see the above reference).

Although this is superficially a conceptually simple way to think about experiments that violate Bell inequalities, a great deal of subtlety is required. Bell inequalities are not confusing for no reason. In particular, the experimental apparatus has to be modeled to allow this point of view, not just a pair of particles. This is a heavy enough cost that for practical purposes it's best to come to an understanding with quantum theory. Furthermore, we have to think of an experimental apparatus that violates Bell inequalities as in a coarse-grained equilibrium (despite the discrete events that are clearly a signature of fine-grained nonequilibrium). For other experiments, at least, we also have to explicitly model quantum fluctuations in a classical formalism, but quantum fluctuations are not essential to understanding the violation of Bell inequalities. If we adopt all these qualifications, measurement of the experimental settings of polarization direction and the timings of measurement events (in experiments such as that by Weihs, arXiv:quant-ph/9810080) are compatible, so they as much form the basis for classical models as for quantum mechanical models.

There are, however, very many other ways of reconciling oneself to the violation of Bell inequalities in the extensive literature. Pick one you like if at all possible; the alternative of rolling your own won't be easy.
 
  • #34


Sadly I really think these qualitative discussions are not very helpful. This is one of those instances where it makes sense only to talk about the mathematics.

Mathematically, Bell's theorem assumes that the probability of detection at A is independent of the probability of detection at B. QM violates the resulting inequality, and therefore violates this assumption. Any interpretive framework of QM must therefore account for the fact that the probability of detection at A is, in fact, dependent on detection at B, and vice versa. One explanation is a common cause, another is non-local communication. Both are equally plausible at this juncture.
 
  • #35


ThomasT said:
Bell's theorem would have us expect that the intensity of light transmitted via crossed polarizers should vary as a linear function of the angular difference between the polarizers.

Not at all. Bell's theorem states that a set of correlations between measurements of which the correlations are due to a common origin, must satisfy certain arithmetic relationships.
Bell's theorem doesn't state anything about photons, polarizations, optics, quantum mechanics or whatever. It simply states something about the possible correlations that can be caused by a common cause. It could have been stated even if there had never been any quantum mechanics. Only, with classical physics, it would have sounded almost trivial.

In other words, it is a property of a set of correlations, wherever they come from, if they are assumed to come from a common cause. Bell's theorem is hence something that applies to sets of correlations. It can be formulated more generally, but in its simplest form, applied to a specific set of correlations, it goes like this:

Suppose that we can ask 6 questions about something. The questions are called A, B, C, D, E and F. To each question, one can have an answer "yes", or "no".
As I said, Bell's theorem is more general than this, but this can be a specific application of it.

We assume that we have a series of such objects, and we are going to consider that to each of the potential questions, each object has a potential answer.

Now, consider that we pick one question from the set {A,B,C}, and one question from the set {D,E,F}. That means that we have two questions, and hence two answers.

There are 9 possible sets of questions:

(A,D)
(A,E)
(A,F)
(B,D)
(B,E)
(B,F)
(C,D)
(C,E)
(C,F)

Let us call a generic set of questions: (X,Y) (here, X stands for A, B or C, and Y stands for D,E or F).

We are going to look at the statistics of the answers to the questions (X,Y) for our series of objects, and we want to determine:
the average fraction of time that we have "yes,yes",
the average fraction of time that we have "yes,no"
the average fraction of time that we have "no,yes"
the average fraction of time that we have "no,no"

We take it that that average is the "population average".

In fact, we even look at a simpler statistic: the average fraction of time that we have the SAME answer (yesyes, or nono). We call this: the correlation of (X,Y) of the population.

We write it as: C(X,Y). If C(X,Y) = 1, that means that the answers to the questions X and Y are ALL of the kind: yesyes, or nono. We never find a yesno or a noyes.
If C(X,Y) = 0, then it's the opposite: we never find yesyes, or nono.

If C(X,Y) = 0.5, that means that we have as many "equal" as "unequal" answers.

We have 9 possible combinations (X,Y), and hence we have 9 different values C(X,Y):
we have a C(A,D), we have a C(A,E), etc...

Suppose now that C(A,D) = 1, C(B,E) = 1, and C(C,F) = 1.

That means that each time we ask A, and we have yes, then if we measure D, we also have yes, and each time we ask A and we have no, then if we ask D, we have no.

So the answer to the question A is the same as the answer to the question D.
Same for B and E, and same for C and F.

So in a way, you can say that D is the same question as A, E the same question as B, and F the same question as C.

We can hence consider the six remaining combinations, and rewrite A for D etc...

C(A,B)
C(A,C)
C(B,A)
C(B,C)
C(C,A)
C(C,B)

We also suppose a symmetrical situation: C(A,B) = C(B,A).

We now have 3 numbers left: C(A,B), C(B,C) and C(A,C).

Well, Bell's theorem asserts: C(A,B) + C(B,C) + C(A,C) >= 1.

Another way of stating this is to rewrite it as:

C(A,B) + C(B,C) >= 1 - C(A,C)

It means that you cannot find a property B which is sufficiently anti-correlated with both A and C without A and C being somehow correlated. In other words, there's a lower limit to how anti-correlated 3 properties can be amongst themselves. It's almost trivial if you think about it: if A is the opposite of B, and B is the opposite of C, then A cannot be the opposite of C.

Indeed, take A to be the opposite of B, and B the opposite of C: then C(A,B) = 0 and C(B,C) = 0. We then find that 0 + 0 must be at least 1 - C(A,C), hence C(A,C) = 1: A must be the same as C.

Let's take A and B uncorrelated, and B and C uncorrelated. That means that C(A,B) = 0.5 and C(B,C) = 0.5. In this case, there's no requirement on C(A,C) which can go from 0 to 1.

So this is a "trivial" property of the correlations of the answers to the questions A, B and C one can ask about something.

Well, it is this which is violated in quantum mechanics.

C(X,Y) is given by cos^2(theta_x - theta_y), and so we have:

cos^2(th_xy) + cos^2(th_yz) + cos^2(th_xy + th_yz), which is not always >= 1.

Indeed, take th_xy = 1 rad and th_yz = 1 rad (so the third angle is 2 rad):

cos^2(1) + cos^2(1) + cos^2(2) = 0.757... < 1.
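Both halves of the argument can be verified numerically. A short sketch (my own check, using the cos^2 correlation stated above): first confirm that every deterministic assignment of yes/no answers satisfies the inequality, then evaluate the quantum sum for the angles just given.

```python
from itertools import product
from math import cos

# Part 1: for every deterministic assignment of yes/no answers to the
# three questions A, B, C, at least one of the three pairs agrees, so
# C(A,B) + C(B,C) + C(A,C) >= 1 (any statistical mixture of such
# assignments inherits the inequality by linearity).
for a, b, c in product([True, False], repeat=3):
    agree = (a == b) + (b == c) + (a == c)
    assert agree >= 1

# Part 2: the quantum correlation cos^2(angular difference) violates it.
th_xy, th_yz = 1.0, 1.0   # angles in radians, as in the post
qm_sum = cos(th_xy)**2 + cos(th_yz)**2 + cos(th_xy + th_yz)**2
print(round(qm_sum, 3))   # 0.757, below the classical bound of 1
```

The loop is the whole content of the "trivial" arithmetic property; the single line computing `qm_sum` is where quantum mechanics breaks it.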
 
