Understanding Bell's theorem: why do hidden variables imply a linear relationship?

In summary, the proof/logic of Bell's theorem goes thus: with the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables would imply a linear variation in the correlation. However, according to quantum mechanical theory, the correlation varies as the cosine of the angle. Experimental results match the [cosine] curve predicted by quantum mechanics, so Bell's inequality does not hold in nature.
  • #1
San K
(Part of) The proof/logic of Bell's theorem goes thus:

With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables would imply a linear variation in the correlation. However, according to quantum mechanical theory, the correlation varies as the cosine of the angle. Experimental results match the [cosine] curve predicted by quantum mechanics.


Question: why do hidden variables need to imply a linear variation?

We have many cases in physics, the sciences, engineering, and even management where the relationship can be other than linear (exponential, cosine, sine, logarithmic, square, cubic, polynomial, etc.).
 
  • #2
San K said:
Question: why do hidden variables need to imply a linear variation?
They don't.
 
  • #3
San K said:
Question: why do hidden variables need to imply a linear variation?
The short answer is that, in the presence of the hypotheses of counterfactual definiteness and locality, the linearity of the laws of probability leads to the linearity of the Bell inequality.

But let me spell out the logic in greater detail. The example I'll discuss comes from http://quantumtantra.com/bell2.html. We start with the experimental prediction of quantum mechanics that when you send two entangled photons into polarizers that are oriented at the same angle, the photons do identical things: they either both go through or they both don't. If you believe in local hidden variables, then you can conclude from this that right when the two photons were created, when they were presumably in the same place (that's the locality assumption), they decided in advance what polarizer orientations they should go through and which ones they shouldn't go through. So they basically have a list of "good angles" and "bad angles". If, for instance, one of the photons encounters a 15-degree-oriented polarizer, it will check whether 15 degrees is good or bad, and if it's good then it will go through. If x is the angle at which a polarizer is oriented, let's say P(x)=1 if x is a good angle, and P(x)=0 if x is a bad angle.

Now Bell's theorem is concerned with the probability that the two photons behave differently if the polarizers are turned to different orientations. But since, as we said, the photons are just deciding to go through or not go through based on a previously agreed-upon decision about what angles are good and bad, all we're talking about is the probability that P(θ1)≠P(θ2), where θ1 is the angle of the first polarizer and θ2 is the angle of the second polarizer. In the Herbert proof I linked to, the specific case we're talking about is the probability that P(-30)≠P(30), i.e. the probability that if you turn one polarizer at -30 degrees and the other one at 30 degrees, you get a mismatch.

Now under what conditions is the statement P(-30)≠P(30) true? Well, it can only be true if either P(-30)≠P(0) OR P(0)≠P(30) (because if both of these were false we would have P(-30)=P(0)=P(30)). The word "OR" is the crucial part, because one of the basic rules of probability is that the probability of A OR B is less than or equal to the probability of A plus the probability of B. So the probability that P(-30)≠P(30) is less than or equal to the probability that P(-30)≠P(0) plus the probability that P(0)≠P(30) - and Bingo, we've derived a Bell inequality! And note the crucial role counterfactual definiteness played in the proof: we are assuming it makes sense to talk about P(0), even though we only measured P(-30) and P(30). In other words, the assumption is that measurements that we did not make still have well defined answers as to what would have happened if you made them.

Does that make sense? The form of Bell's inequality, A≤B+C, fundamentally comes from the fact that probabilities are (sub)additive.
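To make the role of the probability rules concrete, here is a minimal Python sketch (my own illustration, not something from Herbert's page). It assumes one particular toy instruction rule, where each pair carries a shared hidden polarization λ and passes any polarizer within 45° of λ, and compares the resulting mismatch rates with the sin²θ rate QM predicts for this setup:

```python
import random, math

def instruction(x_deg, lam_deg):
    """Predetermined answer P(x) for a pair with hidden polarization lam:
    pass (1) if the polarizer is within 45 degrees of lam, else block (0).
    This is just one toy local-hidden-variable rule, chosen for illustration."""
    d = abs((x_deg - lam_deg + 90) % 180 - 90)  # angle difference folded into [0, 90]
    return 1 if d <= 45 else 0

def mismatch_prob(a_deg, b_deg, trials=200_000):
    """Estimate the probability that the two predetermined answers differ."""
    mismatches = 0
    for _ in range(trials):
        lam = random.uniform(0.0, 180.0)  # hidden variable shared by the pair
        if instruction(a_deg, lam) != instruction(b_deg, lam):
            mismatches += 1
    return mismatches / trials

p_wide  = mismatch_prob(-30, 30)   # polarizers 60 degrees apart
p_left  = mismatch_prob(-30, 0)    # 30 degrees apart
p_right = mismatch_prob(0, 30)     # 30 degrees apart

print(f"toy LHV:  mismatch(-30,30) = {p_wide:.3f}, "
      f"mismatch(-30,0) + mismatch(0,30) = {p_left + p_right:.3f}")
print(f"QM:       sin^2(60) = {math.sin(math.radians(60))**2:.3f}, "
      f"sin^2(30) + sin^2(30) = {2*math.sin(math.radians(30))**2:.3f}")
# This toy model sits right on the Bell bound (its mismatch rate grows linearly as theta/90),
# while the QM numbers (0.75 vs 0.50) exceed the bound -- that's the violation.
```

Any deterministic instruction rule will respect the inequality; this particular one just happens to saturate it.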
 
  • #4
lugita15 said:
The short answer is ...
I already gave him the short, and correct, answer. You gave him the (or a) long answer. The fact of the matter is that the assumption of hidden variables doesn't imply a linear correlation between θ and rate of coincidental detection. Period. Herbert's line of reasoning ignores what has been known about the behavior of light for ~ 200 years. Period. The conclusion that the correlation between θ and rate of coincidental detection must be linear is ... ignorant. No offense, by the way, because I regard your contributions as being informative and thought provoking.
 
  • #5
The fact of the matter is that the assumption of hidden variables doesn't imply a linear correlation between θ and rate of coincidental detection. Period.
You forgot to mention locality. Bohmian mechanics is quite capable of having a nonlinear relationship, but it does this only by having nonlocal interaction between the two particles. But what I continue to claim is that, if the two particles have determined in advance what angles to go through and what angles not to go through, then there MUST be a linear relationship AKA Bell's inequality.
Herbert's line of reasoning ignores what has been known about the behavior of light for ~ 200 years. Period.
First of all, while it's true we've known about phenomena involving light having a nonlinear dependence on the polarizer angle, like Malus' law, for centuries, the specific nonlinear dependence we're talking about in the context of entanglement was only revealed when we found out about particle-wave duality and were able to make single-photon detectors. And about Herbert's line of reasoning, he's not ignoring the known properties of light, he's rather trying to show how a particular philosophical belief leads to conflict with the properties of light predicted by quantum mechanics.
The conclusion that the correlation between θ and rate of coincidental detection must be linear is ... ignorant.
Well, as I've told you before, if the conclusion is wrong then the reasoning must be wrong. And the reasoning is so straightforward. Here it is again, now reduced to four steps, with a quick numerical check below:
1. Entangled photons behave identically at identical polarizer settings.
2. The believer in local hidden variables says that the polarizer angles the photons will and won't go through are agreed upon in advance by the two entangled photons.
3. In order for the agreed-upon instructions (to go through or not go through) at -30 and 30 to be different, either the instructions at -30 and 0 are different or the instructions at 0 and 30 are different.
4. The probability for the instructions at -30 and 30 to be different is less than or equal to the probability for the instruction at -30 and 0 to be different plus the probability for the instructions at 0 and 30 to be different.
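To see the conflict numerically: for this photon setup QM predicts a mismatch rate of sin²θ at relative polarizer angle θ (these are, if I recall correctly, the numbers Herbert's page uses: 25% at 30°, 75% at 60°). Step 4 would then require sin²60° ≤ sin²30° + sin²30°, i.e. 0.75 ≤ 0.25 + 0.25 = 0.50, which is false. That gap is the violation.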
No offense, by the way, because I regard your contributions as being informative and thought provoking.
Thanks!
 
  • #6
lugita15 said:
You forgot to mention locality. Bohmian mechanics is quite capable of having a nonlinear relationship, but it does this only by having nonlocal interaction between the two particles. But what I continue to claim is that, if the two particles have determined in advance what angles to go through and what angles not to go through, then there MUST be a linear relationship AKA Bell's inequality.
It depends on how you formulate it. The fact of the matter is that there are Bell-type LR models of quantum entanglement that predict a nonlinear correlation. So you're just wrong about that.

lugita15 said:
First of all, while it's true we've known about phenomena involving light having a nonlinear dependence on the polarizer angle, like Malus' law, for centuries, the specific nonlinear dependence we're talking about in the context of entanglement was only revealed when we found out about particle-wave duality and were able to make single-photon detectors.
Look at the quantum mechanical treatment. Where do you think it comes from? What's the basis for it? Do you think it was just plucked out of nothing? Of course not. It's based on the accumulated knowledge of the behavior of light in similar experimental situations.

lugita15 said:
And about Herbert's line of reasoning, he's not ignoring the known properties of light, he's rather trying to show how a particular philosophical belief leads to conflict with the properties of light predicted by quantum mechanics.
But the point is that that particular philosophical belief isn't in conflict with the properties of light predicted by QM. Herbert's conclusion simply ignores the known properties of light. It's ignorant. Period.

lugita15 said:
Well, as I've told you before, if the conclusion is wrong then the reasoning must be wrong.
Herbert's reasoning ignores the known behavior of light. It isn't a big mystery why Herbert's conclusion is wrong. It's just ignorant reasoning.

If you want to understand entanglement in optical Bell tests, then you don't focus on the detection attributes. You focus on the design of the experiments, the known behavior of light, and the apparent fact that the variable that determines individual detection is irrelevant wrt coincidental detection. You thus arrive at the conclusion that coincidental detection is determined by an underlying parameter that isn't varying from pair to pair: the relationship between the entangled photons, a constant. Now, how would you model that?
 
  • #7
ThomasT said:
It depends on how you formulate it. The fact of the matter is that there are Bell-type LR models of quantum entanglement that predict a nonlinear correlation. So you're just wrong about that.
I maintain that no local hidden variable theory can reproduce all the experimental predictions of quantum mechanics, but it may well be possible for such a theory to be consistent with the practical Bell tests that have been performed to date. For instance, zonde is an adherent of models which say that you do NOT have identical behavior at identical polarizer settings, in contradiction with quantum mechanics, and that an ideal loophole-free Bell test would show zonde to be right and QM to be wrong.
But the point is that that particular philosophical belief isn't in conflict with the properties of light predicted by QM.
You can't just assert that, you have to point to the step in the reasoning that's wrong, or the step in the reasoning that not all local hidden variable theories are logically required to accept.
Herbert's reasoning ignores the known behavior of light. It isn't a big mystery why Herbert's conclusion is wrong. It's just ignorant reasoning.
If the conclusion of an argument is wrong, then one of the steps must be wrong. Which of my now four steps do you find questionable?
If you want to understand entanglement in optical Bell tests, then you don't focus on the detection attributes. You focus on the design of the experiments
As I said, I'm interested in showing how a local deterministic universe cannot reproduce all the experimental predictions of QM, not in showing that the design of currently practical Bell tests definitively disproves all local hidden variable theories.
 
  • #8
San K said:
Question: why do hidden variables need to imply a linear variation?

This is not strictly required by Bell, but it is more of a practical consequence. The candidate local hidden variable theory needs a relationship which both works for perfect correlations (which is the requirement of EPR's elements of reality) and yields a result at other angle settings that is proportional to the angle difference, so that there aren't anomalies at certain angles (as Bell discovered). A linear relationship solves that instantly. Of course, that won't match experiment.
 
  • #9
ThomasT said:
Herbert's line of reasoning ignores what has been known about the behavior of light for ~ 200 years.

That's a bit harsh. Malus applies to a stream of polarized particles, but does not strictly apply to a stream of entangled particles. That relationship (cos^2) is indirect. There is a fair description of how entangled particles end up at the cos^2 point here:

http://departments.colgate.edu/phys... research/Quantumlan07/lab5entanglement09.PDF

See the equations leading up to (14), which is the result which is mathematically identical to Malus, but as you can see is obtained completely independently.
 
  • #10
DrChinese said:
This is not strictly required by Bell, but it is more of a practical consequence. The candidate local hidden variable theory needs a relationship which both works for perfect correlations (which is the requirement of EPR's elements of reality).

the perfect correlation between what?

1. between the angle and probabilities

or

2. between the two entangled particles
 
  • #11
San K said:
the perfect correlation between what?

1. between the angle and probabilities

or

2. between the two entangled particles
He means the second one. Perfect correlation refers to the fact that the entangled photons exhibit identical, i.e. perfectly correlated, behavior when sent through polarizers with the same orientation. Imperfect correlation occurs at most other angle settings, except when the polarizers are at right angles to each other, in which case you get "perfect anti-correlation".
 
  • #12
San K said:
the perfect correlation between what?

1. between the angle and probabilities

or

2. between the two entangled particles

nope... possibly 3 entangled particles...1 of which we can't see
 
  • #13
lostprophets said:
nope... possibly 3 entangled particles...1 of which we can't see

There is no third particle when you have normal PDC (n=2). There are conservation rules, and the 2 detected particles account for the conserved quantities (since the input particle attributes are known). There can be specialized cases of n>2, because occasionally there are 2 or more input particles being down converted. However, these are not normally seen in ordinary Bell tests.
 
  • #14
lugita15 said:
He means the second one. Perfect correlation refers to the fact that the entangled photons exhibit identical, i.e. perfectly correlated, behavior when sent through polarizers with the same orientation. Imperfect correlation occurs at most other angle settings, except when the polarizers are at right angles to each other, in which case you get "perfect anti-correlation".

thanks Lugita.

Are we saying that:

for the hidden variable hypothesis -- the two photons would be expected to have perfect correlation (or anti-correlation) at ALL angle settings?

where angle = angle between the axis of the two polarizers?

and then we further conclude that:

since the perfect correlation (or anti correlation) occurs only at angle 0 or 90
(and is a cosine relation at other angles)

therefore the hidden variable hypothesis is weak/rejected
 
  • #15
San K said:
Are we saying that:

for the hidden variable hypothesis -- the two photons would be expected to have perfect correlation (or anti-correlation) at ALL angle settings?
No, we are certainly not requiring hidden variables theories to have perfect correlation at all polarizer angles θ1 and θ2.
where angle = angle between the axis of the two polarizers?
When the angles of the two polarizers are the same, then yes, we do say that the two photons exhibit perfect correlations. But that's not just some assumption we make - that's an experimental consequence of quantum mechanics, which presumably the local hidden variable theorist will want to match. (There are, by the way, some local hidden variable theories that do not even try to reproduce all the experimental predictions of QM - they instead claim that quantum mechanics can in principle be disproved experimentally, and that the only reason this disproof has not happened yet is practical limitations in experimentation. But in the context of Bell's theorem we're talking about theories that DO want to reproduce all the experimental consequences of QM.)
and then we further conclude that:

since the perfect correlation (or anti correlation) occurs only at angle 0 or 90
(and is a cosine relation at other angles)

therefore the hidden variable hypothesis is weak/rejected
No, the logic isn't that immediate. There are some steps between here and there, and I laid them out in post #3. We start with perfect correlation at identical polarizer angles; from that, the local hidden variables guy concludes that the photons have decided in advance what polarizer angles to go through and what ones not to go through. Then we say that in order for there to be a mismatch between -30 and 30 there must be a mismatch between -30 and 0 or 0 and 30, and then we use the laws of probability to conclude that the probability of a mismatch between -30 and 30 is less than or equal to the probability of a mismatch between -30 and 0 plus the probability of a mismatch between 0 and 30. But this contradicts the cosine relation predicted by quantum mechanics, so the local hidden variable hypothesis can be deemed rejected. Tell me if you want any of this spelled out in more detail.
 
  • #16
lugita15 said:
there to be a mismatch between -30 and 30 there must be a mismatch between -30 and 0 or 0 and 30, and then we use the laws of probability to conclude that the probability of a mismatch between -30 and 30 is less than or equal to the probability of a mismatch between -30 and 0 plus the probability of a mismatch between 0 and 30, but this contradicts the cosine relation predicted by quantum mechanics, so the local hidden variable hypothesis can be deemed rejected. Tell me if you want any of this spelled out in more detail.

thanks lugita. I have not completely got it yet and that's ok for now as it will take some time.

is the correlation between quantum entangled particles not linear then?
 
  • #17
San K said:
thanks lugita. I have not completely got it yet and that's ok for now as it will take some time.
Have you read the Herbert proof (http://quantumtantra.com/bell2.html) I linked to earlier? If you haven't, that might make it "click" for you.
is the correlation between quantum entangled particles not linear then?
It is an experimental consequence of quantum mechanics that the correlation between entangled photons has a nonlinear dependence on the angle between the polarizers. But the point of Bell's theorem is that no local hidden variable theory can match this experimental prediction, as long as it also matches the experimental prediction that perfect correlation occurs at identical polarizer settings.
 
  • #18
San K said:
Question: why do hidden variables need to imply a linear variation?
Good question. I assume you refer to this chart borrowed from wikipedia:
[Chart from Wikipedia: correlation vs. angle, showing the QM cosine curve and the straight-line local realist bound]

Here the cosine curve is the QM prediction and the linear function is the maximum correlation achievable with a local realistic model, obtained by replacing ≥ with = in Bell's inequality. This linear function actually corresponds to a very simple thought experiment by Bohm: at the source, two atoms get random but opposite spins. During the measurement the spin of each atom is projected onto the corresponding direction of measurement and the sign of this projection becomes the outcome. In Bell's terms A(a,λ) = sign cos(a-λ) = { 1 when |a-λ| ≤ π/2, otherwise -1 }

Now, an interesting question is what kind of functions are allowed by Bell's inequalities. The impression one gets is that no LR model can get above the straight line. This is not so; it's a bit more subtle. For example, consider a roulette wheel with 10 alternating red/black sectors, where the outcome of a measurement is determined by the color at angle a when the wheel stops: A(a,λ) = sign cos 5(a-λ) = { 1 when |a-λ| mod π/5 ≤ π/10, else -1 }. It reproduces the usual values at 0°, 90°, 180° and 270° but jumps all over the place in between: from 1 at 0° to -1 at 36°, back to 1 at 72°, crosses 0 at 90°, -1 at 108°, etc., all in perfect agreement with Bell.

On the other hand, QM predicts 0.7 at 45° where no LR model can get above 0.5.
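For anyone who wants to play with these two models, here is a small Monte Carlo sketch (my own, written from the definitions above); the last column is simply the cosine curve from the chart, taken as the QM reference:

```python
import math, random

def A_bohm(a, lam):
    """Bohm toy model: outcome is the sign of the spin projection, sign(cos(a - lam))."""
    return 1 if math.cos(a - lam) >= 0 else -1

def A_roulette(a, lam):
    """Roulette-wheel model with 10 alternating sectors: sign(cos(5*(a - lam)))."""
    return 1 if math.cos(5 * (a - lam)) >= 0 else -1

def E(model, a_deg, b_deg, trials=100_000):
    """Monte Carlo estimate of the correlation E(a,b) = <A(a,lam)*A(b,lam)>,
    with the hidden variable lam shared by both particles, uniform on [0, 2*pi)."""
    a, b = math.radians(a_deg), math.radians(b_deg)
    total = 0
    for _ in range(trials):
        lam = random.uniform(0.0, 2 * math.pi)
        total += model(a, lam) * model(b, lam)
    return total / trials

for theta in (0, 36, 45, 72, 90, 108):
    print(f"{theta:>3} deg:  Bohm E = {E(A_bohm, 0, theta):+.2f}   "
          f"roulette E = {E(A_roulette, 0, theta):+.2f}   "
          f"QM (chart) cos = {math.cos(math.radians(theta)):+.2f}")
```

The Bohm model traces out the straight line (0.5 at 45°), the roulette model jumps between +1 and -1 exactly as described, and neither reaches the 0.7 that the cosine curve gives at 45°.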
 
  • #19
thanks Lugita and Delta

Delta Kilo said:
Now, interesting question is what kind of functions are allowed by Bell's inequalities. The impression one gets is that that no LR model can get above the straight line. This is not so, its a bit more subtle.

good one Delta Kilo. thanks for pointing out the subtlety..

Delta Kilo said:
For example, consider roulette wheel with 10 alternating red/black sectors, where the outcome of a measurement is determined by the color at angle a when the wheel stops: A(a,λ) = sign cos 5(a-λ) = { 1 when |a-λ| mod π/5 ≤ π/10 else -1 }. It reproduces the usual values at 0°, 90°, 180° and 270° but jumps all over the place in between: from 1 at 0° to -1 at 36° back to 1 at 72°, crosses 0 at 90°, -1 at 108° etc., all that in perfect agreement with Bell.

agreed

Delta Kilo said:
On the other hand, QM predicts 0.7 at 45° where no LR model can get above 0.5.

give me time to digest that...

is 45 degree = the angle between the polarizers?

and is .7 (or .5) the probability that the polarizer will flash green = i.e. give same answer?
 
  • #20
San K said:
is 45 degree = the angle between the polarizers?
Sorry that would be 22.5° for polarizers (photons, spin 1 particles) and 45° for SG apparatus (electrons, spin 1/2 particles). The chart on wikipedia is for electrons.

San K said:
and is .7 (or .5) the probability that the polarizer will flash green = i.e. give same answer?
Yes.
 
  • #21
Delta Kilo said:
Sorry that would be 22.5° for polarizers (photons, spin 1 particles) and 45° for SG apparatus (electrons, spin 1/2 particles). The chart on wikipedia is for electrons.

Yes.

Hi Delta, can you give/put the probabilities because it might become easier to discuss (and play the devil's/LHV advocate till it's defeated)

We can use just photons instead of electrons.

For example the probabilities for the below listed orientation for polarizers a and b (i.e. P(a,b))..

case 1 = per quantum/actual results
case 2 = per linear expectations (LHV hypothesis)

P (0, 0) = 100%?
P (30, 0)
P (-30, 0)

P (30, 30) = 100%?
P (-30, 30)

repeat above for 15 degree variation
 
  • #22
San K said:
For example the probabilities for the below listed orientation for polarizers a and b (i.e. P(a,b))..

case 1 = per quantum/actual results
case 2 = per linear expectations (LHV hypothesis)
angle: LHV / QM
0: 1 / 1 ; E(0,0) perfect correlation
15: 0.66 / 0.87 ; E(-15,0) = E(0,15)
30: 0.33 / 0.5 ; E(-30,0) = E(-15,15)= E(0,30)
45: 0 / 0 ; no correlation
60: -0.33 / -0.5 ; anticorrelation

These are expectation values of correlation E(a,b) (confusingly called P(a,b) in Bell's paper)
Coincidence probabilities: P(a=b) = E(a,b)/2+0.5
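A small sketch of where those columns come from, assuming the formulas the table implies (the saturated linear bound E = 1 - θ/45 for the LHV column, cos 2θ for the QM column in this photon convention), together with the E-to-coincidence-probability conversion:

```python
import math

def E_lhv(theta_deg):
    """Saturated linear local-realist bound used in the table above: 1 - theta/45."""
    return 1 - theta_deg / 45

def E_qm(theta_deg):
    """QM expectation for entangled photons in this convention: cos(2*theta)."""
    return math.cos(math.radians(2 * theta_deg))

def match_prob(E):
    """Convert a correlation E(a,b) into a coincidence ('same answer') probability."""
    return E / 2 + 0.5

print("theta   E_LHV   E_QM   P_match(LHV)  P_match(QM)")
for theta in (0, 15, 30, 45, 60):
    print(f"{theta:>5}   {E_lhv(theta):+.2f}   {E_qm(theta):+.2f}"
          f"   {match_prob(E_lhv(theta)):.2f}          {match_prob(E_qm(theta)):.2f}")
```

Running this reproduces the table (0.67 vs the rounded 0.66 at 15°) and shows, for example, that at 30° the LHV bound allows at most a 67% coincidence rate while QM predicts 75%.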
 
  • #23
Delta Kilo said:
angle: LHV / QM
0: 1 / 1 ; E(0,0) perfect correlation
15: 0.66 / 0.87 ; E(-15,0) = E(0,15)
30: 0.33 / 0.5 ; E(-30,0) = E(-15,15)= E(0,30)
45: 0: / 0 ; no correlation
60: -0.33 / -0.5 ; anticorrelation

These are expectation values of correlation E(a,b) (confusingly called P(a,b) in Bell's paper)
Coincidence probabilities: P(a=b) = E(a,b)/2+0.5

give me time to prepare my devil's/LHV advocate argument, hint: i will be arguing how LHV can support non-linear (cosine) relationship...

i hope i lose because then QM becomes more interesting...transcending space-time...;)
 
  • #24
San K said:
give me time to prepare my devil's/LHV advocate argument, hint: i will be arguing how LHV can support non-linear (cosine) relationship...

i hope i lose because then QM becomes more interesting...transcending space-time...;)

Keep in mind that there have been a host of scientists who have attempted this endeavor with no success. They just keep developing models that are actually either non-local or non-realistic.

On the other hand, there really aren't any decent models around that are actually linear - these have severe problems on other fronts. The linear function is more of a dividing line between Bell compliant and Bell non-compliant.
 
  • #25
DrChinese said:
Keep in mind that there have been a host of scientists who have attempted this endeavor with no success.

really? i was not aware of this/that

they are more knowledgeable, dedicated and intelligent than me...so I am about to give up on LHV/EPR and join the quantum/bell bandwagon/party...

DrChinese said:
They just keep developing models that are actually either non-local or non-realistic.

ok

DrChinese said:
On the other hand, there really aren't any decent models around that are actually linear - these have severe problems on other fronts. The linear function is more of a dividing line between Bell compliant and Bell non-compliant.

what do you mean... models that are not linear?...isn't LHV linear (even if it does not work though) ? which models are you talking about?

however good info...thanks Dr. Chinese
 
  • #26
San K said:
what do you mean... models that are not linear?...isn't LHV linear (even if it does not work though) ? which models are you talking about?

however good info...thanks Dr. Chinese

When you have separability and combine that with a model that reproduces Malus (this is the usual starting point for a model), you end up with the nonlinear function:

P(a,b) = 0.25 + 0.5·cos²(a-b)

This is well within the Bell boundary. Of course, it is even *farther* away from the QM prediction for an entangled state.
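As a sanity check on that formula, here is a short Monte Carlo sketch of such a separable model (my own illustration): both photons share a random polarization λ, and each one independently passes its own polarizer with Malus-law probability cos²(angle - λ):

```python
import math, random

def malus_match_prob(a_deg, b_deg, trials=200_000):
    """Estimate the 'same answer' probability for a separable model: both photons
    share a polarization lam (uniform), and each independently passes its own
    polarizer with Malus-law probability cos^2(angle - lam)."""
    a, b = math.radians(a_deg), math.radians(b_deg)
    matches = 0
    for _ in range(trials):
        lam = random.uniform(0.0, math.pi)
        pass_a = random.random() < math.cos(a - lam) ** 2
        pass_b = random.random() < math.cos(b - lam) ** 2
        if pass_a == pass_b:
            matches += 1
    return matches / trials

for theta in (0, 22.5, 45, 67.5, 90):
    formula = 0.25 + 0.5 * math.cos(math.radians(theta)) ** 2
    qm = math.cos(math.radians(theta)) ** 2
    print(f"{theta:>5} deg: simulated {malus_match_prob(0, theta):.3f}  "
          f"formula {formula:.3f}  QM {qm:.3f}")
```

Note that the match rate at θ = 0 comes out as 0.75, not 1, which is why this model fails the perfect-correlation requirement even though it is nonlinear.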
 
  • #27
San K said:
what do you mean... models that are not linear?...isn't LHV linear (even if it does not work though) ? which models are you talking about?
Bell's inequality isn't strictly speaking linear. A linear relationship would be of the form A=B+C, but Bell's inequality is of the form A≤B+C. You can call the constraint that local hidden variables must satisfy "sublinearity", because it's at most linear. But in quantum mechanics you can have the correlation be such that A>B+C, or "superlinearity". So no matter what, a local hidden variable theory won't be able to match the quantum mechanical prediction (e.g. for the angles -30, 0, and 30); the best it can do is have A=B+C. You can of course make a theory with A<B+C, and thus it will be nonlinear, but as DrChinese said that will only be farther from, not closer to, the specific prediction of quantum mechanics in this case.
 
  • #28
DrChinese said:
That's a bit harsh. Malus applies to a stream of polarized particles, but does not strictly apply to a stream of entangled particles.
It does if they're assumed to be polarized identically, via the source. Which is the case with the Aspect experiments.

So, I don't think it's unnecessarily harsh to say that Herbert has ignored a characteristic behavior of light that's been known for a long time. Expecting the correlation between θ and rate of coincidental detection to be linear contradicts the known, and therefore expected, behavior of light.
 
  • #29
DrChinese said:
The linear function is more of a dividing line between Bell compliant and Bell non-compliant.
I do agree with this. This is the result of archetypal Bell-type LR models, not necessarily the most sophisticated Bell-type LR models.

The fact is that you can get a nonlinear (sine or cosine) angular dependence incorporating the standard LR restrictions. But it will be, necessarily, skewed. The outstanding question is ... why. What, exactly, is it about Bell-type LR models of quantum entanglement that precludes congruence with QM predictions?
 
  • #30
ThomasT said:
So, I don't think it's unnecessarily harsh to say that Herbert has ignored a characteristic behavior of light that's been known for a long time. Expecting the correlation between θ and rate of coincidental detection to be linear contradicts the known, and therefore expected, behavior of light.
But Herbert is not just randomly expecting the correlation to be (sub)linear. Rather, he is showing how a certain assumption, namely that quantum entanglement can be explained by local hidden variables, leads to the conclusion that the correlation is (sub)linear. Surely you agree that some assumptions lead to correct conclusions about the world, and other assumptions lead to incorrect conclusions about the world. He is doing a proof by contradiction, a common technique in argumentation.
 
  • #31
lugita15 said:
But Herbert is not just randomly expecting the correlation to be (sub)linear. Rather, he is showing how a certain assumption, namely that quantum entanglement can be explained by local hidden variables, leads to the conclusion that the correlation is (sub)linear.
Ok, honestly, I don't understand how he gets to the linear correlation from the assumption of local hidden variables. If you can clearly explain that, then you will have helped not just me but, I suspect, lots of other people interested in this stuff.

lugita15 said:
Surely you agree that some assumptions lead to correct conclusions about the world, and other assumptions lead to incorrect conclusions about the world.
Well, no. We're talking about the world that isn't amenable to our sensory perceptions. So, how could we ever know if any inferences about it are true or not?
 
  • #32
ThomasT said:
Ok, honestly, I don't understand how he gets to the linear correlation from the assumption of local hidden variables. If you can clearly explain that, then you will have helped not just me but, I suspect, lots of other people interested in this stuff.
I've been trying to do that for a while now. Here's my latest attempt, from post #5 of this thread:
lugita15 said:
1. Entangled photons behave identically at identical polarizer settings.
2. The believer in local hidden variables says that the polarizer angles the photons will and won't go through are agreed upon in advance by the two entangled photons.
3. In order for the agreed-upon instructions (to go through or not go through) at -30 and 30 to be different, either the instructions at -30 and 0 are different or the instructions at 0 and 30 are different.
4. The probability for the instructions at -30 and 30 to be different is less than or equal to the probability for the instruction at -30 and 0 to be different plus the probability for the instructions at 0 and 30 to be different.
Which of these steps do you disagree with, or which of these steps do not apply to all possible local hidden variable theories?
Well, no. We're talking about the world that isn't amenable to our sensory perceptions. So, how could we ever know if any inferences about it are true or not?
I have no idea what you're talking about. All I said is that certain assumptions lead to correct conclusions about the world, and certain assumptions lead to incorrect conclusions about the world. Many arguments take the form of starting from an assumption and showing how it leads to a false conclusion about the world. For instance, Rayleigh and Jeans showed that the assumption that light is described by Maxwell's equations leads to the ultraviolet catastrophe, which does not occur in real life. Rayleigh and Jeans certainly weren't ignoring the fact that there is no ultraviolet catastrophe for real-life blackbody radiation; they were showing how a certain assumption led to that incorrect conclusion. Herbert's (and Bell's) proof works the same way. They are trying to show that the assumption of local hidden variables leads to a certain conclusion that is contrary to the experimental predictions of quantum mechanics, even though the predictions of QM are presumably correct.
 
  • #33
lugita15 said:
I've been trying to do that for a while now. Here's my latest attempt, from post #5 of this thread:
Ok, seriously, I think you should abandon this exercise of attempting to reformulate Herbert's argument and look at the situation conceptually. The goal then is not to formulate a restricted LR model of entanglement, but to understand entanglement in terms of local interactions and transmissions. Which, I submit, is entirely possible, even if Bell-type LR models of quantum entanglement are definitively ruled out.

lugita15 said:
All I said is that certain assumptions lead to correct conclusions about the world, and certain assumptions lead to incorrect conclusions about the world.
And the question is: how could you possibly know if those conclusions are correct or incorrect? What's your criterion for ascertaining that?

The good thing about the scientific method, and also what makes it impossible for us to make definitive statements about deep reality, is that the ultimate arbiters wrt any questions or statements about nature are instrumental behaviors amplified to the level of our sensory apprehension. That's it. That's all we have. That's the data. One can assume, infer, deduce, etc. to one's passionate intent/content. Doesn't matter. The data are instrumental ... not deep.

lugita15 said:
They (Herbert and Bell) are trying to show that the assumption of local hidden variables leads to a certain conclusion that is contrary to the experimental predictions of quantum mechanics ...
And they have shown that a certain reasoning and formal encoding of the assumptions of locality and determinism are incompatible with the predictions of standard QM and the results of experiments. But their reasoning is neither deep nor all inclusive. They obviously ignore certain known facts about the behavior of light and the experimental designs. And on the basis of this reasoning we should assume that nature is nonlocal? That's not just bad reasoning, it's just silly ... and should, I think, be summarily rejected.

This is not to say that Bell has not definitively ruled out a broad class, maybe the general class, of LR models of quantum entanglement. I fully believe that this is a great accomplishment. And I further believe that the insight into the deep reality that it engenders is one of the great accomplishments/discoveries of modern scientific thinking.

Bell's point and contribution, imho, isn't that he showed that nature is nonlocal, but that he revealed an extremely subtle problem wrt the modelling of entanglement in a local deterministic universe.
 
  • #34
ThomasT said:
So, I don't think it's unnecessarily harsh to say that Herbert has ignored a characteristic behavior of light that's been known for a long time.

I don't understand this, why do you say Herbert ignores the behaviour of light? All he's saying (as far as I can tell) is that an LR model which predicts perfect correlation when the polarizers are at the same setting will produce a linear correlation for the angles between the polarizers.
 
  • #35
Joncon said:
I don't understand this, why do you say Herbert ignores the behaviour of light? All he's saying (as far as I can tell) is that an LR model which predicts perfect correlation when the polarizers are at the same setting will produce a linear correlation for the angles between the polarizers.
The known behavior of light suggests that, assuming an underlying polarization, viz. λ, the rate of individual detection will be a nonlinear function involving the interaction of λ and the polarizer setting. This is modeled after what's known from polariscopic experiments. The rate of individual detection doesn't vary with polarizer setting because, presumably, the value of λ is varying randomly from pair to pair (while, in most optical Bell tests, e.g. Aspect, λ is assumed to be the same for both photons of an entangled pair), so the rate of individual detection remains constant at about half the rate with no polarizer in place.

The rate of coincidental detection in a setup where you have a source flanked by two polarizers and two detectors with one detection per detector per entangled pair is also modeled after polariscopic setups where the rate varies nonlinearly with θ, the angular difference between the polarizers.

Every optical Bell test is some variation on this theme. The conceptual and factual basis for modelling optical Bell tests comes from what's known about the behavior of light in various experiments involving crossed polarizers, all of which suggest that P(a,b) is a nonlinear function.
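As a quick illustration of the constant-individual-rate point (my own sketch, assuming a uniformly random λ from pair to pair and a Malus-law pass probability):

```python
import math, random

def individual_rate(a_deg, trials=200_000):
    """Single-detector pass rate when the hidden polarization lam is uniformly
    random from pair to pair and the pass probability follows Malus' law."""
    a = math.radians(a_deg)
    passes = sum(random.random() < math.cos(a - random.uniform(0.0, math.pi)) ** 2
                 for _ in range(trials))
    return passes / trials

for a in (0, 22.5, 45, 90):
    print(f"polarizer at {a:>4} deg: individual detection rate ~ {individual_rate(a):.3f}")
# All values come out near 0.5, independent of the polarizer setting, while the
# coincidence rate (see the separable-model sketch earlier in the thread) still varies with theta.
```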
 
