Understanding Bell's theorem: why do hidden variables imply a linear relationship?

In summary, the proof/logic of Bell's theorem goes thus: with the measurements oriented at intermediate angles between the basic cases (identical and orthogonal polarizer settings), the existence of local hidden variables would imply a linear variation of the correlation with angle. However, according to quantum mechanical theory, the correlation varies as the cosine of the angle. Experimental results match the cosine curve predicted by quantum mechanics.
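As a rough illustration of that contrast, here is a minimal sketch, assuming one simple instruction-style hidden-variable model (each pair shares a hidden polarization λ, uniform over 180°, and a photon passes any polarizer whose axis lies within 45° of λ); the match rate it produces is compared with the cos²θ correlation predicted by quantum mechanics.

Code:
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 180.0, 500_000)  # shared hidden polarization per pair (degrees)

def passes(lam_deg, axis_deg):
    # Instruction-style rule (an assumption of this sketch): pass iff the hidden
    # polarization lies within 45 degrees of the polarizer axis, angles mod 180.
    d = np.abs((lam_deg - axis_deg + 90.0) % 180.0 - 90.0)
    return d < 45.0

print("theta   toy-LHV match   QM cos^2(theta)")
for theta in (0, 15, 30, 45, 60, 75, 90):
    same = passes(lam, 0.0) == passes(lam, float(theta))
    qm = np.cos(np.radians(theta)) ** 2
    print(f"{theta:5d}   {same.mean():13.3f}   {qm:15.3f}")

The instruction model's match rate falls off linearly (1 − θ/90°), while quantum mechanics gives the cosine-squared curve; the two coincide only at 0°, 45°, and 90°.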
  • #71
lugita15 said:
[..] The proof shows that local hidden variable theories MUST make predictions contrary to the known behavior of light predicted by quantum mechanics. [..]
That sentence reproduces the fundamental error that Herbert made as discussed in the thread on Herbert's proof: "the known behavior of light predicted by quantum mechanics" confounds the known behaviour of light (we only know what is observed) and predictions by quantum mechanics about unobserved (and possibly unobservable, and thus not knowable) behaviour of light.

As I now understand it (I'm still very much learning here!), hidden variables may imply a linear relationship for some hypothetical experiments, but do not necessarily imply a linear relationship for realistic experiments as performed until today.
 
  • #72
harrylin said:
That sentence reproduces the fundamental error that Herbert made as discussed in the thread on Herbert's proof: "the known behavior of light predicted by quantum mechanics" confounds the known behaviour of light (we only know what is observed) and predictions by quantum mechanics about unobserved (and possibly unobservable, and thus not knowable) behaviour of light.

As I now understand it (I'm still very much learning here!), hidden variables may imply a linear relationship for some hypothetical experiments, but do not necessarily imply a linear relationship for realistic experiments as performed until today.
Yes, I concede your point. I was just being a bit loose with my wording.
 
  • #73
San K said:
Agreed. [local hidden variables says that the polarizer angles the photons will and won't go through are agreed upon in advance]

I disagree: that sounds like determinism without any allowance for "randomness". However, that isn't how most people think that nature works, has nothing to do with realism, and as Bell admitted, was also not required by EPR.
 
  • #74
lugita15 said:
[..] the experiments are not good enough to definitively answer which is right and which is wrong. They strongly suggest quantum mechanics is right, but due to practical loopholes they leave open a slim possibility for a local deterministic theory.[..]
I guess that what one finds the experiments suggest depends on one's preexisting knowledge... For example, the kinds of things that come to my mind (e.g. the Ehrenfest paradox, M-M's Lorentz contraction "loophole") suggest to me that most likely local realism is right and that perhaps QM (with a minimal set of claims) is also correct. :smile:
 
  • #75
Before we nitpick your 4 steps, I'd like to address some other statements you made.
lugita15 said:
... currently there are various experimental loopholes that prevent the kind of ideal Bell test which would be able to definitively test whether this is a local deterministic universe.
Because of the LR requirements/restrictions, which, in effect, preempt (and thereby render irrelevant wrt nature) the locality assumption, Bell tests, even loophole-free ones, can't ever be used to determine an underlying nonlocality.

As for the assumption of a fundamental deterministic evolution. It's, in principle, nontestable. It's an unfalsifiable assumption. Just as the assumption of a fundamental nondeterminism is.

lugita15 said:
In particular, as zonde has pointed out, it is difficult to test the prediction that you get perfect correlation at identical polarizer settings, because you would have to "catch" literally all the photons that are emitted by the source, and that requires really efficient photon detectors. All we can say is that when the angles of the polarizers are the same, the correlation is perfect for the photon pairs we DO detect. But that leaves open the possibility, seized on by zonde and other local determinists, that the photon pairs we do detect are somehow special, because the detector is biased (in an unknown way) towards detecting photon pairs with certain (unspecified) characteristics, and that the photon pairs we do NOT detect would NOT display perfect correlation, and thus the predictions of QM would be incorrect.
This just seems to be 'clutching at straws', so to speak. Since the correlation is perfect at θ = 0° for the entangled pairs that are detected, then I see no reason to assume that it would be different if all entangled pairs could be detected.

lugita15 said:
No, it just means that the experiments are not good enough to definitively answer which is right and which is wrong. They strongly suggest quantum mechanics is right ...
You said that wrt certain recent Bell tests the LR predictions were correct. We know that LR and QM predictions must be different wrt entanglement setups. So, if the LR predictions were correct, then the QM predictions had to be wrong wrt the tests in question. But now you say that the test results suggest that QM was right? So, which is it?

lugita15 said:
... but due to practical loopholes they leave open a slim possibility for a local deterministic theory.
As I said, the loopholes are irrelevant, imo.

lugita15 said:
Sorry, there's a miscommunication. When I say "local hidden variables", I mean the philosophical stance you call "local determinism", not the formal model you call "local realism", so keep that in mind when reading my posts.
I think that ease of communication would be much better served if we simply write local determinism when we're referring to the philosophical assumptions, and LR when we're referring to the circumscribed LR formalism.

lugita15 said:
I'm trying to show that the predictions of QM cannot be absolutely correct in a local deterministic universe. As I said, the thing to be deduced from my four steps is not the claim that nature is nonlocal. Rather it is the claim that if the predictions of quantum mechanics are completely correct, then nature is nonlocal or nondeterministic.

1. Entangled photons behave identically at identical polarizer settings.
2. In a local deterministic universe, the polarizer angles the photons will and won't go through must be agreed upon in advance by the two entangled photons.
3. In order for the agreed-upon instructions (to go through or not go through) at -30 and 30 to be different, either the instructions at -30 and 0 are different or the instructions at 0 and 30 are different.
4. The probability for the instructions at -30 and 30 to be different is less than or equal to the probability for the instruction at -30 and 0 to be different plus the probability for the instructions at 0 and 30 to be different.
Step 2. doesn't represent a local common cause, or locality (independence). The photons might be exchanging ftl transmissions sometime after their emission from a common source, but before interacting with the polarizers.

From step 1. we might assume that the value of λ, the variable determining whether a photon will go through a polarizer or not, is the same for photons 1 and 2 of any given pair when θ = 0° .

If the value of λ is the same for photons 1 and 2 of an entangled pair when θ = 0°, then is there any reason to suppose that λ would be different for photons 1 and 2 of an entangled pair wrt any other θ, such as 30°?

Before we deal with that, we might speculate about the nature of λ in these experiments involving light and polarizers. The intensity of the light they transmit is affected by the polarizer's orientation. In a polariscopic setup, the intensity of the light transmitted by the analyzing polarizer varies as cos²θ. If we think of the optical disturbances incident on the polarizers as rotating wave shells whose expansion is constrained and directed by the transmission lines, then we can visualize the relationship between the polarizer setting and the axis of rotation as determining the amplitude of the wavefront filtered by the polarizer. So, for our purposes, the assumption that the rotational axes of entangled photons are identical seems to fit with the experimental result noted in step 1.
This common rotational axis of entangled photons, represented by λ, is also compatible with the assumption that it's produced locally via a common emission source.

Back to the preceding question. If photons 1 and 2 of any entangled pair have the exact same rotational axis, then how would we expect the rate of coincidental detection to vary as θ varies?

Using another QM observation, we note that the rate of individual detection doesn't vary with polarizer orientation, so we assume that λ is varying randomly from entangled pair to entangled pair. We also note that the rate of coincidental detection varies only with θ, and, most importantly, as cos²θ. Is this behavior compatible with our conceptualization of λ? It seems to be.

Start with a source emitting entangled photons with λ varying randomly. The emitter is flanked by two detectors A and B. Whatever the rate of coincidental detection is in this setup is normalized to 1. After putting identical polarizers in place, one between the emitter and A and one between the emitter and B, we note that the maximum rate of coincidental detection is 0.5 of what it was without the polarizers, and that it varies from 0.5 at θ = 0° to 0 at θ = 90° as cos²θ.

Now visualize this setup without the emitter in the middle and with the polarizers aligned (θ = 0°). We note a common rotational axis extending between the polarizers. Now rotate one or both of the polarizers to create a θ of 30°. We note a common rotational axis extending between the polarizers. Now rotate one or both of the polarizers to create a θ of 60°. We note a common rotational axis extending between the polarizers.

It's just like a polariscope, except that the source emitting random λ's is in the middle rather than at one end or the other, so both polarizers are analyzers, and the rate of coincidental detection has the same angular dependence as, and is analogous to, the detected intensity in a regular polariscopic setup.
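To put numbers on the analogy, here is a minimal sketch comparing the polariscope's Malus-law transmission, cos²θ, with the coincidence rate described above, 0.5·cos²θ; the angular dependence is the same.

Code:
import numpy as np

# Polariscope (Malus's law) transmitted fraction vs. the coincidence rate
# described above (0.5 * cos^2(theta)); both carry the same cos^2 dependence.
print("theta   Malus cos^2   coincidence 0.5*cos^2")
for theta in (0, 30, 45, 60, 90):
    c2 = np.cos(np.radians(theta)) ** 2
    print(f"{theta:5d}   {c2:11.3f}   {0.5 * c2:21.3f}")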

Thus, the nonlinear angular dependencies observed in Bell tests are intuitive, and compatible with a local deterministic view.
 
  • #76
lugita15 said:
because the detector is biased (in an unknown way) towards detecting photon pairs with certain (unspecified) characteristics, and that the photon pairs we do NOT detect would NOT display perfect correlation, and thus the predictions of QM would be incorrect.

why would a detector not detect even one such photon?

also this is somewhat verifiable via (patterns between) count of photons sent Vs detected
 
  • #77
ThomasT said:
I said that, wrt the OP, it's been demonstrated that the assumption of hidden variables doesn't imply a linear relationship between θ and rate of coincidental detection. So, unless you're saying that a nonlinear LR model of quantum entanglement hasn't been done, then it's been demonstrated.
I still maintain my claim that in a local deterministic universe in which there is perfect correlation at identical angle settings, Herbert's Bell inequality MUST be satisfied and thus the exact nonlinear correlation predicted by quantum mechanics can NOT be reproduced.
Perfect positive correlation between detection attributes at identical polarizer settings is either observed or it isn't. In hashing out these philosophical and semantic problems, an ideal situation with 100% efficient detection and 100% efficient pairing is usually assumed (which entail perfect positive correlation between detection attributes at identical polarizer settings).
I agree, experimental loopholes like detector efficiency are irrelevant for the philosophical issues we're discussing concerning whether local determinism is compatible in principle with all the predictions of quantum mechanics. The only reason I brought it up is because you mentioned local deterministic theories that are consistent with the results of currently practical Bell tests, and I was explaining how that is possible. It is because those models exploit experimental loopholes in order to claim that some of the predictions of QM are wrong, but that current practical limitations prevent us from definitively testing those particular predictions.
lugita15 said:
Wait a minute, you agree with me that any possible local hidden variable theory must make predictions contrary to those of QM?
Of course. Where have I ever said that I thought otherwise?
Remember, when I say "local hidden variables" I mean what you call "local determinism". Keeping that in mind, I assume you don't agree with me anymore.
I don't think that loopholes and practical flaws have anything to do with LR being incompatible with QM and experiment.
Loopholes have nothing to do with why local determinism is incompatible with the predictions of QM. But loopholes are relevant to the question of whether currently practical Bell tests definitively rule out local determinism, which is a question I'm not really interested in.
What would be different? Perfect, loophole-free Bell tests are already assumed.
Of course they're assumed in our philosophical discussion. I was just making a brief aside because you were making claims about what has and hasn't been demonstrated experimentally.
If perfect, loophole-free Bell tests were ever done, then would QM would suddenly become incorrect? Would QM suddenly become compatible with LR?
I claim that a perfect, loophole-free Bell test could in principle either disprove QM or disprove local determinism.
Apparently, the predictions of QM are correct, but it doesn't follow that nature is nonlocal.
Well, if the predictions of QM are correct and nature is local, then I claim that my four steps show nature is nondeterministic.
 
  • #78
ThomasT said:
Because of the LR requirements/restrictions, which, in effect, preempt (and thereby render irrelevant wrt nature) the locality assumption, Bell tests, even loophole-free ones, can't ever be used to determine an underlying nonlocality.
But my proof, which is about a perfect loophole-free Bell test, isn't talking about the formal model you call LR. It's talking about the philosophical stance that this is a local deterministic universe.
As for the assumption of a fundamental deterministic evolution. It's, in principle, nontestable. It's an unfalsifiable assumption. Just as the assumption of a fundamental nondeterminism is.
But even if determinism by itself is not testable, I claim local determinism is, at least in principle.
This just seems to be 'clutching at straws', so to speak. Since the correlation is perfect at θ = 0° for the entangled pairs that are detected, then I see no reason to assume that it would be different if all entangled pairs could be detected.
I agree that the view zonde advocates can be fairly described as "clutching at straws" (no offense), but I was just responding to your mention of local deterministic theories that are not absolutely ruled out by currently practical Bell tests.
You said that wrt certain recent Bell tests the LR predictions were correct. We know that LR and QM predictions must be different wrt entanglement setups. So, if the LR predictions were correct, then the QM predictions had to be wrong wrt the tests in question. But now you say that the test results suggest that QM was right? So, which is it?
Bell tests performed to date have not been good enough to definitively test which is right, the predictions of QM or the predictions of local determinism. They strongly lean towards QM being right, but there are small loopholes that keep this from being definitive, and it is these loopholes that local determinists like zonde cling to.
As I said, the loopholes are irrelevant, imo.
I agree that the experimental loopholes are irrelevant to the philosophical issues.
I think that ease of communication would be much better served if we simply write local determinism when we're referring to the philosophical assumptions, and LR when we're referring to the circumscribed LR formalism.
Yes, I'll try to avoid using at least the term "local realism". But just keep in mind that when I say "local hidden variables", I mean what you call "local determinism". (By the way, your usage of the term local realism is nonstandard, because most people use the term to refer to a philosophical stance).
Step 2. doesn't represent a local common cause, or locality (independence). The photons might be exchanging ftl transmissions sometime after their emission from a common source, but before interacting with the polarizers.
You're right, Step 2 applies not only to all local deterministic theories, but also to some nonlocal deterministic theories. Anyway, is step 2 where you disagree with my argument, and if not which step is it?
Thus, the nonlinear angular dependencies observed in Bell tests are intuitive, and compatible with a local deterministic view.
But I'm not asking you to give a plausible justification for why you think the nonlinear correlation predicted by QM would make reasonable sense in a local deterministic universe. I'm asking you, what specific step in my reasoning do you dispute?
 
  • #79
It is understood that Malus's Law and QM were derived independently.
I = I_m cos²θ
cos²(30°) = 0.75
cos²(60°) = 0.25
cos²(120°) = 0.25
cos²(45°) = 0.5
And when the polarizing plates are parallel (0° or 180°):
cos²θ = 1
For QM: (1/2π) ∫ cos²θ dθ = same as above (1/2π)
How then can the mechanism behind Malus's Law be discounted as having an effect (not a cause) on the Bell-inequality violation counts?
https://www.physicsforums.com/showthread.php?t=74806&highlight=malus+law&page=4
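A quick numerical check of the values listed above, along with the average of cos²θ over a full cycle, which comes out to 1/2 (a minimal sketch; reading the (1/2π) ∫ cos²θ dθ line as that average):

Code:
import numpy as np

# Malus-law values quoted above.
for deg in (30, 60, 120, 45, 0, 180):
    print(f"cos^2({deg:3d} deg) = {np.cos(np.radians(deg))**2:.2f}")

# Average of cos^2(theta) over a full cycle, (1/2pi) * integral of cos^2 d(theta) = 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 100_001)
avg = np.mean(np.cos(theta) ** 2)
print(f"average of cos^2 over 0..2pi = {avg:.3f}")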
 
  • #80
lugita15 said:
I still maintain my claim that in a local deterministic universe in which there is perfect correlation at identical angle settings, Herbert's Bell inequality MUST be satisfied and thus the exact nonlinear correlation predicted by quantum mechanics can NOT be reproduced.
You said that nonlinear LR models haven't been demonstrated, implying that, wrt the OP and title question, hidden variables imply a linear relationship between θ and the rate of coincidental detection. But it's a matter of fact that hidden variables don't imply a linear relationship between θ and the rate of coincidental detection, because there are nonlinear LR models of quantum entanglement. So, the OP's question has been answered.

Your claim is, in part, that LR models can't reproduce the exact nonlinear correlation predicted by QM. I agree with this. Afaik, most everybody agrees with this.

But you want to extend that mathematical fact to a statement about nature. Namely, that the incompatibility between LR and QM proves that nature is nonlocal. There do seem to be a number of physicists who believe this, but there are also a number who don't. I don't know the numbers, but, in my experience, no working physicist, teacher, professor, experimentalist, or theorist that I've talked to about this stuff believes that nonlocality has been proven via the incompatibility between LR and QM -- with the possible exception of dBB advocates, who, it bears noting, have a sort of vested interest in interpreting Bell's theorem as proof positive that nature is nonlocal.

The bottom line is that your n-step proof(s) cannot possibly prove that nature is nonlocal, because the only thing that could possibly prove the empirical truth of such a statement would be the objective observation and recording of the occurrence of ftl transmissions. Wrt which, afaik, there are none.

I offered, in post #75, a certain way of looking at optical Bell tests to show that, wrt a slightly less pedestrian (but not much less) and non-anthropomorphic conceptualization of what's happening in the quantum realm underlying instrumental behavior, it's possible to construct a view of quantum entanglement (at least wrt certain simplified and idealized setups) that's compatible with the philosophical assumption of local determinism. Of course, this conceptualization doesn't prove that nature is local any more than your arguments prove that it isn't.

It really is an open question, imho, as to whether nature is local or nonlocal. But modern physical science continues to assume (ie., proceed according to the working hypothesis) that nature, our universe, is evolving deterministically in accordance with the principle of locality, because these assumptions are the most reasonable given what's currently known.

lugita15 said:
... even if determinism by itself is not testable, I claim local determinism is, at least in principle.
The assumption that nature is exclusively local is a falsifiable assumption, via the production of just one nonlocal transmission. The assumption that nature is evolving deterministically isn't falsifiable.

With this, I think I should fade back into the peanut gallery wrt this thread. As far as I'm concerned the OP's question has been answered. If he wants more info he can find it via PF, Google, Yahoo, arxiv.org, etc. searches. If you still want to discuss your n step proof(s) that nature is nonlocal, then I think the appropriate place to do that would be in the philosophy forum.
 
  • #81
morrobay said:
It is understood that Malus's Law and QM were derived independently.
I = I_m cos²θ
cos²(30°) = 0.75
cos²(60°) = 0.25
cos²(120°) = 0.25
cos²(45°) = 0.5
And when the polarizing plates are parallel (0° or 180°):
cos²θ = 1
For QM: (1/2π) ∫ cos²θ dθ = same as above (1/2π)
How then can the mechanism behind Malus's Law be discounted as having an effect (not a cause) on the Bell-inequality violation counts?
https://www.physicsforums.com/showthread.php?t=74806&highlight=malus+law&page=4
Thanks for the link. Interesting thread. Regarding your question, Malus Law isn't discounted in the QM treatment of optical Bell tests. I wouldn't call Malus Law itself a mechanism. Not that you did that.

I suppose you're referring to some presumed underlying dynamics that result in the observations referred to collectively as supporting the Malus Law function. I would also suppose that virtually all optics texts have something to say about the mechanism underlying observations of Malus Law.
 
  • #82
Thanks Thomas. My question has been answered, namely: the laws of probability are additive and hence linear. Bell's proof, proving the existence of quantum entanglement, seems valid to me within the scope of this question. If one has to go beyond that, then it requires much more research and thinking. I will go through your posts in detail later this week.

The hypothesis that I had in mind was that at various angles it would not be linear; however, I realized that in LHV the photons would not even know the angle between the polarizers. The hypothesis that I had in mind does, however, fit with QM.
 
  • #83
San K said:
Thanks Thomas. My question has been answered, namely: the laws of probability are additive and hence linear. Bell's proof, proving the existence of quantum entanglement, seems valid to me within the scope of this question. If one has to go beyond that, then it requires much more research and thinking. I will go through your posts in detail later this week.
Yes. Bell's proof is valid. But it doesn't prove the existence of quantum entanglement. Quantum entanglement refers to certain experimental results and the preparations that produce those results. It's just a convention. What Bell proved was that Bell LR models of entanglement are incompatible with QM. The larger question is how to interpret that, what it might entail wrt inferences about nature.

But don't waste any time on my posts. I'm as fascinated and confused about this stuff as anyone. A recommended path of inquiry would be to go back to Bell 1964 and work your way forward from there. When you have questions about a mathematical meaning or a conceptual line of reasoning, then present them at PF.
 
  • #84
ThomasT said:
[..]This just seems to be 'clutching at straws', so to speak. Since the correlation is perfect at θ = 0° for the entangled pairs that are detected, then I see no reason to assume that it would be different if all entangled pairs could be detected.
[..]
I'm not sure about this, but it seems to me that here is again a partial misunderstanding, for there is a "twist" on this: the correlation may be perfect for those pairs that are called "entangled pairs".

To be clearer I'll give an example (however deeper discussion would need a new thread): in Weihs' experiment, many pairs were actually detected but rejected for analysis by means of a very small time window, while reanalysis by De Raedt et al yielded a very different result with a larger time window; this was all matched with a partly ad hoc local simulation model.
 
  • #85
ThomasT said:
Yes. Bell's proof is valid. But it doesn't prove the existence of quantum entanglement. Quantum entanglement refers to certain experimental results and the preparations that produce those results. It's just a convention. What Bell proved was that Bell LR models of entanglement are incompatible with QM. The larger question is how to interpret that, what it might entail wrt inferences about nature.

agreed

ThomasT said:
I'm as fascinated and confused about this stuff as anyone. A recommended path of inquiry would be to go back to Bell 1964 and work your way forward from there. When you have questions about a mathematical meaning or a conceptual line of reasoning, then present them at PF.

good suggestion, thanks Tom.

a thought ...

50% of all photons pass through a polarizer at all angles ...

- are the 50% different from the 50% that don't pass through (just like the initial conditions of a coin toss which shows heads is different from a coin toss which shows tails)?
- are photon spins partially malleable, by a polarizer, within limits?
- do the members of the 50% pass-through group keep changing with the polarizer angle?
 
  • #86
San K said:
50% of all photons pass through a polarizer at all angles ...
My understanding is that, wrt Bell tests, there's a source emitting randomly polarized (or unpolarized) photons. Looking at just one side, without a polarizer between the emitter and detector, then there will be a certain rate of detection per unit time. When a polarizer is placed between the emitter and detector, then the rate of detection is cut by 50%, no matter how the polarizer is oriented.

San K said:
- are the 50% different from the 50% that don't pass through (just like the initial conditions of a coin toss which shows heads is different from a coin toss which shows tails)?
I'm not sure what you're asking. A photon transmitted by the polarizer and registered by the detector is hypothesized, I think, to have the same polarization orientation, on interacting with the detector, as the orientation of the polarizer that transmitted it.

San K said:
- are photon spins partially malleable, by a polarizer, within limits?
I think that's the mainstream hypothesis. But I don't know. There was a thread about this some time back. I forget the title of it.
 
  • #87
harrylin said:
I'm not sure about this ...
Neither am I. DrC is pretty familiar/fluent wrt the experiment and simulation you mentioned. I think he might agree with:
ThomasT said:
Since the correlation is perfect at θ = 0° for the entangled pairs that are detected, then I see no reason to assume that it would be different if all entangled pairs could be detected.
But I don't know.
 
  • #88
zonde might be a good person to ask about this. He believes in models in which you would get dramatically different results if you detected all the entangled pairs. He thinks the polarizers are somehow biased towards only detecting the ones that display perfect correlation.
 
  • #89
lugita15 said:
zonde might be a good person to ask about this. He believes in models in which you would get dramatically different results if you detected all the entangled pairs. He thinks the polarizers are somehow biased towards only detecting the ones that display perfect correlation.
This is one point that you and I agree on. Isn't it -- that there wouldn't be any difference if everything were perfect?

Looking at Aspect 1982, the setup is biased in the sense that the only photons that are, supposedly, considered (i.e., interacting with the polarizers) are ones moving in exactly opposite directions and which are detected within a certain 19 ns time interval, and thus presumably were emitted by the same atom during the same atomic transition. That is, the setup is designed so that the only photons which might possibly be detected are entangled photons. (I'm not exactly sure how it works, and it might be that disturbances of lower amplitude get randomly into the transmission lines and that this is one source of the inefficiencies.)

There's no particular reason to assume that anything else is biased or inefficient in any way that can't be accounted for with some reasonable assumptions. Identical polarizers placed in the transmission line path are used, and identical PMTs are used.

So, to me, the most reasonable assumption is that if you had 100% detection efficiency, and 100% pairing efficiency, then QM would still hold, and LR, being necessarily incompatible with QM, would not.
 
  • #90
ThomasT said:
This is one point that you and I agree on. Isn't it -- that there wouldn't be any difference if everything were perfect?

Looking at Aspect 1982, the setup is biased in the sense that the only photons that are, supposedly, considered (i.e., interacting with the polarizers) are ones moving in exactly opposite directions and which are detected within a certain 19 ns time interval, and thus presumably were emitted by the same atom during the same atomic transition. That is, the setup is designed so that the only photons which might possibly be detected are entangled photons. (I'm not exactly sure how it works, and it might be that disturbances of lower amplitude get randomly into the transmission lines and that this is one source of the inefficiencies.)

There's no particular reason to assume that anything else is biased or inefficient in any way that can't be accounted for with some reasonable assumptions. Identical polarizers placed in the transmission line path are used, and identical PMTs are used.

So, to me, the most reasonable assumption is that if you had 100% detection efficiency, and 100% pairing efficiency, then QM would still hold, and LR, being necessarily incompatible with QM, would not.

Tom, Not wishing to hi-jack this thread, I wonder if this might help you in discussions elsewhere?

"I am certain that, if you had 100% detection-efficiency and 100% pairing-efficiency, then QM would hold, as would Einstein-locality. So that suggests to me the need to focus on the R in LR." :smile:

Regards, GW
 
  • #91
Gordon Watson said:
"I am certain that, if you had 100% detection-efficiency and 100% pairing-efficiency, then QM would hold, as would Einstein-locality. So that suggests to me the need to focus on the R in LR." :smile:
Regards, GW
Einstein locality refers to independence, i.e., the separability of the joint function vis-à-vis the functions and λs which determine individual detection, doesn't it? If so, then that way of formalizing locality would continue to be ruled out. I agree with ttn on that. It's the locality condition which effectively creates an LR-predicted correlation between θ and rate of coincidental detection that's incompatible with the QM-predicted correlation.

Realistic models of entanglement are allowed -- as long as they're explicitly nonlocal. This is another reason why, wrt the title question, hidden variables, by themselves, don't imply a linear relationship between θ and rate of coincidental detection.

Of course, the OP was talking about LR models, and, imo, the key to why BIs are violated has to do with how the assumption of locality is expressed in an LR model of entanglement.
 
  • #92
ThomasT said:
If so, then that way of formalizing locality would continue to be ruled out. I agree with ttn on that.
ThomasT, regardless of whether Bell's "way of formalizing locality" restricted it in some way, and regardless of whether that means that Bell's original proof does not apply to all local deterministic theories, the point still remains that not all Bell proofs involve a "formal model". Probably Herbert's version of the proof, and certainly my restatement of Herbert, is not concerned with "encoding" or "formalizing" the philosophical assumption of local determinism to fit some kind of restricted model. The only thing I'm trying to do is deduce the logical consequences of this philosophical assumption, and I claim to have done so in my four steps. If you believe that local determinism IS compatible with the predictions of QM, then the burden of proof is on you to identify the step you disagree with, because if all of my steps are correct how can my conclusion be wrong?

EDIT: For convenience, I just put my four steps in a blog post here.
 
  • #93
lugita, the blog link you gave doesn't seem to work.
But I'll reply to your comments.
lugita15 said:
ThomasT, regardless of whether Bell's "way of formalizing locality" restricted it in some way, and regardless of whether that means that Bell's original proof does not apply to all local deterministic theories, the point still remains that not all Bell proofs involve a "formal model".
In that case, then they can hardly be called proofs. Proof of nonlocal transmissions would be the objectively recorded observation of nonlocal transmissions. Neither you, nor Bell, nor Herbert, nor Bell tests offer that. Instead, we must, you say, infer from the steps in your lines of reasoning that nature must be nonlocal. I don't conclude that from your, or Bell's, or Herbert's, or ttn's treatments.

lugita15 said:
Probably Herbert's version of the proof, and certainly my restatement of Herbert, is not concerned with "encoding" or "formalizing" the philosophical assumption of local determinism to fit some kind of restricted model.
But, in effect, that's what you're doing. You're placing certain restrictions on the correlation between θ and rate of coincidental detection. Where do these restrictions come from? Are they warranted? Do they actually show that nature is nonlocal, or might there be some other explanation regarding the effective causes of BI violations?

lugita15 said:
If you believe that local determinism IS compatible with the predictions of QM, then the burden of proof is on you to identify the step you disagree with, because if all of my steps are correct how can my conclusion be wrong?
Formal, standard, Bell-type LR is incompatible with formal QM. That's an indisputable mathematical fact.

But I think I've shown that wrt at least one conceptualization of the situation the predictions of QM are quite compatible with the assumption of local determinism. Encoding that assumption into a formal model that agrees with QM and experiment is another problem altogether. It, apparently, can't be done.

Extant observations (not necessarily interpretations of those observations) are all in line with the assumption of local determinism, so if you say, via some logical argument, that nature must be nonlocal, then the burden of proof is on you. And that proof, scientifically, wrt your contention, would consist of producing some nonlocal transmissions.

Since no nonlocal transmissions have ever been observed/recorded, then the most reasonable scientific position is to retain the assumption that our universe is evolving in accordance with the principle of locality.

Wrt Bell, Herbert, etc., that means that the most reasonable hypothesis is that there's something in the formalism, or line of reasoning, that has nothing to do with locality in nature, but which nevertheless skews the predictions of an LR formalism or line of reasoning.

You're a scientist, right? Ok, so just approach this problem from a different perspective, adopting the working hypothesis that maybe, just maybe, there's something in the standard Bell-type LR formalism, or, say, a Herbert-like line of reasoning, that doesn't fit the experimental situation, and that, just maybe, that incompatibility has nothing to do with whether nature is nonlocal or not.
 
  • #94
lugita15 said:
zonde might be a good person to ask about this. He believes in models in which you would get dramatically different results if you detected all the entangled pairs. He thinks the polarizers are somehow biased towards only detecting the ones that display perfect correlation.

well in that case the above hypothesis can be expanded to apply to various experiments of QM...for example this argument could be stretched to even (single photon, double slit) interference patterns
 
  • #95
ThomasT said:
I'm not sure what you're asking. A photon transmitted by the polarizer and registered by the detector is hypothesized, I think, to have the same polarization orientation, on interacting with the detector, as the orientation of the polarizer that transmitted it.

i am saying

One hypothesis/assumption could be that --

prior to even interacting with the polarizer, 50% of the photons have a property that is different from the 50% that will pass through... (or how else do we explain why 50% pass through... ;) ...)

i.e. the photons that will (or will not) pass through the polarizer are predetermined/marked for any given angle/orientation of the polarizer


that group of photons changes ...with the change in polarizer angle

let me illustrate the above hypothesis with the following example:

let's say we have 360 photons

on average maybe each is at 0, 1, 2...360 degree etc...

so for polarizer at 0 degree...the ones between 270 and 90 degree pass through
for polarizer at 90 degree the ones between 0 and 180 pass through
for polarizer at 180 degree the ones between 90 and 270 pass through etc

to add up to 50% passing through...at any polarizer angle/orientation
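Taking the example above literally, here is a minimal sketch; the specific pass rule (a 180°-wide arc centered on the polarizer axis) is an assumption read off the ranges given:

Code:
import numpy as np

# 360 photons at roughly one-degree spacing; the half-degree offset just keeps
# photons off the exact pass/block boundary.
photons = np.arange(360) + 0.5

def passes(photon_deg, polarizer_deg):
    # Assumed rule from the example: pass iff the photon's angle lies in the
    # 180-degree-wide arc centered on the polarizer setting.
    d = np.abs((photon_deg - polarizer_deg + 180.0) % 360.0 - 180.0)
    return d < 90.0

for a in (0, 30, 90, 137, 180, 270):
    print(f"polarizer at {a:3d} deg: {passes(photons, a).mean():.0%} pass")

The 50% individual rate does come out at every setting; whether a scheme like this can also reproduce the pair statistics when the two polarizer settings differ is what the replies below take up.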
 
  • #96
San K said:
the photons that will (or will not) pass through the polarizer are predetermined/marked for any given angle/orientation of the polarizer
If I understand you, all you're saying is that each photon decides in advance which angles to go through and which angles not to go through. And since the two photons in an entangled pair always do the same thing when you send them through polarizers oriented at the same angle, you can conclude that the two photons have agreed on which angles to go through and which ones not to. That's exactly in line with my four-step proof.
 
  • #97
San K said:
...let's say we have 360 photons

on average maybe each is at 0, 1, 2...360 degree etc...

so for polarizer at 0 degree...the ones between 270 and 90 degree pass through
for polarizer at 90 degree the ones between 0 and 180 pass through
for polarizer at 180 degree the ones between 90 and 270 pass through etc

to add up to 50% passing through...at any polarizer angle/orientation

As lugita15 says, this is exactly the idea (or its equivalent) that was considered PRIOR to Bell. But you will quickly find that it works fine when the angle settings are identical, but generally does not work out at many combinations of settings.

0/120/240 are good examples - take the DrChinese challenge! See how close you can get the match rate (comparing AB, BC, AC) to 25% (the QM value) for 10 photons.

Photon 1:
A=0 degrees is +
B=120 degrees is +
C=240 degrees is -

So AB is a match, but AC and BC are not. 1 of 3.

Photon 2:
A=0 degrees is -
B=120 degrees is +
C=240 degrees is -

So AC is a match, but AB and BC are not. 1 of 3.

Photon 3:
A=0 degrees is -
B=120 degrees is -
C=240 degrees is -

So AC, AB and BC are matches. 3 of 3.

Etc. to 10 or whatever.

You win the challenge if your average rate is lower than 1/3. Again, the QM prediction is 1/4. It should be clear that the ONLY way to "win" over a suitably large sample is to know in advance which 2 settings will be selected for each photon in our trial. But if I do that (being independent of you and not knowing whether you marked a + or a - for each photon's settings) after you mark your values, the rate of matches will converge on 1/3 or more.
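One way to see the 1/3 floor is by brute force; here is a minimal sketch (the enumeration below is an illustration, not DrChinese's own code): list all eight possible +/- instruction sets for A, B, and C and compute the match rate when the pair of settings is chosen at random, independently of the instructions.

Code:
import math
from itertools import product

pairs = [(0, 1), (0, 2), (1, 2)]  # settings compared: (A,B), (A,C), (B,C), equally likely

lowest = 1.0
for instr in product("+-", repeat=3):  # all 8 possible instruction sets for A/B/C
    rate = sum(instr[i] == instr[j] for i, j in pairs) / len(pairs)
    lowest = min(lowest, rate)
    print(instr, f"match rate = {rate:.3f}")

print(f"lowest match rate any instruction set allows = {lowest:.3f}")          # 1/3
print(f"QM prediction cos^2(120 deg) = {math.cos(math.radians(120))**2:.3f}")  # 1/4

No single instruction set gets below 1/3, so no mixture of them can either; the QM value of 1/4 is out of reach for pre-agreed instructions.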
 
  • #98
ThomasT said:
lugita, the blog link you gave doesn't seem to work.
Sorry, I had to get it approved. Now you can read my post. (It doesn't contain any new info, just the reasoning I've already been giving, for easy reference.)
In that case, then they can hardly be called proofs. Proof of nonlocal transmissions would be the objectively recorded observation of nonlocal transmissions.
No, such an observation would be EVIDENCE of nonlocal transmissions. But what I'm trying to do is write logical proofs, not argue about evidence.
Neither you, nor Bell, nor Herbert, nor Bell tests offer that. Instead, we must, you say, infer from the steps in your lines of reasoning that nature must be nonlocal.
No, I do not say that. I say that we must infer from my line of reasoning that local determinism is incompatible with the experimental predictions of quantum mechanics being completely correct.
But, in effect, that's what you're doing. You're placing certain restrictions on the correlation between θ and rate of coincidental detection. Where do these restrictions come from?
I am not arbitrarily placing restrictions. I am *logically deducing* such restrictions, i.e. the Bell inequality, from certain assumptions. If you disagree with my conclusion, you must either disagree with the assumption of local determinism, or you must believe that my reasoning is flawed.
Do they actually show that nature is nonlocal, or might there be some other explanation regarding the effective causes of BI violations?
The proofs show that either nature is nonlocal, nature is nondeterministic, or that quantum mechanics is incorrect, in principle, in at least some of its experimental predictions. As far as actual BI violations in practical Bell tests, zonde will tell you that they reveal nothing at all due to experimental loopholes.
Extant observations (not necessarily interpretations of those observations) are all in line with the assumption of local determinism, so if you say, via some logical argument, that nature must be nonlocal, then the burden of proof is on you.
But I'm not logically proving that nature must be nonlocal. I am trying to logically prove that local determinism leads to certain conclusions that contradict the experimental predictions of QM. So the burden of proof is still on you to either identify a step in my reasoning you disagree with or to agree with the conclusion of my reasoning.
And that proof, scientifically, wrt your contention, would consist of producing some nonlocal transmissions.
If I were interested in proving that this is a nonlocal universe, that might be a worthwhile thing to try to do. But that's not what I'm doing here. I'm trying to demonstrate that two assumptions are logically incompatible.
Since no nonlocal transmissions have ever been observed/recorded, then the most reasonable scientific position is to retain the assumption that our universe is evolving in accordance with the principle of locality.
The reasoning certainly allows you to retain local determinism, but then it forces you to conclude that not all the experimental predictions of QM are correct.
Wrt Bell, Herbert, etc., that means that the most reasonable hypothesis is that there's something in the formalism, or line of reasoning, that has nothing to do with locality in nature, but which nevertheless skews the predictions of an LR formalism or line of reasoning.
Well there is no formalism in Herbert's proof or my restatement of it, so then the thing you must dispute is the line of reasoning. But which step is it? Step 1 is just a statement of a prediction of QM. Step 3 is completely obvious given step 2 (it's the transitive property of equality: if A=B and B=C then A=C). Step 4 is just an application of the laws of probability using step 3. So the only step left is Step 2. But at least previously, you were unwilling to reject step 2.
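Putting numbers on that, a minimal sketch, using the mismatch probability sin²θ that follows from the cos²θ match rate discussed in this thread: step 4 requires P(mismatch at 60°) ≤ P(mismatch at 30°) + P(mismatch at 30°), and QM puts 0.75 on the left against 0.5 on the right.

Code:
import math

def qm_mismatch(theta_deg):
    # QM mismatch probability for polarizers theta degrees apart
    # (the match rate is cos^2(theta), so the mismatch rate is sin^2(theta)).
    return math.sin(math.radians(theta_deg)) ** 2

lhs = qm_mismatch(60)                    # settings -30 and +30 are 60 degrees apart
rhs = qm_mismatch(30) + qm_mismatch(30)  # (-30, 0) plus (0, +30)
print(f"P(mismatch, 60 deg)                       = {lhs:.2f}")
print(f"P(mismatch, 30 deg) + P(mismatch, 30 deg) = {rhs:.2f}")
print("step-4 inequality satisfied?", lhs <= rhs)  # False: the QM numbers violate it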
You're a scientist, right? Ok, so just approach this problem from a different perspective, adopting the working hypothesis that maybe, just maybe, there's something in the standard Bell-type LR formalism, or, say, a Herbert-like line of reasoning, that doesn't fit the experimental situation, and that, just maybe, that incompatibility has nothing to do with whether nature is nonlocal or not.
I readily concede that Herbert's ideal scenario is not exactly realized in currently practical Bell tests, due to various loopholes. But I maintain the claim that a loophole-free Bell test could, in principle, refute local determinism (as always, excluding superdeterminism). I am also willing to concede that the incompatibility demonstrated by Herbert's reasoning does not automatically mean that nature is nonlocal. I don't think I've ever claimed this.
 
  • #99
lugita15 said:
If I understand you, all you're saying is that each photon decides in advance which angles to go through and which angles not to go through. And since the two photons in an entangled pair always do the same thing when you send them through polarizers oriented at the same angle, you can conclude that the two photons have agreed on which angles to go through and which ones not to. That's exactly in line with my four-step proof.

Yes it is. Agreed Lugita.

question: why do 50% of photons go through a polarizer? What is QM's explanation for that? Is the (indeterminate state) photon's interaction with the polarizer -- totally random or is it cause and effect?
 
  • #100
San K said:
question: why do 50% of photons go through a polarizer? What is QM's explanation for that? Is the (indeterminate state) photon's interaction with the polarizer -- totally random or is it cause and effect?
I see what you mean now, from your post #95. That seems to work for rate of individual detection, but not for rate of coincidental detection. Afaik, QM doesn't provide a causal explanation for either the random individual result sequences, or the random coincidental result sequences, or the predictable correlation between θ and rate of coincidental detection.
 
  • #101
DrChinese said:
As lugita15 says, this is exactly the idea (or its equivalent) that was considered PRIOR to Bell. But you will quickly find that it works fine when the angle settings are identical, but generally does not work out at many combinations of settings.

0/120/240 are good examples - take the DrChinese challenge! See how close you can get the match rate (comparing AB, BC, AC) to 25% (the QM value) for 10 photons.

Photon 1:
A=0 degrees is +
B=120 degrees is +
C=240 degrees is -

So AB is a match, but AC and BC are not. 1 of 3.

Photon 2:
A=0 degrees is -
B=120 degrees is +
C=240 degrees is -

So AC is a match, but AB and BC are not. 1 of 3.

Photon 3:
A=0 degrees is -
B=120 degrees is -
C=240 degrees is -

So AC, AB and BC are matches. 3 of 3.

Etc. to 10 or whatever.

You win the challenge if your average rate is lower than 1/3. Again, the QM prediction is 1/4. It should be clear that the ONLY way to "win" over a suitably large sample is to know in advance which 2 settings will be selected for each photon in our trial. But if I do that (being independent of you and not knowing whether you marked a + or a - for each photon's settings) after you mark your values, the rate of matches will converge on 1/3 or more.

Agreed, could not win the DrChinese challenge...:)

question (and I will search on the net too) what are the QM calculations and assumptions to arrive at 1/4?
 
  • #102
ThomasT said:
I see what you mean now, from your post #95.

great

ThomasT said:
That seems to work for rate of individual detection, but not for rate of coincidental detection.

interesting.

what is co-incidental detection? is it (experiments using) entangled photons detected by a co-incidence counter?

ThomasT said:
Afaik, QM doesn't provide a causal explanation for either the random individual result sequences, or the random coincidental result sequences, or the predictable correlation between θ and rate of coincidental detection.

ok
 
  • #103
San K said:
Agreed, could not win the DrChinese challenge...:)

question (and I will search on the net too) what are the QM calculations and assumptions to arrive at 1/4?

A - B = -120 degrees
A - C = -240 "
B - C = -120 "

These are all equivalent for the function cos^2(theta), which is the formula for the QM prediction.

cos^2(-120) = .25
cos^2(-240) = .25
cos^2(-120) = .25

So no matter which pair you consider, the QM expectation is 1/4.
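A quick check of those numbers:

Code:
import math

# Each pair of settings from {0, 120, 240} degrees differs by 120 or 240 degrees,
# and cos^2 gives the same value for both differences.
for diff in (-120, -240):
    print(f"cos^2({diff:4d} deg) = {math.cos(math.radians(diff))**2:.2f}")  # 0.25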
 
  • #104
lugita15 said:
Sorry, I had to get it approved. Now you can read my post. (It doesn't contain any new info, just the reasoning I've already been giving, for easy reference.)
Ok. That makes it handy.

lugita15 said:
No, I do not say that. I say that we must infer from my line of reasoning that local determinism is incompatible with the experimental predictions of quantum mechanics being completely correct.
What you (and Bell and Herbert) are saying is that expressing coincidental detection in terms of a separable local predetermination is incompatible with the QM-predicted and observed correlation between θ and rate of coincidental detection. Which I agree with.

lugita15 said:
I am not arbitrarily placing restrictions. I am *logically deducing* such restrictions, i.e. the Bell inequality, from certain assumptions. If you disagree with my conclusion, you must either disagree with the assumption of local determinism, or you must believe that my reasoning is flawed.
Wrt Bell's formulation, it's clear where the restrictions come from and how they affect the predictions of any LR model that encodes those restrictions. Wrt your and Herbert's proofs, it's not so clear to me -- so, if you could clarify that it would help.

lugita15 said:
The proofs show that either nature is nonlocal, nature is nondeterministic, or that quantum mechanics is incorrect, in principle, in at least some of its experimental predictions.
More precisely, the proofs show that any model or line of reasoning embodying certain restrictions must be incompatible with QM and experiment. What are the restrictions, and how did they become part of the model or line of reasoning? Does employing these restrictions prove that nature is nonlocal? Imo, no.

It's been well established that the QM predictions are correct. Regarding determinism, it's an unfalsifiable assumption. So all you're dealing with is locality. So, what you're saying your proof proves is that nature is nonlocal (which is what Herbert says). But, what you've shown is that a particular way of conceptualizing coincidental detection is incompatible with QM and experiment. You can infer, from a certain conceptualization and line of reasoning that nature is nonlocal, but whether or not that inference is warranted depends on what's involved in the model or line of reasoning, and whether or not that inference is a fact of nature can only be ascertained by observing a nonlocal transmission.

lugita15 said:
But I'm not logically proving that nature must be nonlocal.
Yet that seems to be what you said above, and it is what Herbert says his proof proves, and you present your steps as a simplified recounting of Herbert's proof.

lugita15 said:
So the burden of proof is still on you to either identify a step in my reasoning you disagree with or to agree with the conclusion of my reasoning.
I think you (and Bell and Herbert) have proved what I said above. If you don't claim that your proof proves that nature is nonlocal, then we're basically on the same page.
 
  • #105
San K said:
Yes it is. Agreed Lugita.

question: why do 50% of photons go through a polarizer? What is QM's explanation for that? Is the (indeterminate state) photon's interaction with the polarizer -- totally random or is it cause and effect?
According to (the standard interpretation of) quantum mechanics, you have a wave function for the two-particle system, so the polarizations of the particles are in a superposition of states, until one of the photons is detected by one of the polarizers (say the first polarizer). Then the wave function of the system collapses (nonlocally and instantaneously), putting both photons in the same definite polarization state. The collapse will either make both particles polarized in the direction of the first polarizer, or make both particles polarized perpendicular to the direction of the first polarizer. Which of these two things will happen is considered to be a 50-50 chance event, because wave function collapse is, according to (the standard interpretation of) QM, completely random.

So then if the collapse makes the photons polarized in the direction of the first polarizer, the first photon will go through the first polarizer. If the collapse makes the photons polarized perpendicular to the first polarizer, then the first photon doesn't go through. So to someone just looking at the first polarizer, he always sees random 50-50 results.

What about the second polarizer? Well, the second photon is now in a definite polarization state, either parallel or perpendicular to the angle of the first polarizer. So now if the second polarizer is oriented at the same angle as the first one, the second photon will do the same thing the first one did. If the second polarizer is oriented at a different angle, then the second photon will randomly either go through or not go through, with a probability of going through equal to the cosine squared of the difference between the polarization angle of the photon and the angle of the second polarizer. But if someone is just looking at the second polarizer, they won't know what angle the first polarizer was turned to or whether the first photon went through or not, so they won't know what angle the second photon was polarized along before it hit, and thus to them it will seem to go through or not go through with random 50-50 chance.
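Here is a minimal simulation of the collapse picture just described, assuming only what is stated above (a 50-50 first outcome, then a cos² pass probability for the second photon relative to its collapsed polarization):

Code:
import numpy as np

rng = np.random.default_rng(0)

def same_outcome_rate(theta_deg, n=200_000):
    # First photon passes its polarizer with probability 1/2 (random collapse).
    first_passes = rng.random(n) < 0.5
    # Collapsed polarization relative to the second polarizer: theta if the first
    # photon passed, theta + 90 degrees if it was blocked.
    angle = np.where(first_passes, theta_deg, theta_deg + 90.0)
    # Second photon passes with probability cos^2 of that angle.
    second_passes = rng.random(n) < np.cos(np.radians(angle)) ** 2
    return np.mean(first_passes == second_passes)

print("theta   simulated same-outcome rate   cos^2(theta)")
for theta in (0, 30, 45, 60, 90):
    print(f"{theta:5d}   {same_outcome_rate(theta):27.3f}   {np.cos(np.radians(theta))**2:12.3f}")

The same-outcome rate tracks cos²θ, while each side separately sees 50-50 results, as described.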

Does that make sense?
 
