A Bell Theorem with no locality assumption?

In summary, the conversation revolves around the concept of realism and its relationship to Bell and other HV no-go theorems. It also discusses the interchangeability of Hidden Variables and Realism, as well as the idea of elements of reality as a starting point for discussions. The discussion is based on papers by Charles Tresser, specifically focusing on versions of Bell's Theorem that do not assume locality and the implications of classical realism versus locality. The conversation also touches on the concept of naive realism and its relation to non-locality. The idea of an any-all distinction in quantum mechanics and its connection to the Uncertainty Principle is also mentioned. The conversation concludes with a discussion of a simple classical model that captures aspects of this behavior.
  • #71
zonde said:
The detection loophole is the fact that the interpretation of Bell tests that use photons has to rely on the fair sampling assumption. This assumption means that the correlations in photon pairs where one of the two is not detected would be the same (if they were detected) as for the pairs where both photons were detected.
If that is not so, then the correlations can be affected by the detection of different subsamples under different settings of the analyzer.
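To make the fair-sampling point concrete, here is a minimal toy sketch (my own illustration, not any published model; the setting-dependent detection rule is an arbitrary assumption) showing how the detected subsample can exhibit different correlations than the full ensemble:
[code]
import numpy as np

# Toy illustration of the detection loophole: a local hidden variable lambda
# determines both outcomes, and a setting-dependent detection rule selects
# a biased subsample of pairs.
rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0.0, np.pi, n)          # shared hidden polarization angle

def outcome(theta, lam):
    # deterministic local outcome, +1 or -1
    return np.sign(np.cos(2.0 * (theta - lam)))

def detected(theta, lam):
    # unfair sampling: detection probability depends on setting and lambda
    return rng.uniform(0.0, 1.0, lam.size) < np.cos(2.0 * (theta - lam)) ** 2

a, b = 0.0, np.pi / 8                     # analyzer settings, 22.5 deg apart
A, B = outcome(a, lam), outcome(b, lam)
both = detected(a, lam) & detected(b, lam)

print("E(a,b), all pairs:           ", round(np.mean(A * B), 3))
print("E(a,b), detected pairs only: ", round(np.mean(A[both] * B[both]), 3))
[/code]
With this biased detection rule the detected pairs come out noticeably more strongly correlated than the full ensemble, which is exactly why the fair sampling assumption matters.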


I thought the point of these experiments was to prove that objective reality doesn't exist until we start to detect/measure it (although this can't be/isn't true).
 
  • #72
DrChinese said:
With LR nowhere near those results, which are 100% consistent with QM.

Don't make me come and beat you up! :smile:
Well actually, at the risk of personal injury, I feel compelled to say that the predictions of the most sophisticated LRHV formulations aren't that far from the qm predictions. They're more or less in line, just as the qm predictions are, with what one might expect given the principles of classical optics; that is, the angular dependence is similar.

Nevertheless, the predicted results are still measurably different. So, I don't know what zonde is saying. I currently believe that the LR program is more or less dead, and I don't think that improving the detection efficiency will resurrect it, because, as I mentioned, even assuming 100% efficiency the qm predictions are still different from the LRHV predictions.

IMO, the stuff by Zeilinger et al. about the importance of closing the detection loophole is just a bunch of BS aimed at procuring contracts and grants from more or less ignorant investors. But then, what do I know?

We'll see what zonde has to say about this.
 
  • #73
No-where-man said:
I thought the point of these experiments was to prove that objective reality doesn't exist until we start to detect/measure it (although this can't be/isn't true).
I don't think that that's the point of the experiments -- although some otherwise very wise commenters have said this.

The experiments demonstrate that a certain class of models of quantum entanglement that conform to certain explicit expressions/forms of local realism are incapable of correctly predicting the results of entanglement preparations.

What this means wrt objective reality is still a matter of some dispute. However, the mainstream view seems to be that the experiments don't inform wrt objective reality, but only wrt the specific restrictions on the formulation of models of quantum entanglement.

So, if you're a local realist, then you can still be a local realist and neither qm nor experimental results contradict this view. It's just that you can't model entanglement in terms of the hidden variables that determine individual results -- because coincidental detection isn't determined by the variables that determine individual detection.
 
  • #74
ThomasT said:
Ok, but calculations assuming 100% efficient detection also indicate a measurable difference between qm predictions and LRHV predictions. So, I'm not sure what you're asserting wrt local realism.
Can you say what it means, from the perspective of QM, that there is 100% efficient detection? Does it have anything to do with wave-particle duality?
Then we can speak about calculations assuming 100% efficient detection.

ThomasT said:
I'm not sure what you mean by this. Afaik, wrt the way I learned what I remember of qm :rolleyes:, it doesn't have to do with single photon detections, but only with photon flux over a large number of trials. And isn't this also what LRHV models of entanglement are concerned with predicting?
There is a difference between a statistical ensemble and a physical ensemble.
In a statistical ensemble you have independent events, and all statistics you calculate from the statistical ensemble can be applied to individual events as probabilities.

Now what I mean is that QM predictions are not accurate for a statistical ensemble of single-photon events. It gives basically the same effect as making photons distinguishable at the double slit, i.e. no interference pattern.

ThomasT said:
Ok, I think we agree on this. So exactly what are you referring to when you speak of "local realism"?
I am referring to a kind of (physical) ensemble interpretation. Or more exactly, I mean that interference is an effect of unfair sampling. I have tried to put together things from the discussion into a single coherent piece: http://vixra.org/abs/1109.0052.
 
  • #75
zonde said:
... I have tried to put together things from the discussion into a single coherent piece: http://vixra.org/abs/1109.0052.

Well, you ignore my question on mainstream science, and then you link to a "paper" on a site that has this policy:
ViXra.org said:
ViXra.org is an e-print archive set up as an alternative to the popular arXiv.org service owned by Cornell University. It has been founded by scientists who find they are unable to submit their articles to arXiv.org because of Cornell University's policy of endorsements and moderation designed to filter out e-prints that they consider inappropriate.

ViXra is an open repository for new scientific articles. It does not endorse e-prints accepted on its website, neither does it review them against criteria such as correctness or author's credentials.

Nice.
 
  • #76
zonde said:
Can you say what it means, from the perspective of QM, that there is 100% efficient detection? Does it have anything to do with wave-particle duality? Then we can speak about calculations assuming 100% efficient detection.
Regarding the assumption of 100% efficient detection, afaik, when qm and proposed LRHV models of entanglement are compared, this comparison is usually done on the basis of calculations "in the ideal", in order to simplify things and make the comparison more clear.

Anyway, what I was getting at had to do with my current understanding that qm and LRHV (wrt models that are clearly, explicitly local and realistic, ie., Bell-type models) entanglement predictions are necessarily different. And, if so, then if "local realism holds" for quantum entanglement, then it follows that qm doesn't hold for quantum entanglement.

But you noted in post #65 that "QM predictions are tested and they work for inefficient detection case". So, apparently, you're saying that qm holds for quantum entanglement. And what I surmise from this is that you think that there's an LRHV formulation of quantum entanglement that agrees with qm.

This is what I'm not clear about. Are you advocating an LRHV model that's compatible with qm, or are you saying something else?

Let me take a guess. There's an unacceptable disparity between individual detection efficiency and coincidental detection efficiency. What you're saying is that as these converge, the qm and LRHV correlation ranges will converge, and the correlation curves will become more nearly congruent. Is that what you're saying?

zonde said:
... what I mean is that QM predictions are not accurate for statistical ensemble of single photon events.
If by "statistical ensemble of single photon events" you're referring to either an accumulation of individual results or single individual results taken by themselves, then Bell has already shown that qm and local realism are compatible wrt this.

But you said in post #65 that if local realism holds for quantum entanglement "it would mean that QM predictions are incorrect for the limit of single photon." And you seem to be saying in post #66 that you agree with the idea that qm doesn't apply to single particles.

I don't know what that means. Don't we already know that neither qm nor LRHV models can predict the occurrence of single photon detections in optical Bell tests (except when the angular difference of the polarizers is either 0 or 90 degrees and the result at one end is known)? For all other settings, the probability of individual detection is always 1/2 at both ends, ie., the results accumulate randomly.
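(For reference, and assuming the usual idealized polarization-entangled state, e.g. |Φ+> = (|HH> + |VV>)/√2, the textbook QM predictions are uniform singles and a cos² coincidence rate:
[tex]P(+|a)=P(+|b)=\frac{1}{2}, \qquad P(+,+|a,b)=\frac{1}{2}\cos^2(a-b)[/tex]
so the singles are 50/50 at every setting, while all of the angular dependence sits in the coincidences.)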

I don't know if I'm parsing correctly what you're saying. Hopefully this post will enable further clarification. And thanks for the link to your paper. Maybe it will clear things up for me.
 
  • #77
ThomasT said:
Regarding the assumption of 100% efficient detection, afaik, when qm and proposed LRHV models of entanglement are compared, this comparison is usually done on the basis of calculations "in the ideal", in order to simplify things and make the comparison more clear.

Anyway, what I was getting at had to do with my current understanding that qm and LRHV (wrt models that are clearly, explicitly local and realistic, ie., Bell-type models) entanglement predictions are necessarily different. And, if so, then if "local realism holds" for quantum entanglement, then it follows that qm doesn't hold for quantum entanglement.

But you noted in post #65 that "QM predictions are tested and they work for inefficient detection case". So, apparently, you're saying that qm holds for quantum entanglement. And what I surmise from this is that you think that there's an LRHV formulation of quantum entanglement that agrees with qm.

This is what I'm not clear about. Are you advocating an LRHV model that's compatible with qm, or are you saying something else?

Let me take a guess. There's an unacceptable disparity between individual detection efficiency and coincidental detection efficiency. What you're saying is that as these converge, the qm and LRHV correlation ranges will converge, and the correlation curves will become more nearly congruent. Is that what you're saying?
I am not sure what you mean by "unacceptable disparity between individual detection efficiency and coincidental detection efficiency".

So let me say that I am advocating a QM interpretation that is compatible with local realism.

QM is rather flexible about its predictions. For example, take the double-slit experiment. Let's say we perform the double-slit experiment and do not observe any interference pattern. We can say that the light was not coherent (basically, that means absence of interference) or that the two photon paths were distinguishable (another term that means absence of interference).
So QM still holds even if we do not observe any interference.

In a similar fashion we can claim (if we want) that QM still holds even if quantum entanglement correlations are reduced to classical correlations in the case of efficient detection. Here, by classical correlations I mean a product of probabilities, not Bell's model.

ThomasT said:
If by "statistical ensemble of single photon events" you're referring to either an accumulation of individual results or single individual results taken by themselves, then Bell has already shown that qm and local realism are compatible wrt this.

But you said in post #65 that if local realism holds for quantum entanglement "it would mean that QM predictions are incorrect for the limit of single photon." And you seem to be saying in post #66 that you agree with the idea that qm doesn't apply to single particles.

I don't know what that means. Don't we already know that neither qm nor LRHV models can predict the occurrence of single photon detections in optical Bell tests (except when the angular difference of the polarizers is either 0 or 90 degrees and the result at one end is known)? For all other settings, the probability of individual detection is always 1/2 at both ends, ie., the results accumulate randomly.
I am saying that there is a difference between the statistical "sum" of 1000 experiments with a single photon and a single experiment with 1000 photons. In the first case you can't get interference, but in the second case you can.
 
  • #78
zonde said:
So let me say that I am advocating a QM interpretation that is compatible with local realism.

Wow! This is getting better and better!

Pleeease, what is the name of this "QM interpretation" that refutes the predictions of QM!? :bugeye:

zonde said:
QM is rather flexible about its predictions.

Really?? :eek: That's not what I've heard... I have always gotten the impression that QM is one of the most accurate physical theories constructed thus far?? Quod Erat Demonstrandum... within ten parts in a billion (10^-8).

zonde said:
I am saying that there is a difference between the statistical "sum" of 1000 experiments with a single photon and a single experiment with 1000 photons. In the first case you can't get interference, but in the second case you can.

This must be the gem of everything you've claimed so far! I'm stunned!

I think you're in for a Noble (sur)Prize if you can show that throwing ONE DIE 1000 times gives a different outcome compared to throwing 1000 DICE ONE TIME!
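As a quick sanity check of the dice analogy (a minimal sketch of my own, assuming the rolls are statistically independent, which is exactly the property zonde is disputing for photons):
[code]
import numpy as np

rng = np.random.default_rng(1)

# "statistical sum": 1000 repetitions of a one-die experiment
one_die_repeated = rng.integers(1, 7, size=1000)

# "physical ensemble": a single experiment throwing 1000 dice at once
thousand_dice_once = rng.integers(1, 7, size=1000)

print(np.bincount(one_die_repeated, minlength=7)[1:])    # face counts, case 1
print(np.bincount(thousand_dice_once, minlength=7)[1:])  # face counts, case 2
[/code]
For independent rolls the two histograms agree up to statistical fluctuations; any claimed difference between the two ensembles would have to come from the events not being independent.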
 
  • #79
zonde said:
I am not sure what you mean by "unacceptable disparity between individual detection efficiency and coincidental detection efficiency".
In post #59 you quoted this from Ultra-bright source of polarization-entangled photons,
After passing through adjustable irises, the light was collected using 35mm-focal length doublet lenses, and directed onto single-photon detectors — silicon avalanche photodiodes (EG&G #SPCM's), with efficiencies of ~65% and dark count rates of order 100 s^-1
and this,
The collection irises for this data were both only 1.76 mm in diameter – the resulting collection efficiency (the probability of collecting one photon conditioned on collecting the other) is then ~ 10%.
and then said,
zonde said:
So while detector efficiency is around 65%, the coincidence rate was only around 10%. And it is this coincidence rate that is important if we want to speak about closing the detection loophole.
I supposed that you were saying that the gap between detector efficiency and coincidence rate is due to the efficiency of the coincidence mechanism of the experimental setup, and that as the efficiency of coincidence counting increases, and therefore as detector efficiency and coincidence rate converge, the predicted (and recorded) qm and LRHV correlation ranges will converge -- thus making the qm and LRHV correlation curves more nearly congruent.

Which would be in line with your statement that,
zonde said:
... let me say that I am advocating a QM interpretation that is compatible with local realism.

zonde said:
... we can claim (if we want) that QM still holds even if quantum entanglement correlations are reduced to classical correlations in the case of efficient detection. Here, by classical correlations I mean a product of probabilities, not Bell's model.
Ok, so it seems that what you're saying is that, given maximally efficient detection (both wrt individual and coincidental counts), quantum entanglement correlations will be the same as classical correlations. Is that what you're saying?

If so, then can't one just assume maximally efficient detection and calculate whether qm and classical models of entanglement give the same correlation coefficient?

zonde said:
I am saying that there is a difference between the statistical "sum" of 1000 experiments with a single photon and a single experiment with 1000 photons.
I don't understand exactly what you mean by this, and how it pertains to what seems to be your contention that quantum entanglement should "reduce to" classical correlations given maximal coincidental detection efficiency. So if you could elaborate, then that would help.

Is this paper, Quantum entanglement and interference from classical statistics, relevant to what you're saying? (The author, C. Wetterich, also has other papers at arxiv.org that might pertain.)

And now I'm actually going to read your paper. I just wanted to hash out what you're saying in a minimally technical way first, because I would think that you would want to be able to eventually clearly explain your position to minimally educated, but interested/fascinated laypersons such as myself. :smile:
 
  • #80
ThomasT said:
I supposed that you were saying that the gap between detector efficiency and coincidence rate is due to the efficiency of the coincidence mechanism of the experimental setup, and that as the efficiency of coincidence counting increases, and therefore as detector efficiency and coincidence rate converge, the predicted (and recorded) qm and LRHV correlation ranges will converge -- thus making the qm and LRHV correlation curves more nearly congruent.
I am certainly not saying that the gap between detector efficiency and coincidence rate is due to the efficiency of the coincidence mechanism.
Actually, I was not saying anything about the reasons for that difference. What I said is that we need a high coincidence rate relative to single-photon detections to avoid the need for the fair sampling assumption (to close the detection loophole).

The reason for that gap, roughly, is that a lot of the photons hitting the two detectors are not paired up. So we have photon losses that are not symmetrical, and if they happen after the polarization analyzer they are subject to the fair sampling assumption. And judging by the scheme of the experiment, this is exactly the case for the Kwiat experiment: there, apertures and interference filters are placed after the polarization analyzer.


ThomasT said:
Ok, so it seems that what you're saying is that, given maximally efficient detection (both wrt individual and coincidental counts), quantum entanglement correlations will be the same as classical correlations. Is that what you're saying?

If so, then can't one just assume maximally efficient detection and calculate whether qm and classical models of entanglement give the same correlation coefficient?
I say that in the case of efficient detection the correlations of polarization-entangled photons approach this rule:
[tex]P=\frac{1}{2}\left(\cos^2(a)\cos^2(b)+\sin^2(a)\sin^2(b)\right)[/tex]
and it is classical.
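(For comparison, a worked check I am adding here, assuming the standard QM coincidence rule [tex]P_{QM}=\frac{1}{2}\cos^2(a-b)[/tex] for the ideal entangled state: at a = 22.5°, b = -22.5° the QM rule gives 0.25, while the rule above gives [tex]\frac{1}{2}\left(\cos^4(22.5^\circ)+\sin^4(22.5^\circ)\right)\approx 0.375[/tex]; at a = 0°, b = 45° both give 0.25. So the rule above is not rotationally invariant and differs from the QM prediction at generic settings.)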

ThomasT said:
I don't understand exactly what you mean by this, and how it pertains to what seems to be your contention that quantum entanglement should "reduce to" classical correlations given maximal coincidental detection efficiency. So if you could elaborate, then that would help.
By this I mean that you can observe photon interference only if you observe many photons. You detect some photons, and the effect of interference is that you detect more photons on average when there is constructive interference (many photons of the ensemble interact with the experimental equipment in a way that makes more photons detectable) and fewer photons when there is destructive interference.
The connection with quantum entanglement is that the measurement basis of polarization-entangled photons determines whether you are measuring photon polarization (H/V basis) or measuring interference between photons of different polarizations (+45/-45 basis), i.e. the mechanism behind the correlations changes as you rotate the polarization analyzer.
If interference disappears (efficient detection), then the correlations are maximal in the H/V basis and zero in the +45/-45 basis.

ThomasT said:
Is this paper, Quantum entanglement and interference from classical statistics, relevant to what you're saying? (The author, C. Wetterich, also has other papers at arxiv.org that might pertain.)
I do not see that it is relevant. My judgment is simple: you can speak about a viable classical model for quantum entanglement only if you make use of unfair sampling. I didn't see anything like that in this paper, so I say it must be faulty.
 
