Weihs' data: what ad hoc explanations do local and non-local models give?

In summary, the conversation discusses local and non-local models in relation to experiments attempting to disprove "local realism". The participants also mention loopholes, such as detector efficiency and "noise", that prevent a definitive proof. Weihs' experiment is discussed specifically, and it is noted that the data set can be obtained from the author. Analyses of Weihs' data are presented, along with related papers, covering acausal filtering and the use of a global offset in the data analysis. The availability of the "bluescan" data set is also mentioned.
  • #1
harrylin
Weih's data: what "ad hoc" explanations do local and non-local models give?

From the thread on Nick Herbert's proof:

harrylin:
The more I become aware of the tricky details and the impressive experimental attempts to disprove "local realism", the more I am impressed by - to borrow some of Lugita's phrasing - the equally overwhelming survival of Einstein locality, though without definitive proof [if that were possible] due to various loopholes like detector efficiency and "noise".

Lugita15:
But the thing is that pretty much every one of the loopholes has been closed separately; we just haven't managed to close all of them simultaneously in one experiment. So the local determinist is left having to come up with an explanation of the form "In THIS kind of experiment, the predictions of QM only appear to be right because of THIS loophole, but in THAT kind of experiment, the predictions of QM only appear to be right because of THAT loophole."

harrylin:
[..] What we are concerned with is realistic measurements, and contrary to what I think you suggest, I have not seen evidence of the necessity for more ad hoc "local" explanations of measurement results than for "non-local" explanations. [..]

Lugita15 next brought up ion experiments while I was thinking of Weihs' experiment, and now Zonde has brought that up, complete with a real data set. So now Weihs' experiment is the topic of this thread, but we should also start one on ion experiments - and hopefully also with real data!

zonde said:
The coincidence time loophole is likewise about imperfect matching of pairs. If, instead of talking about the "detection loophole", you talked about violation of the "fair sampling assumption", then it would cover the coincidence time loophole just as well.

On the practical side, for this coincidence time loophole you would predict some relevant detections outside the coincidence time window. And that can be tested with the Weihs et al. data.
I got one dataset from the Weihs experiment (some time ago one was publicly available), loaded it into a MySQL database and then experimented with different queries for quite some time. I found, first, that as you increase the coincidence time window (beyond a certain value of a few ns) the correlations diminish at the level you would expect from random detections, and second, that detection times do not correlate beyond some small time interval. Deviations within that small time interval are explained as [...] jitter. [..]
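For concreteness, the window scan described in the quote above can be sketched in a few lines of Python. This is only a rough sketch: the array names and the simple nearest-neighbour matching are illustrative assumptions, not the actual pairing algorithm used by Weihs et al. or in the SQL queries mentioned above.

Code:
import numpy as np

def pair_events(t_a, t_b, window_ns):
    """Match each Alice time tag to the nearest Bob time tag and keep the
    pair if the time difference is within the coincidence window (ns)."""
    idx = np.searchsorted(t_b, t_a)                  # candidate neighbours in sorted t_b
    idx = np.clip(idx, 1, len(t_b) - 1)
    best = np.where(np.abs(t_b[idx - 1] - t_a) < np.abs(t_b[idx] - t_a), idx - 1, idx)
    keep = np.abs(t_b[best] - t_a) <= window_ns
    return np.nonzero(keep)[0], best[keep]           # indices into Alice's and Bob's records

# t_a, t_b: sorted time tags in ns; out_a, out_b: outcomes coded as +1/-1
# (hypothetical arrays loaded from the data set). Scanning the window shows
# how the correlation falls off as random pairings enter:
# for w in (2, 6, 25, 50, 100, 200):
#     ia, ib = pair_events(t_a, t_b, w)
#     print(w, len(ia), np.mean(out_a[ia] * out_b[ib]))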
zonde said:
[..] this jitter seems much more likely to come from the electronics rather than the detector. [...] you can PM me and I will send you the dataset.

Take a look at this paper:
A Close Look at the EPR Data of Weihs et al
It does essentially the same analysis as the one I made.

And there is another one from the same author:
Explaining Counts from EPRB Experiments: Are They Consistent with Quantum Theory?

If you are interested in comparing that analysis with mine, I have an Excel file left over from my analysis: see attachment
Thank you! :smile: - and the arrow above links to your post with the attachment (that will have to wait until tomorrow or the day after though)
 
  • #2


From "A Close Look at the EPR Data of Weihs et al":
"I include the ranges Low 2, High 2, and High 3 for t B − t A near ±470 ns because I find them mysterious. They contribute too few coincidences to matter for determining whether a Bell inequality is violated."

I have some comments about these mystery peaks, but I will post them a bit later. The gist of it: they are related to incorrect handling of double detections in the electronics.
 
  • #5


Many sources refer to the so-called 'bluescan' data set. Any idea if it is available for download?
 
  • #6


Delta Kilo said:
Many sources refer to the so-called 'bluescan' data set. Any idea if it is available for download?
Yes. Send an e-mail to Gregor Weihs and he'll send you the link. That's how I got it.
 
  • #7


harrylin said:
Also the following data analysis may be useful for this discussion:
Einstein-Podolsky-Rosen-Bohm laboratory experiments: Data analysis and simulation
An interesting aspect of this paper that I've not seen discussed here is this snippet:

The analysis of the data of the experiment of Weihs et al. shows that the average time between pairs of photons is of the order of 30 μs or more, much larger than the typical values (of the order of a few nanoseconds) of the time-window W used in the experiments [8]. In other words, in practice, the identification of photon pairs does not require the use of W's of the order of a few nanoseconds in which case the use of a time-coincidence window does not create a "loophole". A narrow time window mainly acts as a filter that selects pairs, the photons of which experienced time delays that differ by the order of nanoseconds.
The use of a global offset ∆G, determined by maximizing the number of coincidences, introduces an element of non-causality in the analysis of the experimental results (but not in the original data itself): Whether or not at a certain time, a pair contributes to the number of coincidences depends on all the coincidences, also on those at later times. This is an example of an acausal filtering [55]: The output (coincidence or not) depends on both all previous and all later inputs (the detection events and corresponding time-tags). Therefore, data analysis of EPRB experiments that employs a global offset ∆G to maximize the number of coincidences, is intrinsically acausal: The notion of coincidences happening inside or outside the light cone becomes irrelevant.
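The "acausal" character of the global offset is easy to see operationally: the offset ∆G that maximizes the number of coincidences can only be chosen once the complete record of both sides is available. A minimal sketch of that selection step (the offset range and window value below are illustrative, not the values used in the papers):

Code:
import numpy as np

def coincidences(t_a, t_b, offset_ns, window_ns):
    """Count Alice events that have a Bob event within the window after
    shifting Bob's time tags by a trial global offset."""
    tb = np.sort(t_b + offset_ns)
    idx = np.clip(np.searchsorted(tb, t_a), 1, len(tb) - 1)
    d = np.minimum(np.abs(tb[idx - 1] - t_a), np.abs(tb[idx] - t_a))
    return int(np.sum(d <= window_ns))

def best_global_offset(t_a, t_b, window_ns=6, offsets_ns=range(-500, 501)):
    """Pick the offset giving the most coincidences; note that the choice
    depends on the whole run, on earlier and later events alike."""
    counts = [coincidences(t_a, t_b, o, window_ns) for o in offsets_ns]
    return list(offsets_ns)[int(np.argmax(counts))]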
 
  • #8


billschnieder said:
Another analysis of Weihs' data:

Is the Fair Sampling Assumption supported by EPR Experiments?
http://arxiv.org/abs/quant-ph/0606122

I think these analyses of the Innsbruck data are pretty cool. Peter Morgan has done some work in this area too, although he has not published anything at this point (that I have seen at least). He found some anomalies in the data as he was working with time windows.

However, in all of these you have some issues with finding patterns post facto. As you know, it is not unusual to find variations after the fact if you cherry-pick what you use. Most of the analyses have shown little to date (mostly supporting the Weihs et al. conclusion). Even this reference comes to some rather unusual conclusions. They say they witness signalling from Alice to Bob. That is assuming I read this correctly: "Bob's marginal probabilities clearly vary with Alice's setting, in violation of the non signalling principle." That would be quite a find.
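For readers who want to check that claim against the data themselves, the test is simple to state: compute Bob's outcome probabilities separately for each of Alice's settings and see whether they agree within statistics. A rough sketch, assuming per-coincidence records of (Alice setting, Bob setting, Bob outcome), which is an assumed data layout rather than the actual file format:

Code:
from collections import Counter

def bob_marginals(pairs):
    """pairs: iterable of (a_setting, b_setting, b_outcome) with outcome +/-1.
    Returns P(Bob = +1 | a_setting, b_setting); under no-signalling (and fair
    sampling of the coincidences) this should not depend on a_setting."""
    plus, total = Counter(), Counter()
    for a, b, outcome in pairs:
        total[(a, b)] += 1
        if outcome == +1:
            plus[(a, b)] += 1
    return {key: plus[key] / total[key] for key in total}

# Compare, e.g., the value at (a=0, b=0) with the one at (a=1, b=0): a systematic
# difference is what Adenier & Khrennikov interpret as an apparent violation.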
 
  • #9


DrChinese said:
Even this reference comes to some rather unusual conclusions. They say they witness signalling from Alice to Bob. That is assuming I read this correctly: "Bob's marginal probabilities clearly vary with Alice's setting, in violation of the non signalling principle." That would be quite a find.
I understood their conclusion differently than you. In their own words:

In other words, we do not believe that our investigation supports the idea of faster than light signalling, although this possibility cannot be logically excluded. Once the Fair Sampling assumption is rejected, there is no evidence of violation of the no-signalling principle, and therefore no evidence whatsoever of a possible superluminal communication.
 
  • #10


billschnieder said:
I understood their conclusion differently than you. In their own words:...

Yes, it was a little confusing to me too. Either there is a violation of the no-signalling principle or not.

I mean, either Bob is seeing something different on his own or he isn't. If he's not, and you have to compare data from both sides first, where is the violation of no-signalling?
 
  • #11


billschnieder said:
Another analysis of Weihs' data:

Is the Fair Sampling Assumption supported by EPR Experiments?
http://arxiv.org/abs/quant-ph/0606122

Bigelow, in "A Close Look at the EPR Data of Weihs et al", pp. 9-12 under NO-SIGNALING VIOLATIONS, analyzes Adenier and Khrennikov's claims.

"I demonstrate that Adenier & Khrennikov were mistaken; the coincidence counts
identified in either window—I’ll illustrate for the wide window—can be modeled as a
fair sample drawn from a set of sixteen coincidence counts that obey the no-signaling conditions."
...
"In light of the above results it may occur to the reader to wonder how Adenier &
Khrennikov could conclude that the fair sampling assumption “cannot be maintained to
be a reasonable assumption as it leads to an apparent violation of the no-signaling
principle.” In their normalization procedure, Adenier & Khrennikov assumed ..."
 
  • #12


OK, I haven't yet gotten around to looking at the data themselves, but I think that we could start with discussing precise and understandable issues that have been raised by others.

For example, I'm interested in comparing the explanations that are given for the results with a time window of 200 ns, as plotted in fig. 2 of the paper by De Raedt et al - http://arxiv.org/abs/1112.2629 .

Apparently they can explain it with a somewhat ad hoc local realistic model, and I would like to compare that with what QM predicts for that case. But regrettably, it appears that different explanations have been given, which also sounds ad hoc to me. Does anyone know of a serious, numerical calculation or simulation attempt in defence of QM?
 
  • #13


harrylin said:
For example, I'm interested in comparing the explanations that are given for the results with a time window of 200 ns, as plotted in fig. 2 of the paper by De Raedt et al - http://arxiv.org/abs/1112.2629 .

Apparently they can explain it with a somewhat ad hoc local realistic model, and I would like to compare that with what QM predicts for that case. But regrettably, it appears that different explanations have been given, which also sounds ad hoc to me. Does anyone know of a serious, numerical calculation or simulation attempt in defence of QM?

You don't need to defend QM. Nothing is happening that is an issue. The question is how to define the universe of entangled pairs. QM only predicts that the entangled state statistics will come from polarization entangled pairs only. Pairs that are not entangled produce product state statistics. Since we know that the Weihs data worked as expected per QM when the window was 6ns (i.e. a Bell inequality was violated), all is good.

About the 200 ns coincidence window: considering that the original was 6 ns, obviously the statistics change as we get further from that, and expanding to 200 ns is a lot. What is an objective definition of an entangled pair? Who decides that? If we use a smaller window, we are progressively more certain we actually have such a pair. As long as there is no particular bias when that happens, what difference is there? So you would need to be asserting something like: "The true universe of entangled pairs includes pairs outside the 6 ns window which, if they were included, would not violate a Bell inequality." But you would also need to explain why the ones within the 6 ns window DID! The obvious explanation being: those outside the window are no longer polarization entangled, and you wouldn't expect them to produce entangled-state statistics.
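For reference, the quantity behind "a Bell inequality was violated" here is the CHSH combination computed from the post-selected coincidence counts. A minimal sketch, assuming the four counts per setting pair have already been tabulated from the paired events:

Code:
def correlation_E(n_pp, n_pm, n_mp, n_mm):
    """Correlation E(a,b) from the four coincidence counts at one setting pair."""
    return (n_pp + n_mm - n_pm - n_mp) / (n_pp + n_pm + n_mp + n_mm)

def chsh_S(E_ab, E_abp, E_apb, E_apbp):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b');
    |S| <= 2 for local realistic models under the fair-sampling assumption."""
    return E_ab - E_abp + E_apb + E_apbp

# At the 6 ns window Weihs et al. obtain |S| well above 2; the question in this
# thread is how the post-selected |S| behaves as the window is widened.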
 
  • #14


DrChinese said:
You don't need to defend QM. Nothing is happening that is an issue. The question is how to define the universe of entangled pairs. QM only predicts that the entangled state statistics will come from polarization entangled pairs only. Pairs that are not entangled produce product state statistics. Since we know that the Weihs data worked as expected per QM when the window was 6ns (i.e. a Bell inequality was violated), all is good.
We are looking for a possible disproof of one of the models; that comment is putting things on their head. :-p
About the 200 ns coincidence window: [..] What is an objective definition of an entangled pair? Who decides that? [..] The obvious explanation being: those outside the window are no longer polarization entangled, and you wouldn't expect them to produce entangled-state statistics.
Thus you suggest that the different observation is mainly due to non-entangled photon pairs. I have also heard the incompatible explanation that the different observation is mainly due to noise. Assuming that QM is a predictive theory and not ad hoc, someone skilled in QM should be able to give numerical predictions of both observables if enough apparatus details are known.
 
  • #15


harrylin said:
Thus you suggest that the different observation is mainly due to non-entangled photon pairs. I have also heard the incompatible explanation that the different observation is mainly due to noise.

There is no "noise" other than photons. Filters keep out those that are not coming from the PDC crystal. What you don't know is what might cause decoherence of the entangled state. If that happens mid flight, it might be due to something that delays a photon. If so, it is not entangled when it arrives AND it is late. So no issue there.

As I mention, we need an entangled source. Certainly, the 6ns window source is a pretty good entangled stream. We cannot make that statement once the window gets too large, however large that is.

You will have to assert that the 200 ns source is ALSO entangled before we can use it for anything. How do you do that? You check for perfect correlations. Once you get a good set of entangled pairs, you can do your Bell test. (That is what Weihs et al did with the smaller window.)

So it is meaningless to analyze the data this way (post facto) without FIRST having checked the source for being in an entangled state. Where is this accomplished? I haven't seen this. Unless Weihs agrees the larger window is suitable (obviously he didn't), you can't really criticize his conclusion.
 
  • #16


harrylin said:
For example, I'm interested in comparing the explanations that are given for the results with a time window of 200 ns, as plotted in fig. 2 of the paper by De Raedt et al - http://arxiv.org/abs/1112.2629 .

Apparently they can explain it with a somewhat ad hoc local realistic model, and I would like to compare that with what QM predicts for that case. But regrettably, it appears that different explanations have been given, which also sounds ad hoc to me. Does anyone know of a serious, numerical calculation or simulation attempt in defence of QM?
Can you list those different QM explanations? Maybe this list can be made shorter.

And I am not sure that there can be a QM prediction for a 200 ns coincidence window, because QM gives a prediction for the idealized setup where the coincidence window is 0 ns. The reasoning about how we go from the idealized 0 ns case to the 6, 25 or 200 ns case, and how much noise this adds, is supposedly classical, i.e. there are imperfections in the measurement equipment like jitter, dark counts, delays in electronics, clock drift, events discarded during the polarization setting change window, and inefficient detection. The question about unpaired photons prior to detection should be within the QM domain, but I don't know how it can be handled.
 
  • #17


DrChinese said:
There is no "noise" other than photons. [..] As I mention, we need an entangled source. Certainly, the 6ns window source is a pretty good entangled stream. We cannot make that statement once the window gets too large, however large that is. [..] So it is meaningless to analyze the data this way (post facto) without FIRST having checked the source for being in an entangled state. Where is this accomplished? I haven't seen this. Unless Weihs agrees the larger window is suitable (obviously he didn't), you can't really criticize his conclusion.
I did not even consider his conclusion, which was based on a different data analysis. But indeed, we need to consider the assertions that were made about the experimental set-up, and then look at what can be put to the test with the existing data.
 
  • #18


zonde said:
Can you list those different QM explanations? Maybe this list can be made shorter.
One person asserted that it was noise. Another asserted that it was non-entangled photons. I don't know whether each of these explanations:
- can produce the observed results
- can be made non ad hoc by means of post-diction, based on the equipment used.

And I'm not sure whether detector inefficiency was also proposed as a possible "non-local" explanation.
And I am not sure that there can be a QM prediction for a 200 ns coincidence window, because QM gives a prediction for the idealized setup where the coincidence window is 0 ns.
Are you sure that here you are not confusing a specific Bell test prediction with the capability of a presumably complete theory of physics to predict a statistical result?
The reasoning about how we go from the idealized 0 ns case to the 6, 25 or 200 ns case, and how much noise this adds, is supposedly classical, i.e. there are imperfections in the measurement equipment like jitter, dark counts, delays in electronics, clock drift, events discarded during the polarization setting change window, and inefficient detection. The question about unpaired photons prior to detection should be within the QM domain, but I don't know how it can be handled.
Exactly - what we expect of physical theories is that they enable us to predict the results of measurements under realistic conditions. For example, one doesn't get away with claiming that Newton's mechanics is perfectly correct, and then, when measurement results deviate somewhat from the prediction for the "ideal case", make the ad hoc excuse that the results are surely due to clock drift, jitter, thermal expansion and so on; these must be accounted for by means of plausible estimations.
 
  • #19


harrylin said:
One person asserted that it was noise. Another asserted that it was non-entangled photons. I don't know whether each of these explanations:
- can produce the observed results
- can be made non ad hoc by means of post-diction, based on the equipment used.
About noise: we have two different sources of noise - stray photons that do not come from our photon source, and dark counts in the detector.
About dark counts Weihs et al wrote in their paper http://arxiv.org/abs/quant-ph/9810080:
"The photons were detected by silicon avalanche pho-
todiodes with dark count rates (noise) of a few hundred
per second. This is very small compared to the 10.000 –
15.000 signal counts per second per detector."

There is nothing said about stray photons, so I suppose that a careful experimentalist can keep them out well enough that they can be ignored.

So I say that noise can be excluded from the list.

harrylin said:
Are you sure that here you are not confusing a specific Bell test prediction with the capability of a presumably complete theory of physics to predict a statistical result?
I am fairly sure that I am not confusing anything.

The QM prediction is not concerned with the technical difficulties of identifying which photons belong to the same pair.

harrylin said:
Exactly - what we expect of physical theories is that they enable us to predict the results of measurements under realistic conditions. For example, one doesn't get away with claiming that Newton's mechanics is perfectly correct, and then, when measurement results deviate somewhat from the prediction for the "ideal case", make the ad hoc excuse that the results are surely due to clock drift, jitter, thermal expansion and so on; these must be accounted for by means of plausible estimations.
No, we expect from theories results for idealized setups.
We approach reality perturbatively. So the theory provides a mathematically simple baseline, and then we add correction terms using some other theory or simply using empirical data and the experience of experts.

So yes, we make excuses. But what we expect from a valid test of the theory is that all excuses are made prior to the experiment.
 
  • #20


zonde said:
About noise: we have two different sources of noise - stray photons that do not come from our photon source, and dark counts in the detector.
About dark counts Weihs et al wrote in their paper http://arxiv.org/abs/quant-ph/9810080:
"The photons were detected by silicon avalanche pho-
todiodes with dark count rates (noise) of a few hundred
per second. This is very small compared to the 10.000 –
15.000 signal counts per second per detector."

There is nothing said about stray photons, so I suppose that a careful experimentalist can keep them out well enough that they can be ignored.

So I say that noise can be excluded from the list.
Thanks for the link to their paper and the clarification! :smile:
[..] No, we expect from theories results for idealized setups.
That can't be right - but surely this is a simple misunderstanding about words. For as I stressed, and as you confirmed with your clarifications above, good experimentalists discuss the consequences of the deviations from the intended ideal and give quantitative predictions that account for these effects, as far as they are known. Not doing so would make experiments useless, as I also illustrated with my earlier example.
[..] the theory provides a mathematically simple baseline, and then we add correction terms using some other theory or simply using empirical data and the experience of experts.
I had the impression that the theory concerning essential points that we are interested in - issues like dark counts and non-entangled photons - happens to be in the field of QM. In particular such things as instrument noise can be predicted from QM.
So yes, we make excuses. But what we expect from a valid test of the theory is that all excuses are made prior to the experiment.
That's a fair but unrealistic expectation, as many experiments have shown: often some crucial detail was overlooked and only discovered afterwards. A more realistic expectation is that no ad hoc explanations are provided after the fact; what matters is explanations that are backed up by experiments or existing theory, such as the one by Weihs that you provided above.
 
  • #21


harrylin said:
That can't be right - but surely this is a simple misunderstanding about words. For as I stressed, and as you confirmed with your clarifications above, good experimentalists discuss the consequences of the deviations from the intended ideal and give quantitative predictions that account for these effects, as far as they are known. Not doing so would make experiments useless, as I also illustrated with my earlier example.
This misunderstanding is not about whether deviations are addressed, but about how they are addressed. I am saying that a big part is played by empirical knowledge that is not part of the theory.

harrylin said:
I had the impression that the theory concerning essential points that we are interested in - issues like dark counts and non-entangled photons - happens to be in the field of QM. In particular such things as instrument noise can be predicted from QM.
Dark counts are simply measured for a particular detector when it is manufactured. You put it in the dark and measure how often it gives a click without any photon hitting it. There are no QM predictions involved. There is simply not much demand for a rigorous theoretical treatment (there are explanations, of course).

harrylin said:
That's a fair but unrealistic expectation, as many experiments have shown: often some crucial detail was overlooked and only discovered afterwards. A more realistic expectation is that no ad hoc explanations are provided after the fact; what matters is explanations that are backed up by experiments or existing theory, such as the one by Weihs that you provided above.
If you overlook some crucial detail, then you patch your theory (add additional assumptions) and repeat the experiment taking that detail into account.

But I would like to stay on topic with that question of yours:
harrylin said:
For example, I'm interested in comparing the explanations that are given for the results with a time window of 200 ns, as plotted in fig. 2 of the paper by De Raedt et al - http://arxiv.org/abs/1112.2629 .
I suppose that might seem obvious to some and not so obvious to others - increasing the coincidence window accommodates experimental imperfections (jitter) but introduces additional errors (pairing up photons that are not from the same pair). So it is a balance between how many false negatives you throw out and how many false positives you take in.

It is not QM but rather the "science" of experiments.
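One way to put a number on the false-positive side of that balance is the standard accidental-coincidence estimate: for uncorrelated singles rates R_A and R_B and a window W, the rate of random pairings is roughly R_A·R_B·W. A back-of-envelope sketch using the singles rates quoted from the Weihs et al. paper earlier in the thread (12,000/s is just a representative value in their 10,000-15,000 range):

Code:
def accidental_rate(r_a_hz, r_b_hz, window_s):
    """Rough rate of random, unrelated pairings: R_A * R_B * W."""
    return r_a_hz * r_b_hz * window_s

for w_ns in (6, 25, 200):
    rate = accidental_rate(12_000, 12_000, w_ns * 1e-9)
    print(f"W = {w_ns:3d} ns  ->  about {rate:5.1f} accidental pairs per second")

# Roughly 0.9/s at 6 ns versus about 29/s at 200 ns: widening the window mostly
# adds random pairs, which dilute the measured correlations.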
 
  • #22


zonde said:
This misunderstanding is not about whether deviations are addressed, but about how they are addressed. I am saying that a big part is played by empirical knowledge that is not part of the theory. Dark counts are simply measured for a particular detector when it is manufactured. [..]
I think that we already agreed that explanations based on empirical knowledge are fine of course; and I doubt that anyone would disagree. Sorry if our agreement on that point wasn't clear!
If you overlook some crucial detail, then you patch your theory (add additional assumptions) and repeat the experiment taking that detail into account. [...] It is [..] rather the "science" of experiments.
Generally that is only done if repeating the experiment can add information. More commonly, an auxiliary experiment is done to provide the missing information, if possible. But as you say, here we risk drifting into another topic, related to the scientific method, and so that's not for this thread. By chance I started a thread on that a few days ago, in which we can discuss it further if anyone likes:
https://www.physicsforums.com/showthread.php?t=598724
Note that that topic is deemed not to be a topic for a physics forum but for the Social Sciences forum!
[..] I suppose that might seem obvious to some and not so obvious to others - increasing the coincidence window accommodates experimental imperfections (jitter) but introduces additional errors (pairing up photons that are not from the same pair). So it is a balance between how many false negatives you throw out and how many false positives you take in.
[..]
Indeed. Weihs did not (if I didn't overlook it) account for the possible existence of non-entangled pairs, but he did consider the pairing up of photons that are not from the same pair. And for some reason (perhaps based on the same philosophy as you expressed at the end of post #19), De Raedt simply considers the suggestions of Weihs; and as Weihs gives a precise estimate of this, the effect can be simulated. The reason it hasn't been done is perhaps that the outcome is already obvious. Otherwise we can do it.
 
  • #23


harrylin said:
Indeed. Weihs did not (if I didn't overlook it) account for the possible existence of non-entangled pairs, but he did consider the pairing up of photons that are not from the same pair. And for some reason (perhaps based on the same philosophy as you expressed at the end of post #19), De Raedt simply considers the suggestions of Weihs; and as Weihs gives a precise estimate of this, the effect can be simulated. The reason it hasn't been done is perhaps that the outcome is already obvious. Otherwise we can do it.
I am not sure I understood precisely what kind of simulation you are talking about, but can you take a look at the section "ESTIMATING THE FALSE-POSITIVE RATE" in Bigelow's http://arxiv.org/abs/0906.5093 and comment on whether it is the thing you are talking about or not?
 
  • #24


zonde said:
I am not sure I understood precisely what kind of simulation you are talking about, but can you take a look at the section "ESTIMATING THE FALSE-POSITIVE RATE" in Bigelow's http://arxiv.org/abs/0906.5093 and comment on whether it is the thing you are talking about or not?

I already looked at that paper, as it was cited by De Raedt. De Raedt and he are looking at a similar aspect that regrettably I don't understand; it's not what I meant. What I meant was the data as plotted in De Raedt's fig. 2; and I was pretty sure that I had seen a simulation attempt by De Raedt with models of his group, some of which could match the data plot rather well. However, I thought that it was included in that paper, but I now see that it isn't... and now I cannot find that other paper... I will keep looking!

So, assuming that he did (or could) match his fig.2, the demand for non-localist models is similar:
1. (your requirement:) when using Weihs' assumptions, if a simulation plot matches the data plot of fig.2 (I guess not, but perhaps we should put it to the test); and
2. (my requirement:) allowing for other plausible explanations in agreement with QM but different from "local" explanations, if one can obtain a simulation that matches that data plot.

For test 1, I thus suggested that one of us could perhaps try to calculate it for just one time window (for example 200 ns), taking miscounts into account.

ADDENDUM: I have now found it; it's in arXiv:0712.3781v2, fig. 10. Interestingly, the match isn't that good; also, their simulations are apparently still too idealistic, as they don't fall below 2 (simulation up to a window of 180 μs).
 
  • #25


harrylin said:
I already looked at that paper, as it was cited by De Raedt. De Raedt and he are looking at a similar aspect that regrettably I don't understand; it's not what I meant. What I meant was the data as plotted in De Raedt's fig. 2; and I was pretty sure that I had seen a simulation attempt by De Raedt with models of his group, some of which could match the data plot rather well. However, I thought that it was included in that paper, but I now see that it isn't... and now I cannot find that other paper... I will keep looking!

So, assuming that he did (or could) match his fig.2, the demand for non-localist models is similar:
1. (your requirement:) when using Weihs' assumptions, if a simulation plot matches the data plot of fig.2 (I guess not, but perhaps we should put it to the test); and
2. (my requirement:) allowing for other plausible explanations in agreement with QM but different from "local" explanations, if one can obtain a simulation that matches that data plot.

For test 1, I thus suggested that one of us could perhaps try to calculate it for just one time window (for example 200 ns), taking miscounts into account.

ADDENDUM: I have now found it; it's in arXiv:0712.3781v2, fig. 10. Interestingly, the match isn't that good; also, their simulations are apparently still too idealistic, as they don't fall below 2 (simulation up to a window of 180 μs).
I now see that they make the following general remark concerning their simulations:

"we consider ideal experiments only, meaning that we assume that detectors operate with 100% efficiency, clocks remain synchronized forever, the “fair sampling” assumption is satisfied, and so on."

What is not clear, however (at least, not clear to me), is whether the false-count effect due to the limited time window is included - that occurs even in such ideal experiments.
 
  • #26


De Raedt's experiment generates many, many outcomes on both sides of the experiment which are not paired with outcomes on the other side. Two events are only considered a pair when they occur close enough together in time. They reject a whole load of events on one side which are not close enough in time to an event on the other side, and vice versa.

In the Weihs et al. experiment, only one in twenty of the events on Alice's side was close enough in time to an event on Bob's side to be called part of a pair. And vice versa. So if we think of pairs of photons going to the two wings of the experiment, some being measured and others not, the experiment we are talking about needs 400 photon pairs to start with. One in 20 photons on Alice's side survives. One in 20 on Bob's side. Only one in 400 pairs gets measured together.

Selecting pairs according to coincidence of measurement times opens up an even bigger loophole than in discrete time experiments. See

arXiv:quant-ph/0312035
Bell's inequality and the coincidence-time loophole
Jan-Ake Larsson, Richard Gill

Abstract:This paper analyzes effects of time-dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence [and hence whether or not a pair contributes to the actual data] is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole."

Europhysics Letters, vol 67, pp. 707-713 (2004)
 
  • #27


gill1109 said:
De Raedt's experiment generates many, many outcomes on both sides of the experiment which are not paired with outcomes on the other side. Two events are only considered a pair when they occur close enough together in time. They reject a whole load of events on one side which are not close enough in time to an event on the other side, and vice versa.

In the Weihs et al. experiment, only one in twenty of the events on Alice's side was close enough in time to an event on Bob's side to be called part of a pair. And vice versa. So if we think of pairs of photons going to the two wings of the experiment, some being measured and others not, the experiment we are talking about needs 400 photon pairs to start with. One in 20 photons on Alice's side survives. One in 20 on Bob's side. Only one in 400 pairs gets measured together.

It appears you are saying here that De Raedt's simulation does indeed faithfully represent the Weihs et al. experiment, and yet reproduces their results in a completely locally causal manner.
 
  • #28


gill1109 said:
De Raedt's experiment generates many, many outcomes on both sides of the experiment which are not paired with outcomes on the other side. Two events are only considered a pair when they occur close enough together in time. They reject a whole load of events on one side which are not close enough in time to an event on the other side, and vice versa.
The discussion here is about the interpretation of Weihs' experimental data. But indeed, correcting for that, as you say: Weihs' experiment generates many outcomes on both sides of the experiment which are not paired (by Weihs) with outcomes on the other side. He only considers two events a pair when they occur close enough together in time. Weihs et al. reject a whole load of events on one side which are not close enough in time to an event on the other side, and vice versa.
I'm afraid de Raedt and his colleagues are rather confused and don't understand these issues. So many things they say are rather misleading. The experiment as a whole has an efficiency and it is measured by the proportion of unpaired events on both sides of the experiment. Big proportion unpaired, low efficiency.
?! "Detector efficiency" refers to the physical detectors, how much of the received light is actually detected and how much light is actually lost. Compare for example:
http://www.safetyoffice.uwaterloo.c...detection/calibration/detector_efficiency.htm
Similarly, overall efficiency refers to the total efficiency of the whole apparatus, that is, how much of the emitted light is actually detected and how much light is actually lost. Selection of detection data by the experimenter is very distinct from efficiency.

Moreover, if you deem De Raedt et al. misleading, what do you think of this: today you claimed to Gordon Watson that in their simulation photons get lost, even suggesting that "Alice's photon [..] chooses to vanish". That certainly sounds as if de Raedt proposes that photons simply disappear, that is, without triggering the detectors (I have now asked Gordon how he understood your explanation).
- https://www.physicsforums.com/showthread.php?t=590249&page=9
In the Weihs et al. experiment, only one in twenty of the events on Alice's side was close enough in time to an event on Bob's side to be called part of a pair.
Wait a moment, that's a central discussion point: Weihs could have decided to count more of the events as close enough in time to be called a pair.
And vice versa. So if we think of pairs of photons going to the two wings of the experiment, some being measured and others not, the experiment we are talking about needs 400 photon pairs to start with. One in 20 photons on Alice's side survives. One in 20 on Bob's side. Only one in 400 pairs gets measured together.
Again: "some being measured and others not" should refer to system efficiency, not to data selection (and this has nothing to with "photons survive" either).
However, you made an interesting remark: I thought that if 20 photons on one side are close enough in time to photons on the other side, those pairs are counted. I did not follow why those 20 pairs are not paired. Can you clarify that?
Selecting pairs according to coincidence of measurement times opens up an even bigger loophole than in discrete time experiments. [..]
Yes, I think that everyone agrees on that. De Raedt et al. showed it by simulation.
And thanks for the link to that journal paper, which, I notice, doesn't confuse efficiency with time coincidence - good! :smile:
 
  • #29


The main problem with the coincidence-time loophole, as I see it, is that you have to claim that the detector efficiency is much higher than the one deduced from the coincidence/singles rate.
But then the detector efficiency can be increased. And there would have to be some serious flaws in the detector efficiency calibration (I am not saying that's impossible, but then there should be at least some vague idea of how this could happen).
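The deduction from the coincidence/singles rate is usually the heralding (Klyshko) estimate: the efficiency on one side is roughly the coincidence rate divided by the singles rate on the other side. A minimal sketch with numbers taken from this thread (roughly 12,000 singles/s per side from the Weihs et al. paper, and about one event in twenty paired, as gill1109 states above):

Code:
def klyshko_efficiency(coincidence_rate_hz, singles_rate_other_side_hz):
    """Heralding estimate: eta_A ~ R_coinc / R_B and eta_B ~ R_coinc / R_A."""
    return coincidence_rate_hz / singles_rate_other_side_hz

# One paired event in twenty at ~12,000 singles/s is ~600 coincidences/s, giving
# an overall efficiency estimate of 600 / 12,000 = 5%, consistent with the 5%
# figure quoted later in the thread.
print(klyshko_efficiency(600, 12_000))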
 
  • #30


As Weihs et al. make the coincidence window larger, the proportion of unpaired observations goes down and the correlations are dampened. As they make it smaller, the number of pairs counted in the statistics gets smaller and the correlations fluctuate more and more wildly.

Imagine two photons who are trying to cheat by pre-arranging what setting they want to see. If Alice's photon sees the wrong setting on the left, it arranges for the measurement process to be faster, so that its registered "measurement time" is earlier than average. The other photon arranges that its measurement time is later than average if it doesn't like the setting which it sees. So if both see the "wrong" setting, the time interval between their arrivals is extended and they are not counted as a pair.

There are more opportunities for "cheating" when you can manipulate your arrival time when pairing is determined by the difference between arrival times, than in a clocked experiment where pairing is determined absolutely. And also, you can arrange that every single photon gets detected so that detector efficiency is 100%, even though many many events are unpaired.

I think that de Raedt et al. are using this trick quite instinctively. I have told them about it many times (and others have too) but they continue to publish papers with no reference whatever to the immense literature on these issues.
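A toy harness for exploring this mechanism numerically (purely illustrative; it is not De Raedt's actual model): outcomes are a deterministic local function of a shared hidden angle and the local setting, the registered detection time is shifted by an amount that also depends only on those two local quantities, and pairs are then selected by a coincidence window. How strongly the post-selected CHSH value is biased depends on the delay function and the window width, which is exactly the knob such models turn.

Code:
import numpy as np

rng = np.random.default_rng(0)

def chsh_with_time_selection(n_pairs, delay, window,
                             settings_a=(0.0, np.pi/4),
                             settings_b=(np.pi/8, 3*np.pi/8)):
    """Local deterministic toy model with setting- and hidden-variable-dependent
    time shifts; coincidences are kept only when |tA - tB| <= window."""
    S = 0.0
    for ia, a in enumerate(settings_a):
        for ib, b in enumerate(settings_b):
            lam = rng.uniform(0.0, 2*np.pi, n_pairs)   # shared hidden variable
            A = np.sign(np.cos(2*(a - lam)))           # Alice's local outcome
            B = -np.sign(np.cos(2*(b - lam)))          # Bob's local outcome
            keep = np.abs(delay(a, lam) - delay(b, lam)) <= window
            E = np.mean(A[keep] * B[keep])
            S += -E if (ia, ib) == (0, 1) else E       # CHSH combination
    return S

# Hypothetical delay function: the registration time shifts more when the local
# setting and the hidden angle are "misaligned". Vary it and the window to see
# how much the coincidence selection biases S.
# print(chsh_with_time_selection(200_000, lambda th, lam: np.abs(np.sin(2*(th - lam))), 0.5))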
 
  • #31


zonde said:
The main problem with the coincidence-time loophole, as I see it, is that you have to claim that the detector efficiency is much higher than the one deduced from the coincidence/singles rate.
But then the detector efficiency can be increased. And there would have to be some serious flaws in the detector efficiency calibration (I am not saying that's impossible, but then there should be at least some vague idea of how this could happen).
If I understand you correctly, you think that it may be possible to - at least partly - compensate for that loophole by clarifying how well or how poorly certain explanations match the facts as we know them. Indeed, that's exactly my aim.

Weihs claimed a detection efficiency of 5%; I don't know how they established that, and I'm not sure how it affects the different explanations. At first sight it will have little effect on De Raedt's model other than to lower the predicted correlations for large coincidence windows, and that is at least qualitatively what their simulations need in order to better match the data. And it is also qualitatively what Weihs' model needs.

Thus I'm afraid that without some serious work on it, we won't be able to say which model would be the better match once that consideration is added. However, you suggest that you have already found a weak point in one of the models; I hope that you will elaborate. :-p

On a side note: I think that particle-based models such as De Raedt's are fundamentally flawed; but that issue is perhaps of no importance for this discussion.
gill1109 said:
As Weihs et al. make the coincidence window larger, the proportion of unpaired observations goes down and the correlations are dampened. As they make it smaller, the number of pairs counted in the statistics gets smaller and the correlations fluctuate more and more wildly.
Exactly - however, as I indicated above, it seems to me that such a qualitative description is insufficient.
Imagine two photons who are trying to cheat by pre-arranging what setting they want to see.
Sorry, I have no reason to imagine such weird explanations - and certainly not on Physics Forums. Here I limit myself to the discussion of hypotheses or models that can be found in the peer-reviewed literature.
[..] de Raedt et al. [..] continue to publish papers with no reference whatever to the immense literature on these issues.
Off-topic as that is, just this: there may be papers that are lacking, but the last paper that I referred to in this thread happens to reference a certain Gill twice on these issues. Please don't treat this thread as a soapbox!
 
  • #32


Harry Lin: when I say "imagine two photons trying to cheat", I am asking you to imagine that you are trying to write a computer program which generates the correlations you are after. It is not meant to be an explanation for what happens in the real world. It is an explanation for how de Raedt and similar researchers are able to engineer violations of CHSH, etc. Strong positive correlations have to be made stronger still, and strong negative correlations have to be made even more strongly negative. How to fix this? By a selection bias. Filter out potential observations which won't contribute to the pattern you want to create. We know this can only be done by some kind of communication between the two wings of the experiment. The trick is to let the observer do this communication, by selecting events to be considered as pairs under conditions which depend on the local hidden variables carried by the two particles.

This is called "rejection sampling" in the simulation literature, "biased sampling" or "selection bias" in the statistical literature.

By the way, I am not claiming that de Raedt and others are purposely trying to trick you. These possibilities have been known and studied for more than 40 years by specialists in the field. I use the word "trick" figuratively.

Richard Gill
 
  • #33


gill1109 said:
[..] It is not meant to be an explanation for what happens in the real world.
Sorry: this thread is about explanations for what happens in the real world, and only such explanations.
 
  • #34


The thread is about what "ad hoc" explanations are given by local and non-local models. If you want to understand why de Raedt's (local) model works, and what limits there are to models of that type, you have to understand how "coincidence selection" opens a loophole, and you have to understand just how wide that loophole is. Once you understand the mechanism they are exploiting, you will realize how easy the task was which they set themselves.

Maybe this does not tell you much about the physics of the real world. No. That's my point. De Raedt's work does not tell us much about the physics of the real world. Their models are just tricks, tricks which are in fact built on ideas which have been known for a long time.

This also tells us what is important for future experiments. It will be a great day when someone does a decent loophole-free experiment. If they are going to use observed detection times and a coincidence window to determine pairs, then the important criterion is the percentage of all the events on one side of the experiment which are successfully paired with an event on the other side. This percentage has to be above a certain threshold.

If timing is determined externally, then the threshold is different (lower).

This also tells us that the important "efficiency" measure is not the efficiency of the detectors, as determined conventionally, but some kind of efficiency of the entire experimental setup.

These things have been known by theoreticians for a long time, but many experimentalists and also the general (physics) public seem quite unaware of them. That gives a nice niche for de Raedt et al. to write paper after paper with results which superficially seem exciting or controversial, but actually are not exciting at all.
 
  • #35


gill1109 said:
The thread is about what "ad hoc" explanations are given by local and non-local models. If you want to understand why de Raedt's (local) model works, and what limits there are to models of that type, you have to understand how "coincidence selection" opens a loophole [..] This also tells us what is important for future experiments. [..]
I suppose that everyone participating in this discussion understands that; so far, the easy part! The challenge here is to compare Weihs' data with the two competing models - which include hypotheses/excuses about the physics of the real world.

1. I'm interested in how the models compare for several coincidence windows; it may be sufficient to use the data as presented by De Raedt in his latest paper.

De Raedt compares QM with another aspect of the data, from which he concludes that QM does not match the observations. Again, it will be interesting to know how successfully that issue can be "explained away", and whether his own model fares better or worse.

Zonde also did a comparison, but regrettably I did not grasp the issue he looked into. I hope that he will elaborate.
 
