The Unfair Sampling Assumption & Bell Tests

In summary: Two conflicting requirements emerge for the detection-bias function. 1. The Bias() function CANNOT depend on observation; the observed correlations would not be affected by whether or not someone is looking. 2. The Bias() function CAN depend on observation; the observed correlations would be affected by whether or not someone is looking. From these 2 conflicting statements, it seems that the Unfair Sampling Assumption must be true in order for Local Realism to remain viable. However, this assumption is still not satisfying, since it provides no logical explanation for why the Detection Loophole exists.
  • #1
DrChinese
The Fair Sampling Assumption, as commonly discussed, is implicit in Bell tests: as long as detection efficiency is less than 100%, the observed photon pairs are assumed to be a representative sample of the total universe of emitted pairs. At this time, no Bell test has satisfactorily closed both the Fair Sampling and Strict Locality loopholes (some call them the Detection and Locality loopholes) at the same time, even though each has been closed individually.

From Towards a loophole-free test of Bell's inequality with entangled pairs of neutral atoms (2009)

"Experimental tests of Bell's inequality allow to distinguish quantum mechanics from local hidden variable theories. Such tests are performed by measuring correlations of two entangled particles (e.g. polarization of photons or spins of atoms). In order to constitute conclusive evidence, two conditions have to be satisfied. First, strict separation of the measurement events in the sense of special relativity is required ("locality loophole"). Second, almost all entangled pairs have to be detected (for particles in a maximally entangled state the required detector efficiency is 82.8%), which is hard to achieve experimentally ("detection loophole"). "

I was interested in discussing what I call the Unfair Sampling Assumption: If the Fair Sampling Assumption and the Locality Assumptions/Loopholes are still open as a pair, then Local Realism (LR) is still viable. I question this assumption! Take a look at this graph:

[Attached graph: Bell.UnfairSamplingAssumption1.jpg, showing LR(Theta) in blue, QM(Theta) in red, the Bias in green, and the combined cases [2]+[7] in purple]


In the above:

a. We have percentages for a hypothetical Local Realistic (Hidden Variable) Theory, labeled as LR(Theta), showing what it predicts the true correlation percentage is for a pair of entangled PDC Type I photons where theta is the angle between the settings (for spacelike separated Alice and Bob). In the graph, 1=100% and 0=0% per convention.

There is no specific theory this is supposed to mimic except that it is assumed to be local and realistic. If it is local, then what happens at Alice cannot influence what happens at Bob and vice versa. If it is realistic, then it is assumed that even unobserved polarization settings would have well-defined values independent of the act of observation ("the moon is there even when no one is looking...").

The LR(Theta) line, in blue, is a straight line ranging from 1 at 0 degrees to 0 at 90 degrees. This matches the values that an LR would need to come closest to the predictions of QM, shown in Red. Other LR theories might posit different functions, but if they are out there then they will lead to even greater differences as compared to QM. Keep in mind that the QM predicted values match experiment closely.

b. The difference between LR and QM is the detection bias which must exist, which causes us to get an unfair sample (assuming the Unfair Sampling Assumption is true). That is the green line, and you can see that it is positive in one range and negative in another. This is interesting because it means that sometimes it is the LR-supporting pairs which are not detected, and other times it is the QM-supporting pairs that are not detected!
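To make the sign change concrete, here is a minimal sketch (not from the thread; it assumes the straight-line model LR(theta) = 1 - theta/90 from the blue line and the Type I PDC match probability QM(theta) = cos^2(theta) from the red line) that tabulates the bias at a few angles:

```python
import math

def lr(theta_deg):
    """Straight-line LR match probability (blue line): 1 at 0 deg, 0 at 90 deg."""
    return 1.0 - theta_deg / 90.0

def qm(theta_deg):
    """QM match probability for Type I PDC pairs (red line): cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

# Bias = QM - LR (green line): positive below 45 degrees, zero at 45, negative above.
for theta in (15, 30, 45, 60, 75):
    print(f"theta={theta:5.1f}  bias={qm(theta) - lr(theta):+.4f}")
```

The bias comes out to about +0.10 at 15 degrees, 0 at 45, and -0.10 at 75, reproducing the positive-then-negative shape of the green line.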

c. Finally, there is the Purple line, which goes through 0 at all points (it is a little hard to see on the graph). This is interesting because it is the combined value of 2 special cases out of the 8 possible permutations you get when there are 3 settings (2 of which can actually be observed, i.e. Alice's and Bob's, and a third which is hypothesized by the local realistic theory).


Case Predicted likelihood of occurrence
[1] A+ B+ C+ >=0
[2] A+ B+ C- >=0
[3] A+ B- C+ >=0
[4] A+ B- C- >=0
[5] A- B+ C+ >=0
[6] A- B+ C- >=0
[7] A- B- C+ >=0
[8] A- B- C- >=0

Assuming that we have something like this: A=0 degrees, B=45 degrees, C is between A and B.

It turns out that 2 of the above cases are suppressed: permutations [2] and [7]. The reason is that you cannot have matches at both 0 and 45 degrees, yet a mismatch (non-correlation) at an angle in between. The purple line is calculated from the LR(Theta) values (the math is a little complicated and is on a spreadsheet I created). So:

IF you assume that LR has function values closest to QM/experiment, such that you don't run afoul of Bell (since Bell still applies), THEN you get the straight line shown in
BLUE and you also get the values for cases [2] and [7] as being 0 across the board (they need to be non-negative to prove that a Bell inequality is not violated).
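For anyone who wants to reproduce the purple line without the spreadsheet, here is a minimal sketch. It assumes the straight-line model LR(theta) = 1 - theta/90 and uses the X + Y - Z combination quoted later in this thread (which equals 2 x ([2] + [7])); the function names here are illustrative, not DrChinese's:

```python
def lr(x_deg):
    """Straight-line LR match probability: 1 at 0 deg, 0 at 90 deg."""
    return 1.0 - abs(x_deg) / 90.0

def suppressed_cases(theta):
    """X + Y - Z = 2*([2] + [7]) for settings A=0, B=45, C=theta (degrees)."""
    if theta <= 45:
        return lr(45) + (1 - lr(theta)) - lr(45 - theta)
    return lr(theta) + (1 - lr(45)) - lr(theta - 45)

# The straight-line LR sits exactly on the Bell boundary: 0 at every angle.
for theta in range(0, 91, 15):
    assert abs(suppressed_cases(theta)) < 1e-12
```

Replacing lr() with the QM cosine-squared prediction drives this quantity negative (see the sketch in post #10 below).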
 
  • #2
Now, here is my question: How can the Unfair Sampling Assumption be viable? In other words, how can Local Realism be viable even considering the Detection Loophole?

By looking at the graph, there are 2 important and conflicting elements for the Realistic view:

1. The Bias() function CANNOT depend on Theta, but it does! If it depends on Theta, then that violates the strict separation of Alice and Bob, which has already been verified! In other words, how can the bias function know to vary as it does unless Alice and Bob are communicating somehow? But that violates locality, which is one of the assumptions of the local realistic view. The Bias function should be constant (proportionally) for any specified detection setup.

2. The Bias function is positive in some places, negative in others. So that means that the detected sample has pairs missing that would give support to LR in some cases, and in other cases it is those that support QM that are missing. You would expect that the bias would ALWAYS be either positive or negative!

I say that there are NO LR(theta) values which have the characteristic that the Bias function is constant for all Theta. This can easily be seen because both QM and LR must, by definition, have the same value (1) at 0 degrees. That is the QM predicted value, which we already knew. But any local realistic theory must also predict 1 at 0 degrees: that is the essence of the hidden variable hypothesis, and it is necessary so that you have perfect correlations at all matching angles for Alice and Bob.

If you have the same prediction for LR() and QM() at 0 degrees, there is no difference. If the Bias function is independent of theta, then that difference must be the same (zero) at all angles. Yet that would violate Bell. QED.
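Written out as one chain (using Bias(theta) = QM(theta) - LR(theta), as in the graph):

LR(0) = QM(0) = 1, so Bias(0) = 0.
Bias independent of theta implies Bias(theta) = Bias(0) = 0 for all theta.
Bias(theta) = 0 everywhere implies LR(theta) = QM(theta) everywhere, and a function equal to QM violates Bell.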

Keep in mind that the above does NOT depend on assuming QM is correct; rather, it merely reflects that the QM predicted values match experiments to date.
 
  • #3
DrChinese said:
... IF you assume that LR has function values closest to QM/experiment, such that you don't run afoul of Bell (since Bell still applies), THEN you get the straight line shown in BLUE and you also get the values for cases [2] and [7] as being 0 across the board (they need to be non-negative to prove that a Bell inequality is not violated).

How can that be? How can you have a mismatch at the angle between 0 and 45 degrees? In other words, how can one entangled photon go thru the 22.5 degree polarizer while the other does not go thru its 22.5 degree polarizer?
Am I interpreting this correctly? Is this a property of the unfair sampling assumption that you're referring to? That one photon goes thru while the other does not due to equipment inaccuracies?
 
  • #4
Neo_Anderson said:
How can that be? How can you have a mismatch at the angle between 0 and 45 degrees? In other words, how can one entangled photon go thru the 22.5 degree polarizer while the other does not go thru its 22.5 degree polarizer?
Am I interpreting this correctly?

The mismatch is between the 0 degree polarizer and the 22.5 degree polarizer, not between 2 polarizers both at 22.5 degrees.

Imagine:

Alice=0 degrees and observes a 1.
Bob=45 degrees and observes a 1.
If there had been a Chris at 22.5 degrees (assumed to exist in LR), then Chris must observe either a 1 or a 0. The 1 case occurs, but the 0 case is suppressed. You would probably expect a few 0's but there is no room for those in the graph. The absence of Chris getting a 0 is NOT a proof against LR in itself.
 
  • #5
I still can't figure out why we can make semiconductors such as modern CPUs with very low inaccuracy rates during their course of computation, and in general produce all kinds of experimental apparatus that achieve near-100% accuracy, yet still not create an experiment proving Bell inequality violations that rivals the accuracy of just about every other experimental apparatus out there.
Aspect's experiment is accurate to better than five standard deviations. The GHZ experiment has a level of accuracy better than eight standard deviations. Yet they still can do no better than 82.8% detector efficiency. Why?
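(For reference, the 82.8% figure quoted in post #1 is not an engineering limit but a threshold derived from the inequality itself: for maximally entangled states the commonly cited minimum detector efficiency is

eta >= 2/(1 + sqrt(2)) = 2(sqrt(2) - 1) ≈ 0.8284,

below which the undetected pairs leave enough room for a local realistic model to reproduce the observed correlations.)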
 
  • #6
DrChinese said:
The mismatch is between the 0 degree polarizer and the 22.5 degree polarizer, not between 2 polarizers both at 22.5 degrees.

Imagine:

Alice=0 degrees and observes a 1.
Bob=45 degrees and observes a 1.
If there had been a Chris at 22.5 degrees (assumed to exist in LR), then Chris must observe either a 1 or a 0. The 1 case occurs, but the 0 case is suppressed. You would probably expect a few 0's but there is no room for those in the graph. The absence of Chris getting a 0 is NOT a proof against LR in itself.

Now this is something new to me: the concept of suppression. What causes suppression, exactly? What is it? You said, "It turns out that 2 of the above cases are suppressed..." :confused:
Forgive my ignorance. Heck, as of last weekend, I didn't even know what a Bell inequality was...
 
  • #7
Neo_Anderson said:
Now this is something new to me: the concept of suppression. What causes suppression, exactly? What is it? You said, "It turns out that 2 of the above cases are suppressed..." :confused:
Forgive my ignorance. Heck, as of last weekend, I didn't even know what a Bell inequality was...

It is interesting, and actually I have never seen any discussion of it at all. Yet it is there; you have to look at the math on my Negative Probabilities page to see it (it is too complicated to repeat here).

http://drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm

See parts d. and f., which I quote briefly:

Why do we pick these particular combinations to define X, Y and Z? Because (X + Y - Z)/2 is the same as the probability of our 2 suppressed cases, [2] and [7] from c. above. We can now see that:

(X + Y - Z) / 2

= (([1] + [2] + [7] + [8]) + ([2] + [4] + [5] + [7]) - ([1] + [4] + [5] + [8])) / 2

Now simplify by eliminating offsetting terms:

= ([2] + [7] + [2] + [7]) / 2

= [2] + [7]

Which means that, if c. above is true, we summarize:

[2] + [7] = (X + Y - Z) / 2 >= 0 (per the Realistic side)
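A quick numeric check of that cancellation (a throwaway sketch; the eight probabilities are arbitrary placeholders, not predictions of any theory):

```python
import random

# p[0] through p[7] stand for cases [1] through [8] in the table above.
p = [random.random() for _ in range(8)]

X = p[0] + p[1] + p[6] + p[7]   # [1] + [2] + [7] + [8]
Y = p[1] + p[3] + p[4] + p[6]   # [2] + [4] + [5] + [7]
Z = p[0] + p[3] + p[4] + p[7]   # [1] + [4] + [5] + [8]

# The offsetting terms cancel, leaving exactly [2] + [7].
assert abs((X + Y - Z) / 2 - (p[1] + p[6])) < 1e-12
```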
 
  • #8
Dr. Chinese, I know you want others to chime in on this thread; and I'm not trying to derail the thread either, but an interesting aspect of the GHZ experiment I found on your website here:
http://www.drchinese.com/David/Bell-MultiPhotonGHZ.pdf
states that:
Thus from a local realistic point of view the only possible results for an xxx experiment are VVV, HVV and HVH. How do these predictions of local realism for an xxx experiment compare with those of quantum physics? If we express the state given in (16.1) in terms of H/V polarization using (16.2), we obtain [an equation that has terms that are exactly opposite of those in local realism] (16.5)
We conclude that the local realistic model predicts none of the terms occurring in the quantum prediction and vice versa. This implies that, whenever local realism predicts a specific result definitely to occur for a measurement on one of the photons based on the results for the other two, quantum physics definitely predicts the opposite result.
Why is it that for the xxx run specifically, QM is exactly the opposite of local realism? I think the fact that LR and QM are exactly opposite is significant. A partial deviation between the two I could ignore. But exactly the opposite? Don't you think this warrants discussion?
In Aspect's experiment, QM was accurate to 85% or so; hidden variables to 75%. A partial deviation.
 
  • #9
DrChinese said:
The LR(Theta) line, in blue, is a straight line ranging from 1 at 0 degrees to 0 at 90 degrees. This matches the values that an LR would need to come closest to the predictions of QM, shown in Red. Other LR theories might posit different functions, but if they are out there then they will lead to even greater differences as compared to QM. Keep in mind that the QM predicted values match experiment closely.
It is true that this particular LR function comes closest to the prediction of QM, but it has been chewed over so much that I would suggest trying a different function (and different hidden variables).

Usually the hidden variable is taken to determine which PBS output the photon will appear in (what polarization the photon has). But consider a hidden variable that does not determine the PBS output (polarization) but rather "says" that if the photon is in output #1 it prefers to be detected, and if it is in output #2 it prefers not to be detected. If the measurement apparatus perfectly "heeds" what the photon prefers, then we have almost the same hidden variable as in the traditional case, except that half of the photons are never detected. But there is additional room for maneuver, in that the measurement apparatus can "heed" the photon's "detection preference" only partly, and differently for different levels of that preference.

One particular feature of such a hidden variable is that at a theoretical 100% detection efficiency the graph would not be that sloped line but rather a horizontal line. It could be the sloped line at less than 100% detection efficiency with perfect "heed" of the photon's preference for detection.
 
  • #10
zonde said:
It is true that this particular LR function comes closest to the prediction of QM, but it has been chewed over so much that I would suggest trying a different function (and different hidden variables).

Usually the hidden variable is taken to determine which PBS output the photon will appear in (what polarization the photon has). But consider a hidden variable that does not determine the PBS output (polarization) but rather "says" that if the photon is in output #1 it prefers to be detected, and if it is in output #2 it prefers not to be detected. If the measurement apparatus perfectly "heeds" what the photon prefers, then we have almost the same hidden variable as in the traditional case, except that half of the photons are never detected. But there is additional room for maneuver, in that the measurement apparatus can "heed" the photon's "detection preference" only partly, and differently for different levels of that preference.

One particular feature of such a hidden variable is that at a theoretical 100% detection efficiency the graph would not be that sloped line but rather a horizontal line. It could be the sloped line at less than 100% detection efficiency with perfect "heed" of the photon's preference for detection.

One of the issues is that a candidate LR MUST say what the "true" graph is. There are 2 constraints as I see them:

1. You MUST have LR(0)=1 as this is an explicit assumption of the candidate theory (otherwise you wouldn't be revealing a pre-existing condition common to both Alice and Bob).

2. Further, you CANNOT violate Bell's Inequality. Therefore, there can be no "negative" probabilities.

What you can have, however, are other values that satisfy the above besides the straight sloping blue line. But what are they? I say there aren't any in which the Bias function is independent of Theta! And you discover very quickly, when you try to plug in another candidate function, that you run afoul of Bell. You need the following formula to yield a non-negative value at all points in the range (based on the X+Y-Z formula on my page, ignoring a factor of 2):

Where theta is between 0 and 45 degrees:
BellInequality(theta) = LR(45-0) + (1-LR(theta-0)) - LR(theta-45)

Where theta is between 45 and 90 degrees:
BellInequality(theta) = LR(theta-0) + (1-LR(45-0)) - LR(theta-45)

In case this was not clear before: the above are the formulas for calculating whether Bell's Inequality is violated for 3 angle settings, A=0, B=45, and C=theta. The formula determines the expected values for combined cases [2] and [7], which must in turn be >=0 to be realistic.
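Plugging the QM prediction into those same formulas shows the violation directly. A minimal sketch (assuming QM(theta) = cos^2(theta), as elsewhere in the thread):

```python
import math

def qm(x_deg):
    """QM match probability at relative angle x (degrees): cos^2(x)."""
    return math.cos(math.radians(x_deg)) ** 2

def bell_inequality(theta, match):
    """X + Y - Z for A=0, B=45, C=theta; must be >= 0 for a realistic model."""
    if theta <= 45:
        return match(45) + (1 - match(theta)) - match(45 - theta)
    return match(theta) + (1 - match(45)) - match(theta - 45)

for theta in (10, 22.5, 30, 60, 67.5, 80):
    print(f"theta={theta:5.1f}  X+Y-Z={bell_inequality(theta, qm):+.4f}")
# Negative at every listed angle, dipping to about -0.207 at 22.5 and
# 67.5 degrees. The straight-line LR, by contrast, returns 0 throughout.
```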

I realize that there could exist another attribute which determines whether or not an entangled photon pair will be detected, in the hypothetical LR situation. However, that does not change the fact that the other observables must be real at all times (as that is the realistic hypothesis!) and so Bell applies.
 
  • #11
DrChinese said:
One of the issues is that a candidate LR MUST say what the "true" graph is. There are 2 constraints as I see them:

1. You MUST have LR(0)=1 as this is an explicit assumption of the candidate theory (otherwise you wouldn't be revealing a pre-existing condition common to both Alice and Bob).
I do not agree with that.
With LR(0)=1 at a theoretical SP=100%, you imply a hidden variable that unequivocally determines the polarization observable.
That is indeed local realism as proposed by Einstein, but it does not cover all possible models of local realism.
QM says that the photons are entangled as superpositions of polarization, not as polarization observables. If we heed that statement, we should ascribe the local hidden variable to the superposition itself, not to the observable.
So from this perspective I say LR(0)=0.5 at a theoretical SP=100%.
 
  • #12
zonde said:
QM says that the photons are entangled as superpositions of polarization, not as polarization observables. If we heed that statement, we should ascribe the local hidden variable to the superposition itself, not to the observable.
So from this perspective I say LR(0)=0.5 at a theoretical SP=100%.

I don't follow any of this. Can you walk me through it from a local realistic perspective? It doesn't seem there should be a superposition in that view. It should be like Bertlmann's socks, which is classical. There is an EPR element of reality, because the result of the second observation can be predicted with 100% certainty.
 
  • #13
DrChinese said:
I don't follow any of this. Can you walk me through it from a local realistic perspective? It doesn't seem there should be a superposition in that view. It should be like Bertlmann's socks, which is classical. There is an EPR element of reality, because the result of the second observation can be predicted with 100% certainty.
Did you get what I had in mind here:
Usually the hidden variable is taken to determine which PBS output the photon will appear in (what polarization the photon has). But consider a hidden variable that does not determine the PBS output (polarization) but rather "says" that if the photon is in output #1 it prefers to be detected, and if it is in output #2 it prefers not to be detected.

Now we have two photons sharing the same hidden variable. The photons encounter the PBSes and end up in one of the outputs, but the hidden variable is still there. Assuming the PBSes have a relative angle of 0 between them with respect to a common reference, the hidden variable is altered but has the same value for both photons if they are in the same outputs of their respective PBSes, and opposite values if the photons are in different outputs. When the photons encounter the detectors, the hidden variable is again altered, in probabilistic fashion, by the detector's hidden variable, and it now determines detection or non-detection of the photon.

If the photon is not detected, in a sense it can be viewed as having the opposite polarization. Because of that, the photon can be viewed as having an indefinite polarization (a superposition) even after the PBS.

I do not know if this is what you were asking for, but I do not think that realism means that something can be predicted with 100% certainty. There are always some unaccounted (and practically unaccountable) factors that can alter predictions; for example, in QM there are ground-level fluctuations of the field.
 

FAQ: The Unfair Sampling Assumption & Bell Tests

What is the Unfair Sampling Assumption?

As used in this thread, the Unfair Sampling Assumption is the idea that the photon pairs actually detected in a Bell test are NOT a representative sample of all emitted pairs: a detection bias skews the observed correlations in just the right way for a local realistic theory to reproduce the quantum predictions. It is the negation of the Fair Sampling Assumption.

How does the Unfair Sampling Assumption affect Bell tests?

As long as detector efficiency is below the required threshold (82.8% for maximally entangled states), undetected pairs leave room, in principle, for a local realistic model to mimic the quantum correlations. This is the Detection Loophole, and unfair sampling is what such a model must invoke to survive the observed Bell violations.

What constraints must a local realistic candidate satisfy?

It must predict perfect correlations at matching angles (LR(0) = 1), and its predicted probabilities must not violate a Bell inequality (no negative values for any of the eight outcome cases). The required detection bias is then the difference between the LR prediction and the QM prediction that matches experiment.

Why does the thread argue that unfair sampling cannot rescue local realism?

Two reasons: the required bias function would have to depend on the angle Theta between the two distant settings, which conflicts with strict locality, and it changes sign, so sometimes LR-supporting pairs go undetected and sometimes QM-supporting pairs do. Moreover, since LR and QM must agree at Theta = 0, a Theta-independent bias would have to be zero everywhere, making LR identical to QM and hence in violation of Bell.

Have the loopholes been closed experimentally?

As of this thread (2009), the detection and locality loopholes had each been closed in separate experiments, but no single Bell test had closed both at the same time.
