Arguments in favour of the fair sampling assumption

In summary, the thread discusses whether completely classical hidden variable theories can match the coincidence rates predicted by quantum mechanics. The derivations of the Bell-type inequalities rely on a "no enhancement" (or "fair sampling") assumption, and the question is why that assumption should be true. A paper is mentioned whose model seems to reproduce the quantum predictions, but the replies point out that any such model must either be non-local or be experimentally falsifiable. The article is published in Physics Letters A, and its author is known for performing experiments to close loopholes in entanglement experiments.
  • #1
gespex
Hi all,

I'm no expert in quantum mechanics by any means, but I've been quite interested in, and done quite some research on, Bell's theorem and related inequalities such as CH and CHSH. The derivations all look perfectly sound, except that they all rely on a "no enhancement assumption", some even in the form of an even stronger "fair sampling assumption".
Now it seems quite doable to formulate a theory that contains regular, classical hidden variables and matches CH and CHSH in all correlations (i.e. coincidence rates between detected photons with the polarizers either set at specific angles or removed). Simply by assuming there exist hidden variables that determine the chance that a particle is detected, it seems possible to match the coincidence rates that quantum mechanics also predicts.

I have two questions regarding this:
1. Do such hidden variable theories exist, that are completely classical and match the coincidence rates as predicted by QM?
2. Regardless of whether such a theory exists, why is this "no enhancement assumption" assumed to be true? What are the arguments in favour of it?


Thanks in advance,
Gespex
 
  • #2
I don't like bumping - but isn't this a fair question? I've found a paper that seems to correspond with the QM predictions here:
http://arxiv.org/pdf/quant-ph/9905018v1.pdf

I know it's "only" arxiv, but the maths look sound...
 
  • #3
gespex said:
Now it seems quite doable to formulate a theory that contains regular, classical hidden variables and matches CH and CHSH in all correlations ... Simply by assuming there exist hidden variables that determine the chance that a particle is detected [at various angles]

It's harder than it sounds, because what matters is the relative angle between the two detectors. The hidden variable theory would have to produce the same result with A at zero degrees and B at 120 degrees as with A at 10 degrees and B at 130 degrees, and the same result with A and B both at zero degrees as with both at 10 degrees. And if the theory is to be local, the probability of detection at A must be written as a function of just the hidden variables and the setting of A; that is, it must be the same function in all the cases in which A is set to ten degrees or zero degrees or any other value.
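To make that constraint concrete, here is a minimal sketch (in Python, purely illustrative and not a model from the paper) of the shape a local simulation is forced into: each wing is a function of the shared hidden variable and its own setting only. The choice of a hidden polarisation angle and the toy detection rule are arbitrary assumptions for the sake of the example.

```python
import math
import random

def source():
    """One emission: a hidden polarisation angle shared by both photons
    (an arbitrary illustrative choice of hidden variable)."""
    return random.uniform(0.0, math.pi)

def wing_A(lam, a):
    """Alice's station: may look only at the hidden variable lam and her
    own setting a. Returns +1/-1, or None for 'no detection'."""
    rel = lam - a
    if random.random() >= abs(math.cos(2 * rel)):  # toy alignment-dependent detection
        return None
    return +1 if math.cos(2 * rel) >= 0 else -1

def wing_B(lam, b):
    """Bob's station: same restriction, only lam and b are visible here."""
    return +1 if math.cos(2 * (lam - b)) >= 0 else -1
```

The point is purely structural: wing_A never receives b and wing_B never receives a, so any dependence on the relative angle has to emerge from averaging over lam. Whether such a model can match the quantum coincidence rates is exactly what the rest of the thread is about.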
 
  • #4
Nugatory said:
It's harder than it sounds, because what matters is the relative angle between the two detectors. The hidden variable theory would have to produce the same result with A at zero degrees and B at 120 degrees as with A at 10 degrees and B at 130 degrees, and the same result with A and B both at zero degrees as with both at 10 degrees. And if the theory is to be local, the probability of detection at A must be written as a function of just the hidden variables and the setting of A; that is, it must be the same function in all the cases in which A is set to ten degrees or zero degrees or any other value.

Thanks for your reply!
Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to present such a local hidden variable theory, which completely agrees with QM's predictions.

So why is it still assumed that there is some sort of collapse, rather than that very model being true?
 
  • #5
While you're waiting for a response to your question, I'd just like to point out that a forum search for "Gisin" (use the "Search this forum" link at the top right of the thread list) turns up a number of hits, some of which probably discuss this paper.
 
  • #6
gespex said:
Thanks for your reply!
Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to present such a local hidden variable theory, which completely agrees with QM's predictions.

So why is it still assumed that there is some sort of collapse, rather than that very model being true?

You must understand that if you are to reproduce the QM results, you need something somewhere to be sensing the global context or otherwise be leaving some kind of trace. If it is a global context, then it is not local. If there is some other trace, then it is experimentally falsifiable.

In this case, the point is that the likelihood of detection depends on the relationship between the hidden angle and the angle being measured. Were that the case, the total intensity of light emerging from a polarizing beam splitter would vary with the input angle. That doesn't happen, ergo the model is falsified before you start.

Following his Eq. (2):

"when a happens to be close to λ_A then the probability that an outcome is produced is larger than when a happens to be nearly orthogonal to λ_A."

There is always something like this hanging around (and of course it is not a prediction of QM). Has to be, because of Bell!
 
  • #7
gespex said:
I know it's "only" arxiv, but the maths look sound...
[...]
Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to present such a local hidden variable theory, which completely agrees with QM's predictions.

So why is it still assumed that there is some sort of collapse, rather than that very model being true?

Some comments. First, it is not just arXiv: it has been published in Physics Letters A, Volume 260, Issue 5, 20 September 1999, Pages 323–327 (http://dx.doi.org/10.1016/S0375-9601(99)00519-8).

Second, I do not know what you read into the article, but Gisin is an opponent of local hidden variable theories. As this article dates back to 1999, one should rather read it as estimating what detector quantum efficiencies would be needed to close the detection efficiency loophole in entanglement experiments. That is rather clever, actually: having pointed out the importance of these loopholes, he went on to perform experiments to close them and published those in high-impact journals.
 
  • #8
Thanks for your answers, guys! Really appreciated, but I have a few questions left.

@DrChinese:
But isn't λ_A completely random per particle? Yes, if it is close to the measured angle the particle has a greater chance of being detected (at one side, not at the other). However, the angle between λ_A and a is completely random, so over a large number of particles the chance of an outcome being produced is the same for every analyser angle: 50% for half of the entangled particles and 100% for the other half, so 75% on average. That's completely independent of the actual detected angle, right?
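As a numerical sanity check of that averaging argument (the detection rule below is an arbitrary illustrative choice, not the paper's model): for any rule that depends only on the angle between the hidden angle and the analyser, a uniformly random hidden angle gives the same average singles rate for every analyser setting. Note this says nothing about the polarised-input case DrChinese raises, where the hidden angle would not be uniformly distributed.

```python
import math
import random

def detect_prob(lam, a):
    # Any rule depending only on the relative angle between the hidden
    # angle lam and the analyser angle a; this choice is arbitrary.
    return abs(math.cos(lam - a))

def singles_rate(a, trials=200_000):
    """Average detection rate at analyser angle a, with the hidden
    angle lam drawn uniformly from [0, pi)."""
    hits = sum(random.random() < detect_prob(random.uniform(0.0, math.pi), a)
               for _ in range(trials))
    return hits / trials

for a_deg in (0, 10, 45, 90):
    print(a_deg, round(singles_rate(math.radians(a_deg)), 3))
# Every setting gives the same rate (about 2/pi ~ 0.64 for this rule):
# a uniformly random hidden angle washes out any dependence of the
# *average* detection rate on the analyser angle.
```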

I actually wrote a program to simulate a similar model. I made a separate function for each part of the experiment, to prevent some small bug from letting one detector accidentally influence the other particle or detector. It reproduced (within very reasonable margins) the same outcomes as CHSH and CH74, and as this paper, and the number of particles detected was constant for every measured angle.
(Unfortunately, I no longer have this program, but it seems to follow logically from the paper anyway; a sketch of the kind of thing I mean is below.)
Or am I missing something here?
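Since the original program is gone, here is a minimal sketch of the kind of simulation described, assuming one specific toy construction (my choice for illustration, not necessarily the exact model of the linked paper): a hidden unit vector λ uniform on the sphere, Alice answering sign(a·λ) but registering a detection only with probability |a·λ|, and Bob always answering −sign(b·λ). Each function sees only its own setting, Alice's average detection rate is the same (about 50%) for every a, and among coincidences the correlation comes out as the singlet value −a·b.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Hidden variables: unit vectors distributed uniformly on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def alice(lam, a):
    """Outcome sign(a.lam); detected only with probability |a.lam|."""
    s = lam @ a
    detected = rng.random(len(lam)) < np.abs(s)
    return np.sign(s), detected

def bob(lam, b):
    """Outcome -sign(b.lam); always detected."""
    return -np.sign(lam @ b), np.ones(len(lam), dtype=bool)

def correlation(a, b, n=1_000_000):
    lam = random_unit_vectors(n)
    out_a, det_a = alice(lam, a)
    out_b, det_b = bob(lam, b)
    coinc = det_a & det_b
    return np.mean(out_a[coinc] * out_b[coinc]), np.mean(det_a)

theta = np.radians(60)
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])
E, alice_rate = correlation(a, b)
print(E, -np.cos(theta), alice_rate)  # E ~ -cos(theta); Alice fires ~50% of the time
```

The price is visible in the last printed number: Alice's side only fires about half the time, which is precisely the kind of efficiency gap the detection-loophole discussion below turns on.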

@Cthugha:
So have people done the experiment with a detector efficiency of >75% yet? And also, how can we be so certain that this missed detection isn't inherent to detecting it at all? In other words, how can we measure the real detector efficiency for particles that simply *can't* be measured? We can't put in another detector, as it will suffer from the same flaws, right? Can we tell some other way?
The only way I can think of to be certain is by knowing exactly how many particles should be measured. As it may be inherently impossible to measure this number, to tell for sure we must know exactly how many particles will be emitted by some source. But how can we know this for sure, if we have no way to actually test it? We may simply be wrong there...
Or is there some other method here?

Again, thanks for your replies, guys, I really appreciate it! I hope I'm not being ignorant here; as I said, I'm not an expert, just a programmer with some interest in QM.
 
  • #9
gespex said:
So have people done the experiment with a detector efficiency of >75% yet?

Sure, this has been done as early as 2001 by using ions instead of photons in David Wineland's group (Nature 409, 791-794 (15 February 2001)). For ions the detection efficiency is essentially 100%.

gespex said:
And also, how can we be so certain that this missed detection isn't inherent to detecting it at all? In other words, how can we measure the real detector efficiency for particles that simply *can't* be measured?

Well, if it cannot be detected at all, it does not exist. If you instead consider an unfair-sampling scenario, you can rule that out by having efficient detectors: at high efficiencies, standard entanglement and unfair-sampling models make different, testable predictions (as in the paper mentioned above).

gespex said:
We can't put in another detector, as it will suffer from the same flaws, right? Can we tell some other way?

Well, you obviously can know the total output within some time interval by placing many detectors or using non-linear detectors.

gespex said:
The only way I can think of to be certain is by knowing exactly how many particles should be measured. As it may be inherently impossible to measure this number, to tell for sure we must know exactly how many particles will be emitted by some source.

I see no problem with measuring the total output power and the statistics of the light field in question.

gespex said:
But how can we know this for sure, if we have no way to actually test it? We may simply be wrong there...
Or is there some other method here?

You can also calibrate your detector and determine its quantum efficiency. This can be done using down-converted light or by subjecting light to a non-linearity (such as second harmonic generation). You can also perform detector tomography (Nature Physics 5, 27-30 (2009)) to fully characterize your detector.
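For concreteness, the core of the down-conversion calibration mentioned above (often called the Klyshko method) is just a ratio: every photon registered at detector B has a partner heading towards detector A, so A's efficiency is roughly the coincidence rate divided by B's singles rate. A rough sketch under idealised assumptions (negligible background, optical losses lumped into the detector efficiency); the count numbers are hypothetical.

```python
def klyshko_efficiency(coincidences, singles_b, accidentals=0.0):
    """Estimate detector A's quantum efficiency from photon-pair counts:
    every photon counted at B has a partner sent towards A, so
    eta_A ~ (true coincidences) / (singles at B), with all counts taken
    over the same time interval."""
    return (coincidences - accidentals) / singles_b

# Hypothetical count numbers, for illustration only:
print(klyshko_efficiency(coincidences=12_000, singles_b=100_000))  # ~0.12
```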
 

FAQ: Arguments in favour of the fair sampling assumption

1. What is the fair sampling assumption and why does it matter in Bell tests?

In Bell-type experiments not every emitted pair is detected. The fair sampling assumption says that the detected pairs are a representative, unbiased sample of all emitted pairs, so that the correlations measured on the detected subset reflect those of the whole ensemble. Without it, a Bell-inequality violation observed at low detection efficiency does not by itself rule out local hidden variable models.

2. How do experimenters deal with the assumption?

By raising the overall detection efficiency above the threshold at which detection-loophole models can no longer mimic the quantum predictions, by calibrating detector efficiencies (for example with down-converted photon pairs or detector tomography), and by using systems such as trapped ions, where detection is essentially 100% efficient.

3. Are there limitations to the fair sampling assumption?

Yes. It is an auxiliary assumption, not a consequence of quantum mechanics or of locality, and local hidden variable models are known that reproduce the quantum correlations on the detected subset when efficiencies are low enough. This is exactly the detection (efficiency) loophole discussed in this thread.

4. Can the assumption be dropped altogether?

Yes, but only in experiments with sufficiently high detection efficiency, where no fair sampling assumption is needed. Ion-based experiments closed the detection loophole as early as 2001.

5. What are the consequences of relying on fair sampling when it is not justified?

The observed violation then only constrains local hidden variable models that also sample fairly; models that exploit detection-dependent hidden variables, like the one discussed in this thread, remain logically compatible with that particular experiment.
