Photon entanglement and fair sampling assumption

In summary, discussion of the correctness of the fair sampling assumption in photon entanglement experiments is avoided on the grounds that it lacks credibility. I would like to ask whether there are others who share this view.
  • #36
DrChinese said:
Fine, an example for us to work with. A few questions about the terminology:

1. Do unentangled photons have POL, PH hidden variables too? If so, this provides additional testable constraints. You imply the answer is yes, but I want to be sure.
Yes

DrChinese said:
2. I get POL. But you say that PH (phase) is "distributed" according to the function abs(sin(2theta)). What is theta here?
The same as in Malus's law, I' = I*cos^2(theta). It is the angle between the polarization axis of the photons and the polarization axis of the polarizer.
abs(sin(2*theta)) is just the derivative of cos^2(theta) with the sign dropped.

DrChinese said:
3. Then you say that the probability a photon has a given value of the hidden variables is abs(sin(2*ph)). Can you give an example? The indicated hidden variable is PH and I do not believe this function sums to 100% across a suitable range (as it would need to).
Hmm, not sure how to address this question. Let's try it this way.
If we take out of the sample (on one side) all photons with the same polarization, they have to follow Malus's law, so that when we use "ph" to calculate the "passing" and "absorbing" intervals of the photons, together they should produce the characteristic cos^2(alpha) picture (with alpha being the angle between the polarization axis of the photons and the polarization axis of the polarizer).

About the sum: the integral of abs(sin(2*ph)) over the interval 0 to Pi/2 is 1, so over the interval 0 to 2*Pi it is 4. But isn't the question about the sum being 100% just a matter of normalization? If I give the description of a distribution, I define the integral of that function to be the whole sample.
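
As a quick numerical check of that normalization (a rough Python sketch of my own, nothing more):

import numpy as np

ph = np.linspace(0.0, np.pi / 2, 100001)   # PH over one quarter period
w = np.abs(np.sin(2 * ph))                 # the proposed PH distribution
print(np.trapz(w, ph))                     # prints ~1.0, so it integrates to 1 on 0..Pi/2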

But maybe you have in mind relating the two distributions in a non-symmetric fashion (when we consider Alice's and Bob's samples together)? In that case, yes, they do not match 100%, so there are always unpaired photons for a given setup of the photon source.
 
  • #37
zonde said:
Yes

The same as in Malus's law, I' = I*cos^2(theta). It is the angle between the polarization axis of the photons and the polarization axis of the polarizer.
abs(sin(2*theta)) is just the derivative of cos^2(theta) with the sign dropped.

...

Just making sure. Now in your previous post you said:

"So I hypothesize that all photons in sample have the same hidden variable - polarization but there is another hidden variable that determines interval of angles where photon will be filtered out by polarizer or will pass through."

If that were true, then placing a polarizing beam splitter into a light stream - which yields 2 streams out - would mean that the 2 output streams do not match (sum to) the intensity of the input beam. Is that what you are saying? Or are you saying that the path out of the PBS is determined by considering both the hidden variables POL and PH?
 
Last edited:
  • #38
zonde said:
Well, it took some time to come up with a model that can produce local samples for Bob and Alice that give a sinusoidal graph (so as to back my claims with something tangible).
...
To test this model, one has to generate a set of uniformly distributed random values for "pol" and "ph" over the period 0 to 2*Pi and then plug them into the formulas.

Have you already done some simulations with the setup mentioned?
 
  • #39
This article (http://msc.phys.rug.nl/pdf/athens06-deraedt1.pdf) describes a simulation based on the local hidden variables assumption that reproduces the quantum predictions correctly.

The simulation effectively proves that "Bell's theorem is irrelevant for (quantum) physics"
 
Last edited by a moderator:
  • #40
ajw1 said:
This article (http://msc.phys.rug.nl/pdf/athens06-deraedt1.pdf) describes a simulation based on the local hidden variables assumption that reproduces the quantum predictions correctly.

The simulation effectively proves that "Bell's theorem is irrelevant for (quantum) physics"

Claims to, but there are plenty of similar claims around. No way to debate it really, unless you are one of the authors (in which case I would be happy to discuss, but we should start a new thread). I doubt there are many who will waste time with something like this as it is obvious that no dataset can accomplish the intended result without a trick somewhere. In the referenced article, the "trick" is performed with time tags (definition of the coincidence window).

I don't usually bother with authors who intentionally hide key elements of their work, as they do here. If I had a solution as they claimed (I mean they claim the Bell result is wrong for God's sake), I would show everyone exactly how the trick is done because I would be proud of it.
 
Last edited by a moderator:
  • #41
DrChinese said:
Just making sure. Now in your previous post you said:

"So I hypothesize that all photons in sample have the same hidden variable - polarization but there is another hidden variable that determines interval of angles where photon will be filtered out by polarizer or will pass through."

If that were true, then placing a polarizing beam splitter into a light stream - which yields 2 streams out - would mean that the 2 output streams do not match (sum to) the intensity of the input beam. Is that what you are saying? Or are you saying that the path out of the PBS is determined by considering both the hidden variables POL and PH?
Sorry, a mistake. What I intended to say is that the photons in a pair have the same polarization.
And yes, I am proposing that the path out of the PBS is determined by considering both of the hidden variables, POL and PH.
 
  • #42
DrChinese said:
Claims to, but there are plenty of similar claims around. No way to debate it really, unless you are one of the authors (in which case I would be happy to discuss, but we should start a new thread). I doubt there are many who will waste time with something like this as it is obvious that no dataset can accomplish the intended result without a trick somewhere. In the referenced article, the "trick" is performed with time tags (definition of the coincidence window).

I don't usually bother with authors who intentionally hide key elements of their work, as they do here. If I had a solution as they claimed (I mean they claim the Bell result is wrong for God's sake), I would show everyone exactly how the trick is done because I would be proud of it.

The 'trick' is completely explained, as the source code of the simulation program used for their paper in the Journal of the Physical Society of Japan can be downloaded here: http://rugth30.phys.rug.nl/dlm/Down7535load.htm

Although I haven't checked it line by line yet, it seems that when you include a time window within which you assume the particles to be entangled (which is normally the case in real experiments), the Bell distribution naturally occurs.
 
Last edited by a moderator:
  • #43
ajw1 said:
Have you already done some simulations with the setup mentioned?
Yes, of course. That is why I say that one gets a sinusoid using this model.

ajw1 said:
This article describes a simulation based on the local hidden variables assumption that reproduces the quantum predictions correctly. The related presentation provides some further information. The formal publication can be found here.

The simulation effectively proves that "Bell's theorem is irrelevant for (quantum) physics"
Some time ago I stumbled on that "coincidence time loophole" myself, and after that I read some articles about this loophole.
But then I got one sample from the Weihs experiment and investigated the real data. However, I didn't find the things I expected, so I discarded the idea as not applicable to real experiments.

About Bell's theorem: it seems fair to say that Bell's theorem is irrelevant for real experiments if the fair sampling assumption does not hold. And that is accepted at least by mainstream experimentalists.
 
  • #44
DrChinese said:
If I had a solution as they claimed (I mean they claim the Bell result is wrong for God's sake), I would show everyone exactly how the trick is done because I would be proud of it.
This I do not understand. Why do you say that "they claim the Bell result is wrong"?
That statement is simply wrong. The claim is merely that Bell's theorem is a mathematical construct that describes a hypothetical situation which does not cover all possible interpretations of real experiments.
If someone were trying to prove that Bell's theorem is wrong as a mathematical construct, then I agree with you that such claims are not worth considering.
 
  • #45
zonde said:
Yes, of course. That is why I say that one gets a sinusoid using this model.
Can you post the relevant lines of code (to avoid confusion about the model)?
 
  • #46
ajw1 said:
Can you post the relevant lines of code (to avoid confusion about the model)?
I used spreadsheets.
So I generated two columns with:
=RAND()*PI()*2
made two identical copies and then used these two formulas:
=(SIGN(SIN(C$2+$B3)^2-SIN($A3)^2)+SIGN(SIN($A3)^2-SIN(PI()/8)^2))/2*ABS(SIN(2*$A3))
=(SIGN(SIN(C$2+$B3)^2-SIN($A3+PI()/4)^2)+SIGN(SIN($A3+PI()/4)^2-SIN(PI()/8)^2))/2*ABS(SIN($A3*2+PI()/2))
C$2 is the polarizer angle; columns A and B are the generated random hidden variables.
Then the coincidence probability is calculated with this formula:
=IF(AND(Alice!$E3>0;Bob!C3>0);Alice!$E3*Bob!C3;0)
then summed the last column.
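
For anyone who prefers code to a spreadsheet, here is my reading of the above as a rough Python sketch (the variable names, the random seed and the row count are my own choices; treat it as an illustration rather than a verified copy of the sheet):

import numpy as np

rng = np.random.default_rng(0)
n = 65534                                    # number of rows
a = rng.uniform(0.0, 2 * np.pi, n)           # column A hidden variable
b = rng.uniform(0.0, 2 * np.pi, n)           # column B hidden variable

def side(pol_angle, a, b, shift):
    # the two SIGN(...) terms and the ABS(SIN(...)) weight from the formulas;
    # shift = 0 reproduces the first formula, shift = pi/4 the second
    s1 = np.sign(np.sin(pol_angle + b) ** 2 - np.sin(a + shift) ** 2)
    s2 = np.sign(np.sin(a + shift) ** 2 - np.sin(np.pi / 8) ** 2)
    return (s1 + s2) / 2 * np.abs(np.sin(2 * (a + shift)))

alice = side(0.0, a, b, 0.0)                 # Alice's polarizer angle (here 0)
bob = side(np.pi / 8, a, b, np.pi / 4)       # Bob's polarizer angle (here 22.5 deg)
coinc = np.where((alice > 0) & (bob > 0), alice * bob, 0.0)
print(coinc.sum())                           # summed coincidence "probability"

Sweeping one of the polarizer angles while keeping the other fixed (and re-using the same a and b arrays) should trace out the sinusoidal coincidence curve.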
 
  • #47
ajw1 said:
The 'trick' is completely explained, as the source code of the simulation program used for their paper in the Journal of the Physical Society of Japan can be downloaded here: http://rugth30.phys.rug.nl/dlm/Down7535load.htm

Although I haven't checked it line by line yet, it seems that when you include a time window within which you assume the particles to be entangled (which is normally the case in real experiments), the Bell distribution naturally occurs.

Well, you have to make the time window biased for that to happen (obviously it cannot be random, or else the fair sampling assumption is valid). The bias must be related to the detection events so that the subsample is no longer representative of the entire universe.
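
Here is a toy sketch of what I mean by a setting-dependent bias (my own Python construction for illustration, NOT the model from the paper): if the time tag of each detection depends on the local setting and the hidden variable, the window-selected subsample need not have the same correlation as the full sample.

import numpy as np

rng = np.random.default_rng(1)
n = 200000
lam = rng.uniform(0.0, 2 * np.pi, n)          # shared hidden polarization angle

def outcome(setting, lam):
    # deterministic local +1/-1 outcome (a simple Malus-like rule)
    return np.where(np.cos(2 * (lam - setting)) > 0, 1, -1)

def tag(setting, lam):
    # toy time tag whose spread depends on the local angle difference
    return rng.uniform(0.0, 1.0, lam.size) * np.sin(2 * (lam - setting)) ** 2

a_set, b_set = 0.0, np.pi / 8
A, B = outcome(a_set, lam), outcome(b_set, lam)
keep = np.abs(tag(a_set, lam) - tag(b_set, lam)) < 0.05   # coincidence window
print(np.mean(A * B), np.mean(A[keep] * B[keep]))         # full sample vs subsample

With these toy choices the subsample correlation comes out noticeably higher than the full-sample one: the window preferentially keeps events whose hidden angle lies where both local delays are small, and those happen to be events where the two outcomes agree. That is exactly the kind of detection-dependent bias I am talking about.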

These programs trick you because you only see what they want you to see. Like a magician's act, they have you looking one place when you should be looking another. Here is a simple test:

1. Show me the entire universe, not just the sampled items. I want to know what the TRUE function is. This should provide results that match Malus.
2. Show me the sampled universe with 3 - not 2 - angle settings of my choosing.

The above are absolute requirements of any counter to Bell. It quickly becomes clear that there are no consistent settings that meet these criteria. Don't be fooled by magic! It doesn't exist.
 
Last edited by a moderator:
  • #48
zonde said:
This I do not understand. Why do you say that "they claim the Bell result is wrong"?
That statement is simply wrong. The claim is merely that Bell's theorem is a mathematical construct that describes a hypothetical situation which does not cover all possible interpretations of real experiments.
If someone were trying to prove that Bell's theorem is wrong as a mathematical construct, then I agree with you that such claims are not worth considering.

Bell does not cover all possible experiments. Or even all interpretations of experiments. (Just reasonable ones. :smile: )

Bell is a statement about local hidden variable theories. No LHV theory can make the same predictions about the universe of spin/polarization values as does QM. I think you agree with this (this is simply accepting Bell). Now, a second question is whether it is possible to have the same experimental results as QM when the universe of spin/polarization values is actually different. Clearly, there are some out there who think this is possible.

Now, this is a steep hill to climb! Because on the one hand, you have the champion: QM, which predicts a wide range of phenomena and has stood the test of time for over 80 years. On the other, you have a challenger... oops, no we don't! We have something which is NOT a theory at all, merely a hypothesis about one small aspect of the atomic world (spin/polarization). And that limited view cannot stand full scrutiny (witness Hess, Santos, etc.)

I can set plenty of bars that no local hidden variable theory can pass. Quantum erasers? No hope there, these always violate classical ideas. How about Type II PDC? I would love to see a LHV explain that!

I am happy to review a dataset to show you the problems in your logic, as I would be with anyone who makes claims that the Bell result is not scientifically supported. Keep in mind that you need to show me the values at THREE angle settings of my choosing, not TWO. This is the same mistake many make. If you show me only 2 settings, you violate the Bell reality assumption. Once you see the 3 settings together, it becomes clear that the detection MUST be related to settings of Alice and Bob and they are not independent (thus violating Bell locality assumption).

I think if you review your dataset, you will see that you cannot pass these 2 critical tests. A formula in a spreadsheet is a far cry from a serious disproof of the Bell/Aspect result.
 
  • #49
zonde said:
I used spreadsheets.
So I generated two columns with:
=RAND()*PI()*2
made two identical copies and then used these two formulas:
=(SIGN(SIN(C$2+$B3)^2-SIN($A3)^2)+SIGN(SIN($A3)^2-SIN(PI()/8)^2))/2*ABS(SIN(2*$A3))
=(SIGN(SIN(C$2+$B3)^2-SIN($A3+PI()/4)^2)+SIGN(SIN($A3+PI()/4)^2-SIN(PI()/8)^2))/2*ABS(SIN($A3*2+PI()/2))
C$2 is the polarizer angle; columns A and B are the generated random hidden variables.
Then the coincidence probability is calculated with this formula:
=IF(AND(Alice!$E3>0;Bob!C3>0);Alice!$E3*Bob!C3;0)
then summed the last column.

The code seems different from the formulas mentioned earlier: I don't see the Cos() functions, or am I overlooking something?
Is the first SIGN(...) function in column C, and the second in column E?
 
  • #50
DrChinese said:
Bell does not cover all possible experiments. Or even all interpretations of experiments. (Just reasonable ones. :smile: )

Bell is a statement about local hidden variable theories. No LHV theory can make the same predictions about the universe of spin/polarization values as does QM. I think you agree with this (this is simply accepting Bell). Now, a second question is whether it is possible to have the same experimental results as QM when the universe of spin/polarization values is actually different. Clearly, there are some out there who think this is possible.

Now, this is a steep hill to climb! Because on the one hand, you have the champion: QM, which predicts a wide range of phenomena and has stood the test of time for over 80 years. On the other, you have a challenger... oops, no we don't! We have something which is NOT a theory at all, merely a hypothesis about one small aspect of the atomic world (spin/polarization). And that limited view cannot stand full scrutiny (witness Hess, Santos, etc.)

I can set plenty of bars that no local hidden variable theory can pass. Quantum erasers? No hope there, these always violate classical ideas. How about Type II PDC? I would love to see a LHV explain that!

I am happy to review a dataset to show you the problems in your logic, as I would be with anyone who makes claims that the Bell result is not scientifically supported. Keep in mind that you need to show me the values at THREE angle settings of my choosing, not TWO. This is the same mistake many make. If you show me only 2 settings, you violate the Bell reality assumption. Once you see the 3 settings together, it becomes clear that the detection MUST be related to settings of Alice and Bob and they are not independent (thus violating Bell locality assumption).

I think if you review your dataset, you will see that you cannot pass these 2 critical tests. A formula in a spreadsheet is a far cry from a serious disproof of the Bell/Aspect result.
Hmm, who is talking about challenging QM? It seems that it's only you.
If for you QM starts and ends at non-locality, then yes, I am challenging QM. But for most people working in the field it is quite different. So I do not see it that way.

About testing with different angles: I am drawing the graph using 32 different angles - is that enough?
What is the second critical test? Type II PDC? Simple thing - just make a difference of 90 deg between the POL hidden variables of Alice and Bob, with PH keeping the same difference as before.

But it seems to me that you are not catching the meaning of this fair sampling assumption thing.
And the meaning is this: Bell's theorem is a no-go theorem, so there should be no possibility of constructing LHV models that violate its inequalities. But if the fair sampling assumption does not hold, there are plenty of possible ways to construct such models that violate Bell's inequalities, and it does not matter whether experimental results justify them or not.
Bell's inequalities do not prove anything if the fair sampling assumption does not hold. And sorry, but this IS the mainstream view.
 
  • #51
ajw1 said:
The code seems different from the formulas mentioned earlier: I don't see the Cos() functions, or am I overlooking something?
Well, it is statistically the same as the one mentioned earlier.
ajw1 said:
Is the first SIGN(...) function in column C, and the second in column E?
Well, I copied one of the functions into multiple columns with different polarizer angles (at the top of each column) so that multiple angles can be calculated at once.
Just adjust the column indexes accordingly.
 
  • #52
zonde said:
1. If for you QM starts and ends at non-locality, then yes, I am challenging QM.

2. About testing with different angles: I am drawing the graph using 32 different angles - is that enough?

3. What is the second critical test? Type II PDC? Simple thing - just make a difference of 90 deg between the POL hidden variables of Alice and Bob, with PH keeping the same difference as before.

4. But it seems to me that you are not catching the meaning of this fair sampling assumption thing. ... But if the fair sampling assumption does not hold, there are plenty of possible ways to construct such models that violate Bell's inequalities, and it does not matter whether experimental results justify them or not.
Bell's inequalities do not prove anything if the fair sampling assumption does not hold. And sorry, but this IS the mainstream view.

1. I only advocate the position that local hidden variable theories are not tenable. I do not advocate non-locality in particular.

2. 32 is NOT enough. Unless of course you are talking about SIMULTANEOUS values. I want to see 3 *simultaneous* values for angles settings I choose. Preferably either 0/120/240 or 0/45/67.5 as these are the most often used examples. If you can only show 2 at a time, you don't have a LHV theory. Sorta like taking the magician at his word when he says, "nothing up my sleeve"...

3. You haven't explained Type II PDC by saying the crystal is rotated in your formula sheet. For a local realist to explain Type II PDC it will take a lot. The reason is that there is NO 360 degree polarization entanglement from a single crystal. The output of either alone lacks that characteristic! Only when the outputs are superimposed does this feature emerge! If the photons emerged from one or the other but not both (as a local realist would insist), then the entanglement is not explainable.

4. Yes, indeed it is the mainstream view that IF the fair sampling assumption were discovered NOT to hold, then Bell test results might be invalidated. So what? If next week the speed of light were discovered to be 4 kilometers per second then a lot of other science might be invalidated too. Fair sampling is the cornerstone of science, my friend, and has absolutely nothing to do with Bell tests in particular. Obviously, there are some cases in which the data points are relatively few and far between and there may in fact be a natural bias. An example would be celestial objects used as "standard candles". But you can't model a theory (such as a LHV) that runs counter to the data and explain it as "supported" and "consistent".
Do you have some data to share? You saw my requested angle settings. Just show me data for 3 simultaneously and we can get on with the main show here.
 
  • #53
DrChinese said:
1. I only advocate the position that local hidden variable theories are not tenable. I do not advocate non-locality in particular.
But do you not distinguish whether LHV theories are viewed as an interpretation of QM or as an entirely different thing?

DrChinese said:
2. 32 is NOT enough. Unless of course you are talking about SIMULTANEOUS values. I want to see 3 *simultaneous* values for angles settings I choose. Preferably either 0/120/240 or 0/45/67.5 as these are the most often used examples. If you can only show 2 at a time, you don't have a LHV theory. Sorta like taking the magician at his word when he says, "nothing up my sleeve"...
I suppose you mean that the different angles are calculated with the same set of randomly generated HVs. Right?
For relative angles 0/45/67.5 and three different angles for Alice (0, 45, 90) with the same set of HVs (65534 rows):
1701/813/227
1654/851/266
1646/805/242
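(For comparison - my own arithmetic, not additional spreadsheet output - the cos^2 rule at relative angles 0/45/67.5 deg gives ratios 1 : 0.500 : 0.146, while these rows give roughly 1 : 0.48 : 0.13, 1 : 0.51 : 0.16 and 1 : 0.49 : 0.15.)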

DrChinese said:
3. You haven't explained Type II PDC by saying the crystal is rotated in your formula sheet. For a local realist to explain Type II PDC it will take a lot. The reason is that there is NO 360 degree polarization entanglement from a single crystal. The output of either alone lacks that characteristic! Only when the outputs are superimposed does this feature emerge! If the photons emerged from one or the other but not both (as a local realist would insist), then the entanglement is not explainable.
Just to understand the question: are you saying that the model should explain not only the measurement process but also the generation process at the source for you to consider it? If so, that will surely take some time.

DrChinese said:
4. Yes, indeed it is the mainstream view that IF the fair sampling assumption were discovered NOT to hold, then Bell test results might be invalidated. So what? If next week the speed of light were discovered to be 4 kilometers per second then a lot of other science might be invalidated too. Fair sampling is the cornerstone of science, my friend, and has absolutely nothing to do with Bell tests in particular. Obviously, there are some cases in which the data points are relatively few and far between and there may in fact be a natural bias. An example would be celestial objects used as "standard candles". But you can't model a theory (such as a LHV) that runs counter to the data and explain it as "supported" and "consistent".
Yes, but there are reasons for that particular cornerstone. You can never test everything, so you have to make some assumptions to move forward. You have to disregard some information in favor of other information you want to know.
But that does not mean the assumptions cannot be revisited later, including the fair sampling assumption in some particular context.
 
  • #54
zonde said:
1. But do you not distinguish whether LHV theories are viewed as an interpretation of QM or as an entirely different thing?


2. I suppose you mean that the different angles are calculated with the same set of randomly generated HVs. Right?
For relative angles 0/45/67.5 and three different angles for Alice (0, 45, 90) with the same set of HVs (65534 rows):
1701/813/227
1654/851/266
1646/805/242


3. Just to understand the question: are you saying that the model should explain not only the measurement process but also the generation process at the source for you to consider it? If so, that will surely take some time.

1. I think an LHV will not be an interpretation, it will be a different theory.

2. This needs to be discussed separately because we are getting close to the crux. I believe I understand your numbers as follows:

1701/813/227 means coincidences for 0, 45 and 67.5 degrees (relative to Bob), where Alice is oriented at 0 degrees. Or?

3. My point is that local realists struggle to prove Bell/Aspect wrong, failing to realize that their hypothesis is elsewhere contradicted. That is why the Bell Theorem states that no LHV theory can reproduce ALL of the predictions of quantum theory. Once you understand the full implications of the requirement, it becomes a much larger issue to overcome. That is why Santos, Hess and others have failed, because they have stumbled in postulating a full and consistent LHV hypothesis that actually leads to the predictions of QM.

Explaining Type II PDC is not simple for an LHV theory, so we should take it off the table for now. But that doesn't mean it isn't there.
 
Last edited:
  • #55
DrChinese said:
1. I think an LHV will not be an interpretation, it will be a different theory.
Even completely different from the pilot-wave interpretation?
I would disagree.

DrChinese said:
2. This needs to be discussed separately because we are getting close to the crux. I believe I understand your numbers as follows:

1701/813/227 means coincidences for 0, 45 and 67.5 degrees (relative to Bob), where Alice is oriented at 0 degrees. Or?
Yes

DrChinese said:
3. My point is that local realists struggle to prove Bell/Aspect wrong, failing to realize that their hypothesis is elsewhere contradicted. That is why the Bell Theorem states that no LHV theory can reproduce ALL of the predictions of quantum theory. Once you understand the full implications of the requirement, it becomes a much larger issue to overcome. That is why Santos, Hess and others have failed, because they have stumbled in postulating a full and consistent LHV hypothesis that actually leads to the predictions of QM.
But the Bell theorem does not cover ALL of the predictions of quantum theory. So it is just a declarative statement without much behind it.

DrChinese said:
Explaining Type II PDC is not simple for an LHV theory, so we should take it off the table for now. But that doesn't mean it isn't there.
It seems that you get polarization entanglement when the contexts (pilot waves) of the two photons overlap in a certain way. I found this experiment to be a very nice and simple demonstration of the creation of polarization entanglement: http://arxiv.org/abs/0912.1275
 
Last edited by a moderator:
  • #56
zonde said:
Even completely different from the pilot-wave interpretation?
I would disagree.

Pilot wave is not a LHV! L=Local, Pilot wave is non-local. So I am not sure of what you mean. If you are trying to say that a non-local hidden variable interpretation is possible: I would agree and there are lots of supporters of that perspective. That perspective is also considered contextual.
 
  • #57
zonde said:
But the Bell theorem does not cover ALL of the predictions of quantum theory. So it is just a declarative statement without much behind it.

This too is strange. If there was not much behind it, why is it so important? There are over 1000 papers published annually on the subject. In fact, there is enough behind it to be accepted as proven.
 
  • #58
zonde said:
2. This needs to be discussed separately because we are getting close to the crux. I believe I understand your numbers as follows:

1701/813/227 means coincidences for 0, 45 and 67.5 degrees (relative to Bob), where Alice is oriented at 0 degrees.

Reply: Yes

Okay. So out of the SAME 1701 trials mentioned above, there was ALSO coincidences of 813 for 45 degrees and 227 for 67.5 degrees. Correct? (I am not interested in separate trials for the 3 angles because we are not testing the realism requirement in such case.)
 
  • #59
zonde said:
3. You haven't explained Type II PDC by saying the crystal is rotated in your formula sheet. For a local realist to explain Type II PDC it will take a lot. The reason is that there is NO 360 degree polarization entanglement from a single crystal. The output of either alone lacks that characteristic! Only when the outputs are superimposed does this feature emerge! If the photons emerged from one or the other but not both (as a local realist would insist), then the entanglement is not explainable.

OOPS! :redface:

I wrote Type II PDC and I meant Type I. Sorry for any confusion this caused.
 
  • #60
DrChinese said:
Pilot wave is not a LHV! L=Local, Pilot wave is non-local. So I am not sure of what you mean. If you are trying to say that a non-local hidden variable interpretation is possible: I would agree and there are lots of supporters of that perspective. That perspective is also considered contextual.
If the pilot wave were turned from non-local into local, I assume that the pilot-wave interpretation would not suffer much. That is what I mean.

DrChinese said:
This too is strange. If there was not much behind it, why is it so important? There are over 1000 papers published annually on the subject. In fact, there is enough behind it to be accepted as proven.
I thought about this a bit, and it seems to me that the meaning of that particular statement is that LHV theories might explain all predictions of QM except entanglement. And in that case it's only about entanglement after all.

DrChinese said:
Okay. So out of the SAME 1701 trials mentioned above, there was ALSO coincidences of 813 for 45 degrees and 227 for 67.5 degrees. Correct? (I am not interested in separate trials for the 3 angles because we are not testing the realism requirement in such case.)
No. Otherwise we are not discussing unfair sampling.

DrChinese said:
OOPS! :redface:

I wrote Type II PDC and I meant Type I. Sorry for any confusion this caused.
Well, it turned out that as a result I cleared up a misunderstanding of mine. I found out that I had the wrong picture of Type I PDC as a direct source of polarization-entangled photon pairs.
So in the case of Type I PDC, if we talk about polarization entanglement, there have to be some more details about how the setup turns the produced (polarization non-entangled) photons into polarization-entangled ones.
 
  • #61
zonde said:
1. No. Otherwise we are not discussing unfair sampling.


2. So in the case of Type I PDC, if we talk about polarization entanglement, there have to be some more details about how the setup turns the produced (polarization non-entangled) photons into polarization-entangled ones.

1. You can have an unfair sample (of the universe of photon pairs), but it still must be realistic! There must be 3 simultaneous values for Alice at 0, 45, 67.5. Otherwise you are just saying it is a realistic model when it isn't. That is the point of Bell.

2. Yes, it is difficult to model "realistically". (Pilot wave theorists don't think so, but it is.)
 
  • #62
DrChinese said:
My point is that local realists struggle to prove Bell/Aspect wrong, failing to realize that their hypothesis is elsewhere contradicted. That is why the Bell Theorem states that no LHV theory can reproduce ALL of the predictions of quantum theory.

I tend to agree with this wording. However, I am not sure this is bad "news" for LHV and local realists. Without taking sides with local realists or against them here, I tend to think this is actually great "news" for them. The reasoning is as follows (a part of it was offered by nightlight).

1. Predictions of quantum theory include both unitary evolution and the projection postulate.

2. To prove the Bell theorem, one needs both unitary evolution and the projection postulate.

3. Strictly speaking, unitary evolution and the projection postulate directly contradict each other.

4. Inability of LHV theories to reproduce contradictory results is good for local realists.

As some of these points are not obvious, let me explain.

1. This statement seems obvious as far as unitary evolution is concerned. If you disagree that the projection postulate is also a prediction of quantum theory, please advise (I admit that this is not an obvious statement, as it depends on the interpretation of quantum theory. What is important for me, however, is that this postulate or something similar is required to prove the Bell theorem - see below).

2. One needs unitary evolution when one assumes that spin projection on any axis is conserved. One needs the projection postulate to prove that quantum theory violates the Bell inequalities (it is used to compute the correlations in quantum theory).

3. Indeed, the projection postulate necessitates irreversibility, and, strictly speaking, unitary evolution does not allow any irreversibility (let me mention, e.g., the quantum recurrence theorem (Phys. Rev. V.107 #2, pp.337-338, 1957)), so a particle, strictly speaking, does not stay in the eigenstate after measurement (if it was in a superposition before the measurement).

4. Seems obvious
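
To spell out the tension in point 3 a little more (my own minimal way of putting it, not a substitute for the recurrence-theorem argument): a projector P onto an eigenspace satisfies P^2 = P, and unless P = I it has eigenvalue 0 and is therefore not invertible, whereas any unitary U satisfies U^dagger*U = I and is invertible. So the state change |psi> -> P|psi>/||P|psi>|| prescribed by the projection postulate is not invertible (different initial superpositions can end up in the same eigenstate) and therefore cannot be described by any unitary operator.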
DrChinese said:
Once you understand the full implications of the requirement, it becomes a much larger issue to overcome. That is why Santos, Hess and others have failed, because they have stumbled in postulating a full and consistent LHV hypothesis that actually leads to the predictions of QM.

As I said, maybe it’s good for them that they failed. Interestingly, in a recent article (http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.4098v1.pdf) Santos argues that “the usual postulates of quantum are too strong”. Again, I am not taking sides with Santos or against him here. I believe, however, that, on the one hand, the proof of the Bell theorem uses mutually contradictory assumptions, on the other hand, so far no experiment has demonstrated violations of the Bell inequalities without some dubious additional assumptions, such as “fair sampling”. So I am not sure there are sufficient theoretical or experimental arguments proving that “local hidden variable theories are not tenable.”
 
  • #63
akhmeteli said:
As I said, maybe it’s good for them that they failed. Interestingly, in a recent article (http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.4098v1.pdf) Santos argues that “the usual postulates of quantum are too strong”. Again, I am not taking sides with Santos or against him here. I believe, however, that, on the one hand, the proof of the Bell theorem uses mutually contradictory assumptions, on the other hand, so far no experiment has demonstrated violations of the Bell inequalities without some dubious additional assumptions, such as “fair sampling”. So I am not sure there are sufficient theoretical or experimental arguments proving that “local hidden variable theories are not tenable.”

That is an "interesting" perspective, since you are basically saying failure is good. :smile:

The problem with the LR perspective is that they do not work against the opposition's strongest arguments, they seek the weakest to challenge. I consider fair sampling to be one of the worst possible attacks as the hypothesis is born out of LR anger and frustration and little else. As I have said before, virtually every scientific experiment relies on the fair sampling assumption and there is nothing special about it with respect to a Bell test.

On the other hand, the opposition (which is of course the mainstream) consistently challenges itself at the highest level. For example, there are new and improved Bell tests every year. Entanglement is being sought - and discovered - in new and unusual places. On the other hand, LRists basically deny the existence of entanglement (since they say coincidences are predetermined and not a result of an ongoing state).

So while the LR camp is grasping at straws (that's how it appears to me), I have read papers finding entanglement under every conceivable rock - including entanglement of particles that are outside of each other's light cones! And as predicted by QM.

As to Bell using mutually contradictory assumptions: all Bell is saying is that LR predictions can never match QM. If you think QM itself is based on mutually contradictory assumptions (which some claim is the case), that is NOT equivalent to saying Bell itself is based on those assumptions. If QM is shown to be experimentally wrong tomorrow, then so be it. But the predictions of QM are still the predictions of QM, and I don't know anyone who sees any confusion (or contradiction) in the cos^2(theta) rule.

But you are missing a truly important point of Bell: At the time it was introduced, it was widely believed that a local realistic version (a la Einstein's perspective) was tenable. Entanglement had never been witnessed! So maybe QM was wrong. But 45 years later, the story has not played out as Einstein might have imagined.

As to Santos suggesting that QM should be relaxed: yes, I saw that paper and laughed. I mean, who is he kidding? Hey, let's change the rules so Santos can convince himself LR is viable and he is right in the end. It's science, where's the beef Santos? I would love to see Santos stick with a theory for an entire year and use it to discover new sources of entanglement that were previously unknown. That would impress me.

In the meantime, there are numerous NEW theorems that are fully independent of Bell but which ALSO rule out the LR position. Examples are GHZ, Leggett, etc. and guess what: they don't rely on the "unfair" sampling assumption. So the LR position is being left in the dust as science advances. So I guess I am disagreeing with your assessment. LR is not tenable and the evidence is getting stronger, not weaker.
 
  • #64
DrChinese,
Thank you very much for a prompt and detailed reply. Let me try to comment.
DrChinese said:
That is an "interesting" perspective, since you are basically saying failure is good. :smile:
I am not just saying that failure is good in this case, I am also saying why: because “success” would be fatal for the potential “successful” theory. Indeed, if your theory has two contradictory conclusions, or assumptions, that means the theory is, strictly speaking, wrong. By the way, for this very reason quantum theory, in the specific form used to prove the Bell theorem, is, strictly speaking, wrong. Mathematically wrong. It does contain two contradictory assumptions. One of these assumptions must be wrong – logic does not allow any other conclusion. Specifically, I believe that unitary evolution (UE) is right, and the projection postulate (PP) is, strictly speaking, wrong. This is just my opinion, so you may agree or disagree, but you just cannot have both UE and PP, for the simple reason that they contradict each other, and you don’t seem to dispute that. If you do, please advise. In the following I won’t repeat this caveat and will assume that it is PP that is wrong. PP may be a good approximation, it may be a very good approximation, it may be an excellent approximation, it may be an amazingly great approximation, but the bottom line is it’s just an approximation. It just cannot be precise, because if it is, then UE has its share of problems.
DrChinese said:
The problem with the LR perspective is that they do not work against the opposition's strongest arguments, they seek the weakest to challenge.
Maybe I don’t quite understand you, or my English fails me, but I don’t quite see what is wrong about going against the weakest argument of the opponent. I would think in any contest the opponent’s weakest point is fair game. Furthermore, we are not in a court room, I think we both are just trying to understand something better, so I would think we should just agree with each other’s strongest argument, rather than waste time refusing to concede what we believe is actually correct in the opponent’s arguments.
DrChinese said:
I consider fair sampling to be one of the worst possible attacks as the hypothesis is born out of LR anger and frustration and little else. As I have said before, virtually every scientific experiment relies on the fair sampling assumption and there is nothing special about it with respect to a Bell test.
I don’t quite get it. Such people as Shimony and Zeilinger, who are no fans of LR, admit that the “detection loophole” (and, consequently, the fair sampling assumption) presents a serious problem (see the relevant quotes at https://www.physicsforums.com/showpost.php?p=1702189&postcount=13 and https://www.physicsforums.com/showpost.php?p=1705826&postcount=65 ). Do you really believe we should accept the fair sampling assumption without discussion? You yourself gave an example where this assumption may be less than obvious – “An example would be celestial objects used as "standard candles".” I guess the following reasoning by Santos makes some sense: “In the context of LHV theories the fair sampling assumption is, simply, absurd. In fact, the starting point of any hidden variables theory is the hypothesis that quantum mechanics is not complete, which essentially means that states which are considered identical in quantum theory may not be really identical. For instance if two atoms, whose excited states are represented by the same wave-function, decay at different times, in quantum mechanics this fact may be attributed to an ”essential indeterminacy”, meaning that identical causes (identical atoms) may produce different effects (different decay times). In contrast, the aim of introducing hidden variables would be to explain the different effects as due to the atomic states not being really identical, only our information (encapsuled in the wave-function) being the same for both atoms. That is, the essential purpose of hidden variables is to attribute differences to states which quantum mechanics may consider identical. Therefore it is absurd to use the fair sampling assumption -which rests upon the identity of all photon pairs- in the test of LHV theories, because that assumption excludes hidden variables a priori.”

DrChinese said:
On the other hand, the opposition (which is of course the mainstream) consistently challenge themselves at the highest level. For example, there are new and improved Bell tests every year.
I agree, there are "new and improved Bell tests every year". However, so far the result is always the same: no violation of the genuine Bell inequalities. For some reason there is always something: either the detection loophole, or the locality loophole, you name it. 45 years and counting - no violations. That reminds me of the following words from Heller's "Catch-22":
"I've got just the twelve-year-old virgin you're looking for," he announced jubilantly. "This twelve-year-old virgin is really only thirty-four, but she was brought up on a low-protein diet by very strict parents and didn't start sleeping with men until"

This is the same stuff that we hear about the Bell inequalities violations (BIV): “Yeah, we demonstrated violations, they are as good as genuine ones, even better. Detection loophole? Oh, come on, you’re nit-picking. Locality loophole? Oh, come on, you’re hair-splitting”.

You believe that BIV have been demonstrated to your satisfaction? I fail to see any such demonstrations, sorry.
DrChinese said:
Entanglement is being sought - and discovered - in new and usual places. On the other hand, LRists basically deny the existence of entanglement (since they say coincidences are predetermined and not a result of an ongoing state).

So while the LR camp is grasping at straws (that's how it appears to me), I have read papers finding entanglement under every conceivable rock - including entanglement of particles that are outside of each other's light cones! And as predicted by QM.
I don’t know, I fail to see how entanglement can eliminate LR, as existence of entanglement is not enough to prove the Bell theorem. You need the projection postulate. You are a knowledgeable person, so I am sure you appreciate that “entanglement of particles that are outside of each other's light cones” per se does not eliminate LR. In general, the only thing that could be fatal to LR is genuine BIV (that is, if we forget about superdeterminism). So far genuine BIV have not been demonstrated, and I don’t hold my breath.
DrChinese said:
As to Bell using mutually contradictory assumptions: all Bell is saying is that LR predictions can never match QM. If you think QM itself is based on mutually contradictory assumptions (which some claim is the case), that is NOT equivalent to saying Bell itself is based on those assumptions. If QM is shown to be experimentally wrong tomorrow, then so be it. But the predictions of QM are still the predictions of QM, and I don't know anyone who sees any confusion (or contradiction) in the cos^2(theta) rule.
I don’t get it. I specifically indicated the two mutually contradictory assumptions that are both predictions of QM and necessary to prove the Bell theorem. So while I could agree that “If you think QM itself is based on mutually contradictory assumptions (which some claim is the case), that is NOT equivalent to saying Bell itself is based on those assumptions.”, this is not relevant, because the proof of the Bell theorem is indeed based on two mutually contradictory assumptions, and I specifically indicated that, showing where the proof uses UE and PP. As for the cos^2(theta) rule, when you use it for both particles of the singlet, I believe you need the projection postulate (to count the QM correlations), and PP directly contradicts UE.
DrChinese said:
But you are missing a truly important point of Bell: At the time it was introduced, it was widely believed that a local realistic version (a la Einstein's perspective) was tenable.
I don’t know. My impression was that the Copenhagen interpretation’s grip on physics was much stronger then than now. But I may be mistaken.
DrChinese said:
Entanglement had never been witnessed! So maybe QM was wrong. But 45 years later, the story has not played out as Einstein might have imagined.
Again, entanglement does not eliminate LR. And Einstein is no relative of mine. It is my understanding he opposed the uncertainty principle. So he was wrong on this issue (at least I believe so). But the uncertainty principle per se does not eliminate LR either. On the other hand, Einstein’s EPR paper led to significant progress.
DrChinese said:
As to Santos suggesting that QM should be relaxed: yes, I saw that paper and laughed. I mean, who is he kidding? Hey, let's change the rules so Santos can convince himself LR is viable and he is right in the end. It's science, where's the beef Santos? I would love to see Santos stick with a theory for an entire year and use it to discover new sources of entanglement that were previously unknown. That would impress me.
Neither is Santos any relative of mine:-) I just mentioned his paper as an example where a local realist appreciates that he cannot and does not need to emulate all predictions of QM.
DrChinese said:
In the meantime, there are numerous NEW theorems that are fully independent of Bell but which ALSO rule out the LR position.
Are they independent of such things as PP?
DrChinese said:
Examples are GHZ, Leggett, etc. and guess what: they don't rely on the "unfair" sampling assumption.
I don’t quite get it. Neither does the standard Bell theorem rely on the “fair” or “unfair” sampling assumption. FS is used to interpret experimental results as violating the Bell inequalities. I readily admit that I don’t know much about GHZ, Leggett etc., but I suspect they basically have the same problems as the Bell theorem. For example, I have not heard anybody state that they were successfully used to conduct loophole-free experiments eliminating LR.
DrChinese said:
So the LR position is being left in the dust as science advances. So I guess I am disagreeing with your assessment. LR is not tenable and the evidence is getting stronger, not weaker.
My assessment is there are neither no-go theorems nor experimental data eliminating LR. But I certainly respect your point of view.
 
Last edited by a moderator:
  • #65
Y'know, this is a complaint that I've never understood, because we have arrived at solid conclusions based on flimsier evidence than this. Let's examine 2 classes of the Bell-type experiments.

1. Bell-violating experiments using light.

Now, everyone agrees that there has been a plethora of experiments (and let's be honest here, there have been PLENTY OF THEM, with NOT ONE SINGLE piece of evidence pointing to the contrary) showing violations of the Bell and related inequalities, even for multipartite systems. ALL of these experiments closed the locality loophole. However, the biggest complaint here is the detection loophole, and somehow there are still people not convinced by the results of the experiments (even though ALL of them give the IDENTICAL conclusion).

2. Bell-violating experiments using matter.

These can be done using charge carriers, or even qubits (see, for example, M. Ansmann et al., Nature v.461, p.504 (2009)). There has been ZERO question that ALL of these experiments closed the detection loophole - you can detect them one at a time without any need for a fair-sampling treatment. The loophole that these experiments can't close right now is the locality loophole, since these are experiments done on a very small scale, although there are indications that, using the technique of Ansmann et al., the system might be robust enough to extend to a large length scale and close this loophole as well.

So what do we have here? We have a set of tests of a single principle, in which the tests are conducted in various different manners, coming from very different angles, and testing different aspects of it. It is an AMAZING FACT that ALL of them produce a consistent result! This fact seems to be severely overlooked! I mean, think about it for a second! It is astounding that each of the experiments that close each of the different loopholes produces the SAME, IDENTICAL result, and not only that, with such HIGH CONFIDENCE (the Ansmann et al. experiment, for example, produced a result that exceeded 244 standard deviations). It's not even funny!

I can understand if there were some indications from some experiment somewhere that a test has produced something to the contrary. The FACT that even this doesn't exist, and yet there are people here who are somehow CONVINCED, for some odd reason, that this whole thing is "wrong" (which is a very strong word), now THAT is utterly baffling.

Zz.
 
  • #66
ZapperZ said:
Y'know, this is a complaint that I've never understood, because we have arrived at solid conclusions based on flimsier evidence than this. Let's examine 2 classes of the Bell-type experiments.

1. Bell-violating experiments using light.

Now, everyone agrees that there has been a plethora of experiments (and let's be honest here, there have been PLENTY OF THEM, with NOT ONE SINGLE piece of evidence pointing to the contrary) showing violations of the Bell and related inequalities, even for multipartite systems. ALL of these experiments closed the locality loophole. However, the biggest complaint here is the detection loophole, and somehow there are still people not convinced by the results of the experiments (even though ALL of them give the IDENTICAL conclusion).

2. Bell-violating experiments using matter.

These can be done using charge carriers, or even qubits (see, for example, M. Ansmann et al., Nature v.461, p.504 (2009)). There has been ZERO question that ALL of these experiments closed the detection loophole - you can detect them one at a time without any need for a fair-sampling treatment. The loophole that these experiments can't close right now is the locality loophole, since these are experiments done on a very small scale, although there are indications that, using the technique of Ansmann et al., the system might be robust enough to extend to a large length scale and close this loophole as well.

So what do we have here? We have a set of tests of a single principle, in which the tests are conducted in various different manners, coming from very different angles, and testing different aspects of it. It is an AMAZING FACT that ALL of them produce a consistent result! This fact seems to be severely overlooked! I mean, think about it for a second! It is astounding that each of the experiments that close each of the different loopholes produces the SAME, IDENTICAL result, and not only that, with such HIGH CONFIDENCE (the Ansmann et al. experiment, for example, produced a result that exceeded 244 standard deviations). It's not even funny!
I am trying hard to understand how your reasoning is better than the following:
Euclidean geometry on a plane is wrong because it proves that the sum of the angles is 180 degrees. Experiment shows, however, that this is wrong 1) for quadrangles on a plane and 2) for triangles on a sphere.
Sorry, I just cannot understand how this is different from what you want me to accept. The Bell theorem states that LHV theories cannot violate some inequalities under some assumptions. All you're telling me is that experiment demonstrates violations when these assumptions are not satisfied. ZapperZ, I do sincerely respect you for your knowledge and patience, so it is with great regret that I have to say that I'm less than impressed.
ZapperZ said:
I can understand if there were some indications from some experiment somewhere that a test has produced something to the contrary. The FACT that even this doesn't exist, and yet there are people here who are somehow CONVINCED, for some odd reason, that this whole thing is "wrong" (which is a very strong word), now THAT is utterly baffling.

Zz.
I tried to explain why unitary evolution contradicts the projection postulate. I used purely mathematical arguments. For some reason, you don't challenge the specific reasoning I used. If you do challenge it, please advise. So on a purely mathematical level these predictions of standard quantum mechanics contradict each other. Therefore, strictly speaking, one of them is wrong. Yes, this is a strong word, but I am afraid you're trying to kill the messenger again. I did not invent unitary evolution. I did not invent the projection postulate. It's not my fault that they contradict each other. Even if I die of West Nile fever tomorrow :-), they won't stop contradicting each other.
 
  • #67
akhmeteli said:
I am trying hard to understand how your reasoning is better than the following:
Euclidean geometry on a plane is wrong because it proves that the sum of the angles is 180 degrees. Experiment shows, however, that this is wrong 1) for quadrangles on a plane and 2) for triangles on a sphere.
Sorry, I just cannot understand how this is different from what you want me to accept. The Bell theorem states that LHV theories cannot violate some inequalities under some assumptions. All you're telling me is that experiment demonstrates violations when these assumptions are not satisfied. ZapperZ, I do sincerely respect you for your knowledge and patience, so it is with great regret that I have to say that I'm less than impressed.

I tried to explain why unitary evolution contradicts the projection postulate. I used purely mathematical arguments. For some reason, you don't challenge the specific reasoning I used. If you do challenge it, please advise. So on a purely mathematical level these predictions of standard quantum mechanics contradict each other. Therefore, strictly speaking, one of them is wrong. Yes, this is a strong word, but I am afraid you're trying to kill the messenger again. I did not invent unitary evolution. I did not invent the projection postulate. It's not my fault that they contradict each other. Even if I die of West Nile fever tomorrow :-), they won't stop contradicting each other.

I was addressing your complaint regarding the loopholes, as in the detection loopholes.

If you think there is a logical inconsistency in the Bell theorem itself, then I would love to see you stick your neck out and publish it. Complaining about it on here does no one any good, does it?

Zz.
 
  • #68
ZapperZ said:
I was addressing your complaint regarding the loopholes, as in the detection loopholes.
Yes, but you also did something else. You reproached me for the strong word “wrong”. I used this word for the assumptions of the Bell theorem only, so I assumed you challenged that part of my post as well.
ZapperZ said:
If you think there is a logical inconsistency in the Bell theorem itself, then I would love to see you stick your neck out and publish it. Complaining about it on here does no one any good, does it?

Zz.
I am not sure I quite understand that. I don’t see what I can publish – I am not sure I said anything original. The assumptions of the Bell theorem are well-known. The problem of measurement in QM is well-known. The results of the experiments on the Bell inequalities are well-known and are not a matter of dispute – only their interpretation may be controversial. I did not present any independent research, just summarized some pretty well-known results. You don’t seem to dispute the factual aspects of my posts, only my interpretation.
As for my posts doing or not doing any good… I don't know. I can imagine they do not do you any good, as you know all of this without me. However, we are not the only people on this forum, and I hope some of them may find my posts more useful than you do. You see, people keep saying in this forum that the Bell theorem and the relevant experiments rule out local realism. I present some arguments trying to explain that the situation is somewhat more complex. I am not sure that is just an unwanted distraction for participants of the forum. If, however, you, as a mentor, are telling me to keep my opinions to myself… Well, it's certainly your right, you are the boss.
 
  • #69
DrChinese said:
1. You can have an unfair sample (of the universe of photon pairs), but it still must be realistic! There must be 3 simultaneous values for Alice at 0, 45, 67.5. Otherwise you are just saying it is a realistic model when it isn't. That is the point of Bell.
There are of course 3 simultaneous values for Alice at 0, 45, 67.5 - they are calculated independently for Alice and Bob. But that does not mean that all pairs are detected at 0 deg.

Let me illustrate this. We have a photon pair whose two photons have the same POL value, but it is off by 45 deg from the polarizers of Alice and Bob. Depending on the PH value, each photon is detected or not. But the PH value is different for the two photons in the pair (according to the model), so depending on the PH values, both of them can be detected, or only one photon from the pair can be detected (no coincidence), or both photons can go undetected (this case cannot result in a detected coincidence if we manipulate only Bob's polarizer or only Alice's polarizer).
Let's say we detect Bob's photon but not Alice's. Now we turn Alice's polarizer by 45 deg, and sure enough we now detect Alice's photon, and we have a coincidence that did not show up in the 0 deg measurement.

So, according to the model, you do not detect at 0 deg all the pairs relevant for possible 45 and 67.5 deg coincidences.
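
To put a number on this, here is a rough Python sketch using the same formulas as in post #46 (the names and the random seed are mine):

import numpy as np

rng = np.random.default_rng(0)
n = 65534
a = rng.uniform(0.0, 2 * np.pi, n)            # POL-related hidden variable
b = rng.uniform(0.0, 2 * np.pi, n)            # PH-related hidden variable

def side(angle, a, b, shift):
    # same construction as the spreadsheet formulas (shift = 0 and pi/4)
    s1 = np.sign(np.sin(angle + b) ** 2 - np.sin(a + shift) ** 2)
    s2 = np.sign(np.sin(a + shift) ** 2 - np.sin(np.pi / 8) ** 2)
    return (s1 + s2) / 2 * np.abs(np.sin(2 * (a + shift)))

bob = side(0.0, a, b, np.pi / 4)              # Bob fixed at 0 deg
alice_0 = side(0.0, a, b, 0.0)                # Alice at 0 deg
alice_45 = side(np.pi / 4, a, b, 0.0)         # Alice turned to 45 deg
flipped = (bob > 0) & (alice_0 <= 0) & (alice_45 > 0)
print(flipped.sum())                          # pairs that show up only at 45 deg

A nonzero count here is exactly the point: the set of detected pairs depends on the settings, so the 0 deg run does not contain all the pairs relevant to the 45 deg run.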
 
  • #70
ZapperZ said:
1. Bell-violating experiments using light.

Now, everyone agrees that there has been a plethora of experiments (and let's be honest here, there have been PLENTY OF THEM, with NOT ONE SINGLE piece of evidence pointing to the contrary) showing violations of the Bell and related inequalities, even for multipartite systems. ALL of these experiments closed the locality loophole. However, the biggest complaint here is the detection loophole, and somehow there are still people not convinced by the results of the experiments (even though ALL of them give the IDENTICAL conclusion).
Yes, that is the thing that puzzles me. So I was looking at what is common to all these experiments.
And you know, I think I know one thing that is common to them: you have to keep the coincidence detection rate as low as reasonably possible for the minimum-correlation settings.
That is reasonable, because this is an indicator of how pure the entanglement is. Isn't it so?
So the question is whether LHV models can be constructed that restore local realism if quasi-decoherence takes place but is taken away by unintentionally biased settings. And I have just given one such model.
 