A classical challenge to Bell's Theorem?

In summary: the conversation revolves around randomness and causality in quantum mechanics. The original post describes a thought experiment involving a Bell-test set-up and the CHSH inequality. The discussion then turns to whether quantum events can be effects without a cause, and how this relates to the Bell-test scenario. Finally, a variant scenario is proposed in which the quantum entanglement is removed and replaced by a mystical being controlling a parameter, and the thread closes with a request for clarification on how physicists and mathematicians deal with the wholly classical setting in the context set by Bell (1964).
  • #141
gill1109 said:
I'm afraid de Raedt and his colleagues are rather confused and don't understand these issues. So many things they say are rather misleading. The experiment as a whole has an efficiency and it is measured by the proportion of unpaired events on both sides of the experiment. Big proportion unpaired, low efficiency.
Let us use your coins analogy. For this purpose we say heads = +1, tails = -1.

THEORETICAL:
Suppose we have 3 coins labelled "a", "b", "c" and we toss all three a very large number of times. The inequality |ab + ac| - bc <= 1 can never be violated in any individual case, and therefore the averaged form |<ab> + <ac>| - <bc> <= 1 can never be violated either.

Proof (the terms shown inside the inequality are the products ab, ac and bc):
a,b,c = (+1,+1,+1): |(+1) + (+1)| - (+1) <= 1, obeyed=True
a,b,c = (+1,+1,-1): |(+1) + (-1)| - (-1) <= 1, obeyed=True
a,b,c = (+1,-1,+1): |(-1) + (+1)| - (-1) <= 1, obeyed=True
a,b,c = (+1,-1,-1): |(-1) + (-1)| - (+1) <= 1, obeyed=True
a,b,c = (-1,+1,+1): |(-1) + (-1)| - (+1) <= 1, obeyed=True
a,b,c = (-1,+1,-1): |(-1) + (+1)| - (-1) <= 1, obeyed=True
a,b,c = (-1,-1,+1): |(+1) + (-1)| - (-1) <= 1, obeyed=True
a,b,c = (-1,-1,-1): |(+1) + (+1)| - (+1) <= 1, obeyed=True
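A quick script to verify this enumeration (a minimal sketch; nothing beyond the arithmetic above):

```python
from itertools import product

# Enumerate all eight sign assignments and check |ab + ac| - bc <= 1 for each.
for a, b, c in product([+1, -1], repeat=3):
    lhs = abs(a * b + a * c) - b * c
    print(f"a,b,c = ({a:+d},{b:+d},{c:+d}): lhs = {lhs}, obeyed = {lhs <= 1}")
```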


EXPERIMENTAL:
We have 3 coins labelled "a", "b", "c", one of which is inside a special box. Only two of them can be outside the box at any given time, because you need to insert a coin in order to release another. So experimentally we decide to perform the experiment by tossing pairs of coins at a time, each pair a very large number of times. In the first run we toss "a" and "b" a large number of times, in the second we toss "a" and "c" a large number of times, and in the third we toss "b" and "c". Even though the data appears random, we then calculate <ab>, <ac> and <bc>, substitute into our equation, and find that the inequality is violated! We are baffled: does this mean there is non-local causality involved? For example, we find that <ab> = -1, <ac> = -1 and <bc> = -1. Therefore |(-1) + (-1)| - (-1) = 3 <= 1 is required, but 3 > 1, so the inequality is violated. How can this be possible? Does this mean there is spooky action at a distance happening?

No. Consider the following: each coin has a hidden mechanism inside [which the experimenters do not know of] which exhibits oscillatory behaviour in time, determined at the moment it leaves the box. Let us presume that the hidden behaviour of each coin is a function of some absolute time "t" and follows the function sin(t).

The above scenario [<ab> = -1, <ac> = -1 and <bc> = -1 ] can easily be realized if:
- if sin(t) > 0: coin "a" always produces heads (+1), coin "c" always produces tails (-1) while coin "b" produces tails (-1) if it was released from the box using coin "a", but heads (+1) if it was released from the box using coin "c".
- if sin(t) <= 0: all the signs are reversed.
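To make this concrete, here is a minimal simulation of those rules (a sketch under the stated assumptions; in particular, it assumes that in the (a, b) run coin "b" was released using coin "a", and in the (b, c) run using coin "c"):

```python
import math
import random

def toss_pair(pair, t):
    # All signs reverse when sin(t) <= 0, per the rules above.
    s = +1 if math.sin(t) > 0 else -1
    if pair == ("a", "b"):
        return +1 * s, -1 * s   # a -> heads; b, released using a -> tails
    if pair == ("a", "c"):
        return +1 * s, -1 * s   # a -> heads; c -> tails
    if pair == ("b", "c"):
        return +1 * s, -1 * s   # b, released using c -> heads; c -> tails

n = 10_000
for pair in [("a", "b"), ("a", "c"), ("b", "c")]:
    total = 0
    for _ in range(n):
        x, y = toss_pair(pair, random.uniform(0, 1000))  # arbitrary absolute times
        total += x * y
    print(pair, total / n)   # each average is exactly -1
```

All three averages come out at exactly -1, so |<ab> + <ac>| - <bc> = 3 > 1: the bound is violated even though every rule is local and fixed in advance.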


Therefore it can be seen that violation of the inequality is possible in a situation which is CLEARLY locally causal. We have defined in advance the rules by which the system operates, and those rules do not violate local causality, yet the inequality is violated. No mention of any detector efficiency or loopholes of any kind.
 
  • #142
gill1109 said:
The important measure of efficiency is determined empirically. And it is not just about what goes on at the detectors. It is: what proportion of the observed events on Alice's side of the experiment are linked to an observed event on Bob's side, and the same thing the other way round. Both of these proportions have to be at least something like 95% before a violation of CHSH at 2√2 actually proves anything. In Weihs' experiment they are both about 5%.

Particles don't just get lost at the detectors. They also get lost "in transmission". Some even get reabsorbed in the same crystal where they were "born" by being excited with a laser.


I'm afraid de Raedt and his colleagues are rather confused and don't understand these issues. So many things they say are rather misleading. The experiment as a whole has an efficiency and it is measured by the proportion of unpaired events on both sides of the experiment. Big proportion unpaired, low efficiency.

Good points. For those who are wondering, please keep the following in mind:

First: Using the EPR definition of an element of reality, you must start with a stream of photon pairs that yield perfect correlations. I.e., you cannot test a source stream which is NOT entangled! The experimenter searches for this, and provides one with as much fidelity as possible. Whether it ends up at 5% due to any number of factors or not, you must start there when executing your Bell test.

Next: you ask whether there is something about your definition of entangled pairs that is somehow systematically biased. That is always possible with any experiment, and open to critique. And in almost any experiment, you ultimately conclude with the assumption that your sample is representative of the universe as a whole.

In a computer simulation such as that of de Raedt et al, you don't really have a physical model. It is just an ad hoc formula constructed with a specific purpose. In this case it is wrapped up as if there is a time coincidence window. But I could remap their formula to use day of the week or changes in the stock market or whatever, and it would work just as well.

BTW, their model does not yield perfect correlations as the window size increases. This violates one of our initial requirements, which is to start with a source which meets the EPR requirement of providing an element of reality. Only when the stream is unambiguously entangled do we see perfect correlations. So we lose the validity of considering the unmatched events (of this stream of imperfect pairs) as being part of a full universe which does not violate a Bell inequality. You cannot mix in unentangled pairs!

Lastly: If you DID take the simulation seriously, you would then need to map it to a physical model that would be subject to physical tests. No one really takes this that seriously because of the other issues present. Analysis of the Weihs coincidence-time data does not match the de Raedt et al model in any way. The only connection is that the term "coincidence time window" is used.
 
  • #143
DrChinese said:
First: Using the EPR definition of an element of reality, you must start with a stream of photon pairs that yield perfect correlations. I.e., you cannot test a source stream which is NOT entangled!
It can easily be verified that for the coin counter-example in post #141 above, which violates the inequality, there is perfect correlation between the pairs. No coincidence counting, no time window, and no detector efficiency is involved. Yet violation occurs.
 
  • #144
billschnieder said:
The above scenario [<ab> = -1, <ac> = -1 and <bc> = -1 ] can easily be realized if:
- if sin(t) > 0: coin "a" always produces heads (+1), coin "c" always produces tails (-1) while coin "b" produces tails (-1) if it was released from the box using coin "a", but heads (+1) if it was released from the box using coin "c".
- if sin(t) <= 0: all the signs are reversed.

Sad Bill. Really sad. This is not realistic, it is contextual.
 
  • #145
...And obviously does NOT yield perfect correlations since the coin b outcome is dependent on what coin the observer uses to get it.
 
  • #146
DrChinese said:
...And obviously does NOT yield perfect correlations since the coin b outcome is dependent on what coin the observer uses to get it.

We are talking about perfect correlations of measured pairs! QM does not say one photon of one pair must be perfectly correlated with a different photon of a different pair, does it? :rolleyes:

<ab> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<ac> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<bc> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
 
  • #147
DrChinese said:
Sad Bill. Really sad. This is not realistic, it is contextual.
Duh, realistic does NOT conflict with contextual. I've explained this to you a million times and you still make silly mistakes like this.

You are now claiming that our coins-and-box mechanism is not realistic. Anyone else reading this should be able to see how absurd such a claim is. The coins each have definite properties, and behave according to definite rules all defined well ahead of time. What is not realistic about that? Yet we still get a violation.
 
  • #148
billschnieder said:
We are talking about perfect correlations of measured pairs! QM does not say one photon of one pair must be perfectly correlated with a different photon of a different pair, does it? :rolleyes:

<ab> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<ac> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<bc> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair

billschnieder said:
Duh, realistic does NOT conflict with contextual. I've explained this to you a million times and you still make silly mistakes like this.

You are now claiming that our coins-and-box mechanism is not realistic. Anyone else reading this should be able to see how absurd such a claim is. The coins each have definite properties, and behave according to definite rules all defined well ahead of time. What is not realistic about that? Yet we still get a violation.

Take the DrC challenge then and show me the realistic data set. Show me +/- values of a, b, c for each of 8 to 20 pairs. You use your own algorithm to generate it; that way, there is no chance of me misinterpreting things.

The only model that has ever passed is the de Raedt et al computer simulation, and it has its own set of issues (i.e. does not match experiment).
 
  • #149
DrChinese said:
Take the DrC challenge then and show me the realistic data set. Show me +/- values of a, b, c for each of 8 to 20 pairs. You use your own algorithm to generate it; that way, there is no chance of me misinterpreting things.

The only model that has ever passed is the de Raedt et al computer simulation, and it has its own set of issues (i.e. does not match experiment).
We already went through this and you gave up (https://www.physicsforums.com/showthread.php?t=499002&page=4). It is a nonsensical challenge. I turn it back on you: give me a non-realistic dataset or a non-local dataset which violates the inequality.

BTW: If you want to repeat this again, start a new thread for it as this is getting off-topic.
 
  • #150
billschnieder said:
We already went through this and you gave up (https://www.physicsforums.com/showthread.php?t=499002&page=4). It is a nonsensical challenge. I turn it back on you: give me a non-realistic dataset or a non-local dataset which violates the inequality.

BTW: If you want to repeat this again, start a new thread for it as this is getting off-topic.

All I can say is take the challenge or be quiet. :-p

By definition, a non-realistic dataset does NOT have 3 simultaneous values.

Here is as good a place to discuss as any, please see (er, I mean read and understand) the title.
 
  • #151
DrChinese said:
By definition, a non-realistic dataset does NOT have 3 simultaneous values.
Does a non-realistic dataset have any values at all? :smile:
 
  • #152
billschnieder said:
Does a non-realistic dataset have any values at all? :smile:

How is this:

a b c
+ - *
- * +
* + +
+ - *

Where * is undefined, and the other 2 map to actual observations. Now, where is yours, big talker? Howsa 'bout just the dataset.
 
  • #153
DrChinese said:
How is this:

a b c
+ - *
- * +
* + +
+ - *

Where * is undefined, and the other 2 map to actual observations.

So I guess according to you this is also a non-realistic dataset:

a b c d
+ - + -
- * + -
* + + +
+ - * +

and this too:
a b c
+ - *
- + *
+ + *
+ - *

Is this one a realistic dataset?:
a b
+ -
- +
+ +
+ -

What about this one?:
a
+
-
+
+

Clearly, in your mind you believe it is impossible for an experiment to produce a realistic dataset. So according to you, by definition, all experiments produce non-realistic datasets. I wonder then why you need Bell at all?

Now let us not waste the time of these fine folks reading this thread with such rubbish. We went through this exercise already right here: https://www.physicsforums.com/showthread.php?t=499002&page=4 and I presented several datasets on page 6. Anybody who is interested in seeing how nonsensical your challenge is can check that thread and see the datasets I presented and your bobbing and weaving.
 
  • #154
DrChinese said:
How is this:

a b c
+ - *
- * +
* + +
+ - *

Where * is undefined, and the other 2 map to actual observations. Now, where is yours, big talker? Howsa 'bout just the dataset.

This data set could have been generated by tossing two coins at a time. Right!
The mismatch counts are:
nab = 2
nbc = 0
nac = 1

nbc + nac ≥ nab is violated. How do you explain the violation? Is coin tossing non-local?
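These counts can be verified mechanically (a minimal sketch; None stands in for the undefined * entries):

```python
data = [            # the dataset from post #152, with None for '*'
    (+1, -1, None),
    (-1, None, +1),
    (None, +1, +1),
    (+1, -1, None),
]

def mismatches(i, j):
    # Count rows where both columns are defined and the signs differ.
    return sum(1 for row in data
               if row[i] is not None and row[j] is not None and row[i] != row[j])

nab, nbc, nac = mismatches(0, 1), mismatches(1, 2), mismatches(0, 2)
print(nab, nbc, nac)                          # 2, 0, 1
print("nbc + nac >= nab?", nbc + nac >= nab)  # False: the inequality is violated
```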
 
  • #155
rlduncan said:
This data set could have been generated by tossing two coins at a time. Right!
The mismatch counts are:
nab = 2
nbc = 0
nac = 1

nbc + nac ≥ nab is violated. How do you explain the violation? Is coin tossing non-local?

There are a lot of Bell inequalities. The one you used is not applicable in this case. I think you have discovered one of the points I am making. Bill often switches from one example to the other, throwing things around.

In my challenge, the Bell lower limit is 1/3 (matches). The quantum mechanical value is 0.25 - which is cos^2(120 degrees). My example yields 0.25, which is fine because it is not realistic and so Bell does not apply.

What I am saying is that no realistic dataset will produce results below 1/3 once we have a suitably large sample. Bill or you can provide the dataset; I will select which 2 angles to choose from for each set of 3 values for a/b/c. That's the challenge.
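For what it's worth, the counting argument behind that 1/3 bound is pigeonhole: any complete row of three +/-1 values contains at least one matching pair, so a uniformly random choice among the three pairs matches at least 1/3 of the time. A minimal sketch (illustrative only; the dataset can be anything you like):

```python
import random

def match_rate(dataset, trials=100_000):
    pairs = [(0, 1), (0, 2), (1, 2)]    # the three possible angle pairings
    hits = 0
    for _ in range(trials):
        row = random.choice(dataset)
        i, j = random.choice(pairs)     # the "2 angles" chosen for that row
        hits += (row[i] == row[j])
    return hits / trials

# Any dataset of complete +/-1 triples whatsoever:
dataset = [tuple(random.choice([+1, -1]) for _ in range(3)) for _ in range(1000)]
print(match_rate(dataset))  # expected rate >= 1/3 for ANY such dataset; QM gives 0.25
```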
 
  • #156
DrChinese said:
There are a lot of Bell inequalities. The one you used is not applicable in this case. I think you have discovered one of the points I am making. Bill often switches from one example to the other, throwing things around.

In my challenge, the Bell lower limit is 1/3 (matches). The quantum mechanical value is 0.25 - which is cos^2(120 degrees). My example yields 0.25, which is fine because it is not realistic and so Bell does not apply.

What I am saying is that no realistic dataset will produce results below 1/3 once we have a suitably large sample. Bill or you can provide the dataset; I will select which 2 angles to choose from for each set of 3 values for a/b/c. That's the challenge.

I assume then that you hand-picked your data set. And you may be correct that the inequality I chose to violate is not applicable in one sense. However, when all three data pieces (a, b, c) are used, then I believe that no matter how you pick the data pairs or write the inequality, there can be no violation; it is a mathematical truth. It is equivalent to the triangle inequality, where the sum of any two sides is greater than the third. Am I wrong on this point?
 
  • #157
billschnieder said:
EXPERIMENTAL:
We have 3 coins labelled "a", "b", "c", one of which is inside a special box. Only two of them can be outside the box at any given time, because you need to insert a coin in order to release another. So experimentally we decide to perform the experiment by tossing pairs of coins at a time, each pair a very large number of times. In the first run we toss "a" and "b" a large number of times, in the second we toss "a" and "c" a large number of times, and in the third we toss "b" and "c". Even though the data appears random, we then calculate <ab>, <ac> and <bc>, substitute into our equation, and find that the inequality is violated! We are baffled: does this mean there is non-local causality involved? For example, we find that <ab> = -1, <ac> = -1 and <bc> = -1. Therefore |(-1) + (-1)| - (-1) = 3 <= 1 is required, but 3 > 1, so the inequality is violated. How can this be possible? Does this mean there is spooky action at a distance happening?

Bill, I've been trying to understand your coins example. Is this analogous to measuring a pair of entangled photons at one of three previously agreed angles? So tossing coins "a" and "b" means Alice measures her photon at angle "a", while Bob measures his photon at angle "b"?
 
  • #158
gill1109 said:
Quick response to Gordon. You said you could get half-way to the desired correlations, easily. I said "exactly", because half-way does not violate CHSH. Sorry, I have not found out exactly what you mean by Y, W, and I don't know what you mean by the classical OP experiment. My discussion was aimed at Aspect, done more recently, better still, by Weihs.

...

gill1109, here's the point that I was making; and why I see your "exactly" as missing the point:

1. We can run an experiment (i.e., W; the classical experiment defined in the OP, replacing Aspect's high-tech source with a low-cost classical one) AND obtain exactly half of the Aspect correlation over every (a, b) setting (see the sketch after this list).

2. So why (then) should it be surprising that Aspect's high-tech source (in his experiment; identified here as Y) delivers a higher correlation? Is it not to be expected?

3. Is it not to be expected that an expensive high-tech source of highly correlated particles (in the singlet state) should outperform a low-cost classical source whose particles are hardly correlated at all?

4. If you want to say that "the surprise" relates to the breaching of the CHSH inequality, that (I suggest) we should happily discuss under another thread.
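To make item 1 concrete, here is a minimal Monte-Carlo sketch (an illustrative construction, not the precise OP set-up): the classical source gives both photons one shared polarisation angle, and each analyser answers +1 or -1 with Malus-law probability. The resulting correlation is (1/2)cos 2(a - b), exactly half the quantum prediction cos 2(a - b):

```python
import math
import random

def E_classical(a, b, n=200_000):
    total = 0
    for _ in range(n):
        lam = random.uniform(0, math.pi)  # shared classical polarisation angle
        A = +1 if random.random() < math.cos(a - lam) ** 2 else -1  # Malus's law
        B = +1 if random.random() < math.cos(b - lam) ** 2 else -1
        total += A * B
    return total / n

for deg in (0, 22.5, 45, 67.5, 90):
    th = math.radians(deg)
    print(f"{deg:5.1f} deg: simulated {E_classical(0.0, th):+.3f}, "
          f"half of quantum {0.5 * math.cos(2 * th):+.3f}")
```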

....

PS: The designations W, X, Y, Z are short-cut specifications of experimental conditions:

W (the classical OP experiment) is Y [= Aspect (2004)] with the source replaced by a classical one (the particles pair-wise correlated via identical linear-polarisations).

X (a classical experiment with spin-half particles) is Z [= EPRB/Bell (1964)] with the source replaced by a classical one (the particles pair-wise correlated via antiparallel spins).

Y = Aspect (2004).

Z = EPRB/Bell (1964).

Hope that helps.

...

NB: Do you see some good reason to replace Aspect (2004) here with Weihs? The questions here relate to some straightforward classical analyses, with Aspect (2004) nicely explanatory of the quantum situation and readily available online at arxiv.org.

With best regards,

GW
 
  • #159
Usually we discuss hypothetical experiments where timing is fixed. Like: every second we send off two photons. They may or may not get measured at the measurement stations. Detection efficiency is then usually defined in terms of the proportion of photons lost in either wing of the experiment.

In real experiments, the times of departure of the photons and the times at which they are measured are not fixed externally. Photons leave spontaneously and get measured at times which are not controlled by us. Rather, the measurement process itself generates the times of events in both wings of the experiment. We use a "coincidence window" to decide which events are to be thought of as belonging together.

This opens a new loophole, a bit different from, and potentially more harmful than, the detector efficiency loophole. If a "photon" arrives at a detector with a plan in its mind of what setting it wants to see and what outcome it will generate, cleverly correlated with the plan of its partner in the other wing of the experiment, then this photon can arrange to arrive a bit earlier (i.e. the measurement process is faster) if it doesn't like the setting it sees. At the same time, its partner in the other wing of the experiment arranges to arrive a bit later (i.e. its measurement process is slower) if it doesn't like the setting it sees. If they both see "wrong" settings, the time interval between their arrivals is extended so much that they no longer count as a pair in the statistics.

All the photons get measured, detector efficiency is 100%, but many events are unpaired.
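For concreteness, here is a toy event-based sketch of this mechanism (an illustrative construction with an assumed delay law, not the model analysed in the paper below). Every photon is detected, but the locally generated delays decide which events pair up, and the kept pairs show a stronger correlation than the full universe does:

```python
import math
import random

def run(window=0.05, n=200_000):
    a, b = 0.0, math.pi / 8                 # fixed analyser settings
    kept, kept_sum, full_sum = 0, 0.0, 0.0
    for _ in range(n):
        lam = random.uniform(0, math.pi)    # shared hidden variable
        A = +1 if math.cos(2 * (a - lam)) >= 0 else -1
        B = -1 if math.cos(2 * (b - lam)) >= 0 else +1
        # Assumed local delay law: fast when lam is aligned with the local setting.
        tA = random.random() * abs(math.sin(2 * (a - lam))) ** 4
        tB = random.random() * abs(math.sin(2 * (b - lam))) ** 4
        full_sum += A * B
        if abs(tA - tB) < window:           # coincidence-window criterion
            kept += 1
            kept_sum += A * B
    print(f"window={window}: paired {kept / n:.0%}, "
          f"E_kept={kept_sum / max(kept, 1):+.3f}, E_all={full_sum / n:+.3f}")

run()
```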

I wrote about this with Jan-Ake Larsson some years ago:

arXiv:quant-ph/0312035

Bell's inequality and the coincidence-time loophole
Jan-Ake Larsson, Richard Gill

This paper analyzes effects of time-dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence [and hence whether or not a pair contributes to the actual data] is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole."

Europhysics Letters, vol 67, pp. 707-713 (2004)
 
  • #160
gill1109 said:
Usually we discuss hypothetical experiments where timing is fixed. Like: every second we send off two photons. They may or may not get measured at the measurement stations. Detection efficiency is then usually defined in terms of the proportion of photons lost in either wing of the experiment.

In real experiments, the times of departure of the photons and the times at which they are measured are not fixed externally. Photons leave spontaneously and get measured at times which are not controlled by us. Rather, the measurement process itself generates the times of events in both wings of the experiment. We use a "coincidence window" to decide which events are to be thought of as belonging together.

This opens a new loophole, a bit different from, and potentially more harmful than, the detector efficiency loophole. If a "photon" arrives at a detector with a plan in its mind of what setting it wants to see and what outcome it will generate, cleverly correlated with the plan of its partner in the other wing of the experiment, then this photon can arrange to arrive a bit earlier (i.e. the measurement process is faster) if it doesn't like the setting it sees. At the same time, its partner in the other wing of the experiment arranges to arrive a bit later (i.e. its measurement process is slower) if it doesn't like the setting it sees. If they both see "wrong" settings, the time interval between their arrivals is extended so much that they no longer count as a pair in the statistics.

All the photons get measured, detector efficiency is 100%, but many events are unpaired.

I wrote about this with Jan-Ake Larsson some years ago:

arXiv:quant-ph/0312035

Bell's inequality and the coincidence-time loophole
Jan-Ake Larsson, Richard Gill

This paper analyzes effects of time-dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence [and hence whether or not a pair contributes to the actual data] is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole."

Europhysics Letters, vol 67, pp. 707-713 (2004)


gill1109, thanks for this. However, I see nothing here that relates to anything that I've said or implied. Recall that we are discussing idealised experiments, like Bell (1964). So questions of detector efficiencies, unpaired events, loss of pairs, coincidence timing, coincidence counting, etc., do not arise: for there is neither wish nor need here to exploit any loophole.

GW
 
  • #161
Thanks GW

If indeed the experiment is a perfect idealized experiment, as in Bell's "Bertlmann's socks" paper, then there is no way to beat CHSH in a local realistic way. Bell's 1964 paper is not about experiments, whether idealized and/or perfect or not. There are very good reasons why Bell moved from his initial inequality to CHSH, and why he rather carefully spelt out the details of an idealized CHSH-type experiment in his later work.
 
  • #162
gill1109 said:
Thanks GW

If indeed the experiment is a perfect idealized experiment, as in Bell's "Bertlmann's socks" paper, then there is no way to beat CHSH in a local realistic way. Bell's 1964 paper is not about experiments, whether idealized and/or perfect or not. There are very good reasons why Bell moved from his initial inequality to CHSH, and why he rather carefully spelt out the details of an idealized CHSH-type experiment in his later work.

I took Bell (1964) to be about (idealised) EPR-Bohm (Bohm 1951), as cited in Bohm-Aharonov (1957). The result that Bell aims for [his (3)] is the EPR-Bohm result E(A, B) = -a.b (Bell's P(a, b)).

As suggested above, discussion of CHSH warrants another thread, imho.
 
  • #163
GW: the point of CHSH is that it gives us an easy way to see why local realist models can't generate E(A,B)=-a.b without recourse to trickery.
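For instance, at the standard settings the singlet correlation -cos 2(a-b) reaches |S| = 2*sqrt(2), while the classic straight-line local-realist correlation tops out at the bound of 2 (a minimal sketch of the arithmetic):

```python
import math

def S(E):
    # CHSH at the standard settings a=0, a'=45, b=22.5, b'=67.5 degrees.
    a, a2, b, b2 = (math.radians(x) for x in (0, 45, 22.5, 67.5))
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

quantum = lambda x, y: -math.cos(2 * (x - y))          # singlet prediction
straight = lambda x, y: -1 + 4 * abs(x - y) / math.pi  # textbook LHV correlation
print(abs(S(quantum)))   # 2*sqrt(2), about 2.828
print(abs(S(straight)))  # 2.0, the local-realist bound
```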
 
  • #164
gill1109 said:
GW: the point of CHSH is that it gives us an easy way to see why local realist models can't generate E(A,B)=-a.b without recourse to trickery.

But if E(A,B) is calculated in a local realistic manner and gives -a.b, the way Gordon has done, and Joy Christian has done, and De Raedt has done, and Kracklauer etc., then there has to be something wrong with your claim that it can't. It is up to you, then, to point out the trickery. CHSH is therefore a red herring for this particular discussion.
 
  • #165
Gordon has not supplied anything yet.
 
  • #166
DrChinese said:
Gordon has not supplied anything yet.

See post #102.
 
  • #167
billschnieder said:
See post #102.

Anyone can write a result. It is meaningless. His model does not produce this result. I thought we settled that.
 
  • #168
billschnieder said:
But if E(A,B) is calculated in a local realistic manner and gives -a.b, the way Gordon has done, and Joy Christian has done, and De Raedt has done, and Kracklauer etc., then there has to be something wrong with your claim that it can't. It is up to you, then, to point out the trickery. CHSH is therefore a red herring for this particular discussion.

Christian claims to have done this in the reference provided below, but at this point I cannot confirm his claim. (I am discussing the matter with him.) De Raedt et al created a computer simulation which violates a Bell inequality (winning the DrC challenge in the process) but still fails to violate Bell's Theorem (since it no longer matches the predictions of QM).

http://arxiv.org/abs/0806.3078
 
  • #169
DrChinese said:
Gordon has not supplied anything yet.
To add to that: also Joy Christian has not really done so. It's now concluded by almost everyone that he simply messed up and tried in vain to undo the mess. As for the solutions of the remaining ones, those are not of the kind that Gordon is after (post #122).
 
  • #170
DrChinese said:
Christian claims to have done this in the reference provided below, but at this point I cannot confirm his claim. (I am discussing the matter with him.)
And your inability to confirm his claim is relevant in what way?
De Raedt et al created a computer simulation which violates a Bell Inequality (winning the DrC challenge in the process)
Puhleese :smile:! De Raedt et al will laugh at your so-called "DrC Challenge".

but still fails to violate Bell's Theorem (since it no longer matches the predictions of QM).

Huh? It matched QM before but no longer does so? What has changed since December 2011?
http://arxiv.org/pdf/1112.2629v1
Einstein-Podolsky-Rosen-Bohm laboratory experiments: Data analysis and simulation
H. De Raedt, K. Michielsen, F. Jin
(Submitted on 12 Dec 2011)

Data produced by laboratory Einstein-Podolsky-Rosen-Bohm (EPRB) experiments is tested against the hypothesis that the statistics of this data is given by quantum theory of this thought experiment. Statistical evidence is presented that the experimental data, while violating Bell inequalities, does not support this hypothesis. It is shown that an event-based simulation model, providing a cause-and-effect description of real EPRB experiments at a level of detail which is not covered by quantum theory, reproduces the results of quantum theory of this thought experiment, indicating that there is no fundamental obstacle for a real EPRB experiment to produce data that can be described by quantum theory.

http://arxiv.org/pdf/0712.3781v2
Event-by-event simulation of quantum phenomena: Application to Einstein-Podolsky-Rosen-Bohm experiments
H. De Raedt, K. De Raedt, K. Michielsen, K. Keimpema, S. Miyashita
(Submitted on 21 Dec 2007 (v1), last revised 25 Dec 2007 (this version, v2))

We review the data gathering and analysis procedure used in real Einstein-Podolsky-Rosen-Bohm experiments with photons and we illustrate the procedure by analyzing experimental data. Based on this analysis, we construct event-based computer simulation models in which every essential element in the experiment has a counterpart. The data is analyzed by counting single-particle events and two-particle coincidences, using the same procedure as in experiments. The simulation models strictly satisfy Einstein's criteria of local causality, do not rely on any concept of quantum theory or probability theory, and reproduce all results of quantum theory for a quantum system of two S=1/2 particles. We present a rigorous analytical treatment of these models and show that they may yield results that are in exact agreement with quantum theory. The apparent conflict with the folklore on Bell's theorem, stating that such models are not supposed to exist, is resolved. Finally, starting from the principles of probable inference, we derive the probability distributions of quantum theory of the Einstein-Podolsky-Rosen-Bohm experiment without invoking concepts of quantum theory.
 
  • #171
harrylin said:
To add to that: also Joy Christian has not really done so.
You do not know that, so why do you state it as though you do?

It's now concluded by almost everyone that he simply messed up and tried in vain to undo the mess. As for the solutions of the remaining ones, those are not of the kind that Gordon is after (post #122).
It is true that many people do not believe Joy Christian, but that is not a reason to state their opinion as fact, nor does it mean he is wrong. I recommend you follow the discussion on FQXi, where he explains his program in more detail, and his article in which he responds to gill1109's criticisms.

There is also a FAQ at FQXi (http://fqxi.org/data/forum-attachments/JoyChristian_FAQ.pdf).
 
  • #172
harrylin said:
To add to that: also Joy Christian has not really done so. It's now concluded by almost everyone that he simply messed up and tried in vain to undo the mess.

I am trying to sort through Joy's thinking at this point. His above-referenced paper asserts that CHSH is flat wrong and proposes a macroscopic (classical) test to prove it. I really don't get where he is headed with it (to be honest), but I will keep at it until I resolve it one way or another in my own mind.
 
  • #173
billschnieder said:
De Raedt et al will laugh at your so-called "DrC Challenge".

That's an interesting speculation on your part*.

However, in fact I worked closely with Kristel (and Hans) on theirs for about a month, and they were kind enough to devote substantial time and effort to the process. In the end we did not disagree on the operation of their simulation. It is fully local and realistic. And if you look at the spreadsheet, you will see for yourself what happens in their model. And it does not match QM for the full universe, thereby respecting Bell.

As to Joy Christian: I am trying to put together a similar challenge with him; not sure if it will be possible or not, because he does not seem open to a computer simulation. But I am hopeful I can either change his mind on that point or alternatively conclude exactly why his model is not realistic.

*And typically wrong-headed. :smile:
 
  • #174
DrChinese said:
And it does not match QM for the full universe, thereby respecting Bell.
But this is a fundamental misunderstanding on your part. There is no such thing as a full universe in QM. QM gives you correlations for the experimental outcome, the same thing they calculated. If you want to claim that the outcome is the full universe, then you cannot use a different standard for their simulation; you must also look only at the outcome.

However, in fact I worked closely with Kristel (and Hans) on theirs for about a month, and they were kind enough to devote substantial time and effort to the process. In the end we did not disagree on the operation of their simulation. It is fully local and realistic.
BTW, I do not doubt that Kristel and Hans might have spent a lot of their valuable time with you. Although I do doubt that that time was spent on, let alone winning, the "DrC challenge."
 
  • #175
billschnieder said:
But this is a fundamental misunderstanding on your part. There is no such thing as a full universe in QM. QM gives you correlations for the experimental outcome, the same thing they calculated. If you want to claim that the outcome is the full universe, then you cannot use a different standard for their simulation; you must also look only at the outcome.

If you look at the simulation, you can vary the size of the window. These are only the outcomes that are "visible". Since the model is realistic, we can also display the full universe (which of course never matches the QM expectation, respecting Bell).

For the visible outcomes: As you increase window size, the result clearly deviates from the QM predictions. So it is up to you to decide where to peg it. If you take a small window where pairs are clearly acting entangled*, then you see results that (more or less) match the QM expectation. But if you widen the window so there is more ambiguity in what should be called entangled*, you clearly approach the straight line boundary. And the model no longer matches the QM expectation or experiment. So your conclusion is somewhat dependent on your choice of cutoff.

I will try to take a couple of screenshots in a few days so you can see the effect. That might help everyone see what happens as k (window size) is varied. Clearly, there is nothing stopping you from looking at 100% of the pairs (the full universe), and that definitely does not match QM or experiment. So it looks fairly good as long as you pick settings that are favorable. But as you vary those settings, it does not seem to reproduce the dynamics of an actual experiment.
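As an aside, the window-size dependence described above is easy to reproduce in the toy sketch from post #159 (again purely illustrative, not the de Raedt simulation itself): widening the window pairs up more events but pulls the kept-pair correlation back toward the full-universe value.

```python
# Sweep the coincidence window in the run() sketch from post #159:
for w in (0.02, 0.1, 0.5, 2.0):
    run(window=w)   # w = 2.0 pairs everything, i.e. reproduces the full universe
```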

Again, some of this is in the eye of the beholder.

*This being a function of the percentage of perfect correlations. Anything which does not perfectly correlate when expected should be ignored as not qualifying. I did not eliminate those in my model, nor did the De Raedt team. Again, there is no exact point of acceptance or rejection.
 