Why the De Raedt Local Realistic Computer Simulations are wrong

In summary, the De Raedt model produces QM-like data, but it does not prove that a local realistic theory is possible.
  • #36
ajw1 said:
Have you noticed they probably use a different model for their combined type I and type II experiments? They use something called a DLM (deterministic learning machine). This is not present in the model you base your conclusion on.

See"[URL Event-based computer simulation model of Aspect-type experiments
strictly satisfying Einstein’s locality conditions[/URL]

That is the reference I am working with. And yes, they refer to the DLM but it is not a part of their model as far as I can see except as general justification for their program. Their formula is arrived at as follows, quoting:

"... Therefore we use simplicity as a criterion to select a specific form. By trial and error, we found that T(n − 1) = T0F(| sin 2(n − 1)|) = T0| sin 2(n − 1)|d yields useful results. Here, T0 = max T() is the maximum time delay and defines the unit of time, used in the simulation and d is a free parameter of the model. In our numerical work, we set T0 = 1. As we demonstrate later, our model reproduces the quantum results of Table I under the hypothesis that the time tags tn,1 are distributed uniformly over the interval [0, | sin 2(n − 1)|d] with d = 2. Needless to say, we do not claim that our choice is the only one that reproduces the results of quantum theory for the EPRB experiments."


And again, since I am following their model, which uses Type II PDC, I am sticking with that alone for my example, and I use it for the spreadsheet. However, the same problem exists with Type I PDC, so I don't think it is necessary to document that further.
 
  • #37
DrChinese said:
That is the reference I am working with. And yes, they refer to the DLM but it is not a part of their model as far as I can see except as general justification for their program. Their formula is arrived at as follows, quoting:

"... Therefore we use simplicity as a criterion to select a specific form. By trial and error, we found that T(n − 1) = T0F(| sin 2(n − 1)|) = T0| sin 2(n − 1)|d yields useful results. Here, T0 = max T() is the maximum time delay and defines the unit of time, used in the simulation and d is a free parameter of the model. In our numerical work, we set T0 = 1. As we demonstrate later, our model reproduces the quantum results of Table I under the hypothesis that the time tags tn,1 are distributed uniformly over the interval [0, | sin 2(n − 1)|d] with d = 2. Needless to say, we do not claim that our choice is the only one that reproduces the results of quantum theory for the EPRB experiments."

I think they do use different logic for the model in the reference cited:
We now describe the DLM that simulates the operation of a polarizer
with the logic of the DLM explained on page 12. Your quote must be about the implementation of the logic for the time window.
 
  • #38
ajw1 said:
I think they do use different logic for the model in the reference cited:
with the logic of the DLM explained on page 12. Your quote must be about the implementation of the logic for the time window.

I think you are essentially correct. I do not know (nor do I need to know) what their DLM logic consists of. That is not a part of their formula for Experiment I, which is where their FORTRAN simulation comes into play. I have not seen any published code for their Experiment II. I only know that when you model their Experiment II WITHOUT their "DLM" polarizer - which is my "Experiment IV" - you get results that are materially different from what they predict and from what experiment and QM predict.

In other words, they used a "trick" to get the results to look right for Experiment II. That "trick" (it is not an inappropriate trick, which is why I put it in quotes) is that they ascribe new properties to the Polarizer... and they say that somehow relates to the DLM. OK, fine. So my "trick" (also not inappropriate) is to remove the Polarizer. And guess what? Without the Polarizer (and the DLM), the results do NOT come out right. So their model CANNOT be correct, even in principle.

We all know that De Raedt et al are not actually asserting their model is accurate. They are simply claiming that SOME model could hypothetically work. And I am saying: NO, it doesn't, because you must FIRST provide at least one consistent example before the point can be yielded. And this is not that example. It must work for the same scope as what they claim it works for, and I have a counterexample that shows it does not.

The fact is: No "local realistic" simulation of the quantum mechanical predictions has been provided that even in principle meets the criteria of Bell by exploiting the fair sampling loophole.
 
  • #39
As I understand it, by "the criteria of Bell" in this context you mean some relation like this:
[tex]P_{VV}(\alpha,\beta) = \sin^{2}\alpha\, \sin^{2}\beta\, \cos^{2}\theta_{l} + \cos^{2}\alpha\, \cos^{2}\beta\, \sin^{2}\theta_{l} + \frac{1}{4}\sin 2\alpha\, \sin 2\beta\, \sin 2\theta_{l}\, \cos \phi[/tex]
This is equation (9) from the paper - http://arxiv.org/abs/quant-ph/0205171

This relation produces the [tex]\cos^{2}(\alpha-\beta)[/tex] law when [tex]\theta_{l}[/tex] is [tex]\pi/4[/tex] and [tex]\phi[/tex] is 0. But when, for example, [tex]\theta_{l}[/tex] is 0, it produces [tex]\sin^{2}\alpha\, \sin^{2}\beta[/tex], which is simply the product of two probabilities from Malus' law.
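To spell out the first reduction: with [tex]\theta_{l} = \pi/4[/tex] we have [tex]\cos^{2}\theta_{l} = \sin^{2}\theta_{l} = \tfrac{1}{2}[/tex] and [tex]\sin 2\theta_{l} = 1[/tex], so setting [tex]\phi = 0[/tex] gives
[tex]P_{VV} = \tfrac{1}{2}\sin^{2}\alpha\,\sin^{2}\beta + \tfrac{1}{2}\cos^{2}\alpha\,\cos^{2}\beta + \tfrac{1}{4}\sin 2\alpha\,\sin 2\beta = \tfrac{1}{2}\left(\cos\alpha\,\cos\beta + \sin\alpha\,\sin\beta\right)^{2} = \tfrac{1}{2}\cos^{2}(\alpha-\beta)[/tex]
that is, the [tex]\cos^{2}(\alpha-\beta)[/tex] law up to the overall factor [tex]\tfrac{1}{2}[/tex] (VV being one of the four coincidence channels).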

But I am curious how you view this Type I PDC from the QM perspective.
As I see it, if we have one crystal, an incident photon is (sometimes) converted into two photons that leave the crystal diverted in opposite directions from the original direction. They are not polarization entangled.
However, if these two photons encounter a second crystal right after the first one, then when certain conditions are met the two photons become polarization entangled.
It seems to me that the pilot wave interpretation provides a nice way to resolve this. The empty pilot wave of the incident photon continues on its way into the second crystal, gets partly downconverted there, and then overlaps with the downconverted photons (somehow), creating the entangled state.
How would you explain the creation of the polarization entangled state in Type I PDC?
 
  • #40
zonde said:
As I understand it, by "the criteria of Bell" in this context you mean some relation like this:
[tex]P_{VV}(\alpha,\beta) = \sin^{2}\alpha\, \sin^{2}\beta\, \cos^{2}\theta_{l} + \cos^{2}\alpha\, \cos^{2}\beta\, \sin^{2}\theta_{l} + \frac{1}{4}\sin 2\alpha\, \sin 2\beta\, \sin 2\theta_{l}\, \cos \phi[/tex]
This is equation (9) from the paper - http://arxiv.org/abs/quant-ph/0205171

This relation produces the [tex]\cos^{2}(\alpha-\beta)[/tex] law when [tex]\theta_{l}[/tex] is [tex]\pi/4[/tex] and [tex]\phi[/tex] is 0. But when, for example, [tex]\theta_{l}[/tex] is 0, it produces [tex]\sin^{2}\alpha\, \sin^{2}\beta[/tex], which is simply the product of two probabilities from Malus' law.

But I am curious how you view this Type I PDC from the QM perspective.
As I see it, if we have one crystal, an incident photon is (sometimes) converted into two photons that leave the crystal diverted in opposite directions from the original direction. They are not polarization entangled.
However, if these two photons encounter a second crystal right after the first one, then when certain conditions are met the two photons become polarization entangled.
It seems to me that the pilot wave interpretation provides a nice way to resolve this. The empty pilot wave of the incident photon continues on its way into the second crystal, gets partly downconverted there, and then overlaps with the downconverted photons (somehow), creating the entangled state.
How would you explain the creation of the polarization entangled state in Type I PDC?

Yes, the Type I PDC is especially interesting to consider. If you think in classical terms, it is difficult to explain. For those who are unfamiliar with the PDC Types:

Both Type I and Type II use specially cut, thin (typically 1 mm) non-linear crystals (often made of beta barium borate, or BBO) which produce pairs of correlated photons from an input laser source. The outputs are actually conic regions that vary in wavelength and intensity by angle, and the experimenter usually locates the region of the cone with the best output according to the desired characteristics. Both Type I and Type II can produce known outputs from a known input, and such outputs are NOT polarization entangled.

Type I: H> input produces VV> output when the crystal is oriented correctly for H>, and produces nothing for V> input. If the crystal is turned 90 degrees and the source is also turned 90 degrees, you have V> input producing HH> output (and nothing for H> input).

Type II: H> input produces HV> output, V> input produces VH> output.

To get polarization entanglement, you do the following:

Type I: Use 2 crystals oriented at 90 degrees apart (0 and 90), with the source input at a 45 degree angle.

Type II: Use 1 crystal oriented at 0 degrees, with the source input at a 45 degree angle.


The question zonde is asking is: how do we account for the Type I case, which requires 2 crystals to achieve entanglement? After all, doesn't the light convert at either one crystal or the other? The QM explanation is as follows:

There are 2 paths the photon pair can take from the source to the target. Path 1 could have gone through the first crystal IF the input photon is H, and that yields a VV> output. Path 2 could have gone through the second crystal IF the input photon is V, and that yields an HH> output. Since we cannot know, in principle, if the input photon resolved to H or to V, then we have a superposition in the output: VV> + HH>. This superposition is polarization entangled and is rotationally invariant (i.e. Alice and Bob can rotate around 360 degrees and will still see full correlations).
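To illustrate that rotational invariance numerically, here is a minimal sketch (my own code, just applying the standard projection rule to the VV> + HH> superposition):

Code:
using System;

class RotationalInvariance
{
    // For the normalized state (|HH> + |VV>)/sqrt(2), the amplitude for both photons
    // to pass polarizers at angles a and b is (cos a cos b + sin a sin b)/sqrt(2),
    // which equals cos(a - b)/sqrt(2). So P depends on (a - b) only.
    static double PCoincidence(double a, double b)
    {
        double amp = (Math.Cos(a) * Math.Cos(b) + Math.Sin(a) * Math.Sin(b)) / Math.Sqrt(2.0);
        return amp * amp;
    }

    static void Main()
    {
        // Rotate both analyzers together by any common offset: P never changes.
        double diff = Math.PI / 8.0;
        for (double offset = 0.0; offset < 2.0 * Math.PI; offset += Math.PI / 4.0)
            Console.WriteLine(PCoincidence(offset, offset + diff)); // same value every time
    }
}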

The above explanation is not "realistic" because the light does not go through one crystal OR the other. Like the double slit, it somehow converts within both and collapses to one or the other (whatever that means) upon observation. If one of the crystals is removed, then the output stream is NOT polarization entangled (though it is correlated). On the other hand, there is nothing WHATSOEVER to indicate that the down conversion is due to some physical interaction between the 2 crystals. After all, Type II does NOT require 2 crystals to obtain entanglement!

So the QM explanation works for both Type I and Type II because you follow the usual QM rules for superposed states. The Bohmian explanation works because it is designed to yield results equivalent to QM. But I personally don't see how the BM interpretation allows a "better" visual by saying the particle goes one way and the pilot wave goes the other (whatever that means), and they interfere to create entanglement.

You can clearly see that both visuals (QM, BM) have fuzzy components. I like the QM explanation simply because it follows naturally from the superposition rules, while it seems to me that the BM explanation does not. In other words, the QM explanation is based on a superposition of probability amplitudes and treats those as "relatively real". The BM explanation must then treat the pilot wave as "relatively real" too. To me, "relatively real" is not realistic. So I don't see the BM interpretation as being "more" realistic than QM. They are equal. But hey, that is just my opinion and I am perfectly comfortable with others coming to a different conclusion.
 
  • #41
DrChinese said:
There are 2 paths the photon pair can take from the source to the target. Path 1 could have gone through the first crystal IF the input photon is H, and that yields a VV> output. Path 2 could have gone through the second crystal IF the input photon is V, and that yields an HH> output. Since we cannot know, in principle, if the input photon resolved to H or to V, then we have a superposition in the output: VV> + HH>. This superposition is polarization entangled and is rotationally invariant (i.e. Alice and Bob can rotate around 360 degrees and will still see full correlations).
As I see it, from the QM perspective it is more correct to talk about ensembles rather than individual photons, and from that viewpoint the ensemble really is taking both paths. The question is how to treat a certain moment in time where we assume a quantized photon of one ensemble is there while there is no quantized photon from the other ensemble.
And I prefer to think that quantization conserves energy on average across the whole ensemble, but that when we talk about an individual photon we cannot talk about strict conservation of energy without considering the environment. So individual photons can interact indirectly through the environment.
 
  • #42
Attached is an updated version of the experimental setup. It shows the 4 setups side by side and explains how either Entangled State or Product State statistics are obtained. Note that in Figure D, the De Raedt statistics are Entangled State but the actual observation is Product State. This also occurs when you do a similar analysis on Type I PDC.

In other words: the De Raedt simulation model works for some PDC cases, but is inconsistent (and wrong) in others. See the second attachment for the graphed simulation results that demonstrate this. I use parameters k=30 and i=50000, but the results do not change much regardless of parameter selection; and they never look like the observed/theoretical Product State statistics.
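For reference, here is a sketch of the two ideal curves being compared (my own code; the entangled form is the VV> + HH> law from post #40, and the product form assumes an unentangled H>V> pair, which is my reading of the non-entangled Type II case):

Code:
using System;

class EntangledVsProduct
{
    // Entangled State statistics: coincidence rate depends only on (a - b).
    // (This is the |VV> + |HH> form; the Type II singlet gives a sin^2 form instead.)
    static double Entangled(double a, double b) => 0.5 * Math.Pow(Math.Cos(a - b), 2);

    // Product State statistics for an unentangled |H>|V> pair: two independent
    // Malus-law factors, one per side.
    static double Product(double a, double b) =>
        Math.Pow(Math.Cos(a), 2) * Math.Pow(Math.Sin(b), 2);

    static void Main()
    {
        for (int deg = 0; deg <= 90; deg += 15)
        {
            double b = deg * Math.PI / 180.0;
            Console.WriteLine($"{deg}: entangled={Entangled(0.0, b):F3} product={Product(0.0, b):F3}");
        }
    }
}

The point of the comparison is that a correct model must produce the first curve for entangled pairs and the second for unentangled pairs from the same machinery.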

----------------------------

You may also find it helpful to look at a good exposition on the Type II PDC setup: http://www.ino.it/~azavatta/References/JMO48p1997.pdf

Generation of correlated photon pairs in type-II parametric down conversion revisited
(2001) by CHRISTIAN KURTSIEFER, MARKUS OBERPARLEITER and HARALD WEINFURTER

See especially Figures 1 and 5.
 

Attachments

  • TypeIIPDC.PolarizedUnentangledPairs1.jpg (64.2 KB)
  • ExperimentIV.30x50000.jpg (30.3 KB)
  • #43
OK, I have sent a letter to Hans De Raedt. I am curious as to the response!
 
  • #44
DrChinese said:
OK, I have sent a letter to Hans De Raedt. I am curious as to the response!
And, did you receive any response yet?
 
  • #45
ajw1 said:
And, did you receive any response yet?

Yes, I received a very kind (and prompt!) response and am in the process of sending something back. I want them to have the Excel spreadsheet simulation I posted here, as they may find it useful (since more people have Excel than Fortran).

I prefer not to disclose his comments without prior OK from him, but I would characterize the response as follows (without, I believe, saying anything that he hasn't said before):

1. The De Raedt simulation does not handle the case I describe.
2. It also does not match Malus (a separate issue that I did not raise as this is a consequence of any algorithm respecting Bell).

Dr. De Raedt also provided me with additional materials which I am reviewing, and I believe they are already present in the archives. I have been out of it the past few days due to a surgery in the family.
 
  • #46
I am reviving this thread in hopes that this will assist some readers in following some arguments about Bell's Theorem and Bell tests.

It has been argued that perhaps there is SOME set of hidden variables in which there may be a) a double dependency on theta (inflector); b) a common cause related to some global variable (ThomasT); c) cyclic hidden variables (billschnieder); or similar. See some of the active threads and you will see these themes.

I have repeatedly indicated that Bell is a roadmap to understanding that local realistic theories must be abandoned. This is generally accepted science.

In trying to show that there "could" be an exception to Bell, please consider the following additions to your list of tests for your candidate LHV theory:

a) You will be providing a formula which leads to a realistic dataset, say at angle settings 0/120/240 degrees (or some standard combination, such as for the CHSH inequality). This should be generated for the full universe, not just the observed cases.
b) The formula for the underlying relationship will be different from the QM predictions, and must respect the Bell Inequality curve. I.e., usually that means the boundary condition, which is a straight line, although there are solutions which yield more radical results.
c) The relevant hidden variables/formulae must be determined in advance, such that Alice's setting does not itself influence Bob's result - and vice versa.
d) There is a formula or set of hidden variables - they can be random - which leads to the detection or non-detection, so that the Fair Sampling Assumption is shown to be violated (thus explaining how an LHV theory can reproduce the Entangled State statistics in a sampled environment).

And here is the little trick that should doom anything left standing:

e) You must be able to use the same assumptions and setup to yield Product State statistics when the photon pairs coming from the PDC crystal are NOT entangled.

See, that last one is a real trick: the only apparent difference between PDC photons that are polarization entangled and those that are not is that the H/V orientation is known for one but not for the other. And yet, that flies completely in the face of the thinking of the LHV candidate theory. There should be NO DIFFERENCE! And yet, experimentally there is!

To recap: LHV candidate theories argue that the hidden variables are unknown but pre-existing with definite values. These should lead to determinate outcomes (for polarization entangled pairs) that yield Entangled State stats when a subset is sampled. Yet when the same assumptions are made for non-polarization entangled pairs, the prediction would be for the same Entangled State stats - while experiments yield Product State stats for these! How can that be?
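To see why requirement a) already has teeth, here is a sketch of the standard counting argument at 0/120/240 (my own code): enumerate all eight predetermined outcome triples and check the match rate any full universe of them must produce.

Code:
using System;

class BellCounting
{
    static void Main()
    {
        // Each photon pair carries predetermined answers (+/-) at 0, 120 and 240 degrees.
        // Average the "same answer" rate over the 3 distinct setting pairs for each triple.
        double worst = 1.0;
        for (int bits = 0; bits < 8; bits++)
        {
            int a = bits & 1, b = (bits >> 1) & 1, c = (bits >> 2) & 1;
            double match = ((a == b ? 1 : 0) + (b == c ? 1 : 0) + (a == c ? 1 : 0)) / 3.0;
            worst = Math.Min(worst, match);
            Console.WriteLine($"{a}{b}{c}: match rate {match:F3}");
        }
        // No mixture of triples can get below 1/3, while QM predicts
        // cos^2(120 deg) = 1/4 for the matched-results rate at these settings.
        Console.WriteLine($"minimum over all triples: {worst:F3} (QM: 0.250)");
    }
}

So the full-universe dataset in a) is pinned at 1/3 or above, and it is the sampling story in d) that has to carry the load of getting the observed subset down to 1/4.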

Good luck! :biggrin:
 
  • #47
I have been working with the De Raedt team for several months to address the issue identified in this thread. Thanks especially to Dr. Kristel Michielsen for substantial time and effort to work with me on this.

The issue I identified was rectified very quickly using what they call their "Model 2" algorithm. My earlier analysis used their older "Model 1" algorithm. After getting to the point where we were able to compare statistics for a jointly agreed upon group of settings, I am satisfied that they have a simulation which accomplishes - in essence - what they claim.

I am still analyzing the event by event results. I do expect to have some follow up issues. As I get some more information, I will share this too. For those of you with a computer background, I thought you might be interested in the solution:

Replace:

If c1 > 0 Then

k1 = ((1 - (c1 * c1)) * r0 / tau) + 1 ' delay time...

by:

if(c1>2*RND()-1) then

k1 = ( (1 - (c1*c1))**2 * r0 / tau) + 1 ' delay time...

This subtle change made a huge difference! I will be updating my Excel spreadsheet model and posting a link when it is ready.

I wanted to provide this update for those who follow this subject. Please keep in mind that the De Raedt model is a computer simulation which exploits the coincidence time window as a means to achieve a very interesting result: it is local realistic. Therefore, it is able to provide event by event detail for 3 (or more) simultaneous settings (i.e. it is realistic). It does this with an algorithm which is fully independent (i.e. local/separable). It does not violate a Bell Inequality for the full universe, but does (somewhat) for the sample. Its physical interpretation is something else entirely, and not something I was intending to address in this thread - although I would be happy to discuss that too. :smile:
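And for those wanting the mechanics of the time window itself, here is a minimal sketch of the coincidence selection step as I understand it (my own code; the simulation emits the nth event on both sides in lockstep, and a pair only enters the statistics if the two time tags differ by less than the window W):

Code:
using System;

class CoincidenceWindow
{
    // Keep only pairs whose time tags fall within the window W of each other.
    // This selection is where the sample can become biased: which pairs survive
    // depends on the hidden-variable-dependent delays on each side.
    static int CountCoincidences(double[] tAlice, double[] tBob, double W)
    {
        int count = 0;
        for (int n = 0; n < Math.Min(tAlice.Length, tBob.Length); n++)
            if (Math.Abs(tAlice[n] - tBob[n]) < W)
                count++; // pair n enters the correlation statistics
        return count;
    }

    static void Main()
    {
        double[] a = { 0.10, 0.50, 0.90 };
        double[] b = { 0.20, 0.80, 0.95 };
        Console.WriteLine(CountCoincidences(a, b, 0.15)); // prints 2
    }
}

Shrink W and fewer pairs survive; it is exactly this surviving subset that shows the Entangled State statistics, while the full universe of pairs does not.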
 
  • #48
DrChinese said:
I have been working with the De Raedt team for several months to address the issue identified in this thread. Thanks especially to Dr. Kristel Michielsen for substantial time and effort to work with me on this.

The issue I identified was rectified very quickly using what they call their "Model 2" algorithm. My earlier analysis used their older "Model 1" algorithm. After getting to the point where we were able to compare statistics for a jointly agreed upon group of settings, I am satisfied that they have a simulation which accomplishes - in essence - what they claim.

I am still analyzing the event by event results. I do expect to have some follow up issues. As I get some more information, I will share this too. For those of you with a computer background, I thought you might be interested in the solution:

Replace:

If c1 > 0 Then

k1 = ((1 - (c1 * c1)) * r0 / tau) + 1 ' delay time...

by:

if(c1>2*RND()-1) then

k1 = ( (1 - (c1*c1))**2 * r0 / tau) + 1 ' delay time...

This subtle change made a huge difference! I will be updating my Excel spreadsheet model and posting a link when it is ready.

I wanted to provide this update for those who follow this subject. Please keep in mind that the De Raedt model is a computer simulation which exploits the coincidence time window as a means to achieve a very interesting result: it is local realistic. Therefore, it is able to provide event by event detail for 3 (or more) simultaneous settings (i.e. it is realistic). It does this with an algorithm which is fully independent (i.e. local/separable). It does not violate a Bell Inequality for the full universe, but does (somewhat) for the sample. Its physical interpretation is something else entirely, and not something I was intending to address in this thread - although I would be happy to discuss that too. :smile:
The changed algorithm produces correct results for 'entangled' photons. But I don't see how you initialise non-entangled photons to obtain the classical relation between polarizer angles in the model. Or wasn't this the issue you were referring to?

Did de Raedt hint at any possible interpretation?
 
  • #49
ajw1 said:
The changed algorithm produces correct results for 'entangled' photons. But I don't see how you initialise non-entangled photons to obtain the classical relation between polarizer angles in the model. Or wasn't this the issue you were referring to?

Strangely, and despite the fact that it "shouldn't" work, the results magically appeared. Keep in mind that this is for the "Unfair Sample" case - i.e. where there is a subset of the full universe. I tried 100,000 iterations. With this coding, the full universe for both setups - entangled and unentangled - was Product State. That part almost makes sense; in fact, I think it is the most reasonable outcome for a full universe! What doesn't make sense is that you get Perfect Correlations when you have random unknown polarizations, but get Product State (less than perfect) when you have fixed polarization. That seems impossible.

However, by the rules of the simulation, it works.

Now, does this mean it is possible to violate Bell? Definitely not, and they don't claim to. What they claim is that a biased (what I call Unfair) sample can violate Bell even though the full universe does not. This particular point has not been in contention as far as I know, although I don't think anyone else has actually worked out such a model. So I think it is great work just for them to get to this point.

I am still studying the results, as there are a number of very critical issues involved in their results. For example, it is still not clear to me by how much a Bell Inequality is violated. Their Model 1 did a fine job of Entangled State, but Model 2 does not seem anywhere near as good at this. On the other hand, the Model 1 completely fails at Product State while the Model 2 does this very well. So there are trade-offs between the models. (You would expect that to a certain extent.)

(And while I haven't really looked into HOW the formula works its magic, it appears to be a function of the number of calls to a random number generator. I.e. it is almost as if 2 calls can offset each other. I guess in the right circumstances, that could happen. You need something like that to get Entangled State correlations from an otherwise Product State scenario, I think.)
 
  • #50
I get results that seem to be just as good as those from model 1:

[attached graph: DeaedtModel_II.png]

Maybe I have interpreted the formula differently:
Code:
            double malus = h.Malus(Particle.Polarization - this.Angle);
            if (malus > h.GetRandomPlusMin())  // sign(cos(...))
            {
                Particle.Absorbed = true; // <=> -1 event
            }
            Particle.DelayTime = Math.Ceiling(Math.Pow(Math.Pow((1 - malus * malus),2), (d / 2)) * h.GetRandom() / tau); // delay time
with h.Malus defined as:
Code:
        public static double Malus(double Angle)
        {
            return Cos(2 * Angle);
        }
 
  • #51
I just learned about De Raedt. My first inclination was to test his theory by examining his single photon double slit work, because of the beautiful simplicity of the experiment.

One thing I noticed while looking at his code was that every single photon is coherent with the photons released earlier. When I added rand()*2*pi to the initial phase the interference disappeared.

I didn't think coherence between the individual photons was required for the actual experiment. Therefore it seems that his simulation fails for the simplest of systems.

Is there something I'm missing here?
 
  • #52
Since this thread is about de Raedt... what do you guys think of his paper "Extended Boole-Bell inequalities applicable to quantum theory", in which he stated:

"Our proofs of the EBBI do not require metaphysical assumptions
but include the inequalities of Bell and apply to
quantum theory as well. Should the EBBI be violated, the
logical implication is that one or more of the necessary conditions
to prove these inequalities are not satisfied. As these
conditions do not refer to concepts such as locality or macroscopic
realism, no revision of these concepts is necessitated
by Bell’s work. Furthermore, it follows from our work that,
given Bell’s premises, the Bell inequalities cannot be violated,
not even by influences at a distance.".

Does this mean Bell's Theorem is wrong? I hope DrChinese, who is familiar with de Raedt's work, can comment, especially if he has read the paper at
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2546v2.pdf

This link has been posted here before, months ago, and it is a valid argument and not crackpottery, so I hope it won't be removed by the moderators. Many thanks.
 
  • #53
Joseph14 said:
I just learned about De Raedt. My first inclination was to test his theory by examining his single photon double slit work, because of the beautiful simplicity of the experiment.

One thing I noticed while looking at his code was that every single photon is coherent with the photons released earlier. When I added rand()*2*pi to the initial phase the interference disappeared.

I didn't think coherence between the individual photons was required for the actual experiment. Therefore it seems that his simulation fails for the simplest of systems.

Is there something I'm missing here?

De Raedt acknowledges that there are some unrealistic assumptions involved in their model which lead to inconsistencies with observation. What they are trying to say, though, is that there exists a model which overcomes the Bell constraints for entangled pairs. If there were even one such model, that would be a pretty good accomplishment in my book. But as I have said before, and as you point out, the complete set of constraints will be too much for any single model.
 
  • #54
daezy said:
Does this mean Bell's Theorem is wrong? I hope DrChinese, who is familiar with de Raedt's work, can comment, especially if he has read the paper at
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2546v2.pdf

I will take a closer look at this particular paper; it is a little different from some of the others.
 
  • #55
DrChinese, I'm also interested in the same paper (mentioned in the message just before this one) that alleged that Bell's Theorem is wrong and that local realism really holds. Were you able to find a flaw after 4 months of analyzing it? If you can't find a flaw, then is Bell's Theorem refuted and does local realism hold? This is important, as proof of the paper's claims could refute even the Aspect experiment, etc., entertain the possibility of local hidden variables, and let us return to the days of Einstein's EPR.
 
  • #56
Varon said:
If you can't find a flaw, then is Bell's Theorem refuted and does local realism hold? This is important, as proof of the paper's claims could refute even the Aspect experiment, etc., entertain the possibility of local hidden variables, and let us return to the days of Einstein's EPR.

I have not read it to the depth I want yet. It is not going to overturn Bell anyway. If you are imagining a return to the days of EPR (1935), I would recommend you buy some Louis Armstrong records.
 
  • #57
daezy said:
Since this thread is about de Raedt... what do you guys think of his paper "Extended Boole-Bell inequalities applicable to quantum theory", in which he stated:

"Our proofs of the EBBI do not require metaphysical assumptions
but include the inequalities of Bell and apply to
quantum theory as well. Should the EBBI be violated, the
logical implication is that one or more of the necessary conditions
to prove these inequalities are not satisfied. As these
conditions do not refer to concepts such as locality or macroscopic
realism, no revision of these concepts is necessitated
by Bell’s work. Furthermore, it follows from our work that,
given Bell’s premises, the Bell inequalities cannot be violated,
not even by influences at a distance.".

Does this mean Bell's Theorem is wrong? I hope DrChinese, who is familiar with de Raedt's work, can comment, especially if he has read the paper at
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2546v2.pdf

This link has been posted here before, months ago, and it is a valid argument and not crackpottery, so I hope it won't be removed by the moderators. Many thanks.

The topic of that paper is quite different from the topic of computer simulations; please don't mix different topics. As it has now officially been published, a thread discussing that paper's Boole-Bell inequalities was started here:
https://www.physicsforums.com/showthread.php?t=499002
 
  • #58
DrChinese said:
If you are imagining a return to the days of EPR (1935), I would recommend you buy some Louis Armstrong records.

Yup, that’s the way to do it.

May I just add that there's one variable missing to get this kind of realism working; Varon also needs to update his gear and get an authentic phonograph. I guess any local dealer could help him out; they often keep this stuff in the basement, hidden under 9″ of dust.

http://upload.wikimedia.org/wikipedia/en/thumb/6/65/Gramophone.jpg/300px-Gramophone.jpg


... or one could just make it easy and watch The Return of the Living Dead – it will have the same effect ...
http://upload.wikimedia.org/wikipedia/en/thumb/2/23/Return_of_the_living_deadposter.jpg/300px-Return_of_the_living_deadposter.jpg
 
  • #59
Love it!
 
  • #60
:wink:
 
  • #61
DrChinese said:
Thanks for the link! Again it is a completely artificial mechanism, so what you call it is completely irrelevant. When talking about a suppression mechanism, I might call it Detector Efficiency while they call it Coincidence Time Window.

If I hear you correctly, what you call "detector efficiency" (which refers to a physical characteristic of the detector) is in fact the data picking by means of the time window - that is, a human choice.

DrChinese said:
But nothing changes. There is no more one effect than the other. As you look at more of the universe, you get farther and farther away from the QM predictions, and that never really happens in actual experiments.

To the contrary, their simulation matches Weihs' experiment rather well on exactly that issue. That topic is discussed here: https://www.physicsforums.com/showthread.php?t=597171

DrChinese said:
So the Suppression Mechanism must grow if you DO want it to match experiment! And THAT is the Unfair Sampling Assumption.

Now THAT is a less well defined term. Perhaps most people mean by the Unfair Sampling Assumption a detector characteristic, but I agree with you that their model is based on unfair data picking. That could equally well be called an Unfair Sampling or, more precisely, a Sub-sampling Assumption.
 
  • #62
harrylin said:
If I hear you correctly, what you call "detector efficiency" (which refers to a physical characteristic of the detector) is in fact the data picking by means of the time window - that is, a human choice.

It was an incorrect use of language on my part. Mentally, I group all models in which there is some bias which causes the accepted sample to differ sufficiently from the universe as a whole. But there are definite legitimate differences between the models.

So my apologies. I will use the term "coincidence time window" instead of detector efficiency, with the understanding that in a computer simulation, some of this is arbitrary. If it were to be considered a candidate model, you would want to challenge whether such an effect really existed. Specifically, how does the photon get delayed without losing its entangled characteristic (i.e. perfect correlations)? Because if it lost that, it should NOT be considered at all.

If you vary the k= setting (in the spreadsheet, tab B. Entangled) from 1 to 30 to 100 you will see how things change in a very unphysical manner.
 
  • #63
DrChinese said:
[..] Mentally, I group all models in which there is some bias which causes the accepted sample to differ sufficiently from the universe as a whole. But there are definite legitimate differences between the models. [..]
Good to see that we now agree on this, and apology appreciated. :smile:
I will use the term "coincidence time window" instead of detector efficiency, with the understanding that in a computer simulation, some of this is arbitrary. If it were to be considered a candidate model, you would want to challenge whether such an effect really existed. Specifically, how does the photon get delayed without losing its entangled characteristic (i.e. perfect correlations)? Because if it lost that, it should NOT be considered at all.
That is exactly the kind of things that I try to discuss in the thread on "ad hoc" explanations. However, if I'm not mistaken it was you who pointed out that certain interactions do not or hardly affect entanglement.
If you vary the k= setting (in the spreadsheet, tab B. Entangled) from 1 to 30 to 100 you will see how things change in a very unphysical manner.
I'll try that. :smile:
 
  • #64
harrylin said:
That is exactly the kind of thing that I try to discuss in the thread on "ad hoc" explanations. However, if I'm not mistaken, it was you who pointed out that certain interactions do not or hardly affect entanglement.

That is true; generally I would not expect the transport mechanism to be much of a factor. However, I guess it is *possible* that one photon could have an interaction that would both reveal its spin (of course not to us) AND delay it. If that case occurred, for example, it correctly should not be counted, as the pair would no longer be entangled on the polarization basis.
 
  • #65
By the way, this thread has been dredged up from some time ago. I would like to say that the de Raedt team was kind enough to work with me to refine my spreadsheet model. After they supplied me with some modifications to their original Fortran code, my primary objection* to their model disappeared. I have not come to understand why it was able to accomplish that feat - simply because I have not devoted the time to the matter.

So while I disagree with Hans and Kristel on the conclusions that should be drawn from the model, I agree with its operation.

Here is the link to the Excel spreadsheet model:

http://drchinese.com/David/DeRaedtComputerSimulation.EPRBwithPhotons.C.xls

* Which had to do with a specific case of PDC simulation, not the general case.
 
  • #66
DrChinese said:
I have not come to understand why it was able to accomplish that feat - simply because I have not devoted the time to the matter.
...
So while I disagree with Hans and Kristel on the conclusions that should be drawn from the model, I agree with its operation.

So you agree with the way it works although you do not understand why it works, but you disagree with their conclusion nonetheless? :confused:

If you could be so kind as to explain why you disagree with their conclusion, despite agreeing that their model is local and realistic, then we can discuss that.
 
  • #67
billschnieder said:
If you could be so kind as to explain why you disagree with their conclusion, despite agreeing that their model is local and realistic, then we can discuss that.

There are a lot of problems with the model when you talk about it as more than a computer simulation - i.e., if you want to consider it as somehow corresponding to something physical. It is hard to know where to begin, really, so here are my opinions for what they are worth:

The good:
- It is 100% local and realistic, so no issue there.
- It does violate a Bell Inequality, so it succeeds there.
- It did model product state statistics correctly when it needed to, which was my original objection to the simulation itself.

The bad:
- It posits physical effects that are new, and subject to experimental rejection or confirmation (don't hold your breath on that one).
- It only matches experiment when the window size is very small, otherwise it deviates quite quickly towards the Bell boundary.
- It beats the Bell Inequality when the window size is made to be medium, but only barely.
- And most telling, it does not match QM for the full universe. Now, you don't seem to think this is a problem but it really is quite serious for a model of this type. Because there would be tests that could be constructed to exploit this difference. This is part of the reason that the team has attempted to construct further simulations to take things a few steps farther.
- It does not match the dynamics of actual data when the time window is varied. I.e. it is obviously ad hoc.

-DrC
 
  • #68
DrChinese said:
- It only matches experiment when the window size is very small, otherwise it deviates quite quickly towards the Bell boundary.

Just like the experiments it is modelling.

DrChinese said:
- It beats the Bell Inequality when the window size is made to be medium, but only barely.

Just like the experiments it is modelling.

DrChinese said:
- And most telling, it does not match QM for the full universe. Now, you don't seem to think this is a problem but it really is quite serious for a model of this type. Because there would be tests that could be constructed to exploit this difference.

There is no such thing as a QM prediction for the "full universe". QM only makes predictions about actual measurement outcomes.

DrChinese said:
- It does not match the dynamics of actual data when the time window is varied.

Just like the experiments it is modelling. QM doesn't match the experiments either when the time window is varied.
 
  • #69
billschnieder said:
1. There is no such thing as a QM prediction for the "full universe". QM only makes predictions about actual measurement outcomes.

2. Just like the experiments it is modelling. QM doesn't match the experiments either when the time window is varied.

1. Of course it does. The expectation is cos^2(theta) always. But that is not the case with the De Raedt et al model.

2. Not so! Otherwise it wouldn't be an issue.
 
  • #70
DrChinese said:
1. Of course it does. The expectation is cos^2(theta) always. But that is not the case with the De Raedt et al model.

2. Not so! Otherwise it wouldn't be an issue.

1) cos^2(theta) is the expectation value for OUTCOMES. QM does not predict anything other than what is observed! If you change the time window, you get a DIFFERENT observation! Looking at stuff that is not observed and calling it the "full universe" is simply wrong-headed.
2) This is false. Look at figure 2 in their article in which they analyze the actual experimental data, varying the window: http://arxiv.org/pdf/1112.2629v1.pdf. QM is violated by 5 standard deviations!
 
