Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #456
DrChinese said:
I cannot for the life of me understand how DATASET is not clear. A formula is the general case; a dataset is the specific one. The purpose of the dataset is to demonstrate your point, because merely stating the formula doesn't.
Property K is measured at 3 settings: A, B, and C.
Formula: If property K at A+B+C > 100%, then the properties associated with K must be measurable at more than one detector setting.
A=50%, B=50%, C=50% = 150%
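
For reference, the 50% per setting isn't an extra assumption; it's just the Malus-law average over a uniformly random polarization, independent of the setting a:

$$\frac{1}{\pi}\int_0^{\pi}\cos^2(\theta - a)\,d\theta = \frac{1}{2},$$

so any three settings necessarily sum to 150% of the beam.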
 
  • #457
Note: The proof does not involve any correlated particles, merely the randomized polarization of a local particle source.
 
  • #458
DrChinese said:
No, experiments use beam splitters with detectors for both the H and V cases (of course the designation H and V is more or less arbitrary).

So if you can label as H/T or +/- or 0/1, that would be great. Show me a dataset and repeat your point.
I am not using any beam splitters, correlations, etc, etc. I specified a randomly polarized light source only, with a single local polarizer/detector at 3 settings.
 
  • #459
my_wan said:
I am not using any beam splitters, correlations, etc, etc. I specified a randomly polarized light source only, with a single local polarizer/detector at 3 settings.

Are entangled pairs involved?

EDIT: I see now that we are not talking about entangled pairs. See my next post.
 
  • #460
my_wan said:
The empirical facts about how polarizers measure the polarization of randomly polarized light, irrespective of interpretation, are a game changer. Here's what they do:

It provides a mechanism by which the overcount of coincidences, over and above Bell's inequalities, can be fully defined by the local properties of the measuring instrument, so long as conservation laws perfectly specify the anti-correlations, because the same correlations can be counted at different detector settings.

1415926535 is a dataset of digits. What are the correlations, and what is the setup?

I would be glad to discuss polarized beams, unpolarized beams, and a sequence of 2/3 polarizers with their variations. I happen to think it is very interesting, and agree with you that even these examples involve conceptual issues. But keep in mind that the QM formalism handles these situations nicely regardless. A lot of folks also think classical wave theory handles this too (which it does), but of course the same wave theory does not explain the particle nature of light, which QM does.

There are a lot of experiments out there that can confirm or deny any proposed hypothesis you might put forth. So don't forget my point about light's particle behavior: there is no classical analog. And when we talk about polarizers, all QM states is that the cos^2 rule is in effect for polarized beams. For unpolarized beams, the rule is 50%. I am curious as to what you hope to make of this. Good luck. :smile:
 
  • #461
I thought it might be worthwhile to post the words of Philippe Grangier (2007) from his refutation of Christian:

"More generally, Bell’s theorem cannot be 'disproved', in the sense that its conclusions follow from its premices in a mathematically correct way. On the other hand, one may argue that these premices [sic] are unduly restrictive, but this is discussing, not disproving."

I.e. he is saying that you are indeed free to ignore Bell's definition of Realism. Of course, you are then stuck with trying to replace it with a definition of Realism from which the Bell result does NOT follow. No easy task, because such a definition ends up being useless (i.e. it has no utility, which is an important measure within scientific theory). Grangier mentions this point too.
 
  • #462
Coming in a few moments is an attempt at a more thorough description, including a very careful distinction between what I claimed to prove and more general claims that appear to follow from it. I'm well aware that QM handles this perfectly, without interpretation, and anything that I claim in 'empirical' contradiction to QM is patently false. That does not restrict me to a 'literal' reading of the stated principles as 'the' reality. The QM formalism is as perfect a proxy for the truth of an empirical claim as can be imagined at this time, though in principle empiricism can potentially trump it. Hard to imagine how.
 
  • #463
DrChinese said:
The elements of reality are the part that EPR and Bell agree on. These are the so-called perfect correlations. To have a Bell state, in a Bell test, you must have these. The disagreement is whether these represent SIMULTANEOUS elements of reality. EPR thought they must; in fact, they thought that was the only reasonable view. But Bell realized that this imposed an important restriction on things.
One of these important restrictions assumed that a particle, measured with a particular detector setting, is unique to that detector setting. Thus it is counterfactually assumed that an alternative detector setting would not have detected the same correlation, i.e., that the setting picks out a definite variable value. Yet we can invalidate this without the use of any correlated/entangled particles.

We have a particle source emitting a single beam of particles with randomized polarizations. We use a single polarizer as our detector to measure polarizations, and all we are interested in is the percentage of the particles in the beam that have a polarization property consonant with a particular detector setting.

Question: Does our detector setting uniquely identify a property 'value', or can this property 'value' of a single particle be counterfactually detected with multiple detector settings?

Assertion: We know the polarization is randomized. Thus if the particle detections from two or more detector settings, on the same particle beam, add up to more than 100% of the detectable particles, then counterfactually we know the same particles can be detected at more than one detector setting.

We have property K, which we measure at unique detector settings [A, B, C, ...]. If A+B+C+... > 100% of the detectable particles, then we are measuring the same property K at multiple detector settings, and can't call a unique detector setting a unique 'value' of that property of that unique particle.

Now we choose 3 settings: [A, B, C] at settings [0°, 45°, 90°].
Our results, per QM, are:
50% of particles have property K with a 'value' of 0°.
50% of particles have property K with a 'value' of 45°.
50% of particles have property K with a 'value' of 90°.
A+B+C=150%

Conclusion: Detector settings do not uniquely identify the 'value' of property K; rather, unique detector settings can include a range of possible values for the singular property K. K is of course polarization in this case. Per conservation law, we are forbidden to add extra particles to account for this discrepancy, but no such restriction exists for counterfactually measuring the same unique particle/property using multiple detector settings. Thus we cannot assume the value of property K, as provided by our measuring device, uniquely identifies the property K. The same property K can also be detected, counterfactually, with alternative detector settings.
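
Here is a minimal numerical sketch of that bookkeeping (my toy model, not established physics: each photon carries a definite polarization λ, and each setting makes an independent Malus-law cos² detection decision; NumPy and the seed are incidental choices):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000
lam = rng.uniform(0, np.pi, N)            # definite polarization per photon (toy assumption)
settings = np.deg2rad([0.0, 45.0, 90.0])  # the three detector settings A, B, C

# Counterfactual bookkeeping: an independent Malus-law (cos^2) detection
# decision at every setting for every photon in the same beam.
detected = np.array([rng.random(N) < np.cos(lam - a)**2 for a in settings])

for deg, d in zip((0, 45, 90), detected):
    print(f"setting {deg:>2} deg: {d.mean():.1%} of beam detected")       # ~50% each
print(f"sum over the three settings: {detected.mean(axis=1).sum():.1%}")  # ~150%
print(f"photons detectable at 2+ settings: {(detected.sum(axis=0) >= 2).mean():.1%}")
```

However the detection decisions are modeled, a sum above 100% forces the overlap: the same photons are being counted at more than one setting.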

Relevance to the EPR paradox:
This merely proves that the counterfactual condition used in conjunction with Bell's inequalities to label 'real' values as a proxy for realism is invalid. It does not invalidate the reality (or lack thereof) of property K itself. Nor does it prove that this property of measurement alone is enough to account for the specific empirical statistical profile of QM in violation of Bell's inequality. That requires a little more than the simple proof that the counterfactual assumption used is invalid. If those are the numbers you wanted, they are in the process of being polished.

What I'll say, without proof, atm:
This entails that, under arbitrary detector settings, our detector can include a range of possible values for K, not just those with a particular value of K. In any EPR correlation experiment the polarization, from the perspective of any one detector, is randomized. The detector, at any given setting, empirically has a 50% chance of detecting property K, not necessarily its value, from this random sequence of correlated particles. Preferentially those nearest the polarization of the measuring device, as other empirical tests demonstrate. Thus, with minor changes in the detector settings, minor changes are made in which individual particles it counterfactually detects. This entails that the relative difference in detector settings on both ends is all that matters in capturing coincidences. The fact that the value of K, as provided by the detector setting, is assumed to uniquely define property K leads to an overcount of coincidences when added over all arbitrary detector settings. As noted, small changes in detector settings have similarly small effects on which particular particles are counterfactually detected. This means the arbitrary detector setting choices act exactly as they should, as relative settings. And they can fully characterize Bell inequality violation solely by the local empirical detection properties of a polarizer, if the anti-correlations, per conservation law, are real.

Want proof of that last paragraph? Sorry, work in progress. But if you'll look at it yourself... :bugeye:

Ask any questions I wasn't clear on.
 
  • #464
ajw1 said:
For those acquainted with C#, I have the same de Raedt simulation, but converted in an object-oriented way (this allows a clear separation between the objects (particles and filters) used in the simulation).

And here it is:
 

Attachments

  • DeRaedt.zip
    146.6 KB · Views: 191
  • #465
DrChinese said:
A lot of folks also think classical wave theory handles this too (which it does), but of course the same wave theory does not explain the particle nature of light, which QM does.
Actually consider these:
A Soliton Model of the Electron with an internal Nonlinearity cancelling the de Broglie-Bohm Quantum Potential:
http://www.springerlink.com/content/j3m4p4026332r455/
http://arxiv.org/abs/physics/9812009

You're still essentially correct. The whole classical wave theory approach is basically a disparate collection of works of wildly varying quality, with lots of toy models. About the only universal thread tying them together is a rough connection to classical thermodynamics. Of course QM already has a strong connection to generalized thermodynamics. Such classically leaning models lack a true foundational groundwork to build from. Then there's the problem of QM+GR, but then there's this:
http://arxiv.org/abs/0903.0823

As intriguing as this is as a whole, with a few intriguing individual works, something needs to break to provide a cohesive foundation, so it doesn't look so much like an ad hoc force fit of disparate elements of the standard model onto a classical thermodynamic model, like Christmas tree lights on a barn. The QM modeling has been improving, but even at its best it still looks more interpretive than theoretical.
 
  • #466
my_wan said:
One of these important restrictions assumed that a particle, measured with a particular detector setting, is unique to that detector setting. ...
Assertion: We know the polarization is randomized. Thus if the particle detections from two or more detector settings, on the same particle beam, add up to more than 100% of the detectable particles, then counterfactually we know the same particles can be detected at more than one detector setting.

We have property K, which we measure at unique detector settings [A, B, C, ...]. If A+B+C+... > 100% of the detectable particles, then we are measuring the same property K at multiple detector settings, and can't call a unique detector setting a unique 'value' of that property of that unique particle.

Now we choose 3 settings: [A, B, C] at settings [0°, 45°, 90°].
Our results, per QM, are:
50% of particles have property K with a 'value' of 0°.
50% of particles have property K with a 'value' of 45°.
50% of particles have property K with a 'value' of 90°.
A+B+C=150%

Conclusion: Detector settings do not uniquely identify the 'value' of property K; rather, unique detector settings can include a range of possible values for the singular property K. K is of course polarization in this case. Per conservation law, we are forbidden to add extra particles to account for this discrepancy, but no such restriction exists for counterfactually measuring the same unique particle/property using multiple detector settings. Thus we cannot assume the value of property K, as provided by our measuring device, uniquely identifies the property K. The same property K can also be detected, counterfactually, with alternative detector settings.

Relevance to the EPR paradox:
This merely proves that the counterfactual condition used in conjunction with Bell's inequalities to label 'real' values as a proxy for realism is invalid. It does not invalidate the reality (or lack thereof) of property K itself. Nor does it prove that this property of measurement alone is enough to account for the specific empirical statistical profile of QM in violation of Bell's inequality. That requires a little more than the simple proof that the counterfactual assumption used is invalid. If those are the numbers you wanted, they are in the process of being polished.

... The fact that the value of K, as provided by the detector setting, is assumed to uniquely define property K leads to an overcount of coincidences when added over all arbitrary detector settings. ...

Ok, you have discovered a variation of some old logic examples: All boys are human, but not all humans are boys. And a little thought would indicate that 100% of all particles have either H or V values at ALL angles. By your thinking, A + B + C + D ... infinity means that the sum is actually infinite. Not what I would call a meaningful formula.

But where does Bell say anything remotely like this? Or EPR for that matter? A quote from Bell would be a good response.

In fact: Bell does NOT in any way require the outcomes to be unique to a measurement setting. Nor does Bell require all of the "rules" to relate to the particle itself. Some could relate to the interaction with the polarizer. All Bell requires is that whatever they are, there are 3 of them simultaneously.

I understand you have something in the back of your head, but you aren't making it easy. You obviously don't think the Bell result means that Local Realism is ruled out. Well, I can lead a horse to the bar but I can't make him take a drink. But it is wildly unreasonable for you to say the drinks are no good when everyone in the bar is having a grand time. That would be, for instance, because we are celebrating new and more exotic entanglement experiments daily. There were probably about 10 this week alone. The point being that nothing you are saying is useful. If these experimentalists followed your thinking, none of these experiments would ever be performed. Because every one of them involved finding and breaking Bell Inequalities.

I will leave you with 2 thoughts on the matter: a) Can you put forth a local realistic model that yields the same predictions as QM? If you did, it would be significant.

The De Raedt team has worked diligently on this matter, and so you would find it difficult to out-gun them. They have yet to succeed; see my model for a proof of that.

b) Can you explain how, in a local realistic world, particles can be perfectly correlated when those particles have never existed within a common area of spacetime? If you could explain that, it would be significant.

You will see soon enough that the combination of a) and b) above will box you in.
 
  • #467
DrChinese said:
Forget it. You're the one out on the limb with your non-standard viewpoint.
On the contrary, it's the advocates of nonlocality that hold the nonstandard viewpoint. One can't get much more unscientific, or nonscientific, than to posit that Nature is fundamentally nonlocal. The problem with it as an explanation for entanglement correlations is that it then remains to explain the explanation -- and I don't think that can be done.

On the other hand, there's a much simpler explanation for the correlations in, say, the Freedman and Clauser experiment, or the Aspect et al. experiments, that fits with the fundamental theories and assumptions on which all of modern science has been based -- and that explanation begins with the notion that the entanglement is due to the photons being emitted during the same atomic transition (ie., that there is a relationship between properties imparted at emission that, wrt analysis by a global measurement parameter, results in correlations that we refer to as entanglement stats).

What's being suggested is that, before we trash relativity or posit the existence of an underlying preferred frame where ftl propagations or instantaneous actions at a distance (whatever that might mean) are happening, perhaps it would be more logical (in light of what's known) to explore the possibility that Bell inequalities are violated for reasons that have nothing to do with ftl propagations or instantaneous actions at a distance. To that end, it's been suggested that Bell's lhv ansatz is incompatible with the experimental situations for which it was formulated, for reasons that have nothing to do with whether or not Nature is exclusively local. In another recent thread it was demonstrated that there's a contradiction between probability theory, as utilized by Bell to denote locality, and probability theory as it should correctly be applied to the joint experimental situations that Bell's lhv ansatz purports to describe. What this entails is that Bell inequalities are violated because of that contradiction -- and not because the photons (or whatever) are communicating ftl or instantaneously. You responded to that OP's consideration in a decidedly non sequitur, and yet charming, way, asking for ... a dataset. To which the OP responded, appropriately I think, something to the effect of "What's that got to do with what I was talking about?". The point is that there are considerations pertinent to evaluating the physical meaning of Bell's theorem that don't mean or require that the presenters of those considerations are advocating that an lhv interpretation of qm is possible. (Maybe the OP in the other thread is advocating the possibility of an lhv interpretation of qm, but that's his problem. Anyway, he wasn't advocating that wrt the consideration he presented in that thread, afaict.)

By the way, DrC, please don't take my sarcasm too seriously (as I don't take yours that way). As I've said before, I admire your abilities, and contributions here, and have learned from you. But sometimes discussing things with you can be, well, a bit ... difficult.

Here's some light reading for those who care to partake:

http://bayes.wustl.edu/etj/articles/cmystery.pdf

Apparently, Jaynes viewed 'nonlocalists' with as much contempt as Mermin. I do hope that no one thinks that these guys (Jaynes and Mermin) are crackpots.

DrChinese said:
I can't prove the unprovable.
And no one is asking anyone to do that. What would be nice is that contributors to these discussions at least try to discuss the issues that have been presented.

Of course, as usual with foundational issues, there are several 'threads' within this thread.

RUTA presents a conceptualization (and, at least symbolically, a realization) of quantum nonseparability which is both fascinating and, it seems, impossible to reconcile with the way I think it's most logical to presume that Nature is and the way she behaves. (OK, I don't understand it. Look, if it took Bub three, that's THREE, epiphanies to get it, then what hope do we normal people have of understanding what RUTA's done? Anyway, I have a simpler conception of the physical meaning of quantum nonseparability which hasn't been refuted.)

DrC's instructive and informative VisualBasic construction I do understand (not that I could replicate it without months of getting back up to speed wrt programming), and it does what it purports to do.

I don't yet understand My_wan's considerations, having not had time to ponder them. But I will.

Zonde's consideration, wrt fair sampling, is certainly relevant wrt the proper application of the scientific method. However, it's preceded by considerations of the applicability of Bell's lhv ansatz to the experimental situation, and to the extent that these prior considerations effectively rule out inferences regarding what's happening in Nature from violations of BIs, the fair sampling loophole is mooted wrt the OP of this thread. Anyway, I see no reason to assume that if an experiment were to simultaneously close all the technical loopholes, the qm predictions would then, thereby, be invalidated. I'm not sure if Zonde thinks otherwise, or, if he does, what his reasons are for thinking this.

DrChinese said:
There is a formula, yes, I can read that.
Ok, that's a step in the right direction.

DrChinese said:
But it is not a local realistic candidate ...
I don't think it's meant to be -- at least not in the sense of EPR-Bell. Anyway, it's at least local. Explicitly so. It's just local wrt a different hidden parameter than Bell's lhv ansatz. And the fact that it's explicitly local, and reproduces the qm predictions, is all that matters wrt this thread.

I keep saying this, and you are, apparently, not reading it: An lhv interpretation of qm compatible with Bell's requirements is impossible.

DrChinese said:
... and there is no way to generate a dataset.
If his formula matches the qm formula for the same experimental situation, then they'll predict the same results. Right? So, does it, or doesn't it?

DrChinese said:
Folks, we have another local realist claiming victory after demonstrating... ABSOLUTELY NOTHING. AGAIN.
I don't recall claiming any sort of victory. The goal here is to get at the truth of things, collectively. Then we all win.

Naaaaaaaaah!
----------------------------------------------------

The following are some points to ponder -- more neatly presented than before.

WHY ARE BELL INEQUALITIES VIOLATED?

... USING LOCALITY ...

(1) Bell tests are designed and prepared to produce statistical dependence between separately accumulated data sets via the joint measurement of disturbances which have a local common origin (eg. emission by the same atom during the same transitional process).

(2) A correct model of the joint measurement situation must express the statistical dependence that the experiments are designed and prepared to produce.

(3) The assumption of locality is expressed in terms of the statistical INdependence of the separately accumulated data sets.

Conclusion: (3) contradicts (1) and (2); hence BIs based on limitations imposed by (3) are violated because an experimental situation designed to produce statistical dependence has been modeled as one designed to produce statistical INdependence. And since statistical dependencies can be due to local common causes, and since the experiments are jointly measuring disturbances that have a common origin, no nonlocality is necessary to understand the violation of BIs based on (3).
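
For concreteness, the locality condition referred to in (3) is standardly written as a factorization of the joint probability conditioned on the hidden variable λ:

$$P(A,B \mid a,b,\lambda) = P(A \mid a,\lambda)\,P(B \mid b,\lambda),$$

which still permits the separately accumulated data sets to be marginally dependent through the shared λ; the independence is imposed only at fixed λ. Whether that conditional independence correctly encodes what the experiments are designed to produce is exactly the point in dispute here.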

... USING ELEMENTS OF REALITY ...

(4) Bell tests are designed and prepared to measure a relationship between two or more disturbances.

(5) The relationship between the measured disturbances does not determine individual results.

(6) EPR elements of reality require that a local hidden variable model of the joint measurement situation be expressed in terms of the variable or variables which, if it (they) were known, would allow the prediction of individual results.

Conclusion: (6) contradicts (4) and (5), hence the 'no lhv' theorems (eg., GHZ) based on limitations imposed by (6) are violated because the limitations imposed by (6) contradict an experimental situation designed to produce correlations based on a relationship between disturbances incident on the measuring devices. And since the relationship between the incident disturbances can reasonably be assumed to have been created locally during, say, an emission process, then no nonlocality is necessary to understand contradictions revealed by 'no lhv' theorems.

-------------------------------------

ARE LHV FORMULATIONS OF ENTANGLEMENT POSSIBLE?

No. Unless we want to change the historical meaning of 'local hidden variables', then Bell demonstrated that lhv formulations of entanglement are impossible. To paraphrase Bell, the statistical predictions of qm for the joint entangled state are incompatible with separable predetermination. In other words, a theory in which parameters are added to qm to determine the results of individual measurements cannot use those same parameters to determine the results of joint measurements. The relationship between jointly measured disturbances is nonseparable wrt the joint measurement parameter.

-------------------------------------

IS NONLOCALITY POSSIBLE?

Obviously, nonlocality is impossible if our universe is evolving in accordance with the principle of locality. Since there's presently no reason to suppose that it isn't, then, for now at least, based on what is known, the answer to that question has to be no.
 
  • #468
You may believe that non-locality is incorrect, or even absurd, but it is standard. To say otherwise distorts the meaning of "standard". For the rest, you conclude that non-locality is impossible, "obviously", which makes me wonder why you've bothered to discuss such a "silly" topic with us poor fools who believe the mounting evidence contrary to your a priori prejudice.
 
  • #469
DrChinese said:
Ok, you have discovered a variation of some old logic examples: All boys are human, but not all humans are boys. And a little thought would indicate that 100% of all particles have either H or V values at ALL angles. By your thinking, A + B + C + D ... infinity means that the sum is actually infinite. Not what I would call a meaningful formula.
But the point is that A+B+C+... can't exceed the total number of particles emitted.

DrChinese said:
But where does Bell say anything remotely like this? Or EPR for that matter? A quote from Bell would be a good response.
Counterfactual definiteness is a fundamental assumption when Bell's theorem is used to elucidate issues of locality. The clearest presentation puts it this way: The theorem indicates the universe must violate either locality or counterfactual definiteness.

What I have pointed out, via the fact that a single polarizer always measures 50% of randomly polarized light as having a single polarization, is that there is a specific empirically consistent way in which we can talk about counterfactual measurements, at least statistically, provided we can't measure more particles than were emitted. This not only results in a violation of Bell's inequalities, though by exactly how much I can't say yet; it also requires the violations to depend only on the relative polarization settings. Thus there are no incongruities in arbitrary settings, because many of the coincidences counted at one detector setting would also have been counted at most other detector settings.

Of course you have every right to ask for proof of this stronger claim, where I only proved that counterfactual definiteness, as assumed by the use of Bell's inequalities, isn't valid. I'll make one more post after this one to point it out again, then hold off until I can provide at least a toy model to demonstrate.

DrChinese said:
In fact: Bell does NOT in any way require the outcomes to be unique to a measurement setting. Nor does Bell require all of the "rules" to relate to the particle itself. Some could relate to the interaction with the polarizer. All Bell requires is that whatever they are, there are 3 of them simultaneously.

Your own words:
"Yes, you are required to maintain a strict adherence to Bell Realism."
"It does NOT say that tails up itself is an element of reality."
Bell Realism is defined in terms of measurements we can predict, and in some circumstances "tails-up" is a quite predictable measurement.

But Bell inequalities go further: they count those predictions at a given polarizer setting and say whoa, there are too many coincidences to "realistically" account for at this one polarizer setting. Yet as I pointed out, the particles have a random polarization wrt one detector, and more importantly, one polarizer setting is detecting 50% of ALL the particles that come in contact with it, regardless of actual polarization prior to measurement. The only particles not subject to detection at all at a given polarizer angle are those exactly orthogonal, very few. How else do you account for 50% of all randomly polarized particles getting detected, even without correlations/entanglements? Thus any given photon has a 50% chance of being detected at any random detector setting, and tested for a correlation.

I'll construct a simple model to demonstrate the stronger claims I made.
 
  • #470
ThomasT said:
1. But sometimes discussing things with you can be, well, a bit ... difficult.

2. Here's some light reading for those who care to partake:

http://bayes.wustl.edu/etj/articles/cmystery.pdf

Apparently, Jaynes viewed 'nonlocalists' with as much contempt as Mermin. I do hope that no one thinks that these guys (Jaynes and Mermin) are crackpots.

1. Pot calling the kettle...

2. You apparently don't follow Mermin closely. He is as far from a local realist as it gets.

Jaynes is a more complicated affair. His Bell conclusions are far off the mark and are not accepted.

--------------------

I am through discussing with you at this time. You haven't done your homework on any of the relevant issues and ignore my suggestions. I will continue to point out your flawed comments whenever I think a reader might actually mistake your commentary for standard physics.
 
  • #471
DrChinese said:
1. Pot calling the kettle...

2. You apparently don't follow Mermin closely. He is as far from a local realist as it gets.

Jaynes is a more complicated affair. His Bell conclusions are far off the mark and are not accepted.

--------------------

I am through discussing with you at this time. You haven't done your homework on any of the relevant issues and ignore my suggestions. I will continue to point out your flawed comments whenever I think a reader might actually mistake your commentary for standard physics.

I would not worry, no one could mistake personal fanaticism for scientific inquiry here, I hope.
 
  • #472
my_wan said:
1. But the point is that A+B+C+... can't exceed the total number of particles emitted.

2. But Bell inequalities go further: they count those predictions at a given polarizer setting and say whoa, there are too many coincidences to "realistically" account for at this one polarizer setting.

3. I'll construct a simple model to demonstrate the stronger claims I made.

1. This is fairly absurd. You might want to re-read what you are saying. Why would A+B+C... have any limit? I asked for a quote from Bell; where is it?

2. It is true that Bell Inequalities are usually expressed in terms of a limit. But that is a direct deduction from the Realism requirement, which is essentially that counterfactual cases have a likelihood of occurring between 0 and 100%. Most consider this a reasonable requirement. If you use the cos^2 rule for making predictions, then some cases end up with a predicted occurrence rate of less than -10% (that's a negative sign; a worked example follows below). If that is reasonable to you, then Local Realism is a go.

3. I truly look forward to that! :smile: And please, take as much time as you need.
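
Here is the arithmetic behind that negative number, sketched in code (the angles A=0°, B=67.5°, C=45° are from my example; the outcome-flip symmetry is an extra assumption added only to make the system uniquely solvable):

```python
import numpy as np

# QM match rate for photon pairs at relative angle d is cos^2(d).
m_ab = np.cos(np.deg2rad(67.5))**2   # A=0 vs B=67.5
m_ac = np.cos(np.deg2rad(45.0))**2   # A=0 vs C=45
m_bc = np.cos(np.deg2rad(22.5))**2   # B=67.5 vs C=45

# Realism: every pair carries simultaneous answers for A, B and C, so the
# eight outcome triples (+++ ... ---) have definite probabilities. With
# outcome-flip symmetry p(+++)=p(---)=q1, p(++-)=p(--+)=q2,
# p(+-+)=p(-+-)=q3, p(+--)=p(-++)=q4, the match rates pin them down:
M = np.array([
    [2, 2, 2, 2],   # all probabilities sum to 1
    [2, 2, 0, 0],   # A,B agree in +++/--- and ++-/--+
    [2, 0, 2, 0],   # A,C agree in +++/--- and +-+/-+-
    [2, 0, 0, 2],   # B,C agree in +++/--- and +--/-++
])
q = np.linalg.solve(M, np.array([1.0, m_ab, m_ac, m_bc]))
for name, p in zip(("+++/---", "++-/--+", "+-+/-+-", "+--/-++"), 2 * q):
    print(f"P({name}) = {p:+.2%}")
```

The ++-/--+ case comes out at about -10.36%: no assignment of non-negative probabilities reproduces the cos^2 predictions at these three angles.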

----------------------

Again, a pattern is developing: I am challenging you on specific points. Here are 3 more. I recommend that you stop, read the above, and address them BEFORE going on to other points. I realize you have a lot to say, but we are simply going around in circles as you abandon one line of thinking in favor of another. So please, do us both a favor, let's discuss the 3 above before going elsewhere. I have provided very specific criticisms to what you are saying, and they should be taken seriously. That is, if you want me to take you seriously.
 
  • #473
IcedEcliptic said:
I would not worry, no one could mistake personal fanaticism for scientific inquiry here, I hope.

I hope not, thanks for your welcome comments and support.
 
  • #474
DrChinese said:
I hope not, thanks for your welcome comments and support.

Thanks for your tireless efforts to educate and further the discussion of Bell and N-L issues here. I've been reading through your threads, and truly you have the patience of a saint. I don't follow everything, but I really learn when I read these discussions. Some of this is a real challenge to accept and visualize, even when I believe it to be true.
 
  • #475
ThomasT said:
On the other hand, there's a much simpler explanation for the correlations in, say, the Freedman and Clauser experiment, or the Aspect et al. experiments, that fits with the fundamental theories and assumptions on which all of modern science has been based -- and that explanation begins with the notion that the entanglement is due to the photons being emitted during the same atomic transition (ie., that there is a relationship between properties imparted at emission that, wrt analysis by a global measurement parameter, results in correlations that we refer to as entanglement stats).

You can entangle atoms that have not interacted with each other by using interaction-free measurement in an interferometer. Accordingly, these atoms don't interact with the photon in the interferometer either.
 
  • #476
Yeah, the limit in the case I described is the total number of particles emitted, 100%. You're still talking 'as if' I was talking about correlations, when there weren't even any entangled particles involved.

Yeah, the so-called negative predicted occurrence rate occurs when detections are more likely in only one of the detectors, rather than in neither or both. You almost made it sound like a "probability".
:frown:
 
  • #477
DrChinese said:
b) Can you explain how, in a local realistic world, particles can be perfectly correlated when those particles have never existed within a common area of spacetime? If you could explain that, it would be significant.

This is trivial. Every clock is correlated with every other clock whether or not they've ever been in a common area of spacetime. Any two harmonic signals are correlated irrespective of differences in amplitude, phase and frequency.
 
  • #478
billschnieder said:
This is trivial. Every clock is correlated with every other clock whether or not they've ever been in a common area of spacetime.

That's bull. I am shocked you would assert this. Have you not been listening to anything about Bell? You sound like someone from 1935.
 
  • #479
billschnieder said:
This is trivial. Every clock is correlated with every other clock whether or not they've ever been in a common area of spacetime. Any two harmonic signals are correlated irrespective of differences in amplitude, phase and frequency.

There are no global correlations. And on top of my prior post, I would like to mention that a Nobel likely awaits any iota of proof of your statement. Harmonic signals are correlated in some frames, but not in all.
There can be no entanglement - in a local realistic world - and classical particles will NOT violate Bell Inequalities. All of which leads to experimental disproof of your assertion that perfect correlations are some easy-to-achieve feat and do not require shared wave states. They only occur with entangled particles. Look at unentangled particle pairs and this will be clear.
 
  • #480
I'm tired and getting sloppier, but I read your negative probabilities page at:
http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm
I was thinking in terms of a given value E(a,b) from possible outcomes P(A,B|a,b) in the general proof of Bell's theorem. You had something else in mind.

What you have, at your link, is 3 measurements at angles A=0, B=67.5, and C=45. A and B are actual measurements where C is a measurement that could have been performed at A or B, let's say B in this case. This does indeed lead to the given negative probabilities, if you presume that what you measured at B cannot interfere with what you could have measured at C, had you done the 3 measurements simultaneously. The counterfactual reasoning is quoted: "When measuring A and B, C existed even if we didn't measure it."

So where does the negative probability come from here? What I claimed, and empirically justified on the grounds that a polarizer always detects 50% of all randomly polarized light (an absurdity if only light at that one polarization is being detected), is that some subset of the same particles detected at B would also have been detected at C, had that measurement been done. Since the same particle, presumed real, cannot be detected by both detectors, detection at one detector precludes a detection at the other, because the particle is considered real regardless of the range of angles capable of detecting it. Therefore measuring the particle at B can negatively interfere with the measurement of that same particle at C.

So the page quote: "When measuring A and B, C existed even if we didn't measure it." Not when some subset of the particles, when the measurements are performed separately, are measured by both B and C. Thus when you consider simultaneous measures at these detectors, the same particles must be detected twice, by both B and C simultaneously, to be counterfactually consistent with the separate measures.

Now I know this mechanism can account for interference in counterfactual detection probabilities, but you can legitimately write it off until the sine wave interference predicted by QM is quantitatively modeled by this interference mechanism. But I still maintain the more limited claim that the counterfactual reasoning contained in the quote: "When measuring A and B, C existed even if we didn't measure it" is falsified by the fact that the same particles cannot simultaneously be involved in detections at B and C. Yet it still existed, at one or the other detector, just not both. Probability interference is a hallmark of QM.
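
To put a rough number on that overlap, extend the same toy assumptions as before (a definite polarization per photon, independent Malus-law detection decisions; B=67.5° and C=45° as in your page's example):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1_000_000
lam = rng.uniform(0, np.pi, N)                 # definite polarization (toy assumption)
B, C = np.deg2rad(67.5), np.deg2rad(45.0)

would_B = rng.random(N) < np.cos(lam - B)**2   # would the polarizer at B have passed it?
would_C = rng.random(N) < np.cos(lam - C)**2   # would the polarizer at C have passed it?

print(f"passed at B:    {would_B.mean():.1%}")             # ~50%
print(f"passed at C:    {would_C.mean():.1%}")             # ~50%
print(f"passed at both: {(would_B & would_C).mean():.1%}") # ~34% of the beam
```

In this toy picture roughly a third of the beam is counterfactually detectable at both B and C, which is exactly the double-counting I keep pointing to.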
 
  • #481
This is a fascinating debate although I must admit it is difficult to follow at times. my_wan's arguments appear very deep and well thought out, but I think I'm missing the requisite philosophical training to appreciate fully his viewpoint. However the exchanges between my_wan and DrChinese are very educational and I thank them for their efforts here in enlightening the subtle issues at the core of the EPR debate. :)

Earlier, I suggested a scientific experiment that would help settle this one way or the other, since as I understand it my_wan's explanation for the non-local correlations in entanglement would require that the correlations are instantaneous.

If we can demonstrate any delay in the entanglement correlations would that not rule out the relational theory of QM or the existence of fundamental probabilistic elements of reality (probabilistic realism)?

In principle it may be possible to construct a quantum computer which could record the time of qubit switching for certain qubits, although we would have to factor out the limit on qubit switching speed imposed by the uncertainty principle (mentioned previously).

Alternatively it may be possible to demonstrate a delay in Aspect-type experiments by refining the timing and precision of the switching apparatus until it reaches a switching speed so fast that we can observe a reduction in entanglement effects (as we reach the threshold for the FTL signalling mechanism I proposed earlier, we would expect entanglement effects to gradually fail). This would be tricky with the original Aspect setup, since we would have to switch the deflectors very precisely, almost as the photons were about to hit them (since, remember, we are looking for a faster-than-light signalling mechanism between the entangled photons).
 
  • #482
my_wan said:
...I read your negative probabilities page at:
http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm
I was thinking in terms of a given value E(a,b) from possible outcomes P(A,B|a,b) in the general proof of Bell's theorem. You had something else in mind.

What you have, at your link, is 3 measurements at angles A=0, B=67.5, and C=45. A and B are actual measurements where C is a measurement that could have been performed at A or B, let's say B in this case. This does indeed lead to the given negative probabilities, if you presume that what you measured at B cannot interfere with what you could have measured at C, had you done the 3 measurements simultaneously. The counterfactual reasoning is quoted: "When measuring A and B, C existed even if we didn't measure it."

So where does the negative probability come from here?...

So the page quote: "When measuring A and B, C existed even if we didn't measure it." Not when some subset of the particles, when the measurements are performed separately, are measured by both B and C. Thus when you consider simultaneous measures at these detectors, the same particles must be detected twice, by both B and C simultaneously, to be counterfactually consistent with the separate measures.

Now I know this mechanism can account for interference in counterfactual detection probabilities, but you can legitimately write it off until the sine wave interference predicted by QM is quantitatively modeled by this interference mechanism. But I still maintain the more limited claim that the counterfactual reasoning contained in the quote: "When measuring A and B, C existed even if we didn't measure it" is falsified by the fact that the same particles cannot simultaneously be involved in detections at B and C. Yet it still existed, at one or the other detector, just not both. Probability interference is a hallmark of QM.

OK, there are a couple of issues. This is indeed a counterfactual case. There are only 2 readings, not 3, so you have that correct.

As to interference: yes, you must consider the idea that there is a connection between Alice and Bob. But NOT in the case that there is local realism. In that case - which is where the negative probabilities come from - there is no such interaction. QM would allow the interaction, but explicitly denies that the counterfactual case exists, because it is not a Realistic theory.
 
  • #483
unusualname said:
Earlier, I suggested a scientific experiment that would help settle this one way or the other, since as I understand it my_wan's explanation for the non-local correlations in entanglement would require that the correlations are instantaneous.

If we can demonstrate any delay in the entanglement correlations would that not rule out the relational theory of QM or the existence of fundamental probabilistic elements of reality (probabilistic realism)?

In principle it may be possible to construct a quantum computer which could record the time of qubit switching for certain qubits, although we would have to factor out the limit on qubit switching speed imposed by the uncertainty principle (mentioned previously).

Alternatively it may be possible to demonstrate a delay in Aspect-type experiments by refining the timing and precision of the switching apparatus until it reaches a switching speed so fast that we can observe a reduction in entanglement effects (as we reach the threshold for the FTL signalling mechanism I proposed earlier, we would expect entanglement effects to gradually fail). This would be tricky with the original Aspect setup, since we would have to switch the deflectors very precisely, almost as the photons were about to hit them (since, remember, we are looking for a faster-than-light signalling mechanism between the entangled photons).

Scientists would in fact love to answer this question. Experiments have been done to the edge of current technology, and no limit has been found yet up to 10,000 times c. So I expect additional experiments as time goes on. If I see anything more on this, I will post it.
 
  • #484
ThomasT said:
One can't get much more unscientific, or nonscientific, than to posit that Nature is fundamentally nonlocal.

Not only is it "scientific" to posit that Nature is fundamentally nonlocal, it is also the only "logical" thing to do. That is, we know that the physical space of our universe consists of three dimensions. Through pure force of reasoning, therefore, we should expect that the elements that constitute a physical reality such as ours are fundamentally spatial in nature. It is for this reason that Erwin Schrodinger posited the existence of a mathematically defined, dynamical object that can be understood--for lack of a better phrase--as an "ontological unity".

The main problem here, though, is that physics had never before been in a position to come to terms with the necessarily space-occupying nature of elemental reality. And this is indeed *necessary* because a three-dimensional universe that consists only of purely local (i.e. zero-dimensional) objects is simply a void. That is, all objects that are anything less than three-dimensional will occupy precisely a zeroth of the space of the universe. In other words, it only makes sense to understand that the parts of a three-dimensional universe are themselves three-dimensional.

But the reason why locality is taken so seriously by certain "naive" individuals is because the entire course of physics since the time of Galileo (up to the 20th century) has been simply to chart the trajectories of empirical bodies through "void" space rather than to come to terms with the way in which any such experience of physical separateness is at all possible.

So, we can now understand Newton's famous hypotheses non fingo as an implicit acknowledgment that the question of the "true nature" of physical reality is indeed an interesting/important question, but that his particular job description at Cambridge University did not give him any reason to depart from the [nascent] tradition of physics as empirical prediction rather than ontological description.

But given the rise of Maxwellian "field type" theories in the 19th century, the question of the space-filling quality of elemental matter could not be ignored for much longer. It is for this reason that ether theories came into prominence. So by the early 1900's, there was an urgent need to find a resolution between the manifestly continuous aspects and granular aspects of physical experience.

This resolution was accomplished by way of the logical "quantization" of the electromagnetic continuum, giving a way for there to be a mathematical description for the way in which atoms are able to interact with one another. That is, photons are taken to be "radiant energy particles" that are able to cross the "void" that separates massive bodies. So, we must understand that the desire to understand energy in a quantitative way was nothing other than a continuation of the Newtonian project of developing theories of a mathematically analytical nature, rather than a break from classical Newtonian thought. That is, the *real* break from the classical model is Maxwell's notion that there is a continuous "something" that everywhere permeates space. One implication of this way of thinking is that this "something" is the only "ontologically real thing," and that all experiences of particularity are made possible by modulations of continuous fields.

The reason why there is so much difficulty in coming into a physical theory that attains the status of being a compelling, "ontologically complete" model is that there is always a desire on the parts of human beings to be able to predict phenomena--that is, to be "certain" about the future course of events. And our theories reflect this desire by way of being reduced to trivially solvable mathematical formulations (i.e. differential equations of a single independent variable) rather than existing in formulations whose solutions are anything but apparent (i.e. partial differential equations of several independent variables).

So, we can now understand that Schrodinger's idea of reality as consisting of harmonically oscillating, space filling waveforms raised an extremely ominous mathematical spectre--which was summarily overcome by way of the thought of the psi function as a "field of probabilities" that can be satisfactorily "reduced" by way of applying Hermitian operators (i.e. matrices of complex conjugates) to it.

But now, we can see that Schrodinger's conceptually elegant ontological solution has been replaced by a purely logical formalism that is not meant to have any ontological significance. That is, the system of equations that can be categorized under the general heading of "quantum mechanics" is only meant to be a theory of empirical measurement, rather than a theory that offers any guidance as regards "what" it is that is "really" happening when any experimental arrangement registers a result.

So, if there is anyone who is searching for, shall we say, "existential comfort" as regards the nature of the "stuff" of physical reality, you are setting yourself up for major disappointment by looking towards the mainstream academic physics establishment (with physicsforums being its best online representative). Your best bet would probably be to pick up a book by or about Erwin Schrodinger, the man, rather than a book that merely uses his name in its exposition of the pure formalism that is quantum mechanics.

And other than that, I am doing my best to continue the tradition of pushing towards a thoroughly believable ontological theory of physical reality here at physicsforums.
 
  • #485
DrChinese said:
OK, there are a couple of issues. This is indeed a counterfactual case. There are only 2 readings, not 3, so you have that correct.

As to interference: yes, you must consider the idea that there is a connection between Alice and Bob. But NOT in the case that there is local realism. In that case - which is where the negative probabilities come from - there is no such interaction. QM would allow the interaction, but explicitly denies that the counterfactual case exists, because it is not a Realistic theory.

To the assertion "NOT in the case that there is local realism":
In the local realism assumption, the connection between Alice and Bob is carried by the particles as inverse local properties, and read via statistical coincidence responses to polarizers with various settings. Since Alice and Bob are the emitted particle pairs, C is not Charlie, but a separate interrogator of Bob asking for Bob's identity papers. In any reasonable experimental construction involving B and counterfactual C, either B or C gets to Bob first to interrogate his identity. But whichever interrogates Bob first interferes with the other getting to interrogate Bob also. This is a requirement of local realism.

Thus Alice and Bob are the particles emitted, not the A, B, and C interrogators (polarizers) that you choose to interrogate Alice and Bob's identities with, nor the singular arbitrary settings A, B, and C used to interpret Alice and Bob's reported identities.

If this explanation holds, then your model, used to refute the De Raedt team's modeling attempts, is physically valid in interference effects, but fails to fully refute them.
http://msc.phys.rug.nl/pdf/athens06-deraedt1.pdf
The physical interpretation of the negative probability, defined from the possibly valid explanation given, is actually a positive possibility that the interrogator C will interrogate Bob first, before B gets to him. Thus if you assign this probability to interrogator B instead of C, which actually intercepted Bob, it takes a negative value.

This means my original assumption when you challenged me on negative probabilities, before I read your site's page on negative probabilities, wasn't as far off as I thought. As I stated then, the negative probability results from a case instance E(a,b) of a possibility derived from probability P(A,B|a,b); thus it is not technically a probability in the strict sense. As I noted then, this only occurs "when detections are more likely in only one of the detectors, rather than neither or both", such as when interrogator C gets to Bob before interrogator B does, producing a detection at C and a miss at B. A full, and possibly valid, explanation of this was given above.

QM:
Technically QM neither confirms nor denies counterfactual reasoning. It merely calculates for whatever situation you 'actually' provide it. The counterfactual conflict only comes in after the fact, when you compare two cases you 'actually' provided. The fact that QM is not explicitly time dependent makes counterfactual reasoning even more difficult to interpret. If any time dependent phenomena are involved, they must be counterfactually interpreted as something that occurred between an event and a measurement, for which we have no measurements to empirically define them, except after the fact when the measurement is performed.
 
  • #486
ajw1 said:
For those acquainted with C#, I have the same de Raedt simulation, but converted in an object-oriented way (this allows a clear separation between the objects (particles and filters) used in the simulation).

OOP is cool; my favorite is Delphi/Object Pascal, which is very similar to C# (Anders Hejlsberg was chief architect for both).

Maybe I’ll check https://www.physicsforums.com/showpost.php?p=2728427&postcount=464. They are claiming to prove deterministic EPR–Bohm & NLHVT, stating this:
Thus, these numbers are no “random variables” in the strict mathematical sense. Probability theory has nothing useful to say about the deterministic sequence of these numbers. In fact, it does not even contain, nor provides a procedure to generate random variables.

(!?) Funny approach... when the probabilistic nature of QM is the very foundation of Bell’s work...?? It’s like proving that Schrödinger's cat can’t run, by cutting off its legs?:bugeye:?
(And true random numbers from atmospheric noise are available for free at random.org :wink:)

And what is this "time window"?? The code is executed sequentially, not multithreaded or parallel! And then multiply the "measurement" with a (pseudo-)random number to "check" if the "measurement" is inside this "time window"!? ... jeeess, I wouldn’t call this a "simulation"... more like an "imitation".

And De Raedt has another gigantic problem with his 'proof' of the non-local hidden variable theory:
http://arxiv.org/abs/0704.2529
(Anton Zeilinger et al.)

Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories.


The best statement in the de Raedt article is this:
In the absence of a theory that describes the individual events, the very successful computational-physics approach “start from the theory and invent/use a simulation algorithm” cannot be applied to this problem.

Which leads to the next...

(I still admire all the work that you and DrC have put into this.)

ajw1 said:
But an open framework should probably be started in something like Maxima (http://maxima.sourceforge.net/).

The more I think about an "EPR framework", the more I realize it’s probably not a splendid idea; as De Raedt says – we don’t have a theory that describes the individual events. We don’t know what really happens!

So it’s going to be very hard, if not impossible, to produce an 'all-purpose' framework that could be used for testing new ideas. All we can do is what De Raedt has done – mimic already performed experiments.

I think...

If you think I’m wrong, there’s another nice alternative to Maxima in FreeMat (http://en.wikipedia.org/wiki/FreeMat), which has an interface to external C, C++, and Fortran code (+ loading dll’s).

Cheers!
 
  • #487
DrChinese said:
I have cos^2(22.5) as 85.36%, although I don't think the value matters for your example. I think you are calculating cos^2 - sin^2 - matches less non-matches - to get your rate, which yields a range of +1 to -1. I always calc based on matches, yielding a range from 0 to 1. Both are correct.

:biggrin:


You bet! Because I’ve got my value from a public lecture by Alain Aspect at the Perimeter Institute for Theoretical Physics, talking about Bell's theorem! :smile:

To avoid this thread soon getting the subtitle – "The noble art of not answering simple questions" – I’m going to be proactive. :biggrin:

This is wrong:
DrChinese said:
We can perform the test on Alice, and use that result to predict Bob. If we can predict Bob with certainty, without changing Bob in any way prior to Bob's observation, then the Bob result is "real". Bell real.


Bell's theorem is all about statistical QM probability (except for 0° and 90°, which LHV also handles perfectly).
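
For the record, the two conventions DrC mentioned relate by plain trigonometry:

$$E(\Delta\theta) = \cos^2\Delta\theta - \sin^2\Delta\theta = \cos 2\Delta\theta = 2\cos^2\Delta\theta - 1,$$

so a match rate of cos²(22.5°) ≈ 85.36% corresponds to a correlation of E ≈ +0.71 on the ±1 scale. Same physics, different bookkeeping.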
 
  • #488
DevilsAvocado said:
(!?) Funny approach... when the probabilistic nature of QM is the very foundation of Bell’s work...?? It’s like proving that Schrödinger's cat can’t run, by cutting off its legs?:bugeye:?
(And true random numbers from atmospheric noise are available for free at random.org :wink:)
It is very common to use pseudo-random numbers in these kinds of simulations, and often not worth the effort to get real random values. I don't think this is really an issue, provided that your pseudo-random generator is OK for the purpose.
DevilsAvocado said:
And what is this "time window"?? The code is executed sequentially, not multithreaded or parallel! And then multiply the "measurement" with a (pseudo-)random number to "check" if the "measurement" is inside this "time window"!? ... jeeess, I wouldn’t call this a "simulation"... more like an "imitation".
De Raedt is not proposing a hidden variable theory; he says he can obtain the results of real Bell-type experiments in a local realistic way.
So in real experiments one has to use a time frame for determining whether two clicks at the two detectors belong to each other or not.
There are indications that particles are delayed by the angle of the filter. This delay time is used by de Raedt, and he obtains the exact QM prediction for this setup (well, similar results to the real experiment, which more or less follows the QM prediction).

DevilsAvocado said:
The more I think about an "EPR framework", the more I realize it’s probably not a splendid idea; as De Raedt says – we don’t have a theory that describes the individual events. We don’t know what really happens!

So it’s going to be very hard, if not impossible, to produce an 'all-purpose' framework that could be used for testing new ideas. All we can do is what De Raedt has done – mimic already performed experiments.

I think...

If you think I’m wrong, there’s another nice alternative to Maxima in FreeMat (http://en.wikipedia.org/wiki/FreeMat), which has an interface to external C, C++, and Fortran code (+ loading dll’s).

Cheers!
I don't know about 'all-purpose'. It seems to me that a De Raedt-like simulation structure should be able to obtain the datasets DrChinese often mentions, for all kinds of new LR ideas.
 
  • #489
DevilsAvocado said:
This is wrong:

[something DrChinese says...]

Bell's theorem is all about statistical QM probability (except for 0° and 90°, which LHV also handles perfectly).

Ha!

But we are talking about 2 different things. Yes, Bell is about the statistical predictions of QM vs. Local Realism. But both EPR and Bell use the idea of the "elements of reality" (defined as I have) as a basis for their analysis.

Score: Avocado 1, DrC 1.
 
  • #490
ajw1 said:
1. De Raedt is not proposing a hidden variable theory; he says he can obtain the results of real Bell-type experiments in a local realistic way.

So in real experiments one has to use a time frame for determining whether two clicks at the two detectors belong to each other or not.

There are indications that particles are delayed by the angle of the filter. This delay time is used by de Raedt, and he obtains the exact QM prediction for this setup (well, similar results to the real experiment, which more or less follows the QM prediction).

2. I don't know about 'all-purpose'. It seems to me that a De Raedt-like simulation structure should be able to obtain the datasets DrChinese often mentions, for all kinds of new LR ideas.

1. The delay issue is complicated, but the bottom line is this is a testable hypothesis. I know of several people who are investigating this by looking at the underlying data using a variety of analysis techniques. I too am doing some work in this particular area (my expertise is in the data processing side). At this time, there is no evidence at all for anything which might lead to the bias De Raedt et al propose. But there is some evidence of delay on the order of a few ns. This is far too small to account for pairing problems.

2. Yes, it is true that the De Raedt simulation exploits the so-called "fair sampling assumption" (the time window) to provide a dataset which is realistic. The recap on this is:

a) The full universe obeys the Bell Inequality, and therefore does not follow Malus.
b) The sample violates the Bell Inequality and is close to the QM predictions.
c) The model is falsified for entangled photons which are not polarization entangled.
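
To make the recap concrete, here is a from-memory paraphrase of a De Raedt-style time-window simulation; the delay law, exponent d, and window W below are my placeholder assumptions for illustration, not their actual code:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
a, b = np.deg2rad(0.0), np.deg2rad(22.5)   # Alice's and Bob's polarizer angles
xi = rng.uniform(0, np.pi, N)              # shared hidden polarization per pair

def station(theta, xi, d=4.0, T0=1.0):
    """Outcome by Malus-law probability; time tag drawn from a window whose
    width depends on the relative angle (the assumed delay law)."""
    rel = theta - xi
    out = np.where(rng.random(xi.size) < np.cos(rel)**2, 1, -1)
    t = rng.random(xi.size) * T0 * np.abs(np.sin(2 * rel))**d
    return out, t

A, tA = station(a, xi)
B, tB = station(b, xi)
kept = np.abs(tA - tB) < 0.01              # coincidence window W (tunable knob)

print(f"E(a,b), full universe:   {np.mean(A * B):+.3f}")   # weaker than QM
print(f"E(a,b), windowed sample: {np.mean(A[kept] * B[kept]):+.3f} "
      f"({kept.mean():.1%} of pairs kept)")
print(f"QM prediction cos2(a-b): {np.cos(2 * (a - b)):+.3f}")
```

The full universe of pairs behaves locally and tamely, while the windowed subsample is biased by the angle-dependent delays, which is exactly where the fair sampling assumption does the work. Whether a particular delay law reproduces the full QM curve is what their papers argue; this sketch only shows the biasing mechanism.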
 
