Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #701
DrChinese & my_wan

How about a request to PF Admin for a new option in PF that would allow us to set a "Footer Disclaimer" (maybe thread specific), that is shown whether the "readers" are logged on or not?

This would probably avoid a lot of unnecessary internal "hubbub"... and be a guarantee for the reader not to get the wrong "impression"...

My "Disclaimer" would look something like this:
I’m a 100% curious layman looking for more knowledge. Naturally, I accept all standards in the scientific community, but I think it’s fun to find (what I imagine) new perspectives and questions (that probably already have been answered). Everything I say can be totally wrong (read at own risk), though I regard myself as perfectly sane - but even this fact could be questioned by some. :wink:

(Realize... this would only work as a "popup function"...)

What do you think?
 
Last edited:
  • #702
EPR-Bell Experiment for Dummies
A Reference for the Rest of Us

Found a very informative video which explains all parts in a modern EPR-Bell setup.

https://www.youtube.com/watch?v=c8J0SNAOXBg

 
Last edited by a moderator:
  • #703
my_wan said:
I will continue to object to BI violations being presented as an overly general "proof", however significant and physically valid experimentally. I object almost as strongly as I would to absolute claims that it must have a realistic explanation.

my_wan, I honestly think you are stretching the meaning of the words a bit (and I am not trying to criticize, as I see words to the same effect from others too). "Absolute" might be a little strong a word for ANYTHING we think we know. At some point, you have to say: this is proven, this is supported experimentally, or this is a conjecture. Clearly, all sides are NOT equal.

I would say that Bell is proven, local realism is not experimentally supported, and there are conjectures regarding various interpretations. Are any of these absolutes? I think each of us has a slightly different opinion on that and I don't think that is too important. But it would be quite unfair to characterize local realism as being on the same footing as QM in terms of Bell/Bell tests.
 
  • #704
Absolute may be too strong, but when it is said that Bell's theorem proves non-locality or non-realism, that is overstated. What has been proven is that nature violates BI. I'll even go with the extension that it has been irrevocably proven that nature does not assign properties to things in a manner consistent with that one definition of realism.

By the time I was 10 years old, based on purely mechanistic reasoning, the notion of "physical property" as used in classical physics wrt -fundamental- parts sounded like an oxymoron to me. When I apply that same reasoning today wrt BI, BI violations only justify my original, age-10, issues with the notion of fundamental properties. Yet to insist that experimental evidence that "fundamental property" wrt things is an oxymoron proves the lack of realism in things requires assuming the definition wasn't an oxymoron from the start. Before I ever even started kindergarten, I was sneaking rocks into the car to drop out the window, to compare how the path looked from inside and outside the car. I tried using telephone poles and mailboxes as reference points.

DrChinese said:
At some point, you have to say: this is proven, this is supported experimentally, or this is a conjecture. Clearly, all sides are NOT equal.
BI violations are proven beyond ANY reasonable doubt. But no, the fact that BI violations remain factual does not prove any particular interpretation of what they mean physically.

DrChinese said:
I would say that Bell is proven, local realism is not experimentally supported, and there are conjectures regarding various interpretations.
Yes, BI violations are factual, and will never go away simply as a result of better experiments. As to what they mean wrt realism, that requires the assumption that the definition of realism used wasn't predicated on an oxymoron from the start.

If you take a rabbit to have the property 'rabbit', which eats clover with the property 'clover', what happened to the 'clover' property when the rabbit eats it? Does that mean the 'rabbit' property is not 'real'? If no, does that mean the 'rabbit' is not real?

So yes, you can say with certainty that BI violations are factual. You cannot make claims about what they mean wrt realism in general, irrespective of chosen definitions which might even be an oxymoron from the perspective of realism itself, and then claim that the fact that it has remained an oxymoron as defined for some time strengthens the claim that realism is falsified. I find it ironic that experimental evidence for my perception at 10, predicated on realism, that 'physical properties' as defined was an oxymoron, is now used to claim realism is falsified.
 
  • #705
How did the original EPR paper actually define realism?
http://www.drchinese.com/David/EPR.pdf

This was the primary completeness condition (unequivocated) which is predicated on realism:
(EPR) said:
Whatever the meaning assigned to the term complete, the following requirement for a complete theory seems to be a necessary one: every element of physical reality must have a counterpart in physical theory. We shall call this the condition of completeness.

Now this is far more general than the definition actually used, where it was stated:
(EPR) said:
A comprehensive definition is, however, unnecessary for our purposes. We shall be satisfied with the following criterion, which we regard as reasonable.

Notice the equivocations? The following definition was even, in the original paper, disavowed as a complete specification of realism. The following definition was purely utilitarian for purposes of the argument:
(EPR) said:
If, without in any way disturbing the system, we can predict with certainty (i.e., with probability equal to one) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.

Note that "there exists an element of physical reality" is not even a condition that the "physical quantity" associated with it must be singular or innate to singular "elements". This was in a sense the basis on which Einstein rejected von Neumann's proof. Deterministic (classical) was taken to mean dispersion free, in which the measurables were taken as distinct preexisting properties of individual "beables". Bell showed that the properties of any such "beables" must also depend on the context of the measurement, much like classical momentum is context dependent. How many different times, not counting the abstract, was this definition equivocated? Let's see:
1) A comprehensive definition is, however, unnecessary for our purposes.
2) It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur.
3) Regarded not as a necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality.

The point here is that not even the original EPR paper supported the notion that an invalidation of the singular utilitarian definition used was itself an invalidation of reality, or that singular properties represented singular elements. It allowed many more methods and contexts with which to define reality, and merely chose this one to show, given the assumptions, that cases existed where conservation laws allowed values to be predicted with certainty even though QM defined them as fundamentally undefined. To predicate a proof on this singular utilitarian definition as proof that all definitions of objective reality are falsified goes well beyond the claims of the EPR paper. It is also this artificial restriction, to the utilitarian definition provided, that is the weakness in the proof itself.

Look at the rabbit analogy again. Given a rabbit and its diet, the physical quantity of a substance with the property [rabbit poo] is predictable. That, by definition, under the utilitarian definition used, defines [rabbit poo] as an element of reality. But does that mean the rabbit poo property is also an element of reality? If so, where was the "poo" property before the rabbit ate the clover? If not, does that mean the "poo" property does not define an element of reality? The only reasonable assumptions are:
1) The [rabbit poo] property in fact represents an element of reality.
2) The [rabbit poo] property is not itself a physical element of reality, but a contextual element of reality representing a real physical state.
3) The rabbit poo itself is a physical element of reality.

Taken this way, BI violations might only indicate that ALL measurable properties have the same contextual dependence as every property we are familiar with in the everyday world. It may only be our notion that fundamental properties of "beables" exist that is at fault. Yet a "beable" lacking measurable properties of its own may still gain properties through persistent, or quasi-persistent, interactions with other beables. A Schneider quote I like a lot, from "Determinism Refuted", illustrating the unobservability of independent variables, is fitting here. This entails that what we perceive as the physical world is built from verbs, rather than nouns, but doesn't prove that nouns don't exist to define the verbs. So the claim of a proof of the nonexistence of beables goes well beyond any reasonable level of generality that can be claimed.

The issue of completeness is twofold. If -every- possible empirical observation and prediction is contained within a mathematical formalism, is it complete? I would say so, even if reality contains physical constructs at some level, not defined in the formalism, that provide for the outcomes predicted by the formalism. Einstein insisted on these physical constructs being specified in order to qualify as complete. Funny that he didn't insist on the same with his own theories, presumably on the grounds that they didn't conflict with certain realist notions. Thus I don't consider, as Einstein did, that every element of physical reality must have a counterpart in physical theory to be considered complete. If QM is considered lacking in completeness, gravity is the issue. Yet a model, complete in the Einstein sense, would be a useful construct, and maybe even play a pivotal role in unification.
 
Last edited by a moderator:
  • #706
my_wan said:
... but does that mean the rabbit poo property is also an element of reality?

If the rabbit poo hits the fan, then I think most would regard 3) as the most plausible alternative. :smile:

Seriously, I’m not quite following all this talk about what is real or not... is a measured photon more real than an unmeasured photon?? Is the measuring apparatus 100% real??

According to Quantum Chromodynamics (QCD) both rabbit poo and measuring apparatus consist of 90% virtual particles, popping in and out all the time:

http://www.physics.adelaide.edu.au/~dleinweb/VisualQCD/QCDvacuum/su3b600s24t36cool30actionHalf.gif

So, what is really real – real, or counterfactual real, or context real, etc.!? :confused:
 
Last edited by a moderator:
  • #707
my_wan said:
How did the original EPR paper actually define realism?
http://www.drchinese.com/David/EPR.pdf

...

The point here is that not even the original EPR paper supported the notion that an invalidation of the singular utilitarian definition used was itself an invalidation of reality, or that singular properties represented singular elements. ... It is also this artificial restriction, to the utilitarian definition provided, that is the weakness in the proof itself.

I do agree with much of what you are saying here. There are definitely utilitarian elements to Bell's approach. But I may interpret this in a slightly different way than you do. In my mind, Bell says to the effect: "Define realism however you like, and I would still expect you to arrive at the same place." I think he took it for granted that the reader might object to any particular definition as somewhat too lenient or alternately too restrictive. But that one's substitution of a different definition would do little to alter the outcome.

Again, for those following the discussion, I would state as follows: EPR defined elements of reality as being able to predict the result of an experiment without first disturbing the particle. They believed that there were elements of reality for simultaneous measurement settings a and b. Bell hypothesized that there should, by the EPR definition, be also a simultaneous c. This does not exist as part of the QM formalism, and is generally disavowed as part of most treatments. So it is a requirement of the realistic school, i.e. the school of thought that says that hidden variables exist. But not an element of QM.
 
  • #708
my_wan said:
Look at the rabbit analogy again. Given a rabbit and its diet, the physical quantity of a substance with the property [rabbit poo] is predictable. That, by definition, under the utilitarian definition used, defines [rabbit poo] as an element of reality. But does that mean the rabbit poo property is also an element of reality? If so, where was the "poo" property before the rabbit ate the clover? If not, does that mean the "poo" property does not define an element of reality? The only reasonable assumptions are:
1) The [rabbit poo] property in fact represents an element of reality.
2) The [rabbit poo] property is not itself a physical element of reality, but a contextual element of reality representing a real physical state.
3) The rabbit poo itself is a physical element of reality.

The EPR view was that there was an element of reality associated with the ability to predict an outcome with certainty. There was no claim that what was measured was itself "real", as it was understood that it might be a composite or derived quantity. Is temperature real?

But the EPR view was also that the element of reality is non-contextual. They said that any other view was "unreasonable".
 
  • #709
(my_wan, sorry for the silly rabbit joke... parrots & rabbits + EPR seems to short circuit my brain...)


I’m going to stick my layman nose out, for any to flatten.

To my understanding, Einstein didn't like the idea that nature was uncertain according to QM. That was the main problem – not whether A & B were "real" or not.

Einstein formulated the EPR paradox to show that there was a possibility to get 'complete' information about a QM particle, like momentum and position, by measuring one of the properties on a twin particle, without disturbing the 'original'.

One cornerstone of QM is the Heisenberg uncertainty principle, which says it's impossible to get 'complete' information about a QM particle (like momentum and position), not because of a lack of proper equipment – but because uncertainty and randomness are a fundamental part of nature.

Einstein raised the bet and placed his own special theory of relativity at stake (probably certain it couldn't fail), stating that either local hidden variables exist, or spooky action at a distance is required, to explain what happens in the EPR paradox.

Einstein didn’t know that his own argument would boomerang back on him...

And here we are today with a theoretically proven and experimentally supported (99.98%) result – Bell's theorem – stating that the QM world is non-local.

This means, beyond any doubt, that GR <> QM and to solve this dilemma we need to get GR = QM.

So gentlemen, why all this 'fuss' about reality, counterfactuals, context, C, etc?
 
Last edited:
  • #710
DrChinese said:
The EPR view was that there was an element of reality associated with the ability to predict an outcome with certainty. There was no claim the what was measured was itself "real", as it was understood that it might be a composite or derived quantity. Is temperature real?
Yes, exactly. But the issue is what BI has to assume about the contextuality of the measured values wrt elements of reality. EPR needed only the fact that it was predictable, and no other assumption. I have to object to your next claim.

DrChinese said:
But the EPR view was also that the element of reality is non-contextual. They said that any other view was "unreasonable".
No. EPR did not assume reality is non-contextual. The "unreasonable" quote only denied a singular form of contextuality, i.e., that the reality of measurement P was dependent on measurement Q. That is certainly far from the only form of contextuality that exists, and the interpretation that BI demonstrates this form denies any other form of contextuality, and presumes correlation equals causation. It would certainly be "unreasonable" to conclude that classical physics does not allow correlations without defining the measurement itself as the causative agent of the correlation.

Consider what it entails if we assume a realist perspective of BI violations.
1) Correlations at common detector settings are a physical certainty.
2) Offsets from a common detector setting introduce noise, completely random from an experimental/empirical perspective.

Now, via BI violations, we can show counterfactually that the randomness of the noise in 2) cannot show the same randomness wrt another detector setting. Well, big shocker, when arbitrary but common detector settings don't show any significant randomness. If this noise itself is -fundamentally- deterministic but unpredictable, then you can always choose an after-the-fact measurement you could have done that would have given a different value than the expectation value of this randomness. In the same way, any random series of predefined heads/tails can be chosen after the fact to show a non-random correlation with a set of coin tosses.

To illustrate, note how in the negative probability proof the non-correlations (Y = sin^2(45°)) are given the same ontological certainty status as the correlations at common angles. Certainly, from a purely statistical standpoint, the noise of 2) is a certainty in the limit. Yet if you assume a realist position, you can always choose an after-the-fact condition in which noise becomes a signal, or vice versa. I can win the lottery every time if I can choose after the fact.

Of course, a good rebuttal is that the problem in BI violations is that BI violations are always inconsistent with what an alternative measurement would have indicated. The problem here is that the randomness of the noise in 2) is given the same ontological status as the certainty of 1). When you define a counterfactual channel, you are by definition imposing a non-random after-the-fact condition on C. The noise of the counterfactual channel is predefined to be non-random wrt any performable actual experiment, for either leg A or B, since it is after the fact correlated and anti-correlated respectively. This entails that the noise is -predefined- to be inversely related to the randomness of any actual measurement of A and B, thus the randomness of Y = sin^2(45°) is defined out of it after the fact. Like calling heads after the toss. The stochastic noise can't be considered to have the same ontological certainty status as the certainty of the physical correlation itself, which exists even when the noise introduced by offsets shows noncorrelated measurements.
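To make the "calling heads after the toss" point concrete, here is a minimal Python sketch (an added illustration, not anyone's posted code): a sequence committed before the tosses matches them about half the time, while a "counterfactual" sequence chosen after the outcomes are known can be made to match them perfectly.

[code]
import random

# A sequence committed BEFORE the tosses matches them ~50% of the time;
# a "counterfactual" sequence defined AFTER seeing the tosses can be made
# to correlate with them perfectly ("calling heads after the toss").

N = 100_000
tosses = [random.choice([0, 1]) for _ in range(N)]

pre_committed = [random.choice([0, 1]) for _ in range(N)]   # fixed in advance
after_the_fact = tosses[:]                                  # chosen after the fact

match_pre = sum(g == t for g, t in zip(pre_committed, tosses)) / N
match_post = sum(g == t for g, t in zip(after_the_fact, tosses)) / N

print(f"pre-committed match rate:  {match_pre:.3f}")   # ~0.5
print(f"after-the-fact match rate: {match_post:.3f}")  # 1.0
[/code]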

I still think the Born rule is probably directly involved here, which by itself would give realists a headache. :-p I haven't had time to test my rotationally variant vectorial ideas yet either. I'll get to it sooner or later.
 
  • #711
DevilsAvocado said:
This means, beyond any doubt, that GR <> QM and to solve this dilemma we need to get GR = QM.

So gentlemen, why all this 'fuss' about reality, counterfactuals, context, C, etc?

Because precisely what we can presume about reality, counterfactuals, context, etc., plays a large role in what we can consider in getting from GR <> QM to GR = QM. Short of doing that, I don't see the value of purely interpretive models.

And your rabbit poo joke was fine :smile:
 
  • #712
my_wan said:
I still think the Born rule is probably directly involved here, which by itself would give realists a headache.
Why do you think that? Doesn't the Born rule have an empirical basis?
 
  • #713
DrChinese said:
But the EPR view was also that the element of reality is non-contextual. They said that any other view was "unreasonable".
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.

Do you doubt that this is the view of virtually all physicists?

Do you see anything wrong with this view?
 
  • #714
my_wan said:
Let's get inequality violations without correlation in a single PBS:
Let's assume perfect detection efficiency in a single channel: 100% of all particles sent into this channel get detected, going either left or right at a PBS. Consider a set of detections at this PBS at angle 0: 50% go left and 50% go right. Now if you ask which of those that went left would have gone right, and vice versa, had the angle setting been 22.5, it's reasonable to say ~15% of those that went left would go right, and vice versa.
This is only true if the source produces only H and V photons.
You can easily check it with such a setup. Let's say a single run of the experiment lasts 10 seconds. Our photon source produces H polarized photons for the first 5 seconds of the experiment and V polarized photons for the other 5 seconds.
Say for the first 5 seconds all photons appear in PBS channel #1 and for the other 5 seconds all photons appear in channel #2. When we rotate the PBS by 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is indeed reasonable to assume that 15% of photons changed their channel.

However, if the source produces +45° and -45° polarized photons we will have a different picture.
For PBS at 0° we have:
50% photons in channel #1 and 50% photons in #2 for first half and
50% photons in channel #1 and 50% photons in #2 for second half.
For PBS at 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is reasonable to assume that 35% of photons changed their channel.
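(The percentages in this argument follow from Malus's law; below is a minimal numeric check, assuming a photon with polarization theta lands in PBS channel #1 with probability cos^2(theta - a) at PBS angle a.)

[code]
import math

# Check of the channel statistics above, assuming Malus's law:
# a photon with polarization theta lands in channel #1 with
# probability cos^2(theta - a), where a is the PBS angle.

def p_channel1(theta_deg, pbs_deg):
    return math.cos(math.radians(theta_deg - pbs_deg)) ** 2

for pbs in (0.0, 22.5):
    for theta in (0.0, 90.0, 45.0, -45.0):  # H, V, +45°, -45° sources
        print(f"PBS {pbs:5.1f}°, source {theta:+5.1f}°: "
              f"channel #1 = {p_channel1(theta, pbs):.0%}")
# PBS 0°:    H -> 100%, V -> 0%,  +45° -> 50%, -45° -> 50%
# PBS 22.5°: H -> 85%,  V -> 15%, +45° -> 85%, -45° -> 15%
[/code]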

my_wan said:
Yet this same assumption indicates that at an angle of 45, ~50% that would have gone left go right (relative to the 0 angle), and vice versa. Yet, relative to angle 22.5, the 45 angle can only have switched ~15% of the photon detection rates. 15% + 15% = 30%, not 50%.
The above explanation indicates that the problem you are stating here does not arise.
 
  • #715
DrChinese said:
GHZ tests are not considered to rely on the Fair Sampling assumption.
In the original GHZ paper, "Bell's theorem without inequalities" (unfortunately pay-per-view), it is said:
"The second step is to show the test could be done even with low-efficiency detectors, provided that we make a plausible auxiliary assumption, which we call fair sampling. Finally, we show that the auxiliary assumption is dispensable if detector efficiencies exceed 90.8%."

DrChinese said:
Now there is a kicker on this that may confuse folks. It is true that only a sample is used, so you might think the Fair Sampling issue is present. But it is not. The sample looks like this:

-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 ...

Where local realism predicts

1 1 1 1 1 1 1 1 1 1 1 ...

A little consideration will tell you that local realism is falsified in this case. Every case, individually, is a falsification.
GHZ experiments use four photons, not one photon.
If we talk about three-photon GHZ, then its results are acquired using four different modifications of the setup. And the GHZ inequalities are calculated from all four results together, each of which consists of three-fold coincidences in four detectors.
Nothing of this indicates that you can simplify the experimental outcome the way you did.
 
  • #716
zonde said:
my_wan said:
Let's get inequality violations without correlation in a single PBS:
Let's assume perfect detection efficiency in a single channel: 100% of all particles sent into this channel get detected, going either left or right at a PBS. Consider a set of detections at this PBS at angle 0: 50% go left and 50% go right. Now if you ask which of those that went left would have gone right, and vice versa, had the angle setting been 22.5, it's reasonable to say ~15% of those that went left would go right, and vice versa.

This is only true if the source produces only H and V photons.
Absolutely not. The statistics, as stated, are in fact predicated on completely randomized polarizations coming from the source. However, if the photons coming from the source were 50% H and V, strictly at those 2 polarizations, it would have the same statistical effect, because the rate at which photons switch paths from H is the same rate at which they would switch from V in reverse.

But the fact remains, purely randomized polarization would have the same statistics. I went to great lengths to verify this assumption.

zonde said:
You can easily check it with such setup. Let's say single run of experiment lasts 10 seconds. Our photon source produces H polarized photons for first 5 seconds of experiment and V polarized photons for other 5 seconds.
Say for first 5 seconds all photons appear in PBS channel #1 and for other 5 seconds all photons appear in channel #2. When we rotate PBS by 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is indeed reasonable to assume that 15% of photons changed their channel.
Yep, but this is quite different from random polarizations, where any setting of the PBS sends 50% in each direction, but also incidentally matches, at all PBS settings, the statistics of two pure polarizations at 90 degree offsets.

Consider this: add the first and second sets of 5-second runs together and the 85% and 15% wash out, just like what you initially specified above. Now check and see that the same thing happens at all settings. Thus, if it always washes out at all settings in the strict H and V cases, why would a completely randomized source, which only changes those same settings via the photons rather than the polarizer settings, lead to anything different in the overall statistics?

zonde said:
However if source produces +45° and -45° polarized photons we will have different picture.
For PBS at 0° we have:
50% photons in channel #1 and 50% photons in #2 for first half and
50% photons in channel #1 and 50% photons in #2 for second half.
For PBS at 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is reasonable to assume that 35% of photons changed their channel.
Yep, but only if the photons from the source are not randomized, which you falsely assumed my description didn't do, simply because the overall statistics happen to match for both the pure H and V and the randomized photon polarization cases.

zonde said:
my_wan said:
Yet this same assumption indicates that at an angle of 45, ~50% that would have gone left go right (relative to the 0 angle), and vice versa. Yet, relative to angle 22.5, the 45 angle can only have switched ~15% of the photon detection rates. 15% + 15% = 30%, not 50%.
Above explanation indicate that you don't get the problem you are stating here.
The only mistake I see in your reasoning is thinking that, because there is a statistical match between the pure H and V case and the randomized case, I must only have been referring to the pure H and V case. This is wrong. Check the same statistics for the randomized case and you'll see a statistical match for both cases, but the randomized case would invalidate your +45° and -45° case, because I was assuming the randomized case.
 
  • #717
DrChinese said:
Hey, I hope you know I am glad you are here.
Was there something in my prior post in this thread that indicated that I think that you're not glad that I'm here? (Please don't misunderstand the 'tone' of any of my posts. A day without you at PF would be like a day without ... sunshine. However, while I do like the fact that the sun is shining, it doesn't contradict the fact of shade. This is just elementary optics which both you and Bell seem to be avoiding in your interpretations of Bell's theorem.)

I quote you from a previous post:
DrChinese said:
You shouldn't be able to have this level of correlation if locality and realism apply.
This betrays an apparent lack of understanding of elementary optics – which, by the way, also applies in qm.

DrChinese said:
I hope nothing I say discourages you in any way. In fact, I encourage you to challenge from every angle. I enjoy a lot of your ideas and they keep me on my toes.
Then, when I, or someone else, offers a purported LR model of entanglement that reproduces the qm predictions, why not look at it closely and express exactly why you think it is or isn't an LR model of entanglement?

DrChinese said:
I think you know that there are a lot of readers who are not active posters in many of our discussions. Just look at the view count on these threads. While I know what is what throughout the thread, these readers may not. That is why I frequently add comments to the effect of "not generally accepted", "show peer reviewed reference" , etc. my_wan and billschnieder get that too. So my objective is to keep casual readers informed so that they can learn both the "standard" (generally accepted) and the "non-standard" (minority) views. I would encourage any reader to listen and learn to a broad spectrum of ideas, but obviously the mainstream should be where we start. And that is what PhysicsForums follows as policy as well.
My approach to understanding Bell's theorem isn't a 'nonstandard' or 'minority' approach. To characterize it as such does a disservice to me and misinforms less sophisticated posters. What you are stating, sometimes, as the mainstream view is, I think, incorrect, and also not the mainstream view.

There's a very important difference between:
1. No physical theory of local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics.
and:
2. No Local Realistic physical theory can ever reproduce all of the predictions of Quantum Mechanics.

We KNOW that 2. is incorrect, because viable LR models of entanglement exist, and they remain unrefuted. If you refuse to acknowledge them, then so what. They exist nonetheless.

I want readers of this thread to understand this. There are LR theories of entanglement which reproduce all of the predictions of qm. They're in the preprint archives at arxiv.org, and there are some that have even been published in peer reviewed journals. Period. If you, DrChinese, want to dispute this, then it's incumbent on you, or anyone who disputes these claims, to analyze the theories in question and refute their claims regarding locality or realism or compatibility with qm. If this isn't done, then the claims stand unrefuted. And, since no such refutations exist, then the current status of LR theories which reproduce all qm predictions is that they remain unrefuted.

If you don't want to inform casual readers of this thread of this fact, then fine. I've informed them.

And just so there's no confusion about this, let me say it again. Bell's theorem does not rule out local realistic theories of entanglement. If DrChinese disagrees with this, then I want you, the casual reader of this thread, to demand that DrChinese analyze a purported LR theory and show that it either isn't local or realistic or both or that it doesn't reproduce qm predictions.

DrChinese said:
On the other hand, when posters suitably label items then that is not an issue and I don't feel compelled to add my (sometimes snippy) comments. Also, many times a personal opinion can be converted to a question so as not to express an opinion that can be misconstrued. For example: "Is it possible that Bell might not have considered the possibility of X?". That statement - er, question - does not attempt to contradict Bell per se. And then the discussion can continue.
And what you often don't do in many of your statements is qualify exactly what you're saying. So, bottom line, your statements often perpetuate the myth that Bell's theorem informs us about facts of nature -- rather than facts about what sorts of theoretical forms are compatible with certain experimental situations.

DrChinese said:
And less feelings get hurt. And people won't think I am resorting to authority as a substitute for a more convincing argument. As I often say, it only takes one. Of course, me being me, that line is stolen (in mangled form) from a man who is quite well known. In fact, maybe it is time to add something new to my tag line...
There are, at least, a dozen different LR models of entanglement in the literature which reproduce the qm predictions. Of course, if you won't look at any of them then 10^1000 wouldn't be enough. Would it?

All you have to do is look at one. If you think it doesn't qualify as a local or a realistic model, then you can point out why (but don't require that it produce incorrect predictions, because that's just silly). If you're unwilling to do that, then your Einstein quote is just fluffy fuzziness wrt your position on LR models of entanglement.

I want you to refute an LR theory of entanglement that I present. You've been called out. Will you accept the challenge?

By the way, I like the Korzybski quote.

www.DrChinese.com "The map is not the territory." - Korzybski.

"Why 100? If I were wrong, one would have been enough." - Albert Einstein, when told of publication of the book One Hundred Authors Against Einstein.
 
Last edited:
  • #718
ThomasT has a point wrt the mainstream view on the realism issue. I know very few who take as hard a view on realism as DrC; rather, an acceptance of the uncertainty in any particular interpretation. Of course my personal experience is limited. However, a review of published opinions is not necessarily indicative of the general opinion. Like the myth that violence is increasing, when in fact it's been steadily dropping year to year for many generations. I would be curious what the actual numbers look like.

So even though BI might specify the status quo of the argument, it's likely much more suspect to claim the standard interpretation represents the predominant view.

DrChinese said:
The sample looks like this:

-1 -1 -1 -1 ...

Where local realism predicts

1 1 1 1 ...

A little consideration will tell you that local realism is falsified in this case.
Only with a very restricted notion of realism and what it entails can this be said. I also never got a response to my objection to calling realistic ways of contextualizing such variables a Fair Sampling argument.

I would love to hear a definition of contextual variables. Certain statements made it sound like contextual variables, by definition, meant non-realistic. I never got a response to the question: is velocity a contextual variable?

I also never got an objection when I pointed out that straightforward squaring of any vector leads to values that are unavoidably coordinate dependent; that is, it produces different answers, and not just the same answer defined by a different coordinate system. Yet the requirement that a realistic model must model arbitrary detector settings, rather than arbitrary offsets, requires a coordinate-independent square of a vector.

To say realism is falsified most certainly is an overreach of what can be ascertained from the facts. I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.
 
  • #719
I must inform the casual reader: Don’t believe everything you read at PF, especially if the poster defines you as "less sophisticated".

Everything is very simple: if you have one peer-reviewed theory (without references or link) stating that 2 + 2 = 5, and a generally accepted and mathematically proven theorem stating 2 + 2 = 4, then one of them must be false.

And remember: Bell’s theorem has absolutely nothing to do with "elementary optics" or any other "optics", I repeat – absolutely nothing. Period.
 
  • #720
my_wan said:
Absolutely not. The statistics, as stated, are in fact predicated on completely randomized polarizations coming from the source. However, if the photons coming from the source were 50% H and V, strictly at those 2 polarizations, it would have the same statistical effect, because the rate at which photons switch paths from H is the same rate at which they would switch from V in reverse.

But the fact remains, purely randomized polarization would have the same statistics. I went to great lengths to verify this assumption.

...
Hmm, you think that I am questioning the 50%/50% statistics?
I don't do that. I am questioning your statement that "it's reasonable to say ~15% of those that went left would go right, and vice versa."
That is not reasonable – or alternatively, it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
If you have a source that consists of an even mixture of photons with any polarization, then the reasonable assumption is that ~25% changed their channel.
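For what it's worth, both figures can be derived from Malus's law under different source assumptions; here is a toy calculation (mine, not either poster's model) of the minimal fraction that must change channel when the PBS turns from 0° to 22.5°:

[code]
import math, random

# Minimal fraction that MUST change channel when the PBS turns from
# a=0° to a=22.5°, assuming channel #1 probability cos^2(theta - a):
# per polarization it is |cos^2(theta) - cos^2(theta - 22.5°)|,
# averaged over the source distribution. (A toy calculation only.)

def min_switch(theta_deg, delta_deg=22.5):
    t, d = math.radians(theta_deg), math.radians(delta_deg)
    return abs(math.cos(t) ** 2 - math.cos(t - d) ** 2)

hv = (min_switch(0.0) + min_switch(90.0)) / 2       # even H/V mixture
print(f"H/V mixture:    {hv:.1%}")                  # ~14.6% (the ~15% figure)

N = 200_000                                         # uniformly random source
uni = sum(min_switch(random.uniform(0, 180)) for _ in range(N)) / N
print(f"uniform source: {uni:.1%}")                 # ~24.4% (the ~25% figure)
[/code]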
 
  • #721
ThomasT said:
...2. No Local Realistic physical theory can ever reproduce all of the predictions of Quantum Mechanics.

We KNOW that 2. is incorrect, because viable LR models of entanglement exist, and they remain unrefuted. If you refuse to acknowledge them, then so what. They exist nonetheless.

I want readers of this thread to understand this. There are LR theories of entanglement which reproduce all of the predictions of qm. They're in the preprint archives at arxiv.org, and there are some that have even been published in peer reviewed journals. Period. If you, DrChinese, want to dispute this, then it's incumbent on you, or anyone who disputes these claims, to analyze the theories in question and refute their claims regarding locality or realism or compatibility with qm.

If you don't want to inform casual readers of this thread of this fact, then fine. I've informed them.

And just so there's no confusion about this, let me say it again. Bell's theorem does not rule out local realistic theories of entanglement. If DrChinese disagrees with this, then I want you, the casual reader of this thread, to demand that DrChinese analyze a purported LR theory and show that it either isn't local or realistic or both or that it doesn't reproduce qm predictions.
...

There are, at least, a dozen different LR models of entanglement in the literature which reproduce the qm predictions. Of course, if you won't look at any of them then 10^1000 wouldn't be enough. Would it?

All you have to do is look at one. If you think it doesn't qualify as a local or a realistic model, then you can point out why (but don't require that it produce incorrect predictions, because that's just silly). If you're unwilling to do that, then your Einstein quote is just fluffy fuzziness wrt your position on LR models of entanglement.

I want you to refute an LR theory of entanglement that I present. You've been called out. Will you accept the challenge?

I have a requirement that is the same requirement as any other scientist: provide a local realistic theory that can provide data values for 3 simultaneous settings (i.e. fulfilling the realism requirement). The only model that does this that I am aware of is the simulation model of De Raedt et al. There are no others to consider. There are, as you say, a number of other *CLAIMED* models yet none of these fulfill the realism requirement. Therefore, I will not look at them.
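To illustrate what the requirement amounts to, here is a minimal sketch of the standard counting argument, using the common 0°/120°/240° example (my illustration under those assumed angles, not a quote from any post): any assignment of predetermined +1/-1 outcomes to three settings forces at least one of the three pairs to match, so the average pair-match rate is at least 1/3, while QM predicts cos^2(120°) = 1/4 for these angle pairs.

[code]
from itertools import product

# With predetermined +1/-1 answers for three settings (the realism
# requirement), any triple has at least two equal answers, so the
# match rate averaged over the three setting-pairs is >= 1/3.

worst = 1.0
for a, b, c in product([+1, -1], repeat=3):
    rate = ((a == b) + (b == c) + (a == c)) / 3
    worst = min(worst, rate)

print(f"minimum average pair-match rate: {worst:.3f}")  # 0.333...
# QM predicts cos^2(120°) = 0.25 for these pairs -- below what any
# predetermined dataset can produce.
[/code]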

Perhaps you will show me where any of the top scientific teams have written something to the effect of "local realism is tenable after Bell". Because all of the teams I know about state the diametric opposite. Here is Zeilinger (1999) in a typical quote of his perspective:

"Second, a most important development was due to John Bell (1964) who continued the EPR line of reasoning and demonstrated that a contradiction arises between the EPR assumptions and quantum physics. The most essential assumptions are realism and locality. This contradiction is called Bell’s theorem."

I would hope you would recognize the above as nearly identical to my line of reasoning. So if you know of any hypothesis that contradicts the above AND yields a local realistic dataset, please give a link and I will give you my thoughts. But I cannot critique that which does not exist. (Again, an exception for the De Raedt model, which has a different set of issues entirely.)
 
  • #722
my_wan said:
Because precisely what we can presume about reality, counterfactuals, context, etc., plays a large role in what we can consider in getting from GR <> QM to GR = QM.


Maybe you’re right. Personally, I think semantic discussions on "reality" could keep you occupied for a thousand years, without substantial progress. What if Einstein presented something like this:

"The causal reality for the joint probabilities of E having a relation to M, in respect of the ideal context, is strongly correlated to C."

Except for the very fine "sophistication" – could this be of any real use?

Maybe I’m wrong, and Einstein indeed used this very method to get to:

E = mc²

... I don’t know ...

But wrt "reality", I think we have a very real problem, in that the discordance for aligned parallels is 0:

N(0°, 0°) = 0​

If we then turn one minus thirty degrees and the other plus thirty degrees, from a classical point of view we should get:

N(+30°, -30°) ≤ N(+30°, 0°) + N(0°, -30°)​

Meaning that the discordance when both are turned cannot be greater than the sum of the two turned separately, which is very logical and natural.

But this is NOT true according to quantum mechanical predictions and experiments!
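A quick numeric check, assuming the standard QM discordance (mismatch) rate sin^2(θ1 − θ2) for polarization-entangled photons:

[code]
import math

# Numeric check of the inequality above, taking the QM discordance
# rate for polarization-entangled photons to be N(a, b) = sin^2(a - b).

def N(a_deg, b_deg):
    return math.sin(math.radians(a_deg - b_deg)) ** 2

lhs = N(+30, -30)             # sin^2(60°) = 0.75
rhs = N(+30, 0) + N(0, -30)   # 0.25 + 0.25 = 0.50

print(f"N(+30°,-30°)            = {lhs:.2f}")
print(f"N(+30°,0°) + N(0°,-30°) = {rhs:.2f}")
print("inequality satisfied?", lhs <= rhs)  # False: QM violates it
[/code]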

Even a high school freshman can understand this problem; you don't have to be "sophisticated" or "intellectually superior", that's just BS.

Now, to start long die-hard discussions on "elementary optics" to get an illusion of a probable solution is not very bright, not even "sophisticated".

I think most here realize that attacking the mathematics as such cannot be considered "healthy".

To discuss what’s real or not maybe could lead to "something", but it will never change the mathematical reality.

Therefore, the only plausible way 'forward' is to find a 'flaw' in QM, which will be very, very hard since QM is the most precise scientific theory we've got. That potential 'flaw' in QM has to be mathematical, not semantic – words won't change anything about the EPR paradox and the mathematical predictions of QM.

I think your attempts to get a 'classical' explanation for what happens in EPR-Bell experiments are very interesting, but how is this ever going to change the real mathematical truth, which we both know is true?
 
  • #723
my_wan said:
1. ThomasT has a point wrt the mainstream view on the realism issue. I know very few who take as hard a view on realism as DrC.

2. Only with a very restricted notion of realism and what it entails can this be said. I also never got a response to my objection to calling realistic ways of contextualizing such variables a Fair Sampling argument.

3. I would love to hear a definition of contextual variables. Certain statements made it sound like contextual variables, by definition, meant non-realistic. I never got a response to the question: is velocity a contextual variable?

4. I also never got an objection when I pointed out that straightforward squaring of any vector leads to values that are unavoidably coordinate dependent; that is, it produces different answers, and not just the same answer defined by a different coordinate system. Yet the requirement that a realistic model must model arbitrary detector settings, rather than arbitrary offsets, requires a coordinate-independent square of a vector.

5. To say realism is falsified most certainly is an overreach of what can be ascertained from the facts. I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.

In trying to be complete in my response so you won't think I'm avoiding anything:

1. ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.

2. GHZ is very specific. It is a complex argument, but uses the very same definition of reality as does Bell. And this yields a DIFFERENT prediction in every case from QM, not just in a statistical ensemble. So NO, your conclusion is incorrect.

3. A contextual variable is one in which the nature of the observation is part of the equation for predicting the results. Thus it does not respect observer independence. You will see that in your single particle polarizer example, observer dependence appears to be a factor in explaining the results. Keep in mind, contextuality is not an assumption of Bell.

4. Your argument here does not follow regarding vectors. So what if it is or is not true? This has nothing to do with a proof of QM over local realism. I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out. OK, then local realism is ruled out which is what I am asserting anyway. But that result is not generally accepted as true and so I just don't follow. How am I supposed to make your argument for you?

5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.
 
  • #724
my_wan said:
I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.
I’m with you on this one 1000%. This is what we should discuss, not "elementary optics".

I think that it’s overlooked in this thread that this was a major problem for John Bell as well (and I’m going to prove this statement in a few days).

Bell knew that his theorem creates a strong contradiction between QM & SR; one or both must be more or less wrong. And if QM is more or less wrong, it could mean that Bell's theorem is also more or less wrong, since it builds its argument on QM predictions.

DrChinese said:
5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.

Don’t you think that interpretations are a little too easy way out of this?? I don’t think John Bell would have agreed with you here...
 
  • #725
DevilsAvocado said:
Bell knew that his theorem creates a strong contradiction between QM & SR; one or both must be more or less wrong. And if QM is more or less wrong, it could mean that Bell's theorem is also more or less wrong, since it builds its argument on QM predictions.

Don’t you think that interpretations are a little too easy way out of this?? I don’t think John Bell would have agreed with you here...

Bell shifted a bit on interpretations. I think the majority view is that he supported a Bohmian perspective, but I am not sure he came down fully in any one interpretation. At any rate, I really don't know what we can say about underlying physical mechanisms. We just don't know how nature manages to implement what we call the formalism.

And don't forget that Bell does not require QM to be correct, just that the QM predictions are incompatible with LR predictions. Of course, Bell tests confirm QM to many SD.
 
  • #726
DrChinese said:
... QM predictions are incompatible with LR predictions.

Yes, you are right, and this is what causes the dilemma. The Einsteinian argument fails:

no action at a distance (polarisers parallel) ⇒ determinism

determinism (polarisers nonparallel) ⇒ action at a distance

Meaning QM <> SR.
 
  • #727
zonde said:
Hmm, you think that I am questioning the 50%/50% statistics?
I don't do that.
No. I understood what you asserted.

zonde said:
I am questioning your statement that "it's reasonable to say ~15% of those that went left would go right, and vice versa."
Yes, I've seen that. The pure case is in fact what I used to empirically verify the assumption.

zonde said:
That is not reasonable – or alternatively, it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
And this is where you go wrong again. I stand by my factual statement (not an assumption) that randomized photon polarizations will have the same route-switching statistics as an even mixture of pure H and V polarizations. I verified it both mathematically and in computer simulations.

Consider, in the pure case where you got it right, what happens when you move a detector setting from 0 to 22.5 degrees. The route-switching statistics look like cos^2(22.5) = sin^2(67.5), thus you are correct about the pure polarization pairs at 90 degree offsets. Now notice that cos^2(theta) = sin^2(theta ± 90) for ANY arbitrary theta. Now add a second pair of pure H and V photon polarizations offset 45 degrees from the first pair. At a 0 angle detector setting you've added 50% more photons to be detected from the new H and 50% from the new V polarization beams. Since cos^2(theta) = sin^2(theta ± 90) in ALL cases, the overall statistics have not changed. To add more pure beam pairs without changing the overall statistics, you have to add 2 pairs of pure H and V beams at both the 22.5 and 67.5 degree offsets. To add more pure beam sets, without changing the overall statistics, requires 4 more H and V pure beams offset equidistant from those 4. The next step requires 8 to maintain the same statistics; simply take the limit. You then end up with a completely randomized set of photon polarizations that exhibits the exact same path-switching statistics as the pure H and V case, because cos^2(theta) = sin^2(theta ± 90) for absolutely ALL values of theta.

So if you still don't believe it, show me. If you want a computer program that uses a random number generator to generate randomly polarized photons and send them to a virtual detector, ask. I can write the program pretty quickly. You'll need AutoIt (freeware, not nagware) if you don't want to be sent an exe. With AutoIt installed, you can run the script directly without compiling it.
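In the meantime, for readers who don't want to install AutoIt, here is a minimal Python sketch of the same check (a sketch assuming Malus-law channel probabilities; not my_wan's actual script). It shows the overall channel statistics of a uniformly randomized source matching those of an even H/V mixture at every PBS setting:

[code]
import math, random

# Overall channel statistics for (a) an even H/V mixture and (b) a
# uniformly randomized source, assuming a photon with polarization
# theta lands in channel #1 with probability cos^2(theta - a).

def channel1(theta_deg, pbs_deg):
    return random.random() < math.cos(math.radians(theta_deg - pbs_deg)) ** 2

N = 100_000
for pbs in (0.0, 22.5, 45.0, 67.5):
    hv  = sum(channel1(random.choice([0.0, 90.0]), pbs) for _ in range(N)) / N
    uni = sum(channel1(random.uniform(0.0, 180.0), pbs) for _ in range(N)) / N
    print(f"PBS {pbs:5.1f}°: H/V mix -> {hv:.1%} in #1, uniform -> {uni:.1%} in #1")
# Both come out ~50% in channel #1 at every PBS angle.
[/code]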

zonde said:
If you have a source that consists of an even mixture of photons with any polarization, then the reasonable assumption is that ~25% changed their channel.
False – and this is not an assumption, it is a demonstrable fact. So long as the pure polarization case exhibits those statistics, physically so must the completely randomized case. This fact is central to EPR modeling attempts. If you can demonstrate otherwise, I'll add a sig line to my profile stating that and linking to where you made a fool of me.
 
  • #728
zonde said:
Hmm, you think that I am questioning the 50%/50% statistics?
I don't do that. I am questioning your statement that "it's reasonable to say ~15% of those that went left would go right, and vice versa."
That is not reasonable – or alternatively, it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
If you have a source that consists of an even mixture of photons with any polarization, then the reasonable assumption is that ~25% changed their channel.
I also just noticed you contradicted yourself. You say:
1) ...it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
2) If you have a source that consists of an even mixture of photons with any polarization, then the reasonable assumption is that ~25% changed their channel.

But a random distribution is an "even mixture of H and V" as defined by 1), just not all on the same 2 axes. For a random distribution, there statistically exists both an opposite and a perpendicular case for every possible polarization instance.
 
  • #729
DrChinese said:
In trying to be complete in my response so you won't think I'm avoiding anything:

1. ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.
I don't have much to refute this with. I've read the arguments and counterarguments; I was more curious about the general opinion among physicists, whether they have published positions on EPR or not.

DrChinese said:
2. GHZ is very specific. It is a complex argument, but uses the very same definition of reality as does Bell. And this yields a DIFFERENT prediction in every case from QM, not just in a statistical ensemble. So NO, your conclusion is incorrect.
The question was about the reasoning behind labeling any specific form of contextualization of variables a Fair Sampling argument. I'm not even sure what this response has to do with the issue as stated. Though I have previously expressed confusion about how you define precisely what does or doesn't qualify as realism, even with that definition. Merely restating the definition doesn't help much. Nor does it indicate whether realistic models can exist that don't respect that definition.

DrChinese said:
3. A contextual variable is one in which the nature of the observation is part of the equation for predicting the results. Thus it does not respect observer independence. You will see that in your single particle polarizer example, observer dependence appears to be a factor in explaining the results. Keep in mind, contextuality is not an assumption of Bell.
Nice definition, I'll keep that for future reference. I'm well aware that my single-polarizer example contains contextual dependencies, yet empirically valid consequences. It was the fact that the contextual values didn't depend on correlations with anything that was important to the argument. Thus it was limited to refuting a non-locality claim, not a realism claim. What it indicates is that a classical mechanism for the nonlinear path switching of uncorrelated photon responses to a single polarizer is required to fully justify a realistic model. I even give the opinion that a mechanistic explanation of the Born rule might be required to pull this off. Some would be happy to just accept the empirical mechanism itself as a local classical-optics effect and go from there. I'm not. I'm aware contextuality was not an assumption of Bell. Hence the requirement of some form of classical contextuality to escape the stated consequences of his inequality.

DrChinese said:
4. Your argument here does not follow regarding vectors. So what if it is or is not true? This has nothing to do with a proof of QM over local realism. I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out. OK, then local realism is ruled out which is what I am asserting anyway. But that result is not generally accepted as true and so I just don't follow. How am I supposed to make your argument for you?
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool-ball collision under two different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, but to an entirely different answer altogether. For this reason such vector operations are generally avoided, with scalar multiplication used instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of a vector, there exist no way of -uniquely- defining the particular vectors that went into defining it.

DrChinese said:
5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.
That's more than a little difficult when you seem to falsely represent any particular contextualization of variables as a Fair Sampling argument. Refer back to 2. where your response was unrelated to my objection to labeling contextualization arguments as a Fair Sampling argument.
 
  • #730
my_wan said:
Consider, in the pure case where you got it right, where you move a detector setting from 0 to 22.5 degrees. The route-switching statistics look like cos^2(22.5) = sin^2(67.5), so you are correct about the pure polarization pairs at 90-degree offsets.
To set it straight, it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|.

my_wan said:
Now notice that cos^2(theta) = sin^2(theta +- 90) for ANY arbitrary theta. Now add a second pair of pure H and V photon polarizations, offset 45 degrees from the first pair. At a 0-degree detector setting you've added 50% more photons to be detected from the new H polarization beam and 50% from the new V polarization beam. Since cos^2(theta) = sin^2(theta +- 90) in ALL cases, the overall statistics have not changed.
The same way as above:
|cos^2(67.5)-cos^2(45)|, and it is not equal to |cos^2(22.5)-cos^2(0)|,
and
|sin^2(67.5)-sin^2(45)|, and it is not equal to |sin^2(22.5)-sin^2(0)|.

So if you add H and V photons that are offset by 45 degrees, you change your statistics.

my_wan said:
So if you still don't believe it, show me. If you want a computer program that uses a random number generator to generate randomly polarized photons and send them to a virtual detector, just ask; I can write the program pretty quickly. You'll need AutoIt (freeware, not nagware) if you don't want to be sent an exe. With AutoIt installed, you can run the script directly without compiling it.
I would stick to simple example:
Code:
      polarizer at 0    polarizer at 22.5
p=0   cos^2(0-0)  =1    cos^2(0-22.5)  =0.85  difference=0.15
p=45  cos^2(45-0) =0.5  cos^2(45-22.5) =0.85  difference=0.35
p=90  cos^2(90-0) =0    cos^2(90-22.5) =0.15  difference=0.15
p=135 cos^2(135-0)=0.5  cos^2(135-22.5)=0.15  difference=0.35
average difference=0.25
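A quick numeric check of the table above, plus the same average taken over a continuous uniform spread of polarizations (the continuous case is an extension of zonde's four-point example, added here for completeness):

Code:
import math

def diff(p, a=0.0, b=22.5):
    # Per-photon change in transmission probability when the polarizer
    # moves from angle a to angle b, for incoming polarization p (degrees).
    c2 = lambda x: math.cos(math.radians(x)) ** 2
    return abs(c2(p - b) - c2(p - a))

samples = [0, 45, 90, 135]                      # zonde's four polarizations
print([round(diff(p), 2) for p in samples])     # [0.15, 0.35, 0.15, 0.35]
print(sum(diff(p) for p in samples) / 4)        # 0.25, matching the table

n = 100_000                                     # uniform continuous spread
print(sum(diff(180 * k / n) for k in range(n)) / n)   # ~0.244, still ~25%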

my_wan said:
I also just noticed you contradicted yourself. You say:
1) ...it is reasonable only if you assume you have a source with an even mixture of H and V photons.
2) If you have a source that consists of an even mixture of photons with any polarization, then a reasonable assumption is that ~25% changed their channel.

But a random distribution is an "even mixture of H and V" as defined by 1), just not all on the same two axes. For a random distribution, there statistically exists both an opposite and a perpendicular case for every possible polarization instance.
The statement in bold ("with any polarization") makes the difference between 1) and 2).
 
  • #731
zonde said:
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|

This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even making at that time. You have two photon routes in any one measurement, not two polarizer settings in any one measurement. Instead you have one measurement at one location, and what you are comparing is the statistics of photons that take a particular route through a polarizer at that one setting, not two settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for the H polarizations. We are NOT talking about EPR correlations here, only normal photon route statistics as defined by a single polarizer.

Consider: you have one polarizer at one setting (0 degrees) with one uncorrelated beam pointed at it, such that 50% of the light goes through. You change the setting to 22.5 degrees. Now about 15% of the V photons (sin^2(22.5)) switch from going through to not going through the detector. At the SAME 22.5-degree setting, you get about 15% more detections from the H photons (cos^2(67.5)). 15% lost from V and 15% gained from H. This is even more general, in that sin^2(theta) = cos^2(90-theta) for all theta. This is NOT a counterfactual measure. This is what you get from the one measurement you are making at the one setting. So you can't use the cos from a previous measurement you are not currently making. Otherwise it amounts to subtracting a cos from a cos that isn't even part of the polarizer setting at that time, which breaks its consistency with the BI-violation statistics for other possible settings.

ONLY include the statistics of whatever measurement you are performing at THAT time, and you get statistical consistency between BI violations and photon route switching without correlations, with purely randomized photon polarizations. The key is: DON'T mix the math for both settings in one measurement. This is key to subverting the counterfactuals in BI while still getting the same statistics. Only count the photons you can empirically expect to switch routes upon switching to that ONE setting, by counting H adds and V subtracts at that ONE setting.

Then, by noting that it is applicable at all thetas, it remains perfectly valid for fully randomized photon polarizations at ANY arbitrary setting, provided you are allowed to arbitrarily relabel the 0 point of the non-physical coordinate labels.
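As a sanity check on the single-setting bookkeeping (reading the post above as taking the 0-degree setting to pass the V beam; that reading is an inference, not stated explicitly), here is a short Python loop confirming that the fraction lost from V equals the fraction gained from H at every setting:

Code:
import math

def c2(x):
    # cos^2 of an angle given in degrees.
    return math.cos(math.radians(x)) ** 2

# Move the polarizer from 0 to theta. The V beam (aligned with 0) drops
# from c2(0) = 1 to c2(theta); the H beam (at 90) rises from c2(90) = 0
# to c2(90 - theta). The loss and the gain match at every theta, which is
# why the aggregate 50/50 transmission never moves.
for theta in (10, 22.5, 45, 67.5, 80):
    lost_from_v = 1 - c2(theta)       # = sin^2(theta)
    gained_from_h = c2(90 - theta)    # = sin^2(theta) as well
    print(theta, round(lost_from_v, 4), round(gained_from_h, 4))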
 
  • #732
Besides, you can't change my formula and then claim my formula doesn't do what I claimed because the formula you swapped in doesn't. :wink:
 
  • #733
my_wan said:
This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even making at that time. You have two photon routes in any one measurement, not two polarizer settings in any one measurement. Instead you have one measurement at one location, and what you are comparing is the statistics of photons that take a particular route through a polarizer at that one setting, not two settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for the H polarizations. We are NOT talking about EPR correlations here, only normal photon route statistics as defined by a single polarizer.
Yes, that is only speculation. Nothing straightforwardly testable.

my_wan said:
Consider: you have one polarizer at one setting (0 degrees) with one uncorrelated beam pointed at it, such that 50% of the light goes through. You change the setting to 22.5 degrees. Now about 15% of the V photons (sin^2(22.5)) switch from going through to not going through the detector. At the SAME 22.5-degree setting, you get about 15% more detections from the H photons (cos^2(67.5)). 15% lost from V and 15% gained from H.
Or lost 35% and gained 35%. Or lost x% and gained x%.
The question is not about lost photon count = gained photon count.
The question is about this number: 15%.
If you keep insisting that it's 15% because it's 15% both ways, then we can stop our discussion right there.

my_wan said:
This is even more general, in that sin^2(theta) = cos^2(90-theta) for all theta.
sin(theta) = cos(90-theta) is a trivial trigonometric identity. What do you expect to prove with that?

my_wan said:
This is NOT a counterfactual measure. This is what you get from the one measurement you are making at the one setting. So you can't use the cos from a previous measurement you are not currently making. Otherwise it amounts to subtracting a cos from a cos that isn't even part of the polarizer setting at that time, which breaks its consistency with the BI-violation statistics for other possible settings.

ONLY include the statistics of whatever measurement you are performing at THAT time, and you get statistical consistency between BI violations and photon route switching without correlations, with purely randomized photon polarizations. The key is: DON'T mix the math for both settings in one measurement. This is key to subverting the counterfactuals in BI while still getting the same statistics. Only count the photons you can empirically expect to switch routes upon switching to that ONE setting, by counting H adds and V subtracts at that ONE setting.
Switch routes to ... FROM what?
You have no switching with ONE setting. You have to have switching FROM ... TO ..., otherwise there is no switching.
 
  • #734
my_wan said:
That's more than a little difficult when you seem to falsely represent any particular contextualization of variables as a Fair Sampling argument. Refer back to 2. where your response was unrelated to my objection to labeling contextualization arguments as a Fair Sampling argument.

To me, the (Un)Fair Sampling argument is as follows: "The full universe does not respect Bell's Inequality (or similar), while a sample does. The reason an attribute of the sample differs from that of the universe is that certain data elements are more likely to be detected than others, causing a skewing of the results."

I reject this argument as untenable; however, I would say my position is not generally accepted. A more generally accepted argument is that the GHZ argument renders the Fair Sampling assumption moot.

Now, I am not sure how this crept into our discussion, except that, as I recall, you indicated it had some relevance to Bell. I think it is more relevant to tests of Bell's Inequality, which we aren't really discussing. So if there is nothing further to this line, we can drop it.
 
  • #735
my_wan said:
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool-ball collision under two different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, but to an entirely different answer altogether. For this reason such vector operations are generally avoided, with scalar multiplication used instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of a vector, there exist no way of -uniquely- defining the particular vectors that went into defining it.

Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
 
