Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #981
DrChinese said:
I can see why it is hard to understand.

a) Fill in any set of hidden variables for angle settings 0, 120 and 240 degrees for a group of hypothetical entangled photons.
b) This should be accompanied by a formula that allows me to deduce whether the photons are H> or V> polarized, based on the values of the HVs.
c) The results should reasonably match the predictions of QM, a 25% coincidence rate, regardless of which 2 different settings I might choose to select. I will make my selections randomly, before I look at your HVs but after you have established their values and the formula.

When Christian shows me this, I'll read more. Not before, as I am quite busy: I must wash my hair tonight.
Your 'realism' requirement remains a mystery. Christian's paper is there for you to critique. I don't think you understand it.
 
  • #982
my_wan said:
... I'm not trying to make the point that Bell was wrong; he was absolutely and unequivocally right, within the context of the definition he used. I'm merely rejecting the over-generalization of that definition. Even if no such realistic model exists, by any definition, I still want to investigate all these different principles that might be behind such an effect. The authoritative claim that Bell was right is perfectly valid; to over-generalize that into a sea of related unknowns, even by authoritative sources, is unwarranted.


my_wan, could we make a parallel to the situation when Albert Einstein, over a hundred years ago, started to work on his theory of relativity? (Note, I'm not saying that you are Einstein! :smile:)

Einstein did not reject the work of Isaac Newton. In most ordinary circumstances, Newton's law of universal gravitation is perfectly valid, and Einstein knew that. The theory of relativity is 'merely' an 'extension' to extreme situations, where we need finer 'instrumentation'.

Besides this, Einstein also provided a mechanism for gravity, which thus far had been a paradox, without any hope for a 'logical' explanation. As Newton put it in a letter to Bentley in 1692:

"That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it."

If we look at the situation today, there are a lot of ('funny' "action at a distance") similarities. QM & SR/GR work perfectly fine side by side in most 'ordinary' circumstances. It's only in very rare situations that we see clear signs of something that looks like an indisputable contradiction between Quantum Mechanics and Relativity.

John Bell absolutely did not dispute the work of the grandfathers of QM & SR/GR - he was much too intelligent for that. It's only cranky "scientists" like Crackpot Kracklauer who, without hesitating, dismiss QM & SR/GR as a "foundation" for their "next paradigm" in physics.

Now, if we continue the parallel, it's easy to see that IF Einstein had had the same "mentality" as Crackpot Kracklauer and billschnieder - he would NOT have been successful in formulating the theory of relativity.

A real crackpot would have started his work, at the beginning of the 20th century, by stating:

It's not plausible (I can feel it in my gut!) to imagine bodies affecting each other through vacuum at a distance! Therefore I shall prove that the mathematical genius Isaac Newton made a terrible mistake when he used a comma instead of a vertical bar, and consequently, Newton's law of universal gravitation is all false. Gravity does not exist. Period.

I shall also prove that there are no experiments proving the existence of Newton's law that have closed all loopholes simultaneously, and there never will be.

I don't know about you, but to me - this is all pathetic. It's clear that billschnieder, with Crackpot Kracklauer as the main source of inspiration, is undoubtedly arguing along these cranky lines above. And to some extent, so does ThomasT, even if he has changed his attitude lately.

So I agree, Bell's Theorem could very well be the sign for the "Next Einstein" to start working on an 'extension' to QM & SR/GR that would make them 100% compatible and, besides this, also provide a mechanism for what we see in current theories and thousands of performed experiments.

This "Next Einstein" must without any doubts include ALL THE WORK OF THE GRANDFATHERS, since in all history of science THIS HAS ALWAYS BEEN THE CASE.

Looking for commas and vertical bars is a hilarious permanent dead-end.
 
  • #983
DevilsAvocado said:
my_wan, could we make a parallel to the situation when Albert Einstein, over a hundred years ago, started to work on his theory of relativity? (Note, I'm not saying that you are Einstein! :smile:)

Einstein did not reject the work of Isaac Newton. In most ordinary circumstances, Newton's law of universal gravitation is perfectly valid, and Einstein knew that. The theory of relativity is 'merely' an 'extension' to extreme situations, where we need finer 'instrumentation'.

I have not provided any well defined mechanisms to equate in such a way. Certainly any such future models can't simply reject the standard model on the grounds of some claim of ontological 'truth'. That is raw crackpottery, even if they are right in some sense. There's a term for that: "not even wrong".

The notion that a particular ontological notion of realism, predicated on equating properties with localized things (localized not meant in an FTL sense here), can be generalized over the entire class called realism simply exceeds what the falsification of that one definition, with its ontological predicates, justifies.

The individual issues I attempted to discuss were considered incomprehensible when viewed from an ontological perspective they weren't predicated on. Well duh, no kidding. I only hoped to get some criticism on the points, irrespective of what they entailed in terms of realism, to help articulate such issues more clearly. But so long as responses are predicated on some singular ontological notion of realism, as if it fully defined "realism", the validity of BI within that ontological context ensures the discussion will go nowhere. I'll continue to investigate such issues myself.

My core point, the overgeneralization of BI local realism to all realism classes, remains valid. Being convinced of the general case by a proof of a limited case is, at a fundamental level, tantamount to proof by lack of evidence. It is therefore invalid, but might not be wrong. I certainly haven't demonstrated otherwise.
 
  • #984
my_wan said:
I have not provided any well defined mechanisms to equate in such a way. Certainly any such future models can't simply reject the standard model on the grounds of some claim of ontological 'truth'. That is raw crackpottery, even if they are right in some sense. There's a term for that: "not even wrong".

I agree, I agree very much. I think your 'agenda' is interesting and healthy. billschnieder on the other hand... well, those words are not allowed here...
 
  • #985
ThomasT said:
JesseM, regarding intellectual humility, don't ever doubt that I'm very thankful that there are educated people like you and DrC willing to get into the details, and explain your current thinking to feeble minded laypersons, such as myself, who are interested in and fascinated by various physics conundrums.

Welcome to the club, ThomasT! I'm glad that you have finally stepped down from the "sophisticated" throne and become an open-minded "wonderer" like many others in this thread, with respect for professionals with much greater knowledge. :wink:

ThomasT said:
You've said that Bell's(2) isn't about entanglement.

No, JesseM didn't say that - I made that layman's simplification. JesseM wanted more details:

JesseM said:
Basically I'd agree, although I'd make it a little more detailed: (2) isn't about entanglement, it's about the probabilities for different combinations of A and B (like A=spin-up and B=spin down) for different combinations of detector settings a and b (like a=60 degrees, b=120 degrees), under the assumption that there is a perfect correlation between A and B when both sides use the same detector setting, and that this perfect correlation is to be explained in a local realist way by making use of hidden variable λ.

The key is: Bell's (2) is about perfect correlation, explained in a local realist way, using the Hidden variable λ.

ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and used by some to be synonymous with quantum entanglement.

The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results. But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.) I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.

...

Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl? Stated simply by DA, and which you (and I) agree with:

ThomasT, I see you and billschnieder spending hundreds of posts trying to disprove Bell's (2) with various farfetched arguments, believing that if Bell's (2) can be proven wrong – then Bell's Theorem and all other work done by Bell will go down the drain, including nonlocality.

I'm only a layman, but I think this is terribly wrong, and I think I can prove it to you in a very simple way.

But first, let's start from the beginning – to be sure that we are indeed talking about the same matters:

After a long debate between Albert Einstein and Niels Bohr, about the uncertain nature of QM, Einstein finally formulated the EPR paradox in 1935 (together with Boris Podolsky and Nathan Rosen).

The aim of the EPR paradox was to show that there was a preexisting reality at the microscopic QM level - that the QM particles indeed had a real value before any measurements were performed (thus disproving the Heisenberg uncertainty principle, HUP).

To make the EPR paper extremely short: if we know the momentum of a particle, then by measuring the position of a twin particle, we would know both momentum & position for a single QM particle - which according to the HUP is impossible information, and thus Einstein had proven QM to be incomplete ("God does not play dice").

Okay? Do you agree?


Einstein & Bohr could never settle this dispute as long as they lived (which bothered Bohr throughout his whole life). And as far as I understand, Einstein in his last years became more 'uneasy' with the signs of nonlocality than with the original question of the uncertain nature of QM.

Thirty years after the publication of the EPR paradox, John Bell entered the scene. To my understanding, Bell was hoping that Einstein was right, but being the real scientist that he was, he didn't hesitate to publish what he had found – even if this knowledge was a contradiction to his own 'personal taste'.

In the original paper from 1964, Bell formulates in Bell's (2) the mathematical probabilities representing the vital assumption made by Einstein in 1949 regarding the EPR paradox:

"But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of system S2 is independent of what is done with system S1, which is spatially separated from the former."

In Bell's (3) he writes down the corresponding QM expectation value, and then he states, in the third line after Bell's (2):

"BUT IT WILL BE SHOWN THAT THIS IS NOT POSSIBLE"

(my caps+bold)​

Do you understand why we get upset when you and billschnieder argue the way you do? You are urging PF users to read cranky papers - while you & billschnieder obviously haven't read, or understood, the original Bell paper that this is all about??

Do you really think that John Bell was incapable of formulating the probabilities for getting spin up/down from a local preexisting hidden variable? Or the odds of getting a red/blue card out of a box? If we apply Bell's (2) to the "card trick", we would get 0.25, according to billschnieder, instead of 0.5!? The same man who undoubtedly discovered something that both geniuses Albert Einstein and Niels Bohr missed completely? Do you really think that this is a healthy, non-cranky argument to spend hundreds of posts on??!?

Never mind. Forget everything you have (not) "learned". Forget everything and start from scratch. Because now I'm going to show you that there is a problem with locality in EPR, with or without Bell & BI. And we are only going to use your personal favorite – Malus' law.

(Hoping that you didn't have too many hotdogs & beers tonight? :rolleyes:)



Trying to understand nonlocality - only with Malus' law, and without BI!

Malus' law: I = I_0 cos^2(θ_i)

Meaning that the intensity (I) is given by the initial intensity (I_0) multiplied by cos^2 of the angle between the light's initial polarization direction and the axis of the polarizer (θ_i).

Translated to QM and one single photon, we get the probability of getting through the polarizer as cos^2(θ_i).

If 6 photons have polarization direction 0º, we will get these results at different polarizer angles:

Code:
[B]Angle	Perc.	Result[/B]
----------------------
0º	100%	111111
22.5º	85%	111110
45º	50%	111000
67.5º	15%	100000
90º	0%	000000

1 denotes that the photon got through and 0 that it was stopped. As you can see, this is 100% compatible with Malus' law and the intensity of polarized light.
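
Just to show there is no magic in the table, here is a minimal Python sketch (purely my own toy illustration, nothing from any real experiment) that reproduces it straight from Malus' law, rounding the expected count to whole photons:

Code:
import math

angles = [0, 22.5, 45, 67.5, 90]   # polarizer angles in degrees
n_photons = 6                      # all photons polarized at 0 degrees

for a in angles:
    p = math.cos(math.radians(a)) ** 2             # Malus' law pass probability
    passed = round(n_photons * p)                   # expected number that get through
    pattern = "1" * passed + "0" * (n_photons - passed)
    print(f"{a:5.1f}  {p:4.0%}  {pattern}")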

In experiments with entangled photons, the parameters are tuned and adjusted to the laser polarization, to create the state |Ψ_EPR>, and the coincidence counts N(0º,0º), N(90º,90º) and N(45º,45º) are checked to be accurate.

As you see, not one word about Bell or BI so far, it’s only Malus' law and EPR.

Now, if we run 6 entangled photons for Alice & Bob with both polarizers at 0º, we will get something like this:

Code:
[B]A(0º) 	B(0º)	Correlation[/B]
---------------------------
101010	101010	100%

The individual outcome for Alice & Bob is perfectly random. It's the correlation that matters. If we run the same test once more, we could get something like this:

Code:
[B]A(0º) 	B(0º)	Correlation[/B]
---------------------------
001100	001100	100%

This time we have a different individual outcome, but the same perfect correlation statistics.

The angle of the polarizers does not affect the result, as long as they are the same. If we set both to 90º we could get something like this:

Code:
[B]A(90º) 	B(90º)	Correlation[/B]
---------------------------
110011	110011	100%

Still 100% perfect correlation.

(In fact, the individual outcome for Alice & Bob can be any of the 64 combinations [2^6] at any angle, as long as they are identical, when the two angles are identical.)

As you might have guessed, there is absolutely no problem explaining what is happening here by a local "phenomenon". I can write a computer program in 5 min that will perfectly emulate this physical behavior. All we have to do is give the entangled photon pair the same random preexisting local value, and let them run to the polarizers. No problem.
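
Here is roughly what that 5-minute program would look like (a minimal Python sketch of my own, assuming, as in this restricted scenario, that both polarizers always sit at the same angle):

Code:
import random

def run_pairs(n_pairs, angle):
    """Each pair carries one shared, preexisting random value set at the source.
    Both detectors use the SAME angle, so each simply outputs the shared value;
    the angle is deliberately ignored, because in this scenario it doesn't matter."""
    alice, bob = [], []
    for _ in range(n_pairs):
        shared = random.randint(0, 1)   # the common local hidden value
        alice.append(shared)
        bob.append(shared)
    matches = sum(a == b for a, b in zip(alice, bob))
    return alice, bob, matches / n_pairs

print(run_pairs(6, angle=0))    # e.g. ([1,0,1,...], [1,0,1,...], 1.0) - always 100%
print(run_pairs(6, angle=90))   # still 100%, whatever the (common) angle is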

Now, let's make things a little more 'interesting'. Let's assume that Alice's polarizer will stay fixed at angle 0º and that Bob's polarizer will have any random value between 0º and 90º. To not make things too complicated at once, we will only check the outcome when Alice gets a photon through = 1.

What will the probabilities be for Bob, at all these different angles? Is it at all possible to calculate? Can we make a local prediction?? Well YES!

Code:
[B]Bob	Corr.	Result[/B]
----------------------
0º	100%	111111
22.5º	85%	111110
45º	50%	111000
67.5º	15%	100000
90º	0%	000000

WE RUN MALUS' LAW! And it works!

Obviously at angles 0º and 90º the individual photon outcome must be exactly as above. For any other angle, the individual photon outcome is random, but the total outcome for all 6 photons must match Malus' law.

But ... will this work even when we count Alice = 0 at 0º ... ??

Sure! No problem!

All we have to do is to check locally if Alice is 0 or 1, and mirror the probabilities according to Malus' law. If Alice = 0 we will get this for Bob:

Code:
[B]Bob	Corr.	Result[/B]
----------------------
0º	100%	000000
22.5º	85%	100000
45º	50%	111000
67.5º	15%	111110
90º	0%	111111

Can I still write a computer program that perfectly emulates this physical behavior? Sure! It will maybe take 15 min this time, but all I have to do is to assign Malus' law locally to Bob's photon, with respect to Alice's random value 1/0, and let the photons run to the polarizers. No problem.

We should note that Bob's photons in this scenario will not have a preexisting local value before leaving the common source. All Bob's photons will get is Malus' law, 'adapted' to Alice's preexisting local value 1 or 0.

I don't know if this qualifies as local realism, but it would work mathematically, and could be emulated perfectly in a computer program.
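
For what it's worth, here is a sketch of that 15-minute program (my own toy Python, and it only works because Alice's angle is fixed at 0º and known at the source, exactly as assumed above):

Code:
import math, random

def run(theta_bob_deg, n_pairs=100_000):
    p = math.cos(math.radians(theta_bob_deg)) ** 2   # Malus' law for Bob's angle
    matches = 0
    for _ in range(n_pairs):
        alice = random.randint(0, 1)                  # Alice's preexisting value at 0 deg
        # Bob's photon is stamped with Malus' law 'mirrored' on Alice's value:
        bob = alice if random.random() < p else 1 - alice
        matches += (alice == bob)
    return matches / n_pairs

for theta in (0, 22.5, 45, 67.5, 90):
    print(theta, round(run(theta), 2))   # ~1.00, 0.85, 0.50, 0.15, 0.00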

And please note: Not one word about Bell or BI so far, only Malus' law and EPR.


BUT NOW IT'S TIME FOR THAT 'LITTLE THING' THAT CHANGES EVERYTHING! :devil:

Are you ready ThomasT? This is the setup:

Alice & Bob are separated by 20 km. The source creating entangled photon pairs is placed in the middle, 10 km from Alice and 10 km from Bob.

The polarizers at Alice & Bob rotate independently and randomly, at very high speed, between 0º and 90º.

It takes light 66 microseconds (10^-6 s) to travel the 20 km (in vacuum) from Alice to Bob.

The total time for electronic and optical processes in the path of each photon at the detector is calculated to be approximately 100 nanoseconds (10^-9 s).

Now the crucial question is - can we do anything at the local source to 'save' the statistics at polarizers separated by 20 km? Can we use any local hidden variable or formula, or some other unknown 'magic'?? Could we maybe use the 'local' Malus' law even in this scenario to 'fix it'??

I say definitely NO. (What would that be?? A 20 km long Bayesian-probability-chain-rule? :eek:)

WHY!?

BECAUSE WE DO NOT KNOW WHAT ANGLE THE TWO POLARIZERS SEPARATED BY 20 KM WILL HAVE UNTIL THE LAST NANOSECONDS AND IT TAKES 66 MICROSECONDS FOR ALICE & BOB TO EXCHANGE ANY INFORMATION.

ThomasT, I will challenge you on the 'easiest' problem we have here - to get a perfect correlation (100%) when Alice & Bob measure the entangled photon pairs at the same angle. That's all.

Could you write a simple computer program, or explain in words and provide some examples of the outcome for 6 pairs of photons, as I have done above, how this could be achieved without nonlocality or FTL?

(Philosophical tirades on "joint probabilities" etc are unwarranted, as they don't mean anything practical.)

If you can do this, and explain it to me, I promise you that I will start a hunger strike outside the door of the Royal Swedish Academy of Sciences, until you get the well deserved Nobel Prize in Physics!

AND REMEMBER – I HAVE NOT MENTIONED ONE WORD ABOUT BELL OR BI !

Good luck!


P.S. Did I say that you are not allowed to get perfect correlation (100%) anywhere else in your example, when the angles differ? And "weird" interpretations don’t count. :biggrin:
 
Last edited:
  • #986
Question about the double slit experiment.

So detectors placed at the slits create the wave function collapse of the photon! Why doesn't the actual slit experiment itself create the wave function collapse?
 
  • #987
I'm curious if it's possible to create polarization-entangled beams in which each beam can have some statistically significant non-uniform polarization. The shutter idea I suggested breaks the inseparability condition, collapses the wavefunction so to speak. Yet it still might be worth looking at in some detail.

Anybody know what kind of effects a PBS would have on the polarization of a polarized beam that is passed through it? Would each resulting beam individually retain some preferential polarization?

Rabbitrabbit,
Not really sure what you're asking. It appears a bit off topic in this thread. The interference pattern - the locations and distribution of the individual points of light - doesn't tell you which hole the photons came through. So how can it collapse the wave function of something that can't be known from the photon detections? This thread is probably not the best place for such a discussion.
 
  • #988
Do BI violations require, to be accounted for, an oversampling of the "full universe" relative to the (max) classical limit? This may be an entirely separate argument from the locality or realism issues, but the answer is no. Here's why.

Pick any offset, such as 22.5º, and note the over-count relative to the (max) classical limit, 10.36% in this case. Now for every unique angle at which the coincidences exceed the classical limit, there exists a one-to-one correspondence to a unique angle that undercounts the (max) classical limit by that same percentage. In the example given it's 67.5º. Quantitatively equivalent angles, of course, exist in each quadrant of the coordinate system, but a truly unique one-to-one correspondence exists in each quadrant alone, for a given coordinate choice.

This, again, doesn't involve or make any claims about the capacity for a classical model to mimic product state statistics. What it does prove is that a coincidence average over all possible settings, involving BI violations, does not exceed the coincidence average, over all settings, given a classical 'maximum' per Bell's ansatz. They are equal when averaged over all settings.
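
As a concrete check, here are a few lines of Python (my own sketch; I'm taking the linear curve 1 - θ/90 as the 'max' classical limit, an assumption on my part, chosen because it reproduces the 10.36% figure above):

Code:
import math

def qm(theta_deg):         # QM coincidence rate
    return math.cos(math.radians(theta_deg)) ** 2

def classical(theta_deg):  # assumed (max) classical limit: linear 1 - theta/90
    return 1.0 - theta_deg / 90.0

for theta in (22.5, 67.5):
    print(theta, round(qm(theta) - classical(theta), 4))   # +0.1036 and -0.1036

# The over-count at 22.5 and the under-count at 67.5 cancel,
# since cos^2(22.5) + cos^2(67.5) = 1 and 0.75 + 0.25 = 1.
print(round((qm(22.5) - classical(22.5)) + (qm(67.5) - classical(67.5)), 12))  # essentially zero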

The point of this is that I agree that the "unfair sample" argument isn't valid. By this I mean that the notion that you can account for the observed relative variations by assuming that a sufficient portion of the events go undetected is incongruent with experimental constraints. However, other forms of sampling argument can also in general be defined as an "unfair sampling" argument, which don't necessarily involve missing detections. Thus it may not always be valid to invoke the illegitimacy of the missing-detection "unfair sampling" argument against every "fair sampling" argument.

In fact the only way to rule out all possible forms of a sampling argument is to demonstrate that the sum of all coincidences over all possible detector settings exceeds the classical maximum limit. Yet the above argument proves they are exactly equivalent in this one respect.

Any objections?
 
  • #989
DevilsAvocado said:
... ThomasT, I will challenge you on the 'easiest' problem we have here - to get a perfect correlation (100%) when Alice & Bob measures the entangled photon pairs at the same angle. That's all.

Could you write a simple computer program, or explain in words and provide some examples of the outcome for 6 pair of photons, as I have done above, how this could be achieved without nonlocality or FTL?
...
If you can do this, and explain it to me, I promise you that I will start a hunger strike outside the door of the Royal Swedish Academy of Sciences, until you get the well deserved Nobel Prize in Physics!


OMG! I have to give the Nobel to myself! :smile:

Sorry... :redface:

All we have to do is to assign Malus' law to both Alice & Bob (mirrored randomly 1/0), and this will work fine for checking perfect correlation (100%) at the same angle:

Code:
[B]Angle	Bob	Alice	Correlation[/B]
-----------------------------------
0º	111111	111111	100%
22.5º	111110	111110	100%
45º	111000	111000	100%
67.5º	100000	100000	100%
90º	000000	000000	100%


The 'problems' only occur when we have different angles for Alice & Bob (except 0º/90º):

Code:
[B]A 67.5º	B 22.5º	Correlation[/B]
---------------------------
100000	111110	33%

Here the difference is 67.5 - 22.5 = 45º and the correlation should be 50%; it also depends on the individual outcomes, since the following will give 0% correlation (instead of the correct 50%):

Code:
[B]A 67.5º	B 22.5º	Correlation[/B]
---------------------------
000001	111110	0%


Well, something more to consider... it’s apparently possible to solve the perfect correlation locally... and maybe that’s what Bell has been telling us all the time! :biggrin:
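
For concreteness, here is a minimal Python sketch (my own toy) of that 'mirrored Malus' assignment: it reproduces the 100% correlation whenever the two angles are equal, but, as the tables above show, it gives the wrong correlation once the angles differ:

Code:
import math

def pattern(theta_deg, n=6, flip=False):
    """Deterministic per-angle pattern from Malus' law, shared by the pair;
    'flip' is the pair's shared random bit that mirrors 1s and 0s."""
    k = round(n * math.cos(math.radians(theta_deg)) ** 2)
    bits = [1] * k + [0] * (n - k)
    return [1 - b for b in bits] if flip else bits

def correlation(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(correlation(pattern(67.5), pattern(67.5)))   # 1.0 at equal angles, as required
print(correlation(pattern(67.5), pattern(22.5)))   # ~0.33, but QM demands cos^2(45) = 0.5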

Sorry again. :blushing:
 
Last edited:
  • #990
my_wan said:
... In fact the only way to rule out all possible forms of a sampling argument is to demonstrate that the sum of all coincidences over all possible detector settings exceeds the classical maximum limit. Yet the above argument proves they are exactly equivalent in this one respect.

Any objections?

The "fair sampling assumption" is also called the "no-enhancement assumption", and I think that is a much better term. Why should we assume that nature has an unknown "enhancement" mechanism that filter out those photons, and only those, who would give us a completely different experimental result!?

Wouldn't that be an even stranger "phenomenon" than nonlocality?:bugeye:?

And the same logic goes for "closing all loopholes at once". Why should nature choose to expose different weaknesses in different experiments, which are closed separately??

It doesn’t make sense.
 
  • #991
Here's a particular case where the fair sampling / "full universe" objection may not be valid, in the thread:
https://www.physicsforums.com/showthread.php?t=369286
DrChinese said:
Strangely, and despite the fact that it "shouldn't" work, the results magically appeared. Keep in mind that this is for the "Unfair Sample" case - i.e. where there is a subset of the full universe. I tried for 100,000 iterations. With this coding, the full universe for both setups - entangled and unentangled - was Product State. That part almost makes sense, in fact I think it is the most reasonable point for a full universe! What doesn't make sense is the fact that you get Perfect Correlations when you have random unknown polarizations, but get Product State (less than perfect) when you have fixed polarization. That seems impossible.

However, by the rules of the simulation, it works.

Now, does this mean it is possible to violate Bell? Definitely not, and they don't claim to. What they claim is that a biased (what I call Unfair) sample can violate Bell even though the full universe does not. This particular point has not been in contention as far as I know, although I don't think anyone else has actually worked out such a model. So I think it is great work just for them to get to this point.

Here "unfair sampling" was equated with a failure to violate BI, while the "full universe" was invoked to differentiate between BI and the and a violation of BI. Yet, as I demonstrated in https://www.physicsforums.com/showthread.php?p=2788956#post2788956", the BI violations of QM, on average of all setting, does not contain a "full universe" BI violation.

Let's look at a more specific objection, to see why the "fair sampling" objection may not be valid:
DrChinese said:
After examining this statement, I believe I can find an explanation of how the computer algorithm manages to produce its results. It helps to know exactly how the bias must work. :smile: The De Raedt et al model uses the time window as a method of varying which events are detected (because that is how their fair sampling algorithm works). That means, the time delay function must be - on the average - such that events at some angle settings are more likely to be included, and events at other angle setting are on average less likely to be included.

Here it was presented 'as if' event detection failures represented a failure to detect photons. This is absolutely not the case. The detection accuracy, of photons, remained constant throughout. Only the time window in which they were detected varied, meaning there were no missing detections, only a variation in whether said detections fell within a coincidence window or not. Thus the perfectly valid objection to using variations in detection efficiency (unfair sampling) does not apply to all versions of unfair sampling. The proof provided in https://www.physicsforums.com/showthread.php?p=2788956#post2788956 tells us QM BI violations are not "full universe" BI violations either.
 
Last edited by a moderator:
  • #992
DevilsAvocado said:
The "fair sampling assumption" is also called the "no-enhancement assumption", and I think that is a much better term. Why should we assume that nature has an unknown "enhancement" mechanism that filter out those photons, and only those, who would give us a completely different experimental result!?

Wouldn’t that be an even stranger "phenomena" than nonlocality?:bugeye:?

And the same logic goes for "closing all loopholes at once". Why nature should chose to expose different weaknesses in different experiments? That is closed separately??

It doesn’t make sense.

That depends on what you mean by "enhancement". If by "enhancement" you mean that a summation over all possible, or "full universe", choices of measurement settings leads to an excess of detection events, then yes, I would agree. But the point of post #988 was that the BI violations defined by QM, and measured, do not "enhance" detection totals over the classical limit when averaged over the "full universe" of detector settings.

That is, for every detector setting choice which exceeds the classical coincidence limit, there provably exists another choice where coincidences fall below the classical coincidence limit by the exact same amount.

22.5º and 67.5º are one such pair, since cos^2(22.5) + cos^2(67.5) = 1. These detection variances are such that there exists an exact one-to-one correspondence between overcount angles and quantitatively identical undercount angles, such that, averaged over all possible settings, the QM and classical coincidence limits exactly match.
 
  • #993
To make the difference between an experimentally invalid "unfair sampling" argument, involving detection efficiencies, and more general "fair sampling" arguments clearer, consider:

You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. In both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and it cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.
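
To make the distinction concrete, here is a toy Python sketch of my own. The delay rule is invented purely for illustration (it is not De Raedt's model and makes no attempt to reproduce the QM statistics); the only point is that every photon is detected, yet the coincidence count can still depend on the settings:

Code:
import random

WINDOW_NS = 5.0   # hypothetical coincidence window

def detection_time(photon_angle, polarizer_angle):
    """Invented rule: the photon is ALWAYS detected, but registration is delayed
    by an amount that depends on the photon/polarizer misalignment."""
    offset = abs(photon_angle - polarizer_angle) % 90
    return (offset / 90.0) * 10.0 * random.random()   # delay in ns, illustrative only

def coincidence_rate(alice_angle, bob_angle, n_pairs=100_000):
    hits = 0
    for _ in range(n_pairs):
        lam = random.uniform(0, 180)           # shared polarization of the pair
        t_a = detection_time(lam, alice_angle)
        t_b = detection_time(lam, bob_angle)
        hits += abs(t_a - t_b) < WINDOW_NS     # both detected; coincident only if close in time
    return hits / n_pairs

# every photon detected in both runs; the coincidence rates need not be equal
print(coincidence_rate(0, 0), coincidence_rate(0, 45))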
 
  • #994
There's a comparison I'd like to make between the validity of BI violations applied to realism and the validity of objections to fair sampling arguments.

When I claim that the implications of BI are valid but often overgeneralized, the exact same thing is happening as when the demonstrable invalidity of "unfair sampling" involving detection efficiencies is overgeneralized to improperly invalidate all "fair sampling" arguments.

The point here is that you are treading in dangerous territory when you attempt to apply a proof involving a class instance to make claims about an entire class. Doing so technically invalidates the claim, whether you are talking about the "fair sampling" class or the "realism" class. Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Of course you can try and object to my refutation of the invalidity of "fair sampling" when such "fair sampling" doesn't involve less than perfect detection efficiencies. :biggrin:
 
  • #995
my_wan said:
There's a comparison I'd like to make between the validity of BI violations applied to realism and the validity of objections to fair sampling arguments.

When I claim that the implications of BI are valid but are often overgeneralized, the exact same thing happened, in which the demonstrable invalidity of "unfair sampling", involving detection efficiencies, is overgeneralized to improperly invalidate all "fair sampling" arguments.

The point here is that you are treading in dangerous territory when you attempt to apply a proof involving a class instance to make claims about an entire class. Doing so technically invalidates the claim, whether you are talking about the "fair sampling" class or the "realism" class. Class instances by definition contains constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Of course you can try and object to my refutation of the invalidity of "fair sampling" when such "fair sampling" doesn't involve less than perfect detection efficiencies. :biggrin:


Dear my_wan;

This is very interesting to me. I would love to see some expansion on the points you are advancing, especially about this:

Class instances by definition contains constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Many thanks,

JenniT
 
  • #996
You can find examples in set theory.
This is a tricky subject, closely connected to the Axiom of Choice (if I understood the idea correctly).

For example, you can write any real number, given you as input. However, as power of continuum is higher than the power of integers, there are infinitely many real numbers, which can't be given and an example. You can even provide a set of real numbers, defined in a tricky way so you can't give any examples of the numbers, belonging to that set, even that set covers [0,1] almost everywhere and it has infinite number of members!

Imagine: set of rational numbers. For example, 1/3
Set of transcendental numbers, for example pi.
Magic set I provide: no example can be given

It becomes even worse when some properties belong exclusively to that 'magic' set. See the Banach-Tarski paradox as an example. No example of that weird splitting can be provided (because if one could do it, then the theorem could be proven without AC).
 
  • #997
my_wan said:
... Here it was presented 'as if' event detections failures represented a failure to detect photons. This is absolutely not the case. The detection accuracy, of photons, remained constant throughout. Only the time window in which they were detected varied, meaning there was no missing detections, only a variation of whether said detections fell within a coincidence window or not. Thus the perfectly valid objection to using variations in detection efficiency (unfair sampling) does not apply to all versions of unfair sampling. The proof provided in https://www.physicsforums.com/showthread.php?p=2788956#post2788956" tells us QM BI violations are not "full universe" BI violation either.

Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:

[attached image: excerpt of the simulation code involving r0]


I don’t think this has much to do with real experiments – this is a case of trial & error and "fine-tuning".

One thing that I find 'peculiar' is that the angles of the detectors are not independently random; angle1 is random, but angle2 is always at a fixed offset...?:confused:?

To me this does not look like the "real thing"...

Code:
' Initialize the detector settings used for all trials for this particular run - essentially what detector settings are used for "Alice" (angle1) and "Bob" (angle2)
If InitialAngle = -1 Then
  angle1 = Rnd() * Pi ' set as being a random value
  Else
  angle1 = InitialAngle ' if caller specifies a value
  End If
angle2 = angle1 + Radians(Theta) ' fixed value offset always
angle3 = angle1 + Radians(FixedOffsetForChris) ' a hypothetical 3rd setting "Chris" with fixed offset from setting for particle 1, this does not affect the model/function results in any way - it is only used for Event by Event detail trial analysis

...

For i = 1 To Iterations:

  If InitialAngle = -2 Then ' SPECIAL CASE: if the function is called with -2 for InitialAngle then the Alice/Bob/Chris observation settings are randomly re-oriented for each individual trial iteration.
    angle1 = Rnd() * Pi ' set as being a random value
    angle2 = angle1 + Radians(Theta) ' fixed value offset always
    angle3 = angle1 + Radians(FixedOffsetForChris) ' a hypothetical 3rd setting "Chris" with fixed offset from setting for particle 1, this does not affect the model/function results in any way - it is only used for Event by Event detail trial analysis
    End If

...
 
Last edited by a moderator:
  • #998
Dmitry67 said:
You can find examples in the set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).

For example, you can write any real number, given you as input. However, as power of continuum is higher than the power of integers, there are infinitely many real numbers, which can't be given AS an example. You can even provide a set of real numbers, defined in a tricky way so you can't give any examples of the numbers, belonging to that set, even IF that set covers [0,1] almost everywhere and it has infinite number of members!

Imagine: set of rational numbers. For example, 1/3
Set of transcendental numbers, for example pi.
Magic set I provide: no example can be given

It becomes even worse when some properties belong exclusively to that 'magic' set. See Banach-Tarski paradox as an example. No example of that weird splitting can be provided (because if one could do it then the theorem could be proven without AC)

Dear Dmitry67, many thanks for the quick reply. I put 2 small edits in CAPS above.

Hope that's correct?

But I do not understand your "imagine" ++ example.

Elaboration in due course would be nice.

Thank you,

JenniT
 
  • #999
my_wan said:
That depends on what you mean by "enhancement".
It means exactly the same as the "fair sampling assumption": that the sample of detected pairs is representative of the pairs emitted.

I.e. we are not assuming that nature is really a tricky bastard, by constantly not showing us the "enhancements" that would spoil all EPR-Bell experiments, all the time. :biggrin:

my_wan said:
the "full universe" of detector settings.
What does this really mean??

my_wan said:
That is that for ever detector setting choice which exceeds the classical coincidence limit, there provably exist another choice where coincidences fall below classical coincidence limit, by the exact same amount.

22.5 and 67.5 is one pair such that cos^2(22.5) + cos^2(67.5) = 1. These detection variances are such that there exist an exact one to one ratio between overcount angles and quantitatively identical undercount angles, such that averaged over all possible setting QM and the classical coincidence limits exactly match.

my_wan, no offence – but is this the "full universe" of detector settings?:bugeye:?

I don’t get this. What on Earth has cos^2(22.5) + cos^2(67.5) = 1 to do with the "fair sampling assumption"...?

Do you mean that we are constantly missing photons that would, if they were measured, always set correlation probability to 1?? I don’t get it...
 
  • #1,000
my_wan said:
... You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. Now in both cases you had a 100% detection rate, so "fair sampling", defined in terms of detections efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and can not be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.

I could be wrong (as last time when promising a Nobel o:)). But to my understanding, the question of "fair sampling" is mainly a question of assuming – even if we only have 1% detection efficiency – that the sample we do get is representative of all the pairs emitted.

To me, this is as natural as when you grab a handful of white sand on a white beach: you don't assume that every grain of sand that you didn't get into your hand... is actually black! :wink:
 
  • #1,001
DevilsAvocado said:
Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:
I'm in the process of reviewing De Raedt's work. I'm not convinced by his argument; the physical interpretation is quite a bit more complex. They even made the observation:
http://arxiv.org/abs/1006.1728 said:
The EBCM is entirely classical in the sense that it uses concepts of the macroscopic world and makes no reference to quantum theory but is nonclassical in the sense that it does not rely on the rules of classical Newtonian dynamics.

My point does not depend on any such model, working or not, or even on whether or not the claim itself was ultimately valid. I argued against the validity only of the argument itself, not its claims. My point was limited to the over-generalization of interpreting the obvious invalidity of a "fair sampling" involving detection efficiencies to all "fair sampling" arguments that assume nothing less than perfect detection efficiencies.

In this way, my argument is not dependent on De Raedt's work at all; it only came into play as an example involving DrC's rebuttal, which inappropriately generalized "fair sampling" as invalid on the basis that a class instance of "fair sampling" that assumes insufficient detection efficiencies is invalid.

DevilsAvocado said:
I don’t think this has much to do with real experiments – this is a case of trial & error and "fine-tuning".

One thing that I find 'peculiar' is the case that the angles of the of the detectors are not independently random, angle1 is random but angle2 is always at fixed value offset...?:confused:?

To me this does not look like the "real thing"...
Yes, it appears to suffer in the same way my own attempts did, but I haven't actually gotten that far yet in the review. If the correspondence holds, then he accomplished algebraically what I did with a quasi-random distribution of a bit field. However, when you say angle2 is always at a fixed value offset, what is it always offset relative to? You can spin the photon source emitter without effect, so it's not a fixed value offset relative to the source emitter. It's not a fixed value offset relative to the other detector. In fact, the fixed offset is relative to an arbitrary non-physical coordinate choice, which itself can be arbitrarily chosen.

I still need a better argument to fully justify this non-commutativity between arbitrary coordinate choices, but the non-commutativity of classical vector products may play a role.

Again, my latest argument is not predicated on De Raedt's work or any claim that BI violations can or can't be classically modeled. My argument was limited to, and only to, the application of the invalidity of an "unfair sampling" involving limited detection efficiencies to "unfair sampling" not involving any such limits in detection efficiencies. It's a limit on what can be claimed as a proof, and involves no statements about how nature is.
 
Last edited by a moderator:
  • #1,002
JenniT said:
Dear my_wan;

This is very interesting to me. I would love to see some expansion on the points you are advancing, especially about this:

Class instances by definition contains constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Many thanks,

JenniT
I gave an example in post #993, where I described two different "fair sampling" arguments: one involving variations in detection statistics, the other involving variations in detection timing. The point was not that either is a valid explanation of BI violations; the point was that proving the first instance is invalid in the EPR context does not rule out the second instance. Yet they are both members of the same class called "fair sampling" arguments. This was only an example, not a claim of a resolution to BI violations.

Dmitry67 said:
You can find examples in the set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).
Yes! I personally think it likely you have made a fundamental connection that goes a bit deeper than what I could do more than hint at in the context of the present debate. :biggrin:
 
  • #1,003
my_wan said:
DevilsAvocado said:
Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:
I'm in the process of reviewing De Raedt's work. I'm not convinced by his argument; the physical interpretation is quite a bit more complex.

In this way, my argument is not dependent of De Raedt's work at all, and only came into play as an example involving DrC's rebuttal which inappropriately generalized "fair sampling" as invalid, on the basis that a class instance of "fair sampling" that assumes insufficient detection efficiencies is invalid.

The De Raedt simulation is an attempt to demonstrate that there exists an algorithm whereby (Un)Fair Sampling leads to a violation of a BI - as observed - while the full universe does not (as required by Bell). They only claim that their hypothesis is "plausible" and do not really claim it as a physical model. A physical model based on their hypothesis would be falsifiable. Loosely, their idea is that a photon might be delayed going through the apparatus and the delay might depend on physical factors. Whether you find this farfetched or not is not critical to the success of their simulation. The idea is that it is "possible".

My point has been simply that it is not at all certain that a simulation like that of De Raedt can be successfully constructed. So that is what I am looking at. I believe that there are severe constraints and I would like to see these spelled out and documented. Clearly, the constraint of the Entangled State / Product State mentioned in the other thread is a tough one. But as of this minute, I would say they have passed the test.

At any rate, they acknowledge that Bell applies. They do not assert that the full universe violates a BI.
 
  • #1,004
DevilsAvocado said:
It means exactly the same as "fair sampling assumption": That the sample of detected pairs is representative of the pairs emitted.
Yes, the "fair sampling assumption" does assume the sample of detected pairs is representative of the pairs emitted, and assuming otherwise is incongruent with the experimental constraints, thus invalid. An alternative "fair sampling assumption" assumes that the time taken to register a detection is the same regardless of the detector offsets. The invalidity of the first "fair sampling assumption" does not invalidate the second "fair sampling assumption". It's doesn't prove it's valid either, but neither is the claim that the invalidity of the first example invalidates the second.

DevilsAvocado said:
I.e. we are not assuming that nature is really a tricky bastard, by constantly not showing us the "enhancements" that would spoil all EPR-Bell experiments, all the time. :biggrin:
Again, tricky how? We know it's tricky in some sense. Consider the event timing versus event detection rates in the above example. If you bounce a tennis ball off a wall, its return time depends on the angle at which it hits the wall in front of you. Its path length also depends on that angle. Is nature being "tricky" doing this? Is nature being "tricky" if it takes longer to detect a photon passing a polarizer at an angle than it takes if the polarizer has a common, or more nearly common, polarization as the photon? I wouldn't call that "tricky", any more than a 2-piece pyramid puzzle is. In years, only one person I've met who hadn't seen it before was able to solve it without help.
http://www.puzzle-factory.com/pyramid-2pc.html
We already know the speed of light is different in mediums with a different index of refraction.

DevilsAvocado said:
What does this really mean??
This was in reference to "full universe". DrC and I did use it in a slightly different sense. DrC used it to mean any possible 'set of' detector settings. I used it to mean 'all possible' detector settings. I'll explain the consequences in more detail below.

DevilsAvocado said:
my_wan, no offence – but is this the "full universe" of detector settings?:bugeye:?

I don’t get this. What on Earth has cos^2(22.5) + cos^2(67.5) = 1 to do with the "fair sampling assumption"...?

Do you mean that we are constantly missing photons that would, if they were measured, always set correlation probability to 1?? I don’t get it...
No, there are NO photon detections missing! Refer back to post #993. The only difference is in how fast the detection occurs, yet even this is an example, not a claim. If 2 photons hit 2 different detectors at the same time, but one of them takes longer to register the detection, then they will not appear correlated because they appeared to occur at 2 separate times. Not one of the detections is missing, only delayed.

Ok, here's the "full universe" argument again, in more detail.
The classical limit, as defined, sets a maximum correlation rate for any given setting offset. QM predicts, and experiments support, that for the offsets between 0 and 45 degrees the maximum classical limit is exceeded. QM also predicts that, for the angles between 45 and 90 degrees, the QM correlations are less than the classical limit. This is repeated on every 90-degree segment. If you add up all the extra correlations between 0 and 45 degrees, which exceed the classical limit, and add them to the missing correlations between 45 and 90 degrees, which the classical limit allows, you end up with ZERO extra correlations. Repeat for the other three 90-degree segments: 4 x 0 = 0. QM does not predict any extra correlations when you average over all possible settings. It only allows you to choose certain limited non-random settings where the classical limit is exceeded, which presents problems for classical models.
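
A quick numerical version of that statement (again taking the linear 1 - θ/90 curve as the classical limit, which is an assumption on my part):

Code:
import math

N = 100_000
thetas = [90.0 * k / N for k in range(N)]                    # offsets from 0 to 90 degrees
qm_avg = sum(math.cos(math.radians(t)) ** 2 for t in thetas) / N
cl_avg = sum(1.0 - t / 90.0 for t in thetas) / N
print(qm_avg, cl_avg)   # both ~0.5: averaged over all settings there is zero net excess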
 
Last edited:
  • #1,005
DrC,
Please note that my argument has nothing to do with the De Raedt simulation. It was merely an example of overextending the lack of validity of a fair sampling argument involving limited detection efficiencies to fair sampling arguments that could remain valid even if detection efficiencies were always absolutely perfect.
 
  • #1,006
DrC, two questions,
1) Do you agree that "fair sampling" assumptions exist, irrespective of validity, that do not involve the assumption that photon detection efficiencies are less than perfect?
2) Do you agree that, averaged over all possible settings, not just some chosen subset of settings, the QM and classical correlation limits lead to the same overall total number of detections?
 
  • #1,007
my_wan said:
However, when you say angle2 is always at fixed value offset, what is it always offset relative to?

angle2 = angle1 + Radians(Theta) ' fixed value offset always

And Theta is a (user-supplied) argument to the main function.

my_wan said:
Again, tricky how.
DevilsAvocado said:
To me, this is as natural as when you grab hand of white sand on a white beach, you don’t assume that every grain of sand that you didn’t get into your hand... is actually black! :wink:


my_wan said:
No, there are NO photon detections missing! Refer back to post #993. The only difference is in how fast the detection occurs, yet even this is an example, not a claim. If 2 photons hit 2 different detectors at the same time, but one of them takes longer to register the detection, then they will not appear correlated because they appeared to occur at 2 separate times. Not one of the detections is missing, only delayed.

Ahh! Now I get it! Thanks for explaining. My guess in this specific case is that it's very easy to change the detection window (normally 4-6 ns?) to look for dramatic changes... and I guess that in all of the thousands of EPR-Bell experiments, this must have been done at least once...? Maybe DrC knows?

my_wan said:
Ok, here's the "full universe" argument again, in more detail.
The classical limit, as defined, sets a maximum correlation rate for any given setting offset. QM predicts, and experiments support, that for the offsets between 0 and 45 degrees the maximum classical limit is exceeded. QM also predicts that, for the angles between 45 and 90 degrees, the QM correlations are less than the classical limit. This is repeated on every 90 degree segment.

Okay, you are talking about this curve, right?

[attached image: the correlation vs. angle curve]
 
  • #1,008
my_wan said:
If you add up all the extra correlations between 0 and 45 degrees, that exceed the classical limit, and add it to the missing correlations between 45 and 90 degrees, that the classical limit allows, you end up with ZERO extra correlations.

You could see it this way. You could also see it as: the very tricky nature then has to be wobbling between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...
 
  • #1,009
DA, nice recent (long) post, #985. Sorry for the delay in replying. I've been busy with holiday activities. Anyway, I see that there have been some replies to (and amendments or revisions by you of) your post. I've lost count of how many times I've changed my mind on how to approach understanding both Bell and entanglement correlations. One consideration involves the proper interpretation of Bell's work and results wrt LHV or LR models of entanglement. Another consideration involves the grounds for assuming nonlocality in nature. And yet another consideration involves approaches to understanding how light might be behaving in optical Bell tests to produce the observed correlations, without assuming nonlocality. The latter involves quantum optics. Unfortunately, QO doesn't elucidate instrument-independent photon behavior (i.e., what's going on between emission and filtration/detection). So, there's some room for speculation there (not that there's any way of definitively knowing whether a proposed, and viable, 'realistic' model of 'interim' photon behavior corresponds to reality). In connection with this, JenniT is developing an LR model in the thread on Bell's mathematics, and Qubix has provided a link to a proposed LR model by Joy Christian.

Anyway, it isn't like these are easy questions/considerations.

Here's a paper that I'm reading which you might be interested in:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.2097v2.pdf

And here's an article in the Stanford Encyclopedia of Philosophy on the EPR argument:

http://plato.stanford.edu/entries/qt-epr/#1.2

Pay special attention to Einstein on locality/separability, because it has implications regarding why Bell's LHV ansatz might be simply an incorrect model of the experimental situation rather than implying nonlocality in nature.

Wrt your exercises illustrating the difficulty of understanding the optical Bell test correlations in terms of specific polarization vectors -- yes, that is a problem. It's something that probably most, or maybe all, of the readers of this thread have worked through. It suggests a few possibilities: (1) the usual notion/'understanding' of polarization is incorrect or not a comprehensive physical description, (2) the usual notion/'understanding' of spin is incorrect or not a comprehensive physical description, (3) the concepts are being misapplied or inadequately/incorrectly modeled, (4) the experimental situation is being incorrectly modeled, (5) the dynamics of the reality underlying instrumental behavior is significantly different from our sensory reality/experience, (6) there is no reality underlying instrumental behavior or underlying our sensory reality/experience, etc., etc. My current personal favorites are (3) and (4), but, of course, that could change. Wrt fundamental physics, while there's room for speculation, one still has to base any speculations on well-established physical laws and dynamical principles which are, necessarily, based on real physical evidence (i.e., instrumental behavior, and our sensory experience, our sensory apprehension of 'reality' -- involving, and evolving according to, the scientific method of understanding).

And now, since I have nothing else to do for a while, I'll reply to a few of your statements. Keep a sense of humor, because I feel like being sarcastic.

DevilsAvocado said:
ThomasT, I see you and billschnieder spend hundreds of posts in trying to disprove Bell's (2) with various farfetched arguments, believing that if Bell's (2) can be proven wrong – then Bell's Theorem and all other work done by Bell will go down the drain, including nonlocality.
My current opinion is that Bell's proof of the nonviability of his LHV model of entanglement doesn't warrant the assumption of nonlocality. Why? Because, imo, Bell's (2) doesn't correctly model the experimental situation. This is what billschnieder and others have shown, afaict. There are several conceptually different ways to approach this, and so there are several conceptually different ways of showing this, and several conceptually different proposed, and viable, LR, or at least Local Deterministic, models of entanglement.

If any of these approaches is eventually accepted as more or less correct, then, yes, that will obviate the assumption of nonlocality, but, no, that will not flush all of Bell's work down the drain. Bell's work was pioneering, even if his LHV ansatz is eventually accepted as not general and therefore not implying nonlocality.

DevilsAvocado said:
The aim of the EPR paradox was to show that there was a preexisting reality at the microscopic QM level - that the QM particles indeed had a real value before any measurements were performed (thus disproving the Heisenberg uncertainty principle, HUP).

To make the EPR paper extremely short: if we know the momentum of a particle, then by measuring the position of a twin particle, we would know both momentum & position for a single QM particle - which according to the HUP is impossible information, and thus Einstein had proven QM to be incomplete ("God does not play dice").
The papers I referenced above have something to say about this.

DevilsAvocado said:
Do you understand why we get upset when you and billschnieder argue the way you do?
Yes. Because you're a drama queen. But we're simply presenting and analyzing and evaluating ideas. There should be no drama related to that. Just like there's no crying in baseball. Ok?

DevilsAvocado said:
You are urging PF users to read cranky papers - while you & billschnieder obviously haven't read, or understood, the original Bell paper that this is all about??
I don't recall urging anyone to read cranky papers. If you're talking about Kracklauer, I haven't read all his papers yet, so I don't have any opinion as to their purported (by you) crankiness. But, what I have read so far isn't cranky. I think I did urge 'you' to read his papers, which would seem to be necessary since you're the progenitor, afaik, of the idea that Kracklauer is a crank and a crazy person.

The position you've taken, and assertions you've made, regarding Kracklauer, put you in a precarious position. The bottom line is that the guy has some ideas that he's promoting. That's all. They're out there for anyone to read and criticize. Maybe he's wrong on some things. Maybe he's wrong on everything. So what? Afaict, so far, he's far more qualified than you to have ideas about and comment on this stuff. Maybe he's promoting his ideas too zealously for your taste or sensibility. Again, who cares? If you disagree with an argument or an idea, then refute it if you can.

As for billschnieder and myself reading Bell's papers, well of course we've read them. In fact, you'll find somewhere back in this thread where I had not understood a part of the Illustrations section, and said as much, and changed my assessment of what Bell was saying wrt it.

And of course it's possible, though not likely, that neither billschnieder nor I understand what Bell's original paper was all about. But I think it's much more likely that it's you who's missing some subtleties wrt its interpretation. No offense, of course.

Anyway, I appreciate your most recent lengthy post, and revisions, and most of your other posts, as genuine attempts by you to understand the issues at hand. I don't think that anybody fully understands them yet. So physicists and philosophers continue to discuss them. And insights into subtle problems with Bell's formulation, and interpretations thereof, continue to be presented, along with LR models of entanglement that have yet to be refuted.

Please read the stuff I linked to. It's written by bona fide respected physicists.

And, by the way, nice recent posts, but the possible experimental 'loopholes' (whether fair sampling/detection, or coincidence, or communication, or whatever) have nothing to do with evaluating the meaning of Bell's theorem. The correlation between the angular difference of the polarizers and the rate of coincidental detection must be, according to empirically established (and local) optical laws, a sinusoidal function of that difference, not a linear one.
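
To make that last point concrete, here is a small numerical sketch (my own, purely illustrative, not a model anyone in this thread has proposed) comparing the sinusoidal cos²(Δ) "same outcome" probability for polarization-entangled photons with a straight line between the same endpoints, which is the shape simple local hidden variable models tend to produce:

Code:
import numpy as np

delta = np.linspace(0.0, 90.0, 7)          # relative analyzer angle, degrees
sinusoidal = np.cos(np.radians(delta))**2  # sinusoidal cos^2 prediction
linear = 1.0 - delta / 90.0                # straight line between the same endpoints

for d, s, l in zip(delta, sinusoidal, linear):
    print(f"delta = {d:5.1f} deg   sinusoidal = {s:.3f}   linear = {l:.3f}")

At Δ = 60° the sinusoidal curve gives 0.25 while the line gives about 0.33, and it is that gap which the Bell-type inequalities exploit.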
 
  • #1,010
my_wan said:
To make the difference between an experimentally invalid "unfair sampling" argument, involving detection efficiencies, and more general "fair sampling" arguments more clear, consider:

You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. Now in both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.

I think it is a mistake to think that "unfair sampling" is only referring to detection rate. The CHSH inequality is the following:

|E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2

It is true that in deriving this, Bell assumed every photon/particle was detected given that his A(.) and B(.) functions are defined as two-valued functions (+1, -1) rather than three-valued functions with a non-detection outcome included. An important point to note here is (1) there is a P(λ), implicit in each of the expectation value terms in that inequality, and Bell's derivation relies on the fact that P(λ) is exactly the same probability distribution for each and every term in that inequality.
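
To see why the single shared P(λ) matters, here is a minimal numerical sketch (mine; the response functions A and B are arbitrary choices, and λ is just a uniform angle) showing that any ±1-valued outcome functions, averaged over one and the same sample of λ for all four terms, keep the CHSH combination within 2. The algebraic reason is that, for each individual λ, A(a)[B(b)+B(b')] + A(a')[B(b)-B(b')] can only be ±2.

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0.0, 2 * np.pi, n)       # hidden variable, same sample for all four terms

def A(angle, lam):
    # arbitrary deterministic +/-1 response; any other choice works just as well
    return np.where(np.cos(lam - angle) >= 0.0, 1, -1)

def B(angle, lam):
    return np.where(np.sin(lam + angle) >= 0.0, 1, -1)

def E(a, b):
    return np.mean(A(a, lam) * B(b, lam))

a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(f"S = {S:.3f}   (|S| never exceeds 2 for this kind of model)")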

Experimentally, not all photons are detected, so the "fair sampling assumption" together with "coincidence circuitry" is used to overcome that problem. Therefore the "fair sampling assumption" is invoked in addition to the coincidence counting to state that the detected coincident photons are representative of the full universe of photon pairs leaving the source.

The next important point to remember is this: (2) in real experiments, each term in the inequality is a conditional expectation value, conditioned on "coincidence". The effective inequality being calculated in a real experiment is therefore:

|E(a,b|coinc) + E(a,b'|coinc) + E(a',b|coinc) - E(a',b'|coinc)| <= 2

So then, looking at both crucial points above and remembering the way experiments are actually performed, we come to understand that the "fair sampling assumption" entails the following:

1) P(coinc) MUST be independent of λ
2) P(coinc) MUST be independent of a and/or b (i.e., joint channel efficiencies must be factorizable)
3) P(λ) MUST be independent of a and/or b
4) If for any specific setting pair (a,b), the probability of "non-consideration" of a photon pair (i.e., no coincidence) is dependent on the hidden parameter λ, then (1), (2) and (3) will fail, and together with them, the "fair sampling assumption" will fail.

The question then becomes: is it unreasonable to expect that, for certain hidden λ, P(coinc) will not be the same in all 4 terms, and that therefore P(λ) cannot be expected to always be the same for all 4 terms?
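
As a purely illustrative sketch of that question (mine; the coincidence rule p_coinc below is completely made up and is not a claim about any real apparatus): if the probability of a pair making it into the coincidence window depends on both λ and the local settings, then the distribution of λ among the counted pairs becomes setting dependent, even though the source distribution of λ is identical in every run.

Code:
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
lam = rng.uniform(0.0, np.pi, n)            # same source distribution in every run

def p_coinc(a, b, lam):
    # made-up rule: coincidence is more likely when lam is "aligned" with either analyzer;
    # any lam- and setting-dependent rule makes the same point
    return 0.2 + 0.6 * np.cos(lam - a)**2 * np.cos(lam - b)**2

def mean_lam_given_coinc(a, b):
    counted = rng.uniform(size=n) < p_coinc(a, b, lam)
    return lam[counted].mean()

print("mean lam among counted pairs, settings (0, 22.5 deg): ",
      round(mean_lam_given_coinc(0.0, np.pi / 8), 3))
print("mean lam among counted pairs, settings (45, 67.5 deg):",
      round(mean_lam_given_coinc(np.pi / 4, 3 * np.pi / 8), 3))

The two conditional means come out clearly different, i.e. P(λ|coinc) is setting dependent; the specific numbers mean nothing, only the setting dependence does.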

In fact, (2) has been put to the test using real data from the Weihs et al. experiment and found to fail. See the article here (http://arxiv4.library.cornell.edu/abs/quant-ph/0606122, J. Phys. B 40 No 1 (2007) 131-141)
Abstract:
We analyze optical EPR experimental data performed by Weihs et al. in Innsbruck 1997-1998. We show that for some linear combinations of the raw coincidence rates, the experimental results display some anomalous behavior that a more general source state (like non-maximally entangled state) cannot straightforwardly account for. We attempt to explain these anomalies by taking account of the relative efficiencies of the four channels. For this purpose, we use the fair sampling assumption, and assume explicitly that the detection efficiencies for the pairs of entangled photons can be written as a product of the two corresponding detection efficiencies for the single photons. We show that this explicit use of fair sampling cannot be maintained to be a reasonable assumption as it leads to an apparent violation of the no-signalling principle.
 
  • #1,011
Note that I am describing classes of realistic constructs, to demonstrate the absurdity of generalizing the refutation of a single class instance of a realism class to represent a refutation of realism in general. It goes to the legitimacy of this generalization of realism, as defined by EPR, not to any given class or class instance described.

The most surprising result of such attempts at providing example realism models that are explicitly at odds with realism as defined by EPR is that I'm often paraphrased as requiring what these example model classes are explicitly formulated to reject. Namely: 1) that observables are representative indicators of elements of reality; 2) that real observables are linear representative indicators of such elements; 3) that properties are pre-existing (innate) to such elements. These are all presumptuous assumptions, and models that reject them are diametrically opposed to realism as defined by EPR; thus such constructive elements of reality are not addressed by BI, with or without locality.

JesseM said:
But Bell's proof is abstract and mathematical, it doesn't depend on whether it is possible to simulate a given hidden variables theory computationally, so why does it matter what the "computational demands of modeling BI violations" are? I also don't understand your point about a transfinite set of hidden variables and Hilbert's Hotel paradox...do you think there is some specific step in the proof that depends on whether lambda stands for a finite or transfinite number of facts, or that would be called into question if we assumed it was transfinite?
I understand the mathematical abstraction BI is based on. It is because the mathematics is abstract that the consequent assumptions of the claims go beyond the validity of BI. Asher Peres notes that "elements of reality" are identified with the EPR definition. He also notes the extra assumption that the sum or product of two commuting elements of reality also is an element of reality. In:
http://www.springerlink.com/content/g864674334074211/
He outlines the algebraic contradiction that ensues from these assumptions. On what basis are these notions of realism predicated? If "elements of reality" exist, how justified are we in presuming that properties are innate to these elements?

Our own DrC has written some insightful comments concerning realism, refuting Hume, in which it was noted how independent variables must be unobservable. If all fundamental variables are in some sense independent, how do we get observables? My guess is that observables are a propagation of events, not things. Even the attempt to detect an "element of reality" entails the creation of events, where what's detected is not the "element of reality" but the propagation observables (event sets) created by the events, not the properties of "elements of reality".

Consider a classical analog involving laminar versus turbulent flow, and suppose you could only define density in terms of the event rates (collisions in classical terms) in the medium. The classical notion of particle density disappears. This is, at a fundamental level, roughly the basis of many different models, involving both GR and QM, and some proposals for QG. Erik Verlinde is taking some jousting from his colleagues for a preprint along roughly similar lines.

The point here is that treating properties as something owned by things is absurdly naive, and it is even more naive to assume real properties are commutative representations of things (think back to the event rate example). This is also fundamentally what is meant by "statistically complete variables" in the published literature.

Now you can object to it not being "realistic" on the basis of not identifying individual "elements of reality", but if the unobservability argument above is valid, on what grounds do you object to a theory that doesn't uniquely identify unobservables (independent elements of reality)? Is that justification for a claim of non-existence?

JesseM said:
I'm not sure what you mean by "projections from a space"...my definition of local realism above was defined in terms of points in our observable spacetime; if an event A outside the past light cone of event B can nevertheless have a causal effect on B then the theory is not a local realist theory in our spacetime according to my definition, even if the values of variables at A and B are actually "projections" from a different unseen space where A is in the past light cone of B (is that something like what you meant?)
Consider a standard covariant transform in GR. A particular observer's perspective is a "projection" of this curved space onto the Euclidean space our perceptions are predisposed to. Suppose we generalize this even further, to include the Born rule, |ψ|², such that a mapping of a set of points involves mapping them onto a powerset of points. Aside from the implications in set theory, this leads to non-commutativity even if the variables are commutative within the space that defines them. Would such a house-of-mirrors distortion of our observer perspective of what is commutative invalidate "realism", even when those same variables are commutative in the space that defined them?

Again, this merely points to the naivety of "realism" as it has been invalidated by BI violations. What BI violations don't do is invalidate "realism", or refute that "elements of reality" exist that are imposing this house-of-mirrors effect on our observation of observables. Assuming we observe "reality" without effect on it is magical thinking from a realist perspective. Assuming we are a product of these variables, while assuming 'real' variables must remain commutative, is as naive as the questions on this forum asking why doubling speed more than doubles the kinetic energy. But if you're willing to just "shut up and calculate" it's never a problem.

JesseM said:
They did make the claim that there should in certain circumstances be multiple elements of reality corresponding to different possible measurements even when it is not operationally possible to measure them all simultaneously, didn't they?
Yes, but that is only a minimal extension of the point I'm trying to make, not a refutation of it. This corresponds to certain classical contextuality schemes that attempt to model BI violations. The strongest evidence against certain types of contextuality schemes, from my perspective, involves metamaterials and other such effects, not BI violations. I think Einstein's assumptions about what constraints realism imposes are overly simplistic, but that doesn't justify the claim that "elements of reality" don't exist.

JesseM said:
I don't follow, what "definitions counter to that EPR provided" are being rejected out of hand?
Are you trying to say here that no "realism" is possible that doesn't accept "realism" as operationally defined by EPR? The very claim that BI violations refute "realism" tacitly makes this claim. If you predicate "realism" on the strongest possible realism, then the notion that a fundamental part has properties is tantamount to claiming it contains a magic spell. It would also entail that measuring without effect is telepathy, and at a fundamental level such an effect must be at least as big as what you want to measure. The Uncertainty Principle, as originally derived, was due to these very thought experiments involving realistic limits, not QM.

So as long as you insist that a local theory cannot be "realistic", even by stronger definitions of realism than EPR provided, then you are rejecting realism "definitions counter to that EPR provided". Have I not provided examples and justification for "realism" definitions that are counter to the EPR definition? Those examples are not claims of reality, they are examples illustrating the naivety of the constraints imposed on the notion of realism and justified on the EPR argument.

JesseM said:
What's the statement of mine you're saying "unless" to? I said "there's no need to assume ... you are simply measuring a pre-existing property which each particle has before measurement", not that this was an assumption I made. Did you misunderstand the structure of that sentence, or are you actually saying that if "observable are a linear projection from a space which has a non-linear mapping to our measured space of variables", then that would mean my statement is wrong and that there is a need to assume we are measuring pre-existing properties the particle has before measurement?
I said "unless" to "there's no need to assume, [...], you are simply measuring a pre-existing property". This was only an example, in which a "pre-existing property" does not exist, yet both properties and "elements of reality do. I give more detail on mappng issue with the Born rule above. These examples are ranges of possibilities that exist within certain theoretical class instances as well as in a range of theoretical classes. Yet somehow BI violations is supposed to trump every class and class instance and disprove realism if locality is maintained. I don't think so.

You got the paraphrase sort of right until you presumed I indicated that there "is a need to assume we are measuring pre-existing properties the particle has before measurement". No, I'm saying the lack of pre-existing properties says nothing about the lack of pre-existing "elements of reality". Nor do properties dynamically generated by "elements of reality" a priori entail any sort of linearity between "elements of reality" and properties, at any level.

JesseM said:
Why would infinite or non-compressible physical facts be exceptions to that? Note that when I said "can be defined" I just meant that a coordinate-independent description would be theoretically possible, not that this description would involve a finite set of characters that could be written down in practice by a human. For example, there might be some local variable that could take any real number between 0 and 1 as a value, all I meant was that the value (known by God, say) wouldn't depend on a choice of coordinate system.
Why would infinite indicate non-compressible? If you define an infinite set of infinitesimals in an arbitrary region, why would that entail that even a finite subset of that space is occupied? Even if a finite subset of that space was occupied, it still doesn't entail that it's a solid. Note my previous reference to Hilbert's paradox of the Grand Hotel. Absolute density wouldn't even have a meaning. Yes, a coordinate-independent description would be theoretically possible, yet commutativity can be dependent on a coordinate transform. You can make a gravitational field go away by the appropriate transform, but you can't make its effects on a given observer's perspective go away. The diffeomorphism remains under any coordinate choice, and what appears linear in one coordinate choice may not be under another coordinate choice.

JesseM said:
As you rotate the direction of the beams, are you also rotating the positions of the detectors so that they always lie in the path of the beams and have the same relative angle between their orientation and the beam? If so this doesn't really seem physically equivalent to rotating the detectors, since then the relative angle between the detector orientation and the beam would change.
Actually the detectors remain fixed as the beams are rotated, such that the relative orientation of the emitter and photon polarizations changes wrt the detectors, without affecting the coincidence rate. The very purpose of rotating the beam is to change the emitter and photon orientations wrt the detectors. Using the same predefined photons, it even changes which individual photons take which path through the polarizers, yet the coincidence rates remain. I can also define a bit field for any non-zero setting. I'm attempting to rotate the polarization of the photons to be located at different positions within the bit field, to mimic this effect on the fly. So the individual photons contain this information, rather than some arbitrarily chosen coordinate system. It will also require a statistical splitting of properties if it works, about which I have grave doubts.
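
For what it's worth, here is a toy sketch of the kind of rotation test being described (my own code, not my_wan's, and it makes no attempt to reproduce the QM correlations): pairs carry a predefined shared polarization, the detectors stay fixed, and each photon passes with the Malus-law probability. Rotating every photon's polarization changes which individual photons pass, but not the aggregate coincidence rate, because a uniform polarization distribution is rotation invariant.

Code:
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
lam = rng.uniform(0.0, np.pi, n)                   # predefined pair polarizations
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)  # fixed random draws, one per photon
a, b = 0.0, np.pi / 6                              # fixed detector orientations

def outcomes(rotation):
    pass1 = u1 < np.cos(lam + rotation - a)**2     # Malus-law pass/no-pass at detector 1
    pass2 = u2 < np.cos(lam + rotation - b)**2     # same at detector 2
    return pass1, pass2

p1, p2 = outcomes(0.0)
q1, q2 = outcomes(np.pi / 5)                       # rotate every photon by 36 degrees
print("coincidence rate, unrotated:", np.mean(p1 & p2))
print("coincidence rate, rotated:  ", np.mean(q1 & q2))   # same up to sampling noise
print("fraction of photons at detector 1 that switched channel:", np.mean(p1 != q1))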

JesseM said:
But that's just realism, it doesn't cover locality (Bohmian mechanics would match that notion of realism for example). I think adding locality forces you to conclude that each basic element of reality is associated with a single point in spacetime, and is causally affected only by things in its own past light cone.
Would a local theory with "elements of reality" which dynamically generate but do not possess pre-existing properties qualify as a "realistic" theory? I think your perception of what I think about points in spacetime is distorted by the infinite density assumption, much like Einstein's thinking. Such scale gauges, to recover the hierarchical structure of the standard model, tend to be open parameters in deciding a theoretical construct to investigate. At a fundamental level, lacking any hierarchy, gauges lose meaning due to coordinate independence. The infinite density assumption presumes a pre-existing meaning to scale. It might be better to think in terms of non-standard calculus to avoid vague or absolutist (as in absolutely solid) notions of infinitesimals. Any reasonable conception of infinitesimals in set theory indicates the "solid" presumption is the most extreme case of an extreme range of possibilities. Whole transfinite hierarchies of limits exist in the interim.
 
  • #1,012
billschnieder said:
I think it is a mistake to think that "unfair sampling" is only referring to detection rate.
Weird, that was the entire point of several posts. Yet here I am making the mistake of claiming what I spent all these posts refuting? Just weird.
 
  • #1,013
my_wan said:
Weird, that was the entire point of several posts. Yet here I am making the mistake of claiming what I spent all these posts refuting? Just weird.
Oh, not your mistake. I was agreeing with you from a different perspective; there is a missing "also" somewhere in there!
 
  • #1,014
DevilsAvocado said:
Ahh! Now I get it! Thanks for explaining. My guess on this specific case is that it’s very easy to change the detection window (normally 4-6 ns?) to look for dramatic changes... and I guess that in all of the thousands of EPR-Bell experiments, this must have been done at least once...? Maybe DrC knows?
Yes, it may be possible to refute this by recording time stamps and analyzing any continuity in time offsets of detections that missed the coincidence time window.

The main point remains, irrespective of experimental validity of this one example. You can't generally apply a proof invalidating a particular class instance to invalidate the whole class.

DevilsAvocado said:
Okay, you are talking about this curve, right?
Yes. You can effectively consider the curve above the x-axis as exceeding the classical 'max' limit, while the curve below the x-axis falls short of the classical 'max' limit by the exact same amount by which it was exceeded in the top part.

Again, this doesn't demonstrate any consistency with any classical model of BI violations. It only indicates that in the "full universe" of "all" possible settings there is no excess of detections relative to the classical limit. Thus certain forms of "fair sampling" arguments are not a priori invalidated by the known invalid "fair sampling" argument involving detection efficiencies. Neither does it mean that such "fair sampling" arguments can't be ruled out by other means, as indicated above.
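
A quick numerical check of that "no net excess" statement (my own sketch; the "classical limit" here is just the straight line between the same endpoints): averaged over the full range of relative settings, the sinusoidal curve and the linear curve have exactly the same mean, so whatever the sinusoid gains above the line between 0° and 45° it gives back between 45° and 90°.

Code:
import numpy as np

delta = np.linspace(0.0, np.pi / 2, 100_001)   # relative angle, 0..90 degrees
sinusoidal = np.cos(delta)**2                  # sinusoidal match probability
classical = 1.0 - delta / (np.pi / 2)          # linear curve between the same endpoints

print("mean sinusoidal match probability:", round(sinusoidal.mean(), 4))
print("mean linear match probability:    ", round(classical.mean(), 4))

Both means come out at 0.5.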

It's difficult to maintain my main point, which involves the general applicability of a proof to an entire class or range of classes, when such a proof is known to be valid in a given class instance. My example of cases where a given constraint is abrogated is too easily interpreted as a claim or solution in itself. Or worse, reinterpreted as an instance of the very class it was specifically formulated not to represent.
 
  • #1,015
DevilsAvocado said:
You could see it this way. You could also see it as nature then having to be very tricky, wobbling between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...
Physically it's exactly equivalent to a tennis ball bounced off a wall taking a longer route back as the angle at which it hits the wall increases. It only requires the assumption that the more offset a polarizer is, the longer it takes the photon to tunnel through it. It doesn't really convince me either without some testing, but it's certainly not something I would call nature being tricky. At least not any more tricky than even classical physics is known to be at times. Any sufficiently large set of dependent variables is going to be tricky, no matter how simple the underlying mechanisms. Especially if it looks deceptively simple on the surface.
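
Here is a toy numerical sketch of that idea (entirely my own construction, local and deterministic, and not claimed to reproduce QM): give each photon a transit delay that grows with its misalignment from its polarizer, keep a fixed coincidence window, and compare the correlation over all pairs with the correlation over only the pairs that land inside the window. Every photon is detected, yet the post-selected correlation differs from the full-universe one.

Code:
import numpy as np

rng = np.random.default_rng(3)
n = 300_000
lam = rng.uniform(0.0, np.pi, n)                  # shared hidden polarization per pair
a, b = 0.0, np.pi / 8                             # analyzer settings
window = 0.2                                      # coincidence window (arbitrary units)

def outcome(lam, setting):
    return np.where(np.cos(2 * (lam - setting)) >= 0, 1, -1)

def delay(lam, setting):
    return np.sin(lam - setting)**2               # more misalignment -> longer delay

A, B = outcome(lam, a), outcome(lam, b)
coinc = np.abs(delay(lam, a) - delay(lam, b)) < window

print("E(a,b) over all pairs:        ", round(np.mean(A * B), 3))
print("E(a,b) over coincidences only:", round(np.mean(A[coinc] * B[coinc]), 3))
print("fraction of pairs counted:    ", round(np.mean(coinc), 3))

With these made-up numbers the full-universe correlation comes out near 0.5 while the post-selected one is close to 1; the specific values don't matter, only that they differ.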
 
