Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #911
JesseM said:
If you are doing an experiment which matches the condition of the Bell inequalities that says each measurement must yield one of two binary results...

This requirement by itself is unreasonable because, according to Malus' law, it is normal to expect that some photons will not go through the polarizer. Therefore Bell's insistence on having only binary outcomes (+1, -1) goes off the rails right from the start. He should have included a non-detection outcome too.
 
  • #912
(my emphasis)
JesseM said:
... Yes, of course I disagree, you're just totally misunderstanding the most basic logic of the proof which is assuming a perfect correlation between A and B whenever both experimenters choose the same detector setting. It's really rather galling that you make all these confident-sounding claims about Bell's proof being flawed when you fail to understand something so elementary about it! Could you maybe display a tiny bit of intellectual humility and consider the possibility that it might not be that the proof itself is flawed and that you've spotted a flaw that so many thousands of smart physicists over the years have missed, that it might instead be you are misunderstanding some aspects of the proof?


JesseM, thank you so very much for these very intelligent and well expressed words! You’ve hit the nail on the head! THANKS!

And I can guarantee you that you are not the only one exasperated by ThomasT’s general attitude.
 
  • #913
DrChinese said:
OK, ask this: so what if Malus rules out local realistic theories per se? You are trying to somehow imply that is not reasonable. Well, it is.

We don't live in a world in which the BIs are respected, while we do live in one in which Malus' law is respected. There is no contradiction whatsoever. You are trying to somehow say Malus' law is classical, but it really isn't. It is simply a function of a quantum mechanical universe. So your logic needs a little spit polish.

The claim that my logic needs a little spit polish is absolutely valid. That's part of why I debate these issues here, to articulate my own thinking on the matter more clearly.

I'm not making claims about what is or isn't reasonable. Here's the thing: so long as the terms "local" and "realistic" are well defined as restricted to X and Y conceptions of those terms, I have no issue with the claim that the X and/or Y conceptions are invalid. To generalize that claim as evidence that every conception of those terms is likewise invalid is technically dishonest. Reasonable? Perhaps, but also presumptuous. It may not rise to the level of presumptuousness that realists in general tend toward, but I find it no less distasteful. It would likewise be a disservice to academically lock those terms in as representative of certain singular conceptions.

Then there is a more fundamental theoretical issue. Correctly identifying the issues leading to certain empirical results can play an essential role in extending theory in unpredictable ways. To simply imply that Malus' law alone leading to BI violations is a vindication of some status quo interpretation is unwarranted. Your conclusion is misplaced. Often the point of questioning why X is false is not to establish a claim about its truth value, but to gain insight into a wider range of questions. Teachers who would respond to an enumeration of all the possible reasons something was false with "but the point is that it's false" missed the point entirely.

So, in spite of the justifications implied by the consequences of Malus' law lacking any contradiction, what does it indicate when a preeminent classical theoretical construct predicts a consequence that violates BI? It certainly does NOT indicate that Bell erred in ruling out that particular form of realism, as narrowly defined by EPR and himself. It does, however, call into question the generalization of BI to a broader range of conceptions of realism. It also directly brings into question the relevance of nonlocality, when the polarizer-path version, resulting from a classical field construct, doesn't even have an existential partner to correlate with. The difficulties might even be a mathematical decomposition limitation, something akin to Gödel, maybe? It also raises the question: if Maxwell's equations can produce a path version of BI violations, what besides quanta and the Born rule is fundamentally unique to QM, notwithstanding the claim of being fundamental?

I don't have the answers, but I'm not going to restrict myself to BI tunnel vision either.
 
  • #914
billschnieder said:
This requirement by itself is unreasonable because, according to Malus' law, it is normal to expect that some photons will not go through the polarizer. Therefore Bell's insistence on having only binary outcomes (+1, -1) goes off the rails right from the start. He should have included a non-detection outcome too.
I'm pretty sure that in experiments with entangled photons, there is a difference between non-detection and not making it through the polarizer. For example, if you look at the description and diagram of a "typical experiment" in this section of the wikipedia article on Bell test loopholes, apparently if the two-channel polarizer like "a" doesn't allow a photon through to be detected at detector D+, it simply deflects it at a different angle to another detector D-, so the photon can be detected either way.
 
  • #915
Had to get some sleep after that last post; I go on too long sometimes.
JesseM said:
How does Malus' law apply to individual photons, though? The classical version of Malus' law requires uniformly polarized light with a known polarization angle; are you talking about a photon that's known to be in a polarization eigenstate for a polarizer at some particular angle? In that case, whatever the angle v of the eigenstate, I think the probability the photon would pass through another polarizer at angle a would be cos^2(v-a). But when entangled photons are generated, would they be in such a known polarization eigenstate? If not, it seems like you wouldn't be able to talk about Malus' law applying to individual members of the pair.
Yes, I'll restate this again. These are the statistics I have verified as exact against Malus' law expressed solely in terms of intensity, for all pure or mixed polarization state beams, as well as for passage through arbitrarily many polarizers in series. Malus consistency is predicated on modeling intensity, but it even perfectly models EPR correlations with the caveats given. It's a straight-up assumption that individual photons, whether randomly polarized as a group or not, have a very definite default polarization. The default polarization is unique only in that it is the only polarization at which a polarizer with a matching setting effectively has a 100% chance of passing that photon. The odds of any given photon passing a polarizer that is offset from that default are defined by the straight-up assumption that ∆I (intensity) constitutes a ∆p (odds), i.e., for any arbitrary offset theta from the individual photon's default, ∆I = ∆p.

JesseM said:
But with electromagnetic waves in Maxwell's equations there's no probabilities involved, Malus' law just represents a deterministic decrease in intensity. So there's no case where two detectors at different angles a and b have a probability cos^2(a-b) of opposite results, including the fact that they give opposite results with probability 1 with the detectors at the same angle. This is true even if you design the detectors to give one of two possible outputs depending on the decrease in intensity, as I suggested in post #896 to ThomasT. So no violations of BI and no reason Maxwell's laws can't be understood as a local realist theory, so I'm not sure why you have a problem with the reality of classical fields.
Yes, neither do Maxwell's equations explicitly recognize the particle aspect of photons, so there was no motivation for assigning probabilities to individual photons. But as noted, ∆I = ∆p perfectly recovers the proper Malus-predicted intensities for all pure and mixed state beams, as well as ∆I in stacked (series) polarizer cases. The EPR case is a parallel case involving anti-twins.

At no point, in modeling Malus intensities or EPR correlations, did I use cos^2(a-b), where a and b represent different polarizers. I've already argued with ThomasT over this point. I used cos^2(theta), where theta is defined solely as the offset of the polarizer relative to the photon that actually comes in contact with that polarizer. In fact, since the binary bit field that predetermined outcomes already had the cos^2(theta) statistics built in, I didn't even have to calculate cos^2(theta) at detection time. I merely did a linear one-to-one count into the bit field indexed by theta (between the polarizer and the photon that actually came in contact with it) alone, took that bit, and passed the photon if it was a 1, diverted it if 0. I used precisely the same rules, in the computer models, in both the Malus intensity and the parallel EPR cases, and only compared EPR correlations after the fact.

To clarify, the bit field I used to predetermine outcomes, in computer modeling both the Malus intensities and the EPR parallel cases, set a unique bit for each offset, such that a photon that passed at a given offset could sometimes fail at a lesser offset. The difficulties arose only in the EPR modeling case, where BI-violating correlations were successfully modeled only when one or the other detector was defined as 0. Yet uniformly rotating the default polarizations of the randomly polarized beam changed which individual photons took which path, while having no effect whatsoever on the BI-violating statistics. The EPR case, like the Malus intensities, was limited to relative coordinates only with respect to detector settings. Absolute values didn't work, in spite of the statistical invariance under beam rotation even as individual photon paths varied.

Maxwell's equations didn't assign a unique particulate identity to individual photons. Yet if you consider classically distinct paths a real property of a particulate photon (duality), you can construct BI violations from the path assumptions required to model Malus' law. It's easy to hand-wave away when we're talking about a local path through a local polarizer, until you think about it. In fact, the path version of BI violation only becomes an issue when you require absolute arbitrary coordinates, rather than relative offsets, to model it. That is the same issue that is the sticking point in modeling EPR BI violations.
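To pin the assumption down, here is a minimal sketch (Python, purely illustrative names) of the ∆I = ∆p rule as stated above: each photon carries a definite default polarization and passes a polarizer offset from it by theta with probability cos^2(theta). The bare rule reproduces the Malus intensity, but applied naively to shared-polarization pairs it yields the classical 1/2 + cos(2(a-b))/4 coincidence curve rather than cos^2(a-b); the bit-field and relative-coordinate bookkeeping described above is precisely what goes beyond this sketch.

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def passes(polarizer_angle, photon_angles):
    # Delta-I = Delta-p rule: pass with probability cos^2 of the offset
    # between the polarizer setting and each photon's default polarization.
    theta = polarizer_angle - photon_angles
    return rng.random(len(photon_angles)) < np.cos(theta) ** 2

# 1) Malus intensity: a beam uniformly polarized at 0 rad hitting a polarizer at a.
a = np.deg2rad(30.0)
beam = np.zeros(N)
print("transmitted fraction:", passes(a, beam).mean(), "Malus:", np.cos(a) ** 2)

# 2) Parallel (anti-twin) case, naive version: both photons share one random
#    default polarization lam, and each side applies the same rule locally.
lam = rng.uniform(0.0, np.pi, N)
b = np.deg2rad(0.0)
A = passes(a, lam)
B = passes(b, lam)
print("same-result rate:", (A == B).mean())     # ~ 1/2 + cos(2(a-b))/4 = 0.625
print("QM prediction   :", np.cos(a - b) ** 2)  # cos^2(a-b) = 0.75
[/code]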
 
  • #916
I was looking over DrC's Frankenstein particles and read the wiki page JesseM linked:
http://en.wikipedia.org/wiki/Loopholes_in_Bell_test_experiments
where it mentions a possible failure of rotational invariance, and I realized it should be possible, if the Malus law assumptions I used are valid, to construct a pair of correlated beams that explicitly fails rotational invariance.

I need to think through it more carefully, but it would involve using a PBS to split both channels from the source emitter. Use a shutter to selectively remove some percentage of both correlated pairs of a certain polarization, and recombine the remainder, similar to DrC's Frankenstein setup. Done such that all remaining photons after recombination should still have a remaining anti-twin which, as a group, has a statistically preferred polarization. The only photons that could be defined as observed, and their partners, are no longer present in the beam to affect the correlation statistics. Then observe the effects on the correlation statistics at various common detector settings and offsets. The possible variations of this are quite large.

Edit: A difficulty arises when you consider that while photons are being shuttered on one side of the polarized beam, before recombination, any photon that takes the other path at that time can be considered detected. Non-detection can sometimes qualify as a detection in QM.
 
  • #917
JesseM said:
... it might instead be you are misunderstanding some aspects of the proof ...
Of course, that's why I'm still asking questions.

JesseM said:
The joint probability P(AB) is not being modeled as the product of P(A)*P(B) by Bell's equation.

ThomasT said:
My thinking has been that it reduces to that. Just a stripped down shorthand for expressing Bell's separability requirement. If you don't think that's ok, then how would you characterize the separability of the joint state (ie., the expression of locality) per Bell's (2)?

JesseM said:
I would characterize it in terms of A and B being statistically independent only when conditioned on the value of the hidden variable λ. They are clearly not statistically independent otherwise ...

JesseM said:
A and B are not independent in their marginal probabilities (which determine the actual observed frequencies of different measurement outcomes), only in their probabilities conditioned on λ.


Ok, you seem to be saying that P(AB)=P(A) P(B) isn't an analog of Bell's (2). You also seem to be saying that Bell's (2) models A and B as statistically independent for all joint settings except (a-b)=0. Is this correct?

My thinking has been that Bell's (2) is a separable representation of the entangled state, and that this means it models the joint state in a factorable form. Is this correct? If so, then is this the explicit expression of Bell locality in Bell's (2)?
 
  • #918
JesseM, please correct me if I’m wrong, but haven’t you already answered the question above perfectly clearly?

JesseM said:
... Yes, and this was exactly the possibility that Bell was considering! If you don't see this, then you are misunderstanding something very basic about Bell's reasoning. If A and B have a statistical dependence, so P(A|B) is different than P(A), but this dependence is fully explained by a common cause λ, then that implies that P(A|λ) = P(A|λ,B), i.e. there is no statistical dependence when conditioned on λ. That's the very meaning of equation (2) in Bell's original paper, that the statistical dependence which does exist between A and B is completely determined by the state of the hidden variables λ, and so the statistical dependence disappears when conditioned on λ. Again, please tell me if you disagree with this.


Could we also make it so simple that even a 10-year-old can understand it, by stating:

Bell's (2) is not about entanglement, Bell's (2) is only about the hidden variable λ.
 
  • #919
ThomasT said:
Of course, that's why I'm still asking questions.
I'm glad you're still asking questions, but if you don't really understand the proof, and you do know it's been accepted as valid for years by mainstream physicists, doesn't it make sense to be a little more cautious about making negative claims about it like this one from an earlier post?
ThomasT said:
I couldn't care less if nonlocality or ftl exist or not. In fact, it would be very exciting if they did. But the evidence just doesn't support that conclusion.
On to the topic of probabilities:
ThomasT said:
Ok, you seem to be saying that P(AB)=P(A) P(B) isn't an analog of Bell's (2). You also seem to be saying that Bell's (2) models A and B as statistically independent for all joint settings except (a-b)=0. Is this correct?
No. In any Bell test, the marginal probability of getting either of the two possible results (say, spin-up and spin-down) should always be 0.5, so P(A)=P(B)=0.5. But if you're doing an experiment where the particles always give identical results with the same detector setting, then if you learn the other particle gave a given result (like spin-up) with detector setting b and you're using detector setting a, then the conditional probability your particle gives the same result is cos^2(a-b). So if A and B are identical results, in this case P(AB)=P(B)*P(A|B)=0.5 * cos^2(a-b), so as long as a and b have an angle between them that's something other than 45 degrees (since cos^2(45) = 0.5), P(AB) will be different than P(A)*P(B), and there is a statistical dependence between them.
ThomasT said:
My thinking has been that Bell's(2) is a separable representation of the entangled state, and that this means that it models the joint state in a factorable form. Is this correct? If so, then is this the explicit expression of Bell locality in Bell's (2).
It shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ)=P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).
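To make that dependence concrete, here is a small numerical check (the angles are just illustrative) comparing P(AB) = 0.5*cos^2(a-b) with P(A)*P(B) = 0.25:

[code]
import math

def joint_same(a_deg, b_deg):
    # P(A and B identical) = P(B) * P(A|B) = 0.5 * cos^2(a - b)
    return 0.5 * math.cos(math.radians(a_deg - b_deg)) ** 2

for a, b in [(0, 0), (30, 0), (45, 0), (60, 0)]:
    p_ab = joint_same(a, b)
    p_a_times_p_b = 0.5 * 0.5
    tag = "independent" if abs(p_ab - p_a_times_p_b) < 1e-9 else "dependent"
    print(f"a-b = {a - b:>2} deg: P(AB) = {p_ab:.3f}, P(A)P(B) = {p_a_times_p_b:.3f} ({tag})")
[/code]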
 
  • #920
my_wan said:
The claim that my logic needs a little spit polish is absolutely valid. That's part of why I debate these issues here, to articulate my own thinking on the matter more clearly.

That is why I participate too. :smile:
 
  • #921
billschnieder said:
This requirement by itself is unreasonable because, according to Malus' law, it is normal to expect that some photons will not go through the polarizer. Therefore Bell's insistence on having only binary outcomes (+1, -1) goes off the rails right from the start. He should have included a non-detection outcome too.

Not with beam splitters! (Non-detection is not an issue ever, as we talk about the ideal case. Experimental tests must consider this.)
 
  • #922
I want to make an important statement that counters the essence of some of the arguments being presented about "non-detections" or detector efficiency.

A little thought will tell you why this is not much of an issue. If we have a particle pair A, B and we send them through a beam splitter with detectors at both output ports, we should end up with one of the following 4 cases:

1. A not detected, B not detected.
2. A detected, B not detected.
3. A not detected, B detected.
4. A detected, B detected.

However we won't actually know when case 1 occurs, correct? But as long as the chance of 1 is not substantially greater than either 2 or 3 individually (and probability logic indicates it should definitely be less - can you see why?), we can estimate it. If case 4 occurs 50% of the time or more, then 1 should occur less than 10% of the time. In practice it is essentially a vanishing number, since visibility is approaching 90%: that means cases 2 and 3 are each happening only about 1 time in 10, which would imply case 1 occurs only about 1% of the time.

So you would have to claim that all of the "missing" photons are carrying the values that would prove a different result, and there are not many of them. I guess it is *possible* if there is a physical mechanism which is responsible for the non-detections, but that would also make it experimentally falsifiable. But you should be aware of how far-fetched this really is. In other words, in actual experiments cases 2 and 3 don't occur very often, which places severe constraints on case 1.
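As a rough sketch of that estimate (assuming the two sides fail to detect independently, which is the assumption doing the work here, and using illustrative numbers), case 1 can be backed out from the observed cases as p1 ≈ p2*p3/p4:

[code]
# Rough estimate of the unobservable case 1 from the observed cases,
# assuming the two sides fail to detect independently (illustrative numbers).
p2 = 0.09   # A detected, B not detected
p3 = 0.09   # B detected, A not detected
p4 = 0.81   # both detected
p1_estimate = p2 * p3 / p4
print("estimated P(case 1) =", p1_estimate)   # ~0.01 for ~90% per-side efficiency
[/code]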
 
  • #923
JesseM said:
It shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ)=P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).

This must mean that my "10-year-old explanation" is correct, and hopefully this information can even make sense to ThomasT:

Bell's (2) is not about entanglement, Bell's (2) is only about the hidden variable λ.

JesseM said:
No. In any Bell test, the marginal probability of getting either of the two possible results (say, spin-up and spin-down) should always be 0.5, so P(A)=P(B)=0.5. But if you're doing an experiment where the particles always give identical results with the same detector setting, then if you learn the other particle gave a given result (like spin-up) with detector setting b and you're using detector setting a, then the conditional probability your particle gives the same result is cos^2(a-b). So if A and B are identical results, in this case P(AB)=P(B)*P(A|B)=0.5 * cos^2(a-b), so as long as a and b have an angle between them that's something other than 45 degrees (since cos^2(45) = 0.5), P(AB) will be different than P(A)*P(B), and there is a statistical dependence between them.

Could we make a 'simplification' of this also, and say:

According to the QM predictions, everything depends on the relative angle between the polarizers a & b. If measured parallel (0°) or perpendicular (90°), the outcome is perfectly correlated/anticorrelated. In any other case, it’s statistically correlated through the QM prediction cos^2(a-b).

Every outcome at every angle is perfectly random for A & B, with one 'exception' for parallel and perpendicular, where the outcome for A must be perfectly correlated/anticorrelated to B (still individually perfectly random).

Correct?
 
  • #924
DrChinese said:
... However we won't actually know when case 1 occurs, correct?

DrC, could you help me out? I must be stupid...

If we use a beam splitter, then we always get a measurement, unless something goes wrong. Doesn’t this mean we would know that case 1 has occurred = nothing + nothing?? :rolleyes:
 
  • #925
DevilsAvocado said:
DrC, could you help me out? I must be stupid...

If we use a beam splitter, then we always get a measurement, unless something goes wrong. Doesn’t this mean we would know that case 1 has occurred = nothing + nothing?? :rolleyes:

That is the issue everyone is talking about, except it really doesn't fly. Normally, and in the ideal case, a photon going through a beam splitter comes out either the H port or the V port. Hypothetically, photon Alice might never emerge in line, or might not trigger the detector, or something else goes wrong. So neither of the 2 detectors for Alice fires. Let's say that happens 1 in 10 times, and we can see it happening because one of the Bob detectors fires.

Ditto for Bob. There might be a few times in which the same thing happens on an individual trial for both Alice and Bob. If usual probability laws are applied, you might expect something like the following:

Neither detected: 1%.
Alice or Bob not detected, but not both: 18%.
Alice and Bob both detected: 81%.

I would call this a visibility of about 90%, which is about where things are in experiments these days. But you cannot say FOR CERTAIN that case 1 only occurs 1% of the time; you must estimate it using an assumption. But if you *assert* that case 1 occurs a LOT MORE OFTEN than 1% and you STILL have a ratio of 81% to 18% per the above (as these are experimentally verifiable, of course), then you have a lot of explaining to do.

And you will need all of it to make a cogent argument to that effect, since any explanation will necessarily be experimentally falsifiable. The only way you get around this is NOT to supply an explanation, which is the usual path when this argument is raised. So visibility is a function of everything involved in the experiment, and it is currently very high; I think in the 85%+ range, but it varies from experiment to experiment. I haven't seen good explanations of how it is calculated or I would provide a reference. Perhaps someone else knows a good reference on this.
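For what it's worth, the 1% / 18% / 81% split above is just what falls out of assuming each side independently detects its photon about 90% of the time (a sketch with illustrative numbers):

[code]
# The four-case split that follows from a per-side detection probability of ~0.9,
# assuming the two sides fail independently (numbers are illustrative only).
eta = 0.9
p_both     = eta * eta                 # case 4: ~0.81
p_one_only = 2 * eta * (1 - eta)       # cases 2 + 3 combined: ~0.18
p_neither  = (1 - eta) * (1 - eta)     # case 1: ~0.01
print(p_both, p_one_only, p_neither)
[/code]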
 
  • #926
DevilsAvocado said:
This must mean that my "10-year-old explanation" is correct, and hopefully this information can even make sense to ThomasT:

Bell's (2) is not about entanglement, Bell's (2) is only about the hidden variable λ.
Basically I'd agree, although I'd make it a little more detailed: (2) isn't about entanglement, it's about the probabilities for different combinations of A and B (like A=spin-up and B=spin down) for different combinations of detector settings a and b (like a=60 degrees, b=120 degrees), under the assumption that there is a perfect correlation between A and B when both sides use the same detector setting, and that this perfect correlation is to be explained in a local realist way by making use of hidden variable λ.
DevilsAvocado said:
Could we make a 'simplification' of this also, and say:

According to the QM predictions, everything depends on the relative angle between the polarizers a & b. If measured parallel (0°) or perpendicular (90°), the outcome is perfectly correlated/anticorrelated. In any other case, it’s statistically correlated through the QM prediction cos^2(a-b).

Every outcome at every angle is perfectly random for A & B, with one 'exception' for parallel and perpendicular, where the outcome for A must be perfectly correlated/anticorrelated to B (still individually perfectly random).

Correct?
By "perfectly random" you just mean that if we look at A individually or B individually, without knowing what happened at the other one, then there's always an 0.5 chance of one result and an 0.5 chance of the opposite result, right? (so P(A|a)=P(B|b)=0.5) And this is still just as true when talk about the "exception" case of parallel or perpendicular detectors (as you point out when you say 'still individually perfectly random'), so it could be a little misleading to call this an "exception", but otherwise I have no problem with your summary.
 
  • #927
DrChinese said:
... I haven't seen good explanations of how it is calculated or I would provide a reference. Perhaps someone else knows a good reference on this.

Thanks for the info DrC.
 
  • #928
JesseM said:
... this perfect correlation is to be explained in a local realist way by making use of hidden variable λ.
Yes, this is obviously the key.
JesseM said:
By "perfectly random" you just mean that if we look at A individually or B individually, without knowing what happened at the other one, then there's always an 0.5 chance of one result and an 0.5 chance of the opposite result, right? (so P(A|a)=P(B|b)=0.5) And this is still just as true when talk about the "exception" case of parallel or perpendicular detectors (as you point out when you say 'still individually perfectly random'), so it could be a little misleading to call this an "exception", but otherwise I have no problem with your summary.
Correct, that’s what I meant. Thanks!
 
  • #929
DrChinese said:
Not with beam splitters! (Non-detection is not an issue ever, as we talk about the ideal case. Experimental tests must consider this.)

1) Non-detection is present in every Bell-test experiment ever performed.
2) The relevant beam splitters are the real ones used in real experiments, not idealized ones that have never been and can never be used in any experiment.

Bell should still have considered non-detection as one of the outcomes, in addition to (-1 and +1). If you are right that non-detection is not an issue, then inequalities derived by assuming three possible outcomes right from the start should also be violated. But if you do this and end up with an inequality that is no longer violated, then non-detection IS an issue.
 
  • #930
DrChinese said:
A little thought will tell you why this is not much of an issue. If we have a particle pair A, B and we send them through a beam splitter with detectors at both output ports, we should end up with one of the following 4 cases:

1. A not detected, B not detected.
2. A detected, B not detected.
3. A not detected, B detected.
4. A detected, B detected.
Correct.
In Bell's treatment, only case 4 is considered; the rest are simply ignored or assumed not to be possible.

However we won't actually know when case 1 occurs, correct? But as long as the chance of 1 is not substantially greater than either 2 or 3 individually (and probability logic indicates it should definitely be less - can you see why?), we can estimate it. If case 4 occurs 50% of the time or more, then 1 should occur less than 10% of the time. In practice it is essentially a vanishing number, since visibility is approaching 90%: that means cases 2 and 3 are each happening only about 1 time in 10, which would imply case 1 occurs only about 1% of the time.
This is not even wrong. It is outrageous. Case 1 corresponds to two photons leaving the source but none being detected; cases 2-3 correspond to two photons leaving the source and only one being detected on either channel. In Bell test experiments, coincidence circuitry eliminates cases 1-3 from consideration because there is no way in the inequalities to include that information. The inequalities are derived assuming that only case 4 is possible.

To determine the likelihood of each case from relative frequencies, you need to count that specific case and divide by the total number for all cases (1-4), or alternatively, by all photon pairs leaving the source. Therefore, the relative frequency will be the total number of the specific case observed divided by the total number of photon pairs produced by the source,
i.e. P(caseX) = N(caseX) / [N(case1) + N(case2) + N(case3) + N(case4)]

If you are unable to tell that case 1 has occurred, then you can never know what proportion of the particle pairs resulted in any of the cases, because N(case1) is part of the denominator!

So when you say "if case 4 occurs 50% of the time", you have to explain what represents 100%.
For example, consider the following frequencies for a case in which 220 particle pairs are produced:

case 1: 180
case 2: 10
case 3: 10
case 4: 20

Since, according to you, we cannot know when case 1 occurs, our total is only 40;
according to you, P(case4) = 50%, P(case2) = 25% and P(case3) = 25%;
according to you, P(case1) should be vanishingly small since P(case4) is high.

But as soon as you realize that our total is actually 220, not 40 as you mistakenly thought, P(case1) becomes 82% for exactly the same experiment, just by correcting the simple error you made.
It is even worse with Bell because, according to him, cases 1-3 do not exist, so his P(case4) is 100%, since considering even only cases 2 and 3 as you suggested would require including a non-detection outcome as well as (+1 and -1).
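To spell the arithmetic out (a sketch using the illustrative counts above):

[code]
# Illustrative counts from the example above: 220 pairs leave the source.
counts = {"case1": 180, "case2": 10, "case3": 10, "case4": 20}

detected_total = counts["case2"] + counts["case3"] + counts["case4"]   # 40
true_total = sum(counts.values())                                      # 220

print("P(case4) over detected pairs only:", counts["case4"] / detected_total)  # 0.50
print("P(case1) over all emitted pairs  :", counts["case1"] / true_total)      # ~0.82
[/code]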

Now that this blatant error is clear, let us look at real experiments to see which approach is more reasonable, by looking at what proportion of photons leaving the source is actually detected.

For all Bell-test experiments performed to date, only 5-30% of the photons emitted by the source have been detected, with only one exception. And this exception, which I'm sure DrC and JesseM will remind us of, had other, more serious problems. Let us make sure we are clear about what this means.

It means that in almost all those experiments usually thrown around as proof of non-locality, P(case4) has been at most 30%, and as low as 5% in some cases. The question then is, where did the whopping 70% or more go?

Therefore it is clear first of all by common sense, then by probability theory, and finally confirmed by numerous experiments that non-detection IS an issue and should have been included in the derivation of the inequalities!
 
  • #931
billschnieder said:
1) Non-detection is present in every Bell-test experiment ever performed.
2) The relevant beam splitters are the real ones used in real experiments, not idealized ones that have never been and can never be used in any experiment.

Bell should still have considered non-detection as one of the outcomes, in addition to (-1 and +1). If you are right that non-detection is not an issue, then inequalities derived by assuming three possible outcomes right from the start should also be violated. But if you do this and end up with an inequality that is no longer violated, then non-detection IS an issue.

Did you read what I said? I said non-detection DOES matter in experiments, but not in a theoretical proof such as Bell's.
 
  • #932
DrChinese said:
Did you read what I said? I said non-detection DOES matter in experiments, but not in a theoretical proof such as Bell's.

Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared to idealized theoretical proofs in which non-detection was not considered, since those idealized proofs make assumptions that will never be fulfilled in any real experiment.

QM works because it is not an idealized theoretical proof; it actually incorporates and accounts for the experimental setup. It is therefore not surprising that QM and real experiments agree, while Bell's inequalities are the ones left out in the cold.
 
  • #933
billschnieder said:
Correct.
In Bell's treatment, only case 4 is considered; the rest are simply ignored or assumed not to be possible.

This is not even wrong. It is outrageous. Case 1 corresponds to two photons leaving the source but none being detected; cases 2-3 correspond to two photons leaving the source and only one being detected on either channel. In Bell test experiments, coincidence circuitry eliminates cases 1-3 from consideration because there is no way in the inequalities to include that information. The inequalities are derived assuming that only case 4 is possible.

Oh really. Cases 2 and 3 are certainly observed and recorded. They are usually excluded from counting because of the coincidence time window, this is true. But again, this is just a plain misunderstanding of the process. You cannot actually have the kind of stats you describe, because for independent non-detections the probabilities satisfy p(1) = p(2)*p(3)/p(4) ≈ p(2)^2 (for symmetric sides and high visibility), with p(4) = 1 - p(1) - p(2) - p(3). Now this is approximate and there hypothetically could be a force or something that causes a deviation. But as mentioned, that would require a physically testable hypothesis.

As far as I can see, there are currently very high detection efficiencies. From Zeilinger et al:

These can be characterized individually by measured visibilities, which were: for the source, ≈ 99% (98%) in the H/V (45°/135°) basis; for both Alice’s and Bob’s polarization analyzers, ≈ 99%; for the fibre channel and Alice’s analyzer (measured before each run), ≈ 97%, while the free-space link did not observably reduce Bob’s polarization visibility; for the effect of accidental coincidences resulting from an inherently low signal-to-noise ratio (SNR), ≈ 91% (including both dark counts and multipair emissions, with 55 dB two-photon attenuation and a 1.5 ns coincidence window).

Violation by 16 SD over 144 kilometers.
http://arxiv.org/abs/0811.3129

Or perhaps:

(You just have to read this as it addresses much of these issues directly. Suffice it to say that they address the issue of collection of pairs from PDC very nicely.)

Violation by 213 SD.
http://arxiv.org/abs/quant-ph/0303018
 
  • #934
billschnieder said:
Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared to idealized theoretical proofs in which non-detection was not considered, since those idealized proofs make assumptions that will never be fulfilled in any real experiment.

You know, if there were only 1 experiment ever performed, you might be correct. But this issue has been raised, addressed and ultimately rejected as an ongoing issue over and over.
 
  • #935
DrChinese said:
Did you read what I said? I said non-detection DOES matter in experiments, but not in a theoretical proof such as Bell's.
Yes, Bell's proof was just showing that the theoretical predictions of QM were incompatible with the theoretical predictions of local realism, not deriving equations that were directly applicable to experiments. Though as I've already said, you can derive inequalities that include detector efficiency as a parameter, and there have been at least a few experiments with sufficiently high detector efficiency such that these inequalities are violated (though these experiments were vulnerable to the locality loophole).

A few papers I came across suggested that experiments which closed both the detector efficiency loophole and the locality loophole simultaneously would likely be possible fairly soon. If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
 
  • #936
JesseM said:
If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
What has this got to do with anything? If there were a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind. I am after the truth; I don't religiously follow one side just because I have invested my whole life in it. So why would I want to bet at all?

I am merely pointing out here that the so-called proof of non-locality is unjustified, which is not the same as saying there will never be any proof. It seems from your suggestion that you are already absolutely convinced of non-locality, so would you bet a large sum of money against the idea that non-locality will be found to be a serious misunderstanding?
 
  • #937
BTW,
Even if an experimenter ensured 100% detection efficiency, they would still have to ensure cyclicity in their data, as illustrated in https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.

Bell derives inequalities by assuming that a single particle is measured at multiple angles. Experiments are performed in which many different particles are measured at multiple angles. Apples vs oranges. Comparing the two is equivalent to comparing the average height obtained by measuring a single person's height 1000000 times, with the average height obtained by measuring 1000000 different people each exactly one time.

The point is that certain assumptions are made about the data when deriving the inequalities, that must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a, b, c). Assume further that doctors in three cities, Lyon, Paris, and Lille (denoted 1, 2, 3), are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three, where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n), where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that by combining any possible diagnosis results, the Leggett-Garg inequality will not be violated, as the result of the above expression will always be >= -1 so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average will also satisfy the inequality, and we can drop the indices and write it based only on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b), and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as often. The doctors, not knowing, or having any reason to suspect, that the date or location of the examinations has any influence, decide to designate their patients only by place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed over +1/-1, and they are completely baffled. How can the single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be a superluminal influence between the two cities.

But there are other, more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will produce what they observed:

- on even dates Aa = +1 and Ac = -1 in both cities while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> = -3
which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the city indices, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly to provide a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since the experimenters cannot possibly know all the factors in play in order to index the data so as to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.
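A small sketch of the two-doctor scenario above (the day indexing is purely illustrative) shows the single outcomes averaging to zero while the sum of the three pairwise products sits at -3:

[code]
import numpy as np

N = 10_000
days = np.arange(N)
sign = np.where(days % 2 == 0, 1, -1)   # +1 on even days, -1 on odd days

# Doctor 1 (Lille) sees patient types a and b; doctor 2 (Lyon) sees b and c.
A1a = +1 * sign    # type a, as diagnosed in Lille
A1b = +1 * sign    # type b, as diagnosed in Lille
A2b = -1 * sign    # type b, as diagnosed in Lyon (same label, different context)
A2c = -1 * sign    # type c, as diagnosed in Lyon

print("single-outcome averages:", A1a.mean(), A1b.mean(), A2b.mean(), A2c.mean())
print("<AaAb> + <AaAc> + <AbAc> =",
      (A1a * A2b).mean() + (A1a * A2c).mean() + (A1b * A2c).mean())   # -3.0
[/code]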

For a fuller treatment of this example, see Hess et al., "Possible experience: From Boole to Bell", EPL 87, No. 6, 60007 (2009).

The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time together with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, over which he has no control, is in the same state.

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2)+q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. Same argument above applies.

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1
 
  • #938
billschnieder said:
What has this got to do with anything? If there were a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind.
What do you mean by "assumptions", though? Are you just talking about the assumptions about the observable experimental setup, like spacelike separation between measurements and perfect detection of all pairs (or a sufficiently high number of pairs if we are talking about a derivation of an inequality that includes detector efficiency as a parameter)? Or are you including theoretical assumptions like the idea that the universe obeys local realist laws and that there is some set of local variables λ such that P(AB|abλ)=P(A|aλ)*P(B|bλ)? Of course Bell would not expect that any real experiment could fulfill those theoretical assumptions, since he believed the predictions of QM were likely to be correct and his proof was meant to be a proof-by-contradiction showing these theoretical assumptions lead to predictions incompatible with QM under the given observable experimental conditions.
billschnieder said:
I am merely pointing out here that the so-called proof of non-locality is unjustified
You can only have "proofs" of theoretical claims, for empirical claims you can build up evidence but never prove them with perfect certainty (we can't 'prove' the Earth is round, for example). Bell's proof is not intended to be a proof that non-locality is real in the actual world, just that local realism is incompatible with QM. Of course you apparently doubt some aspects of this purely theoretical proof, like the idea that in any local realist universe it should be possible to find a set of local variables λ such that P(AB|ab)=P(A|aλ)*P(B|bλ), but you refuse to engage in detailed discussion on such matters. In any case I would say the evidence is strong that QM's predictions about Aspect-type experiments are correct, even if there are a few loopholes like the fact that no experiment has simultaneously closed the detector efficiency and locality loopholes (but again, I think it would be impossible to find a local realist theory that exploited both loopholes in a way consistent with the experiments that have been done so far but didn't look extremely contrived).
billschnieder said:
It seems from your suggestion that you are already absolutely convinced of non-locality, so would you bet a large sum of money against the idea that non-locality will be found to be a serious misunderstanding?
Personally I tend to favor the many-worlds interpretation of QM, which could allow us to keep locality by getting rid of the implicit assumption in Bell's proof that every measurement must have a unique outcome (to see how getting rid of this can lead to a local theory with Bell inequality violations, you could check out my post #11 on this thread for a toy model, and post #8 on this thread gives references to various MWI advocates who say it gives a local explanation for BI violations). I would however bet a lot of money that A) future Aspect-type experiments will continue to match the predictions of QM about Bell inequality violations, and B) mainstream physicists aren't going to end up deciding that Bell's theoretical proof is fundamentally flawed and that QM is compatible with a local realist theory that doesn't have any of the kinds of "weird" features that are included as loopholes in rigorous versions of the proof (like many-worlds, or like 'conspiracies' in past conditions that create a correlation between the choice of detector setting and the state of hidden variables at some time earlier than when the choice is made)
 
  • #939
billschnieder said:
BTW,
Even if an experimenter ensured 100% detection efficiency, they would still have to ensure cyclicity in their data, as illustrated in https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.

Thank you! I was hoping the artistry would show through.

All I can really say is that any local realistic prediction you care to make can pretty well be falsified. On the other hand, any Quantum Mechanical prediction will not. So at the end of the day, your definitional quibbling is not very convincing. All you need to do is define LR so we can test it. Saying "apples and oranges" when it looks like "apples and apples" (since we start with perfect correlations) is not impressive.

So... make an LR prediction instead of hiding.
 
  • #940
24 000 hits and still going... Einstein is probably turning in his grave at the way the EPR argument is still going and going...
 
  • #941
billschnieder said:
BTW,
Even if an experimenter ensured 100% detection efficiency, they would still have to ensure cyclicity in their data, as illustrated in https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.
I did respond to that post, but I didn't end up responding to your later post #128 on the subject here, because before I got to it you said you didn't want to talk to me any more unless I agreed to make my posts as short as you wanted them to be, and not to include discussions of things I thought were relevant if you didn't agree they were relevant. But since you bring it up, I think you're just incorrect in saying in post #128 that the Leggett-Garg inequality is not intrinsically based on a large collection of trials where on each trial we measure the same system at 2 of 3 possible times (as opposed to measuring two parts of an entangled system with 1 of several possible combinations of detector settings, as with other inequalities); see this paper and http://www.nature.com/nphys/journal/v6/n6/full/nphys1641.html which both describe it using terms like "temporal inequality" and "inequality in time", for example. I also found the paper where you got the example with patients from different countries here; they explain in the text around equation (8) what the example (which doesn't match the conditions assumed in the Leggett-Garg inequality) has to do with the real Leggett-Garg inequality:
Realism plays a role in the arguments of Bell and followers because they introduce a variable λ representing an element of reality and then write

[tex]\Gamma = \langle A_a(\lambda)A_b(\lambda)\rangle + \langle A_a(\lambda)A_c(\lambda)\rangle + \langle A_b(\lambda)A_c(\lambda)\rangle \, \geq \, -1 \qquad (8)[/tex]

Because no λ exists that would lead to a violation except a λ that depends on the index pairs (a, b), (a, c) and (b, c) the simplistic conclusion is that either elements of reality do not exist or they are non-local. The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs. This assumption implies the existence of the combinatorial-topological cyclicity that in turn implies the validity of a non-trivial inequality but has no physical basis. Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?
If you look at that first paper, they mention on p. 2 that in deriving the inequality each particle is assumed to be in one of the two possible states at all times, so each particle has a well-defined classical "history" of the type shown in the diagram at the top of p. 4, and we assume there is some well-defined probability distribution on the ensemble of all possible classical histories. They also mention at the bottom of p. 3 that deriving the inequality requires that we assume it is possible to make "noninvasive measurements", so the choice of which of 3 times to make our first measurement does not influence the probability of different possible classical histories. They mention that this assumption can also be considered a type of "locality in time". This assumption is a lot more questionable than the usual type of locality assumed when there is a spacelike separation between measurements, since nothing in local realism really guarantees that you can make "noninvasive" measurements on a quantum system which don't influence its future evolution after the measurement. And this also seems to be the assumption the authors are criticizing in the quote above when they say 'Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?' (I suppose the λ that appears in the equation in the quote represents a particular classical history, so the inequality would hold as long as the probability distribution P(λ) on different possible classical histories is independent of what pair of times the measurements are taken on a given trial.) So this critique appears to be rather specific to the Leggett-Garg inequality, maybe you could come up with a variation for other inequalities but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use).
 
  • #942
JesseM said:
...mainstream physicists aren't going to end up deciding that Bell's theoretical proof is fundamentally flawed and that QM is compatible with a local realist theory that doesn't have any of the kinds of "weird" features that are included as loopholes in rigorous versions of the proof (like many-worlds, or like 'conspiracies' in past conditions that create a correlation between the choice of detector setting and the state of hidden variables at some time earlier than when the choice is made)

True, not likely to change much anytime soon.

The conspiracy idea (and it goes by a lot of names, including No Free Will and Superdeterminism) is not really a theory, more like an idea for a theory. You would need to provide some kind of mechanism, and that would require a deep theoretical framework in order to account for Bell inequality violations. And again, much of that would be falsifiable. No one actually has one of those on the table. A theory, I mean, and a mechanism.

If you don't like abandoning c, why don't you look at Relational Blockworld? No non-locality, plus you get the added bonus of a degree of time symmetry - no extra worlds to boot! :smile:
 
  • #943
DrChinese said:
... However we won't actually know when case 1 occurs, correct? But as long as the chance of 1 is not substantially greater than either 2 or 3 individually (and probability logic indicates it should definitely be less - can you see why?), we can estimate it. If case 4 occurs 50% of the time or more, then 1 should occur less than 10% of the time. In practice it is essentially a vanishing number, since visibility is approaching 90%: that means cases 2 and 3 are each happening only about 1 time in 10, which would imply case 1 occurs only about 1% of the time.

OMG. I can only hope that < 10% of my brain was 'connected' when asking about this the first time... :redface:

OF COURSE we can’t know when case 1 occurs! There are no little "green EPR men" at the source shouting – Hey guys! Here comes entangled pair no. 2345! ARE YOU READY!

Sorry.

DrChinese said:
So you would have to claim that all of the "missing" photons are carrying the values that would prove a different result, and there are not many of them. I guess it is *possible* if there is a physical mechanism which is responsible for the non-detections, but that would also make it experimentally falsifiable. But you should be aware of how far-fetched this really is. In other words, in actual experiments cases 2 and 3 don't occur very often, which places severe constraints on case 1.

Far-fetched?? To me this looks like something that Crackpot Kracklauer would use as the final disproof of all mainstream science. :smile:

Seriously, an unknown "physical mechanism" working against the reliability of EPR-Bell experiments! :bugeye: If someone is using this cantankerous argument as a proof against Bell's theorem, he’s apparently not considering the consequences...

That "physical mechanism" would need some "artificial intelligence" to pull that thru, wouldn’t it?? Some kind of "global memory" working against the fair sampling assumption – Let’s see now, how many photon pairs are detected and how many do we need to mess up, to destroy this silly little experiment?

Unless this "physical mechanism" also can control the behavior of humans (humans can mess up completely on their own as far as we know), it would need some FTL mechanism to verify that what should be measured is really measured = back to square one!

By the way, what’s the name of this interesting "AI paradigm"...?? :biggrin:


P.S. I checked this pretty little statement "QM works because it is not an idealized theoretical proof" against the crackpot index at http://math.ucr.edu/home/baez/crackpot.html and it scores 10 points!
"10 points for each claim that quantum mechanics is fundamentally misguided (without good evidence)."
Not bad!
 
  • #944
JesseM said:
But since you bring it up, I think you're just incorrect in saying in post #128 that the Leggett-Garg inequality is not intrinsically based on a large collection of trials where on each trial we measure the same system at 2 of 3 possible times (as opposed to measuring two parts of an entangled system with 1 of several possible combinations of detector settings, as with other inequalities)

As I mentioned to you earlier, it is your opinion here that is wrong. Of course the LGI applies to the situation you mention, but inequalities of that form were originally proposed by Boole in 1862 (see http://rstl.royalsocietypublishing.org/content/152/225.full.pdf+html) and had nothing to do with time. All that is necessary for them to apply is n-tuples of two-valued (+/-) variables. In Boole's case it was three Boolean variables. The inequalities result simply from arithmetic, and nothing else.
We perform an experiment in which each data point consists of a triple of values such as (i,j,k). Let us call this set S123. We then decide to analyse this data by extracting three data sets of pairs, S12, S13, S23. What Boole showed was essentially that if i, j, k are two-valued variables, then no matter what type of experiment generated S123, the datasets of pairs extracted from S123 will satisfy the inequalities:

|<S12> +/- <S13>| <= 1 +/- <S23>

You can verify that this is Bell's inequality (replace 1,2,3 with a,b,c). Using the same ideas he came up with a lot of different inequalities, one of which is the LGI, all from arithmetic. So a violation of these inequalities by data points to a mathematically incorrect treatment of the data.

You may be wondering how this applies to EPR. The EPR case involves performing an experiment in which each data point is a pair of two-valued outcomes (i,j); let us call this set R12. Bell and followers then assume that they should be able to substitute Rij for Sij in the inequalities, forgetting that the inequality holds for pairs extracted from triples, but not necessarily for pairs of two-valued data.

Note that each term in Bell's inequality is a pair from a set of triples (a, b, c), but the data obtained from experiments is a pair from a set of pairs.
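The arithmetic point is easy to check numerically. In the sketch below, pairs extracted from random ±1 triples always satisfy the inequality, while pairwise correlations that never came from common triples, such as the cos(2(a-b)) values implied by the cos^2(a-b) agreement probability discussed earlier in the thread, need not:

[code]
import math
import numpy as np

rng = np.random.default_rng(1)

# Pairs extracted from one common set of two-valued triples (i, j, k):
# |<S12> +/- <S13>| <= 1 +/- <S23> holds by arithmetic, whatever produced the triples.
triples = rng.choice([-1, 1], size=(100_000, 3))
s12 = (triples[:, 0] * triples[:, 1]).mean()
s13 = (triples[:, 0] * triples[:, 2]).mean()
s23 = (triples[:, 1] * triples[:, 2]).mean()
print(abs(s12 + s13) <= 1 + s23, abs(s12 - s13) <= 1 - s23)   # True True

# Pairwise correlations that never came from one common set of triples need not
# obey it, e.g. the photon-pair values E(x, y) = cos(2(x - y)) at 0, 30, 60 degrees:
E = lambda x, y: math.cos(math.radians(2 * (x - y)))
print(abs(E(0, 30) - E(0, 60)), "vs bound", 1 - E(30, 60))    # 1.0 vs 0.5: exceeded
[/code]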

JesseM said:
I also found the paper where you got the example with patients from different countries here,
That is why I gave you the reference before. Have you read it, all of it?

JesseM said:
So this critique appears to be rather specific to the Leggett-Garg inequality, maybe you could come up with a variation for other inequalities but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use)
This is not a valid criticism, for the following reasons:

1) You do not deny that the LGI is a Bell-type inequality. Why do you think it is called that?
2) You have not convincingly argued why the LGI should not apply to the situation described in the example I presented
3) You do not deny the fact that in the example I presented, the inequalities can be violated simply based on how the data is indexed.
4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters
5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c), and yet experiments involve pairs from a set of pairs.
6) You do not deny that it is impossible to measure triples in any EPR-type experiment; therefore Bell-type inequalities do not apply to those experiments. Boole showed 100+ years ago that you cannot substitute Rij for Sij in that type of inequality.
 
  • #945
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.
 
