Scholarpedia article on Bell's Theorem

In summary, the article gives a biased overview of the many criticisms of Bell's theorem rather than a neutral perspective.
  • #36
ThomasT said:
But I still retain the assumption that nature is evolving in accordance with the principle of locality. Why? Because that's the world of my experience, and I don't know of any physical evidence contradicting that assumption, and also because I suppose (assume/hypothesize) that there just might be a less exotic (more parsimonious, simpler, but nonetheless subtle) explanation for why BI's are violated than the assumption that there are nonlocal transmissions happening in the reality underlying instrumental behavior.

That's exactly what everybody should think -- until they learn about Bell's theorem. In other words, your statement here reads to me like a confession that you haven't looked at or understood Bell's theorem.


What if the λ's determining rate of individual detection and rate of coincidental detection are different underlying parameters?

Sorry, but none of this makes sense. Look at the role this lambda actually plays in the theorem. It can be *anything*. So the kind of scenario you describe (there are two different "parts" to lambda, one that affects such and such, the other affecting thus and so...) is perfectly well covered already -- i.e., it is already ruled out by the theorem.


Anyway, I'll read your article, even though I disagree with the very first sentence in it. :smile: You've taught me some things before. Maybe I'm just missing something.

Yes, you should read it. It is precisely an understanding of Bell's theorem that you are currently missing.
 
  • #37
ThomasT said:
Of course, I understand that you, and ttn, and other dBB advocates have a certain vested interest in promoting a certain interpretation of Bell's theorem.

That is patently absurd. Tell me specifically where any of us let some kind of Bohmian bias sneak into the arguments about Bell's theorem.



You assume nonlocality based on your interpretation of Bell's theorem.

This profoundly misstates the situation. I *infer* nonlocality based on my *understanding* of Bell's theorem. You make it sound (with all this talk of "assumptions" and "vested interests") as if I and others just *arbitrarily* decide we like nonlocality, so we interpret Bell's theorem that way. That is just backwards. Read the article if you want to actually understand the issues.
 
  • #38
I don't understand what some people are arguing here.

ttn presents (in his scholarpedia article) several mathematical theorems.

As far as I see, they are all correct. I don't see any mathematical error in his mathematical proofs.

Then you have two possibilities:

1) You find a mathematical error in one of the mathematical proofs he presents. Then show it.

2) You don't find any mathematical error in his mathematical theorems, but you think that the mathematical expression he uses as a "necessary condition of locality" is actually not a necessary condition of locality based on YOUR definition of locality. Then show your definition of locality as clearly as possible and prove that his condition is not a necessary condition for your definition of locality.

I don't see anyone here doing 1) or 2)
 
  • #39
ttn said:
Hi everybody. I dropped into Physics Forums for the first time in a while just to see what was going on in one of my old hangouts. It was nice to see about 10 threads raging about Bell's theorem! But perhaps not so nice to see many people, with whom I argued at length in the old days here, saying the same exact WRONG things still after all these years! =)

Anyway, I just thought it might be helpful to advertise the existence of a really systematic, careful review article on Bell's Theorem that Goldstein, Tausk, Zanghi, and I finished last year (after working on it for more than a year). It's free online here

http://www.scholarpedia.org/article/Bell%27s_theorem

and addresses very explicitly and clearly a number of the issues being debated on the other several current Bell's Theorem threads. It is, in my hardly unbiased opinion, far and away the best and most complete existing resource for really understanding Bell's Theorem, so anybody with a remotely serious interest in the topic should study the article. I'd be happy to try to answer any questions anybody has, but post them here and base them somehow on the scholarpedia article since I won't have time to follow (let alone get entangled in) all the parallel threads.

Travis
At first sight the article looks very nice.

On a first negative note: when checking the little stuff that I know rather well (SR, not QM) by way of test, I find it nicely informative but a bit inaccurate. SR is, just like QM, an empirical theory: it is based on observations as summarized in its postulates, and it results in predictions of observations. That is why Einstein could (and did) flip-flop about the ether, and why Lorentz and Langevin could (and did) promote SR. However, I have read something similar to what you claim in a book on QM, so I guess that it comes straight out of one or two such books. And that brings me to a possible weak point of Scholarpedia: it seems to have a rather narrow basis. I thus expect its articles to be a high-quality reflection of a narrow range of opinions.

Anyway, I think that it (Scholarpedia incl. your article) is a useful complement to Wikipedia. :smile:
 
  • #40
mattt said:
I don't understand what some people are arguing here.

Thanks mattt. I share your surprise/confusion about what some people are arguing. If we were all co-authors, trying to decide how to structure a not-yet-written article, this kind of squabbling about what level of neutrality is appropriate, etc., would be quite reasonable. But... that's not the situation here. The article is written. If you don't like its style or don't think it's "fair", OK, whatever, don't read it. But there's really no point *arguing* about that.

Anyway, hopefully people will at some point get around to actually reading the thing and then raising questions about the proofs, arguments, definitions, etc.
 
  • #41
ttn, what's your opinion of Herbert's version of Bell's proof?
http://quantumtantra.com/bell2.html
I think it may be the simplest known proof of Bell's theorem.
 
  • #42
mattt said:
I don't understand what some people are arguing here.

ttn presents (in his scholarpedia article) several mathematical theorems.

As far as I see, they are all correct. I don't see any mathematical error in his mathematical proofs.

Then you have two possibilities:

1) You find a mathematical error in one of the mathematical proofs he presents. Then show it.

2) You don't find any mathematical error in his mathematical theorems, but you think that the mathematical expression he uses as a "necessary condition of locality" is actually not a necessary condition of locality based on YOUR definition of locality. Then show your definition of locality as clearly as possible and prove that his condition is not a necessary condition for your definition of locality.

I don't see anyone here doing 1) or 2)

(Apologies to ttn for answering mattt's question, which is not directly related to the article itself.)

Considering that the vast majority of the scientific community, including Einstein, believed that realism IS quite relevant to the EPR Paradox (completeness of QM), and therefore to Bell, you shouldn't be surprised that they don't feel compelled to refute this argument. There are just too many side elements to the matter. In other words: unless you are prepared to say that Bell's argument is an unnecessary step to disproving EPR, you cannot ignore realism.

You will quickly see that the base argument ttn is making has both semantic and definitional overtones. I don't expect you, ttn or anyone else to accept my reasoning, but this seems so clear to the remainder of this community that it more or less goes without saying:

(1) Locality + Perfect Correlations -> Realism

(2) Since Realism is deduced, and not assumed in (1), then it is not a necessary condition to arrive at the Bell result.

I agree with (1) but disagree with (2). For the leap to occur from (1) to (2), you must assume there exist *simultaneous* Perfect Correlations. That is, the individual "elements of reality" a la EPR exist simultaneously and independently of the act of observation. So the Realism requirement is actually implicit in (1). I think a more correct rendering of (1) is:

(3) Simultaneous (*see note below) Perfect Correlations -> Realism

Notice Locality is dropped as not being a necessary condition for this conclusion. On the other hand, Locality is required so you can satisfy the EPR requirement that you are not disturbing the particle being observed. So then you end up with:

(4) Locality + Realism -> Bell result

QED. So ttn argues (1) and (2), which the rest of us see as (3) and (4). Please, there is no reason for anyone to refute my argument, as we will just argue over words at this point. I have answered the question about what is in play here. Ultimately, depending on your perspective, you will adopt the definitions and requirements from EPR - or you will not. And that will drive which side you come down on. Note that almost everyone agrees on these things, regardless of the words:

a) Nature exhibits quantum non-locality (or non-separability, or whatever you call it).
b) Nature is contextual (or non-realistic, or whatever you call it).

In other words, a) and b) are today so wound up together that it takes word games to separate the conclusions of one group from the other. Note that ttn was able to acknowledge that our views are not so dissimilar even though our labels appear to be different. The only truly distinct position is that of those who are still in the local realistic camp - and even their position is not so distinct when you drill into it enough.

* Simultaneous meaning: 3 or more. EPR had only 2 and that is why the EPR Paradox was not resolved prior to Bell. Bell added the 3rd. See after Bell's (14) where this is dropped in quietly.
 
  • #43
Travis, you might be interested to see that in my blog I quoted some nontechnical highlights from your paper:
https://www.physicsforums.com/blog.php?bt=5628#comment5628
 
  • #44
Demystifier said:
Perhaps I could digest that claim if instead of "evidence" you said "proof". But experiments demonstrating violation of Bell inequalities definitely ARE physical evidence for nonlocality, even if they are not strictly a proof of it.
Physical evidence would entail the observation of an FTL transmission.
Demystifier said:
But if you still disagree, then it would be helpful if you could answer the following questions:

1. In your opinion, the experimental violation of Bell inequalities is evidence for what?
It's evidence for the existence of a relationship between entangled disturbances that can only be produced via certain preparations. (Hence, BI violations are used as entanglement witnesses.) Whether that relationship is produced locally or nonlocally is an open question. However, wrt Aspect 1982 the standard model seems to indicate that the relationship is produced locally via entangled disturbances being emitted by the same atom. If that's true, and if the global measurement parameter, θ, is, in effect, measuring that relationship, then assuming nonlocality seems unwarranted.
Demystifier said:
2. Suppose that experimental violation of Bell inequalities has been observed before the theory of quantum mechanics has been discovered. For such experimentalists, what would be a natural interpretation of their experimental results?
The same two options that exist now, I suppose: either there's some underlying nonlocal transmission (in some medium other than the em medium) between entangled disturbances, or there's something in the formalism on which BIs are based that doesn't fit the design and execution of the experiments.
 
  • #45
Demystifier said:
If I were primarily an experimentalist, I would assume neither locality nor nonlocality. Instead, I would make experiments without any theoretical prejudices. And if in a particular experiment I were to find correlations between spatially separated results of measurements such as those that violate Bell inequalities, then I would conclude (not assume!) that this particular experiment suggests the existence of some nonlocal influences.
The experiments assume that nature is local. BIs are based on a certain formalization of that assumption which limits the correlation between θ and rate of coincidental detection. A limitation which, imho, is not in line with what's known about the behavior of light. BI violation suggests two possibilities -- either nature is nonlocal or there's something about the BI which doesn't fit the experimental design and execution. One can assume the former or the latter as a working hypothesis.
 
  • #46
Demystifier said:
Let me guess: But you have absolutely no idea what that explanation might be. Am I right?
One hypothesis wrt why BIs are violated is that the limitation they place on the correlation between θ and rate of coincidental detection is unwarranted by the experimental designs and what's known about the behavior of light in scenarios involving crossed polarizers.

Demystifier said:
And the mere fact that you have no idea how to explain it without nonlocality should already be taken as evidence (not yet a proof) that in some cases nature might be nonlocal.
What's wrong with the notion that θ is measuring a relationship between entangled entities, that that relationship is produced via local interactions/transmissions, and that LR models of entanglement setups are unduly restricted?

Nonlocality is a possibility. But not the most parsimonious working assumption.
 
  • #47
ttn said:
That's exactly what everybody should think -- until they learn about Bell's theorem. In other words, your statement here reads to me like a confession that you haven't looked at or understood Bell's theorem.
Or that I interpret its physical meaning differently than you do.

ttn said:
Sorry, but none of this makes sense. Look at the role this lambda actually plays in the theorem. It can be *anything*. So the kind of scenario you describe (there are two different "parts" to lambda, one that affects such and such, the other affecting thus and so...) is perfectly well covered already -- i.e., it is already ruled out by the theorem.
Bell concludes that separable predetermination of the rate of coincidental detection is ruled out. I agree. The key term here is separable. A nonseparable relationship between λa and λb can't be separated and encoded in the function determining coincidental detection via the functions that determine individual detection, and be expected to produce the same correlation curve that using a single nonvarying and nonseparable λ would. But this is what Bell's LR formulation does. So everything isn't perfectly well covered, and the resulting BIs place an unwarranted restriction on the correlation between θ and the rate of coincidental detection.

I'm just suggesting that the conceptualization of the experimental situation might be more closely scrutinized.

Or, somebody can produce a nonlocal transmission and I'll just shut up.
 
  • #48
mattt said:
I don't understand what some people are arguing here.

ttn presents (in his scholarpedia article) several mathematical theorems.

As far as I see, they are all correct. I don't see any mathematical error in his mathematical proofs.

Then you have two possibilities:

1) You find a mathematical error in one of the mathematical proofs he presents. Then show it.

2) You don't find any mathematical error in his mathematical theorems, but you think that the mathematical expression he uses as a "necessary condition of locality" is actually not a necessary condition of locality based on YOUR definition of locality. Then show your definition of locality as clearly as possible and prove that his condition is not a necessary condition for your definition of locality.

I don't see anyone here doing 1) or 2)
There's a third possibility. That there's no way to explicitly encode any locality condition in the function determining rate of coincidental detection that both clearly represents locality and which isn't at odds with the design and execution of Bell tests. At least I can't think of one.
 
  • #49
I forgot to reply to these, before I take on your paper.
ttn said:
That is patently absurd. Tell me specifically where any of us let some kind of Bohmian bias sneak into the arguments about Bell's theorem.

This profoundly misstates the situation. I *infer* nonlocality based on my *understanding* of Bell's theorem. You make it sound (with all this talk of "assumptions" and "vested interests") as if I and others just *arbitrarily* decide we like nonlocality, so we interpret Bell's theorem that way. That is just backwards. Read the article if you want to actually understand the issues.
Ok. I retract, with apology, my statements regarding assumptions and vested interests. But I still think your inference of nonlocality might be overlooking or mistreating something important in the relationship between LR formulation and experimental design and execution.

To the paper!
 
  • #50
lugita15 said:
ttn, what's your opinion of Herbert's version of Bell's proof?
http://quantumtantra.com/bell2.html
I think it may be the simplest known proof of Bell's theorem.

It's nice. (I hadn't seen it before, so thanks for pointing it out!) I don't think it's any simpler, though, than the proof we give in the scholarpedia article -- see the "Bell's inequality theorem" section and in particular the proof that

1/4 + 1/4 + 1/4 ≥ 1.

Actually, this is very very closely related to what Herbert does, so probably instead of arguing about which one is simpler, we should just call them the same proof!
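To make that number game concrete, here is a minimal numerical check (my own sketch, assuming the usual EPRB singlet with three coplanar axes 120° apart; the article's exact choice of angles may differ). Predetermined ±1 values force the three "opposite outcomes" probabilities to sum to at least 1, while quantum mechanics predicts 1/4 for each:

[code]
# Hypothetical illustration (not from the article): enumerate the 8 possible
# "instruction sets" (Za, Zb, Zc) a particle could carry for three axes.
# Bob's particle carries the opposite values, so Alice and Bob get OPPOSITE
# results along two different axes i, j exactly when Zi == Zj.
from itertools import product
import math

pairs = [("a", "b"), ("b", "c"), ("a", "c")]
for Za, Zb, Zc in product((+1, -1), repeat=3):
    vals = {"a": Za, "b": Zb, "c": Zc}
    # with only two possible values, at least one of the three pairs must match
    assert sum(vals[i] == vals[j] for i, j in pairs) >= 1

# So for ANY mixture of instruction sets,
#   P(opp|a,b) + P(opp|b,c) + P(opp|a,c) >= 1.
# The singlet prediction for axes 120 degrees apart is cos^2(60 deg) = 1/4 each:
p_qm = math.cos(math.radians(60)) ** 2
print(p_qm, 3 * p_qm)  # 0.25 0.75 < 1, hence the contradiction
[/code]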

Incidentally, while Herbert's article is a nice proof of (what we in the scholarpedia article call) "Bell's inequality theorem", I find it less than ideal as a proof of non-locality, i.e., a full proof of (what we in the scholarpedia article call) "Bell's theorem". The reason is that it seems to tacitly rely on an assumption that there are "local deterministic hidden variables" determining the outcomes, but without explaining clearly why this is actually not an assumption at all but instead something that follows already from (a) the assumption of locality and (b) the perfect correlations one observes when the polarizers on the two sides are perfectly aligned.
 
  • #51
ThomasT said:
I still think your inference of nonlocality might be overlooking or mistreating something important in the relationship between LR formulation and experimental design and execution.

I will be anxious to hear your diagnosis of what, exactly, was overlooked or mistreated.
 
  • #52
harrylin said:
when checking the little stuff that I know rather well (SR, not QM) by way of test, I find it nicely informative but a bit inaccurate.

Could you say exactly what you thought was inaccurate? I couldn't understand, from what you wrote, what you had in mind exactly.
 
  • #53
DrChinese said:
Considering that the vast majority of the scientific community, including Einstein, believed that realism IS quite relevant to the EPR Paradox (completeness of QM), and therefore to Bell, you shouldn't be surprised ...

OK, OK, let's go through this again. It's not that complicated. There's no reason we can't all get onto the same page here.

1. Bohr asserts that "QM is complete". It's not entirely clear exactly what this is supposed to mean, but everybody agrees it at least means that particles can never possess "simultaneous definite values" for non-commuting observables. For example, no spin 1/2 particle can ever possess, at the same time, a definite value for s_x and s_y.

2. EPR (really this is Bohm's 1951 version, but who cares) argue as follows: you can create a pair of spin 1/2 particles such that measuring s_x of particle 1 allows you to know, with certainty, what a subsequent measurement of s_x of particle 2 will yield. And similarly for s_y. So imagine the following experiment: such a pair is created, with one particle going toward Bob and one toward Alice. Now Alice is going to flip a coin (or in some other "random" way, i.e., a way that in no way relates to the physical state of the two particles here under discussion) and measure s_x or s_y on her particle depending on the outcome of the coin flip. She will thus come to know, with certainty, the value of one of these two properties of Bob's particle. So far there is nothing controversial here; it is just a summary of certain of QM's predictions. But now let us *assume locality*. This has several implications here. First, the outcome of Alice's coin flip cannot influence the state of Bob's particle. Second, Alice's subsequent measurement of either s_x or s_y on her particle cannot influence the state of Bob's particle. Now think about what all this implies. Suppose Alice got heads and so measured s_x. Now it is uncontroversial that Bob's particle now possesses a definite s_x value. But it couldn't have acquired this value as a result of anything Alice did; so it must have had it all along. And since Alice could (for all Bob's particle knows) have flipped tails instead, Bob's particle must also have possessed an s_y value all along. (Suppose it didn't. But then, if Alice had got tails, which she might have, Bob's particle wouldn't know how to "answer" if its s_y was subsequently measured... so it might sometimes answer "wrong", i.e., contrary to the perfect correlations predicted by QM.) Conclusion: locality requires Bob's particle to possess simultaneous definite values for s_x and s_y. (A slightly more precise way to put this would be: simultaneous definite values which then simply get revealed by measurements, i.e., what are usually called "hidden variables", are the *only local way* to account for the perfect correlations that are observed when Alice and Bob measure along the same axis.) This conclusion of course contradicts Bohr's completeness doctrine, so for EPR (who took locality for granted, as an unquestioned premise) this showed that, contra Bohr, QM was actually *incomplete*.
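As a toy illustration of how little the "hidden variables" conclusion asks for (a sketch of mine, not something from the article): the source simply hands the two particles opposite predetermined answer lists, and the same-axis perfect anti-correlations then come out with certainty, with no communication at measurement time.

[code]
# Toy sketch (not the article's model): a local "instruction set" account of
# the perfect anti-correlations of a spin singlet measured along the same axis.
import random

def make_singlet_pair(axes=("x", "y")):
    values = {ax: random.choice((+1, -1)) for ax in axes}  # fixed at the source
    alice = values
    bob = {ax: -v for ax, v in values.items()}             # opposite by construction
    return alice, bob

alice, bob = make_singlet_pair()
axis = random.choice(("x", "y"))    # Alice's "coin flip", made far away
assert alice[axis] == -bob[axis]    # perfect anti-correlation, purely locally
[/code]

This is exactly the kind of account that Bell then shows cannot be extended to the full statistics for different axes on the two sides.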

3. Bell shows that these "simultaneously definite values that simply get revealed by measurements" (i.e., hidden variables) imply conflicts with other predictions of QM -- predictions we now know to be empirically correct. Bell concludes that these hidden variables are not the correct explanation of QM statistics, which in turn means that locality is false (since these hidden variables were the only way to locally explain *some* of the QM statistics).

Now the reason I wanted to lay this out is that you insist on grouping 1,2, and 3 together as if they were all some inseparable whole. But they're not. There are two different things going on here. The first one is: the EPR argument, which is a response to Bohr's completeness claim. The logic is simple: EPR prove that locality --> SDV (simultaneous definite values), which in turn shows that completeness is false... so long as you assume locality! Note in particular that if that's all you're talking about -- the EPR argument -- there is no implication whatsoever that reality is non-local, or anything like that. Now the second issue is Bell's theorem. This is the conjunction of 2 and 3 above: locality --> SDV, but then SDV --> conflict with QM predictions; hence locality --> conflict with QM predictions. What I want to stress here is that this completely disentangles from the "completeness doctrine" issue, the issue of whether or not there are hidden variables. That's what I wrote the other day, about "hidden variables" functioning merely as a "middle term" in the logic here. The point is, if you just run the EPR+Bell argument, i.e., prove that locality --> conflict with QM predictions, you don't make any *assumptions* about whether QM is complete or not, and you don't get to *infer* anything about whether QM is complete or not. It just doesn't speak to that at all one way or the other.

Yet you insist on repeating over and over again that "realism [i.e., hidden variables] is quite relevant to the EPR paradox and therefore to Bell". This isn't exactly the wrongest thing ever, but it sure is misleading! You make it sound as if (indeed, I'm pretty sure you believe that) one needs to make some *assumption* about "realism" in order to run Bell's argument. But that isn't the case. And the fact that Bell's argument starts by recapitulating the EPR argument, and that the EPR argument has some implications about "realism" in another discussion, doesn't change that at all.


unless you are prepared to say that Bell's argument is an unnecessary step to disproving EPR, you cannot ignore realism.

Here you equivocate on "EPR". Does this mean the *argument* from locality --> SDV? Or does it mean the *conclusion*, namely, SDV?

My view is that the *argument* is entirely valid. However we now know, thanks to Bell, that the premise (namely, locality) is false. So we now know that the EPR argument doesn't tell us anything one way or the other about SDV/realism/HVs.

Do you disagree? If so, where is the flaw in the *argument* (recapitulated in 2 above)?

(1) Locality + Perfect Correlations -> Realism

(2) Since Realism is deduced, and not assumed in (1), then it is not a necessary condition to arrive at the Bell result.

I agree with (1) but disagree with (2). For the leap to occur from (1) to (2), you must assume there exist *simultaneous* Perfect Correlations.

Huh? Recall that "realism" here means (for example) that Bob's particle possesses simultaneous definite pre-existing values for both s_x and s_y, which values are simply revealed if an s_x or s_y measurement is made on the particle. Nothing more than that is needed to derive a Bell inequality. (Less, actually, is needed... but this should suffice here.)


(3) Simultaneous (*see note below) Perfect Correlations -> Realism

Huh? You'll have to explain what this SPC means, and then run the proof.


Notice Locality is dropped as not being a necessary condition for this conclusion. On the other hand, Locality is required so you can satisfy the EPR requirement that you are not disturbing the particle being observed. So then you end up with:

(4) Locality + Realism -> Bell result

QED.

I'm sorry, I can't follow this at all.


Ultimately, depending on your perspective, you will adopt the definitions and requirements from EPR - or you will not. And that will drive what side you come down on.

I'm sorry, there is no such ambiguity in the definitions/requirements. The argument is clear. You haven't understood it properly.


* Simultaneous meaning: 3 or more. EPR had only 2 and that is why the EPR Paradox was not resolved prior to Bell. Bell added the 3rd. See after Bell's (14) where this is dropped in quietly.

No, no, no. 2 is plenty. You can have a Bell inequality with only 2 settings on each side; see CHSH. But it doesn't matter anyway. The same exact argument for

locality --> 2-realism

(where "2-realism" means s_x and s_y both have simultaneous definite hidden variable realistic values) also leads immediate to

locality --> 3-realism.

There is no difference at all. You are totally barking up the wrong tree.
 
  • #54
ThomasT said:
There's a third possibility. That there's no way to explicitly encode any locality condition in the function determining rate of coincidental detection that both clearly represents locality and which isn't at odds with the design and execution of Bell tests. At least I can't think of one.

How about Bell's locality condition? (See section 6 of the scholarpedia article.)
 
  • #55
ThomasT said:
Bell concludes that separable predetermination of the rate of coincidental detection is ruled out. I agree. The key term here is separable. A nonseparable relationship between λa and λb can't be separated and encoded in the function determining coincidental detection via the functions that determine individual detection, and be expected to produce the same correlation curve that using a single nonvarying and nonseparable λ would.

I can't parse these words, but the issue is simple: does the kind of thing you have in mind respect, or not respect, Bell's definition of locality? If it does, it will make predictions in accord with the inequality (and hence in conflict with experiment). If it doesn't, it's nonlocal and you might as well adopt this simpler characterization of it.

That's what the theorem says. And ... paraphrasing mattt ... it's a theorem. You can't just claim that you "interpret" it differently because you don't like it. Point out the flaw in the proof, or reconcile yourself to it. Those are the options.
 
  • #56
ttn said:
No, no, no. 2 is plenty. You can have a Bell inequality with only 2 settings on each side; see CHSH.

CHSH has 4 settings: 0, 22.5, 45, 67.5. Bell used 3 for his: a, b, c. EPR-B used 2. So you are counting the wrong things. We know entangled pairs can only be measured at 2 angles at a time. But if 2 were plenty, we wouldn't have needed Bell. That is why the EPR Paradox was a "tie" until Bell arrived.

Again, my goal was not to debate the point (as we won't agree or change our minds) but to answer the question of WHY your perspective is not generally accepted. You do not define things the way the rest of us do.
 
  • #57
ttn said:
OK, OK, let's go through this again. It's not that complicated. There's no reason we can't all get onto the same page here.

...

The point is, if you just run the EPR+Bell argument, i.e., prove that locality --> conflict with QM predictions, you don't make any *assumptions* about whether QM is complete or not, and you don't get to *infer* anything about whether QM is complete or not. It just doesn't speak to that at all one way or the other.

You are right. I see it crystal clear. Even for those who don't understand the previous explanation in words, in his scholarpedia article he proves it mathematically (with clearly stated mathematical definitions and mathematically correct proofs, as far as I could check).

The only way out I see (for those who don't like this result) is to show that what he calls "a necessary condition of locality" (and he defines it clearly in mathematical terms) is not a necessary condition of locality for YOUR definition of locality (and you must show your own definition of locality as clearly as possible and you must prove that it doesn't imply his condition).

Another way out is to believe in an incredibly great cosmic conspiracy.
 
  • #58
assume the universe is a one path version of MWI.

there is no "non-locality" is there?
 
  • #59
ttn, in your description of the Alice and Bob experiment you keep talking about the two particles as separate systems, which they are not. I think it needs more careful phrasing.
 
  • #60
Hello Travis,

In the section titled "Bell's inequality theorem" you derive Bell's inequality supposing that the experimental outcomes were non-contextual (cf "To see this, suppose that the spin measurements for both particles do simply reveal pre-existing values."). To your credit, in the section on "Bell's theorem and non-contextual hidden variables" you discuss the fact that non-contextual hidden variables are naive and unreasonable.

You then proceed to show that you can still obtain the inequalities by assuming only locality in the section titled "The CHSH–Bell inequality: Bell's theorem without perfect correlations".

(1) You say
"While the values of A1 and A2 may vary from one run of the experiment to another even for the same choice of parameters, we assume that, for a fixed preparation procedure on the two systems, these outcomes exhibit statistical regularities. More precisely, we assume these are governed by probability distributions Pα1,α2(A1,A2) depending of course on the experiments performed, and in particular on α1 and α2."

By "statistical regularities" do you mean simply a probability distribution Pα1,α2(A1,A2) exists? Or are you talking about more than that.

(2) You say
"However, if locality is assumed, then it must be the case that any additional randomness that might affect system 1 after it separates from system 2 must be independent of any additional randomness that might affect system 2 after it separates from system 1. More precisely, locality requires that some set of data λ — made available to both systems, say, by a common source16 — must fully account for the dependence between A1 and A2 ; in other words, the randomness that generates A1 out of the parameter α1 and the data codified by λ must be independent of the randomness that generates A2 out of the parameter α2 and λ ."

What if instead you assumed that λ did not originate from the source but was instantaneously (non-locally) imparted from a remote planet to produce result A2 together with α2, and result A1 together with α1? How can you explain away the suggestion that the rest of your argument will now prove the impossibility of non-locality?

(3) You proceed to derive your expectation values Eα1,α2(A1A2|λ), defined over the probability measure, Pα1,α2(⋅|λ) and ultimately Bell's inequality based on it
[itex]C(\alpha_1,\alpha_2)=E_{\alpha_1,\alpha_2}(A_1A_2)=\int_\Lambda E_{\alpha_1,\alpha_2}(A_1A_2|\lambda)\,\mathrm dP(\lambda),[/itex]
...
[itex]|C(\mathbf a,\mathbf b)-C(\mathbf a,\mathbf c)|+|C(\mathbf a',\mathbf b)+C(\mathbf a',\mathbf c)|\le2,[/itex]

To make the following clear, I'm going to fully specify the implied notation in the above as follows:

[itex]|C(\mathbf a,\mathbf b|\lambda)-C(\mathbf a,\mathbf c|\lambda)|+|C(\mathbf a',\mathbf b|\lambda)+C(\mathbf a',\mathbf c|\lambda)|\le2,[/itex]

This starts to reveal the problem: unless all terms in the above inequality are defined over the exact same probability measure, the above inequality does not make sense. In other words, the only way you were able to derive such an inequality was to assume that all the terms are defined over the exact same probability measure P(λ). Do you agree? If not, please show the derivation. In fact, the very next "Proof" section explicitly confirms my statement.

(4) In the section titled "Experiments", you start by saying:
Bell's theorem brings out the existence of a contradiction between the empirical predictions of quantum theory and the assumption of locality.
(a) Now since you did not show it explicitly in the article, I presume that when you say Bell's theorem contradicts quantum theory, you mean you have calculated the LHS of the above inequality from quantum theory and it was greater than 2. Would you be so kind as to show the calculation and, in the process, explain how you made sure in your calculation that all the terms you used were defined over the exact same probability measure P(λ)?
(b) You also discussed how several experiments have demonstrated violation of Bell's inequality, I presume by also calculating the LHS and comparing with the RHS of the above. Are you aware of any experiments in which experimenters made sure the terms from their experiments were defined over the exact same probability measure?

(5) Since you obviously agree that non-contextual hidden variables are naive and unreasonable, let us look at the inequality from the perspective of how experiments are usually performed. For this purpose, I will rewrite the four terms obtained from a typical experiment as follows:

[itex]C(\mathbf a_1,\mathbf b_1)[/itex]
[itex]C(\mathbf a_2,\mathbf c_2)[/itex]
[itex]C(\mathbf a_3',\mathbf b_3)[/itex]
[itex]C(\mathbf a_4',\mathbf c_4)[/itex]

Where each term originates from a separate run of the experiment denoted by the subscripts. Let us assume for a moment that the same distribution of λ is in play for all the above terms. However, if we were to ascribe 4 different experimental contexts to the different runs, we will have the terms.

[itex]C(\mathbf a,\mathbf b|\lambda,1)[/itex]
[itex]C(\mathbf a,\mathbf c|\lambda,2)[/itex]
[itex]C(\mathbf a',\mathbf b|\lambda,3)[/itex]
[itex]C(\mathbf a',\mathbf c|\lambda,4)[/itex]

Where we have moved the indices into the conditions. We still find that each term is defined over a different probability measure P(λ,i), i=1,2,3,4 , where i encapsulates all the different conditions which make one run of the experiment different from another.

Therefore could you please explain why this is not a real issue when we compare experimental results with the inequality.
 
  • #61
billschnieder said:
Hello Travis,

Hi Bill, thanks for the thoughtful questions about the actual article! =)



By "statistical regularities" do you mean simply a probability distribution Pα1,α2(A1,A2) exists? Or are you talking about more than that.

Nothing more. But of course the real assumption is that this probability distribution can be written as in equations (3) and (4). In particular, that is where the "no conspiracies" and "locality" assumptions enter -- or really, here, are formulated.
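For readers without the article in front of them, those two equations presumably amount to the following (my reconstruction from this discussion, so the article's exact wording and numbering may differ). "No conspiracies": the observed distribution arises by averaging over λ with a measure that does not depend on the settings, [itex]P_{\alpha_1,\alpha_2}(A_1,A_2)=\int_\Lambda P_{\alpha_1,\alpha_2}(A_1,A_2|\lambda)\,\mathrm dP(\lambda)[/itex] with [itex]P(\lambda)[/itex] independent of [itex]\alpha_1[/itex] and [itex]\alpha_2[/itex]. "Locality": conditional on λ, the two outcomes are independent, [itex]P_{\alpha_1,\alpha_2}(A_1,A_2|\lambda)=P_{\alpha_1}(A_1|\lambda)\,P_{\alpha_2}(A_2|\lambda)[/itex].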



What if instead you assumed that λ did not originate from the source but was instantaneously (non-locally) imparted from a remote planet to produce result A2 together with α2, and result A1 together with α1? How can you explain away the suggestion that the rest of your argument will now prove the impossibility of non-locality?

I don't understand. The λ here should be thought of as "whatever fully describes the state of the particle pair, or whatever you want to call the 'data' that influences the outcomes -- in particular, the part of that 'data' which is independent of the measurement interventions". It doesn't really matter where it comes from, though obviously if you have some theory where it swoops in at the last second from Venus, that would be a nonlocal theory.

But mostly I don't understand your last sentence above. What is suggesting that the rest of the argument will prove the impossibility of non-locality? I thought the argument proved the inevitability of non-locality!



To make the following clear, I'm going to fully specify the implied notation in the above as follows:

[itex]|C(\mathbf a,\mathbf b|\lambda)-C(\mathbf a,\mathbf c|\lambda)|+|C(\mathbf a',\mathbf b|\lambda)+C(\mathbf a',\mathbf c|\lambda)|\le2,[/itex]

You've misunderstood something. The C's here involve averaging/integrating over λ. They are in no sense conditional/dependent on λ. See the equation just above where CHSH gets mentioned, which defines the C's.

This starts to reveal the problem: unless all terms in the above inequality are defined over the exact same probability measure, the above inequality does not make sense. In other words, the only way you were able to derive such an inequality was to assume that all the terms are defined over the exact same probability measure P(λ). Do you agree?

No. You are confusing the probability [itex]P_{\alpha_1,\alpha_2}(\cdot|\lambda)[/itex] with [itex]P(\lambda)[/itex]. You first average the product [itex] A_1 A_2[/itex] with respect to [itex]P_{\alpha_1,\alpha_2}(\cdot|\lambda)[/itex] to get [itex]E_{\alpha_1,\alpha_2}(A_1 A_2 | \lambda)[/itex]. Then you average this over the possible λs using P(λ).

Maybe you missed the "no conspiracies" assumption, i.e., that P(λ) can't depend on [itex]\alpha_1[/itex] or [itex]\alpha_2[/itex].




(a) Now since you did not show it explicitly in the article, I presume that when you say Bell's theorem contradicts quantum theory, you mean you have calculated the LHS of the above inequality from quantum theory and it was greater than 2. Would you be so kind as to show the calculation and, in the process, explain how you made sure in your calculation that all the terms you used were defined over the exact same probability measure P(λ)?

I don't understand. The QM calculation is well-known and not controversial. You really want me to take the time to explain that? Look in any book. But I have the sense you know how the calculation goes and you're trying to get at something. So just tell me where you're going. Your last statement makes no sense to me. In QM, λ is just the usual wave function or quantum state for the pair; typically we assume that this can be completely controlled, so P(λ) is a delta function. But in QM, you can't do the factorization that's done in equation (4). It's not a local theory. (Not that you need Bell's theorem to see/prove this.)
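For the record, here is a numerical version of that well-known calculation (a sketch of my own, using the standard spin-singlet correlation C(x,y) = -cos(angle between x and y) and one conventional choice of angles, which may not be the article's):

[code]
# QM prediction for the CHSH combination used in the article, spin-singlet case.
import math

def C(x_deg, y_deg):
    # singlet correlation for analyzers at angles x_deg, y_deg (in degrees)
    return -math.cos(math.radians(x_deg - y_deg))

a, a2, b, c = 0.0, 90.0, 45.0, 135.0   # a' written as a2
lhs = abs(C(a, b) - C(a, c)) + abs(C(a2, b) + C(a2, c))
print(lhs)   # 2*sqrt(2) ~ 2.83 > 2: QM violates the CHSH bound
[/code]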



(b) You also discussed how several experiments have demonstrated violation of Bell's inequality, I presume by also calculating the LHS and comparing with the RHS of the above. Are you aware of any experiments in which experimenters made sure the terms from their experiments were defined over the exact same probability measure?

No, the experiments don't measure the LHS of what you had written above. What they can measure is the C's as we define them -- i.e., involving the averaging over λ.



(5) Since you obviously agree that non-contextual hidden variables are naive and unreasonable, let us look at the inequality from the perspective of how experiments are usually performed. For this purpose, I will rewrite the four terms obtained from a typical experiment as follows:

[itex]C(\mathbf a_1,\mathbf b_1)[/itex]
[itex]C(\mathbf a_2,\mathbf c_2)[/itex]
[itex]C(\mathbf a_3',\mathbf b_3)[/itex]
[itex]C(\mathbf a_4',\mathbf c_4)[/itex]

Where each term originates from a separate run of the experiment denoted by the subscripts. Let us assume for a moment that the same distribution of λ is in play for all the above terms. However, if we were to ascribe 4 different experimental contexts to the different runs, we will have the terms.

[itex]C(\mathbf a,\mathbf b|\lambda,1)[/itex]
[itex]C(\mathbf a,\mathbf c|\lambda,2)[/itex]
[itex]C(\mathbf a',\mathbf b|\lambda,3)[/itex]
[itex]C(\mathbf a',\mathbf c|\lambda,4)[/itex]

Where we have moved the indices into the conditions. We still find that each term is defined over a different probability measure P(λ,i), i=1,2,3,4 , where i encapsulates all the different conditions which make one run of the experiment different from another.

Therefore could you please explain why this is not a real issue when we compare experimental results with the inequality.

Yes, for sure, if P(λ) is different for the 4 different (types of) runs, then you can violate the inequality (without any nonlocality!). The thing we call the "no conspiracies" assumption precludes this, however. It is precisely the assumption that the distribution of λ's is independent of the alpha's.

So I guess your issue is just what I speculated above: you do not accept the reasonableness of "no conspiracies", or didn't realize this assumption was being made. (I doubt it's the latter since we drum this home big time in that section especially, and elsewhere.)
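To illustrate both halves of that point numerically, here is a toy sketch (mine, with made-up labels, not a simulation of any real experiment): λ just pre-assigns a ±1 outcome to every local setting. With a single P(λ) the CHSH combination from the article cannot exceed 2, while a source whose λ-distribution depends on which pair of settings is in play can push it to the algebraic maximum of 4 with perfectly local outcomes.

[code]
# Toy model: lam pre-assigns a +/-1 outcome to each local setting (a' written as a2).
import random

SETTING_PAIRS = [("a", "b"), ("a", "c"), ("a2", "b"), ("a2", "c")]

def corr(lams, alice_setting, bob_setting):
    # average of the product of locally determined outcomes over a list of lams
    return sum(l[alice_setting] * l[bob_setting] for l in lams) / len(lams)

def chsh(runs):
    # runs maps each setting pair to the list of lams the source emitted for it
    return (abs(corr(runs[("a", "b")], "a", "b") - corr(runs[("a", "c")], "a", "c"))
            + abs(corr(runs[("a2", "b")], "a2", "b") + corr(runs[("a2", "c")], "a2", "c")))

def random_lam():
    return {k: random.choice((+1, -1)) for k in ("a", "a2", "b", "c")}

# 1) "No conspiracies": the same lam distribution for every setting pair.
shared = [random_lam() for _ in range(20000)]
print(chsh({pair: shared for pair in SETTING_PAIRS}))   # never exceeds 2

# 2) A source whose lam distribution tracks the chosen settings:
rigged = {
    ("a", "b"):  [{"a": +1, "a2": +1, "b": +1, "c": +1}],
    ("a", "c"):  [{"a": +1, "a2": +1, "b": +1, "c": -1}],
    ("a2", "b"): [{"a": +1, "a2": +1, "b": +1, "c": +1}],
    ("a2", "c"): [{"a": +1, "a2": +1, "b": +1, "c": +1}],
}
print(chsh(rigged))   # 4.0 -- local outcomes, but P(lambda) depends on the settings
[/code]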
 
  • #62
unusualname said:
assume the universe is a one path version of MWI.

there is no "non-locality" is there?

I don't know exactly what you mean by "one path version of MWI". But in general, about MWI, I'd say the problem is that there is no locality there either.
 
  • #63
DrChinese said:
CHSH has 4 settings: 0, 22.5, 45, 67.5.

but only 2 for each particle, which is (I thought) what you were talking about.

But the main point is that this whole counting (2, 3, 4) business is nonsensical. Can you really not follow the EPR argument, which establishes -- on the assumption of locality! -- that definite pre-existing values must exist... for one angle, for 2, for 3, for 113, for however many you care to prove. Let me just put it simply: the EPR argument shows that locality + perfect correlations implies definite pre-existing values for the spin/polarization along *all* angles.

Either you accept the validity of this or you don't. If you don't, tell me where it goes off the track. If you do, then there's nothing further to discuss because now, clearly, you can derive a Bell inequality.


We know entangled pairs can only be measured at 2 angles at a time.

Uh, you mean, each particle can be measured at 1 angle at a time? That's true. But why in the world does that matter? Nobody ever said you could measure (e.g.) all four of the correlation coefficients in the CHSH inequality on one single pair of particles!



Again, my goal was not to debate the point (as we won't agree or change our minds) but to answer the question of WHY your perspective is not generally accepted. You do not define things the way the rest of us do.

I don't hold out a lot of hope of changing your mind, either, but still, as long as you keep saying stuff that makes no sense, I will continue to call it out. Maybe somebody watching will learn something?

Actually I have a serious question. What, exactly, do you think I define differently than others? You really think it's disagreement over the definition of some term that explains our difference of opinion? What term??
 
  • #64
ttn said:
So I guess your issue is just what I speculated above: you do not accept the reasonableness of "no conspiracies", or didn't realize this assumption was being made. (I doubt it's the latter since we drum this home big time in that section especially, and elsewhere.)
No, I don't think superdeterminism is the reason billschnieder rejects Bell. If you want to see my (unsuccessful) attempt to ascertain what exactly billschnieder is talking about, see the last page or so of this thread.
 
  • #65
ttn said:
I don't understand. The λ here should be thought of as "whatever fully describes the state of the particle pair, or whatever you want to call the 'data' that influences the outcomes -- in particular, the part of that 'data' which is independent of the measurement interventions". It doesn't really matter where it comes from, though obviously if you have some theory where it swoops in at the last second from Venus, that would be a nonlocal theory.

But mostly I don't understand your last sentence above. What is suggesting that the rest of the argument will prove the impossibility of non-locality? I thought the argument proved the inevitability of non-locality!
If lambda can be anything which influences the outcomes, then why do you think the proof restricts it to locality? I can use the same argument to deny non-locality by simply redefining lambda the way I did. Why would this be wrong?

You've misunderstood something. The C's here involve averaging/integrating over λ. They are in no sense conditional/dependent on λ. See the equation just above where CHSH gets mentioned, which defines the C's.
If the C's are obtained by integrating over a certain probability distribution λ, then it means the C's are defined ONLY for the distribution of λ, let us call it ρ(λ), over which they were obtained. I included λ, and a conditioning bar just to reflect the fact that the C's are defined over a given distribution of λ which must be the same for each term. Do you disagree with this?

No. You are confusing the probability [itex]P_{\alpha_1,\alpha_2}(\cdot|\lambda)[/itex] with [itex]P(\lambda)[/itex]. You first average the product [itex] A_1 A_2[/itex] with respect to [itex]P_{\alpha_1,\alpha_2}(\cdot|\lambda)[/itex] to get [itex]E_{\alpha_1,\alpha_2}(A_1 A_2 | \lambda)[/itex]. Then you average this over the possible λs using P(λ).

Maybe you missed the "no conspiracies" assumption, i.e., that P(λ) can't depend on [itex]\alpha_1[/itex] or [itex]\alpha_2[/itex].
I don't think you are getting my point so let me try again using your Proof just above equation (5). Let us focus on what you are doing within the integral first. You start with (simplifying notation)

E(AB|λ) = E(A|λ)E(B|λ), which follows from your equation (4). Within the integral, you start with 4 terms based on this, presumably something like:

[itex]\big|E_{\mathbf a}(A_1|\lambda)E_{\mathbf b}(A_2|\lambda)-E_{\mathbf a}(A_1|\lambda)E_{\mathbf c}(A_2|\lambda)\big|\,+\,\big|E_{\mathbf a'}(A_1|\lambda)E_{\mathbf b}(A_2|\lambda)+E_{\mathbf a'}(A_1|\lambda)E_{\mathbf c}(A_2|\lambda)\big|[/itex]

You then proceed to factor out the terms as follows:

[itex]\big|E_{\mathbf a}(A_1|\lambda)\big|\,\big(\big|E_{\mathbf b}(A_2|\lambda)-E_{\mathbf c}(A_2|\lambda)\big|\big)\,+\,\big|E_{\mathbf a'}(A_1|\lambda)\big|\,\big(\big|E_{\mathbf b}(A_2|\lambda)+E_{\mathbf c}(A_2|\lambda)\big|\big)[/itex]

Remember, we are still dealing with what is within the integral. It is therefore clear that, according to your proof, the Ea term from the E(a,b) experiment is exactly the same Ea term from the E(a,c) experiment. In other words, the E(a,b) and E(a,c) experiments must have the Ea term in common, the E(a′,b) and E(a′,c) experiments must have the Ea′ term in common, the E(a,b) and E(a′,b) experiments must have the Eb term in common, and the E(a,c) and E(a′,c) experiments must have the Ec term in common. Note the cyclicity in the relationships between the terms. In fact, according to your proof, you really only have 4 individual terms of the type Ei, which you have combined to form E(x,y) type terms using your factorizability condition (equation 4). If you now consider the integral, you now have lists of values, so to speak, which must be identical from term to term and reducible to only 4 lists.

If the above condition does not hold, your proof fails. This is evidenced by the fact that you cannot complete your proof without the factorization you performed. Another way of looking at it is to say that all of the paired products within the integral depend on the same λ. The proof depends on the fact that all the terms within the integral are defined over the same λ and contain the cyclicity described above, which allows you to factor terms out.

So what does this mean for the experiment? In a typical experiment we collect lists of numbers (±1). For each run, you collect 2 lists, for 4 runs you collect 8 lists. You then calculate averages for each pair (cf integrating) to obtain a value for the corresponding E(x,y) term. However, according to your proof, and the above analysis, those 8 lists MUST be redundant in the sense that 4 of them must be duplicates. Unless experimenters make sure their 8 lists are sorted and reduced to 4, it is not mathematically correct to think the terms they are calculating will be similar to Bell's or the CHSH terms. Do you disagree?

I don't understand. The QM calculation is well-known and not controversial. You really want me to take the time to explain that? Look in any book. But I have the sense you know how the calculation goes and you're trying to get at something.
Ok, let me present it differently. When you calculate the 4 CHSH terms from QM and use them simultaneously in the LHS of the inequality, are you assuming that each term originated from a different particle pair, or that they all originate from the same particle pair?

No, the experiments don't measure the LHS of what you had written above. What they can measure is the C's as we define them -- i.e., involving the averaging over λ.
Do you know of any experiment in which the 8 lists of numbers could be reduced to 4 as implied by your proof?
Yes, for sure, if P(λ) is different for the 4 different (types of) runs, then you can violate the inequality (without any nonlocality!). The thing we call the "no conspiracies" assumption precludes this, however. It is precisely the assumption that the distribution of λ's is independent of the alpha's.

Your "no-consipracy" assumption boils down to : "the exact same series of λs apply to each run of the experiment"

As I hope you see now, all that is required for your "no-conspiracy" assumption to fail is for the actual distribution of λs to be different from one run to another, which is not unreasonable. I think your "no-conspiracy" assumption is misleading because it gives the impression that there has to be some kind of conspiracy in order for the λs to be different. But given that the experimenters have no clue about the exact nature of λ, or how many distinct λ values exist, it is reasonable to expect the distribution of λ to be different from run to run. My question to you therefore was whether you knew of any experiment in which the experimenters made sure the exact same series of λs was realized for each run, in order to be able to use the "no-conspiracy" assumption. Just because you chose the name "no-conspiracy" to describe the condition does not mean its violation implies what is commonly known as "conspiracy". It is something that happens all the time in non-stationary processes. It would have been better to call it a "stationarity" assumption.

Note: if the same series of λs apply for each run, then the 8 lists of numbers MUST be reducible to 4. Do you agree? We can easily verify this from the experimental data available.
 
  • #66
Demystifier said:
So, how should we call articles concerned with truth, but not containing new results?
Not sure I understand. If an article is concerned with truth, it should say something new about argumentation, perspective, whatever. If it says nothing new, then how is it concerned with truth? And if it says something new, then it is a research article.

EDIT: I just thought that there can be a new way to explain something (in the sense of teaching). In that case I am not sure about the answer.
 
  • #67
ttn said:
Actually I have a serious question. What, exactly, do you think I define differently than others? You really think it's disagreement over the definition of some term that explains our difference of opinion? What term??

I told you that Perfect Correlations are really Simultaneous Perfect Correlations. Each Perfect Correlation defines an EPR element of reality; I hope that is clear. If they are *simultaneously* real, which I say is an assumption but you define as an inference, then you have realism. If it is an assumption, then QED. If it is an inference, then realism is not assumed and you are correct.

My point is that if in fact spin is contextual, then there cannot be realism. Ergo, the realism inference fails. So, for example, if I have a time-symmetric mechanism (local in that c is respected, but "quantum non-local" and not Bell-local), it will fail the assumption of realism (since there are no definite values except where actually measured). MWI is exactly the same in this respect.

In other words, the existence of an explicitly contextual model invalidates the inference of realism. That is why it must be assumed. Anyway, you asked where the difference of opinion is, and this is it.
 
  • #68
ttn said:
OK then, I take it back. It's not a review article. It's an encyclopedia entry. Am I allowed to be concerned with truth now?
I guess not. Well, for example, Wikipedia has a very strict policy on neutrality - Wikipedia:Neutral point of view
And Scholarpedia:Aims and policy says:
"Scholarpedia does not publish "research" or "position" papers, but rather "living reviews" ..."
But of course it might be that Scholarpedia has a more relaxed attitude toward neutrality because they have other priorities.

ttn said:
Well, of course the details depend on exactly what the entangled state is, but for the states standardly used for EPR-Bell type experiments, I would accept that as a rough description. But what's the point? Surely there's no controversy about what the predictions of QM are??
Then certainly "perfect correlations" are not convincingly confirmed by the experiment. Only the other one i.e. "sinusoidal relationship" prediction.
 
  • #69
billschnieder said:
If lambda can be anything which influences the outcomes, then why do you think the proof restricts it to locality?

Quoting Bell: "It is notable that in this argument nothing is said about the locality, or even localizability, of the variable λ."


I can use the same argument to deny non-locality by simply redefining lambda the way I did. Why would this be wrong?

I guess I missed the argument. How does assuming λ comes from Venus result in denying non-locality??


If the C's are obtained by integrating over a certain probability distribution of λ, then it means the C's are defined ONLY for the distribution of λ, let us call it ρ(λ), over which they were obtained. I included λ and a conditioning bar just to reflect the fact that the C's are defined over a given distribution of λ, which must be the same for each term. Do you disagree with this?

At best, it's bad notation. If you want to give them a subscript or something, to make explicit that they are defined for a particular assumed ρ(λ), then give them the subscript ρ, not λ. The whole idea here is that (in general) there is a whole spectrum of possible values of λ, with some distribution ρ, that are produced when the experimenter "does the same thing at the particle source". There is no control over, and no knowledge of, the specific value of λ for a given particle pair.
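
To make the role of ρ(λ) concrete, here is a rough numerical sketch (my own toy example, not anything from the article or the real experiments): λ is drawn from a fixed distribution, and the correlation is just the average of the product of two local outcome functions over many draws. The outcome functions A and B below are made up purely for illustration; the only point is that one fixes the *distribution* ρ, never any individual λ.

[code]
import numpy as np

rng = np.random.default_rng(0)

def A(a, lam):
    """Toy local outcome function for side 1 (made up for illustration, values ±1)."""
    return np.sign(np.cos(lam - a))

def B(b, lam):
    """Toy local outcome function for side 2 (made up for illustration, values ±1)."""
    return -np.sign(np.cos(lam - b))

def C(a, b, n=200_000):
    """Monte Carlo estimate of C(a,b) = ∫ A(a,λ) B(b,λ) ρ(λ) dλ.
    Here ρ is taken as uniform on [0, 2π); only the distribution is fixed.
    No individual λ value is known, chosen, or controlled."""
    lam = rng.uniform(0.0, 2*np.pi, size=n)   # same ρ(λ) no matter what a and b are
    return np.mean(A(a, lam) * B(b, lam))

print(C(0.0, 0.0))        # -1: perfect anti-correlation at equal settings
print(C(0.0, np.pi/2))    # ≈ 0 for this toy model
[/code]

Changing the setting arguments a and b changes nothing about how λ is generated; that is exactly what the subscript ρ (rather than λ) is meant to convey.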


It is therefore clear that, according to your proof, the Ea term from the E(a,b) experiment is exactly the same Ea term from the E(a,c) experiment.

Yes, correct.


In other words, the E(a,b) and E(a,c) experiments must have the Ea term in common, the E(a′,b) and E(a′,c) experiments must have the Ea′ term in common, the E(a,b) and E(a′,b) experiments must have the Eb term in common, and the E(a,c) and E(a′,c) experiments must have the Ec term in common.

Correct.


Note the cyclicity in the relationships between the terms. In fact, according to your proof, you really only have 4 individual terms of the type Ei, which you have combined to form E(x,y)-type terms using your factorizability condition (equation 4).

Correct.



If you now consider the integral, you have lists of values, so to speak, which must be identical from term to term and reducible to only 4 lists.

Just to make sure, by the "lists" you mean the functions (e.g.) [itex]E_a(A_1|\lambda)[/itex]?


Another way of looking at it is to say that all of the paired products within the integral depend on the same λ.

No, they all assume the same *distribution* over the lambdas.


The proof depends on the fact that all the terms within the integral are defined over the same λ and contain the cyclicity described above which allows you to factor terms out.

I don't even know what that means. The things you are talking about are *functions* of λ. What does it even mean to say they "assume the same λ"? No one particular value of λ is being assumed anywhere. Suppose I have two functions of x: f(x) and g(x). Now I integrate their product from x=0 to x=1. Have I "assumed the same value of x"? I don't even know what that means. What you're doing is adding up, for all of the values x' of x, the product f(x')g(x'). No particular value of x is given any special treatment. Same thing here.
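
If it helps, here is the same point in trivial numerical form (the functions f and g are chosen arbitrarily, just for illustration): integrating the product f(x)g(x) simply adds up contributions from all values of x, with no single value being "assumed".

[code]
import numpy as np

# Toy illustration: integrate f(x)*g(x) over [0, 1] by summing over a grid.
# Every grid value of x contributes to the sum; no particular x is singled
# out or "assumed" -- exactly the role λ plays inside ∫ ρ(λ) ... dλ.
f = lambda x: x**2
g = lambda x: np.cos(x)

x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
integral = np.sum(f(x) * g(x)) * dx
print(integral)   # ≈ 0.239, i.e. ∫_0^1 x² cos(x) dx
[/code]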


So what does this mean for the experiment? In a typical experiment we collect lists of numbers (±1). For each run you collect 2 lists; for 4 runs you collect 8 lists. You then calculate averages for each pair (cf. integrating) to obtain a value for the corresponding E(x,y) term. However, according to your proof and the above analysis, those 8 lists MUST be redundant in the sense that 4 of them must be duplicates.

Huh? Nothing at all implies that. The lists here are lists of outcome pairs, (A1, A2). The experimenters will take the list for a given "run" (i.e., for a given setting pair) and compute the average value of the product A1*A2. That's how the experimenters compute the correlation functions that the inequality constrains. You are somehow confusing what the experimentalists do, with what is going on in the derivation of the inequality.
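
In code, what the experimenters do for each run is nothing more than this (the numbers below are made up purely for illustration):

[code]
import numpy as np

# For one "run" (one fixed setting pair), the experimenters have a list of
# outcome pairs (A1, A2), each outcome being ±1. The estimated correlation
# for that setting pair is simply the average of the products A1*A2.
def estimate_correlation(A1, A2):
    return np.mean(np.asarray(A1) * np.asarray(A2))

# Hypothetical outcomes for a single setting pair:
A1 = [+1, -1, -1, +1, -1, +1, +1, -1]
A2 = [-1, +1, -1, -1, +1, -1, +1, +1]
print(estimate_correlation(A1, A2))   # -0.5 for this made-up data
[/code]

Nothing in this procedure sorts, compares, or "reduces" lists across the different setting pairs.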



Unless experimenters make sure their 8 lists are sorted and reduced to 4, it is not mathematically correct to think the terms they are calculating will be similar to Bell's or the CHSH terms. Do you disagree?

I don't even understand what you're saying. There is certainly no sense in which the experimenters' lists (of A1, A2 values) will look like, or even be comparable to, the "lists" I thought you had in mind above (namely, the one-sided expectation functions).



OK, let me present it differently. When you calculate the 4 CHSH terms from QM and use them simultaneously on the LHS of the inequality, are you assuming that each term originated from a different particle pair, or that they all originate from the same particle pair?

The question doesn't arise. You are just calculating 4 different things -- the predictions of QM for a certain correlation in a certain experiment -- and then adding them together in a certain way. No assumption is made, or needed, or even meaningful, about each of the 4 calculations somehow being based on the same particle pair. (I say it's not even meaningful because what you're calculating is an expectation value -- not the kind of thing you could even measure with only a single pair.)
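
For concreteness, here is the standard QM arithmetic for the four terms (using one common sign convention for the CHSH combination and the usual optimal angles; the exact convention in the article may differ):

[code]
import numpy as np

# QM prediction for the spin-singlet state: E(a, b) = -cos(a - b).
# Each of the four CHSH terms is an expectation value for a different
# experiment (different setting pair); no single particle pair is involved.
def E_qm(a, b):
    return -np.cos(a - b)

a, ap = 0.0, np.pi/2          # the two settings on side 1
b, bp = np.pi/4, 3*np.pi/4    # the two settings on side 2

S = E_qm(a, b) - E_qm(a, bp) + E_qm(ap, b) + E_qm(ap, bp)
print(abs(S), 2*np.sqrt(2))   # |S| = 2√2 ≈ 2.83, exceeding the CHSH bound of 2
[/code]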


Do you know of any experiment in which the 8 lists of numbers could be reduced to 4, as implied by your proof?

?


Your "no-consipracy" assumption boils down to : "the exact same series of λs apply to each run of the experiment"

I don't know what you mean by "series of λs". What the assumption boils down to is: the distribution of λs (i.e., the fraction of the time that each possible value of λ is realized) is the same for the billion runs where the particles are measured along (a,b), the billion runs where the particles are measured along (a,c), etc. That is, basically, it is assumed that the settings of the instruments do not influence or even correlate with the state of the particle pairs emitted by the source.

Note that in the real experiments, the experimenters go to great lengths to try to have the instrument settings (for each pair) be chosen "randomly", i.e., by some physical process that is (as far as any sane person could think) totally unrelated to what's going on at the particle source. It really is just like a randomized drug trial, where you flip a coin to decide who will get the drug and who will get the placebo. You have to assume that the outcome of the coin flip for a given person is uninfluenced by and uncorrelated with the person's state of health.
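
A toy simulation may make the analogy sharper (again my own made-up local model, not anything from the article): the settings are picked at random, independently of λ, and λ is drawn from the same distribution on every run. Under those conditions the estimated CHSH quantity never exceeds 2 except by statistical noise; and, as noted earlier, if the distribution of λ were instead allowed to depend on which setting pair is in use, the bound could be beaten without any nonlocality.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Toy local model (made up for illustration): outcomes are ±1 functions of
# the setting and of λ, and λ is drawn from the SAME ρ(λ) on every run,
# independently of the (randomly chosen) settings -- the "no conspiracies"
# situation, like the coin flip in a randomized drug trial.
def A(a, lam): return np.sign(np.cos(lam - a))
def B(b, lam): return -np.sign(np.cos(lam - b))

a_settings = np.array([0.0, np.pi/2])
b_settings = np.array([np.pi/4, 3*np.pi/4])

n = 400_000
lam = rng.uniform(0, 2*np.pi, n)     # same ρ(λ) regardless of the settings
i = rng.integers(0, 2, n)            # random, λ-independent setting choices
j = rng.integers(0, 2, n)
A1 = A(a_settings[i], lam)
A2 = B(b_settings[j], lam)

# Estimate each correlation from the runs that happened to use that setting pair.
E = np.empty((2, 2))
for ii in range(2):
    for jj in range(2):
        sel = (i == ii) & (j == jj)
        E[ii, jj] = np.mean(A1[sel] * A2[sel])

S = E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1]
print(abs(S))   # ≈ 2 here (this toy model saturates the bound); a local,
                # no-conspiracy model never exceeds 2 beyond statistical noise
[/code]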


As I hope you see now, all that is required for your "no-conspiracy" assumption to fail is for the actual distribution of λs to be different from one run to another, which is not unreasonable.

Yes, that's right. That's indeed exactly what would make it fail. We disagree about how unreasonable it is to deny this assumption, though. I tend to think, for example, that if a randomized drug trial shows that X cures cancer, you'd have to be pretty unreasonable to refuse to take the drug yourself (after you get diagnosed with cancer) on the grounds that the trial *assumed* that the distribution of initial healthiness for the drug and placebo groups was the same. This is an assumption that gets made (usually tacitly) whenever *anything* is learned/inferred from a scientific experiment. So to deny it is tantamount to denying the whole enterprise of trying to learn about nature through experiment.

I think your "no-conspiracy" assumption is misleading because it gives the impression that there has to be some kind of conspiracy in order for the λs to be different.

I think it's accurately-named, for the same reason.


But given that the experimenters have no clue about the exact nature of λ, or how many distinct λ values exist, it is reasonable to expect the distribution of λ to be different from run to run.

I disagree. It is normal in science to be ignorant of all the fine details that determine the outcomes. Think again of the drug trial. Would you say that, because the doctors don't know exactly what properties determine whether somebody dies of cancer or survives, it is reasonable to assume that the group of people who got the drug (because some coin landed heads) is substantially different, in terms of those properties, from the group who got the placebo (because the coin landed tails)?


My question to you therefore was if you knew of any experiment in which the experimenters made sure the exact same series of λs were realized for each run in order to be able to use the "no-conspiracy" assumption.

Uh, again, the λs aren't something the experimenters know about. Indeed, nobody even knows for sure what they are -- different quantum theories say different things! That's what makes the theorem general/interesting: you don't have to say/know what they are exactly to prove that, whatever they are, if locality and no conspiracies are satisfied, you will get statistics that respect the inequality.


Just because you chose the name "no-conspiracy" to describe the condition does not mean its violation implies what is commonly known as "conspiracy". It is something that happens all the time in non-stationary processes. It would have been better to call it a "stationarity" assumption.

Of course I agree that the name doesn't make it so. The truth though is that we chose that name because we think it accurately reflects what the assumption actually amounts to. It's clear you disagree. Incidentally, did you read the whole article? There is some further discussion of this assumption elsewhere, so maybe that will help.


Note: if the same series of λs apply for each run, then the 8 lists of numbers MUST be reducible to 4. Do you agree? We can easily verify this from the experimental data available.

No, I don't agree. What you're saying here doesn't make sense. You're confusing the A's that the experimentalists measure, with the λs that only theorists care about.
 
  • #70
DrChinese said:
I told you that Perfect Correlations are really Simultaneous Perfect Correlations. Each Perfect Correlation defines an EPR element of reality; I hope that is clear. If they are *simultaneously* real, which I say is an assumption but you define as an inference, then you have realism. If it is an assumption, then QED. If it is an inference, then realism is not assumed and you are correct.

But we don't disagree about the definitions of "assumption" or "inference". I've explained how the argument goes several times, so I don't see how you can suggest that my claim (that it's an inference) is somehow a matter of definition. I inferred it, right out in public in front of you. If I made a mistake in that inference, then tell me what the mistake was. Burying your head in the sand won't make the argument go away!


My point is that if in fact spin is contextual, then there cannot be realism. Ergo, the realism inference fails.

The non-contextuality of spin *follows* from the EPR argument, i.e., that too is an *inference*. Maybe you're right at the end of the day that this is false. But if so, that doesn't show the *argument* was invalid -- it shows that one of the premises must have been wrong! This is elementary logic. I say "A --> B". You say, "ah, but B is false, therefore A doesn't --> B". That's not valid reasoning.
 
