Is Bell's Logic Aimed at Decoupling Correlated Outcomes in Quantum Mechanics?

  • Thread starter Gordon Watson
  • Tags: Logic
In summary, the conversation discusses the separation of Bell's logic from his mathematics and the understanding of one in relation to the other. A paper by Bell is referenced, where he suggests decoupling outcomes in order to avoid inequalities. However, his logic is deemed flawed and it is concluded that the implications of Bell's lambda and his logic are not fully understood. The importance of Bell's theorem in the physics community is also questioned.
  • #36
Maaneli said:
But when you are talking about Bell's theorem and what Bell actually said and proved, then you should talk about the definition of locality that he specifically used in his theorem, and not some other more limited definition of "locality".
The point is that Bell's definition is the more limited (i.e. narrow) one, since Bell's definition implies the Bell inequalities which are violated in quantum mechanics, despite the fact that quantum field theory is still "local" under a broader definition of locality. So, presumably physicists wanted some shorthand for Bell's more narrow definition of locality which goes beyond the definition being used when they talk about the locality of QFT, and "local realism" seems to have become the accepted term. If your objection is just about the words being used, rather than physicists actually misunderstanding the logic of Bell's reasoning, then the objection seems kind of pointless; "local realism" may not be the best choice of words, but it's the term that's stuck.
Maaneli said:
BTW, even the definition of locality implied by the equal-time commutation relation in QFT still assumes a notion of realism.
This may be true depending on your definition of "realism", but certainly it's not a form of "realism" that allows us to derive Bell's inequalities. In particular it doesn't say that the universe has an objective state at all times, and that all information about the universe's state can be reduced to some collection of local facts about what's going on at each point in spacetime, with facts about any given point in spacetime being influenced only by facts in the past light cone of that point.
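As a concrete aside on the violation JesseM mentions: for the singlet state, the quantum correlation between spin measurements at angles a and b is E(a, b) = -cos(a - b), and at the standard CHSH settings this pushes the CHSH combination to 2√2, above the bound of 2 obeyed by any theory satisfying Bell's factorizability condition. A minimal sketch (the angle choices are the usual textbook ones, not anything specific to this thread):

```python
import math

def E(a, b):
    # Quantum singlet-state correlation for spin measurements at angles a, b (radians)
    return -math.cos(a - b)

# Standard CHSH settings: Alice measures along a or a2, Bob along b or b2
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```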
 
  • #37
Maaneli said:
This is why I cited Norsen's paper earlier. Norsen methodically goes through various uses in the literature of the phrase 'local realism' by many prominent physicists in the field, and shows that the phrase has no clear meaning, and is certainly not equivalent to Bell's definition of local causality. I will also say that in my personal experience of talking with many quantum opticists (most notably, Joseph Eberly, Alain Aspect, and Pierre Meystre), I have not seen any evidence that they are aware of Bell's definition of local causality, or that they have a sharp definition of what 'realism' means in the phrase 'local realism'.
But don't you think these physicists have a good working understanding of what was entailed by Bell's assumptions about the laws of physics in deriving his inequalities, and are just using "local realism" to describe this set of assumptions, even if they may have some trouble precisely defining which aspects follow from "locality" and which follow from "realism"?
 
  • #38
JesseM said:
The point is that Bell's definition is the more limited (i.e. narrow) one, since Bell's definition implies the Bell inequalities which are violated in quantum mechanics, despite the fact that quantum field theory is still "local" under a broader definition of locality.

Oh, OK, now I see what you meant by 'more limited'.

JesseM said:
So, presumably physicists wanted some shorthand for Bell's more narrow definition of locality which goes beyond the definition being used when they talk about the locality of QFT, and "local realism" seems to have become the accepted term.

No, in all my years in the field, with all my interactions with physicists in the field, and in all the literature I have read, I haven't seen any evidence that that's the case. And I am very close (I used to be more so) to the field of theoretical and experimental quantum optics and AMO physics.

JesseM said:
If your objection is just about the words being used, rather than physicists actually misunderstanding the logic of Bell's reasoning, then the objection seems kind of pointless, "local realism" may not be the best choice of words but it's the term that's stuck.

No, that's not my objection. I think I've made my objection pretty clear in previous posts. But to make it even clearer, my point is that many physicists have actually misunderstood (or never even understood) the logic of Bell's reasoning. I again refer you to the Norsen paper for a discussion of the evidence for this. It is also evidenced by the fact that many physicists think that the experimental violations of the Bell inequalities imply that 'realism' is untenable in quantum theory and/or that they confirm that QM is 'local' but 'non-real'. But you never see Zeilinger or Meystre define what they mean by 'local' as equal-time commutation relations. In fact, Zeilinger is pretty explicit about his personal belief that the violations of the Bell inequalities imply that even the realism in an ontological formulation of QM, such as de Broglie-Bohm theory, is not tenable.

JesseM said:
This may be true depending on your definition of "realism", but certainly it's not a form of "realism" that allows us to derive Bell's inequalities.

It is not the realism in local causality, I agree. It does not refer to facts about local beables. Rather, the notion of realism in the equal-time commutation relation is in the assumption that the field operators encode the statistical distributions of objectively real fields (called observables) that would be observed by an experimenter doing an ensemble of measurements on a quantum system. So the statistical distribution of field observables is objectively real, as is the experimenter and the experimental apparatus that is making the measurement of the spacetime position of the field observable.
 
  • #39
JesseM said:
But don't you think these physicists have a good working understanding of what was entailed by Bell's assumptions about the laws of physics in deriving his inequalities, and are just using "local realism" to describe this set of assumptions, even if they may have some trouble precisely defining which aspects follow from "locality" and which follow from "realism"?

From my experience, they often don't have a very good understanding of Bell's assumptions, although they are certainly capable of working with the formal mathematical manipulations and assumptions that lead to Bell's inequality (and the CHSH inequality). And yes, they do use the phrase 'local realism' to refer to Bell's assumptions (whatever they think they are), but the point is that it is simply not accurate, and it has led to a lot of misunderstanding and confusion about what Bell actually proved.
 
  • #40
Maaneli said:
From my experience, they often don't have a very good understanding of Bell's assumptions, although they are certainly capable of working with the formal mathematical manipulations and assumptions that lead to Bell's inequality (and the CHSH inequality). And yes, they do use the phrase 'local realism' to refer to Bell's assumptions (whatever they think they are), but the point is that it is simply not accurate, and it has led to a lot of misunderstanding and confusion about what Bell actually proved.
What's not accurate, though? The complaint that many don't understand Bell's assumption very well may be a reasonable one, but if most of them would say that "local realism" is intended to refer to Bell's assumptions, then any change in their understanding of those assumptions would just change their understanding of the meaning of "local realism", it wouldn't cause them to think there was something inherently inaccurate about using the phrase "local realism" to describe those assumptions. Again I don't really see why this isn't just a semantic complaint, provided you agree most physicists just use the phrase "local realism" as a shorthand for "the type of local theory Bell was assuming" (as opposed to having some definite, clear idea about what is entailed by 'realism' such that a better understanding of Bell's assumptions would force them to conclude either 'I guess Bell's theory isn't necessarily realistic after all, given my understanding of the meaning of that term' or alternately 'I guess Bell's assumptions are more specific than just "locality" + "realism" as I understand those terms'...i.e. already having in mind sufficiently clear definitions of locality and realism such that we could check whether the overlap of these two circles in a Venn diagram would match the class of theories Bell was considering)
 
  • #41
JesseM said:
What's not accurate, though?

Once again, please have a read of Norsen's paper, if you want an answer to this. Also have a look at my post #25 to DrChinese.
JesseM said:
The complaint that many don't understand Bell's assumption very well may be a reasonable one, but if most of them would say that "local realism" is intended to refer to Bell's assumptions, then any change in their understanding of those assumptions would just change their understanding of the meaning of "local realism", it wouldn't cause them to think there was something inherently inaccurate about using the phrase "local realism" to describe those assumptions.

Er ... or, if they have a more accurate understanding of Bell's assumptions, they might be convinced to abandon the inaccurate phrase 'local realism', and to start using the more accurate 'local causality'.
JesseM said:
Again I don't really see why this isn't just a semantic complaint, provided you agree most physicists just use the phrase "local realism" as a shorthand for "the type of local theory Bell was assuming" (as opposed to having some definite, clear idea about what is entailed by 'realism' such that a better understanding of Bell's assumptions would force them to conclude either 'I guess Bell's theory isn't necessarily realistic after all, given my understanding of the meaning of that term' or alternately 'I guess Bell's assumptions are more specific than just "locality" + "realism" as I understand those terms'...i.e. already having in mind sufficiently clear definitions of locality and realism such that we could check whether the overlap of these two circles in a Venn diagram would match the class of theories Bell was considering)

This sentence is difficult to read and understand due to its length and odd grammar. But let me just emphasize that it is not a semantic complaint. The whole point (for the Nth time) is that Bell's definition of locality already contains within it a precise notion of realism. To use the phrase 'local realism' implies that there is some other notion of realism in Bell's theorem, over and above the notion of realism that is already implicit in Bell's definition of locality. But (again, for the Nth time) as Norsen points out, there is no such additional notion of realism. And this phrase 'local realism' has led to considerable confusion about what Bell actually assumed in his theorem, and what the violations of the Bell inequalities actually imply. BTW, this isn't just my opinion or just the opinion of Norsen - this is understood by the majority of physicists in the foundations of QM community, and philosophers of physics in the philosophy of physics community.
 
  • #42
Maaneli said:
Once again, please have a read of Norsen's paper, if you want an answer to this. Also have a look at my post #25 to DrChinese.
Well, first I'd like some simple summary that reassures me this isn't just a semantic complaint combined with an observation that a lot of physicists misunderstand the nature of Bell's conditions.
Maaneli said:
Er ... or, if they have a more accurate understanding of Bell's assumptions, they might be convinced to abandon the inaccurate phrase 'local realism', and to start using the more accurate 'local causality'.
But unless "realism" has some set preexisting meaning in physics, how can it be "inaccurate" to define "local realism" to mean the same thing as "the type of local theory Bell was assuming"? A definition cannot be "inaccurate"; that doesn't make any sense. To pick a silly example, the phrase "Bugs Bunnyism" has no preexisting technical meaning in physics, so there would be nothing inaccurate about defining the phrase "local Bugs Bunnyism" to mean "the type of local theory Bell was assuming", and as long as everyone agreed to use that phrase consistently, there'd be no problem.

Maybe the issue is that you have some implicit sense of what "realism" means drawn from everyday language. But unless physicists have some clear technical definition of "realism" I don't see how it makes sense to call their use of the phrase "local realism" inaccurate.
Maaneli said:
This sentence is difficult to read and understand due to its length and odd grammar.
It is long and perhaps therefore hard to understand, but I think the grammar is fine...just break it up into parts:

1. Again I don't really see why this isn't just a semantic complaint, provided you agree most physicists just use the phrase "local realism" as a shorthand for "the type of local theory Bell was assuming"

2. as opposed to having some definite, clear idea about what is entailed by 'realism' such that a better understanding of Bell's assumptions would force them to conclude either ('I guess Bell's theory isn't necessarily realistic after all, given my understanding of the meaning of that term') or alternately ('I guess Bell's assumptions are more specific than just "locality" + "realism" as I understand those terms')

3. i.e. already having in mind sufficiently clear definitions of locality and realism such that we could check whether the overlap of these two circles in a Venn diagram would match the class of theories Bell was considering
Maaneli said:
But let me just emphasize that it is not a semantic complaint. The whole point (for the Nth time) is that Bell's definition of locality already contains within it a precise notion of realism. To use the phrase 'local realism' implies that there is some other notion of realism in Bell's theorem, over and above the notion of realism that is already implicit in Bell's definition of locality.
Not necessarily. It could just imply there are definitions of locality other than Bell's, and "local realism" is used to distinguish Bell's version from other versions. Also, you talk as though "realism" has some widely-understood meaning outside of the phrase "local realism", so that it is meaningful to say there is a "notion of realism" in Bell's definition. What is this meaning? If it has no clear meaning outside of the phrase "local realism", then saying "the notion of realism that is already implicit in Bell's definition of locality" would be just as meaningless as saying "the notion of Bugs Bunnyism that is already implicit in Bell's definition of locality" (in my hypothetical where physicists choose to use 'local Bugs Bunnyism' to refer to the type of local theory Bell assumed)
 
  • #43
JesseM said:
On the other hand, if they picked a large population and then used a truly random method

Could you give me an example of a truly random method that could be used to select patients if those experimenters knew nothing about any of the factors that affected the treatment? The only thing they know about the patients is that they have or do not have the disease! And who said anything about socioeconomic status? The omniscient being already knows that the only factor hidden from the experimenters that matters is the size of the kidney stones.

JesseM said:
but if the subjects were assigned randomly to group A or B by some process like a random number generator on a computer, there should be no correlation between P(computer algorithm assigns subject to treatment A) and P(subject has large kidney stones), so any difference in frequency in kidney stones between the two groups would be a matter of random statistical fluctuation, and such differences would be less and less likely the larger a population size was used.
But they already did that. Their groups were already randomly selected according to them; they could very well have done it by use of a random number generator. It matters not a bit. What makes you think a computer will do any better without taking into consideration the size of the stones? To obtain comparable results, you must have exactly the same proportion of people in each group with large or small stones! If the experimenters knew that, they could just make sure they have the same proportions and their results would agree with the omniscient being. But they don't know that! Try to calculate the probability that a random number generator will produce two groups of patients with exactly the same proportion of large-stone patients in each. And mind you, we are only dealing with a single hidden element of reality in this case, let alone the EPR case in which there could be many more.

JesseM said:
The relevance of your example to what we were debating is unclear.
Oh, it is very clear. You said Bell is calculating from the perspective of an omniscient being. But his inequalities are compared with what is obtained in actual experiments. I just gave you an example in which the results of the omniscient being were at odds with those of the experimenters, without any spooky action involved. The relevance is the fact that without knowing all the parameters of all the hidden elements of reality known and considered by the omniscient being, the experimenters cannot possibly obtain a fair sample which can be compared to the inequalities of the omniscient being. There's no escape here.

JesseM said:
So, let me modify your example with some different numbers.
That is your usual response, modifying my numbers so that it is no longer the example I presented. You think I chose those numbers at random? Those specific numbers were chosen to illustrate the point. Do you deny the fact that in the numbers I gave, the omniscient being concludes that Treatment A is more effective, while the experimenters conclude that treatment B is more effective? Surely you do not deny this.

Your only relevant response is that maybe the groups were not really random. So I ask you to present a mechanism by which they can ensure that the groups are truly random if they do not know all the hidden factors. If you still think a computer random number generator can do the job, after everything I have mentioned above, say so. Hopefully now you understand why I asked you earlier to point to an Aspect-type experiment in which all the values of all possible hidden elements of reality were realized fairly.

JesseM said:
When you say "fair sample", "fair" in what respect? If your 350+350=700 people were randomly sampled from the set of all people receiving treatment A and treatment B, then this is a fair sample
What are you talking about? Did you read what I wrote? The experimenters randomly selected two groups of people with the disease and THEN gave treatment A to one and treatment B to the other, YET their results were at odds with those of the omniscient being! And you call that a fair sample. Clearly, looking at their samples from the perspective of the omniscient being, it is definitely not fair. So I don't know what you are talking about here.

JesseM said:
The problem of Simpson's paradox is that this marginal positive correlation between B and recovery does not tell you anything about a causal relation between these variables
So? The example I presented clearly shows that the results obtained by the experimenters are at odds with those of the omniscient being. Do you deny that? It also clearly shows that the sampling by the experimenters is unfair with respect to the hidden elements of reality at play. Do you deny that?

JesseM said:
('correlation is not causation')
Tell that to Bell. He clearly defined "local causality" as lack of logical dependence.

JesseM said:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables
You are confused. Bell clearly states that logical dependence between A and (B or b), is not allowed nor is logical dependence between B and (a or A) allowed in his definition of "local causality".

In Bell's equation (2), does he not integrate over all possible hidden elements of reality? Do you expect that the LHS of his equation (2) in his original paper will have the same value if the integral were not over the full set of possible realizations of hidden elements of reality? I need a yes or no answer here. For example, say n=10 (10 different possible λs) and Bell's integral was from λ1 to λ10. Do you expect an integral that is calculated only from λ1 to λ9 to give you the same result as Bell's integral? Please answer with a simple yes or no.

So then if in an experiment, only λ1 to λ9 were ever realized, will the observed frequencies obey Bell's inequalities? Yes or No please.
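The truncation question above can be made concrete with a toy discrete version of the integral over λ. The outcome values below are hypothetical, chosen only so that the dropped term differs from the mean; the point is just that averaging over λ1..λ9 generally does not equal averaging over λ1..λ10:

```python
# Ten equally likely hidden-variable values, each deterministically fixing
# the outcome pair (A, B) with A, B in {-1, +1}. Hypothetical values.
outcomes = [(1, 1)] * 5 + [(-1, -1)] * 4 + [(1, -1)]

def correlation(pairs):
    # Average of A*B over the hidden-variable values actually realized
    return sum(A * B for A, B in pairs) / len(pairs)

full = correlation(outcomes)           # "integral" over lambda_1 .. lambda_10
truncated = correlation(outcomes[:9])  # lambda_10 never realized
print(full, truncated)  # 0.8 1.0 -- omitting one lambda changes the average
```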

How can an Aspect-type experimenter be expected to ensure a fair sample, one that represents all possible λs, without knowing the details of what λ is in the first place?! Is this too difficult for you to understand?
 
  • #44
JesseM said:
Well, first I'd like some simple summary that reassures me this isn't just a semantic complaint combined with an observation that a lot of physicists misunderstand the nature of Bell's conditions.

Then outside of our current exchange, I'd suggest at least reading the abstract of Norsen's paper.
JesseM said:
But unless "realism" has some set preexisting meaning in physics, how can it be "inaccurate" to define "local realism" to mean the same thing as "the type of local theory Bell was assuming"? A definition cannot be "inaccurate"; that doesn't make any sense. To pick a silly example, the phrase "Bugs Bunnyism" has no preexisting technical meaning in physics, so there would be nothing inaccurate about defining the phrase "local Bugs Bunnyism" to mean "the type of local theory Bell was assuming", and as long as everyone agreed to use that phrase consistently, there'd be no problem.

You are confused. The point is that 'local realism' is purported to refer to Bell's assumptions (namely, 'locality' and 'realism'). People intend 'locality' to refer to Bell's definition of locality, and 'realism' to refer to something else that Bell assumed somewhere in his theorem, and which is distinct from Bell's locality. The suggested implication is that the experimental violation of the Bell inequalities could imply that QM is 'non-real', but still respects Bell's definition of locality. Now, 'local realism' is not an accurate characterization of what Bell assumed in his theorem because (A) there is no additional assumption of realism that is distinct from the notion of realism already used in Bell's definition of locality (and this is the point that Norsen also argues), and (B) the phrase ignores another condition that Bell did in fact assume (in addition to and distinct from his definition of locality), namely, causality. Hence, an accurate characterization of the assumptions in Bell's theorem is local causality, which is the phrase that Bell himself used.
JesseM said:
Maybe the issue is that you have some implicit sense of what "realism" means drawn from everyday language. But unless physicists have some clear technical definition of "realism" I don't see how it makes sense to call their use of the phrase "local realism" inaccurate.

See above.
JesseM said:
Not necessarily. It could just imply there are definitions of locality other than Bell's, and "local realism" is used to distinguish Bell's version from other versions.

No, that doesn't seem to be the intended use of 'local realism'. Again, see Norsen's paper. But even if it was the intended use, then it would be much more accurate to instead use the phrase that Bell himself coined, namely, 'local causality' or even 'Bell locality' (another, less popular phrasing sometimes found in the literature), to distinguish from, say, the locality in the equal-time commutation relations of QFT.
JesseM said:
Also, you talk as though "realism" has some widely-understood meaning outside of the phrase "local realism", so that it is meaningful to say there is a "notion of realism" in Bell's definition. What is this meaning?

Bell's notion of realism involves the use of 'beables', and in particular, 'local beables'. In post #25, I show specifically how Bell used local beables in his definition of a locally causal theory.
 
  • #45
billschnieder said:
You are confused. Bell clearly states that logical dependence between A and (B or b), is not allowed nor is logical dependence between B and (a or A) allowed in his definition of "local causality".

Yep. Otherwise, the joint probability expression for outcome values A and B would not be factorizable, and the foil theory he assumes would not be locally causal.
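For readers following along, the factorizability being discussed can be illustrated with a toy deterministic model in which each outcome depends only on its local setting and the shared hidden variable λ. The particular response functions below are hypothetical, chosen only for illustration; at the standard CHSH angles such a model reaches, but cannot exceed, the bound of 2:

```python
import math

def local_E(a, b, n=100_000):
    # Factorizable toy model: A depends only on (a, lam), B only on (b, lam),
    # with the shared hidden variable lam swept over a uniform grid on [0, 2*pi)
    total = 0
    for k in range(n):
        lam = (k + 0.5) * 2 * math.pi / n
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1
        total += A * B
    return total / n

# Standard CHSH settings
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(local_E(a, b) - local_E(a, b2) + local_E(a2, b) + local_E(a2, b2))
print(S)  # stays at (or below) the CHSH bound of 2
```

Note the contrast with the quantum correlation -cos(a - b), which gives 2√2 at the same settings.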
 
  • #46
Maaneli said:
Sorry, I can't let you off the hook that easily. :smile:

You keep making this claim that Bell's theorem refutes 'local realism'; and you will most likely continue to do so if no one continues to challenge you on it. Why are you all of a sudden unwilling to debate this issue and address the evidence I provided? When I initially asserted (without evidence from Bell) that Bell did not invoke any concept of 'local realism', you 'strongly disagreed' with me, and even claimed to point out exactly where Bell smuggled in 'the' realism assumption. Now that your claim has been challenged directly with evidence from Bell's own writings, I think the least you can do (not just for me, but for other people reading this thread) is to try and defend your claim. Or, if you feel that your claim is no longer tenable, then why not just concede that Bell did not talk at all of 'local realism', but rather... local causality?

I would be happy to debate any element of Norsen's paper or your ideas about Bell's (2). Just didn't want to unnecessarily head off in that direction.

Local causality or local realism? Hmmm. I dunno, which is EPR about? Because it seems like it is about local realism to me: "...when the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality..." or perhaps: "On this point of view, since either one or the other, but not both simultaneously, of the quantities P and Q can be predicted, they are not simultaneously real. This makes the reality of P and Q depend upon the process of measurement carried out on the first system, which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this."

So I would say that in EPR, there is clearly a discussion of the simultaneous reality of P and Q (a and b in Bell). In fact, what's the difference between local realism and local causality? I guess the difference is in one's definition. In my mind, I might take Bell's (2) as a definition of local causality. And then Bell (14) as a statement of counterfactual definiteness (CD) or alternately realism.

Because I don't think there is any question that Bell's intent was to address the concept of EPR and show a fallacy regarding the completeness conclusion (i.e. EPR's conjecture that a more complete specification of the wave function is possible). I would hope we agree on this point. Assuming we do, I would say that (2) is not enough to achieve Bell's result. That in fact (14) is required and without it, you simply return to where things were after EPR.
 
  • #47
Here is a typical quote from Anton Zeilinger (1999), who is certainly one of the foremost authorities on this subject:

"Second, a most important development was due to John Bell (1964) who continued the EPR line of reasoning and demonstrated that a contradiction arises between the EPR assumptions and quantum physics. The most essential assumptions are realism and locality. This contradiction is called Bell’s theorem."

Or perhaps this from Aspect (1999):

"The experimental violation of Bell’s inequalities confirms that a pair of entangled photons separated by hundreds of metres must be considered a single non-separable object — it is impossible to assign local physical reality to each photon... Bell’s theorem changed the nature of the debate. In a simple and illuminating paper, Bell proved that Einstein’s point of view (local realism) leads to algebraic predictions (the celebrated Bell’s inequality) that are contradicted by the quantum-mechanical predictions for an EPR gedanken experiment involving several polarizer orientations..."

Einstein's local realism was of course: a) there is no spooky action at a distance; and b) the moon is there even when no one is looking. That being 2 separate assumptions.

Now I guess Maaneli might say that this does not PROVE that a, b and c are required for these conclusions. However, as I have said many times before, all I need to see is a Bell proof that does not involve the assumption of 3 simultaneous elements of reality. Then I will agree with Norsen. But until then, you will note that this is in fact introduced after Bell (14) and is explicit. And of course, Norsen has not provided such a derivation in his work. But it should be clear from the above that the general view is that there are 2 assumptions - locality and realism - required for the Bell result.
 
  • #48
Hello Maaneli and Dr Chinese. I see the old argument continues! :-)
 
  • #49
Coldcall said:
Hello Maaneli and Dr Chinese. I see the old argument continues! :-)

:biggrin:
 
  • #50
billschnieder said:
Could you give me an example of a truly random method that could be used to select patients if those experiments knew nothing about any of the factors that affected the treatment?
I already gave you an example--just get a bunch of people who haven't received any treatment yet to volunteer for a study, then have a computer with a random number generator randomly assign each person to receive treatment A or treatment B. Do you agree that P(given person will be assigned by random number generator to receive treatment A) should be uncorrelated with P(given person will have some other background factor such as high socioeconomic status or large kidney stones)? If so, then the only reason group A might contain more people with a given factor (like large kidney stones) than group B would be a random statistical fluctuation, and the likelihood of any statistically significant difference in these background factors between group A and group B would get smaller and smaller the larger your sample size.
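The random-assignment point can be sketched in a few lines. The 50% large-stone rate and the sample sizes below are arbitrary illustrative assumptions; the point is only that the large-stone fraction in groups A and B converges as the sample grows:

```python
import random

random.seed(1)

def assignment_imbalance(n):
    # Assign n untreated volunteers to A or B by coin flip; each volunteer
    # independently has large kidney stones with (assumed) probability 0.5
    counts = {"A": [0, 0], "B": [0, 0]}  # [group size, large-stone count]
    for _ in range(n):
        group = random.choice("AB")
        large = random.random() < 0.5
        counts[group][0] += 1
        counts[group][1] += large
    frac = {g: large_n / total for g, (total, large_n) in counts.items()}
    return abs(frac["A"] - frac["B"])

small_run = assignment_imbalance(100)      # noticeable random fluctuation
large_run = assignment_imbalance(100_000)  # fluctuation shrinks with n
print(small_run, large_run)
```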
billschnieder said:
But they already did that. Their groups were already randomly selected according to them, they could very well have done it by use of a random number generator.
In the actual version of this study they weren't randomly selected. See the Simpson's paradox Wikipedia page, where I think you got this example from (unless it also appears in other sources):
The sizes of the groups, which are combined when the lurking variable is ignored, are very different. Doctors tend to give the severe cases (large stones) the better treatment (A), and the milder cases (small stones) the inferior treatment (B). Therefore, the totals are dominated by groups three and two, and not by the two much smaller groups one and four.
In other words, they were sampling a group that had already been assigned A or B by their doctors, and the likelihood that the doctor would assign them A was affected by the severity of their case, which was in turn affected by the size of their stones. So in this case, P(given person will be assigned by doctor to receive treatment A) is correlated with P(given person will have background factor of large kidney stones). If the subjects were volunteers for a study who had not received any treatment, and their treatment was randomly assigned by a random number generator, then we expect P(given person will be assigned by the random number generator to receive treatment A) to be uncorrelated with P(given person will have background factor of large kidney stones). Of course the probability of an event differs from the frequency over a finite number of trials--if two people are flipping fair coins, we expect P(person #1 gets heads) to be uncorrelated with P(person #2 gets heads), i.e. P(person #1 gets heads, person #2 gets heads)=P(person #1 gets heads)*P(person #2 gets heads), but if there are only 4 trials the results might be HH, HH, HT, TT, in which case F(person #1 gets heads, person #2 gets heads) > F(person #1 gets heads)*F(person #2 gets heads), where F represents the frequency on those 4 trials. This is what I meant by a random statistical fluctuation: there can be a correlation in empirical frequencies even in situations where the probabilities should be uncorrelated. But again, the likelihood of a statistically significant correlation in frequencies in a scenario where the probabilities should be uncorrelated goes down the larger your sample size is.
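The doctor-assigned version produces exactly the reversal under discussion. Using the classic kidney-stone counts from the Simpson's paradox Wikipedia article mentioned above, treatment A wins within each stone-size subgroup, yet B wins overall:

```python
# Recovery counts (recovered, total) from the classic kidney-stone data
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(treatment, size=None):
    # Recovery rate for a treatment, optionally restricted to one stone size
    items = [v for (t, s), v in data.items()
             if t == treatment and (size is None or s == size)]
    recovered = sum(r for r, _ in items)
    total = sum(n for _, n in items)
    return recovered / total

# A wins within each subgroup...
print(rate("A", "small"), rate("B", "small"))  # ~0.93 vs ~0.87
print(rate("A", "large"), rate("B", "large"))  # ~0.73 vs ~0.69
# ...yet B wins overall, because doctors gave A the harder (large-stone) cases
print(rate("A"), rate("B"))  # 0.78 vs ~0.83
```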
billschnieder said:
What makes you think a computer will do any better, without taking into consideration the size of the stones.
Because there is no causal reason that the random number generator's likelihood of assigning a person to group A should be influenced by the size of someone's kidney stones (unlike with the case where doctors were deciding the treatment). So if we're using a random number generator to assign treatment, in the limit as the sample size goes to infinity, the fraction of people with large kidney stones in group A should approach equality with the fraction of people with large kidney stones in group B (and 'probability' is defined in terms of the frequency in the limit as the sample size goes to infinity, so this is why the probabilities are uncorrelated). With a finite sample size you might have a difference in the fractions for each group, but it could only be due to random statistical fluctuation.
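As an illustration (my own toy simulation, with the 357/700 small-stone base rate from the figures discussed below), one can let a random number generator assign patients to groups and watch the small-stone fractions in the two groups converge as the sample grows:

```python
import random

# Hypothetical sketch: assign patients (small-stone base rate 357/700) to
# treatments A/B purely at random, and measure how far apart the small-stone
# fractions of the two groups end up.
random.seed(0)

def small_stone_gap(n_patients):
    """|fraction of small stones in group A - fraction in group B|."""
    counts = {"A": [0, 0], "B": [0, 0]}      # [small-stone count, total]
    for _ in range(n_patients):
        small = random.random() < 357 / 700  # stone size, fixed base rate
        group = random.choice("AB")          # assignment ignores stone size
        counts[group][0] += small
        counts[group][1] += 1
    return abs(counts["A"][0] / counts["A"][1] - counts["B"][0] / counts["B"][1])

# The gap shrinks roughly like 1/sqrt(sample size):
for n in (700, 70_000):
    print(n, round(small_stone_gap(n), 3))
```

With 700 patients the gap is typically a few percent; with 70,000 it is a fraction of a percent, in line with the law of large numbers.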
billschnieder said:
You said Bell is calculating from the perspective of an omniscient being. But his inequalities are compared with what is obtained in actual experiments. I just gave you an example in which the results of the omniscient being were at odds with those of the experimenters, without any spooky action involved.
No you didn't. This is the key point you seem to be confused about: the marginal correlation between treatment B and recovery observed by the omniscient being is exactly the same as that observed by the experimenters. The omniscient being does not disagree that those who receive treatment B have an 83% chance of recovery, while those who receive treatment A have a 73% chance. All you are pointing out is that the omniscient being knows that this marginal correlation does not indicate a causal relation between treatment B and recovery; the omniscient being knows that this correlation arises because doctors are more likely to assign patients treatment B if they have small kidney stones, and patients with small kidney stones are more likely to recover. (Alternatively, if the patients were assigned to groups randomly and these numbers resulted, the omniscient being would know that the marginal correlation is just due to a random statistical fluctuation that caused the number of patients with small kidney stones to differ significantly between the two groups.)

Similarly, in a local hidden variables theory, an omniscient being knows that marginal correlations between different measurement outcomes don't represent a causal relation between the different measurements, but are in fact explained by the statistics of hidden variables which influence each measurement. But just as above, the omniscient being sees exactly the same marginal correlation between measurements that's seen by the experimenters, so it's perfectly legitimate to use the omniscient being's perspective to derive some statistical rules that would apply to the marginal correlations under the assumption of local realism, then see if the actual statistics for marginal correlations seen by real experimenters obey those rules, and if they don't take that as a falsification of local realism.
billschnieder said:
That is your usual response, modifying my numbers, so that it is no longer the example I presented. You think I chose those numbers at random? Those specific numbers were chosen to illustrate the point. Do you deny the fact that in the numbers I gave, the omniscient being concludes that Treatment A is more effective, while the experimenters conclude that treament B is more effective? Surely you do not deny this.
No, I don't deny this, but when you say "the omniscient being concludes that treatment A is more effective", you are talking about the causal relation between treatment A and recovery, not the marginal correlation between those two variables. Again, the omniscient being agrees completely with the experimentalists about the marginal correlation between the variables, he just doesn't think this demonstrates a causal link. And in Bell's argument, the Bell inequalities are just statements about the marginal correlations between different measurements, not about causal relations between them. This is why your analogy makes absolutely no sense as a criticism of Bell's argument. The omniscient being who knows about the hidden variables in a local realist universe should say exactly the same thing about marginal correlations between measurement outcomes as is seen by hypothetical experimenters in the same local realist universe. Do you disagree?
billschnieder said:
Your only relevant response is that maybe the groups were not really random. So I ask you to present a mechanism by which they can ensure that the groups are truly random if they do not know all the hidden factors.
If by "random" you mean the statistics seen in our small group accurately match the statistics seen in a larger population, with a sample size of 700 they most likely already do; if doctors have a higher probability of assigning treatment B to those with small kidney stones in our sample of 700, then doctors in the larger population probably do so at about the same rate. So if we looked at the entire population of patients receiving either treatment A or treatment B, the marginal correlation with recovery would likely be about the same: about 83% of all people receiving treatment B would recover, and about 73% of all people receiving treatment A would recover.

But more likely, by "random" you mean that all other variables like large vs. small kidney stones are evenly distributed between the population receiving treatment A and the population receiving treatment B, so that any difference in recovery indicates a causal relation between treatment and recovery rate. If so, then again, my answer is twofold:

1. If you used a random number generator on a computer to assign patients treatment, in the limit as the number of patients approaches infinity, all other variables would approach being evenly distributed in the two groups (i.e. the probabilities are uncorrelated), so any difference in a finite-sized group would just be a random statistical fluctuation, and the larger the sample size the smaller the likelihood of statistically significant fluctuations (see law of large numbers)

2. In any case this issue is completely irrelevant to Bell's argument, because Bell is only looking at the marginal correlations between measurement outcomes themselves, he's not claiming that these marginal correlations indicate a causal relation between the outcomes (quite the opposite in fact). In your example there is no dispute between the omniscient being and the experimentalists about the marginal correlation between receiving treatment B and recovering, it's just that if the experimentalists foolishly conclude this indicates a causal link between B and recovery, the omniscient being knows they're wrong.
billschnieder said:
What are you talking about, did you read what I wrote? The experimenters randomly selected two groups of people with the disease and THEN gave treament A to one and treament B to the other, YET their results were at odds with those of the omniscient being!
OK sorry, once I looked up the wikipedia page on Simpson's paradox I assumed you were just taking the example from there; I neglected to reread your post and see that you specified that "the experimenters select the two groups according to their best understanding of what may be random". In this case, if they are using some random method like a random number generator on a computer, then in the limit as the sample size approaches infinity, the percentage of patients with small kidney stones in group A should approach perfect equality with the percentage in group B, and the fact that the percentages are very different in the actual groups of 350 must be a very unlikely statistical fluctuation, like having two groups of 350 coin flips where the first had 70 heads while the other had 300 heads. Do you disagree?

I suppose it could be true that the marginal correlations seen in actual Aspect-type experiments so far could differ wildly from the marginal correlations that an omniscient being would expect in the limit as the number of particle pairs sampled went to infinity. Still, this too should only be due to statistical fluctuations, and the law of large numbers says that the more trials you do, the more probable it is that your measured statistics will be in close agreement with the expected probabilities under the same experimental conditions, with "probability" defined in terms of the statistics that would be seen in the limit as the number of trials approaches infinity. Again, do you disagree with this?
billschnieder said:
So? The example I presented clearly show you that the results obtained by the experimenters is at odds with that of the omniscient being. Do you deny that? It also clearly shows that the sampling by the experimenters is unfair with respect to the hidden elements of reality at play.Do you deny that?
If it's a part of your assumption that the patients are assigned randomly to different treatments, then I agree the marginal correlation in frequencies is very different from the marginal correlation in probabilities, i.e. the marginal correlation in frequencies that would be expected in the limit as the size of the sample went to infinity (with the experiment performed in exactly the same way in the larger sample, including the same random method of assigning patients treatments). But this sizeable difference would just be due to a freak statistical fluctuation--in fact we can calculate the odds. If we have 700 people and 357 have small kidney stones, and they are each randomly assigned to a group by a process whose probability of assigning someone to a group is independent of whether they have small kidney stones or not, then we can use the hypergeometric distribution to calculate the probability that a group of 350 would contain 87 or fewer with small stones, or 270 or more. Using the calculator http://stattrek.com/Tables/Hypergeometric.aspx , with population size=700, sample size=350, and number of successes in population=357, you can see that if you plug in number of successes in sample=87, the probability of getting that many or fewer is 1.77*10^-45, just slightly higher than the probability of getting exactly that many, which is 1.60013*10^-45; similarly, if number of successes in sample=270, the probability of getting exactly that many is also 1.60013*10^-45 (the calculator breaks down in calculating the probability of getting that many or more, but by symmetry it should also be 1.77*10^-45). So under the assumption that the patients were assigned treatment by a random process whose probability of assigning A vs. B is in no way influenced by the size of a patient's kidney stones, you can see that the numbers in your example represent an astronomically unlikely statistical fluctuation, and if the experiment were repeated with another group of 700 it's extremely probable the observed statistics would be a lot closer to the correct probabilities known by the omniscient being (and the law of large numbers says that the more times you repeat the experiment, the less likely a significant difference between true probabilities and observed statistics becomes).
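For anyone who wants to check these figures without the online calculator, the hypergeometric probabilities can be computed exactly in a few lines of Python (this is just my own sketch of the same calculation):

```python
from math import comb

# Hypergeometric check: population of 700, of whom 357 have small stones,
# with 350 drawn at random into one group.
N, K, n = 700, 357, 350

def hyper_pmf(k):
    """P(exactly k small-stone patients land in the 350-person group)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

p_87 = hyper_pmf(87)                                   # ~1.6e-45, as quoted
p_le_87 = sum(hyper_pmf(k) for k in range(88))          # P(X <= 87)
p_ge_270 = sum(hyper_pmf(k) for k in range(270, n + 1)) # P(X >= 270)
print(p_87, p_le_87, p_ge_270)
```

By the symmetry of the two 350-person groups (the other group gets 357 minus X small stones), the two tails P(X ≤ 87) and P(X ≥ 270) come out identical.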
JesseM said:
('correlation is not causation')
billschnieder said:
Tell that to Bell. He clearly defined "local causality" as lack of logical dependence.
No, he didn't define it as a lack of logical dependence in the marginal correlations between measurement outcomes, only in the correlations conditioned on values of λ. The meaning of "correlation is not causation" is that marginal correlations don't indicate causal dependence, and Bell didn't say they should, nor did he say that a lack of causal influence between measurement outcomes would mean a lack of marginal correlations between them.
JesseM said:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables
billschnieder said:
You are confused. Bell clearly states that logical dependence between A and (B or b), is not allowed nor is logical dependence between B and (a or A) allowed in his definition of "local causality".
You're the one who's confused here. In Bell's example there clearly can be a marginal correlation (logical dependence) between A and B; in fact Bell's original paper dealt with the simplest case where if you just looked at the measurement results when both experimenters chose the same detector setting, there was a perfect anticorrelation between the results (read the first paragraph of the 'Formulation' section). Bell is just saying that the correlation disappears when you condition on any specific value of the variable λ.
 
Last edited by a moderator:
  • #51
(continued from previous post)
billschnieder said:
In Bell's equation (2) does he not integrate over all possible hidden elements of reality? Do you expect that the LHS of his equation (2) in his original paper will have the same value if the integral was not over the full set of possible realizations of hidden elements of reality? I need a yes or no answer here.
Yes, of course.
billschnieder said:
For example say n=10 (10 different possible λs) and Bells integral was from λ1 to λ10. Do you expect an integral that is calculated only for from λ1 to λ9 to give you the same result as Bell's integral? Please answer with a simple yes or no.
No, a partial integral wouldn't give the same results.
billschnieder said:
So then if in an experiment, only λ1 to λ9 were ever realized, will the observed frequencies obey Bell's inequalities? Yes or No please.
Simple yes or no is not possible here; there is some probability the actual statistics on a finite number of trials would obey Bell's inequalities, and some probability they wouldn't, and the law of large numbers says the more trials you do, the less likely it is your statistics will differ significantly from the ideal statistics that would be seen given an infinite number of trials (so the less likely a violation of Bell's inequalities would become in a local realist universe).

I'm fairly certain that the rate at which the likelihood of significant statistical fluctuations drops should not depend on the number of possible values of λ in the integral. For example, suppose you are doing the experiment in two simulated universes, one where there are only 10 possible states for λ and one where there are 10,000 possible states for λ. If you want to figure out the number N of trials needed so that there's only a 5% chance your observed statistics will differ from the true probabilities by more than one sigma, it should not be true that N in the second simulated universe is 1000 times bigger than N in the first! In fact, despite the thousandfold difference in possible values for λ, I'd expect N to be exactly the same in both cases. Would you disagree?

To see why, remember that the experimenters are not directly measuring the value of λ on each trial, but are instead just measuring the value of some other variable which can only take two possible values, and which value it takes depends on the value of λ. So, consider a fairly simple simulated analogue of this type of situation. Suppose I am running a computer program that simulates the tossing of a fair coin--each time I press the return key, the output is either "T" or "H", with a 50% chance of each. But suppose the programmer has perversely written an over-complicated program to do this. First, the program randomly generates a number from 1 to 1000000 (with equal probabilities of each), and each possible value is associated with some specific value of an internal variable λ; for example, it might be that if the number is 1-20 that corresponds to λ=1, while if the number is 21-250 that corresponds to λ=2 (so λ can have different probabilities of taking different values), and so forth up to some maximum λ=n. Then each possible value of λ is linked in the program to some value of another variable F, which can take only two values, 0 and 1; for example λ=1 might be linked to F=1, λ=2 might be linked to F=1, λ=3 might be linked to F=0, λ=4 might be linked to F=1, etc. Finally, on any trial where F=0, the program returns the result "H", and on any trial where F=1, the program returns the result "T". Suppose the probabilities of each λ, along with the value of F each one is linked to, are chosen such that if you take [sum over i from 1 to n] P(λ=i)*(value of F associated with λ=i), the result is exactly 0.5. Then despite the fact that there may be a very large number of possible values of λ, each with its own probability, this means that in the end the probability of seeing "H" on a given trial is 0.5, and the probability of seeing "T" on a given trial is also 0.5.
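Here's a rough Python sketch of the two programs I'm describing (the number of λ values and the bucket sizes are arbitrary choices of mine; for simplicity every λ gets an equal share of the numbers 1..1000000, though the argument doesn't depend on that, and the mapping is arranged so the overall probability of heads is exactly 0.5):

```python
import random

# The "perverse" coin program: a hidden variable lam with many values, each
# mapped to F in {0, 1}, arranged so that P(H) is exactly 0.5 overall.
# The lam layer is completely invisible in the output.
random.seed(42)

N_LAMBDA = 1000    # number of hidden-variable values (arbitrary choice)
# Half the lam values map to F=0 ("H"), half to F=1 ("T").
f_of_lambda = [0 if i < N_LAMBDA // 2 else 1 for i in range(N_LAMBDA)]

def perverse_flip():
    r = random.randrange(1_000_000)    # the 1..1000000 draw
    lam = r * N_LAMBDA // 1_000_000    # which lam bucket the draw falls in
    return "H" if f_of_lambda[lam] == 0 else "T"

def simple_flip():
    # My friend's program: a direct 50/50 draw with no hidden layer.
    return "H" if random.randrange(2) == 0 else "T"

n = 100_000
perverse_heads = sum(perverse_flip() == "H" for _ in range(n)) / n
simple_heads = sum(simple_flip() == "H" for _ in range(n)) / n
print(round(perverse_heads, 3), round(simple_heads, 3))   # both near 0.5
```

The observable statistics of the two programs are indistinguishable, no matter how many internal λ values the first one has.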

Now suppose that my friend is also using a coin-flipping program, where the programmer picked a much simpler design in which the computer's random number generator picks a digit from 1 to 2, and if it's 1 the program returns the output "H" and if it's 2 it returns the output "T". Despite the differences in the internal workings of our two programs, there should be no difference in the probability either of us will see some particular statistics on a small number of trials! For example, if either of us did a set of 30 trials, the probability that we'd get 20 or more heads would be determined by the binomial distribution, which in this case says there is only a 0.049 chance of getting 20 or more heads (see the calculator http://stattrek.com/Tables/Binomial.aspx). Do you agree that in this example, the more complex internal set of hidden variables in my program makes no difference to the statistics of observable results, given that both of us can see the same two possible results on each trial, with the same probability of H vs. T in both cases?
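The 0.049 figure can also be checked exactly without the calculator; here's a short sketch:

```python
from math import comb

# Exact binomial tail: probability of 20 or more heads in 30 fair flips.
n = 30
p_20_or_more = sum(comb(n, k) for k in range(20, n + 1)) / 2**n
print(round(p_20_or_more, 3))   # 0.049
```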

For a somewhat more formal argument, just look at http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter8.pdf, particularly the equation that appears on p. 3 after the sentence that starts "By Chebyshev's inequality ..." If you examine the equation and the definitions of the terms above it, you can see that if we look at the average value of some random variable X after n trials (the [tex]S_n / n[/tex] part), the probability that it will differ from the expectation value [tex]\mu[/tex] by an amount greater than or equal to [tex]\epsilon[/tex] must be smaller than or equal to [tex]\sigma^2 / n\epsilon^2[/tex], where [tex]\sigma^2[/tex] is the variance of the original random variable X. And both the expectation value for X and the variance of X depend only on the probability that X takes its different possible values (like the variable F in the coin example, which has a 0.5 chance of taking F=0 and a 0.5 chance of taking F=1); it doesn't matter if the value of X on each trial is itself determined by the value of some other variable λ which can take a huge number of possible values.
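As a quick sanity check of the Chebyshev bound (my own simulation, with arbitrarily chosen n and ε):

```python
import random

# Empirical check of Chebyshev: P(|S_n/n - mu| >= eps) <= sigma^2 / (n*eps^2)
# for a fair-coin variable X (mu = 0.5, sigma^2 = 0.25).
random.seed(7)

mu, var = 0.5, 0.25
n, eps = 1000, 0.05
bound = var / (n * eps**2)         # 0.25 / (1000 * 0.0025) = 0.1

trials = 2000
exceed = 0
for _ in range(trials):
    mean = sum(random.randrange(2) for _ in range(n)) / n
    exceed += abs(mean - mu) >= eps
freq = exceed / trials
print(freq, "<=", bound)           # observed rate is well under the bound
```

The bound is loose (the actual exceedance rate here is tiny), but it is all that's needed for the law-of-large-numbers argument, and nothing in it depends on how many λ values sit behind X.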
billschnieder said:
How can an Aspect-type experimenter be expected to ensure a fair sample, one that represents all possible λs, without knowing the details of what λ is in the first place?! Is this too difficult for you to understand.
No more need for him to "represent all possible λs" than there is in the coin-flipping example. Even if the program has 3000 possible values of λ (determined by the value of the random number from 1 to 1000000), as long as the total probability of getting result "H" is 0.5, the probability of various numbers of H's and T's on a small set of trials (say, 50) should be given by the binomial distribution, and the more trials I do, the smaller the probability of any significant departure from a 50/50 ratio of H:T. Agree or disagree? If you agree in the coin-flipping example, it shouldn't be "too difficult for you to understand" why similarly in a local hidden variables theory, the probability that your observed statistics differ by a given amount from the ideal probabilities will go down with the number of trials, and the rate at which it goes down should be independent of the number of possible values of λ.
 
Last edited:
  • #52
billschnieder said:
You are confused. Bell clearly states that logical dependence between A and (B or b), is not allowed nor is logical dependence between B and (a or A) allowed in his definition of "local causality".
Maaneli said:
Yep. Otherwise, the joint probability expression for outcome values A and B would not be factorizable, and the foil theory he assumes would not be locally causal.
Bill's blanket statement is not right. There is a logical dependence in probability expressions which are not conditioned on the variable λ; in other words, Bell would agree that P(A|a,b,B) may be different from P(A|a) even in a local hidden variables theory. But he'd also say that under a local hidden variables theory this cannot represent a causal influence of b and B on A, because the conditional dependence disappears when you do condition on λ, i.e. P(A|a,b,B,λ) must be equal to P(A|a,λ). This is exactly what I was saying in the statement Bill was responding to:
JesseM said:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables
Do you think my statement here was incorrect?
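To illustrate the screening-off condition concretely, here is a deliberately oversimplified toy model in Python (my own construction, which ignores the detector settings entirely): marginally A and B are perfectly correlated, but conditioning on λ removes the dependence, i.e. P(A|B,λ) = P(A|λ).

```python
import random

# Toy local-hidden-variable sketch: A and B are each fixed locally by a
# shared hidden variable lam, so P(A|B) != P(A) marginally, yet conditioning
# on lam screens off the dependence.
random.seed(3)

samples = []
for _ in range(100_000):
    lam = random.randrange(2)   # shared hidden variable from the source
    A = lam                     # outcome at station 1 depends only on lam
    B = lam                     # outcome at station 2 depends only on lam
    samples.append((lam, A, B))

def p(pred, given=lambda s: True):
    """Conditional relative frequency of pred among samples satisfying given."""
    sub = [s for s in samples if given(s)]
    return sum(pred(s) for s in sub) / len(sub)

p_A = p(lambda s: s[1] == 1)                               # ~0.5 marginally
p_A_given_B = p(lambda s: s[1] == 1, lambda s: s[2] == 1)  # 1.0 (correlated!)
p_A_given_lam = p(lambda s: s[1] == 1, lambda s: s[0] == 1)        # 1.0
p_A_given_B_lam = p(lambda s: s[1] == 1,
                    lambda s: s[2] == 1 and s[0] == 1)             # 1.0
print(p_A, p_A_given_B, p_A_given_lam, p_A_given_B_lam)
```

Learning B tells you about A only because it tells you about λ; once λ is given, B adds nothing, which is exactly the factorization condition without any causal influence between the outcomes.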
 
  • #53
JesseM said:
Bill's blanket statement is not right. There is a logical dependence in probability expressions which are not conditioned on the variable λ; in other words, Bell would agree that P(A|a,b,B) may be different from P(A|a) even in a local hidden variables theory. But he'd also say that under a local hidden variables theory this cannot represent a causal influence of b and B on A, because the conditional dependence disappears when you do condition on λ, i.e. P(A|a,b,B,λ) must be equal to P(A|a,λ). This is exactly what I was saying in the statement Bill was responding to:

Do you think my statement here was incorrect?

Yes, I think your statement here is not quite right.

I'll refer you to this part of my post #25, where I show exactly what Bell said about this:

"Bell then shows how one might try to embed quantum mechanics into a locally causal theory. To do this, he starts with the description of a spacetime diagram (figure 6) in which region 1 contains the output counter A (=+1 or -1), along with the polarizer rotated to some angle a from some standard position, while region 2 contains the output counter B (=+1 or -1), along with the polarizer rotated to some angle b from some standard position which is parallel to the standard position of the polarizer rotated to a in region 1. He then continues:

"We consider a slice of space-time 3 earlier than the regions 1 and 2 and crossing both their backward light cones where they no longer overlap. In region 3 let c stand for the values of any number of other variables describing the experimental set-up, as admitted by ordinary quantum mechanics. And let lambda denote any number of hypothetical additional complementary variables needed to complete quantum mechanics in the way envisaged by EPR. Suppose that the c and lambda together give a complete specification of at least those parts of 3 blocking the two backward light cones."

From this consideration, he writes the joint probability for particular values A and B as follows:

{A, B|a, b, c, lambda} = {A|B, a, b, c, lambda} {B|a, b, c, lambda}

He then says, "Invoking local causality, and the assumed completeness of c and lambda in the relevant parts of region 3, we declare redundant certain of the conditional variables in the last expression, because they are at spacelike separation from the result in question. Then we have

{A, B|a, b, c, lambda} = {A|a, c, lambda} {B|b, c, lambda}.

Bell then states that this formula has the following interpretation: "It exhibits A and B as having no dependence on one another, nor on the settings of the remote polarizers (b and a respectively), but only on the local polarizers (a and b respectively) and on the past causes, c and lambda. We can clearly refer to correlations which permit such factorization as 'locally explicable'. Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the formulation of 'local causality', but as a consequence thereof."

Bell then shows that this is the same local causality condition used in the derivation of the CHSH inequality, and which the predictions of quantum mechanics clearly violate. Hence, Bell concludes that quantum mechanics cannot be embedded in a locally causal theory."

I strongly urge you to read Bell's paper, La Nouvelle Cuisine.
 
  • #54
DrChinese said:
I would be happy to debate any element of Norsen's paper or your ideas about Bell's (2). Just didn't want to unnecessarily head off in that direction.

OK, in that case, could you start by taking a moment to directly address what I presented from Bell's La Nouvelle Cuisine, in post #25? I'd like to know if YOU agree or disagree with Bell's own logic, and if you disagree, then where and why exactly.

DrChinese said:
Local causality or local realism? Hmmm. I dunno, which is EPR about? Because it seems like it is about local realism to me: "...when the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality..." or perhaps: "On this point of view, since either one or the other, but not both simultaneously, of the quantities P and Q can be predicted, they are not simultaneously real. This makes the reality of P and Q depend upon the process of measurement carried out on the first system in any way. No reasonable definition of reality could be expected to permit this."

Of course EPR talked about realism. But what you seem to misunderstand about EPR is precisely what Norsen points out in his paper:

"I wish to call attention to ... the statement that ‘locality’ and ‘realism’ were assumptions made by EPR. This represents exactly the confusion I just mentioned – specifically, the failure to grasp that EPR presented an argument from Locality to outcome-determining hidden variables (i.e., Naive Realism). [30] This argument simply must be grasped and appreciated before one can properly understand the meaning and implications of Bell’s Theorem."

Also, Bell himself did say the following about EPR (from "Bertlmann's socks"):

"Could it be that the first observation somehow fixes what was unfixed, or makes real what was unreal, not only for the near particle but also for the remote one? For EPR that would be an unthinkable 'spooky action at a distance'. To avoid such action at a distance, they have to attribute, to the spacetime regions in question, real properties in advance of observation, correlated properties, which predetermine the outcomes of these particular observations. Since these real properties, fixed in advance of observation, are not contained in quantum formalism, that formalism is incomplete."

And, "What is held sacred [in the EPR argument] is the 'principle of local causality' - or 'no action at a distance'."

So Bell's description of the EPR argument confirms Norsen's: it is an argument from Locality (specifically, the principle of local causality) to outcome-determining hidden variables.
DrChinese said:
So I would say that in EPR, there is clearly a discussion of the simultaneous reality of P and Q (a and b in Bell). In fact, what's the difference between local realism and local causality? I guess the difference is in one's definition. In my mind, I might take Bell's (2) as a definition of local causality. And then Bell (14) as a statement of counterfactual definiteness (CD) or alternately realism.

As I explained in my post #25 from Bell's own reasoning, his introduction of c is not a statement of realism. Its introduction follows from the use of his principle of local causality. Did you read #25 at all?
 
Last edited:
  • #55
DrChinese said:
Here is a typical quote from Anton Zeilinger (1999), who is certainly one of the foremost authorities on this subject:

"Second, a most important development was due to John Bell (1964) who continued the EPR line of reasoning and demonstrated that a contradiction arises between the EPR assumptions and quantum physics. The most essential assumptions are realism and locality. This contradiction is called Bell’s theorem."

Or perhaps this from Aspect (1999):

"The experimental violation of Bell’s inequalities confirms that a pair of entangled photons separated by hundreds of metres must be considered a single non-separable object — it is impossible to assign local physical reality to each photon... Bell’s theorem changed the nature of the debate. In a simple and illuminating paper, Bell proved that Einstein’s point of view (local realism) leads to algebraic predictions (the celebrated Bell’s inequality) that are contradicted by the quantum-mechanical predictions for an EPR gedanken experiment involving several polarizer orientations..."

Einstein's local realism was of course: a) there is no spooky action at a distance; and b) the moon is there even when no one is looking. That being 2 separate assumptions.

Now I guess maneeli might say that this does not PROVE that a, b and c are required for these conclusions. However, as I have said many times before, all I need to see is a Bell proof that does not involve the assumption of 3 simultaneous elements of reality. Then I will agree with Norsen. But until then, you will note that this is in fact introduced after Bell (14) and is explicit. And of course, Norsen has not provided such derivation in his work. But it should be clear from the above that the general view is that there are 2 assumptions - locality and realism - required for the Bell result.

So to reiterate, a) and b) were not two separate assumptions. And again, I would not at all say that a, b, and c are not required for Bell's derivation. You must be pulling that out of thin air. I wish you would have taken the time to read post #25.

As for those Zeilinger and Aspect quotes, they are excellent examples of the confusion and conflation that they are both partly responsible for regarding what EPR said, and what Bell actually proved.
 
Last edited:
  • #56
Maaneli said:
Yes, I think your statement here is not quite right.

I'll refer you to this part of my post #25, where I show exactly what Bell said about this:

"Bell then shows how one might try to embed quantum mechanics into a locally causal theory. To do this, he starts with the description of a spacetime diagram (figure 6) in which region 1 contains the output counter A (=+1 or -1), along with the polarizer rotated to some angle a from some standard position, while region 2 contains the output counter B (=+1 or -1), along with the polarizer rotated to some angle b from some standard position which is parallel to the standard position of the polarizer rotated to a in region 1. He then continues:

"We consider a slice of space-time 3 earlier than the regions 1 and 2 and crossing both their backward light cones where they no longer overlap. In region 3 let c stand for the values of any number of other variables describing the experimental set-up, as admitted by ordinary quantum mechanics. And let lambda denote any number of hypothetical additional complementary variables needed to complete quantum mechanics in the way envisaged by EPR. Suppose that the c and lambda together give a complete specification of at least those parts of 3 blocking the two backward light cones."

From this consideration, he writes the joint probability for particular values A and B as follows:


{A, B|a, b, c, lambda} = {A|B, a, b, c, lambda} {B|a, b, c, lambda}

He then says, "Invoking local causality, and the assumed completeness of c and lambda in the relevant parts of region 3, we declare redundant certain of the conditional variables in the last expression, because they are at spacelike separation from the result in question. Then we have


{A, B|a, b, c, lambda} = {A|a, c, lambda} {B|b, c, lambda}.

Bell then states that this formula has the following interpretation: "It exhibits A and B as having no dependence on one another, nor on the settings of the remote polarizers (b and a respectively), but only on the local polarizers (a and b respectively) and on the past causes, c and lambda. We can clearly refer to correlations which permit such factorization as 'locally explicable'. Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the formulation of 'local causality', but as a consequence thereof."

Bell then shows that this is the same local causality condition used in the derivation of the CSHS inequality, and which the predictions of quantum mechanics clearly violate. Hence, Bell concludes that quantum mechanics cannot be embedded in a locally causal theory."

I strongly urge you to read Bell's paper, La Nouvelle Cuisine.
I have read some of Bell's other papers; is La Nouvelle Cuisine available online? Anyway, I am unclear on how you think any of the above contradicts what I said in the quote billschnieder was responding to. Can you point to the specific thing I said there that you think conflicts with some specific thing Bell said? For example, do you think Bell is actually denying that there can be a statistical dependence in probabilities which are not conditioned on lambda, i.e. that he is saying P(A|a,b,B) cannot be different from P(A|a)?
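As an aside for readers following the probability notation above: the factorization Bell derives can be illustrated with a toy model (a hypothetical sketch of my own, not anything in Bell's paper). Two outcomes that are marginally correlated become statistically independent once conditioned on the shared variable lambda:

```python
from fractions import Fraction
from itertools import product

# Toy locally causal model (hypothetical): a shared hidden variable lam
# is 0 or 1 with probability 1/2 each, and each wing's outcome is fixed
# by lam alone: A = B = +1 when lam = 1, and A = B = -1 when lam = 0.
p_lam = {0: Fraction(1, 2), 1: Fraction(1, 2)}

def outcome(lam):
    return +1 if lam == 1 else -1

def p_A(A, lam):                       # P(A | lam)
    return Fraction(int(A == outcome(lam)))

p_B = p_A                              # same deterministic rule on wing B

def p_joint_given_lam(A, B, lam):      # P(A, B | lam)
    return Fraction(int(A == outcome(lam) and B == outcome(lam)))

def p_joint(A, B):                     # P(A, B) = sum_lam P(lam) P(A, B | lam)
    return sum(p_lam[l] * p_joint_given_lam(A, B, l) for l in p_lam)

def p_marg(A):                         # P(A)
    return sum(p_joint(A, B) for B in (+1, -1))

# Marginally, A and B are correlated: P(A, B) != P(A) P(B) ...
assert p_joint(+1, +1) == Fraction(1, 2)
assert p_marg(+1) * p_marg(+1) == Fraction(1, 4)

# ... yet, conditioned on lam, the joint factorizes exactly, mirroring
# {A,B|a,b,c,lambda} = {A|a,c,lambda} {B|b,c,lambda}:
for l in p_lam:
    for A, B in product((+1, -1), repeat=2):
        assert p_joint_given_lam(A, B, l) == p_A(A, l) * p_B(B, l)
```

The detector settings a, b and the variables c are suppressed here for brevity; the point is only that "locally explicable" correlations are exactly those that factorize once the common past cause is conditioned on.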
 
  • #57
JesseM said:
I have read some of Bell's other papers; is La Nouvelle Cuisine available online? Anyway, I am unclear on how you think any of the above contradicts what I said in the quote billschnieder was responding to. Can you point to the specific thing I said there that you think conflicts with some specific thing Bell said? For example, do you think Bell is actually denying that there can be a statistical dependence in probabilities which are not conditioned on lambda, i.e. that he is saying P(A|a,b,B) cannot be different from P(A|a)?

As far as I know, it is not online. It is in his book, 'Speakable and Unspeakable in Quantum Mechanics'.

What you said which I thought was inconsistent with Bell is this:

"instead his [Bell's] whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables"

I have no idea what you mean by 'other hidden variables'. It sounds like you are saying there are hidden variables other than lambda. But Bell certainly did not imply this in anything he said. Perhaps that's not what you intended to say, in which case, please clarify.
 
  • #58
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here, if possible, for those not versed in advanced probability theory?
I understand basic Bell 101.
Thanks
 
  • #59
Maaneli said:
What you said which I thought was inconsistent with Bell is this:

"instead his [Bell's] whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables"

I have no idea what you mean by 'other hidden variables'. It sounds like you are saying there are hidden variables other than lambda. But Bell certainly did not imply this in anything he said. Perhaps that's not what you intended to say, in which case, please clarify.
Lambda is a single variable, but each value of lambda can correspond to some unique combination of values for an arbitrarily large number of local hidden variables. A simple example would be if we had only three hidden variables associated with each particle, which give them predetermined spins on each of the three possible measurement axes; for example, lambda=1 might correspond to values for these three variables of "spin-up on axis 1, spin-up on axis 2, spin-up on axis 3" while lambda=2 might correspond to "spin-up on axis 1, spin-up on axis 2, spin-down on axis 3", and so on for all eight possible combinations of predetermined spins on the three axes.

But Bell actually goes a lot further than this and allows the value of lambda to stand for a specification of some much larger (possibly infinite) set of local hidden variables. See p. 242 of Speakable and Unspeakable in Quantum Mechanics where he says "let lambda denote any number of hypothetical additional complementary variables needed to complete quantum mechanics in the way envisaged by EPR", and has the combination of c (representing the state of observable variables 'describing the experimental setup') and lambda give a "complete specification" of every local physical fact in the sections of the past light cones of the two measurements depicted in fig. 6.
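To make the simple three-axis encoding above concrete, here is a minimal sketch (the lambda labels are my own hypothetical choice, just to illustrate the counting): a single lambda value indexes one of the eight possible assignments of predetermined spins on three axes.

```python
from itertools import product

# +1 = spin-up, -1 = spin-down on each of the three measurement axes.
# Each lambda value labels one complete assignment of predetermined spins.
assignments = list(product([+1, -1], repeat=3))
lambda_table = {lam: spins for lam, spins in enumerate(assignments, start=1)}

assert lambda_table[1] == (+1, +1, +1)   # lambda = 1: up on all three axes
assert lambda_table[2] == (+1, +1, -1)   # lambda = 2: up, up, down
assert len(lambda_table) == 2 ** 3       # eight combinations in all
```

With N binary hidden variables the table would have 2**N entries, which is why a single lambda can stand in for an arbitrarily large collection of local variables.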
 
  • #60
JesseM said:
But Bell actually goes a lot further than this and allows the value of lambda to stand for a specification of some much larger (possibly infinite) set of local hidden variables. See p. 242 of Speakable and Unspeakable in Quantum Mechanics where he says "let lambda denote any number of hypothetical additional complementary variables needed to complete quantum mechanics in the way envisioned by EPR", and has the combination of c (representing the state of observable variables 'describing the experimental setup') and lambda give a "complete specification" of every local physical fact in the sections of the past light cones of the two measurements depicted in fig. 6.

That's right, which is why I don't know what you mean by 'other hidden variables'.
 
  • #61
morrobay said:
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here, if possible, for those not versed in advanced probability theory?
I understand basic Bell 101.
Thanks

The dispute is over whether the phrase 'local realism' is an appropriate characterization of the assumptions that Bell made in his theorem. I am arguing that it is not appropriate, and that physicists should drop that phrase in favor of Bell's 'local causality'.
 
  • #62
morrobay said:
for those not versed in advanced probability theory.

Believe it or not, this is elementary probability theory. But it often gets muddled by unnecessarily complicated analogies.
 
  • #63
Maaneli said:
That's right, which is why I don't know what you mean by 'other hidden variables'.
I meant there might be other variables besides the ones dealing with observable things like detector settings and measurement outcomes, and that these variables (unlike the former ones) would be hidden ones. Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b. Reread the comment in this light and hopefully you will no longer find anything to disagree with there:
Bell does not just assume that since there is a marginal correlation between the results of different measurements on a pair of particles, there must be a causal relation between the measurements; instead his whole argument is based on explicitly considering the possibility that this correlation would disappear when conditioned on other hidden variables
 
  • #64
JesseM said:
I meant there might be other variables besides the ones dealing with observable things like detector settings and measurement outcomes, and that these variables (unlike the former ones) would be hidden ones. Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b. Reread the comment in this light and hopefully you will no longer find anything to disagree with there:

Sorry, but I still don't understand. Are you saying that these other variables besides the ones dealing with observable things like detector settings and measurement outcomes, are not encompassed by lambda? If so, then what could you possibly mean by 'hidden'? And if not, then why not just say that there are no other hidden variables other than what Bell defines as encompassed by lambda?
 
  • #65
In the actual version of this study they weren't randomly selected. See the Simpson's paradox Wikipedia page, where I think you got this example from (unless it also appears in other sources):

In other words, they were sampling a group that had already been assigned A or B by their doctors.
My example is different from the Wikipedia example; the fact that the same numbers are used does not mean you should ignore everything I actually said and respond to the Wikipedia treatment of Simpson's paradox. For one, there is no omniscient being in the Wikipedia example. It seems to me you are just grasping at straws here.

JesseM said:
I already gave you an example--just get a bunch of people who haven't received any treatment yet to volunteer for a study, then have a computer with a random number generator randomly assign each person to receive treatment A or treatment B. Do you agree that P(given person will be assigned by random number generator to receive treatment A) should be uncorrelated with P(given person will have some other background factor such as high socioeconomic status or large kidney stones)? If so, then the only reason group A might contain more people with a given factor (like large kidney stones) than group B would be a random statistical fluctuation, and the likelihood of any statistically significant difference in these background factors between group A and group B would get smaller and smaller the larger your sample size.
You do not know what you are talking about. The question you asked is irrelevant to the discussion, and, for the last time, there are no socioeconomic factors in the example I presented. You seem to have a hard time actually following an argument, and you spend a lot of ink responding to what you want the argument to be rather than what it actually is. It looks like grandstanding to me.

Your only relevant response so far is essentially that a random number generator can do the job of producing a fair sample. You clearly do not deny the fact that the probability of success of each treatment will differ from the omniscient being's unless the proportions within the sampled population are the same as in the universe. Yet your only cop-out is the idea that a random number generator will produce the same distribution. I have performed the simulation (see the attached Python code), and the results confirm once and for all that you have no clue what you are saying. If you still deny it, run your own and post the results.

Remember, we are interested ONLY in obtaining two groups that have the same proportion of large-stone to small-stone people as in the universe of all people with the disease. Alternatively, we are interested in two groups with exactly the same proportions of small and large stones. Feel free to calculate the probability of drawing two groups with the same proportions.

Python Code:
Code:
import random

NUMBER_OF_TRIALS = 100
TEST_SIZE = 100
UNIVERSE_FRAC_LARGE = 0.7
UNIVERSE_SIZE = 1000000
DIFFERENCE_PERMITTED = 0.01
UNIVERSE_FRAC_SMALL = 1.0 - UNIVERSE_FRAC_LARGE

def calc_freqs(l):
    # takes a binary list and prints the fractions
    # of large-stone and small-stone people.
    frac_large = 1.0 * l.count(1) / len(l)
    frac_small = 1.0 * l.count(0) / len(l)
    print('Large: %8.2f, Small: %8.2f' % (frac_large, frac_small))
    return frac_large, frac_small

# generate a population of UNIVERSE_SIZE people as a binary list,
# with UNIVERSE_FRAC_LARGE of them having large stones and
# UNIVERSE_FRAC_SMALL having small stones
# 1 = large stones,  0 = small stones

population = [1] * int(UNIVERSE_FRAC_LARGE * UNIVERSE_SIZE) + [0] * int(UNIVERSE_FRAC_SMALL * UNIVERSE_SIZE)

# randomize it to start with
population = random.sample(population, len(population))

# n counts pairs of groups whose fractions match each other to within
# DIFFERENCE_PERMITTED; m counts those that also match the universe fractions
n = 0
m = 0

# for each of NUMBER_OF_TRIALS iterations, extract two groups of TEST_SIZE
# randomly from the population, compute the fractions of large and small
# stones, and compare them with each other and with the universe fractions
largest_deviation_btw = (0.0, 0.0)
largest_deviation_unv = (0.0, 0.0)

for i in range(NUMBER_OF_TRIALS):
    fl1, fs1 = calc_freqs(random.sample(population, TEST_SIZE))  # group 1
    fl2, fs2 = calc_freqs(random.sample(population, TEST_SIZE))  # group 2

    _dev_btw = (abs(fl1 - fl2), abs(fs1 - fs2))
    _dev_unv = (abs(fl1 - UNIVERSE_FRAC_LARGE), abs(fs1 - UNIVERSE_FRAC_SMALL))
    if _dev_btw[0] < DIFFERENCE_PERMITTED > _dev_btw[1]:
        n += 1
        if _dev_unv[0] < DIFFERENCE_PERMITTED > _dev_unv[1]:
            m += 1

    largest_deviation_btw = max(largest_deviation_btw, _dev_btw)
    largest_deviation_unv = max(largest_deviation_unv, _dev_unv)

print("Probability of producing two similar groups: %8.4f" % (float(n) / NUMBER_OF_TRIALS))
print("Probability of producing two similar groups, also similar to universe: %8.4f" % (float(m) / NUMBER_OF_TRIALS))
print("Largest deviation observed between groups -- Large: %8.2f, Small: %8.2f" % largest_deviation_btw)
print("Largest deviation observed between groups and universe -- Large: %8.2f, Small: %8.2f" % largest_deviation_unv)

Results:
Code:
Probability of producing two similar groups:   0.0700
Probability of producing two similar groups, also similar to universe:   0.0100
Largest deviation observed between groups -- Large:     0.21, Small:     0.21
Largest deviation observed between groups and universe -- Large:     0.13, Small:     0.13

Note, with a random number generator, you sometimes find deviations larger than 20% between groups! And this is just for a simple situation with only ONE hidden parameter. It quickly gets much, much worse if you increase the number of hidden parameters. At this rate, you will need to do an exponentially large number of experiments (compared to the number of parameters) to even have a chance of measuring a single fair sample, and even then you will not know when you have it, because the experimenters do not even know what "fair" means. And remember, we are assuming that a small-stone person has the same chance of being chosen as a large-stone person. It could very well be that small-stone people are shy and never volunteer, etc., and you quickly get into a very difficult situation in which a fair sample is extremely unlikely.
 
Last edited by a moderator:
  • #66
Continuing...
JesseM said:
No you didn't. This is the key point you seem to be confused about: the marginal correlation between treatment B and recovery observed by the omniscient being is exactly the same as that observed by the experimenters. The omniscient being does not disagree that those who receive treatment B have an 83% chance of recovery, and a person who receives treatment A has a 73% chance of recovery.
Yes, he does. He disagrees that treatment B is marginally more effective than treatment A. The experimenters think they are calculating a marginal probability of success for each treatment, but the omniscient being knows that they are not. This is the issue you are trying to dodge with your language here.
The problem is not with what the omniscient being knows! The problem is what the doctors believe they know from their experiments. Now I know that you are just playing tricks and avoiding the issue. Those calculating from Aspect-type experiments do not know the nature of all the hidden elements of reality involved either, yet they think they have fully sampled all possible hidden elements of reality at play. They think their correlations can be compared with Bell's marginal probability. How can they possibly know that? What random number generator could ensure that they sample all possible hidden elements of reality fairly, when they have no clue about the details? For all we know, some of them may even be excluded by the experimental set-ups!

Simple yes or no is not possible here; there is some probability the actual statistics on a finite number of trials would obey Bell's inequalities, and some probability they wouldn't, and the law of large numbers says the more trials you do, the less likely it is your statistics will differ significantly from the ideal statistics that would be seen given an infinite number of trials (so the less likely a violation of Bell's inequalites would become in a local realist universe).

This is an interesting admission. Would you say then that the law of large numbers will work for a situation in which the experimental setups typically used for Bell-type experiments were systematically biased against some λs but favored other λs? Yes or No. Or do you believe that Bell test setups are equally fair to all possible λs? Yes or No.
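For readers weighing these two positions, here is a neutral, minimal simulation of the law-of-large-numbers claim under discussion (a sketch of my own, not code from either poster): with purely random assignment of a binary trait, the typical imbalance between two groups shrinks roughly like 1/sqrt(N) as the group size N grows.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def mean_group_imbalance(group_size, frac=0.7, trials=1000):
    """Average absolute difference in the observed fraction of a binary
    trait (e.g. large stones, present with probability frac) between two
    independently drawn random groups of the given size."""
    total = 0.0
    for _ in range(trials):
        g1 = sum(random.random() < frac for _ in range(group_size))
        g2 = sum(random.random() < frac for _ in range(group_size))
        total += abs(g1 - g2) / group_size
    return total / trials

small = mean_group_imbalance(100)    # groups of 100
large = mean_group_imbalance(2500)   # groups of 2,500

# 25x the sample size gives roughly 5x less typical imbalance
assert large < small / 3
```

This does not settle the separate question of whether real Bell-test set-ups sample all values of lambda fairly; it only quantifies how fast random fluctuations between randomly assigned groups die off with sample size.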
 
  • #67
morrobay said:
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here, if possible, for those not versed in advanced probability theory?
I understand basic Bell 101.
Thanks

The summary of my argument is this. I am making two points:

1. Bell's definition of "local causality" also excludes all "logical dependence", which is unwarranted because logical dependence exists in situations that are demonstrably locally causal.

2. Bell calculates his marginal probability for the outcomes at the two stations by integrating over all possible values of the hidden elements λ. Therefore his inequalities are only comparable to experiments in which all possible hidden elements λ are realized. But since experimenters do not know anything about λ (it is hidden), it is not possible to perform an experiment comparable to Bell's inequalities.
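To make point 2 concrete, here is a sketch of my own (a hypothetical local deterministic model, chosen purely for illustration) showing how Bell's marginal correlation arises by averaging over λ, and that any such λ-average respects the CHSH bound |S| ≤ 2, whereas quantum mechanics predicts 2√2 ≈ 2.83 at these settings:

```python
import math
import random

random.seed(1)  # reproducible run

# Hypothetical local deterministic model: lambda is an angle drawn
# uniformly from [0, 2*pi), and each wing's outcome is the sign of the
# cosine of the angle between the local setting and lambda.
def A(a, lam):
    return +1 if math.cos(a - lam) >= 0 else -1

def B(b, lam):
    return +1 if math.cos(b - lam) >= 0 else -1

def E(a, b, n=100000):
    # Bell's marginal expectation: average A*B over the distribution of
    # lambda (a Monte Carlo stand-in for the integral over rho(lambda)).
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        total += A(a, lam) * B(b, lam)
    return total / n

# CHSH combination at the settings where QM predicts 2*sqrt(2)
a0, a1, b0, b1 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)

# Any lambda-averaged local deterministic model obeys |S| <= 2
# (up to Monte Carlo noise); QM predicts about 2.83 here.
assert abs(S) <= 2.05
```

Note that the Monte Carlo average samples λ uniformly; billschnieder's point 2 is precisely the worry that a real experiment has no way to guarantee such fair sampling of λ.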
 
  • #68
Maaneli said:
Sorry, but I still don't understand. Are you saying that these other variables besides the ones dealing with observable things like detector settings and measurement outcomes, are not encompassed by lambda?
No, I just explained to you that "other" was meant to contrast with non-hidden variables like A, B, a, b, not with lambda. Nowhere did I suggest any hidden variables not encompassed by lambda.
Maaneli said:
And if not, then why not just say that there are no other hidden variables other than what Bell defines as encompassed by lambda?
I thought it was clear from my previous post that you were misunderstanding when you imagined the "other" was a contrast to lambda rather than a contrast to the non-hidden variables. That's why I said 'Maybe it would have been clearer if I had written "other, hidden, variables" or "other (hidden) variables" to make clear that I was contrasting them with the non-hidden variables like A, B, a, and b'.
 
  • #69
morrobay said:
Could one of you folks above please give a simplified explanation / example of the two
opposing arguments here, if possible, for those not versed in advanced probability theory?
I understand basic Bell 101.
Thanks

:smile:

I am the guy who presents the standard approach. If I deviate, I say so. JesseM also presents standard science.

There are 2 other groups represented. One group advocates that Bell's Theorem + Bell Tests combined do not rule out Local Realism. The argument varies, but in recent posts it relates to the idea that classical phenomena can violate Bell Inequalities - thus proving that Bell cannot be relied upon. This argument has been soundly rejected; we are simply rehashing it for iteration 4,823.

The other group insists that Bell essentially requires there to be a violation of locality within QM. The consensus, on the other hand, is that either locality or realism can be violated. (I.e., take your pick.) This argument has some merit, as there does not appear to be another mechanism* for explaining entanglement. However, this is not strictly a deduction from Bell, so we are debating that point. Norsen, channeled here by Maaneli, is arguing for one side. I am defending the status quo.

*Actually there are at least 2 others, but this is the short version of the explanation.
 
  • #70
DrChinese said:
:smile:

I am the guy who presents the standard approach. If I deviate, I say so. JesseM also presents standard science.

There are 2 other groups represented. One group advocates that Bell's Theorem + Bell Tests combined do not rule out Local Realism. The argument varies, but in recent posts it relates to the idea that classical phenomena can violate Bell Inequalities - thus proving that Bell cannot be relied upon. This argument has been soundly rejected; we are simply rehashing it for iteration 4,823.

The other group insists that Bell essentially requires there to be a violation of locality within QM. The consensus, on the other hand, is that either locality or realism can be violated. (I.e., take your pick.) This argument has some merit, as there does not appear to be another mechanism* for explaining entanglement. However, this is not strictly a deduction from Bell, so we are debating that point. Norsen, channeled here by Maaneli, is arguing for one side. I am defending the status quo.

*Actually there are at least 2 others, but this is the short version of the explanation.

<< [Bell] and Norsen, channeled here by Maaneli, are arguing for one side. >>

It is important to recognize that I am representing Bell's own understanding of his theorem, not just Norsen's.
 