Bell's theorem and probability theory

  • #1
mn4j
DrChinese said:
That is at odds with standard probability theory for +/-120 degrees, which should be 1/3. In other words, considering that the three positions are 120 degrees apart (total 360, a full circle): the three cannot all have a 25% chance of being like their neighbor. There wouldn't be internal consistency. Try randomly flipping 3 coins and then randomly looking at just 2 of them. You will quickly discover they are alike (correlated) at least 1/3 of the time.

This is not true. If the coins are designed in such a way that they are biased to produce those results at 120 degrees apart, it would not be at odds with probability.

The problem with Bell's theorem, which many people still fail to realize to their own undoing, is that it is only valid within the assumptions it is predicated on. But there are several problems with Bell's assumptions, or more specifically with his understanding of the meaning of probability. For example:

According to Bell, in a local causal theory, if x has no causal effect on Y, then P(Y|xZ) = P(Y|Z) (J.S. Bell, "Speakable and Unspeakable in Quantum Mechanics", 1989)

However this is false. As explained in E. T. Jaynes, "Clearing up Mysteries - The Original Goal" (1989), even in a local causal theory the fundamentally correct factorization is P(xY|Z) = P(x|YZ)P(Y|Z), so P(Y|xZ) need not equal P(Y|Z).

This is easily verified: Consider an urn with two balls, one red and one white (Z). A blind monkey draws the balls, the first with the right hand and the second with the left hand. Certainly the second draw cannot have a causal effect on the first draw. Set Y to be "first draw is a red ball" and x to be "second draw is a red ball".

Exp 1: The monkey shows you the ball in his left hand (second draw) and it is white. What is P(Y|¬x,Z)? The correct answer is 1. However, according to Bell and his followers, since the second draw has no causal effect on the first, it should equal P(Y|Z) = 1/2. This is patently false.
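This urn logic can be checked by brute-force enumeration (a quick illustrative sketch in Python, not from the original post; the variable names are mine):

```python
from itertools import permutations

# Urn Z: one red (R) and one white (W) ball. The blind monkey's two
# draws are just a random ordering of the two balls, equally likely.
draws = list(permutations(["R", "W"]))  # [('R','W'), ('W','R')]

# Y = "first draw is red"; we then condition on the second draw being white.
p_Y = sum(1 for d in draws if d[0] == "R") / len(draws)       # P(Y|Z)
cond = [d for d in draws if d[1] == "W"]                      # second draw white
p_Y_given = sum(1 for d in cond if d[0] == "R") / len(cond)   # P(Y | second white, Z)

print(p_Y)        # 0.5 -- before seeing the second ball
print(p_Y_given)  # 1.0 -- after seeing the second ball is white
```

Seeing the second ball changes the calculated probability for the first, even though nothing physical about the first ball changed.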

Bell's problem is that he did not realize the difference between "logical independence" and "physical independence", with the consequence that whatever notion of locality he was representing in his equations is not equivalent to Einstein's.
 
  • #2


mn4j said:
This is not true. If the coins are designed in such a way that they are biased to produce those results at 120 degrees apart, it would not be at odds with probability.

The problem with Bell's theorem, which many people still fail to realize to their own undoing, is that it is only valid within the assumptions it is predicated on. But there are several problems with Bell's assumptions, or more specifically with his understanding of the meaning of probability. For example:

According to Bell, in a local causal theory, if x has no causal effect on Y, then P(Y|xZ) = P(Y|Z) (J.S. Bell, "Speakable and Unspeakable in Quantum Mechanics", 1989)

However this is false. As explained in E. T. Jaynes, "Clearing up Mysteries - The Original Goal" (1989), even in a local causal theory the fundamentally correct factorization is P(xY|Z) = P(x|YZ)P(Y|Z), so P(Y|xZ) need not equal P(Y|Z).

This is easily verified: Consider an urn with two balls, one red and one white (Z). A blind monkey draws the balls, the first with the right hand and the second with the left hand. Certainly the second draw cannot have a causal effect on the first draw. Set Y to be "first draw is a red ball" and x to be "second draw is a red ball".

Exp 1: The monkey shows you the ball in his left hand (second draw) and it is white. What is P(Y|¬x,Z)? The correct answer is 1. However, according to Bell and his followers, since the second draw has no causal effect on the first, it should equal P(Y|Z) = 1/2. This is patently false.

Bell's problem is that he did not realize the difference between "logical independence" and "physical independence", with the consequence that whatever notion of locality he was representing in his equations is not equivalent to Einstein's.

This is plain wrong, and on a lot of levels. Besides, you are basically hijacking the OP's thread to push a minority personal opinion which has been previously discussed here ad nauseam. Start your own thread on "Where Bell Went Wrong" (and here's a reference as a freebie) and see how far your argument lasts. These kinds of arguments are a dime a dozen.

For the OP: You should try my example with the 3 coins. Simply try your manipulations, but then randomly compare 2 of the 3. You will see that the correlated result is never less than 1/3. The quantum prediction is 1/4, which matches experiments that are done on pretty much a daily basis.
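This coin experiment is easy to simulate (an illustrative sketch; the 100,000-trial count is arbitrary):

```python
import random
from itertools import product

random.seed(0)

# Empirical version: flip 3 fair coins, then randomly look at just 2 of them.
trials = 100_000
alike = 0
for _ in range(trials):
    coins = [random.choice("HT") for _ in range(3)]
    i, j = random.sample(range(3), 2)
    alike += coins[i] == coins[j]
match_rate = alike / trials  # ~0.5 for independent fair coins

# Bound version: for ANY pre-assigned triple of outcomes, at least one of
# the 3 pairs must match, so a random pair agrees with probability >= 1/3.
min_match = min(
    sum(t[i] == t[j] for i, j in [(0, 1), (0, 2), (1, 2)]) / 3
    for t in product("HT", repeat=3)
)
print(match_rate)  # around 0.5
print(min_match)   # 0.333... -- the lower bound
```

The enumeration in the second half shows why 1/3 is a hard floor for any pre-assigned values, which is the point of the coin analogy.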
 
  • #3


Yes, mn4j's analogy shows a misunderstanding of Bell's inequality. Here's a more accurate analogy I came up with a while ago:
Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get opposite results--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a lemon.

Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card is always the opposite of the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the opposite of the other--if the first card was created with hidden fruits A+,B+,C-, then the other card must have been created with the hidden fruits A-,B-,C+.

The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find opposite fruits on at least 1/3 of the trials. For example, if we imagine Bob's card has the hidden fruits A+,B-,C+ and Alice's card has the hidden fruits A-,B+,C-, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be:

Bob picks A, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

Bob picks A, Alice picks C: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks B, Alice picks A: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks B, Alice picks C: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks C, Alice picks A: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks C, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

In this case, you can see that in 1/3 of trials where they pick different boxes, they should get opposite results. You'd get the same answer if you assumed any other preexisting state where there are two fruits of one type and one of the other, like A+,B+,C-/A-,B-,C+ or A+,B-,C-/A-,B+,C+. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, like A+,B+,C+/A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get opposite fruits with probability 1. So if you imagine that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C-/A-,B+,C+ while other pairs are created in homogeneous preexisting states like A+,B+,C+/A-,B-,C-, then the probability of getting opposite fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get opposite answers in less than 1/3 of trials where you scratch different boxes, provided you assume that each card has such a preexisting state with "hidden fruits" in each box.
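The case analysis above can be confirmed by enumerating every possible assignment of hidden fruits (a sketch under the post's assumption that Alice's card is always the box-by-box opposite of Bob's):

```python
from itertools import product

# All ways Bob and Alice can pick two DIFFERENT boxes (A=0, B=1, C=2).
choices = [(i, j) for i in range(3) for j in range(3) if i != j]

probs = []
for bob in product("+-", repeat=3):  # Bob's hidden fruits for boxes A, B, C
    # Alice's card holds the box-by-box opposite fruits, so Alice's observed
    # fruit is opposite to Bob's exactly when Bob's card has EQUAL fruits
    # in the two chosen boxes.
    p_opposite = sum(bob[i] == bob[j] for i, j in choices) / len(choices)
    probs.append(p_opposite)

print(min(probs))  # inhomogeneous cards like (+,-,+) give exactly 1/3
print(max(probs))  # homogeneous cards (+,+,+) or (-,-,-) give 1.0
```

No mixture of these hidden states can push the opposite-fruit probability below the minimum of 1/3, which is the Bell-type bound the example illustrates.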

But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got opposite fruits 1/4 of the time! That would be the violation of Bell's inequality, and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have opposite fruits in a given box.
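For comparison, the quoted quantum prediction comes from the standard singlet-state formula for spin-1/2 particles, P(opposite results) = cos²(θ/2) at relative detector angle θ (a textbook formula, stated here for reference, not derived in the thread):

```python
import math

# Singlet-state prediction for spin-1/2 particles measured along axes
# separated by angle theta: P(opposite results) = cos^2(theta / 2).
theta = math.radians(120)
p_opposite = math.cos(theta / 2) ** 2
print(p_opposite)  # ~0.25, below the local-hidden-variable bound of 1/3
```

At 120 degrees this gives 1/4, which is what violates the 1/3 bound from the card-counting argument.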
And you can modify this example to show some different Bell inequalities, see post #8 of this thread if you're interested.
 
  • #4


Gothican said:
I bet it's a pretty complicated answer, so if you guys aren't up to writing it all down here, it'll be great if someone could post a link to another page which explains it more fully.
thanks

Your questions are answered by the following article:
Jaynes, E. T., 1989, "Clearing up Mysteries - The Original Goal," in Maximum-Entropy and Bayesian Methods, J. Skilling (ed.), Kluwer, Dordrecht, p. 1 (http://bayes.wustl.edu/etj/articles/cmystery.pdf)

Specifically, it treats the Bell inequalities starting on p. 10. It also shows how the QM expectation value is consistent with probability theory and explains very clearly the mistakes Bell made that caused the conflict.

If you have access to Bell's original paper (NOT the multitude of third-party proofs on the web), you can follow along and see exactly where Bell introduced the conflict (equation 2).
 
  • #5


mn4j said:
Specifically, it treats the Bell inequalities starting on p. 10. It also shows how the QM expectation value is consistent with probability theory and explains very clearly the mistakes Bell made that caused the conflict.
It's not consistent with probability theory applied to a local hidden variables scenario. If you disagree, please look at the example I gave and explain how any local hidden variables scenario (i.e. a scenario where each lotto card has a preexisting fruit under each box, and scratching merely reveals the fruit that was already underneath) could be consistent with the statistics given.

The article you posted is confused on many points--for example, they claim that "as his words show above, Bell took it for granted that a conditional probability P(X|Y) expresses a physical causal influence, exerted by Y on X". But if you look at Bell's words that they're quoting, this is a misinterpretation. What Bell said was "It would be very remarkable if b proved to be a causal factor for A, or a for B; i.e. if [tex]P(A | a \lambda)[/tex] depended on b or [tex]P(B | b \lambda)[/tex] depended on a." Nowhere is Bell saying that in general P(X|Y) being different from P(X) implies that Y was causally influencing X; he's just saying that in this particular case the only sensible way that [tex]P(A | a \lambda)[/tex] could depend on b would be if b exerted an influence on the hidden variables [tex]\lambda[/tex]. And the reason for that has to do with the specific meaning of the terms--here a represents one experimenter's choice of what variable to measure, and b represents the other experimenter's choice of what variable to measure. It is assumed that these choices happen at a spacelike separation from one another, so there can be no direct causal influence from one to another in a local hidden variables theory, and it is assumed that the choices are spontaneous and random, so that there were no factors in their common past light cone which preconditioned both the experimenters' choices and the hidden variables [tex]\lambda[/tex] for each particle in such a way that they could be correlated.
In physics there are two ways that events at different locations can be correlated: either by a direct causal influence from one to another, or by some factor in their common past light cone which influenced both; the monkey/urn example is simply an example of the latter, since both the probability of a given ball being found in one hand and the probability of a given ball being found in another are determined by the selection procedure from the urn, with the combination of balls in the urn at the time the second ball was selected having been influenced by the event of the first ball being selected. In the case of the experimenters' choices, Bell was simply passing over the possibility that some factor in the common past light cone of both measurements somehow predetermined both their choices of what to measure and the hidden variables in such a way as to produce a correlation, because he was assuming the choices were really free; once this possibility is eliminated, the only remaining possibility for a dependence between [tex]P(A | a \lambda)[/tex] and b would be if the experimenter's choice of measurement b somehow exerted a faster-than-light influence on the hidden variables [tex]\lambda[/tex] of the other particle. In later writings Bell and other physicists have explicitly recognized this loophole of the experimenters' choices being predetermined by factors in their common past light cone; see for example near the end of this article from the Stanford Encyclopedia of Philosophy where they write:
The last resort of a dedicated adherent of local realistic theories, influenced perhaps by Einstein's advocacy of this point of view, is to conjecture that apparent violations of locality are the result of conspiracy plotted in the overlap of the backward light cones of the analysis-detection events in the 1 and 2 arms of the experiment. These backward light cones always do overlap in the Einstein-Minkowski space-time of Special Relativity Theory (SRT) — a framework which can accommodate infinitely many processes besides the standard ones of relativistic field theory. Elimination of any finite set of concrete scenarios to account for the conspiracy leaves infinitely many more as unexamined and indeed unarticulated possibilities. What attitude should a reasonable scientist take towards these infinite riches of possible scenarios? We should certainly be warned by the power of Hume's skepticism concerning induction not to expect a solution that would be as convincing as a deductive demonstration and not to expect that the inductive support of induction itself can fill the gap formed by the shortcoming of a deductive justification of induction (Hume 1748, Sect. 4). One solution to this problem is a Bayesian strategy that attempts to navigate between dogmatism and excessive skepticism (Shimony 1993, Shimony 1994). To avoid the latter one should keep the mind open to a concrete and testable proposal regarding the mechanism of the suspected conspiracy in the overlap of the backward light cones, giving such a proposal a high enough prior probability to allow the possibility that its posterior probability after testing will warrant acceptance. To avoid the former one should not give the broad and unspecific proposal that a conspiracy exists such high prior probability that the concrete hypothesis of the correctness of Quantum Mechanics is debarred effectively from acquiring a sufficiently high posterior probability to warrant acceptance. 
This strategy actually is implicit in ordinary scientific method. It does not guarantee that in any investigation the scientific method is sure to find a good approximation to the truth, but it is a procedure for following the great methodological maxim: “Do not block the way of inquiry” (Peirce 1931).[5]
In terms of my lotto card analogy, it would be as if the machine printing the cards could know ahead of time whether my partner and I would both scratch the same box on our respective cards or different boxes (because there was some causal factor in the past light cone of the scratching events which determined in advance which boxes we would scratch, and the machine could examine this causal factor to know ahead of time what the choices would be), and the machine would vary the statistics of what fruits were behind each box depending on what our future choices were going to be on each trial.
 
  • #6


JesseM said:
The article you posted is confused on many points--for example, they claim that "as his words show above, Bell took it for granted that a conditional probability P(X|Y) expresses a physical causal influence, exerted by Y on X". But if you look at Bell's words that they're quoting, this is a misinterpretation. What Bell said was "It would be very remarkable if b proved to be a causal factor for A, or a for B; i.e. if [tex]P(A | a \lambda)[/tex] depended on b or [tex]P(B | b \lambda)[/tex] depended on a." Nowhere is Bell saying that in general P(X|Y) being different from P(X) implies that Y was causally influencing X; he's just saying that in this particular case the only sensible way that [tex]P(A | a \lambda)[/tex] could depend on b would be if b exerted an influence on the hidden variables [tex]\lambda[/tex].
You have not understood it. Bell is thinking that events at A should not influence events at B. Physically that is correct. However, he imposes that physical condition on the probabilities, so his equations are wrong. Logically, the probability of an event at A can influence the probability of an event at B even if there is no physical dependence. That is precisely the reason why he makes the fatal error. You only need to look at the example of the monkey pulling two balls to see this. Physically, the second ball can have no influence on the first. But if you impose this condition on the probability, you end up with a probability of 1/2 for the first ball even after you have seen the second ball, which is wrong. If you know that the second ball is white, the probability that the first one picked was red is 1, not 1/2.

Following the correct probability rules, QM is in line with probability theory.
 
  • #7


mn4j said:
You have not understood it. Bell is thinking that events at A should not influence events at B.
He is also saying that the experimenters' choices of what variable to measure, a and b (i.e. the spin axes they are measuring on a given trial), which are distinct from the outcomes they get, A and B, should not influence the outcomes at the other experimenter's detector, or even be correlated with the outcomes at the other experimenter's detector, because their choices are freely made.
mn4j said:
Physically that is correct. However he imposes that physical condition on the probabilities.
And what physical condition would that be? Are you talking about the one in the quote "It would be very remarkable if b proved to be a causal factor for A, or a for B; i.e. if [tex]P(A | a \lambda)[/tex] depended on b or [tex]P(B | b \lambda)[/tex] depended on a"? If not, please tell me what specific probability condition you're talking about. If you are talking about that quote--which seems to be the one the authors are focusing on with their urn analogy--then my analysis is correct: Bell's statement depended on his specific understanding of what a meant physically (a choice of what variable to measure by the experimenters); he wasn't saying that in general P(X|Y) can only depend on Z if Z exerts a causal influence on X.
mn4j said:
Logically, the probability of an event at A can influence the probability of an event at B even if there is no physical dependence.
In a local realist universe, the only way this can work is if there are causal factors in the past of both A and B that influenced both and caused them to be correlated--this is exactly what's going on in the urn example. Do you think there's a third option for how A and B can be correlated that's not based on A and B exerting a causal influence on one another, and is also not based on some set of factors C in the overlap between the past light cones of A and B which causally influenced both? If so, please explain it.
mn4j said:
You only need to look at the example of the monkey pulling two balls to see this. Physically, the second ball can have no influence on the first.
No, but the past event of the monkey pulling a ball from the urn is in the common past of both events (the event of our looking to see what color the monkey chose on his first pick, and the event of our looking to see what color the monkey chose on his second), and this shared past is what explains the correlation, obviously. Did you even bother to read my explanation?
mn4j said:
Following the correct probability rules, QM is in line with probability theory.
Nope, it's not compatible with both the condition of local hidden variables and the condition that the experimenter's choices of what variable to measure were free ones which were not causally determined by some past factors which also determined the hidden variables associated with the particles (the 'no-conspiracy' assumption discussed in the quote from the Stanford Encyclopedia). The authors of the paper you mention have only rediscovered the conspiracy loophole, although they don't seem to have realized that the only way their observations about probabilities are relevant to Bell's theorem is if there was a conspiracy in the initial conditions which caused the experimenter's choices to be causally correlated with the hidden variables.
 
  • #8


JesseM said:
And what physical condition would that be? Are you talking about the one in the quote "It would be very remarkable if b proved to be a causal factor for A, or a for B; i.e. if [tex]P(A | a \lambda)[/tex] depended on b or [tex]P(B | b \lambda)[/tex] depended on a"? If not, please tell me what specific probability condition you're talking about. If you are talking about that quote--which seems to be the one the authors are focusing on with their urn analogy--then my analysis is correct: Bell's statement depended on his specific understanding of what a meant physically (a choice of what variable to measure by the experimenters); he wasn't saying that in general P(X|Y) can only depend on Z if Z exerts a causal influence on X.
The rules of probability demand that you calculate probabilities in specific ways. Bell used equation (12) when he should have used equation (13) and equation (14) when he should have used equation (15) [numbers according to Jaynes article]. The reason he did this was because he thought there should be no physical causal relationship. However, lack of a physical causal relationship does not mean lack of logical causality. This is explained in more detail in Jaynes' book " https://www.amazon.com/dp/0521592712/?tag=pfamazon01-20

JesseM said:
No, but the past event of the monkey pulling a ball from the urn is in the common past of both events (the event of our looking to see what color the monkey chose on his first pick, and the event of our looking to see what color the monkey chose on his second), and this shared past is what explains the correlation, obviously. Did you even bother to read my explanation?
The point is that by knowing the outcome of the second pick, the calculated probability for the first pick changes from what it was if the second outcome was unknown. The calculation of probabilities is completely after the fact of the experiment. This tells you that there is a logical link between the two in the way the calculation should be done, even though we know that the second ball cannot cause the first one to be different.
 
  • #9


mn4j said:
The rules of probability demand that you calculate probabilities in specific ways. Bell used equation (12) when he should have used equation (13)
Apparently you missed (or misunderstood) the part immediately after (13) where the authors say:
But if we grant that knowledge of the experimenter's free choices (a,b) would give us no information about [tex]\lambda[/tex]: [tex]P(\lambda | ab) = p(\lambda )[/tex] (and in this verbiage we too are being carefully epistemological) then Bell's interpretation lies in the factorization

[tex]P(AB | ab\lambda) = P(A | a \lambda ) P(B | b \lambda )[/tex] (14)

whereas the fundamentally correct factorization would read:

[tex]P(AB | ab \lambda ) = P(A | Bab\lambda ) P(B | ab\lambda ) = P(A | ab\lambda ) P(B | ab\lambda )[/tex] (15)

in which both a, b appear as conditioning statements
In other words, they're saying that if you grant the assumption that "knowledge of the experimenter's free choices (a,b) would give us no information about [tex]\lambda[/tex]", which is exactly the same "no-conspiracy" condition I have been discussing, then the left hand of (14) reduces to the right hand, and their equation (13) reduces to the equation (12) that Bell used. It's only if you drop this assumption that you're left with the "fundamentally correct" equations (13) and (15) in which no assumption about (a,b)'s relation to other events has been made.
mn4j said:
The reason he did this was because he thought there should be no physical causal relationship.
No, the explanation for the assumption is not just that there is no direct causal link between an experimenter's choice of what to measure at one detector b and the outcome at the opposite detector A. What is also being assumed is that in a local realist universe, there should be no causal factor in the common past of these two events that would lead to a correlation between b and A, because the experimenter's choices are assumed to be "free". Again, in a local realist universe there are two ways of explaining a correlation between two events X and Y; #1 is that either X is exerting a direct causal influence on Y or vice versa, but #2 is that some event(s) Z in their common past exerted a causal influence on both in such a way as to create the correlation. For example, if I put a black ball in one box and a white ball in another, and send one box to Alice and another to Bob, then if Alice opens her box and sees a white ball she'll know Bob must have gotten the black ball; this isn't because the event of Alice opening her box exerted a causal influence on the event of Bob opening his box or vice versa, but it is because both events were determined by the past events of my putting a black ball in one box and a white ball in the other. In fact this is exactly the idea behind the [tex]\lambda[/tex] in a local hidden variables theory, it represents some factors which both particles share because these factors were determined at the past event of both particles being created at the same location.
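The box-and-balls story can be sketched as a tiny simulation (illustrative only; the trial count is arbitrary) showing perfect anticorrelation arising purely from the common past event of packing the boxes:

```python
import random

random.seed(1)

# One black ball and one white ball are packed into two boxes (the common
# past event), then one box goes to Alice and one to Bob.
def run_trial():
    boxes = ["black", "white"]
    random.shuffle(boxes)   # the packing event in the common past
    alice, bob = boxes      # the boxes travel to distant locations
    return alice, bob

trials = [run_trial() for _ in range(10_000)]

# Perfect anticorrelation, with no influence between the opening events:
assert all(a != b for a, b in trials)
p_alice_white = sum(a == "white" for a, b in trials) / len(trials)
print(p_alice_white)  # ~0.5 -- each outcome alone is random
```

This is exactly the kind of common-cause correlation that a [tex]\lambda[/tex] in a local hidden variables theory is meant to capture.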
mn4j said:
However, lack of a physical causal relationship does not mean lack of logical causality.
Please name an example of "logical causality" between two events A and B that does not reduce to either a direct causal link between A and B or some event C in the common past of both A and B that conditioned both of them and caused the correlation.
JesseM said:
No, but the past event of the monkey pulling a ball from the urn is in the common past of both events (the event of our looking to see what color the monkey chose on his first pick, and the event of our looking to see what color the monkey chose on his second), and this shared past is what explains the correlation, obviously. Did you even bother to read my explanation?
mn4j said:
The point is that by knowing the outcome of the second pick, the calculated probability for the first pick changes from what it was if the second outcome was unknown.
Yes, and the explanation for that is a common causal factor in the past of both picks. In a local realist universe, when you have consistent correlated outcomes of pairs of experiments, the explanation is always either 1) one outcome exerting a direct causal influence on the other, or 2) a causal factor in the past which influenced both later outcomes. Do you agree or disagree with this?
 
  • #10


JesseM said:
Again, in a local realist universe there are two ways of explaining a correlation between two events X and Y; #1 is that either X is exerting a direct causal influence on Y or vice versa, but #2 is that some event(s) Z in their common past exerted a causal influence on both in such a way as to create the correlation.

...

Please name an example of "logical causality" between two events A and B that does not reduce to either a direct causal link between A and B or some event C in the common past of both A and B that conditioned both of them and caused the correlation.

You fail, just like Bell, to appreciate the difference between ontological correlations, which are results of real experimental data, and logical correlations, which are epistemological in nature. Bell was not analysing real experimental results; he was constructing a mental picture of what we might see if we measured it. This means he was bound by the rules of probability to do the calculations differently than he did.

You asked for an example of "logical causality". I have given one already: the urn. Knowledge of the second ball being red causes the probability that the first ball was white to change from 1/2 to 1, even though no physical causation is happening. To say that the result is only because of a common physical cause in the past does not work, because the calculation of probabilities takes place entirely after the monkey has picked the balls; yet calculating the probability for the first ball before and after the second ball is revealed gives different values. There is no way new knowledge of the second ball can physically cause a change in the first ball, is there?

Another example:
Consider the following proposition: "A implies B". Logically, this implies "not B implies not A".
Now pick any A and B of your choice for which there is a physical causality such that A causes B. It would still hold logically that "not B implies not A", yet it would make no sense to talk of "not B" physically causing "not A".
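The contrapositive claim can be verified mechanically with a truth table (a trivial check, included only to make the logical point concrete):

```python
from itertools import product

def implies(p, q):
    # material implication: "p implies q" is false only when p and not q
    return (not p) or q

# Truth-table check: whenever "A implies B" holds, "not B implies not A"
# holds too, regardless of any physical story connecting A and B.
contrapositive_ok = all(
    implies(not B, not A)
    for A, B in product([True, False], repeat=2)
    if implies(A, B)
)
print(contrapositive_ok)  # True
```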

The point therefore is that lack of physical causality is not a good enough excuse to assume lack of logical causality in your equations, as Bell did. You must still use the correct rules of probability to calculate.
 
  • #11


mn4j said:
You fail, just like Bell, to appreciate the difference between ontological correlations, which are results of real experimental data, and logical correlations, which are epistemological in nature. Bell was not analysing real experimental results; he was constructing a mental picture of what we might see if we measured it.
But Bell was assuming a realist picture in which any hidden variables that determine results of measurements already have well-defined values before measurement. You can imagine God continuously measuring [tex]\lambda[/tex] as the particles propagate, until after each one is measured by the experimenters, if it helps.
mn4j said:
You asked for an example of "logical causality". I have given one already. The one of the urn.
I asked for an example of logical causality where the measured correlation could not also be physically explained either in terms of one measurement influencing the other directly or both measurements being influenced by events in the past. The correlation between the colors of the two balls is completely based on the fact that they were both selected from the same urn.
mn4j said:
Knowledge of the second ball being red causes the probability that the first ball was white to change from 1/2 to 1, even though no physical causation is happening.
But if you take the God's eye view in which all objective facts exist before measurement, then what's happening here is the "hidden variable" of the second ball being red determines that when I look at it, the event "I see a red ball" is guaranteed to happen with probability 1. Meanwhile, the fact that the first ball picked was white, and was picked from an urn containing one red ball and one white ball, is what determines there'd be a probability 1 that the "hidden variable" associated with the second ball turned out to be red. In other words, P(I see a red ball when I look at second pick | I saw a white ball when I looked at the first pick) = 1 can be broken down into:

P(first ball picked had hidden state 'white' after being picked | I saw a white ball when I looked at the first pick) = 1
P(second ball picked had hidden state 'red' after being picked | first ball picked had hidden state 'white' after being picked) = 1
P(I see a red ball when I looked at second pick | second ball picked had hidden state 'red' after being picked) = 1

In a realist universe, you should always be able to take such a God's eye view where the true state of all the unknown factors is part of the probability calculation (although these true states are represented as variables since we don't necessarily know them), and probability is interpreted in frequentist terms (in terms of the statistics of repeated trials) rather than Bayesian terms, and this is exactly what Bell was doing in his proof.
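This frequentist reading can be checked directly. The following is my own quick sketch (not from the original post): simulate the one-red/one-white urn many times, with the "hidden state" of each draw fixed before anyone looks, and count how often the second draw's hidden state is red within the subset of trials where the first draw was observed white.

```python
import random

# Frequentist check of the urn example: one red and one white ball, both
# drawn by the monkey; the hidden state of each draw is fixed at draw time,
# before anyone looks at either hand.
def run_trials(n=100_000):
    saw_white_first = 0
    second_red = 0
    for _ in range(n):
        balls = ["red", "white"]
        random.shuffle(balls)
        first, second = balls          # hidden states, fixed at draw time
        if first == "white":           # restrict to trials where white was seen first
            saw_white_first += 1
            if second == "red":
                second_red += 1
    return second_red / saw_white_first

print(run_trials())  # 1.0 on every run: within that subset the second is always red
```

The conditional frequency is exactly 1 on every run, because the urn's contents (the common cause in the past) predetermine both hidden states.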

Meanwhile, I don't see the authors of the paper you link to arguing that we can only use epistemological Bayesian probabilities rather than objective frequentist probabilities, that appears to be your own original argument which has nothing to do with their own. The fact that they include [tex]\lambda[/tex] in their probability equations, despite the fact that [tex]\lambda[/tex] represents hidden variables whose value we can never actually know, shows this.
mn4j said:
Another example:
Consider the the following proposition
- "A implies B", logically, this implies "not B implies not A".
Now pick any A and B of your choice for which there is a physical causality such that A causes B. It would still hold logically that "not B implies not A", yet it would make no sense to talk of "not B" causing "not A" physically.
Why not? "Causality" just means that one physical fact determines another physical fact according to the laws of physics, and an absence of a certain physical event is still a physical fact about the universe, there's no reason it can't be said to "cause" some other fact.
 
  • #12


ZapperZ, thanks for splitting out the thread.

mn4j,

It is the realism argument that is most important to Bell; your point about conditional probabilities is of little importance to his result.

The mathematical description of realism is:

1 >= P(A, B, C) >= 0

The reason is that A, B and C individually are elements of reality, because they can be predicted in advance in a Bell test. The issue is not whether the measurement somehow distorts the results, it is whether these elements of reality (EPR) exist simultaneously independently of the ACT of observation.

If you believe they do, you are supporting realism. And NO physical theory of local realism (i.e. hidden variables) can reproduce the predictions of QM. If that statement is not true, you can provide a counterexample! But as pointed out earlier, this has befuddled everyone who has tried (so far).

Your urn example makes no sense. There are NO conditionals before any measurement occurs. So if we ask what percentage of humans are men, are Democrats, and/or are college educated, I can make predictions all day long and my answers will always satisfy the realism criterion above. It doesn't matter whether the attributes are causally related somehow or not. And yet when QM is applied, you cannot get around the negative probabilities for some of the subensembles if Malus is followed. You MUST know this already, how can you not?
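To make the realism criterion above concrete, here is a small check of my own (a sketch, not taken from the post): enumerate every joint assignment of three two-valued "elements of reality". In all 8 cases at least one pair agrees, so for ANY probability distribution satisfying 1 >= P(A, B, C) >= 0, the average pairwise agreement cannot fall below 1/3 — which is exactly why the three positions cannot all be alike only 25% of the time.

```python
from itertools import product

# Enumerate all 2^3 joint assignments of three +/-1 "elements of reality"
# and count, in each case, how many of the three pairs (A,B), (B,C), (A,C)
# agree.  The minimum over all assignments is 1, never 0.
min_pairs_matching = min(
    sum(x == y for x, y in [(a, b), (b, c), (a, c)])
    for a, b, c in product([+1, -1], repeat=3)
)
print(min_pairs_matching)  # 1: no assignment has zero agreeing pairs
```

Since every realist assignment has at least one agreeing pair, averaging over any distribution of assignments forces the mean pairwise agreement to be at least 1/3.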
 
Last edited:
  • #13


JesseM said:
But Bell was assuming a realist picture in which any hidden variables that determine results of measurements already have well-defined values before measurement. You can imagine God continuously measuring [tex]\lambda[/tex] as the particles propagate, until after each one is measured by the experimenters, if it helps.

I asked for an example of logical causality where the measured correlation could not also be physically explained either in terms of one measurement influencing the other directly or both measurements being influenced by events in the past. The correlation between the colors of the two balls is completely based on the fact that they were both selected from the same urn.
Exactly! Therefore any local realist theory MUST also consider that measurements at A and B MUST be logically correlated! Bell sets out trying to determine the probability of observing certain events at A based on measurements at B. But in doing so, he fails to incorporate logical dependence. This is the crucial point. And this is why you get the wrong answer for the urn if you calculate using Bell's equation! Because there is no concept of logical dependence in it, even though there must be! And the reason Bell makes this mistake is that he does not clearly separate logical dependence from physical causation.

Meanwhile, I don't see the authors of the paper you link to arguing that we can only use epistemological Bayesian probabilities rather than objective frequentist probabilities, that appears to be your own original argument which has nothing to do with their own. The fact that they include [tex]\lambda[/tex] in their probability equations, despite the fact that [tex]\lambda[/tex] represents hidden variables whose value we can never actually know, shows this.
I am not arguing this at all. I have not even started talking about [tex]\lambda[/tex], because there are other assumptions Bell makes about [tex]\lambda[/tex] that are unfounded. The simplest explanation of what I am saying is that if A and B are determined by a local-realist theory of hidden variables, then the probabilities of events at A MUST be logically dependent on those at B, even if there is no direct physical causation from A to B and vice versa. Therefore, his failure to include logical dependence in his treatment is unjustified.
 
Last edited:
  • #14


mn4j said:
You must still use the correct rules of probability to calculate.

You have it backwards, as usual. Bell is pointing out what actually should have been obvious in retrospect: that subensembles cannot respect Malus as does QM.

1. If you had a group of 100 trials of Alice and Bob at settings of 0 and 45 degrees respectively, then we would expect a coincidence rate of 50%. Now, I ask does this statement somehow violate your definition of proper probability? I certainly hope not...

2. Now, imagine that we have the 4 permutations of the above and choose to subdivide it into another group, the results (which by the realistic definition must exist and be well-defined) of measurements (let's call this Carrie) at 22.5 degrees - i.e. midway between the 0 and 45 degree mark. Surely you already know that the correlation rate between Alice and Carrie must equal the correlation rate between Bob and Carrie. Is that too difficult? Does this statement somehow violate your definition of proper probability? I certainly hope not...

And finally, you must also already know that there is no set of trials possible in which 1. is true, 2. is true, and the Alice-Carrie/Bob-Carrie correlations also match the prediction of QM for 22.5 degrees (for photons), which of course is about 85%.
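The numbers here can be checked in a few lines. The following is my own hedged sketch of the argument: for photons, QM predicts a coincidence rate of cos² of the angle difference, and in any realist assignment a mismatch between Alice and Bob requires a mismatch in at least one of the pairs (Alice, Carrie) or (Carrie, Bob), so the mismatch rates must satisfy a triangle inequality.

```python
import math

# Realist constraint: mismatch(Alice, Bob) <= mismatch(Alice, Carrie)
#                                           + mismatch(Carrie, Bob)
# QM coincidence rate for photons at angle difference d is cos^2(d),
# so the mismatch rate is 1 - cos^2(d).
def mismatch(deg):
    return 1 - math.cos(math.radians(deg)) ** 2

lhs = mismatch(45)                     # Alice (0 deg) vs Bob (45 deg): 0.5
rhs = mismatch(22.5) + mismatch(22.5)  # via Carrie at 22.5 deg: ~0.293
print(lhs, rhs, lhs <= rhs)            # the realist inequality fails
```

Since 0.5 > 0.293, no realist assignment of simultaneous outcomes to all three settings can reproduce the QM rates, which is the point of the subensemble argument above.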

Now, none of this is a problem if you drop the requirement of realism. If you said that Bell's argument was not well-phrased, I would probably agree with you. But there is nothing wrong with the conclusion it leads to.
 
  • #15


mn4j said:
Exactly! Therefore any local realist theory MUST also consider that measurements at A and B MUST be logically correlated! Bell sets out trying to determine the probability of observing certain events at A based on measurements at B. But in doing so, he fails to incorporate logical dependence.
But the only type of "dependence" when you do frequentist calculations is statistical dependence in the objective facts of the matter over many trials, and this statistical dependence must always be explainable in terms of physical causation. For example, when I broke down the urn/ball probabilities like this:

P(first ball picked had hidden state 'white' after being picked | I saw a white ball when I looked at the first pick) = 1
P(second ball picked had hidden state 'red' after being picked | first ball picked had hidden state 'white' after being picked) = 1
P(I see a red ball when I looked at second pick | second ball picked had hidden state 'red' after being picked) = 1

...none of these were subjective probabilities. For example, "P(first ball picked had hidden state 'white' after being picked | I saw a white ball when I looked at the first pick)" can be interpreted in frequentist terms as meaning "if you look at a large number of trials, and then look at only the subset of trials where the event of me seeing a white ball when I looked at the first pick occurred, in what fraction of this subset was it also true that the ball had the hidden state white at the moment it was picked?" And the answer, of course, is 1, the reason being that on any trial where the ball had hidden state white after it was picked, this caused me to predictably see a white ball when I looked at the color.

So you see, when you're dealing with a frequentist notion of probabilities in a realist universe, any statistical correlations must involve physical causation, either one fact causing the other or both facts being caused by some common cause in their past. Do you disagree? If so, please name an example of a situation where we can interpret probabilities in a purely frequentist manner yet there is a statistical correlation that can not be explained in one of these two causal ways.
mn4j said:
This is the crucial point. And this is why you get the wrong answer for the urn if you calculate using bell's equation!
What "Bell's equation" are you talking about? Please write it down, and explain what the variables are supposed to represent in terms of the urn example that you think leads the equation to be wrong.
mnj4 said:
The simplest explanation of what I am saying is that if A and B are determined by a local-realist theory of hidden variables, then the probabilities of events at A MUST be logically dependent on those at B, even if there is no direct physical causation from A to B and vice versa.
Again, when you interpret probabilities in frequentist/realist terms, all statistical correlations (which I guess is what you mean by 'logical dependence') must be explained in terms of physical causes, though the explanation may involve a common cause in the past of two events rather than either one directly influencing the other. And Bell did not assume there'd be no statistical correlation between A and B--the whole point of including [tex]\lambda[/tex] was to show there could be such a correlation in a local realist universe, as long as it was explained by the source creating both particles with correlated hidden variables (a common cause in the past), just like my example of sending balls to Alice and Bob and always making sure one was sent a black ball and the other was sent a white one, so their measurements results would always be opposite (here I play the role of the 'source' which determines the hidden variables of each box that determine the correlations between their observations when they open their respective boxes).

The place that Bell assumed "no statistical correlation" was between the choice made by one experimenter of what spin axis to perform a measurement on, and the hidden variables of the other particle being measured by the other experimenter far away (as well as the other experimenter's choice of what to measure). In a local realist universe, for this assumption to be wrong, there would have to be some common cause(s) in the past of both an experimenter's choices and the particles' hidden variables which predetermined what they would be, and ensured a statistical correlation (so the source would act like it 'knows in advance' what choice the two experimenters would make, and adjusts the probabilities of emitting particles with different hidden variables accordingly).
 
  • #16


JesseM said:
But the only type of "dependence" when you do frequentist calculations is statistical dependence in the objective facts of the matter over many trials,
This makes no sense. If you had the objective facts, you would not need induction. It would be deductive, and you would never have a probability other than 0 or 1. Did you read the article? The Bernoulli urn example is treated in the article. The reason inductive reasoning is used, where probabilities can have values between 0 and 1, is because we don't have all the objective facts. Which means we are trying to make a prediction of what we might see if we make the measurement. Bell did not base his calculations on ANY objective facts; the first experiments trying to test his theorem were done years later. By objective facts I assume you mean real experimental data. Take the example of the urn I gave earlier.

1. You know that there are two balls in the urn, one is red and one is white.
2. You know that the monkey picked one ball first and then another ball second.
3. You are asked to infer what the probability is of the first ball being red before seeing the result of the second ball. Then the second ball is shown to you and you are asked to again infer the probability that the first ball is red.

We both accept that "the second ball is red" has no physically causative effect on the state of the first ball, because it was picked after the first ball. At most, they have a single event in their past which caused them both. Yet, in calculating the probabilities in (3) above, you will not arrive at the correct result if you do not use the right equations, which include logical dependence. Note that the urn example is the simplest example of a hidden variable theory. In this case, you are saying that once the balls have left the urn, the outcome of the experiment is determined and there is no superluminal communication between the monkey's right hand and left hand.

Bell's equation written for this situation is essentially,

P(AB|Z) = P(A|Z)P(B|Z) (see Bell's equation (2), which is the same as Eq. (12) in Jaynes). Remember, the question is "What is the probability that both balls are red?"

Z: The premise that the urn contains two balls, one white and one red.
A: First ball is red
B: Second ball is red

Calculating based on that equation, the probability that both balls are red comes out to 0.5 * 0.5 = 0.25, which is wrong! If you don't believe me, do the experiment 1,000,000 times; you will never observe a red ball in both hands.

However, the correct equation should have been,
P(AB|Z) = P(A|BZ)P(B|Z) = P(B|AZ)P(A|Z) (Equation 15 in Jaynes),
which results in 0 * 0.5 = 0, the correct answer.

As you see, even though we accept that there is no physical causality from the second draw to the first draw, we still must include logical dependence to calculate the probabilities correctly. This means we must have a P(A|B) or P(B|A) term in our equation.
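The two calculations contrasted above can be written out side by side. This is my own illustration of the poster's arithmetic (with A = "first draw red", B = "second draw red", Z = the urn's contents), using exact fractions:

```python
from fractions import Fraction

# One red and one white ball in the urn (premise Z).
P_A_given_Z = Fraction(1, 2)    # P(first draw red | Z)
P_B_given_Z = Fraction(1, 2)    # P(second draw red | Z)
P_A_given_BZ = Fraction(0)      # if the second draw is red, the first cannot be

factorized   = P_A_given_Z * P_B_given_Z     # independence assumed:  1/4
product_rule = P_A_given_BZ * P_B_given_Z    # correct product rule:  0

print(factorized, product_rule)  # 1/4 0
```

The factorized form gives 1/4, while the product rule, which carries the logical dependence of A on B, gives 0, matching what any actual repetition of the draw would show.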
It is just as if proponents of Bell pointed to experiments with real monkeys and two-ball urns and said "since Bell obtained 0.25 instead of 0, and real experiments obtain 0, the experiments must disprove the local reality of the urn and the monkey."

You see, what is moving faster than light is not any physical influence. It is the same logical influence which caused our probability to suddenly change as soon as we knew the result of the second ball.

Again, when you interpret probabilities in frequentist/realist terms, all statistical correlations (which I guess is what you mean by 'logical dependence') must be explained in terms of physical causes, though the explanation may involve a common cause in the past of two events rather than either one directly influencing the other.
You are not reading what I write. I'll use a dramatic example.

"not Dead implies not Executed".
Do you agree with the above?

There is a logical dependence between "not Dead" and "not Executed". If a person is not dead, it MUST follow that the person is not Executed. However, you can not say "not Dead" physically causes "not Executed", otherwise nobody will ever be executed. Logical dependence is not the same as physical causation.

And Bell did not assume there'd be no statistical correlation between A and B--the whole point of including [tex]\lambda[/tex] was to show there could be such a correlation
He must have. Equation (2) in his article, (12) in Jaynes, means just that. The correct way to include logical dependence is in how you set up the equations, not by introducing additional parameters. Adding numbers to the balls in the urn example above, will not change the results if you do not use the right equations.

in a local realist universe, as long as it was explained by the source creating both particles with correlated hidden variables (a common cause in the past), just like my example of sending balls to Alice and Bob and always making sure one was sent a black ball and the other was sent a white one, so their measurements results would always be opposite (here I play the role of the 'source' which determines the hidden variables of each box that determine the correlations between their observations when they open their respective boxes).
Maybe I should ask you a question. If you know the outcome at A, the settings at A, and the settings at B, will you be able to deduce the outcome at B? Isn't this the premise of any hidden variable theory, that the outcome is determined by the values the particles left the source with and the settings at the terminals?
Now explain to me why, when you calculate the probabilities, you do not include logical dependence. By this I mean that the term P(A|B) never occurs in any of Bell's equations.

Just as knowing the result of the first draw should influence the way you calculate the probability of the second draw. Shouldn't it? How is it supposed to influence it if you do not have a P(A|B) term?

The place that Bell assumed "no statistical correlation" was between the choice made by one experimenter of what spin axis to perform a measurement on, and the hidden variables of the other particle being measured by the other experimenter far away (as well as the other experimenter's choice of what to measure). In a local realist universe, for this assumption to be wrong, there would have to be some common cause(s) in the past of both an experimenter's choices and the particles' hidden variables which predetermined what they would be, and ensured a statistical correlation (so the source would act like it 'knows in advance' what choice the two experimenters would make, and adjusts the probabilities of emitting particles with different hidden variables accordingly).
Look at Bell's original article; DrChinese has it on his website. Everything starts from Eq. (2). Clearly from (2) there are only two options:
1- Either Bell did not understand how to apply the rules of probability, or
2- He assumed that knowing the results at B should not influence the way we calculate the probability at A.
Either one is devastating to his result.

There is no other way; if you see it, let me know. The equations speak for themselves!
 
Last edited:
  • #17


mn4j said:
You see, what is moving faster than light is not any physical influence. It is the same logical influence which caused our probability to suddenly change as soon as we knew the result of the second ball.

No one is saying that there is anything moving faster than light.

I think you are missing the bigger picture here. It is almost as if you found a misspelled word and want to throw out the whole work for that reason. I have derived Bell's Theorem a variety of ways, and never do I bother with his "P(AB|Z) = P(A|Z)P(B|Z)" which you say is wrong. There are plenty of things in the original that could be presented differently, and some things I personally think are obscure in the extreme. But the ideas ultimately are fine. Like the EPR paper, he makes a few statements - the references to Bohmian Mechanics, for example - which leave a poor aftertaste.

What is generally accepted as the legacy of the paper is the following conclusion: No physical theory of local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics.

The key here is the realism requirement, which is not explicitly mentioned at all! 1 >= P(A, B, C) >= 0. Einstein would have naturally agreed with this, because he insisted that particle attributes existed independently of the act of observation: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

So I would say that the 2 main assumptions of the paper are: a) realism, which is what the inequality is based upon - and NOT separability, as is often mentioned; and b) locality, because he was trying to point out that there might be some unknown force that can communicate between entangled particles or their measuring apparatus superluminally.

I would like to make it clear that no dBB-type theory I have ever seen explains any mechanism by which Bell test results occur. They always simply state that they generically reproduce QM predictions. Maybe, maybe not. But the mechanism itself is *always* missing.

On the other hand: If you assume locality, then you can still stay with QM as a non-realistic local theory and be done with it. And there is no problem at all. Look at the bigger picture, and you will see why Bell is widely praised and accepted today: Bell tests are the heart of entanglement measures, and are leading to innovative new ways to explore the quantum world. And lo and behold, there are still no local realistic theories which have survived the rigors of both theory and experiment.
 
  • #18


If the equations on which his conclusions are based are wrong, then the conclusions are baseless. Just because you prove it differently does not mean your equations are correct. For example, where in your equations do you account for logical dependence? If you think logical dependence is not necessary, then what you are modeling is not a local-realist system. No doubt the results contradict real experiments.
 
  • #19


mn4j said:
This makes no sense. If you had the objective facts, you would not need induction.
I didn't say anything about you knowing the objective facts. Again, the frequentist idea is to imagine a God's-eye perspective of all the facts, and knowing the causal relations between the facts, figure out what the statistics would look like for a very large number of trials. Then, if you want to know the probability that you will observe Y when you have already observed X, just look at the subset of these large number of trials where one of the facts is "experimenter observes X", and figure out what fraction of these trials would also include the fact "experimenter observed Y".

If you believe there are objective facts in each trial, even if you don't know them, then it should be possible to map any statement about subjective probabilities into a statement about what this imaginary godlike observer would see in the statistics over many trials--do you disagree? For example, suppose there is an urn with two red balls and one white ball, and the experiment on each trial is to pick two balls in succession (without replacing the first one before picking the second), and noting the color of each one. If I open my hand and see that the first one I picked was red, and then I look at the closed fist containing the other and guess if it'll be red or white, do you agree that I should conclude P(second will be white | first was red) = 1/2? If you agree, then it shouldn't be too hard to understand how this can be mapped directly to a statement about the statistics as seen by the imaginary godlike observer. On each trial, this imaginary observer already knows the color of the ball in my fist before I open it, of course. However, if this observer looks at a near-infinite number of trials of this kind, and then looks at the subset of all these trials where I saw that the first ball was red, do you agree that within this subset, on about half these trials it'll be true that the ball in my other hand was white? (and that by the law of large numbers, as the number of trials goes to infinity the ratio should approach precisely 1/2?)

If you agree with both these statements, then it shouldn't be hard to see how any statement about subjective probabilities in an objective universe should be mappable to a statement about the statistics seen by a hypothetical godlike observer in a large number of trials. If you think there could be any exceptions--objectively true statements of probability which cannot be mapped in this way--then please give an example. It would be pretty earth-shattering if you could, because the frequentist interpretation of probabilities is very mainstream, I'm sure you could find explanations of probability in terms of the statistics over many trials in virtually any introductory statistics textbook.
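The two-red/one-white urn above is easy to put in front of the "godlike observer" numerically. Here is my own sketch of that mapping: run many trials, restrict to the subset where the first ball drawn was red, and count how often the second was white.

```python
import random

# Frequentist reading of P(second white | first red) for an urn with
# two red balls and one white ball, drawing two without replacement.
random.seed(0)  # fixed seed for reproducibility
first_red = 0
second_white_given_first_red = 0
for _ in range(200_000):
    urn = ["red", "red", "white"]
    random.shuffle(urn)
    first, second = urn[0], urn[1]   # the two draws, without replacement
    if first == "red":
        first_red += 1
        if second == "white":
            second_white_given_first_red += 1

print(second_white_given_first_red / first_red)  # close to 0.5
```

The conditional frequency converges on 1/2, exactly as the law of large numbers argument in the post predicts.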
mn4j said:
Take the example of the urn I gave earlier.

1. You know that there are two balls in the urn, one is red and one is white.
2. You know that the monkey picked one ball first and then another ball second.
3. You are asked to infer what the probability is of the first ball being red before seeing the result of the second ball. Then the second ball is shown to you and you are asked to again infer the probability that the first ball is red.

We both accept that "the second ball is red" has no physically causative effect on the state of the first ball, because it was picked after the first ball. At most, they have a single event in their past which caused them both.
Exactly, there was an event in their past which predetermined what the color of both the first and second ball would be (the event of the first ball being picked from the urn containing only a white and red ball). Don't you remember that this was exactly my point, that in a realist universe any statistical correlation between events must be explainable either in terms of one event causing the other or in terms of a common cause (or set of causes) in their common past? I asked if you had any counterexamples to this general statement about statistical correlations, the urn example certainly isn't one.
mn4j said:
Yet, in calculating the probabilities in (3) above, you will not arrive at the correct result if you do not use the right equations which include logical dependence.
What exactly do you mean by "logical dependence"? The probabilities can of course be calculated in the same frequentist manner as I discussed above--if you imagine a large number of trials of this type, it's certainly true that on the subset of trials where the first ball picked was white, the second ball was always red in 100% of this subset, and likewise in the subset of trials where the first ball picked was red, the second ball was always white in 100% of this subset.
mn4j said:
Bell's equation written for this situation is essentially,

P(AB|Z) = P(A|Z)P(B|Z) ( see Bell's equation (2) which is the same as eq 12 in Jaynes )
You are distorting Bell's claims again. He does not claim that as some sort of general rule, P(AB|Z) = P(A|Z)P(B|Z) for any arbitrary observations or facts A, B, and Z. Instead, he says that for the specific case where a and b represent the events of some experimenter's choices of what variable to measure on a given trial, we can assume that these choices are really "free" and were not predetermined by some common cause in the past which also determined the state of the hidden variables [tex]\lambda[/tex]. And thus, in this specific case with a and b having that specific meaning, we can write the equality in equation (14) from the Jaynes paper you referenced:

[tex]P(AB | ab\lambda ) = P(A | a \lambda) P(B | b \lambda)[/tex]

Jaynes does not disagree that this equation is correct if you make the assumption about a and b not being predetermined by factors that also determined [tex]\lambda[/tex], that's why he prefaces that equation by saying "But if we grant that knowledge of the experimenters' free choices (a,b) would give us no information about [tex]\lambda[/tex]". If you want to question the assumption of "free choice" (which just means choices not determined by factors which also determined the hidden variables produced by the source on a given trial, they might be determined by other complex factors in the experimenter's brains prior to the choice), then go ahead, this is a known loophole in the proof of Bell's theorem. But don't act like Bell was making some very broad statement about probability that would be true regardless of what events/observations the symbols a and b are supposed to represent.
mn4j said:
Remember the question is "What is the probability that both balls are red"?

Z: The premise that the urn contains two balls, one white and one red.
A: First ball is red
B: Second ball is red

Calculating based on that equation, the probability that both of those balls is red results in 0.5 * 0.5 or 0.25! Which is wrong!
And this is a strawman, since Bell never suggested such a broad equation that was supposed to work regardless of what the symbols represent. Try to think of an experiment where the symbol a represents the free choice of experimenter #1 of what measurement to perform (like which of the three boxes on the lotto card to scratch in my example), and b represents the free choice of experimenter #2 at some distant location (such that no signal moving at the speed of light can cross from the event of experimenter #1 making his choice/measurement to the event of experimenter #2 making his choice/measurement), and A represents the outcome seen by #1 while B represents the outcome seen by #2, and [tex]\lambda[/tex] represents some factors in the systems being measured that determine (or influence in a statistical way) what outcome each sees when they perform their measurement. With the symbols having this specific meaning, can you think of an experiment in a local realist universe where the equation

[tex]P(AB | ab\lambda ) = P(A | a \lambda) P(B | b \lambda)[/tex]

would not work?
mn4j said:
As you see, even though we accept that there is no physical causality from the second draw to the first draw, we still must include logical dependence to calculate the probabilities correctly.
Um, have you been ignoring my point all along that in a realist universe, statistical correlations between events are always either due to one event influencing the other or events in their common past which influenced (or predetermined) both? I don't think I was very subtle about the idea that there were two options here. You seem to have simply ignored my second option, which is a little suspicious because it's precisely the one that applies to the case of the second ball drawn from the urn (whose color is predetermined by the event of the first ball being picked from the urn, since the urn only contained two balls to begin with).
JesseM said:
Again, when you interpret probabilities in frequentist/realist terms, all statistical correlations (which I guess is what you mean by 'logical dependence') must be explained in terms of physical causes, though the explanation may involve a common cause in the past of two events rather than either one directly influencing the other.
mn4j said:
You are not reading what I write. I'll use a dramatic example.

"not Dead implies not Executed".
Do you agree with the above?

There is a logical dependence between "not Dead" and "not Executed". If a person is not dead, it MUST follow that the person is not Executed. However, you can not say "not Dead" physically causes "not Executed", otherwise nobody will ever be executed. Logical dependence is not the same as physical causation.
And you are not reading what I write, because I already explained that your overly narrow definition of "physical causation" is different from what I mean by the term. Read the end of post #11 again:
Why not? "Causality" just means that one physical fact determines another physical fact according to the laws of physics, and an absence of a certain physical event is still a physical fact about the universe, there's no reason it can't be said to "cause" some other fact.
Both "the prisoner is not dead" and "the prisoner was not executed" are physical facts which would be known by a hypothetical godlike being that knows every physical fact about every situation, and if this being looked at a large sample of prisoners, he'd find that for everyone to whom "not dead" currently applies, it is also true that "not executed" applies to their past history. So, according to my broad definition of "cause", it is certainly true that "not executed" is a necessary (but not sufficient) cause for the fact of being "not dead".
JesseM said:
And Bell did not assume there'd be no statistical correlation between A and B--the whole point of including [tex]\lambda[/tex] was to show there could be such a correlation
mn4j said:
He must have. Equation (2) in his article, (12) in Jaynes, means just that.
Wow, you have really missed the most basic idea of the proof. No, of course (12) in Jaynes doesn't mean A and B are independent of [tex]\lambda[/tex], where could you possibly have gotten that idea? The equation explicitly includes the terms [tex]P(A | a \lambda)[/tex] and [tex]P(B | b \lambda)[/tex], that would make no sense if A and B were independent of [tex]\lambda[/tex]! What equation (12) does show is that Bell was assuming A was independent of b, and B was independent of a. No other independence is implied, if you think it is you really need to work on your ability to read statistics equations.
JesseM said:
in a local realist universe, as long as it was explained by the source creating both particles with correlated hidden variables (a common cause in the past), just like my example of sending balls to Alice and Bob and always making sure one was sent a black ball and the other was sent a white one, so their measurements results would always be opposite (here I play the role of the 'source' which determines the hidden variables of each box that determine the correlations between their observations when they open their respective boxes).
mn4j said:
Maybe I should ask you a question. If you know the outcome at A, the settings at A and the settings at B, will you be able to deduce the outcome at B? Isn't this the premise of any hidden variable theory, that the outcome is determined by the values they left the source with and the settings at the terminals?
No, of course it isn't--you've completely left out the hidden variables here! A hidden-variables theory just says that the outcome A seen by experimenter #1 is determined by experimenter #1's choice of measurement a (like the choice of which box to scratch in my lotto card analogy in post #3 on this thread) combined with the hidden variables [tex]\lambda_1[/tex] associated with the system experimenter #1 is measuring (like the preexisting hidden fruits behind each box on the lotto card). Likewise, the outcome B seen by experimenter #2 is determined by experimenter #2's choice of measurement b along with the hidden variables [tex]\lambda_2[/tex] associated with the system experimenter #2 is measuring. If both experimenters always get the same result on trials where they both choose the same measurement, that must mean that the hidden variables associated with each system must predetermine the same outcome to any possible measurement, as long as you assume the source that's "preparing" the hidden variables on each trial has no foreknowledge of what the experimenters will choose (if it did have such foreknowledge, then it might only predetermine the same outcome to the same measurement on trials where the experimenters were, in fact, going to choose the same measurement to make).
mn4j said:
Just like knowing the result of the first draw should influence the way you calculate the probability of the second draw. Shouldn't it? How is it supposed to influence it if you do not have a P(A|B) term?
Because the correlation seen between results A and B is assumed to be purely a result of the hidden variables the source associated with each particle--a common cause in the past (again, in a local realist universe all correlations are understood either as direct causal relations or a result of common causes in the past, and A and B are supposed to have a spacelike separation which rules out a direct causal relation in a local realist universe). As long as you include a term for the dependence of both A and B on the hidden variables, there's no need for a separate term for the statistical correlation between A and B. Similarly, if I have an urn containing two reds and one white, and the first ball I pick is red, I can write the equation P(second ball seen to be white | first ball seen to be red) = 1/2; but if I explicitly include a term for the "hidden variables" associated with what's left in the urn on each pick, I can just rewrite this as:

P(second ball seen to be white | after first pick but prior to examination, first ball had 'hidden state' red and urn had 'hidden state' one red, one white) = 1

and

P(first ball seen to be red | after first pick but prior to examination, first ball had 'hidden state' red and urn had 'hidden state' one red, one white) = 1

which together with

P(after first pick but prior to examination, first ball had 'hidden state' red and urn had 'hidden state' one red, one white) = 1/2

imply the statement P(second ball seen to be white | first ball seen to be red) = 1/2. And in general we can write the equation:

P(first ball seen to be red, second ball seen to be white) = [SUM OVER ALL POSSIBLE HIDDEN STATES X FOR URN + FIRST BALL AFTER FIRST PICK] P(first ball seen to be red | urn + first ball in hidden state X)*P(second ball seen to be white | urn + first ball in hidden state X)*P(urn + first ball in hidden state X)

...This is directly analogous to equation (12) from the Jaynes paper which you've told me is the same as (2) from Bell's paper.
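The urn arithmetic above can be checked exactly by summing over the two possible hidden states (a small sketch using exact fractions; the labels H1/H2 are just my shorthand for the two states):

```python
from fractions import Fraction

# Urn: two red balls, one white; two picks without replacement.
# H = hidden state after the first pick but before anyone looks:
# H1: first ball red, urn holds red + white  -> P(H1) = 2/3
# H2: first ball white, urn holds red + red  -> P(H2) = 1/3
P_H = {"H1": Fraction(2, 3), "H2": Fraction(1, 3)}
P_first_red    = {"H1": Fraction(1),    "H2": Fraction(0)}
P_second_white = {"H1": Fraction(1, 2), "H2": Fraction(0)}

# P(first red, second white) = sum over H of P(A|H) * P(B|H) * P(H)
p_AB = sum(P_first_red[h] * P_second_white[h] * P_H[h] for h in P_H)
p_A  = sum(P_first_red[h] * P_H[h] for h in P_H)
p_B_given_A = p_AB / p_A   # recovers P(second white | first red)
```

The sum over hidden states gives p_AB = 1/3 and p_B_given_A = 1/2, with no P(A|B) term appearing anywhere in the calculation.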
mn4j said:
Look at Bell's original article, DrChinese has it on his website. Everything starts from Eq. (2). Clearly from (2) there are only two options:
1- Either Bell did not understand how to apply probability rules, or
2- He assumed that knowing the results at B should not influence the way we calculate the probability at A.
Either one is devastating to his result.
Nope, see above, in a local realist universe any correlation between A and B should be determined by the hidden state of each particle given to them by the source at the event of their common creation, so there is no need to include P(A | B) as a separate term, just like there's no need in my equation above for the urn as long as you include the hidden state of the urn + first ball after the first pick.
 
Last edited:
  • #20


JesseM said:
Nope, see above, in a local realist universe any correlation between A and B should be determined by the hidden state of each particle given to them by the source at the event of their common creation, so there is no need to include P(A | B) as a separate term, just like there's no need in my equation above for the urn as long as you include the hidden state of the urn + first ball after the first pick.

Read up on probability theory, Jesse; what you say makes no sense. If you assume that there is a correlation between the two particles when they left the source, which you must if you are trying to model a local realist theory, then you MUST also assume that there is logical dependence between the probability of the measurement at A and that at B. Bell did not do that. Remember Bell was trying to model a hidden variable theory.

JesseM said:
For example, suppose there is an urn with two red balls and one white ball, and the experiment on each trial is to pick two balls in succession (without replacing the first one before picking the second), and noting the color of each one. If I open my hand and see that the first one I picked was red, and then I look at the closed fist containing the other and guess if it'll be red or white, do you agree that I should conclude P(second will be white | first was red) = 1/2?

Write down the equations you used to arrive at your answers. Then we can talk.
 
  • #21


JesseM said:
If you agree with both these statements, then it shouldn't be hard to see how any statement about subjective probabilities in an objective universe should be mappable to a statement about the statistics seen by a hypothetical godlike observer in a large number of trials.
Again you apparently have a limited view of what probability means.
If you think there could be any exceptions--objectively true statements of probability which cannot be mapped in this way--then please give an example. It would be pretty earth-shattering if you could, because the frequentist interpretation of probabilities is very mainstream, I'm sure you could find explanations of probability in terms of the statistics over many trials in virtually any introductory statistics textbook.
I already did. What is the common event in the past of "not executed" and "not dead" which caused both? Yet I can assign a probability to P("not dead"|"not executed").

You are arguing against yourself. In a hidden variable local realist theory, there was also an event in their past which predetermined the result of the measurement. So their probabilities should be dependent and you should have a p(A|B) term.

What exactly do you mean by "logical dependence"? The probabilities can of course be calculated in the same frequentist manner as I discussed above--if you imagine a large number of trials of this type, it's certainly true that on the subset of trials where the first ball picked was white, the second ball was always red in 100% of this subset, and likewise in the subset of trials where the first ball picked was red, the second ball was always white in 100% of this subset.

Logical dependence is probably the first thing you learn in any philosophy or probability class. I have already explained in this thread what it means. Nobody said anything about not being able to calculate probabilities in a frequentist manner. The monkey did not perform the experiment 100 times; he did it once. Are you trying to say that in such a case probabilities cannot be assigned? Probability means much more than frequencies!

You are distorting Bell's claims again. He does not claim that as some sort of general rule, P(AB|Z) = P(A|Z)P(B|Z) for any arbitrary observations or facts A, B, and Z.
No, I am not. If Bell's equation (2) is not a general rule, where does he get it from? What principle of mathematics or logic permits him to write down that equation the way he did?
Instead, he says that for the specific case where a and b represent the events of some experimenter's choices of what variable to measure on a given trial, we can assume that these choices are really "free" and were not predetermined by some common cause in the past which also determined the state of the hidden variables [tex]\lambda[/tex]. And thus, in this specific case with a and b having that specific meaning, we can write the equality in equation (14) from the Jaynes paper you referenced:

[tex]P(AB | ab\lambda ) = P(A | a \lambda) P(B | b \lambda)[/tex]

Jaynes does not disagree that this equation is correct if you make the assumption about a and b not being predetermined by factors that also determined [tex]\lambda[/tex], that's why he prefaces that equation by saying "But if we grant that knowledge of the experimenters' free choices (a,b) would give us no information about [tex]\lambda[/tex]".
You are focusing on the wrong thing; that is why you are not understanding Jaynes. If knowledge of the experimenters' free choices (a,b) gives us information about [tex]\lambda[/tex], then you must have a [tex] P(\lambda|ab)[/tex] term in the equation. Jaynes can grant him that because he is focused on how knowledge of the RESULT at A depends on knowledge of the RESULT at B. That is why in the correct equation (15), you have a P(B|A) term or a P(A|B) term. Please read the article again carefully to see this.

If you want to question the assumption of "free choice" (which just means choices not determined by factors which also determined the hidden variables produced by the source on a given trial, they might be determined by other complex factors in the experimenter's brains prior to the choice), then go ahead, this is a known loophole in the proof of Bell's theorem. But don't act like Bell was making some very broad statement about probability that would be true regardless of what events/observations the symbols a and b are supposed to represent.
As pointed out above you are focusing on the wrong thing.

Both "the prisoner is not dead" and "the prisoner was not executed" are physical facts which would be known by a hypothetical godlike being that knows every physical fact about every situation, and if this being looked at a large sample of prisoners, he'd find that for everyone to whom "not dead" currently applies, it is also true that "not executed" applies to their past history. So, according to my broad definition of "cause", it is certainly true that "not executed" is a necessary (but not sufficient) cause for the fact of being "not dead".
As I suspected, your statement above tells me you do not know the difference between logical dependence and physical dependence.

What equation (12) does show is that Bell was assuming A was independent of b, and B was independent of a. No other independence is implied, if you think it is you really need to work on your ability to read statistics equations.
This is wrong. While equation (12) shows that KNOWLEDGE of A is independent of KNOWLEDGE of b, it also shows that KNOWLEDGE of A is independent of KNOWLEDGE of B. According to the rules of probability it MUST be so. Read up on the "product rule": P(AB)=P(A|B)P(B).

If a source releases two electrons, one spin up and one spin down, and we measure one at A and find it to be up, then we can predict with certainty that the one at B will be down. Now try to write down the equations for this exact, very simple two-electron case, without any settings at the stations and a single hidden variable (spin). If A is not logically dependent on B and vice versa, how come when you do not know either outcome, P(A = up) = 1/2 and P(B = up) = 1/2, but when you know the outcome at A is "up", P(A = up) = 1.0 and P(B = up) = 0? There is only one set of equations that will give you these results consistently and that is the product rule. Bell's equation (2) is not valid. Unless you are trying to say the example is not local realist.


Because the correlation seen between results A and B is assumed to be purely a result of the hidden variables the source associated with each particle--a common cause in the past (again, in a local realist universe all correlations are understood either as direct causal relations or a result of common causes in the past
So you admit that A and B should be logically dependent. Do you agree that KNOWING A gives you more information about B?
 
  • #22


Z: The premise that the urn contains three balls, one white and two red. Two balls are picked in succession without replacement.
A: First ball is red
B: Second ball is red

The correct equation is
p(AB|Z) = p(A|B)p(A|Z)

from Z we can calculate p(A|Z) = [number of red balls]/[total number of balls] = 2/3
p(A|B) = [number of red balls - 1 red ball]/[total number of balls - 1 red ball] = 1/2.

therefore p(AB|Z) = 2/3 * 1/2 = 1/3

Let's add another premise:

C: second ball is white.
The correct equation is p(AC|Z) = p(A|C)p(A|Z), notice that the equation is exactly the same except for the meaning of the terms. The rules of probability do not change on a case by case basis.

p(A|Z) = [number of red balls]/[total number of balls] = 2/3
p(A|C) = [number of red balls]/[total number of balls - 1 white ball] = 2/2 = 1.0. You don't subtract 1 white ball from the numerator because taking out a white ball does not change the number of red balls still in consideration.

therefore p(AC|Z) = 2/3 * 1.0 = 2/3.

Do you disagree with these equations?
 
  • #23


mn4j said:
If the equations on which his conclusions are based are wrong then the conclusions are baseless. Just because you prove it differently does not mean your equations are correct. For example, where in your equations do you account for logical dependence? If you think logical dependence is not necessary, then what you are modeling is not a local-realist system. No doubt the results contradict real experiments.

You really aren't saying anything meaningful. It should be glaringly obvious, once you have seen Bell's Theorem, that you cannot model a local realistic (HV) photon to operate in a manner consistent with experiment (cos^2(theta)). You can choose to find your way to that position any of a number of ways. It's sort of like discovering there is no Easter Bunny. Once you learn, you can't go back.

And it is not logical dependence that needs to be considered; it is the independence of an observation at one point with an observation at another spacelike separated point. Really, what is so difficult about that? What we are talking about observing is the "DNA" of twin photons, which clearly act in an entangled fashion - in opposition to the local realistic model. Are you trying to be obtuse? As said before, you are completely missing the big picture by your focus on meaningless semantics. If you don't like Bell's method of representation, simply change it so the effect (and conclusion) is the same. It's not THAT hard because a jillion (perhaps a slight exaggeration) others have already passed down that road.

Don't throw the baby out with the bathwater. You can solve this "issue" yourself. Remind yourself that Bell was essentially a reply to EPR. See Einstein's quoted words above (which of course were pre-Bell). Were Einstein alive, he would be forced to concede the point. And he certainly wouldn't be mumbling something about logical vs. physical dependence.
 
  • #24


mn4j said:
Unless you are trying to say the example is not local realist.

Your example is ridiculous. The perfect correlation argument was known in 1935, and has never been an issue as this must appear first in HV models. Model the Bell inequality violation in a local realist fashion and there might be something to discuss. (QM already predicts as experiment finds.) There are new models being put forth every week, in fact I believe you have referenced one such previously. Typically, they try to show that photon visibility issues lead to S>2 [CHSH inequality form] but fall apart with ever-increasing experimental precision.

JesseM, I am bowing out as it appears that our friend is stuck in a meaningless philosophical time warp. Best of luck!
 
  • #25


mn4j said:
Z: The premise that the urn contains three balls, one white and two red. Two balls are picked in succession without replacement.
A: First ball is red
B: Second ball is red

The correct equation is
p(AB|Z) = p(A|B)p(A|Z)
Shouldn't that be p(AB|Z) = p(B|A)p(A|Z)? In other words, you want to find the probability of A given Z, then the probability of B given A.
mn4j said:
from Z we can calculate p(A|Z) = [number of red balls]/[total number of balls] = 2/3
p(A|B) = [number of red balls - 1 red ball]/[total number of balls - 1 red ball] = 1/2.

therefore p(AB|Z) = 2/3 * 1/2 = 1/3
Of course p(B|A) is also 1/2, so the error I noted above gives the same final answer for p(AB|Z).
mn4j said:
Let's add another premise:

C: second ball is white.
The correct equation is p(AC|Z) = p(A|C)p(A|Z), notice that the equation is exactly the same except for the meaning of the terms. The rules of probability do not change on a case by case basis.
Again, the correct equation should actually be p(AC|Z) = p(C|A)p(A|Z)
mn4j said:
p(A|Z) = [number of red balls]/[total number of balls] = 2/3
p(A|C) = [number of red balls]/[total number of balls - 1 white ball] = 2/2 = 1.0.
P(C|A) is the correct term to use in the equation, and it's 1/2.
mn4j said:
You don't subtract 1 white ball from the numerator because taking out a white ball does not change the number of red balls still in consideration.

therefore p(AC|Z) = 2/3 * 1.0 = 2/3.
With the correct equation, you'd have p(AC|Z) = 2/3*1/2 = 1/3. Think about it and you'll see this is correct--if you do a lot of trials, on 2/3 of the total trials the first ball will be red (A), and in 1/2 of the subset of trials where the first ball was red the second will be white (C), so on 2/3*1/2 = 1/3 of all trials you'll find that the first was red while the second is white.
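If in doubt, this arithmetic is easy to check by brute force with a quick simulation (a sketch only; the sample size and seed are arbitrary choices of mine):

```python
import random

random.seed(1)

# Urn: two red, one white; draw two without replacement, many trials.
N = 100_000
n_first_red = n_red_then_white = 0
for _ in range(N):
    urn = ["red", "red", "white"]
    random.shuffle(urn)
    if urn[0] == "red":               # A: first ball red
        n_first_red += 1
        if urn[1] == "white":         # C: second ball white
            n_red_then_white += 1

p_AC = n_red_then_white / N                   # ~ 2/3 * 1/2 = 1/3
p_C_given_A = n_red_then_white / n_first_red  # ~ 1/2
```

The frequencies come out near 1/3 for p(AC|Z) and 1/2 for p(C|A), matching the corrected equation rather than 2/3.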
mn4j said:
Do you disagree with these equations?
See my corrections. But note that even the corrected equations I gave are not the only possible correct equations according to the rules of probability. In particular, the equations above say nothing about the hidden state of the first ball and the urn after the first pick, which in causal terms explains all correlations between the observed color of the first and second balls, and which is analogous to the hidden variables in any local realist explanation for the correlations seen in entanglement experiments.

Let H = Hidden color of first ball after first pick is made + hidden colors of other balls in urn after first pick
A = first ball is red
B = second ball is white

Then we can write P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H)

Likewise, if we let C = second ball is red, then we can write:

P(AC) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(C|H)*P(H)

Do you disagree with these equations, which are directly analogous to equation (12) in Jaynes' paper? (just substitute [tex]\lambda[/tex] for H, and substitute an integral over [tex]\lambda[/tex] for a sum over H, and add terms for the experimenter's choices of what to measure which don't matter in the ball/urn scenario, although you could think of another classical example where the experimenter can make a choice like my scratch lotto card example)

The "sum over all possible H" is pretty simple in a case with only three balls in the urn initially, since the only possible states for H are:

H1: first ball has hidden color red, urn contains two balls with hidden colors red and white
H2: first ball has hidden color white, urn contains two balls with hidden colors red and red

So the first equation is really just P(AB) = P(A|H1)*P(B|H1)*P(H1) + P(A|H2)*P(B|H2)*P(H2)
and the second equation is P(AC) = P(A|H1)*P(C|H1)*P(H1) + P(A|H2)*P(C|H2)*P(H2)

P(H1) = 2/3 and P(H2) = 1/3. Meanwhile P(A|H1) = 1, P(B|H1) = 1/2, and P(A|H2) = 0 and P(B|H2) = 0. Likewise P(A|H1) = 1, P(C|H1) = 1/2, and P(A|H2) = 0 and P(C|H2) = 1.

So, we have P(AB) = 1 * 1/2 * 2/3 + 0 * 0 * 1/3 = 1/3. Likewise, P(AC) = 1/3.

Any disagreement with the above?
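The enumeration above can be written out in a few lines with exact fractions (labels H1/H2 as in the text):

```python
from fractions import Fraction as F

# Urn starts with two red, one white; H is the hidden state after pick 1.
# H1: first ball hidden red,  urn left with red + white -> P = 2/3
# H2: first ball hidden white, urn left with red + red  -> P = 1/3
P_H = {"H1": F(2, 3), "H2": F(1, 3)}
P_A = {"H1": F(1),    "H2": F(0)}   # A: first ball seen red
P_B = {"H1": F(1, 2), "H2": F(0)}   # B: second ball seen white
P_C = {"H1": F(1, 2), "H2": F(1)}   # C: second ball seen red

# P(XY) = sum over H of P(X|H) * P(Y|H) * P(H) -- no P(A|B) term anywhere
P_AB = sum(P_A[h] * P_B[h] * P_H[h] for h in P_H)
P_AC = sum(P_A[h] * P_C[h] * P_H[h] for h in P_H)
```

Both sums come out to exactly 1/3, even though the only terms in them condition on the hidden state H.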
 
Last edited:
  • #26


mn4j said:
Again you apparently have a limited view of what probability means.
I don't say that the frequentist way of interpreting probabilities is the only way to interpret them, just that it is a way. If you refuse to think in frequentist terms, it is you who has the more limited view. Do you disagree that any correct statement about subjective probabilities can be mapped onto a statement about the statistics of a large number of trials? If you do disagree, give me a single example of a correct statement about subjective probabilities that can't be mapped in this way.
JesseM said:
If you think there could be any exceptions--objectively true statements of probability which cannot be mapped in this way--then please give an example. It would be pretty earth-shattering if you could, because the frequentist interpretation of probabilities is very mainstream, I'm sure you could find explanations of probability in terms of the statistics over many trials in virtually any introductory statistics textbook.
mn4j said:
I already did. What is the common event in the past of "not executed" and "not dead" which caused both? Yet I can assign a probability to P("not dead"|"not executed").
First of all, you're confusing my request for a counterexample to the claim "every correct probability claim can be turned into a statement about the statistics that would be expected over many trials (whether or not many trials are actually performed)" with my request for a counterexample to the claim "every correlation can be explained by either one event causing the other or both events having a common cause". I was asking about the first one in the question above, yet you appear to be talking about causality in your response.

Second, your prisoner example is not a counterexample to either. It's easy to turn it into a statement about the statistics of a large sample of prisoners--for every prisoner, we can check if "not dead" applies to that prisoner currently, and we can also check if "not executed" applies to that prisoner's history. In frequentist terms, P(not dead|not executed) just means looking at the subset of prisoners who were not executed, and seeing what fraction are not dead. As for causality, an event A can be defined as a cause of B if A is in the past of B, if the occurrence of A has some positive effect on the probability of B over a large number of trials, and if this positive effect on the probability cannot be entirely explained in terms of some other event(s) C in the past of both A and B. In this sense, "not executed" is a direct cause of "not dead", and I said that in a local realist universe any correlation can either be explained as one event directly causing the other or both events having a common cause.
mn4j said:
You are arguing against yourself. In a hidden variable local realist theory, there was also an event in their past which predetermined the result of the measurement. So their probabilities should be dependent and you should have a p(A|B) term.
Nope, you just haven't thought things through very well. See my equation involving the urn above:

P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H)

This equation gives the correct probabilities for P(AB), where A represents the observed color on the first pick and B represents the color on the second pick, and H represents the hidden state of the ball on the first pick before it's examined, along with the hidden state of all the other balls in the urn immediately after the first ball is removed. Note that there is no P(A|B) term.
mn4j said:
Logical dependence is probably the first thing you learn in any philosophy or probability class. I have already explained in this thread what it means. Nobody said anything about not being able to calculate probabilities in a frequentist manner. The monkey did not perform the experiment 100 times. He did it once. Are you trying to say in such a case probabilities can not be assigned?
No, of course not, the large number of trials need not be actually realized as long as we can figure out logically what the statistics of a large number of trials would be if we (hypothetically) repeated the experiment many times.
mn4j said:
No I am not. If Bell's equation (2) is not a general rule, where does he get it from. What principle of mathematics or logic permits him to write down that equation the way he did??
Again, Bell's equation (2) is almost identical to my equation:

P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H)

And this equation does not work generally, but it does work in the specific case I was describing where any correlation between A and B is fully explainable in terms of the hidden variables H. Do you see any P(A|B) terms in there?
mn4j said:
You are focussing on the wrong thing that is why you are not understanding Jaynes. If knowledge of the experimenters' free choices (a,b) gives us information about [tex]\lambda[/tex], then you must have a [tex] P(\lambda|ab)[/tex] term in the equation.
His "fundamentally correct" equation (13) does have such a term. But I think I see your point, Jaynes is just saying that the fact that [tex]\lambda[/tex] is independent of a,b allows us to substitute [tex]p(\lambda)[/tex] in for [tex]p(\lambda | ab)[/tex] in (13), but that still does not justify (12) without the additional substitution of (14), and it is this that Jaynes disagrees with, saying the correct substitution would instead look like (15). Is that what you're saying? If so, I think you're right, I have misunderstood what Jaynes is arguing somewhat. However, I disagree with his point. Again, in a local realist universe any statistical correlation must be explainable in terms of events either influencing one another or being mutually influenced by events in their common past. To the extent that there is a correlation between A and B--what you call "logical dependence"--it cannot be due to any direct influence of outcome A on outcome B or vice versa, since there is a spacelike separation between them, so it must be explained by both being determined (or influenced) by events in their common past, and this influence is entirely expressed in terms of the "hidden variables" [tex]\lambda[/tex] associated with both particles, imparted to them by the source that created them.

It's exactly like the example where I send a ball to Alice and a ball to Bob, and I make sure that they always receive opposite-colored balls--here there is also a logical dependence between what Bob sees when he opens his box and what Alice sees when she opens hers, but this is explained entirely by the fact that I always prepare the "hidden states" of each box (i.e. the hidden color of the ball inside before the box is opened) in such a way that they will yield opposite results. If you explicitly include the hidden state in your equation, there is no need to include any additional terms of the form p(A|B). 
You can also see this in the urn example, where the "hidden state" of the first ball and the remaining balls in the urn after the first pick is made is what explains any correlations between the observation of the color of the first ball and the observation of the color of the second ball...there were no terms of the form p(A|B) in my equation:

P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H)

...and they weren't needed, for exactly this reason.
JesseM said:
Both "the prisoner is not dead" and "the prisoner was not executed" are physical facts which would be known by a hypothetical godlike being that knows every physical fact about every situation, and if this being looked at a large sample of prisoners, he'd find that for everyone to whom "not dead" currently applies, it is also true that "not executed" applies to their past history. So, according to my broad definition of "cause", it is certainly true that "not executed" is a necessary (but not sufficient) cause for the fact of being "not dead".
mn4j said:
As I suspected, your statement above tells me you do not know the difference between logical dependence and physical dependence.
Or maybe I am just defining my terms a little differently from you. Do you think my way of defining things is unreasonable? Is there not a causal relation between the fact that the prisoner wasn't executed and the fact that he's not dead?

Even if you prefer to define this as "logical dependence", all that would mean is that I'd have to slightly modify my claim about correlations in a local realist universe. Leaving aside all talk of physical causality, I could say that any time two variables A and B have a correlation, then it must be true either that A lies in the past light cone of B or vice versa, or it could be true that there is some other event or events C which lie in the overlap of the past light cones of A and B, such that the correlation between A and B is fully explained by C (i.e. if you know the value of C on a given trial, and you know the logical dependence between A and C as well as the logical dependence between B and C, then this fully determines any correlations between A and B). Do you disagree that something like this must be true in a local realist universe? If you disagree, try to provide a specific counterexample, and I'll try to explain what the C is in the past light cone of the correlated events A and B that you provide which fully determines how they are correlated (assuming there is a spacelike separation between A and B; if not, then no such past event is necessary according to my claim above).
mn4j said:
This is wrong. While equation (12) shows that KNOWLEDGE of A is independent of KNOWLEDGE of b, it also shows that KNOWLEDGE of A is independent of KNOWLEDGE of B. According to the rules of probability it MUST be so. Read up on the "product rule": P(AB)=P(A|B)P(B).
No, it just says that any dependence between your knowledge of A and B must be fully explainable in terms of what knowledge of A tells you about [tex]\lambda[/tex], which in turn can inform your knowledge of B. Once again, there is certainly some correlation between A and B in the urn example, but since this correlation is entirely explainable in terms of the hidden state H after the first ball was picked, the equation P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H) works just fine despite lacking any P(A|B) terms.
mn4j said:
If a source releases two electrons, one spin up and one spin down, and we measure one at A and find it to be up, then we can predict with certainty that the one at B will be down. Now try to write down the equations for this exact, very simple two-electron case, without any settings at the stations and a single hidden variable (spin). If A is not logically dependent on B and vice versa, how come when you do not know either outcome, P(A) = 1/2 and P(B) = 1/2, but when you know the outcome at A is "up", P(A) = 1.0 and P(B) = 0? There is only one set of equations that will give you these results consistently and that is the product rule. Bell's equation (2) is not valid. Unless you are trying to say the example is not local realist.
No, this example works fine in a local realist universe. You can assume that immediately on being created, the electrons have either hidden state H1 = "electron going in direction of A is up, electron going in direction of B is down", or hidden state H2 = "electron going in direction of A is down, electron going in direction of B is up". In this case we can say that P(AB) = P(A|H1)*P(B|H1)*P(H1) + P(A|H2)*P(B|H2)*P(H2). From this you will be able to show that for A=up and B=down, P(AB)=1/2, and likewise for A=down and B=up, whereas for A=up and B=up you have P(AB)=0 and likewise for A=down and B=down. The correlation between A and B is fully explained by their dependence on the hidden state, there's no need to include any P(A|B) terms in the equation.
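This two-state sum can be evaluated mechanically. A minimal Python sketch (the dictionaries of conditional probabilities are my own encoding of the H1/H2 states described above):

```python
# Hidden states for the two-electron example:
# H1: electron at A is up, electron at B is down; H2: the reverse.
# Each hidden state fixes both outcomes deterministically.
P_H = {"H1": 0.5, "H2": 0.5}
P_A_given_H = {"H1": {"up": 1.0, "down": 0.0},
               "H2": {"up": 0.0, "down": 1.0}}
P_B_given_H = {"H1": {"up": 0.0, "down": 1.0},
               "H2": {"up": 1.0, "down": 0.0}}

def P_AB(a, b):
    """P(A=a, B=b) = sum over H of P(A=a|H)*P(B=b|H)*P(H)."""
    return sum(P_A_given_H[h][a] * P_B_given_H[h][b] * P_H[h] for h in P_H)

print(P_AB("up", "down"))  # 0.5
print(P_AB("up", "up"))    # 0.0
# Marginal P(A=up) = sum over b of P(A=up, B=b):
print(sum(P_AB("up", b) for b in ("up", "down")))  # 0.5
# Conditional P(A=up | B=down) = P(up,down)/P(B=down):
print(P_AB("up", "down") / sum(P_AB(a, "down") for a in ("up", "down")))  # 1.0
```

Conditioning on B = down then gives P(A = up|B = down) = 1, exactly as required, even though no P(A|B) term appears anywhere in the decomposition.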
mn4j said:
So you admit that A and B should be logically dependent. Do you agree that if KNOWING A should give you more information about B?
Yes, of course. But as I say, that doesn't mean you actually need a P(B|A) term in an equation for P(AB).
 
  • #27


JesseM said:
Shouldn't that be p(AB|Z) = p(B|A)p(A|Z)? In other words, you want to find the probability of A given Z, then the probability of B given A.
That is correct. My bad. Still you have a P(B|A) term and a P(C|A) term. It doesn't change what I am saying.


Let H = Hidden color of first ball after first pick is made + hidden colors of other balls in urn after first pick
A = first ball is red
B = second ball is white
Why did you change the formulation? Yours has problems. You are trying to eliminate the prior information in Z. The H you have introduced is an outcome, because it changes every time a pick is made.
Then we can write P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H)
Conditional probabilities do not work like that. What is the condition that AB is premised on? In other words, what is your hypothesis space? You must include it in the equation. What you call H is not a hypothesis space.
The consequence is that P(H) is a meaningless term in your equation -- "The probability that 'hidden color of first ball after first pick is made plus hidden colors of other balls after first pick is made'" makes no sense.
 
  • #28


JesseM said:
P(AB) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(B|H)*P(H)

Again I am waiting for you to show where you got this equation. What rule or theory of mathematics or logic permits you to write this equation the way you did? Bell did not give a citation and neither have you. Jaynes' equations are the result of the well known and established product rule of probability theory. What is the equation you or Bell is using?
 
  • #29


mn4j said:
Why did you change the formulation? Yours has problems. You are trying to eliminate the prior information in Z. The H you have introduced is an outcome, because it changes every time a pick is made.
If you wish you can include Z as a separate condition, but every valid H will already include the total number of red and white balls.
mn4j said:
Conditional probabilities do not work like that. What is the condition that AB is premised on? In other words, what is your hypothesis space? You must include it in the equation. What you call H is not a hypothesis space.
As I said, Z can be added if you want, it would just convert the equation into this:

Then we can write P(AB|Z) = [SUM OVER ALL POSSIBLE H] P(A|HZ)*P(B|HZ)*P(H|Z)

However, it is not generally necessary to include the hypothesis space explicitly as a variable, as long as it is understood what it is from the definition of the problem. For example, if I am talking about flipping a coin, I am free to just write P(heads) rather than P(heads|a coin that can land on either of two sides, heads or tails).
mn4j said:
The consequence is that P(H) is a meaningless term in your equation -- "hidden color of first ball after first pick is made plus hidden colors of other balls after first pick is made" makes no sense.
That's because "hidden color of first ball after first pick is made plus hidden colors of other balls after first pick is made" is not a phrase representing a particular description of the hidden colors, any more than [tex]\lambda[/tex] is supposed to represent a particular description of the value of the hidden variables. Both H and [tex]\lambda[/tex] are variables which can take multiple values. For example, H can take the value H1, where H1 means "first ball has hidden color red, urn contains two balls with hidden colors red and white". It can also take the value H2, "first ball has hidden color white, urn contains two balls with hidden colors red and red". Hopefully you would agree that P(H1) and P(H2) are both meaningful probabilities.

What did you think "sum over all possible H" was supposed to mean, anyway? Did you totally miss the part where I explicitly broke down the equation "P(AC) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(C|H)*P(H)" into a sum including H1 and H2? Read this part again:
The "sum over all possible H" is pretty simple in a case with only three balls in the urn initially, since the only possible states for H are:

H1: first ball has hidden color red, urn contains two balls with hidden colors red and white
H2: first ball has hidden color white, urn contains two balls with hidden colors red and red

So the first equation is really just P(AB) = P(A|H1)*P(B|H1)*P(H1) + P(A|H2)*P(B|H2)*P(H2)

and the second equation is P(AC) = P(A|H1)*P(C|H1)*P(H1) + P(A|H2)*P(C|H2)*P(H2)
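These two-term sums can be checked against a direct simulation of the urn (two red balls and one white, per the premise Z). A minimal Python sketch; the exact Fraction arithmetic and the Monte Carlo check are my own additions:

```python
import random
from fractions import Fraction

# Exact computation: H1 = first pick secretly red (P = 2/3),
# H2 = first pick secretly white (P = 1/3).
# A = first ball observed red, B = second ball observed white.
P = {"H1": Fraction(2, 3), "H2": Fraction(1, 3)}
P_A = {"H1": Fraction(1), "H2": Fraction(0)}     # P(A|H)
P_B = {"H1": Fraction(1, 2), "H2": Fraction(0)}  # P(B|H): urn holds {red, white} under H1, {red, red} under H2

P_AB = sum(P_A[h] * P_B[h] * P[h] for h in P)
print(P_AB)  # 1/3 -- same as the product rule P(B|A)*P(A) = (1/2)*(2/3)

# Monte Carlo check by actually drawing from the urn.
random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    urn = ["red", "red", "white"]
    random.shuffle(urn)  # a shuffle is equivalent to drawing without replacement
    if urn[0] == "red" and urn[1] == "white":
        hits += 1
print(hits / trials)  # close to 0.333...
```

The hidden-state sum and the product rule P(AB) = P(B|A)*P(A) give the same 1/3 here, which is the whole point: the decomposition over H does not contradict the product rule.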
 
  • #30


JesseM said:
P(AC) = [SUM OVER ALL POSSIBLE H] P(A|H)*P(C|H)*P(H)
It appears you are looking at multiple experiments; that is why you are summing over different H. But the result you get by summing over multiple experiments cannot be correct if you are not treating the individual experiment properly. So focus on just one experiment for now, and once that is settled, you can then sum over all the experiments.

Let us take what you call the H1 case (first ball has hidden color red, urn contains two balls with hidden colors red and white).
Compare this with my initial premises:

Z: The premise that the urn contains three balls, one white and two red. Two balls are picked in succession without replacing.
A: First ball is red
B: Second ball is red
As you see, H is not necessary because it is just A.

In any case, even if we assume that H is not the same as A, you say the correct equation is:
P(AB|HZ) = P(A|HZ)P(B|HZ)*P(H|Z)

I say it should be according to the product rule of probability theory.
P(AB|HZ) = P(A|BHZ)*P(B|HZ)

If you want to sum over all H, this is the equation you should use. Where did you get your equation from? Again, we are looking at the single-experiment probability, which is everything right of the integral sign in Bell's equation (2), eq. (12) in Jaynes.

In other words, can you prove that equation, or point me to a textbook or article where it is proven? If you want a formal proof of the product rule, look at Chapter 1 of Jaynes book. "Probability Theory: The Logic of Science" which is available for free here: http://bayes.wustl.edu/etj/prob/book.pdf
 
  • #31


mn4j said:
Again I am waiting for you to show where you got this equation. What rule or theory of mathematics or logic permits you to write this equation the way you did. Bell did not give a citation and neither have you. Jaynes equations are the result of the well known and established product rule of probability theory. What is the equation you or Bell is using?
The equation just comes from thinking about the statistics of this particular problem, but I'll spell it out a bit. Do you agree that if X and Y are statistically independent, then P(XY)=P(X)*P(Y)? If so, would you also agree that even if X and Y are not independent in the sample space as a whole, if they are independent in the subset of trials where Z occurred, then we can write P(XY|Z)=P(X|Z)*P(Y|Z)? Clearly this sort of thing applies if you consider statements about the hidden state of the first ball and urn after the first drawing, like H1 and H2. If you already know H1, "first ball has hidden color red, urn contains two balls with hidden colors red and white", then knowing A (that the first ball was observed to have the color red when it was revealed) gives you no further knowledge about the likelihood that the second ball will be red (B), and likewise knowing B gives you no further knowledge about the likelihood that the first ball will be observed to be red (in fact knowing H1 leads you to predict A with probability 1).

So, it's reasonable that P(AB|H1) = P(A|H1)*P(B|H1) in this problem (not as a general statement about probabilities in arbitrary problems), and likewise for any other hidden state that might exist immediately after the first ball is picked. It's also reasonable that if every point in the sample space has some [tex]H_i[/tex] associated with it (i.e. on every trial there is some fact of the matter of what the hidden color of the first ball was after it was picked, and what hidden colors remained in the urn), and all the different [tex]H_i[/tex]'s are mutually exclusive, then we can write [tex]P(AB) = \sum_i P(AB|H_i)*P(H_i)[/tex]. Do you disagree with that? If not, it's easy to combine these equations to get [tex]P(AB) = \sum_i P(A|H_i)P(B|H_i)P(H_i)[/tex].

So if there is some point of disagreement here, please be specific about where it is. Also, if you disagree, please play around with the above equation a little and see if you can find any urn examples where the equation would give the wrong numbers for the probabilities.
 
Last edited:
  • #32


JesseM said:
Do you agree that if X and Y are statistically independent, then P(XY)=P(X)*P(Y)?
Of course. Working from the product rule
P(XY|Z) = P(Y|XZ)*P(X|Z)
the reason P(XY|Z) = P(X|Z)*P(Y|Z) holds in the case of logical independence is because P(Y|XZ) = P(Y|Z). This means that knowledge of X gives us no additional information about Y, so the equation reduces to P(XY|Z) = P(X|Z)*P(Y|Z).

JesseM said:
If so, would you also agree that even if X and Y are not independent in the sample space as a whole, if they are independent in the subset of trials where Z occurred
This makes no sense. If you write P(XY|Z) = P(X|Z)*P(Y|Z), you are saying that based on all the information Z, X is logically independent of Y. The reason this is false in this particular case is precisely because there is logical dependence.

To see this you only need to ask the question, does knowing the spin at A tell me anything about the spin at B? If it does, then they are logically dependent and you can NOT reduce P(Y|XZ) to P(Y|Z). Now remember we are still talking about a specific experiment NOT repeated events. The fact that the experiment can be repeated is inconsequential.

The definitive premise in any hidden variable theory is that the particles left the source with enough information to determine the outcome at the stations (together with the settings at the stations). This means that for two entangled particles, any information you gain about one MUST tell you something about the other. This was the basis of the EPR paradox which Bell was trying to refute. So logical dependence is a MUST in the specific case Bell is treating. Therefore there is no justification for why he reduced P(Y|XZ) to P(Y|Z). Do you understand this?
 
  • #33


mn4j said:
This makes no sense. If you write P(XY|Z) = P(X|Z)*P(Y|Z), you are saying that based on all the information Z, X is logically independent of Y.
I'm not sure what you mean by the phrase "based on all the information Z, X is logically independent of Y". The equation P(XY|Z)=P(X|Z)*P(Y|Z) certainly doesn't mean X is logically independent of Y in the series of trials as a whole, it just means that in the subset of trials where Z occurred, the frequency of X is independent of Y and vice versa. Another way of putting this is that the statistical dependence of X and Y is wholly accounted for by Z, meaning that if you know Z, then knowing X gives you no additional information about Y, and likewise if you know Z, knowing Y gives you no additional information about X.

Consider the urn example again. If I've already learned that H1 is true (that the first ball selected had the hidden color red prior to being observed) and I want to bet on whether or not B is true (that the second ball selected will be observed to be red), do you agree that knowing A (that the first ball selected was observed to be red) tells me absolutely nothing new that should cause me to modify my bet when all I knew was H1?

Of course this example is somewhat trivial since H1 implies A with probability 1. But you could consider a more complicated example too. Suppose that instead of colored balls, the urn contains three plastic balls with dimmed lights inside, and that pressing a button on top of each ball causes either the internal red light to turn on, or the internal green light. The balls are externally indistinguishable but their hidden internal mechanisms are somewhat different, so to distinguish them we'll call the three balls D, E and F. Each ball has a random element that decides which light will go on when the button is pressed, but because of the different hidden mechanisms the probabilities are different in each case--D has a 70% chance of lighting up red, E has a 90% chance of lighting up red, and F has a 25% chance of lighting up red. Two balls are taken from the urn, one is given to Alice and the second is given to Bob, then they go into separate rooms and press the buttons and note the results.

Now, since the balls are taken from the same urn without replacement, there is some statistical dependence between the results seen by Alice and the results seen by Bob--for example, if Alice sees her ball light up green, that makes it more likely that she got ball F, which in turn makes it less likely that Bob got F and thus more likely that Bob will see his light up red. Now consider the "hidden variables" condition H1: "ball D was given to Alice, ball E was given to Bob". Do you agree that if we look at the subset of trials where H1 is true, then in this subset, knowing that Alice's ball lit up red gives us no additional information about the likelihood that Bob's ball will light up red? In other words, P(Bob's ball lights up red | ball D given to Alice, ball E given to Bob) = P(Bob's ball lights up red | ball D given to Alice, ball E given to Bob AND Alice's lit up red) = P(Bob's ball lights up red | ball D given to Alice, ball E given to Bob AND Alice's lit up green) = 90%. The fact that Bob got ball E is enough to tell us there's a 90% chance his will light up red; knowing the color Alice saw doesn't further affect our probability calculation at all. In the same way, knowing that Alice got ball D tells us there's a 70% chance hers will light up red, and knowing the result Bob got doesn't change that probability either. So, P(Alice saw red, Bob saw red|H1) = P(Alice saw red|H1)*P(Bob saw red|H1) = 0.9*0.7 = 0.63. It's only if we don't know the hidden truth about which ball each one got that knowing what color one of them saw can influence our estimate of the probability the other one saw red. Do you disagree with any of this? If you do, please tell me what you would give for the probability P(Alice saw red, Bob saw red|H1) along with the probabilities P(Alice saw red|H1) and P(Bob saw red|H1).
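The claimed numbers are easy to check by simulation. A hypothetical Python sketch of the light-up-ball experiment (ball labels and probabilities are taken from the description above; the trial procedure is my own encoding of it):

```python
import random

random.seed(1)
P_RED = {"D": 0.70, "E": 0.90, "F": 0.25}  # chance each ball lights up red

def trial():
    """One run: two of the three balls are drawn without replacement;
    Alice gets the first, Bob the second, and each presses the button."""
    a_ball, b_ball = random.sample(list(P_RED), 2)
    a_red = random.random() < P_RED[a_ball]
    b_red = random.random() < P_RED[b_ball]
    return a_ball, b_ball, a_red, b_red

N = 200_000
runs = [trial() for _ in range(N)]

# Subset where H1 holds (Alice has D, Bob has E): Bob's chance of red is
# ~0.90 whether Alice saw red or green, and P(both red|H1) is ~0.63.
h1 = [(ar, br) for a, b, ar, br in runs if a == "D" and b == "E"]
bob_red_if_alice_red = sum(br for ar, br in h1 if ar) / sum(1 for ar, br in h1 if ar)
bob_red_if_alice_green = sum(br for ar, br in h1 if not ar) / sum(1 for ar, br in h1 if not ar)
both_red = sum(ar and br for ar, br in h1) / len(h1)
print(bob_red_if_alice_red, bob_red_if_alice_green)  # both ~0.90
print(both_red)                                      # ~0.63

# Marginally (assignment unknown), Alice's color DOES shift the odds for Bob:
def bob_red_given(alice_saw_red):
    sel = [br for _, _, ar, br in runs if ar == alice_saw_red]
    return sum(sel) / len(sel)

print(bob_red_given(True), bob_red_given(False))  # noticeably different
```

So conditional independence given H1 coexists with marginal dependence when the ball assignment is unknown, exactly as the paragraph above argues.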
mn4j said:
The definitive premise in any hidden variable theory is that the particles left the source with enough information to determine the outcome at the stations (together with the settings at the station). This means for two entangled particles, any information you gain about one, MUST tell you something about the other.
Of course. But if God came and told you the hidden state [tex]\lambda[/tex] associated with each particle, then in that case knowing the information gained by one experimenter tells you nothing additional about the result the other experimenter will see, just like if God told Alice that she had gotten ball D, then knowing Bob had seen his ball light up red would tell her nothing additional about the probability her own would light up red. That's why for every possible hidden state H (H1, H2, etc.), it is true that P(Alice sees red, Bob sees red|H) = P(Alice sees red|H)*P(Bob sees red|H). Similarly, in a hidden variables theory of QM that's why if you include the complete set of hidden variables [tex]\lambda[/tex] in the conditional, in that case you can write [tex]P(AB|\lambda) = P(A|\lambda)*P(B|\lambda)[/tex]--you're only looking at the subset of trials where [tex]\lambda[/tex] took on some particular value. But I know you'll probably find reason to dispute this, which is why I want to first deal with the simple example of the light-up balls, and see if in that case you dispute that P(Alice sees red, Bob sees red|H) = P(Alice sees red|H)*P(Bob sees red|H).
 
  • #34


JesseM said:
I'm not sure what you mean by the phrase "based on all the information Z, X is logically independent of Y". The equation P(XY|Z)=P(X|Z)*P(Y|Z) certainly doesn't mean X is logically independent of Y in the series of trials as a whole

You keep mixing up consideration of an individual experiment with consideration of multiple experiments; that is why you are confused. We are not yet talking about multiple experiments, which is where the integral comes in. We are talking about A SINGLE experiment!

Secondly, if P(XY|Z)=P(X|Z)*P(Y|Z) does not mean X is logically independent of Y, then where did you get the equation from? I have already shown you how you can derive P(XY|Z)=P(X|Z)*P(Y|Z) from the product rule P(XY|Z)=P(X|YZ)*P(Y|Z) WHEN AND ONLY WHEN X is logically independent of Y. You have not shown how you can arrive at that equation without assuming logical independence.

This is very basic probability theory. Unless you understand this, we cannot even begin to discuss hidden variables and multiple experiments.
 
  • #35


mn4j said:
You keep mixing up consideration of an individual experiment with consideration of multiple experiments; that is why you are confused. We are not yet talking about multiple experiments, which is where the integral comes in. We are talking about A SINGLE experiment!
As I already explained, the probability of X in a single experiment is always guaranteed to be equal to the fraction of trials on which X occurred in a large number (approaching infinity) of repeats of the experiment with the same conditions--it doesn't matter if a large number of trials actually happens, all that matters is that this would be true hypothetically if the experiment were repeated in this way. Do you disagree with this? Please give me a simple yes or no answer.
mn4j said:
Secondly, if P(XY|Z)=P(X|Z)*P(Y|Z) does not mean X is logically independent of Y, then where did you get the equation from.
It comes from the fact that if you only look at the trials where Z occurred and throw out all the trials where it did not, then in that set of trials X is independent of Y. It would help if you addressed my simple numerical example:

Three balls D, E, F; D has 0.7 chance of lighting up red when button pushed, E has 0.9 chance, F has 0.25 chance.

H1 is the condition that "ball D was given to Alice, ball E was given to Bob".

Do you agree that in this case, P(Alice saw red, Bob saw red|H1) = 0.63, exactly the same as P(Alice saw red|H1)*P(Bob saw red|H1)? If not, what probability would you assign to P(Alice saw red, Bob saw red|H1)? Please give me a specific answer to this question.
mn4j said:
I have already shown you how you can derive P(XY|Z)=P(X|Z)*P(Y|Z) from the product rule P(XY|Z)=P(X|YZ)*P(Y|Z) WHEN AND ONLY WHEN X is logically independent of Y. You have not shown how you can arrive at that equation without assuming logical independence.
Of course I have. As I already explained, I am simply assuming that X is logically independent of Y in the subset of trials where Z occurs, even though they are not independent in the set of all trials. Another way of putting this is that if we know Z occurred, then knowing X tells us nothing further about the probability of Y, even though in the absence of knowledge of Z, knowledge of X would have given us some new information about the probability of Y. You apparently are having problems understanding this explanation, but I can't help you unless you are willing to actually think about some specific examples like the one I asked above. Can you please just do me the courtesy of answering my simple questions about these examples?

Here's another example--we have a 6-sided die and a 12-sided die, Alice rolls one and Bob rolls the other. If there's a 50/50 chance the 12-sided die will be given to either one of them, and we aren't told who got which die, then the results they see aren't logically independent--for example, if Alice gets a 10 that allows us to infer Bob will get some number between 1 and 6, whereas before we knew what Alice rolled we would have considered it possible Bob could get any number from 1-12. However, in the subset of cases where Alice gets the 12-sided die and Bob gets the 6-sided die (or if you prefer, in a single case where we know Alice got the 12-sided die and Bob got the 6-sided die), Bob has a probability 1/6 of getting any number 1-6, and a probability 0 of getting a number 7-12, and knowing what Alice rolled has absolutely no further effect on these probabilities. Do you disagree? Again, please think about this specific example and answer yes or no.
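This dice example can be worked out exactly by enumeration. A small Python sketch (the function name and the exhaustive enumeration are my own; only the two dice and the 50/50 assignment come from the description above):

```python
from fractions import Fraction

# A d12 and a d6 are assigned to Alice and Bob with probability 1/2 each way.
def P_bob_roll(b, alice_roll=None, bob_die=None):
    """P(Bob rolls b), optionally conditioned on Alice's roll
    and/or on which die Bob received."""
    total = Fraction(0)
    match = Fraction(0)
    for bob_sides in (6, 12):
        alice_sides = 18 - bob_sides  # the other die
        for a in range(1, alice_sides + 1):
            for bb in range(1, bob_sides + 1):
                if bob_die is not None and bob_sides != bob_die:
                    continue
                if alice_roll is not None and a != alice_roll:
                    continue
                p = Fraction(1, 2) * Fraction(1, alice_sides) * Fraction(1, bob_sides)
                total += p
                if bb == b:
                    match += p
    return match / total

# Given Bob has the d6, Alice's roll changes nothing:
print(P_bob_roll(3, bob_die=6))                 # 1/6
print(P_bob_roll(3, bob_die=6, alice_roll=10))  # 1/6
# Without that knowledge, Alice rolling 10 is informative:
print(P_bob_roll(3))                 # 1/8 marginally
print(P_bob_roll(3, alice_roll=10))  # 1/6, since Bob must then have the d6
```

The marginal 1/8 comes from (1/2)(1/6) + (1/2)(1/12), while conditioning on the die assignment screens off Alice's roll entirely.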
 
Last edited:
