Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #1,226
billschnieder said:
First of all, I said the equation is Bell's definition of HIS expectation values for the situation he is working with.
But then you use that to come to the absurd conclusion that in order to compare with empirical data, we need to make some assumptions about the distribution of values of λ on our three runs. We don't--Bell was writing for an audience of physicists, who would understand that whenever you talk about an "expectation value", the basic definition is always just a sum over each possible measurement result times the probability of that result, so to compare with empirical measurements you just take the average result on all your trials, nothing more. Bell obviously did not mean for his integrals to be the definitions of E(a,b) and E(b,c) and E(a,c), implying that you can only compare them with empirical data if you have actually confirmed that [tex]\rho(\lambda)[/tex] was the same for each run--rather he was making an argument that the "expectation values" as conventionally understood would also be equal to those integrals.
billschnieder said:
Secondly, nobody said anything about the probabilities in the equation not being true probabilities, so you are complaining about a nonexistent issue.
You understand that the "true probabilities" represent the frequencies of different outcomes in the limit as the number of trials goes to infinity, and not the actual frequencies in our finite series of trials? So for example, if one run with settings (a,b) included three trials where λ took the value λ3, while another run with settings (b,c) included no trials where it took the value λ3, this wouldn't imply that ρ(λi) differed in the integrals for E(a,b) and E(b,c)? Because your comment at the end of post #1224 suggests you are still confusing the issue of what it means for the "true probabilities" ρ(λi) to differ depending on the detector settings and what it means for the actual frequencies of different values of λi to differ on runs with different detector settings:
billschnieder said:
JesseM said:
Even if the data was drawn from triples, and the probability of different trials didn't depend on the detector settings on each run, there's no guarantee you'd be able to exactly resort the data in the manner of my example in post #1215, where we were able to resort the data so that every row (consisting of three pairs from three runs) had the same value of a,b,c throughout
That is why I cautioned you earlier not to prematurely blurt out your claim that conspiracy must be involved for ρ(λi) to be different. Now we get an admission, however reluctantly, that it is possible for ρ(λi) to be different without conspiracy. You see, the less you talk (write), the less you will have to recant later, as I'm sure you are realizing.
So, kinda seems like this is not actually a dead issue. You may have noticed I discussed exactly this distinction between the "true probability distribution" ρ(λi) differing from one run to another and the actual frequencies of different λi's differing from one run to another at the very start of post #1214, but since you didn't respond I don't know if you even read that or what you thought of the distinction I was making there.
billschnieder said:
Thirdly, you object to my statement but go on to say the exact same thing. This is what I said after the equation:
Theoretically the above makes sense, where you measure each A(a,.), B(b,.) pair exactly once for a specific λ, and simply multiply with the probability of realizing that specific λ and then add up subsequent ones to get your expectation value E(a,b). But practically, you could obtain the same E(a,b) by calculating a simple average over a representative set of outcomes in which the frequency of realization of a specific λ is equivalent to its probability, i.e.

For example, if we had only 3 possible λ's (λ1, λ2, λ3) with probabilities (0.3, 0.5, 0.2) respectively. The expectation value will be
E(a,b) = 0.3*A(a,λ1)*B(b,λ1) + 0.5*A(a,λ2)*B(b,λ2) + 0.2*A(a,λ3)*B(b,λ3)
You really think that this is the "exact same thing" as what I was saying? Here your "practical" average requires us to know which value of λ occurred on each trial, and what the probability of each value was! Of course this is nothing like what I mean when I talk about comparing the theoretical expectation value to actual experimental data. Again, a definition of the expectation value involving "true probabilities" would be:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

So if you want to compare with empirical data on a run where the detector settings were a and b, it'd just be:

(+1*+1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result -1)

...which is equivalent to just computing the product of the two measurements on each trial, and adding them all together and dividing by the number of trials to get the empirical average for the product of the two measurements on all trials in the run.
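The equivalence claimed here is easy to check directly. Below is a minimal sketch with made-up trial data (the six outcome pairs are hypothetical): the fraction-weighted sum over the four possible outcome pairs equals the plain average of the per-trial products.

```python
# Hypothetical run: each entry is (result at setting a, result at setting b).
trials = [(+1, +1), (+1, -1), (-1, +1), (+1, +1), (-1, -1), (+1, -1)]
n = len(trials)

# Fraction-weighted sum over the four possible outcome pairs.
outcomes = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
weighted = sum(a * b * trials.count((a, b)) / n for a, b in outcomes)

# Simple average of the product of the two measurements on each trial.
simple = sum(a * b for a, b in trials) / n

# The two computations agree by simple algebra, whatever the data.
assert abs(weighted - simple) < 1e-12
```

Grouping identical products and weighting by their frequency is just a reordering of the plain sum, which is why the assertion holds for any list of ±1 pairs.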

You quote my simple equation for E(a,b) above and say:
It clearly shows that you do not understand probability or statistics. Clearly the definition of expectation value is based on a probability-weighted sum,
Which mine is--I'm multiplying each possible result by the probability of that result, for example the result (+1*-1) is multiplied by P(detector with setting a gets result +1, detector with setting b gets result -1)
billschnieder said:
and the law of large numbers is used as an approximation; that is why it says in the last sentence above that the expectation value is "almost surely the limit of the sample mean as the sample size grows to infinity"
Of course. In the limit as the number of trials goes to infinity, we would expect this:

(+1*+1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result -1)

to approach this:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

...where all the probabilities in the second expression represent the "true probabilities", i.e. the fraction of trials with that outcome in the limit as the number of trials goes to infinity!
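The convergence described here can be illustrated with a toy simulation (not a physics model; the "true" probabilities below are invented for the sketch): outcome pairs are drawn from fixed probabilities, and the sample average of the product tends toward the true expectation value as the number of trials grows.

```python
import random

# Toy simulation: draw outcome pairs from fixed "true" probabilities and
# compare the sample average of the product against the true expectation
# value as the number of trials grows.
random.seed(0)

# Assumed true probabilities for the four outcome pairs; they sum to 1.
probs = {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4}
true_E = sum(a * b * p for (a, b), p in probs.items())  # 0.6 for these numbers

pairs = list(probs)
weights = [probs[p] for p in pairs]

def sample_mean(n):
    """Average of the product A*B over n simulated trials."""
    draws = random.choices(pairs, weights=weights, k=n)
    return sum(a * b for a, b in draws) / n

gap_small = abs(sample_mean(100) - true_E)
gap_large = abs(sample_mean(100_000) - true_E)
# gap_large is almost always far smaller than gap_small, per the law of
# large numbers; no conclusion about any hidden variable is needed.
```

Nothing in this computation refers to λ: only the measurable outcome frequencies enter, which is the point being made in the surrounding posts.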

So, it's not clear why you think the Wikipedia definition of expectation value is somehow different from mine, or that I "do not understand probability or statistics". Perhaps you misunderstood something about my definition.
billschnieder said:
You are trying to restrict the definition by suggesting that expectation value is defined ONLY over the possible paired outcomes (++, --, +-, -+) and not possible λ's, but that is naive and short-sighted, and also ridiculous as we will see shortly.
No, all expectation values are just defined as a sum over all possible results times the probability of each possible result. And in this experiment the value of λ is not a "result", the "result" on each trial is just +1 or -1.
billschnieder said:
Now let us go back to the first sentence of the wikipedia definition above and notice the last two words "probability measure". In case you do not know what that means, a probability measure is simply any real-valued function which assigns 1 to the entire probability space and maps events into the range from 0 to 1. An expectation value can be defined over any such probability measure, not just the one you pick and choose for argumentation purposes. In Bell's equation (2),
[tex] \int d\lambda \rho (\lambda ) = 1 [/tex]
Therefore ρ(λ) is a probability measure over the paired products A(a,λ)B(b,λ)
No, ρ(λ) is a probability measure over values of λ, and it happens to be true (according to Bell's physical assumptions) that the value of λ along with the detector angles completely determines the results on each trial. But you can also define a probability measure on the results themselves, that would just be a measure that assigns probabilities between 0 and 1 to each of the four possible results:

1. (detector with setting a gets result +1, detector with setting b gets result +1)
2. (detector with setting a gets result +1, detector with setting b gets result -1)
3. (detector with setting a gets result -1, detector with setting b gets result +1)
4. (detector with setting a gets result -1, detector with setting b gets result -1)

With the sum of the four probabilities equalling one. That's exactly the sort of probability measure I was assuming when I wrote down my equation:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

And when trying to compare an equation involving expectation values to actual empirical results, every physicist would understand that you don't need to even consider the question of what values λ may have taken on your experimental runs, instead you'd just compute something like this:

(+1*+1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result -1)

...which, by the law of large numbers, is terrifically unlikely to differ significantly from the "true" expectation value if you have done a large number of trials. If you think a physicist comparing experimental data to Bell's inequality would actually have to draw any conclusions about the values of λ on the experimental trials, I guarantee you that your understanding is totally idiosyncratic and contrary to the understanding of all mainstream physicists who talk about testing Bell's inequality empirically.
billschnieder said:
Bell's equation (2) IS defining an expectation value for paired products irrespective of any physical assumptions. There is no escape for you here.
If equation (2) was supposed to be the definition of the expectation value, rather than just an expression that he would expect the expectation value (under its 'normal' meaning, the one I've given above involving only actual measurable results and the probabilities of each result) to be equal to, then why do you think he would need to make physical arguments as to why equation (2) should be the correct form? Do you deny that he did make physical arguments for the form of equation (2), like in the first paper where he wrote:
Now we make the hypothesis, and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other. Since we can predict in advance the result of measuring any chosen component of [tex]\sigma_2[/tex], by previously measuring the same component of [tex]\sigma_1[/tex], it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state.

Let this more complete specification be effected by means of parameters λ ... the result A of measuring [tex]\sigma_1 \cdot a[/tex] is then determined by a and λ, and the result B of measuring [tex]\sigma_2 \cdot b[/tex] in the same instance is determined by b and λ
Do you disagree that here the first paragraph is providing physical justification for why A is a function only of a and λ but not b, and why B is a function of b and λ but not a, along with a justification for why we should believe the result A can be completely determined by a and the hidden parameters λ in the first place? Likewise, in the paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf, would you deny that this section from p. 16 of the pdf (p. 15 of the paper) is trying to provide physical justification for why the same function ρ(λ) appears in different integrals for different expectation values like E(a,b) and E(b,c)?
Secondly, it may be that it is not permissible to regard the experimental settings a and b in the analyzers as independent variables, as we did. We supposed them in particular to be independent of the supplementary variable λ, in that a and b could be changed without changing the probability distribution ρ(λ). Now even if we have arranged that a and b are generated by apparently random radioactive devices, housed in separate boxes and thickly shielded, or by Swiss national lottery machines, or by elaborate computer programmes, or by apparently free willed experimental physicists, or by some combination of all of these, we cannot be sure that a and b are not significantly influenced by the same factors λ that influence A and B. But this way of arranging quantum mechanical correlations would be even more mind boggling than one in which causal chains go faster than light. Apparently separate parts of the world would be deeply and conspiratorially entangled, and our apparent free will would be entangled with them.
If you don't disagree that these sections are attempts to provide physical justification for the form of the integrals he writes, why do you think he would feel the need to provide physical justification if he didn't have some independent meaning of "expectation values" in mind, like the meaning I talked about above involving just the different results and the probabilities of each one?
 
  • #1,227
JesseM said:
You understand that the "true probabilities" represent the frequencies of different outcomes in the limit as the number of trials goes to infinity, and not the actual frequencies in our finite series of trials?

You do not understand probability either. Say I give you the following list of outcomes:

++
--
-+
+-

And ask you to calculate P(++) from it. Clearly the probability is the number of times (++) occurs in the list divided by the number of entries in the list. The list does not have an infinite number of entries; there is no need to perform an infinite number of trials in order to deduce the probability. And even if you did perform a large number of trials, you would not get exactly the true probability, which is 1/4. So your "law of large numbers" cop-out is an approximation of the true probability, not its definition. You need to learn some basic probability theory here because you are way off base.
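The list-based computation described in this post is a one-liner; here is a minimal sketch using the four-entry list above:

```python
# The four-entry list from the post, and the relative frequency of "++".
outcomes = ["++", "--", "-+", "+-"]
freq_pp = outcomes.count("++") / len(outcomes)  # 1/4 for this list

# Note: the relative frequency in a finite list is an estimate of the
# true probability; the two are guaranteed to coincide only in the
# infinite-trial limit, which is the point contested in this exchange.
assert freq_pp == 0.25
```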

JesseM said:
But then you use that to come to the absurd conclusion that in order to compare with empirical data, we need to make some assumptions about the distribution of values of λ on our three runs. We don't--Bell was writing for an audience of physicists, who would understand that whenever you talk about an "expectation value", the basic definition is always just a sum over each possible measurement result times the probability of that result
Sorry JesseM, but that bubble has already been burst, when I proved conclusively that you do not know the meaning of "expectation value". To show how silly this adventitious argument of yours is, I asked you a simple question, and I dare you to answer it:

billschnieder said:
You are given a theoretical list of N pairs of real-valued numbers x and y. Write down the mathematical expression for the expectation value for the paired product. Once you have done that, try and swindle your way out of the fact that
a) The structure of the expression so derived does not depend on the actual value N, i.e., N could be 5, 100, or infinity.
b) The expression so derived is a theoretical expression not "empirical".
c) The expression so derived is the same as the simple average of the paired products.

JesseM said:
So for example, if one run with settings (a,b) included three trials where λ took the value λ3, while another run with settings (b,c) included no trials where it took the value λ3, this wouldn't imply that ρ(λi) differed in the integrals for E(a,b) and E(b,c)? Because your comment at the end of post #1224 suggests you are still confusing the issue of what it means for the "true probabilities" ρ(λi) to differ depending on the detector settings and what it means for the actual frequencies of different values of λi to differ on runs with different detector settings
You are sorely confused. Note I use ρ(λi) not P(λi) to signify that we are dealing with a probability distribution, which is essentially a function defined over the space of all λ, with integral over all λ equal to 1.

If the (a,b) run included N iterations with three of those corresponding to λ3, P(λ3) for our dataset = 3/N. But if in a different run of the experiment (b,c) none of the λ's was λ3, P(λ3) = 0 for our dataset. It therefore means the probability distribution ρ(λi) cannot be the same for E(a,b) and E(b,c). If this is still too hard for you, let me simplify further.

According to Bell, E(a,b) is calculated by the following sum:

a1*b1*P(λ1) + a2*b2*P(λ2) + ... + an*bn*P(λn), where n is the total number of possible distinct lambdas. ρ(λ) is a function which maps a specific λi to its probability P(λi). By definition therefore, if the function ρ(λ) is the same for two runs of the experiment, it must produce the same P(λi) for both cases. In other words, if it produced different values of P(λi), such as 3/N in one case and 0 in another, it means ρ(λ) is necessarily different between the two, and the runs cannot be used together as a valid source of terms for comparing with Bell's inequality.
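The scenario at issue in this exchange (empirical λ-frequencies in two finite runs versus the underlying distribution ρ(λ)) can be made concrete with a toy draw; the distribution and run sizes below are made up for illustration:

```python
import random

# Toy draw: two finite runs sampled from the SAME underlying distribution
# rho over (λ1, λ2, λ3) can still show different empirical frequencies
# for a given λ -- e.g. λ3 occurring a few times in one run and rarely
# or never in another.
random.seed(3)
rho = {"λ1": 0.3, "λ2": 0.5, "λ3": 0.2}
lambdas = list(rho)
weights = [rho[l] for l in lambdas]

run_ab = random.choices(lambdas, weights=weights, k=10)  # run at settings (a,b)
run_bc = random.choices(lambdas, weights=weights, k=10)  # run at settings (b,c)

freq_ab = run_ab.count("λ3") / len(run_ab)
freq_bc = run_bc.count("λ3") / len(run_bc)
# freq_ab and freq_bc generally differ even though rho is identical for
# both runs; whether that difference matters for Bell's integrals is
# exactly the point under dispute between the two posters.
```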

JesseM said:
billschnieder said:
Note, what Bell is doing here is calculating the weighted average of the product A(a,λ)*B(b,λ) for all λ. Which is essentially the expectation value. Theoretically the above makes sense, where you measure each A(a,.), B(b,.) pair exactly once for a specific λ, and simply multiply with the probability of realizing that specific λ and then add up subsequent ones to get your expectation value E(a,b). But practically, you could obtain the same E(a,b) by calculating a simple average over a representative set of outcomes in which the frequency of realization of a specific λ is equivalent to its probability, i.e.

For example, if we had only 3 possible λ's (λ1, λ2, λ3) with probabilities (0.3, 0.5, 0.2) respectively. The expectation value will be
E(a,b) = 0.3*A(a,λ1)*B(b,λ1) + 0.5*A(a,λ2)*B(b,λ2) + 0.2*A(a,λ3)*B(b,λ3)

Where each outcome for a specific lambda exists exactly once. OR we can calculate it using a simple average, from a dataset of 10 data points, in which A(a,λ1),B(b,λ1) was realized exactly 3 times (3/10 = 0.3), A(a,λ2), B(b,λ2) was realized 5 times, and A(a,λ3), B(b,λ3) was realized 2 times; or any other such dataset of N entries where the relative frequencies are representative of the probabilities. Practically, this is the only way available to obtain expectation values, since no experimenter has any idea what the λ's are or how many of them there are.
You really think that this is the "exact same thing" as what I was saying? Here your "practical" average requires us to know which value of λ occurred on each trial
Oh come on! At least be honest about what you claim I am saying! Why would you need to know λ for each trial if you are calculating a simple average!? Go back and answer the example I requested for the expectation value for N pairs of real-valued numbers x and y, and if you still do not understand how ridiculous this sounds, ask again and I will explain it using yet simpler terms, assuming it is possible to simplify this any further.
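The worked example quoted above (three λ's with probabilities 0.3, 0.5, 0.2 and a 10-point representative dataset) can be checked numerically; the ±1 outcome values below are made up, since only the frequencies matter for the comparison:

```python
# The worked example from the post: three λ values with probabilities
# (0.3, 0.5, 0.2). The ±1 products below are hypothetical stand-ins
# for A(a,λi)*B(b,λi).
products = {"λ1": (+1) * (-1), "λ2": (-1) * (-1), "λ3": (+1) * (+1)}
probs = {"λ1": 0.3, "λ2": 0.5, "λ3": 0.2}

# Probability-weighted sum over the possible λ's.
E_weighted = sum(probs[l] * products[l] for l in probs)

# Simple average over a 10-point dataset realizing λ1 three times,
# λ2 five times, and λ3 twice (frequencies match the probabilities).
dataset = ["λ1"] * 3 + ["λ2"] * 5 + ["λ3"] * 2
E_simple = sum(products[l] for l in dataset) / len(dataset)

# When the relative frequencies exactly match the probabilities, the
# simple average reproduces the weighted sum.
assert abs(E_weighted - E_simple) < 1e-12
```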
 
  • #1,228
JesseM said:
So if you want to compare with empirical data on a run where the detector settings were a and b, it'd just be:

(+1*+1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*(fraction of trials where detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*(fraction of trials where detector with setting a gets result -1, detector with setting b gets result -1)

...which is equivalent to just computing the product of the two measurements on each trial, and adding them all together and dividing by the number of trials to get the empirical average for the product of the two measurements on all trials in the run.
Despite your empty protests, you are still unable to show why the above will be different from a simple average <ab>. Oh wait, you actually agree with my statement that:

billschnieder said:
But practically, you could obtain the same E(a,b) by calculating a simple average over a representative set of outcomes
So yeah, you are saying the exact same thing after objecting to it!

JesseM said:
So, it's not clear why you think the wikipedia definition of expectation value is somehow different from mine, or that I "do not understand probability or statistics"
It is different because yours restricts expectation values to only the possible outcomes (++, --, +-, -+) even though expectation values are defined for any probability measure. ρ(λ) is a probability measure over all outcomes, therefore Bell's equation (2) is a standard mathematical expression for an expectation value, contrary to your morphing claims.

JesseM said:
No, all expectation values are just defined as a sum over all possible results times the probability of each possible result. And in this experiment the value of λ is not a "result", the "result" on each trial is just +1 or -1.
...
No, ρ(λ) is a probability measure over values of λ
Hehe, this is precisely an example of why I say you do not understand probability theory and statistics. In Bell's equation (2), the pair [A(a,λ)B(b,λ)] defines an event, the probability of the event [A(a,λ)B(b,λ)] occurring is P(λ), therefore ρ(λ) IS a probability measure over [A(a,λ)B(b,λ)] whether you like it or not. There are lots of references online. Find me one which says otherwise. No physical assumption is required to obtain this blatant mathematical definition.

JesseM said:
But you can also define a probability measure on the results themselves, that would just be a measure that assigns probabilities between 0 and 1 to each of the four possible results:

1. (detector with setting a gets result +1, detector with setting b gets result +1)
2. (detector with setting a gets result +1, detector with setting b gets result -1)
3. (detector with setting a gets result -1, detector with setting b gets result +1)
4. (detector with setting a gets result -1, detector with setting b gets result -1)

With the sum of the four probabilities equalling one. That's exactly the sort of probability measure I was assuming when I wrote down my equation:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)
This is an admission that you were wrong to suggest that Bell's equation (2) is not a valid expectation value unless physical assumptions are also made. Nobody is arguing that there are no other valid mathematical expressions for the expectation value. You were the one arguing that the mathematically defined expectation value must be the one you chose and not the one Bell chose. I'm happy you are now backtracking from that ridiculous position.

JesseM said:
If you think a physicists comparing experimental data to Bell's inequality would actually have to draw any conclusions about the values of λ on the experimental trials, I guarantee you that your understanding is totally idiosyncratic and contrary to the understanding of all mainstream physicists who talk about testing Bell's inequality empirically.
Grasping at straws here to make it look like there is something I said which you object to. Note that you start the triumphant statement with an IF and then go on to hint that what you are condemning is actually something I think, but you provide no quote of mine in which I said anything of the sort. I thought this kind of tactic was relegated to talk-show TV and political punditry.
 
  • #1,229
JesseM said:
If equation (2) was supposed to be the definition of the expectation value, rather than just an expression that he would expect the expectation value (under its 'normal' meaning, the one I've given above involving only actual measurable results and the probabilities of each result) to be equal to, then why do you think he would need to make physical arguments as to why equation (2) should be the correct form? Do you deny that he did make physical arguments for the form of equation (2) ...
Duh! The whole point is that no physical assumptions are needed! This issue would be dead had you not argued vehemently that without extra physical assumptions, Bell's equation (2) will not be a standard mathematical expression for the expectation value of paired products.

You apparently did not see the following in my earlier post #1211:
billschnieder said:
You could say the reason Bell obtained the same expression is because he just happened to be dealing with two functions which can have values (+1 and -1) for physical reasons and experiments producing a list of such pairs. And he just happened to be interested in the pair product of those functions for physical reasons. But the structure of the calculation of the expectation value is determined entirely by the mathematics and not the physics. Once you have two variables with values (+1 and -1) and a list of pairs of such values, the above equations should arise no matter the process producing the values, whether physical, mystical, non-local, spooky, super-luminal, or anything you can dream about. That is why I say the physical assumptions are peripheral.
So while it is true that Bell discussed the physical issues of local causality, those issues are peripheral as I have already explained.

JesseM said:
If you don't disagree that these sections are attempts to provide physical justification for the form of the integrals he writes, why do you think he would feel the need to provide physical justification if he didn't have some independent meaning of "expectation values" in mind, like the meaning I talked about above involving just the different results and the probabilities of each one?

Because the meaning of the expression is clear from the expression Bell wrote himself. He is multiplying the paired product A(a,λ)B(b,λ) with its probability P(λ) and integrating over all λ. That is the mathematical definition of an expectation value. You are the one trying to impose on Bell's equation a meaning he did not intend, as is evident from what he himself wrote in his original paper. You can't escape this one.

For example:

Let us define A(a,λ) = ±1 and B(b,λ) = ±1 just like Bell, and say that the functions represent the outcomes of two events at two stations, one on Earth (A) and another (B) on planet 63, and in our case λ represents non-local mystical processes which together with certain settings on the planets uniquely determine the outcome. We also allow in our spooky example for the setting a on Earth to remotely affect the choice of b instantaneously, and vice versa. Note that in our example, there is no source producing any entangled particles; everything is happening instantaneously.

The expectation value for the paired product of the outcomes at the two stations is exactly the same as Bell's equation (2). If you disagree, explain why it would be different or admit that the physical assumptions are completely peripheral.
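The structural claim in this post can be sketched directly: for any ±1-valued outcome functions and any normalized distribution over λ, whatever process is imagined to produce them, the expectation of the paired product takes the same weighted-sum form as Bell's equation (2). The functions and probabilities below are invented purely for illustration:

```python
# Invented ±1-valued outcome functions; no physics is assumed here.
def A(setting, lam):
    return +1 if (lam + setting) % 2 == 0 else -1

def B(setting, lam):
    return -1 if (lam * setting) % 2 == 0 else +1

rho = {0: 0.25, 1: 0.5, 2: 0.25}  # made-up probabilities over λ, summing to 1
assert abs(sum(rho.values()) - 1.0) < 1e-12

# The discrete analogue of Bell's equation (2): a probability-weighted
# sum of the paired products over all λ.
a, b = 1, 2
E_ab = sum(rho[lam] * A(a, lam) * B(b, lam) for lam in rho)
assert -1.0 <= E_ab <= 1.0  # any such weighted sum of ±1 products is bounded
```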
 
  • #1,230
EPR is only about conservation (along the lines of this thread's question; the paper also raises issues that are quite different, e.g., "is QM a complete theory?"). For a classical pair, magnetic momentum (for instance) would be conserved along ANY but also along ALL directions. In QM, only one direction at a time makes sense, so the spin projection is conserved along ANY direction but NOT ALONG ALL directions. Think of the Uncertainty Principle with reversed time, as proved in 1931 by Einstein, Tolman, and Podolsky. Bell's theorem assumes a form of realism not proven to make sense in the microcosm, at least for the type of coordinates we know (Einstein, like Schrödinger, thought that one should use other variables, but would have considered the hidden variables of Bell very naive). Assuming, like Bell, a form of naive microscopic realism that would let one make sense, e.g., of spin projections along at least 3 directions, John Bell proved an inequality already known to Boole in the late nineteenth century for macroscopic properties, where only realism counts. The (nice) experiments supposed to "prove action at a distance" ONLY proved QM to be right, something that competent people did not doubt much anyway: they prove that realism and locality (absence of action at a distance, so to speak) cannot both hold true, but the only interesting question is whether realism (at least in the classical form, i.e., valid for all observables) holds true in the microcosm. A proof has just appeared in the European Journal of Physics to the effect that a Bell theorem holds true without assuming locality, en route, perhaps, to proving that (classical) realism is false.
 
  • #1,231
Bill, from reading the last two pages, this seems like a pretty straightforward example of you being mistaken, and JesseM being correct. Posting in bulk isn't changing this, or obscuring that fact in any way from those of us reading this thread. I just thought you might want that reality check.
 
  • #1,232
nismaratwork said:
Posting in bulk isn't changing this

Yeah, and the extremely funny thing is that Bill is accusing others of writing too loooooooooong posts!?

(:biggrin:)
 
  • #1,233
charlylebeaugosse said:
A proof has just appeared in the European Journal of Physics to the effect that a Bell theorem holds true without assuming locality, en route to prove that (classical) realism is false, perhaps.

Extremely interesting! Any links?


P.S. Welcome to PF charlylebeaugosse! :wink:
 
  • #1,235
Last edited:
  • #1,236
DrChinese said:
... Also, this author has written other articles claiming that Bell leads to a rejection of what he calls "weak realism".

I don’t know... but there seems to be other things that are a little "weak" also...? Like this:
"As a consequence classical realism, and not locality, is the common source of the violation by nature of all Bell Inequalities."

I may be stupid, but I always thought one has to make a choice between locality and realism? You can’t have both, can you?

And what is this?
"We prove versions of the Bell and the GHZ theorems that do not assume locality but only the effect after cause principle (EACP) according to which for any Lorentz observer the value of an observable cannot change because of an event that happens after the observable is measured."

To me this is contradictory. If you accept nonlocality, you must accept that the (nonlocal) effect comes before the cause (at speed of light)?
 
  • #1,237
DevilsAvocado said:
I don’t know... but there seems to be other things that are a little "weak" also...? Like this:
"As a consequence classical realism, and not locality, is the common source of the violation by nature of all Bell Inequalities."

I may be stupid, but I always thought one has to make a choice between locality and realism? You can’t have both, can you?

And what is this?
"We prove versions of the Bell and the GHZ theorems that do not assume locality but only the effect after cause principle (EACP) according to which for any Lorentz observer the value of an observable cannot change because of an event that happens after the observable is measured."

To me this is contradictory. If you accept nonlocality, you must accept that the (nonlocal) effect comes before the cause (at speed of light)?

There are some signs - and this is one, GHZ being another, and there are others too - that realism flat out fails no matter what. You could also simply say that reality is contextual and get the same effect. The time symmetry interpretations as well as MWI fall into this category. Pretty much all of the Bohmian/dBBers also acknowledge contextuality.

Keep in mind that in Delayed Choice setups, you can have after the fact entanglement. So that pretty much wrecks his EACP anyway.
 
  • #1,238
DrChinese said:
Keep in mind that in Delayed Choice setups, you can have after the fact entanglement. So that pretty much wrecks his EACP anyway.

Thanks DrC. Great to have you back as the "Concierge" in this messy thread... :wink:
 
  • #1,239
DevilsAvocado said:
Thanks DrC. Great to have you back as the "Concierge" in this messy thread... :wink:

More like the con rather than the concierge. :smile:

Hey, look at my post count! Although JesseM has been smearing me lately on post length...
 
Last edited:
  • #1,240
DrChinese said:
More like the con

But not on Shutter Island, right!?

(:biggrin:)
 
  • #1,241
Message to the Casual Reader

Maybe you are confused by what’s going on in this thread. And maybe you don’t know what to think about extensive and overcomplicated mathematical formulas, claiming to be a serious "rebuttal" of Bell's inequality.

Don’t worry. You are not alone. Let's untie this spurious "Gordian knot".

As already said – all this can be understood by a gifted 10-year-old (which includes DrC & Me, where the former is gifted :smile:).

Let’s start from the beginning, with Bell's theorem:
Wikipedia – Bell's theorem:

In theoretical physics, Bell's theorem (AKA Bell's inequality) is a no-go theorem, loosely stating that:
No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

It is the most famous legacy of the late physicist John S. Bell.

Bell's theorem has important implications for physics and the philosophy of science as it proves that every quantum theory must violate either locality or counterfactual definiteness.


Right there we can see that "some" in this thread have totally misinterpreted the very basics about Bell's theorem/Bell's inequality – Quantum Mechanics must violate either locality or counterfactual definiteness.

Bell's Theorem is not a diehard proof of nonlocality, never was, never will be.

Counterfactual definiteness (CFD) is another word for objective Realism, i.e. the ability to assume that objects, and the properties of objects, have definite values whether or not they are measured (or observed).

Therefore we can say: Bell's Theorem proves that QM must violate either Locality or Realism.

If we combine Locality and Realism, we get Local Realism (LR), i.e. an object is influenced directly only by its immediate surroundings, and has an objective existence even when not measured.

Now we can see that: Bell's Theorem proves that QM violates Local Realism (LR).

Local Realism just doesn’t work with current understanding of Quantum Mechanics. Note that this is a totally different thing than faster than light (FTL) messaging.



Furthermore we can see that, for example, billschnieder is convinced that Bell's Theorem is an empirical "law of nature", and that if he can find a mathematical flaw in this "law of nature", everything goes down the drain, including 45 years of hard work – which is of course utterly silly and stupid, because it’s not a "law of nature", it’s a Theorem:
http://en.wikipedia.org/wiki/Theorem

Theorems have two components, called the hypotheses and the conclusions. The proof of a mathematical theorem is a logical argument demonstrating that the conclusions are a necessary consequence of the hypotheses, in the sense that if the hypotheses are true then the conclusions must also be true, without any further assumptions. The concept of a theorem is therefore fundamentally deductive, in contrast to the notion of a scientific theory, which is empirical.


Deductive reasoning constructs or evaluates deductive arguments, which attempts to show that a conclusion necessarily follows from a set of premises.

Quantum mechanics, on the other hand, is an empirical scientific theory, where information is gained by means of observation, experience, or experiment.

billschnieder is comparing apples and oranges, without knowing what he's doing – in a last hysterical attempt to find some "flaw" in Bell's Theorem:
billschnieder said:
For a dataset of triples, Bell's inequality can never be violated, not even by spooky action at a distance! ... In other words, it is mathematically impossible to violate the inequalities for a dataset of triples, irrespective of the physical situation generating the data, whether it is local causality or FTL.


Pretty obvious, isn’t it? He’s fighting in the dark, totally obsessed with FTL, and completely in ignorance of the other half in Local Realism.

billschnieder is also convinced that he is in possession of the highest IQ of all times. That his simple "High School Freshman Discovery" has been overlooked by thousands of extremely brilliant scientists – including Nobel Laureates – where none of them saw this very simple "rebuttal": To violate Bell's inequality we need a dataset of TRIPLES from TWO entangled objects!

Besides being totally hilarious, it’s an inevitable fact that we are dealing with a clear case of the dreadful Dunning–Kruger effect: http://en.wikipedia.org/wiki/Dunning–Kruger_effect

Bell's Inequality is a concept, an idea, how to finally settle the long debate between Albert Einstein and Niels Bohr regarding the EPR paradox. Bell's Inequality is not one single mathematical solution – it can be defined in many ways – as DrChinese points out very well:
DrChinese said:
One of the things that it is easy to lose sight of - in our discussions about spin/polarization - is that a Bell Inequality can be created for literally dozens of attributes. Anything that can be entangled is a potential source. Of course there are the other primary observables like momentum, energy, frequency, etc. But there are secondary observables as well. There was an experiment showing "entangled entanglement", for example. Particles can be entangled which have never interacted, as we have discussed in other threads.

And in all of these cases, a realistic assumption of some kind leads to a Bell Inequality; that Inequality is tested; the realistic hypothesis is rejected; and the predictions of QM are confirmed.



There is not one single "Holy Grail of Inequality", as billschnieder assumes, and I’m going to prove it in a very simple example.

billschnieder thrives on complexity – the longer his futile equations get, the happier he gets – and that goes for his semantic games as well. billschnieder rejects everything that’s beautiful in its simplicity, where there is no room for his erratic ideas.

This example, by Nick Herbert, is known as one of the simplest proofs of Bell's Inequality (and I already know billschnieder is going to hate it :devil:):

The setup is standard, one source of entangled pair of photons, and two polarizers that we can position independently at different angles.

The entangled source is of that kind, that if both polarizers are set to 0º, we will get perfect agreement, i.e. if one photon gets thru one polarizer the other photon gets thru the other polarizer, and if one is stopped the other is also stopped, i.e. 100% match and 0% discordance.

To start, we set the first polarizer at +30º, and the second polarizer at 0º.

If we calculate the discordance (i.e. the fraction of measurements where we get a mismatched outcome: thru/stop or stop/thru), we get 25% according to QM and experiments.

Now, if we set the first polarizer to 0º, and the second polarizer to -30º.

And if we calculate this discordance we will naturally get 25% according to QM, this time also.

Now let’s use some of John Bell’s brilliant logic, and ask ourselves:

– What will the discordance be if we set the polarizers to +30º and -30º ...??

Well that isn’t hard, is it ...!:rolleyes:?

If we assume a local reality, that nothing we do to one polarizer can affect the outcome of the other polarizer, we can formulate this simple Bell Inequality:
N(+30°, -30°) ≤ N(+30°, 0°) + N(0°, -30°)

The symbol N represents the number of discordances (i.e. mismatches).

This inequality is as good as any other you’ve seen in this thread, anybody stating different is a crackpot liar.

(The "is less than or equal to" sign is just to show that there could be compensating changes where a mismatch is converted to a match, but this is not extremely important.)

We can make this simple Bell Inequality even simpler, for let’s say a gifted 10-yearold :smile::
50% = 25% + 25%

This is the obvious local realistic assumption.

But this is wrong! According to QM and physical experiments we will now get 75% discordance!
sin²(60º) = 75%

This is completely crazy!? How can the setting of one polarizer affect the discordance of the other, if reality is local?? It just doesn’t make sense!

But John Bell demonstrated, by means of very brilliant and simple tools, that our natural assumption about a local reality is incompatible with the predictions of Quantum Mechanics and with all physical experiments performed so far, by a margin of over 25 percentage points.
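The arithmetic above is easy to check numerically. Here is a minimal sketch, assuming the standard QM prediction that the mismatch (discordance) rate for a relative polarizer angle θ is sin²θ for this kind of entangled source:

```python
import math

def discordance(theta_deg):
    """QM-predicted mismatch rate for relative polarizer angle theta (degrees)."""
    return math.sin(math.radians(theta_deg)) ** 2

n_ab = discordance(30)  # polarizers at (+30 deg, 0 deg): about 25%
n_bc = discordance(30)  # polarizers at (0 deg, -30 deg): about 25%
n_ac = discordance(60)  # polarizers at (+30 deg, -30 deg): about 75%

# Local realism demands N(+30, -30) <= N(+30, 0) + N(0, -30)
print(n_ac <= n_ab + n_bc)  # False: the QM prediction violates the inequality
```

The 0.75 vs 0.25 + 0.25 gap is exactly the "1 + 1 = 3" punchline below, just with the percentages left in.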

We can simplify our inequality even further and say:
25 + 25 = 50

And divide by 25, to get this extremely simple local realistic Bell Inequality:
1 + 1 = 2

How simple can it be ?:-p?

Now we can see that QM predictions and experiments violate this simple inequality:
1 + 1 = 3 !:devil:!​

Conclusion: We do not need a dataset of triples, or miles of Bayesian probability, or conspiracy theories, or any overcomplicated math whatsoever – BECAUSE IT’S ALL VERY SIMPLE AND BEAUTIFUL.


Hope this was helpful, and that you now clearly see who the liar in this thread is.

Thanks for the attention.
 
Last edited by a moderator:
  • #1,242
DevilsAvocado said:
Local Realism just doesn’t work with current understanding of Quantum Mechanics.






Bell's words:

"-My theorem answers some of Einstein's questions in a way that Einstein would have liked the least."


responding to Einstein's:

"-On this I absolutely stand firm. The world is not like this."
 
  • #1,243
DevilsAvocado said:
As already said – all this can be understood by a gifted 10-year-old (which includes DrC & Me, where the former is gifted :smile:).

Let’s start from the beginning, with Bell's theorem:

...

Great post!

And I am gifted, because I got a present for my birthday! (The 10-year-old part represents my emotional age, by the way.)
 
  • #1,244
GeorgCantor said:
Bell's words:

"-My theorem answers some of Einstein's questions in a way that Einstein would have liked the least."


responding to Einstein's:

"-On this I absolutely stand firm. The world is not like this."

History has shown that the opinions of such men are less important than the work they leave behind. I think even dogs know at this point that Einstein was an uncompromising figure in the latter half of his life, searching for something which now seems even less likely. Should I raise a family in the manner of Dirac because he was brilliant? Bell's assertion is meaningless without his theorem, and Einstein's rebuttal is meaningless without a foundation.
 
  • #1,245
GeorgCantor said:
Bell's words:

"-My theorem answers some of Einstein's questions in a way that Einstein would have liked the least."


responding to Einstein's:

"-On this I absolutely stand firm. The world is not like this."

Georg, Sources, please?

Thank you, JenniT
 
  • #1,246
JenniT said:
Georg, Sources, please?

Thank you, JenniT



"Bell, in his first article on hidden variables and contextuality [9], wrote “the Einstein-Podolsky-Rosen paradox is resolved in the way which Einstein would have liked least.”"


Page 1 of:

"Einstein, Podolsky, Rosen, and Shannon"
Asher Peres
Department of Physics, Technion—Israel Institute of Technology, 32000 Haifa, Israel

http://arxiv.org/PS_cache/quant-ph/pdf/0310/0310010v1.pdf


The quote can also be found in "Quantum Reality" by N.Herbert with the insistence about spooky action "On this I absolutely stand firm. The world is not like this."
 
  • #1,247
nismaratwork said:
History has shown that the opinions of such men are less important than the work they leave behind. I think even dogs know at this point that Einstein was an uncompromising figure in the latter half of his life, searching for something which now seems even less likely. Should I raise a family in the manner of Dirac because he was brilliant? Bell's assertion is meaningless without his theorem, and Einstein's rebuttal is meaningless without a foundation.



You are arguing with yourself or an imaginary version of "me". It must be your fantasy that drives your misguided belief I implied their work wasn't important. I said no such thing.
 
  • #1,248
GeorgCantor said:
You are arguing with yourself or an imaginary version of "me". It must be your fantasy that drives your misguided belief I implied their work wasn't important. I said no such thing.

What was your point exactly?
 
  • #1,249
billschnieder said:
You do not understand probability either. Say I give you the following list of

++
--
-+
+-

And ask you to calculate P(++) from it. Clearly the probability is the number of times (++) occurs in the list divided by the number of entries in the list.
No, you can't calculate the probability just from the information provided, not if we are talking about objective frequentist probabilities rather than subjective estimates. After all, the nature of the physical process generating this list might be such that the frequency of ++ in a much greater number of trials would be something other than 0.25, and according to the frequentist definition P(++) is whatever fraction of trials would yield result ++ in the limit as the number of trials went to infinity.
billschnieder said:
So your "law of large numbers" cop-out is an approximation of the true probability not it's definition. You need to learn some basic probability theory here because you are way off base.
Again your argument seems to involve a casual dismissal of the frequentist view of probability, when it is an extremely mainstream way of defining the notion of "probability", and regardless of whether you like it or not, it's a pretty safe bet that Bell was tacitly assuming the frequentist definitions in his proofs, since they become fairly incoherent with any more subjective definition of probability (because they deal with "probabilities" of hidden variables that would be impossible for experimenters to measure).
JesseM said:
But then you use that to come to the absurd conclusion that in order to compare with empirical data, we need to make some assumptions about the distribution of values of λ on our three runs. We don't--Bell was writing for an audience of physicists, who would understand that whenever you talk about an "expectation value", the basic definition is always just a sum over each possible measurement result times the probability of that result
billschnieder said:
Sorry JesseM but that bubble has already been burst, when I proved conclusively that you do not know the meaning of "expectation value".
So you deny that the "expectation value" for a test which can yield any of N possible results R1, R2, ..., RN would just be [tex]1/N \sum_{i=1}^N R_i * P(R_i )[/tex]? (where P(R) is the probability distribution function that gives the probability for each possible Ri) This is the definition of "expectation value" I used, and if you deny that this is true for a test with a finite set of possible results (like the measurement of spin for two entangled particles), then it is you who fails to understand the basic meaning of the term "expectation value". If you agree with this definition but think I have somehow been failing to use it in my own arguments, then you are misunderstanding something, please clarify.
billschnieder said:
To show how silly this adventitious argument of yours is, I asked you a simple question and dare you to answer it:

You are given a theoretical list of N pairs of real-valued numbers x and y. Write down the mathematical expression for the expectation value for the paired product.
It's impossible to write down the correct objective/frequentist expectation value unless we know the sample space of possible results (all possible pairs, which might include possibilities that don't appear on the list of N pairs) along with the objective probabilities of each result (which may be different from the frequency with which the result appears on your list, although you can estimate the objective probability based on the empirical frequency if N is large...it's better if you have some theory that gives precise equations for the probability like QM though).
billschnieder said:
Once you have done that, try and swindle your way out of the fact that
"Swindle", nice. You stay classy Bill!
billschnieder said:
a) The structure of the expression so derived does not depend on the actual value N. ie, N could be 5, 100, or infinity.
If you know the objective probabilities, then it doesn't even depend on the results that happen to appear on the list! But if you're just trying to estimate the true probabilities based on the frequencies on the list, then the accuracy of your estimates (as compared to the actual true probabilities) is likely to be higher the greater N is.
billschnieder said:
b) The expression so derived is a theoretical expression not "empirical".
If you are estimating the probabilities based on the frequencies on the list, then I would call this an empirical estimate of the expectation value, which may be different from the true expectation value. For example, if I know based on theory that a certain test has an 0.5 chance of giving result +1 and an 0.5 chance of giving result -1, then the expectation value is (+1)*(0.5) + (-1)*(0.5)=0. On the other hand, if I don't know the true probabilities of +1 and -1 and am just given a list of results with 51 results that are +1 and 49 results that are -1, then my estimate of the expectation value would be (+1)*(0.51) + (-1)*(0.49) = 0.02, close to the theoretically-derived expectation value of 0 but slightly off.
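The ±1 example above can be sketched in a few lines (the 0.5/0.5 probabilities and the 51/49 sample are the hypothetical numbers from the post):

```python
# True expectation value from known probabilities: sum of result * probability
outcomes = [+1, -1]
true_probs = [0.5, 0.5]
true_E = sum(r * p for r, p in zip(outcomes, true_probs))  # 0.0

# Empirical estimate from a finite sample: 51 results of +1, 49 of -1
sample = [+1] * 51 + [-1] * 49
empirical_E = sum(sample) / len(sample)  # 0.02

print(true_E, empirical_E)
```

The two numbers are close but not equal, which is the whole distinction being drawn: the empirical average estimates the expectation value, it does not define it.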
billschnieder said:
c) The expression so derived is the same as the simple average of the paired products.
Not if you know (or can calculate theoretically) the true probabilities of different results, and they are different from the fraction of trials with each result that appear on the list.
JesseM said:
So for example, if one run with settings (a,b) included three trials where λ took the value λ3, while another run with settings (b,c) included no trials where it took the value λ3, this wouldn't imply that ρ(λi) differed in the integrals for E(a,b) and E(b,c)? Because your comment at the end of post #1224 suggests you are still confusing the issue of what it means for the "true probabilities" ρ(λi) to differ depending on the detector settings and what it means for the actual frequencies of different values of λi to differ on runs with different detector settings
billschnieder said:
You are sorely confused. Note I use ρ(λi) not P(λi) to signify that we are dealing with a probability distribution, which is essentially a function defined over the space of all λ, with integral over all λ equal to 1.
P(λi) is also a type of probability distribution; the only difference between ρ(λi) and P(λi) is that ρ(λi) is a continuous probability density function (based on the assumption that λ can take a continuous range of values) while P(λi) is a discrete probability distribution--I have in some posts made the simplifying assumption that λ can only take a finite set of possible values rather than being a continuous variable; it makes no real difference to Bell's argument which one we assume.
billschnieder said:
If the (a,b) run included N iterations with three of those corresponding to λ3, P(λ3) for our dataset = 3/N. But if in a different run of the experiment (b,c) none of the λ's was λ3, P(λ3) = 0 for our dataset. It therefore means the probability distribution ρ(λi) cannot be the same for E(a,b) and E(b,c)
No, it doesn't mean that, because the ρ(λi) that appears in Bell's equations (along with the P(λi) that appears in the discrete version) is pretty clearly supposed to be an objective probability function of the frequentist type. Anyone who understands what it means to say that for a fair coin P(heads)=0.5 even if an actual series of 20 flips yielded 11 heads and 9 tails should be able to see the difference between the two.

Again, no one is asking you to agree that frequentist definitions are the "best" ones to use in ordinary situations where we are trying to come up with probability estimates from real data, but you can't really deny they are widely used in theoretical arguments involving probabilities, so you might at least consider whether Bell's arguments make sense when interpreted in frequentist terms. If you simply refuse to even talk about the frequentist notion of probability because you have such a burning hatred for it, then probably you're not really interested in trying to understand Bell's argument in its own terms (i.e., how Bell and other physicists would conceive the argument), but are just trying to make a rhetorical case against it based on showing that it becomes incoherent when we interpret the probabilities in non-frequentist terms.
billschnieder said:
According to Bell, E(a,b) is calculated by the following sum

a1*b1*P(λ1) + a2*b2*P(λ2) + ... + an*bn*P(λn) where n is the total number of possible distinct lambdas.
Sure.
billschnieder said:
ρ(λ) is a function which maps a specific λi to its probability P(λi).
Huh? P(λi) is already a function that maps each specific λi to a probability. Bell just uses the greek letter [tex]\rho[/tex] to indicate he's talking about a probability density function on a variable λ which is assumed to be continuous--the "probability density" for a specific value of λ would then not be an actual probability, instead if you want to know the probability that λ was in some finite range (say, between 0.4 and 0.5) you'd integrate the probability density function in that range, and that would give the probability. That's why Bell writes "It is a matter of indifference in the following whether λ denotes a single variable or a set, or even a set of functions, and whether the variables are discrete or continuous. However, we write as if λ were a single continuous parameter ... ρ(λ) is the probability distribution of λ". It's common in QM to use ρ to refer to a probability density, see here and here for example.
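The density-versus-probability distinction can be illustrated with a toy example (a sketch using an arbitrarily chosen uniform density on [0, 1], not anything from Bell's paper): the density value at a point is not a probability; only its integral over a range is.

```python
def rho(lam):
    """Hypothetical uniform density on [0, 1]; integrates to 1 over its support."""
    return 1.0 if 0.0 <= lam <= 1.0 else 0.0

# P(0.4 <= lambda <= 0.5) = integral of rho over [0.4, 0.5],
# approximated here with a simple midpoint rule
a, b, n = 0.4, 0.5, 1000
h = (b - a) / n
prob = sum(rho(a + (i + 0.5) * h) for i in range(n)) * h
print(prob)  # approximately 0.1, even though rho(0.45) == 1.0
</n```

Note that rho(0.45) is 1.0, a density value; the actual probability of landing in the narrow range [0.4, 0.5] is only about 0.1.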
billschnieder said:
By definition therefore, if the function ρ(λ) is the same for two runs of the experiment, it must produce the same P(λi) for both cases. In other words, if it produced different values of P(λi) such as 3/N in one case and 0 in another, it means ρ(λ) is necessarily different between the two and the runs can not be used together as a valid source of terms for comparing with Bell's inequality.
Not if we are defining probabilities in a frequentist sense, and I think any physicist reading Bell's work would understand that in his theoretical proof he is indeed using the frequentist definition, so having the same probability distribution for different detector settings need not imply that the frequency of a given λi would actually be exactly the same for two finite runs with different detector settings (just like the claim that two fair coins both have P(heads)=0.5 does not imply that two runs of ten flips with each coin will each produce exactly five heads).
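The coin analogy is easy to simulate (a sketch; the seed is arbitrary). Both runs draw from the same true P(heads) = 0.5, yet two finite runs of 20 flips need not show exactly 10 heads each, just as two runs with different detector settings need not show identical frequencies of each λi even if ρ(λ) is the same:

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility only

def heads_in_run(n_flips, p_heads=0.5):
    """Count heads in one finite run; the TRUE probability is p_heads regardless."""
    return sum(random.random() < p_heads for _ in range(n_flips))

run_a = heads_in_run(20)
run_b = heads_in_run(20)
print(run_a, run_b)  # finite runs generally differ from each other and from 10

# In the frequentist limit, the relative frequency approaches the true P(heads)
freq = heads_in_run(100_000) / 100_000
print(freq)  # close to 0.5
```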
billschnieder said:
JesseM said:
billschnieder said:
But practically, you could obtain the same E(a,b) by calculating a simple average over a representative set of outcomes in which the frequency of realization of a specific λ, is equivalent to it's probability. ie

For example, if we had only 3 possible λ's (λ1, λ2, λ3) with probabilities (0.3, 0.5, 0.2) respectively. The expectation value will be
E(a,b) = 0.3*A(a,λ1)*B(b,λ1) + 0.5*A(a,λ2)*B(b,λ2) + 0.2*A(a,λ3)*B(b,λ3)

Where each outcome for a specific lambda exists exactly once. OR we can calculate it using a simple average, from a dataset of 10 data points, in which A(a,λ1),B(b,λ1) was realized exactly 3 times (3/10 = 0.3), A(a,λ2), B(b,λ2) was realized 5 times, and A(a,λ3), B(b,λ3) was realized 2 times; or any other such dataset of N entries where the relative frequencies are representative of the probabilities. Practically, this is the only way available to obtain expectation values, since no experimenter has any idea what the λ's are or how many of them there are.
You really think that this is the "exact same thing" as what I was saying? Here your "practical" average requires us to know which value of λ occurred on each trial
Oh come on! At least be honest about what you claim I am saying! Why would you need to know λ for each trial if you are calculating a simple average!?
OK, I missed the bolded sentence, but I don't understand how the stuff that preceded it can possibly be consistent with the idea that the experimenter doesn't know what the λ's are. How does the experimenter know that "A(a,λ1),B(b,λ1) was realized exactly 3 times" if he has no idea whether λ1 or some other λ occurred on a given trial? How would you know whether your outcomes were "a representative set of outcomes in which the frequency of realization of a specific λ, is equivalent to it's probability" if you had no idea what the frequency was that each specific λ was realized? Once again your explanation is totally confusing to me, and I suspect to other readers as well, but anytime I misunderstand instead of helpfully correcting me you immediately jump down my throat and accuse me of not being "honest".

Also, what does it even mean to say that a set of outcomes is "representative" if "the frequency of realization of a specific λ, is equivalent to it's probability" when you are using a non-frequentist definition of probability? If we have a set of 3000 outcomes and we somehow know that λ1 occurred on 30 of those, are you using a definition of "probability" where that would automatically imply that the probability of λ1 given that data must be 0.01? (that's what seemed to be implied by your comment quoted at the start that 'Clearly the probability is the number of times (++) occurs in the list divided by the number of entries in the list') For a frequentist the "true" probability of λ1 could certainly be different from 0.01 since the fraction of outcomes with λ1 might approach some other value in the limit as the number of trials approached infinity, but from the way you are defining probabilities it seems like the fraction of trials where λ1 occurs is by definition said to be the "probability" of λ1, so I don't see how any set of outcomes could fail to be "representative". If you are not defining the probability of an event as just the fraction of trials in the dataset where that event occurred, please clarify your definition.

And once again, regardless of your definition, will you at least consider whether Bell's proof makes sense if the probabilities are interpreted in frequentist terms? It seems like most of your critique is based on the assumption that he is defining probabilities in terms of actual outcomes on some finite set of trials, but if he was assuming more "objective" frequentist definitions then this would be a giant strawman argument.
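For what it's worth, the "representative dataset" idea quoted earlier in this exchange can be sketched with hypothetical numbers (the actual products A(a,λi)·B(b,λi) are never specified in the thread, so the ±1 values below are placeholders):

```python
# Hypothetical products A(a,li)*B(b,li) for three lambda values
product = {"l1": +1, "l2": -1, "l3": +1}
prob    = {"l1": 0.3, "l2": 0.5, "l3": 0.2}  # probabilities from the quoted example

# Expectation value as a probability-weighted sum over distinct lambdas
weighted_E = sum(product[k] * prob[k] for k in product)

# Simple average over a "representative" 10-point dataset:
# l1 occurs 3 times (0.3), l2 occurs 5 times (0.5), l3 occurs 2 times (0.2)
dataset = [product["l1"]] * 3 + [product["l2"]] * 5 + [product["l3"]] * 2
simple_avg = sum(dataset) / len(dataset)

print(weighted_E, simple_avg)  # equal (up to floating-point rounding)
```

The two agree only because the dataset's frequencies were constructed to match the probabilities exactly, which is precisely the point in dispute: a real finite run gives no such guarantee.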
 
Last edited:
  • #1,250
JesseM said:
No, you can't calculate the probability just from the information provided, not if we are talking about objective frequentist probabilities rather than subjective estimates. After all, the nature of the physical process generating this list might be such that frequency of ++ in a much greater number of trials would be something other than 0.25, and according to the frequentist definition P(++) is whatever fraction of trials would yield result ++ in the limit as the number of trials went to infinity.
Who said anything about a physical process? I've given you an abstract mathematical list, and you can't bring yourself to admit that you were wrong, to the point that you are making yourself look foolish. P(++) for the list I gave you is 1/4; even a cave man can understand that level of probability theory, Jesse! Are you being serious, really?

JesseM said:
billschnieder said:
So your "law of large numbers" cop-out is an approximation of the true probability not it's definition. You need to learn some basic probability theory here because you are way off base.
Again your argument seems to involve a casual dismissal of the frequentist view of probability, when it is an extremely mainstream way of defining the notion of "probability"

Who said anything about a frequentist view? All I did was point out to you a basic mainstream fact in probability theory:

Wikipedia (http://en.wikipedia.org/wiki/Law_of_large_numbers):
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

So you are way off base and I am right to say that you do not understand probability theory.

So you deny that the "expectation value" for a test which can yield any of N possible results R1, R2, ..., RN would just be
[tex]1/N \sum_{i=1}^N R_i * P(R_i )[/tex] ?

(where P(R) is the probability distribution function that gives the probability for each possible Ri)

Again you are way off base. In probability theory, when using the probability of an R as a weight in calculating the expectation value, you do not need to divide the sum by N again. That will earn you an F grade. The correct expression should be:

[tex]\sum_{i}^{N} R_i * P(R_i )[/tex]

For example, if N is 3 and the probabilities of R1, R2 and R3 are (0.3, 0.5, 0.2), the expectation value will be R1*0.3 + R2*0.5 + R3*0.2, NOT (R1*0.3 + R2*0.5 + R3*0.2)/3 !
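That correction is easy to verify with placeholder numbers (the R values below are hypothetical, since the post leaves them unspecified):

```python
# Expectation value as a probability-weighted sum: sum of R_i * P(R_i),
# with NO extra division by N
R = [2.0, 4.0, 6.0]   # hypothetical results R1, R2, R3
P = [0.3, 0.5, 0.2]   # their probabilities, summing to 1

E = sum(r * p for r, p in zip(R, P))
wrong_E = E / len(R)  # the extra division by N being objected to

print(E, wrong_E)
```

Since the probabilities already sum to 1, they do the job that dividing by N does in a plain arithmetic mean; dividing again shrinks the result by a factor of N.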
 
Last edited by a moderator:
  • #1,251
JesseM said:
billschnieder said:
You are given a theoretical list of N pairs of real-valued numbers x and y. Write down the mathematical expression for the expectation value for the paired product.
It's impossible to write down the correct objective/frequentist expectation value unless we know the sample space of possible results (all possible pairs, which might include possibilities that don't appear on the list of N pairs) along with the objective probabilities of each result (which may be different from the frequency with which the result appears on your list, although you can estimate the objective probability based on the empirical frequency if N is large...it's better if you have some theory that gives precise equations for the probability like QM though).

Wow! The correct answer is <xy>

Wikipedia:
http://en.wikipedia.org/wiki/Mean
In statistics, mean has two related meanings:

* the arithmetic mean (and is distinguished from the geometric mean or harmonic mean).
* the expected value of a random variable, which is also called the population mean.

There are other statistical measures that use samples that some people confuse with averages - including 'median' and 'mode'. Other simple statistical analyses use measures of spread, such as range, interquartile range, or standard deviation. For a real-valued random variable X, the mean is the expectation of X.
You really do not know anything about probability.

JesseM said:
"Swindle", nice. You stay classy Bill!
I didn't think it was possible to swindle that one. But you found a way. Foolish me for thinking a blatant fact will be too difficult for you to swindle.

JesseM said:
No, it doesn't mean that, because the ρ(λi) that appears in Bell's equations (along with the P(λi) that appears in the discrete version) is pretty clearly supposed to be an objective probability function of the frequentist type.

Oh, so now you are abandoning your law of large numbers again because it suits your argument. Remember the underlined text, because it will haunt you later when you try to argue that expectation values calculated from three different runs of an experiment can be used as terms for comparison with Bell's inequality. You are way off base, as you recognize yourself in the following comment:

JesseM said:
Again, no one is asking you to agree that frequentist definitions are the "best" ones to use in ordinary situations where we are trying to come up with probability estimates from real data...
Right after arguing that the probabilities I got from real data are not the correct ones, you go right ahead and argue that the frequentist view (which, btw, is what I used in the statement you were objecting to) is the "best" one to use. And yet you still manage to imply that I disagree with the frequentist view? Only JesseM can do this kind of swindling. It is professional grade indeed.

From the number of times you have suddenly invoked the word "frequentist" in the latest post of yours, it seems you would rather we abandon this discussion and start one about definitions of probability of which your favorite is frequentist. But I'm not interested in that discussion, thank you for asking subtly though. I understand that you plan to argue next that unless the frequentist view is used, Bell's work can not be understood correctly. Even though I will not agree with such a narrow view, let me pre-empt that and save you a lot of effort by pointing you to the fact that in my arguments above explaining Bell's work, I have been using the frequentist view.
 
  • #1,252
JesseM said:
but I don't understand how the stuff that preceded it can possibly be consistent with the idea that the experimenter doesn't know what the λ's are. How does the experimenter know that "A(a,λ1),B(b,λ1) was realized exactly 3 times" if he has no idea whether λ1 or some other λ occurred on a given trial?

You do not get it. It is their only hope if they are trying to obtain empirical estimates of the true expectation values. This is the whole point! They can't just measure crap and plug it into Bell's equations unless they can ascertain that it is a damn good estimate of the true expectation values! If it is a very good estimate, then the probability distribution of λ in their sample will not be significantly different from the true probability distribution of λ. A representative sample is one in which those two probability distributions are not significantly different. That is why the fair sampling assumption is made! The part you quoted is explaining to you the meaning of expectation value in abstract terms.

JesseM said:
How would you know whether your outcomes were "a representative set of outcomes in which the frequency of realization of a specific λ, is equivalent to it's probability" if you had no idea what the frequency was that each specific λ was realized?
Again, that is the whole point. Without knowing λ, the experimenters have no way of making sure that the sample they used is representative; the best they can do is ensure that the empirical probability distributions in the datasets used to calculate their three terms are not significantly different. And they can make sure of that by sorting the data the way I described. In that case, Bell's inequality is guaranteed to be obeyed. So they cannot make sure the sample of λ is representative, but they can verify that the empirical distributions match.

I hope that you will find time out of your busy schedule to comment on this example I presented:

billschnieder said:
For example:

Let us define A(a,λ) = +/-1 , B(a,λ) just like Bell and say that the functions represent the outcome of two events on two stations one on Earth (A) and another (B) on planet 63, and in our case λ represents non-local mystical processes which together with certain settings on the planets uniquely determine the outcome. We also allow in our spooky example for the setting a on Earth to remotely affect the choice of b instantaneously and vice versa. Note in our example, there is no source producing any entangled particles, everything is happening instantaneously.

The expectation value for the paired product of the outcomes at the two stations is exactly the same as Bell's equation (2). If you disagree, explain why it would be different or admit that the physical assumptions are completely peripheral.
 
  • #1,253
Bill, give it up, I don't know where you're getting the ideas you espouse here, but JesseM is tearing them apart. I'll say it again, you can post in bulk, but it doesn't change that your posts are rambling and borderline-crackpot, whereas JesseM is sticking to the science.

You keep saying things such as, "[JesseM] doesn't know anything about probability," which having read the last 20 pages or so, is laughable! You are talking pure crap, and he's calling you on every point. As one of the "casual readers" DevilsAvocado refers to, please, take your personal Quixote complex to PMs and let this thread become readable again. I for one am tired of JesseM having to go through your endless multiple posts, line by line to try and reason with you.

You can keep harping on [tex]\lambda[/tex], but it's only in the context of what seems to be your own nearly religious belief here. You clearly have no idea what the significance of Bell or a BSM is, and your own concocted standards for what "the whole point" is have no bearing on the current science. Why not start a blog where you can rant and rail to your heart's content, and spare the thread the clutter?
 
  • #1,254
billschnieder said:
Who said anything about a physical process? I've given you an abstract mathematical list, and you can't bring yourself to admit that you were wrong, to the point that you are making yourself look foolish. P(++) for the list I gave you is 1/4; even a caveman can understand that level of probability theory, Jesse! Are you being serious, really?
Yes, Bill. Would you deny, for example, that a physical process that had P(++)=0.3, P(+-)=0.2, P(-+)=0.15, and P(--)=0.35 (with all of these numbers being the frequentist probabilities that would represent the fraction of trials with each value in the limit as the number of trials goes to infinity) could easily generate the following results on 4 trials?

++
--
-+
+-
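The likelihood of that exact four-trial record under the stated probabilities is simple to compute. A short Python sketch, assuming independent trials (my assumption, not stated explicitly in the post):

```python
# Frequentist probabilities for the four joint outcomes, taken from the post above.
P = {'++': 0.3, '+-': 0.2, '-+': 0.15, '--': 0.35}

# Probability of seeing the exact sequence ++, --, -+, +- on four
# independent trials: the product of the individual outcome probabilities.
sequence = ['++', '--', '-+', '+-']
p_seq = 1.0
for outcome in sequence:
    p_seq *= P[outcome]

print(p_seq)  # 0.3 * 0.35 * 0.15 * 0.2 = 0.00315
```

The point is that this probability is nonzero: a distribution whose true probabilities are nowhere near 1/4 can still easily produce a four-trial sample in which each outcome appears exactly once.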
billschnieder said:
Who said anything about the frequentist view?
I did. It's the only notion of "probability" that I've been using the whole time, perhaps if you go back and look at some of the posts of mine you thought didn't make sense and read them in this light you will understand them better (also, note that I'm not talking about 'finite frequentism', but 'frequentism' understood in terms of the limit as the number of trials goes to infinity--see below for a link discussing the difference between the two). For example, if we are talking about the frequentist view of probability, the mere fact that you got ++ once on a set of four trials does not imply P(++)=0.25...do you disagree?
billschnieder said:
All I did was point out to you a basic mainstream fact in probability theory:

Wikipedia (http://en.wikipedia.org/wiki/Law_of_large_numbers):
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
So you are way off base and I am right to say that you do not understand probability theory.
Note that the wikipedia article says "close to the expected value", not "exactly equal to the expected value". And note that this is only said to be true in a large number of trials, the article does not suggest that if you have only four trials the average on those four trials should be anywhere near the expectation value. Finally, note that in the forms section of the article they actually distinguish between the "sample average" and the "expected value", and say that the "sample average" only "converges to the expected value" in the limit as n (number of samples) approaches infinity. So, it seems pretty clear the wikipedia article is using the frequentist definition as well.
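The "close to, not exactly equal to" distinction is easy to see in a quick simulation. This sketch (the 0.7 probability and the sample sizes are my own illustrative choices) draws from a two-outcome distribution whose true expectation value is 0.7 and compares sample averages at different sample sizes:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Two-outcome variable: 1 with probability 0.7, 0 with probability 0.3,
# so the true (frequentist) expectation value is exactly 0.7.
def sample_mean(n):
    return sum(1 if random.random() < 0.7 else 0 for _ in range(n)) / n

small = sample_mean(4)       # tiny sample: can easily be far from 0.7
large = sample_mean(100000)  # large sample: close to 0.7, per the LLN

print(small, large)
```

With four trials the sample average can only take the values 0, 0.25, 0.5, 0.75 or 1, so it can never equal 0.7; only as the number of trials grows does the average converge to the expectation value.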
JesseM said:
So you deny that the "expectation value" for a test which can yield any of N possible results R1, R2, ..., RN would just be [tex]1/N \sum_{i=1}^N R_i * P(R_i )[/tex]? (where P(R) is the probability distribution function that gives the probability for each possible Ri)
billschnieder said:
Again you are way off base. In probability theory, when using the probability of an R as a weight in calculating the expectation value, you do not need to divide the sum by N again. That will earn you an F grade. The correct expression should be:

[tex]\sum_{i=1}^{N} R_i * P(R_i )[/tex]
Yes, here you did catch me in an error, I wrote down the expression too fast without really thinking carefully, I guess I got confused by all the other sums which did include 1/N on the outside. Before you brandish this as proof that I "don't know probability", note that in previous posts I did write it down correctly, for example in post #1205:
JesseM said:
In general, if you have some finite number N of possible results Ri for a given measurement, and you know the probability P(Ri) for each result, the "expectation value" is just:

[tex]E = \sum_{i=1}^N R_i * P(R_i )[/tex]

If you perform a large number of measurements of this type, the average result over all measurements should approach this expectation value.
And in post #1218:
JesseM said:
Physical assumptions are peripheral to calculating averages from experimental data, it's true, and they're also peripheral to writing down expectation values in terms of the "true" probabilities as I did when I wrote [tex]E(R) = \sum_{i=1}^N R_i * P(R_i)[/tex],
Anyway, now that we seem to be agreed that the correct form for the expectation value is [tex]E = \sum_{i=1}^N R_i * P(R_i )[/tex] (though I am sure we would disagree on the meaning of P(Ri) since I define it in frequentist terms as the fraction of trials that would give result Ri in the limit as the number of trials goes to infinity), can you tell me if you think I was incorrect to write the expectation value as follows?

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

This equation does have the form [tex]E = \sum_{i=1}^N R_i * P(R_i )[/tex] does it not? If you don't object to the claim that the above is at least one way of defining E(a,b), then why in post #1221 did you object as follows?
billschnieder said:
So when you say:
JesseM said:
This expectation value is understood as a sum of the different possible measurement outcomes weighted by their "true" probabilities:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

...

The comment above is completely misguided, since the basic definition of "expectation value" in this experiment has nothing at all to do with knowing the value of λ, it is just understood to be:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)
It clearly shows that you do not understand probability or statistics. Clearly the definition of expectation value is based on a probability-weighted sum, and the law of large numbers is used as an approximation; that is why it says in the last sentence above that the expectation value is "almost surely the limit of the sample mean as the sample size grows to infinity"
billschnieder said:
JesseM said:
billschnieder said:
You are given a theoretical list of N pairs of real-valued numbers x and y. Write down the mathematical expression for the expectation value for the paired product.
It's impossible to write down the correct objective/frequentist expectation value unless we know the sample space of possible results (all possible pairs, which might include possibilities that don't appear on the list of N pairs) along with the objective probabilities of each result (which may be different from the frequency with which the result appears on your list, although you can estimate the objective probability based on the empirical frequency if N is large...it's better if you have some theory that gives precise equations for the probability like QM though).
Wow! The correct answer is <xy>

Wikipedia:
http://en.wikipedia.org/wiki/Mean
In statistics, mean has two related meanings:

* the arithmetic mean (and is distinguished from the geometric mean or harmonic mean).
* the expected value of a random variable, which is also called the population mean.

There are other statistical measures that use samples that some people confuse with averages - including 'median' and 'mode'. Other simple statistical analyses use measures of spread, such as range, interquartile range, or standard deviation. For a real-valued random variable X, the mean is the expectation of X.

You really do not know anything about probability.
Here the wikipedia article is failing to adequately distinguish between the "mean" of a finite series of trials (or any finite sample) and the "mean" of a probability distribution (edit: See for example this book which distinguishes the 'sample mean' [tex]\bar X[/tex] from the 'population mean' [tex]\mu[/tex], and says the sample mean 'may, or may not, be an accurate estimation of the true population mean [tex]\mu[/tex]. Estimates from small samples are especially likely to be inaccurate, simply by chance.' You might also look at this book which says 'We use [tex]\mu[/tex], the symbol for the mean of a probability distribution, for the population mean', or this book which says 'The mean of a discrete probability distribution is simply a weighted average (discussed in Chapter 4) calculated using the following formula: [tex]\mu = \sum_{i=1}^n x_i P[x_i ][/tex]'). If you think the expectation value is exactly equal to the average of a finite series of trials, regardless of whether the number of trials is large or small, then you are disagreeing with the very wikipedia quote you posted earlier from the Law of Large Numbers page:
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
According to you, would it be more correct to write "the average of the results obtained from any number of trials would be exactly equal to the expected value"? If you do, then your view is in conflict with the quote above. And if you don't think the average from a finite number of trials is exactly equal to the expectation value, then you were incorrect to write "Wow! The correct answer is <xy>" above.
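The sample-mean/population-mean distinction those books draw takes only a few lines to demonstrate. A minimal Python sketch (the die example is mine, not from the cited books):

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Population mean of a discrete distribution: mu = sum over x of x * P(x).
# For a fair six-sided die, mu = 3.5 exactly.
values = [1, 2, 3, 4, 5, 6]
mu = sum(x * (1 / 6) for x in values)

# Sample mean of a small sample: an estimate of mu, not mu itself.
sample = [random.choice(values) for _ in range(5)]
x_bar = sum(sample) / len(sample)

print(mu, x_bar)  # mu is 3.5; x_bar varies from sample to sample
```

The population mean mu is fixed by the distribution, while the sample mean x_bar is a random quantity that scatters around mu, more tightly as the sample grows.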
JesseM said:
No, it doesn't mean that, because the ρ(λi) that appears in Bell's equations (along with the P(λi) that appears in the discrete version) is pretty clearly supposed to be an objective probability function of the frequentist type.
billschnieder said:
Oh, so now you are abandoning your law of large numbers again because it suits your argument.
Um, how am I doing that? I said "objective probability function of the frequentist type" above (and again, you can assume that all my comments about probabilities assumed a frequentist definition, it might help you avoid leaping to silly false conclusions about what I'm arguing), do you understand that this would be a function where the "probability" it assigns to any outcome is equal to the fraction of trials where that outcome would occur in the limit as the number of trials went to infinity? And if I'm defining probabilities in terms of the limit as the number of trials goes to infinity, I'm pretty clearly making use of the law of large numbers, no?
billschnieder said:
Remember the underlined text, because it will haunt you later when you try to argue that expectation values calculated from three different runs of an experiment can be used as terms for comparison with Bell's inequality.
You can't calculate "expectation values" from three runs with a finite series of trials, not in my way of thinking (I have never said otherwise, if you think I did you misread me). You can only calculate the sample average from a finite run of trials. However, by the law of large numbers, the bigger your sample, the smaller the probability that your sample average will differ significantly from the "true" expectation value determined by the "true" probabilities (again with 'true' probabilities defined in frequentist terms)
JesseM said:
Again, no one is asking you to agree that frequentist definitions are the "best" ones to use in ordinary situations where we are trying to come up with probability estimates from real data...
billschnieder said:
Right after arguing that the probabilities I got from real data are not the correct ones, you go right ahead and argue that the frequentist view (which, btw, is what I used in the statement you were objecting to) is the "best" one to use.
Again, the usual modern meaning of the "frequentist" view is that the probability of some outcome is just the fraction of trials with that outcome in the limit as the number of trials goes to infinity, not in any finite series of trials (see here and http://books.google.com/books?id=Q1AUhivGmyUC&lpg=PA80&dq=frequentism&pg=PA80#v=onepage&q=frequentism&f=false and p.9 here for example...the Stanford Encyclopedia of Philosophy article also refers to something called 'finite frequentism', but modern authors usually use 'frequentism' to mean the definition involving the limit as number of trials approaches infinity, and in any case this is what I have always meant by 'frequentism', I'm certainly not talking about finite frequentism)

And I am only arguing that the frequentist view is the "best" one to use for understanding the meaning of the probabilities in Bell's theoretical argument, not for estimating probabilities based on empirical data. All I want to know is whether you are willing to consider whether your argument about the limited applicability of Bell's proof (that it can't be applied to three separate lists of pairs which can't be resorted in the way you discussed in #1208) would not apply if we interpret the probabilities in Bell's argument in frequentist terms. Can you please tell me, yes or no, are you willing to consider whether Bell's proof might allow us to make broad predictions about three runs which each yield a distinct list of pairs, if we do indeed interpret the probabilities in his theoretical argument in (non-finite) frequentist terms?
billschnieder said:
From the number of times you have suddenly invoked the word "frequentist" in the latest post of yours, it seems you would rather we abandon this discussion and start one about definitions of probability of which your favorite is frequentist.
I don't want a discussion of definitions of probability. Whenever I have been talking about probabilities I have been assuming frequentist definitions, and only lately have I noticed that your argument seems to depend critically on the fact that you are using non-frequentist definitions (or 'finite frequentist' definitions if you prefer), which is why I have started trying to be explicit about it. Even if you don't like the frequentist definition in general, all I'm asking is that you consider the possibility that Bell's own probabilities might have been intended to be interpreted in frequentist terms, and that the supposed problems with his argument might disappear if we do interpret symbols like ρ(λ) in this light.
billschnieder said:
I understand that you plan to argue next that unless the frequentist view is used, Bell's work can not be understood correctly. Even though I will not agree with such a narrow view, let me pre-empt that and save you a lot of effort by pointing you to the fact that in my arguments above explaining Bell's work, I have been using the frequentist view.
Your arguments may have been assuming the "finite frequentist" view, but as I said that's not what I'm talking about. I'm talking about the more common "frequentist" view that defines objective probabilities in terms of the limit as the number of trials goes to infinity. Are you willing to discuss whether Bell's argument makes sense (and doesn't have the problem of limited applicability that you point to) if we assume the probabilities in his theoretical argument were also meant to be understood in the same "frequentist" sense that I'm talking about here?
 
Last edited by a moderator:
  • #1,255
DevilsAvocado said:
I don’t know... but there seems to be other things that are a little "weak" also...? Like this:
"As a consequence classical realism, and not locality, is the common source of the violation by nature of all Bell Inequalities."

I may be stupid, but I always thought one has to make a choice between locality and realism? You can’t have both, can you?

And what is this?
"We prove versions of the Bell and the GHZ theorems that do not assume locality but only the effect after cause principle (EACP) according to which for any Lorentz observer the value of an observable cannot change because of an event that happens after the observable is measured."

To me this is contradictory. If you accept nonlocality, you must accept that the (nonlocal) effect comes before the cause (at speed of light)?


The Effect After Cause Principle (EACP) states ONLY that:
For any Lorentz observer O, once an effect E of cause C is observed by observer O, no fiddling with C can change E.

Now the cause could happen after the effect, despite the EACP, if non-locality held true; the cause would merely have to be compatible with the observed effect. Most physicists would admit that an observation once made cannot be changed, even if these physicists believe that non-locality holds true. So the EACP, once understood properly (and stated properly to that effect), should not be a problem for most (should I say "any"?) physicists.

This being said, citing delayed choice experiments against the EACP (as someone has done in this thread), after the superb analysis of the question "Resolution to the Delayed choice quantum eraser?" presented by Cthugha, is really missing the point of what the EACP means (and the illusory nature of the delay of the cause in delayed erasure experiments, since one is only speaking of generating measurements that can be done in coincidence; Wheeler's type delayed experiments are another matter altogether, even if Jacques et al. use delayed erasure to perform what they call an instance of Wheeler's delayed measurement experiments: I'll have to check if there is a thread on that). But I confess that the EACP is not as easy a concept to grasp as one would like. The proof of Bell compatible with the EACP is indeed delicate, and most Bell inequalities cannot be proved when one replaces the locality assumption by the (much) weaker EACP assumption.

Now coming back to the first point of the quote from DevilsAvocado: of course one knows from Bell's Theorem and Quantum Mechanics that "locality" and "realism" cannot both be true (where "realism" means classical realism, i.e., the observables have values before measurement, and in particular observables make sense before measurement). The point of the paper is to make progress toward showing that it is "realism" and not "locality" that is the problem generating contradictions.

PRELUDE TO A QUESTION (since we are mentioning "realism"): The oldest attack on "realism" that I know of (using QM arguments) is a 1931 paper by (and I cite in reverse order w.r.t. what is on the paper): Podolsky, Tolman... and Einstein (the ETP paper). This at a time when Bohr and Heisenberg were admitting retrodictive (i.e., backward in time) violation of the Uncertainty Principle (Heisenberg apparently doing that because of Bohr). So much for Einstein as naively realist. Now, this being said, there is a little catch in the ETP paper: the argument works for generic particles, but not for some special particles such as the EPR particles, which are both very special to QM and (but this is less well known) more classical than generic particles; for instance, if created in pairs such that the total momentum is conserved, they do not generate interference when going through a setting that would generate interference with generic particles. (I do not believe that Einstein was always right, but why invent mistakes that he did not make, especially when he was right or at least had a view worth taking into consideration? Some people like to tell of his supposed mistakes to suggest that they understand better, which is not the way this preamble to a question should be taken: the question indeed follows.)

QUESTION: Can anyone tell me of a PHYSICS argument against "realism" older than ETP?
 
  • #1,256
nismaratwork said:
Bill, give it up, I don't know where you're getting the ideas you espouse here, but JesseM is tearing them apart.
Hehehe, evidently you are not on the same planet as the one on which this discussion is taking place. You may be a member of the JesseM fan club, but it is not up to you what I can or cannot argue in this thread. So if you have anything of import to say about any of the specific facts I have posted here, all backed up by standard mathematics, post it and be prepared to defend it, rather than shying away and pretending to be the thread police.

I'll say it again, you can post in bulk, but it doesn't change that your posts are rambling and borderline-crackpot, whereas JesseM is sticking to the science.
Again give one example where I was wrong and JesseM was right and be prepared to back it up. If you think throwing words like "crackpot" around will have any impact on my determination to vehemently defend what I know to be accurate, you haven't been speaking with DevilsAvocado enough. He has tried and failed.

The more crap like yours there is to respond to, the more posts I will post. I do not like long posts, so I break my posts into pieces which each address a specific point. You do not like that? Tough luck. I tried sticking to the essentials, but JesseM kept throwing bulk at me, so I decided that from now on I will not leave any stone unturned.

You keep saying things such as, "[JesseM] doesn't know anything about probability," which having read the last 20 pages or so, is laughable!
Because you do not know anything about probability yourself so you can not independently understand anything being said. And since you already sold all your property and bought JesseM stock a while back, you defer all your judgement to him. Anything he says is 100% accurate to you. You are not the only one.

You are talking pure crap, and he's calling you on every point.
Like which one?

As one of the "casual readers" DevilsAvocado refers to, please, take your personal Quixote complex to PMs and let this thread become readable again. I for one am tired of JesseM having to go through your endless multiple posts, line by line to try and reason with you.
There are casual readers who matter and there are the fan-boys who don't. You can guess which class you belong to. But if you do not like the thread, don't read it; nobody voted you president of the casual readers. Other casual readers have brains and can follow a discussion without being patronized by the likes of you and DA. Besides, if JesseM were calling out my crap as you claim, you wouldn't be trying to stop me. Your comments suggest the opposite is the case and you are beginning to regret your premature investment.

You can keep harping on [tex]\lambda[/tex], but it's only in the context of what seems to be your own nearly religious belief here. You clearly have no idea what the significance of Bell or a BSM is, and your own concocted standards for what "the whole point" is, has no bearing on the current science. Why not start a blog where you can rant and rail to your heart's content, and spare the thread the clutter.
Why do you think I am posting on this thread? Because I like getting under the skin of people like you who make ridiculous claims without knowing squat about what you are talking about. Why don't you start a blog so that you can police all the comments to your heart's content?
 
  • #1,257
charlylebeaugosse said:
The Effect After Cause Principle (EACP) states ONLY that:
For any Lorentz observer O, once an effect E of cause C is observed by observer O, no fiddling with C can change E.

Now the cause could happen after the effect, despite the EACP, if non-locality held true; the cause would merely have to be compatible with the observed effect. Most physicists would admit that an observation once made cannot be changed, even if these physicists believe that non-locality holds true. So the EACP, once understood properly (and stated properly to that effect), should not be a problem for most (should I say "any"?) physicists.

This being said, citing delayed choice experiments against the EACP (as someone has done in this thread), after the superb analysis of the question "Resolution to the Delayed choice quantum eraser?" presented by Cthugha, is really missing the point of what the EACP means (and the illusory nature of the delay of the cause in delayed erasure experiments, since one is only speaking of generating measurements that can be done in coincidence; Wheeler's type delayed experiments are another matter altogether, even if Jacques et al. use delayed erasure to perform what they call an instance of Wheeler's delayed measurement experiments: I'll have to check if there is a thread on that). But I confess that the EACP is not as easy a concept to grasp as one would like. The proof of Bell compatible with the EACP is indeed delicate, and most Bell inequalities cannot be proved when one replaces the locality assumption by the (much) weaker EACP assumption.

Not sure I would agree here that delayed choice experiments are not relevant. What is the meaning of EACP if you have the future affecting the past? And you cannot be certain that is not happening once you look at those experiments.

I personally cannot see that EACP is a "weaker" assumption than locality. I mean, it seems a subjective assessment.
 
  • #1,258
billschnieder said:
JesseM said:
but I don't understand how the stuff that preceded it can possibly be consistent with the idea that the experimenter doesn't know what the λ's are. How does the experimenter know that "A(a,λ1),B(b,λ1) was realized exactly 3 times" if he has no idea whether λ1 or some other λ occurred on a given trial?
You do not get it. It is their only hope if they are trying to obtain empirical estimates of the true expectation value.
What do you mean by "true expectation value"? Are you using the same definition I have been using--the average in the limit as the number of trials goes to infinity--or something else?

Also, when you say "it is their only hope", what is "it"? Are you saying their only hope is to make some assumptions about the values of λ in their experiment, such as the idea that the distribution of different λi's in their sample is similar to the one given by the probability distribution function (in the discrete case I label this function P(λi), in the continuous case we'd need a probability density function which I would label ρ(λ))? And if you are saying physicists have to make some assumption about the values of λ in their experiment, why did you object so venomously at the end of post #1228 to a statement of mine which just suggested this was what you were saying, namely:
billschnieder said:
JesseM said:
If you think a physicists comparing experimental data to Bell's inequality would actually have to draw any conclusions about the values of λ on the experimental trials, I guarantee you that your understanding is totally idiosyncratic and contrary to the understanding of all mainstream physicists who talk about testing Bell's inequality empirically.
Grasping at straws here to make it look like there is something I said which you object to. Note that you start the triumphant statement with an IF and then go on to hint that what you are condemning is actually something I think, but you provide no quote of mine in which I said anything of the sort. I thought this kind of tactic was relegated to talk-show TV and political punditry.
billschnieder said:
This is the whole point! They can't just measure crap and plug it into Bell's equations unless they can ascertain that it is a damn good estimate of the true expectation values!
Yes, but to get a "damn good estimate of the true expectation values", all that's necessary is that the actual frequencies of different measurement results were close to the "true probabilities" (in frequentist terms) in equations like this one:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1)
+ (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1)
+ (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1)
+ (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

As long as the fraction of trials where they got a given pair of results like (+1 on detector with setting a, +1 on detector with setting b) is close to the corresponding "true probability" P(detector with setting a gets result +1, detector with setting b gets result +1), then the sample average of all the products of measured pairs will be close to the expectation value. And the law of large numbers says that the measured fractions are likely to be close to the true probabilities for a reasonably large number of trials (a few thousand or whatever), even if the number of trials is small compared to the number of possible values of λ so that the frequencies of different λi's in the particles they sampled were very different from the frequencies in the limit of an infinite number of trials, which is what is given by the probability distribution on λ. Do you disagree with that "even if"? If so, this might be a good time for you to finally address the "coin-flip simulation" argument from post #1214 which you never responded to.
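That "even if" can be checked numerically. Below is a minimal Python sketch of a hypothetical toy model (the names and numbers are illustrative assumptions, not anything from Bell's paper): λ indexes a million deterministic "instruction sets", and a run of only a few thousand trials, far too few to sample most values of λ even once, still estimates the expectation value well by the law of large numbers.

```python
import random

random.seed(0)

# Hypothetical toy model: lambda indexes a huge space of "instruction
# sets"; each lambda deterministically fixes the pair of +/-1 outcomes
# (A, B) for one fixed pair of detector settings (a, b).
N_LAMBDA = 10**6
outcomes = [(random.choice([+1, -1]), random.choice([+1, -1]))
            for _ in range(N_LAMBDA)]

# "True" expectation value E(a,b): the sum over every lambda weighted by
# its (here uniform) probability -- the frequentist infinite-trial limit.
true_E = sum(A * B for A, B in outcomes) / N_LAMBDA

# Empirical estimate from only a few thousand trials: nowhere near enough
# to make the sampled lambda frequencies match the true distribution, yet
# the sample average of the products still converges to true_E.
n_trials = 5000
sample = [random.randrange(N_LAMBDA) for _ in range(n_trials)]
est_E = sum(outcomes[i][0] * outcomes[i][1] for i in sample) / n_trials

print(round(abs(est_E - true_E), 3))
```

The estimate lands within a few standard errors (about 1/√5000 ≈ 0.014) of the true value, even though the sampled λ's are a vanishing fraction of the million possible ones.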
billschnieder said:
If it is a very good estimate, then the probability distribution of λ in their sample will not be significantly different from the true probability distribution of λ. A representative sample is one in which those two probability distributions are not significantly different. That is why the fair sampling assumption is made!
The fair sampling assumption discussed on wikipedia doesn't say anything about the full set of all hidden variables associated with the particles; it just says the fair sampling assumption "states that the sample of detected pairs is representative of the pairs emitted", i.e. if 2000 pairs were emitted but only 1000 pairs were detected and recorded, and 320 of those pairs gave the result (+1 on detector with setting a, -1 on detector with setting b), then the fair sampling assumption would say that about 640 of the pairs emitted would have been predetermined to give that result. Aside from those two predetermined results, the fair sampling assumption doesn't assume anything else about the hidden variables in your sample being "representative" of all those emitted.
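The extrapolation in that example is nothing more than scaling each detected count by the inverse detection efficiency; a trivial sketch, using the hypothetical numbers from the text:

```python
# Fair-sampling extrapolation with the hypothetical numbers above: if
# only half the emitted pairs are detected, assume the detected sample
# is representative and scale each count by 1/efficiency.
emitted, detected = 2000, 1000
efficiency = detected / emitted            # 0.5

detected_plus_minus = 320                  # detected (+1 at a, -1 at b) pairs
estimated_emitted_plus_minus = detected_plus_minus / efficiency
print(estimated_emitted_plus_minus)        # 640.0
```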
billschnieder said:
Again that is the whole point. Without knowing λ, the experimenters have no way of making sure that the sample they used is representative; the best they can do is ensure that the empirical probability distributions in the datasets used to calculate their three terms are not significantly different. And they can make sure of that by sorting the data the way I described. In that case, Bell's inequality is guaranteed to be obeyed. So they cannot make sure of it, but they can verify it.
On the subject of "resorting", I haven't yet responded to your post #1224 (and I do want to get back to that one), but your reply there was your usual unending supply of negativity and hostility about everything I said, with no comment along the lines of "yes, it looks like your example finally indicates that you understand what I mean by 'resorting'" or "no, your example still indicates a misunderstanding, here is where your example needs to be modified". So can you just tell me yes or no, was I right to think that by "resorting" you meant renumbering the iterations on each of the three runs, in such a way that if we look at the ith iteration of runs with settings (a,b) and (a,c) they both got the same result for setting a, if we look at the ith iteration of runs with settings (a,b) and (b,c) they both got the same result for setting b, and if we look at the ith iteration of runs with settings (b,c) and (a,c) they both got the same result for setting c?

If this is correct, then I'll just note that even if you can do this resorting, it doesn't guarantee that the "hidden triples" associated with the ith iteration of all three runs were really the same, much less that the value of λ (which can encompass many more details than just three predetermined results for each setting) was really the same on all three. Of course if you can do such a resorting it shows that it is hypothetically possible that your dataset could have been generated by hidden variables which were the same for the ith iteration of all three runs, and if you can do such a resorting it also guarantees that your data will obey the inequality. Is that all you're claiming, or are you claiming something more about the significance of "resorting"?
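The parenthetical guarantee (resorted data must obey the inequality) is pure arithmetic, and can be illustrated with a short sketch. Assuming the resorting succeeded, so that the ith iteration of all three runs can be treated as sharing one triple of predetermined ±1 results, the raw averages satisfy a Bell-type bound identically, whatever the triples happen to be (the random triples below are purely illustrative):

```python
import random

random.seed(1)

# If iteration i of all three runs shares one hidden triple
# (A_i, B_i, C_i) of +/-1 results, then
#     |E(a,b) - E(a,c)| <= 1 - E(b,c)
# holds exactly for the sample averages, with no statistical caveat.
n = 1000
triples = [(random.choice([+1, -1]), random.choice([+1, -1]),
            random.choice([+1, -1])) for _ in range(n)]

E_ab = sum(A * B for A, B, C in triples) / n
E_ac = sum(A * C for A, B, C in triples) / n
E_bc = sum(B * C for A, B, C in triples) / n

# Holds by algebra: since A*A = B*B = 1, A*B - A*C = A*B*(1 - B*C),
# and |A*B| = 1, so averaging term by term gives the bound.
assert abs(E_ab - E_ac) <= 1 - E_bc + 1e-12
print("inequality satisfied")
```

This shows only what the text says: resortability guarantees the inequality for the data, not that the same hidden triples (let alone the same λ's) actually occurred in all three runs.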
billschnieder said:
I hope that you will find time out of your busy schedule to comment on this example I presented:
For example:

Let us define A(a,λ) = ±1 and B(b,λ) just like Bell, and say that the functions represent the outcomes of two events at two stations, one on Earth (A) and another on planet 63 (B), and in our case λ represents non-local mystical processes which, together with certain settings on the planets, uniquely determine the outcome. We also allow in our spooky example for the setting a on Earth to remotely affect the choice of b instantaneously, and vice versa. Note that in our example there is no source producing any entangled particles; everything is happening instantaneously.

The expectation value for the paired product of the outcomes at the two stations is exactly the same as Bell's equation (2). If you disagree, explain why it would be different or admit that the physical assumptions are completely peripheral.
First of all, it's a physical assumption that the result A on Earth depends only on a and λ and can therefore be written A(a,λ)--if you allow "spooky" influences, why can't the result A on Earth depend on the setting b, so that if on Earth we have setting a1 and hidden variables in state λ5, and on the other planet the experimenter is choosing from settings b1 and b2, then it could be true that A(a1, λ5, b1)=+1 but A(a1, λ5, b2)=-1? It's also a physical assumption that the measurement result is a deterministic function of the detector settings and some set of hidden variables prior to measurement, it could be a probabilistic function like P(A=+1|a1, λ5)=0.7 and P(A=-1|a1, λ5)=0.3. It's also a physical assumption that the probability distribution function on different values of λ, which I write as P(λi) in the case that λ takes a discrete set of values and ρ(λ) in the case that it takes a continuous set, would be the same in the integrals for E(a,b) and E(b,c) and E(a,c)--it's quite conceivable that the true probabilities (in the frequentist sense) of different values of λ could change depending on what detector settings were chosen. All the above violations of Bell's assumptions, which would make his equations incorrect, could easily be programmed into a computer simulation which would produce lists of pairs for different detector settings. So it's clearly not true that the equations in Bell's paper require no physical assumptions to ensure their validity.
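A sketch of such a simulation, under purely hypothetical assumptions: each pair of outcomes is drawn from a joint distribution that depends on both settings (so the result on one side is effectively A(a, λ, b), which Bell's factorized form A(a,λ) rules out), reproducing the singlet correlation E(a,b) = -cos(a-b); with settings at 0°, 60° and 120° this violates the inequality 1 + E(b,c) ≥ |E(a,b) - E(a,c)|:

```python
import math
import random

random.seed(2)

def nonlocal_pair(a, b):
    """One 'spooky' trial: the joint outcome depends on BOTH settings.
    Drawing so that P(A == B) = sin^2((a - b)/2) gives the singlet
    correlation E(a,b) = -cos(a - b)."""
    A = random.choice([+1, -1])
    B = A if random.random() < math.sin((a - b) / 2) ** 2 else -A
    return A, B

def E(a, b, n=20000):
    """Sample average of the product over n simulated trials."""
    return sum(A * B for A, B in (nonlocal_pair(a, b) for _ in range(n))) / n

# Settings (in radians) at which this model violates
# |E(a,b) - E(a,c)| <= 1 + E(b,c):
a, b, c = 0.0, math.pi / 3, 2 * math.pi / 3
lhs = abs(E(a, b) - E(a, c))   # about |(-0.5) - (+0.5)| = 1.0
rhs = 1 + E(b, c)              # about 1 + (-0.5) = 0.5
print(lhs > rhs)               # True: the inequality is violated
```

Since the programmed model breaks the A(a,λ) factorization, the violation is exactly what the argument above predicts: Bell's equations encode physical assumptions, and a simulation that drops them is free to exceed the bound.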
 
  • #1,259
So we went from "Is action at a distance possible as envisaged by the EPR Paradox." to a discussion of Bell, and in particular of whether some experiments provide an experimental verification of the violation of Bell's inequalities by QM.

Of course, Bell's inequalities do not apply to QM, and we all understand that what this is all about is the conjunction of "locality" and "realism". To make the experiments possibly meaningful, you need a special form of the Boole inequalities (no misprint here), and this is what CHH provides, together with supplementary hypotheses such as fair sampling (also discussed by CHSH, Bell, and others). So Clauser-Aspect-etc. types of experiments "prove" (in a physicist's sense) that:
IF local realism
1) Holds true, and
2) Has some extra properties,
THEN, assuming furthermore that:
3) the "loopholes" are irrelevant,
one of 1), 2) and 3) has to be relinquished.


In fact these experiments mostly prove QM right once more, and there was not much surprise, except perhaps for Louis de Broglie; but after all that he has given to science, one can perhaps forgive a small blunder, can't one? The sad thing is that people like Aspect refuse to see conservation rules and Malus's law where they belong, as this would somewhat trivialize the context of the experiments. Sad also is the misrepresentation of Einstein after so much work has been done by Jammer, Fine and others. I invite everyone to read Bell's 1964 paper on the inequalities and the EPR paper, what Einstein wrote about de Broglie and Bell, and what Rosen wrote about HVs at the 50th-anniversary EPR meeting, and to compare that to what Bell wrote of the content of EPR and the beliefs and intents of the authors. I got into QM because I found non-locality beautiful, and then I opened the literature and found collections of misrepresentations of the truth (something almost equal to a three-letter word that starts with an L and ends with an ... but I digress).

Now, saying that 1) is false could just mean that "realism" is false (the quasi-claim of the paper I have mentioned, which has now been documented), or that "locality" is false (unfortunately now the leading belief among physicists related to QM, or so it seems, despite the opinion of Sir Anthony Leggett, an authority on most of what we discuss here), or that both are false. I believe that only locality is false, but I'd like proofs, even if the Europ. J. of Phys. paper which I have mentioned as an indication to this effect turns out to be a proof acceptable to physicists (the realm of proofs being logic and math, or math including logic).

I do believe that further experiments can be designed to make the truth of "realism" (i.e., I recall, pre-existence of observable values to measurement; in fact I prefer "weak realism", which means the same but only before any time when some measurement is actually made, the minimal assumption in the realism vein that allows the proof of a Bell theorem) an issue in physics and no longer only in philosophy (of course some beliefs would have to be involved, as physics is not pure math). Hence I believe that discussing Bell's theory too much is a distraction (Bell was a realist, and the net effect of his paper, which should have been progress, has a big part of regression toward NAIVE hidden variables, of a type that Einstein would never have accepted; he found the theories of de Broglie and Bohm very naive). Reasonable hidden variables (HVs) should be compatible with the uncertainty principle and should not give meaning to two conjugate variables at once. No Bell theorem could be written with such HVs. Similarly, if the EPR condition of reality had been taken seriously by Podolsky, who wrote the EPR paper (meaning if all quantities were measured somehow), at most 2 spin projections would make sense in the EPRB context.
Perhaps the worst "crime" of Bell was to start his 1964 inequalities paper with a misquotation of the EPR paper, and then to answer dishonestly to Jammer, who had noticed that Einstein (at least after 1927) never supported HVs (at least not the naive ones that Bell let us assume the authors supported). Even if in 1964 Bell did not know who wrote the EPR paper, nor Einstein's discussion of the completeness problem, he later learned it, or worked hard to avoid doing so. We are now (and not because of Bell only, of course) in a situation where the level of honesty in citations, and even quotes, in physics is so low that no other discipline would stand for it. This at a time when many groups attack science violently.

But the question was:
"Is action at a distance possible as envisaged by the EPR Paradox."?
To that I would answer that Einstein's version of the EPR matter does not involve action at a distance (nor does the paper, though it is oddly written, as detailed for instance by Arthur Fine in The Shaky Game): what one has there is conservation, and much weaker in fact than in the classical case, where in case of conservation, say of the magnetic momentum, all its projections would be conserved, whereas conjugacy considerations prevent that in the quantum case. Even if you accept non-locality (for which no evidence has been given, and which should probably be eliminated once and for all, except perhaps to let one have even better arguments than what one has now), nothing like action at a distance is enabled by EPR nor by the context of EPR. At best, the value of some observables would depend on the setting of the apparatus that is used, so that there is space separation between the apparatus measurement and the related observables' measurements, BUT it has been proved many times that such an effect of non-locality (assuming again that non-locality holds true) would not permit any superluminal message transmission. Now it follows from the paper that I have cited from Europ. J. of Phys. that if there are HVs, and if for these HVs non-locality does not enable superluminal message transmission, then some Boole (or Bell) type inequalities hold true without assuming locality. So non-locality does not cure anything, and one can thus know that the cause of the inequalities violated by physics is nothing but "realism" (or even "weak realism"), so that non-locality (as well as "weak realism") goes away from physics. It remains to explain how the macrocosm generates realism, and also geometry. In brief, to the question:

"Re: Is action at a distance possible as envisaged by the EPR Paradox."? (the "?" is mine, but otherwise, it's not a question), I answer:

"This is not a rightful question as there is no action at a distance envisaged by the EPR Paradox."

Furthermore, any sort of "action at a distance possible as inspired by misreading of the EPR Paradox" that would permit as much as superluminal message transmission is impossible according to physics as we know it in 2010.

FACT (to support what comes after "Furthermore, any sort..."): Some extremely good physicists and mathematicians (active and/or retired) are crackpots (and I have belonged for years to both professions, so I have witnessed the oddest scenes I could ever imagine). Since rumors do not even require a good scientist near the origin of said rumor, anything that sounds weird should a priori be considered weird until proven otherwise.
 
  • #1,260
charlylebeaugosse said:
1. I invite everyone to read Bell's 1964 paper on the inequalities and the EPR paper, and what Einstein wrote about de Broglie and Bell...

2. ...has a big part of regression toward NAIVE hidden variables, of a type that Einstein would never have accepted (he found the theories of de Broglie and Bohm very naive). Reasonable hidden variables (HVs) should be compatible with the uncertainty principle and should not give meaning to two conjugate variables at once. No Bell theorem could be written with such HVs. Similarly, if the EPR condition of reality had been taken seriously by Podolsky, who wrote the EPR paper (meaning if all quantities were measured somehow), at most 2 spin projections would make sense in the EPRB context.

3. Perhaps the worst "crime" of Bell was to start his 1964 inequalities paper with a misquotation of the EPR paper, and then to answer dishonestly to Jammer, who had noticed that Einstein (at least after 1927) never supported HVs (at least not the naive ones that Bell let us assume the authors supported). Even if in 1964 Bell did not know who wrote the EPR paper
...

1. I think the EPR & Bell source papers are wonderful; I have copies available on my site for those who wish to read them. I have read a bit here or there about Einstein on Bohm (I am sure you didn't mean Bell), but perhaps you are referring to some specific comment? I am not sure I follow your point here.


2. Einstein gave us the "the moon is there when not looking at it" comment, so I am not sure I quite agree if you are saying that Einstein was not a "naive" realist. (Although I personally don't care for the use of the word naive, as it comes off as an insult.) But I would be interested in a quote that clearly expresses a) what realism looks like which is NOT naive; and more importantly b) any evidence Einstein subscribed to that view, given his "moon" comment, which is pretty clearly in the "naive" school.

In my mind: the HUP flies in the face of all versions of realism. I mean, the word just doesn't have much meaning if you reject "simultaneous elements of reality" as too naive.


3. Einstein's name was on the 1935 paper, not really sure why there would be a need to back away from it. It was a great paper, and is quite important even while being wrong (in its final conclusion).
 
