Joy Christian, Disproof of Bell's Theorem

In summary: This thread discusses Joy Christian's latest paper, which purports to disprove Bell's theorem. The discussion summarizes the paper and its contents but reaches no verdict on whether the paper is correct or significant.
  • #1
bcrowell
Joy Christian, "Disproof of Bell's Theorem"

This showed up on the arxiv blog today: Joy Christian, "Disproof of Bell's Theorem," http://arxiv.org/abs/1103.1879

I'm not enough of a specialist to be able to judge the correctness or significance of the result.

Comments?
 
  • #2


A(a,λ) seems to depend only on λ, according to (1).
My Joy remains.
 
  • #3


I'm not familiar enough to judge either, but I just find it interesting that the only references in the paper, besides Bell's original paper, are Christian's own papers...
 
  • #4


I'm not knowledgeable enough to judge/comment definitively either. But here's my two cents in lieu of some experts chiming in.

Christian apparently became convinced some years ago that there's nothing special about Bell's theorem or quantum entanglement. Since then he's presented a number of nonrealistic counterexamples to Bell's theorem. This is the latest. My guess, not having worked through all of it, is that all his math is probably correct, but that his result will probably be regarded as insignificant in that it's a consequence of assumptions/definitions that seem even more clearly nonrealistic than his previous attempts.
 
  • #5


I tried to read one of Christian's "disproof" articles once, and it was a bunch of incoherent nonsense. It was impossible to make sense of what he was saying. I decided then to not let him waste any of my time again.

There are several other threads about his articles by the way. In one of them, I got an infraction for posting a link to the article I alluded to above. A bit excessive perhaps, but I do agree that discussions about articles that the authors have been unable to get published don't really belong in this forum.
 
Last edited:
  • #6


His paper is interesting, but I have heard a convincing explanation for why it is probably not significant:

Assuming that all his mathematics is correct, he still assumes that the measurement outcomes obey some weird algebra which, when the outcomes are combined, gives a desired value designed to replicate quantum experiments. However, measurement outcomes are defined by clicks on detectors, and it is the experimentalist, not the experiment itself, who assigns the values to the outcomes and combines them to get the correlations. The experimentalist combines the values using NORMAL algebra, and the result still violates Bell's inequality. You need only look at Ekert's cryptographic protocol to see that this is true. In that protocol, two experimentalists randomly perform independent measurements and assign +1 or -1 to the outcomes, then compare them using simple multiplication to violate Bell's inequality; no special algebra is required.

Therefore, in a sense, Joy Christian is missing the point. What is required is an explanation of why, when two experimentalists perform independent measurements, assign real values to them, and thereafter combine them using ONLY normal algebra, the resulting correlations STILL violate Bell's inequality. So even though he found an interesting algebra which is local with hidden variables and is able to reproduce certain quantum results, it still does not explain why, when experimentalists apply only normal algebra to their measured outcomes, Bell's inequality is still violated. So his 'disproof' is probably incorrect.
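To make the "normal algebra" point concrete, here is a minimal sketch (mine, not Ekert's actual protocol): it simply samples +/-1 outcome pairs whose product has the QM singlet expectation E(a,b) = -cos(a-b), then combines them by plain multiplication. The CHSH combination of four such correlations comes out near 2*sqrt(2), above the local-realist bound of 2.

```python
import math
import random

def sample_pair(a, b, rng):
    """Sample +/-1 outcomes whose product has expectation
    E = -cos(a - b), the QM singlet prediction (angles in radians)."""
    E = -math.cos(a - b)
    A = rng.choice([+1, -1])
    # P(B equals A) = (1 + E) / 2 makes E[A*B] come out to E
    B = A if rng.random() < (1 + E) / 2 else -A
    return A, B

def corr(a, b, n, rng):
    # correlation estimated by plain multiplication of the +/-1 values
    return sum(A * B for A, B in (sample_pair(a, b, rng) for _ in range(n))) / n

rng = random.Random(0)
n = 200_000
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4  # CHSH settings
S = abs(corr(a, b, n, rng) - corr(a, b2, n, rng)
        + corr(a2, b, n, rng) + corr(a2, b2, n, rng))
print(S)  # close to 2*sqrt(2) ~ 2.83, above the local-realist bound of 2
```

Note that nothing in how the data is combined uses any exotic algebra; the sampling trick is just a way to generate data with the right statistics and is not itself a local model.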
 
  • #7


bcrowell said:
This showed up on the arxiv blog today: Joy Christian, "Disproof of Bell's Theorem," http://arxiv.org/abs/1103.1879

I'm not enough of a specialist to be able to judge the correctness or significance of the result.

Comments?

His "proof" gives values for A and B, along with joint probabilities for A & B. But a realistic proof should give a value of C as well, such that various joint probabilities for A, B and C sum to 1 and none are negative. It is no surprise that you can derive a Bell Inequality violation without the realistic test applied.

Don't hold your breath on this one.
 
  • #8


bobbytkc said:
So even though he found an interesting algebra which is local with hidden variables.

That goes back to Bohm and Hiley.
 
  • #9


yoda jedi said:
Thats goes back to Bohm and Hiley.

You are talking about Bohmian mechanics?

Bohmian mechanics has hidden variables (the particles have definite positions and momenta but still obey Schrödinger's equation); however, the cost is that it is distinctly nonlocal, so it still falls under the umbrella of Bell's theorem, which states that quantum mechanics is nonlocal and/or has no hidden variables. So in that respect, you are mistaken. What Joy Christian purports to have found is something more significant: that a local AND hidden-variable theory is possible and matches the predictions of quantum mechanics. Unfortunately, it is probably not the right way to approach it.
 
  • #10


DrChinese said:
His "proof" gives values for A and B, along with joint probabilities for A & B. But a realistic proof should give a value of C as well, such that various joint probabilities for A, B and C sum to 1 and none are negative. It is no surprise that you can derive a Bell Inequality violation without the realistic test applied.

Don't hold your breath on this one.

What C? I believe Bell's theorem is only bipartite, no?
 
  • #11


bobbytkc said:
His paper is interesting, but I have heard a convincing explanation for why it is probably not significant:

Assuming that all his mathematics is correct, he still assumes that the measurement outcomes obey some weird algebra which, when the outcomes are combined, gives a desired value designed to replicate quantum experiments. However, measurement outcomes are defined by clicks on detectors, and it is the experimentalist, not the experiment itself, who assigns the values to the outcomes and combines them to get the correlations. The experimentalist combines the values using NORMAL algebra, and the result still violates Bell's inequality. You need only look at Ekert's cryptographic protocol to see that this is true. In that protocol, two experimentalists randomly perform independent measurements and assign +1 or -1 to the outcomes, then compare them using simple multiplication to violate Bell's inequality; no special algebra is required.

Therefore, in a sense, Joy Christian is missing the point. What is required is an explanation of why, when two experimentalists perform independent measurements, assign real values to them, and thereafter combine them using ONLY normal algebra, the resulting correlations STILL violate Bell's inequality. So even though he found an interesting algebra which is local with hidden variables and is able to reproduce certain quantum results, it still does not explain why, when experimentalists apply only normal algebra to their measured outcomes, Bell's inequality is still violated. So his 'disproof' is probably incorrect.
I agree. Your post is enlightening and points to the reason why Christian's purported LR models of entanglement are not regarded as LR models of entanglement -- which results, ultimately, from a qualitative assessment of how the purported local realism is encoded in the formulation(s).

bobbytkc said:
What Joy Christian purports to have found is something more significant: that a local AND hidden-variable theory is possible and matches the predictions of quantum mechanics.
Yes. Obviously, proposing a LR formulation that doesn't reproduce qm expectation values is a non-starter.

bobbytkc said:
Unfortunately, it is probably not the right way to approach it.
If the aim is to produce an LR theory of entanglement, then it's the only way to approach it. Explicit, clearly LR models a la Bell have been definitively ruled out. Christian's LR offerings fail the quantitative test of this (which is what DrC is talking about). So we know that Christian's models are not Bell LR.

However, in the absence of a logical proof that Bell LR is the only possible LR, then it remains to assess each purported LR model qualitatively. Christian's λ fail the realism test by any qualitative standard that I'm aware of.

If the aim is to understand why LR models of entanglement are impossible (even in a world that obeys the principle of locality and the light speed limit), then I agree with you that Christian is missing the point and taking the wrong approach. But it's still fun to check out the stuff that he comes up with.
 
  • #12


bobbytkc said:
What C? I believe Bell's theorem is only bipartite, no?

With 2 entangled photons, you can measure coincidence at 2 angle settings (say A and B, which will follow the cos^2 rule for AB). The third, C, is hypothetical in a realistic universe because the realist asserts it exists. However it cannot be measured. QM makes no statement about its existence, so no problem there. But the realist does. Bell pointed out that the values of coincidence for AC and BC will not both follow the QM predictions (if C existed). See his (14) where C is introduced.

So a local realistic model without a prediction for C is not truly "realistic" after all. In other words, A and B are not truly independent of each other for if they were, there would also be C, D, E... values possible which would all follow the QM expectation values when considered with A or B. Many "disproofs" of Bell conveniently skip this requirement.
 
  • #13


bobbytkc said:
You are talking .....?


bobbytkc said:
So even though he found an interesting algebra which is local with hidden variables

Clifford Algebra.
(goes back to Bohm and Hiley).
 
Last edited:
  • #14


Just to add to what I said above:

a) It is a requirement that 1 >= f(A,B) >= f(A,B,C) >= 0 where f() is a correlation function. This is the realism requirement. Using the logic of Bell, this is shown to be false if C is assumed to exist. (See also "Bell's Theorem and Negative Probabilities".)

b) It is a requirement that f(A,A)=f(B,B)=1 (or 0 depending on your basis). This is the requirement for perfect correlations. This is sometimes overlooked, but is actually the reason someone might think there are hidden variables in the first place. However, it is really a consequence of the cos^2 rule since:

f(A,A) = cos^2(A,A) = cos^2(A-A) = cos^2(0) = 1
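As a quick numeric sanity check of requirement b), here is a sketch assuming the coincidence rate f(x, y) = cos^2(x - y) for polarizer settings given in degrees:

```python
import math

def f(x, y):
    """Coincidence rate for polarizer settings x, y in degrees (cos^2 rule)."""
    return math.cos(math.radians(x - y)) ** 2

print(f(0, 0))               # 1.0: perfect correlation at identical settings
print(round(f(0, 67.5), 4))  # 0.1464
print(round(f(0, 22.5), 4))  # 0.8536
```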
 
  • #15


DrChinese said:
With 2 entangled photons, you can measure coincidence at 2 angle settings (say A and B, which will follow the cos^2 rule for AB).
What might be confusing for some is that what you're denoting as A,B and C are unit vectors associated with spin analyzer settings, and are usually denoted by bolded lower case letters (eg., a, b, c, etc.).

DrChinese said:
The third, C, is hypothetical in a realistic universe because the realist asserts it exists. However it cannot be measured.
This might be confusing because (a,b), (a,c), and (b,c) are denotations of different dual analyzer settings, i.e., different θ, or angular differences (a-b), in 3D Euclidean space, and therefore realistic, and all follow the cos^2 rule.

So, in what sense is c not realistic?

DrChinese said:
QM makes no statement about its existence, so no problem there.
The qm (a,b) refers to any combination of analyzer settings, any θ, wrt the dual, joint analysis of bipartite systems. Since a can take on any value from the set of all possible analyzer settings, and so can b, then it isn't clear what you mean that qm makes no statement about the existence of a certain possible analyzer setting.

[...]

DrChinese said:
So a local realistic model without a prediction for C is not truly "realistic" after all.
But all purported LR models make a prediction for any individual analyzer setting, as well as any θ. So does qm.

DrChinese said:
In other words, A and B are not truly independent of each other ...
Well, obviously the analyzer settings aren't independent wrt the measurement of any given pair since together they're the global measurement parameter, θ. Is that what you mean? If not, then what?

DrChinese said:
... for if they were, there would also be C, D, E... values possible which would all follow the QM expectation values when considered with A or B. Many "disproofs" of Bell conveniently skip this requirement.
It's not clear to me what you're saying or how you got there.
 
  • #16


ThomasT said:
But all purported LR models make a prediction for any individual analyzer setting, as well as any θ. So does qm.

Joy's doesn't, and it should if it is realistic.

Where a=0, b=67.5, c=45:

The QM prediction for f(a, b)=.1464
There is no QM prediction for either f(a, b, c) or f(a, b, ~c). QM is not realistic. Note: the ~c means Not(c) which is the same as saying you would get the opposite result. I.e. a plus instead of minus, or vice versa.

The LR prediction for f(a, b)=.1464. OK so far.
But if any LR is truly realistic, then it has a prediction for both f(a, b, c) and f(a, b, ~c). For the angles above, what is that value? When you run the math, the value for f(a, b, ~c) comes out -.1036 which is impossible as it is less than zero.
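For what it's worth, here is the arithmetic behind that -.1036 figure, assuming (as I read DrChinese's construction, detailed later in the thread) that the candidate triple "probability" is f(a, b, ~c) = [cos^2(a-b) + sin^2(a-c) - cos^2(b-c)] / 2:

```python
import math

def cos2(deg): return math.cos(math.radians(deg)) ** 2
def sin2(deg): return math.sin(math.radians(deg)) ** 2

a, b, c = 0.0, 67.5, 45.0
X = cos2(a - b)      # QM coincidence rate for settings a, b -> 0.1464
Y = sin2(a - c)      # QM mismatch rate for settings a, c    -> 0.5000
Z = cos2(b - c)      # QM coincidence rate for settings b, c -> 0.8536
p = (X + Y - Z) / 2  # candidate "probability" for the (a, b, not-c) case
print(round(p, 4))   # -0.1036: negative, so no realistic dataset exists
```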
 
  • #17


DrChinese said:
Joy's doesn't, and it should if it is realistic.
Sure it does. We're talking about analyzing bipartite systems with dual analyzers. ab, ac, and bc are the only possible analyzer settings for a given run.

DrChinese said:
There is no QM prediction for either f(a, b, c) or f(a, b, ~c).
Why would there be? The bipartite system is generating data via dual, not triple, analyzers.

DrChinese said:
But if any LR is truly realistic, then it has a prediction for both f(a, b, c) and f(a, b, ~c).
Why, if a model is intended to describe a bipartite system that's generating data via dual, not triple, analyzers? Seems like an unrealistic requirement.
 
  • #18


bcrowell said:
This showed up on the arxiv blog today: Joy Christian, "Disproof of Bell's Theorem," http://arxiv.org/abs/1103.1879

I'm not enough of a specialist to be able to judge the correctness or significance of the result.

Comments?

There is a book edited by him and Myrvold:

Quantum Reality, Relativistic Causality, and Closing the Epistemic Circle (Springer).

http://books.google.co.ve/books?id=...&resnum=3&ved=0CCsQ6AEwAg#v=onepage&q&f=false

http://www.springerlink.com/content...aef650fda1&pi=0#section=23120&page=1&locus=71
 
Last edited:
  • #19


ThomasT said:
Why, if a model is intended to describe a bipartite system that's generating data via dual, not triple, analyzers? Seems like an unrealistic requirement.

What else would realism be except the requirement that unmeasured c exists alongside measured a and b?
 
  • #20


DrChinese said:
What else would realism be except the requirement that unmeasured c exists alongside measured a and b?
Realism is made explicit in Bell's equation (1), where he defines the functions A (operating at S1) and B (operating at S2),
A(a,λ) = ± 1, B(b,λ) = ± 1 ,
where the spin analyzer settings are described as unit vectors in 3D Euclidean space and denoted as a and b, and where λ denotes arbitrary hidden parameters (determining individual detection) carried by the particles from the source and is, in his equation (2), associated with the particles via probability density ρ.

Locality is made explicit via his equation (2),
P(a,b) = ∫dλρ(λ)A(a,λ)B(b,λ)

In Bell's (14) c isn't unmeasured. It represents an analyzer setting that produces a third individual ± 1 datastream. There are only two analyzers, one at S1 and one at S2, which produce the three joint datastreams, ab, ac, and bc necessary for the inequality, Bell's (15).

I still don't understand what you're getting at (at one point I thought I did, but now I see that I don't). But since we're assessing Christian's model as unrealistic for different reasons, it seems OK to continue the discussion regarding your 'realistic dataset requirement'.
 
Last edited:
  • #21


ThomasT said:
Realism is made explicit in Bell's equation (1), where he defines the functions A (operating at S1) and B (operating at S2),
A(a,λ) = ± 1, B(b,λ) = ± 1 ,
where the spin analyzer settings are described as unit vectors in 3D Euclidean space and denoted as a and b, and where λ denotes arbitrary hidden parameters (determining individual detection) carried by the particles from the source and is, in his equation (2), associated with the particles via probability density ρ.

...

In Bell's (14) c isn't unmeasured. It represents an analyzer setting that produces a third individual ± 1 datastream. There are only two analyzers, one at S1 and one at S2, which produce the three joint datastreams, ab, ac, and bc necessary for the inequality, Bell's (15).

If you can measure it (c), then you don't have a realism assumption. And you specifically say that the analyzers are S1 and S2 for a and b (actually any 2 of a, b, c). The whole idea of Bell is that when you measure ab, there are no datasets for an additional assumed c which is itself consistent as to ac and bc - even though the third is not measured. This is very straightforward, see his (14) and after.
 
  • #22


DrChinese said:
If you can measure it (c), then you don't have a realism assumption. And you specifically say that the analyzers are S1 and S2 for a and b (actually any 2 of a, b, c). The whole idea of Bell is that when you measure ab, there are no datasets for an additional assumed c which is itself consistent as to ac and bc - even though the third is not measured. This is very straightforward, see his (14) and after.
Realism is assumed and explicated by Bell via the functions A and B defined in his equation (1).

Bell's inequality has nothing to do with not being able to generate an abc dataset. Obviously, it's physically impossible to generate an abc dataset using dual analyzers, and it's not clear to me why you think that that has anything to do with Bell's realism assumption.
 
  • #23


ThomasT said:
Realism is assumed and explicated by Bell via the functions A and B defined in his equation (1).

Bell's inequality has nothing to do with not being able to generate an abc dataset. Obviously, it's physically impossible to generate an abc dataset using dual analyzers, and it's not clear to me why you think that that has anything to do with Bell's realism assumption.
Your mistake is a common one. :biggrin:

Bell's (1) leads to nothing inconsistent with QM. You would actually expect perfect correlations from that, and of course we see that experimentally. But Bell's (14+) is required to see the fallacy of (1). Once you try to fit a, b and c into things, it all falls apart. And certainly not before (14).
 
  • #24


For those reading along, allow me to add the following. When you have 2 entangled particles that are essentially clones of each other, you would expect that if they were independent (locality holds), then any measurement on Alice (say) would yield the same result as an identical measurement on Bob. Therefore, you would expect that the result of ANY measurement on either Alice or Bob is actually predetermined. How else to explain the results? This idea - that the results of any measurement are predetermined - can be considered the assumption of Realism. Realism is the idea that ALL particle properties are independent of an actual measurement.

Of course, the Heisenberg Uncertainty Principle essentially says the opposite: a measurement of one property makes its non-commuting partner completely uncertain.

So if Realism AND Locality hold, particle properties are predetermined. So presumably the unmeasured properties have values. For polarization of photons, that means that you could expect either a + or - result and that such result would occur with a frequency of somewhere between 0 and 100% of the time. That's reasonable, right?

Ah, reasonable but wrong (says Bell)! It turns out you cannot construct a dataset in which the QM expectation value holds for many a, b and c settings. And yet we said those were predetermined if the entangled particles were really clones and if locality holds.
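A one-line check of this, using Bell's original inequality 1 + P(b,c) >= |P(a,b) - P(a,c)| (his (15)) with the QM singlet expectation P(x,y) = -cos(x-y) and one standard choice of angles (0, 60 and 120 degrees):

```python
import math

def P(x, y):
    # QM singlet expectation for measurements along angles x, y (radians)
    return -math.cos(x - y)

a, b, c = 0.0, math.pi / 3, 2 * math.pi / 3
lhs = abs(P(a, b) - P(a, c))  # |P(a,b) - P(a,c)|
rhs = 1 + P(b, c)             # Bell's bound, assuming predetermined values
print(lhs, rhs)  # lhs ~ 1.0 exceeds rhs ~ 0.5: the inequality is violated
```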
 
  • #25


DrChinese said:
Your mistake is a common one. :biggrin:
What mistake?

DrChinese said:
Bell's (1) leads to nothing inconsistent with QM.
Agreed. Bell's (1) has to do with spin properties carried by particles produced at a common source which produce individual results. All consistent with the qm model and application of the conservation law.

DrChinese said:
But Bell's (14+) is required to see the fallacy of (1).
What fallacy? Don't we agree that Bell's (1) is consistent with qm, as per Bell himself?

Bell's (14) is a revision of Bell's (2) in view of Bell's (12) and (13). Bell's (2) makes explicit the locality assumption, which is necessary because Bell's (1) doesn't explicate locality wrt joint detections.

DrChinese said:
Once you try to fit a, b and c into things, it all falls apart. And certainly not before (14).
Bell shows that the form of Bell's (2) is incompatible with qm. The incompatibility is due to the locality assumption embodied in the form (2), which converted to (14) and evaluated wrt expectation values for three distinct joint analyzer settings (ab, ac, and bc) gives Bell's inequality.
 
  • #26


ThomasT said:
What fallacy? Don't we agree that Bell's (1) is consistent with qm, as per Bell himself?

...

The issue is that there was no APPARENT flaw in (1) prior to Bell. Bell then showed how this innocent looking formula is wrong. Which it is. We now know that it cannot account at all for the observed behavior.
 
  • #27


DrChinese said:
The issue is that there was no APPARENT flaw in (1) prior to Bell. Bell then showed how this innocent looking formula is wrong.
Bell didn't show this. In fact, he showed that the functions (1) that determine individual detection, are compatible with qm. Which is not to say that standard qm can be interpreted as being realistic. It can't. It's nonrealistic and acausal.

What Bell did show was that the separable form of (2), the embodiment of his locality condition, is incompatible with qm.

DrChinese said:
Which it is.
The functions A and B in (1) can't be said to be wrong, because they're compatible with qm and experiment. They determine individual detection. Period.

DrChinese said:
We now know that it cannot account at all for the observed behavior.
What we know is that the separable form of (2) skews and reduces the range of the predictions. This is because what's being measured by the analyzers in the joint context is a nonseparable parameter, unchanging from entangled pair to entangled pair (as opposed to what's being measured by the individual analyzers, which varies from particle to particle). Unfortunately for diehard local realists, there's no known way to make a realistic theory local without something akin to Bell's locality condition, which results in a separable form, which skews the predictions.
 
Last edited:
  • #28


ThomasT said:
...This is because what's being measured by the analyzers in the joint context is a nonseparable parameter, unchanging from entangled pair to entangled pair (as opposed to what's being measured by the individual analyzers, which varies from particle to particle). Unfortunately for diehard local realists, there's no known way to make a realistic theory local without something akin to Bell's locality condition, which results in a separable form, which skews the predictions.

Presumably, if it is nonseparable it is also nonlocal. That is consistent with accepted interpretations of QM.

Now Bell's (1) is essentially A(a)={+1,-1}, B(b)={+1,-1}

Bell later effectively says that realism implies simultaneously C(c)={+1,-1}. This assumption is wrong if QM is correct.
 
  • #29


DrChinese said:
Presumably, if it is nonseparable it is also nonlocal. That is consistent with accepted interpretations of QM.
Yes, I agree, given certain definitions of the terms nonseparable and nonlocal. Due to the ambiguous connotations of those terms, it takes a bit of sorting out. In the case of standard qm, nonlocal doesn't mean what it means in 3D classical physics, whose assumed locality, via SR, is compatible with quantum nonlocality. The nonlocality and nonseparability of entanglement can be taken as referring to essentially the same thing. The nonseparability ultimately traces back to parameter (not ontological) nonseparability, due to the experimental analysis of relationships between particles; this entails the dependence of the measured particle properties and explains why the entangled system can be more completely described than its subsystems.

DrChinese said:
Now Bell's (1) is essentially A(a)={+1,-1}, B(b)={+1,-1}
Ok.

DrChinese said:
Bell later effectively says that realism implies simultaneously C(c)={+1,-1}. This assumption is wrong if QM is correct.
I understand how this is the basis for your dataset requirement and your 'negative probability' paper, ie., I think it does constitute an understandable insight. My only problems with it were 1) that I thought there might be a more thorough process for assessing proposed LR models, and 2) that I wasn't sure where/why you were reading this, apparently tacit (because I don't remember it being mentioned in Bell's paper), realization on Bell's part into Bell's development of his theorem. I was concerned with nailing down Bell's explicit realism assumption as a guide to evaluating the realism of LR models, and thought that your understanding of that might have been a bit off the mark. In any case, whether Bell was actually thinking along those lines or not is less important than the fact that it works as an evaluative tool.

Regarding Christian, my current opinion is that his LR program fails, and he's missing the point, for essentially the reason that bobbytkc gave in post #6. Christian, apparently, doesn't quite get what the LR program is about.
 
Last edited:
  • #30


ThomasT said:
<SNIP>
Regarding Joy Christian, my current opinion is that his LR program fails, and he's missing the point, for essentially the reason that bobbytkc gave in post #6. Christian, apparently, doesn't quite get what the LR program is about.
["Joy" inserted above for clarity. GW]

1. The thread initiated by me -- https://www.physicsforums.com/showthread.php?t=475076 --

is an off-shoot from another thread discussing Joy Christian's work.


2. I make no claim as to whether Joy Christian does or does NOT understand the LR program. But I would be very surprised if he does not understand it exactly, precisely, whatever.

3. IMHO, it is not that difficult; unless I too am missing some extreme subtlety; or there is being inserted a requirement that goes beyond the Einstein and EPR program.

4. I would certainly expect that anyone, critically and carefully studying Bell's theorem, would be trying to ensure that their efforts did not breach the commonsense (the core Einstein and EPR principles) that attaches to the LR program.

5. However, in this widely rejected/neglected area of study (Einstein's baby, IMHO), slips are possible. So a better critique of JC's work, for those concerned by it, would be to identify JC's error specifically; my own critical opinion of JC's efforts not being relevant here.

6. The point that I would like to emphasize is this: The L*R program, discussed in the above thread (https://www.physicsforums.com/showthread.php?t=475076), is most certainly local and realistic, and in full accord with the Einstein and EPR program, as I understand it. (And I doubt that JC understands it any less than I do -- so why not help him find his slip -- IF slip there be. Because my "guess" is: it's fixable!)
 
  • #31


Nonseparability has been mentioned, but I doubt that its impact on this discussion has been fully understood. In Gordon Watson's linked thread he mentioned the "triangle inequality"; I have a variation of it which may throw some light, in a simple and commonsense manner, on why "nonseparability" is so important to the issue being raised by Joy Christian. DrC may be interested in this because it blows the lid off his "negative probabilities" article.

A simple analogy is the x^2 + y^2 = z^2 relationship for right-angled triangles with sides x, y and z. Consider a process which generates right-angled triangles inscribed in a unit circle, where z is always 1, x = cos(angle), and y = sin(angle), with the angle randomly chosen each time (uniformly within the first quadrant, so x and y stay positive). Our goal is to measure the lengths of the sides x and y. But assume that in the first experiment we can only make a single measurement per run. So we run our experiment a gazillion times and obtain the averages <x> and <y>. Do you think <x>^2 + <y>^2 will obey the relationship of being equal to 1? If you do, think again: <x>^2 + <y>^2 converges to 0.8105..., not 1, a violation. This is simply because x and y are non-separable in our relationship.

However, we can imagine that in our second experiment we also had corresponding values of both x and y for each individual measurement. So we might think that using this new dataset, with all corresponding values included, will give <x>^2 + <y>^2 = 1, right? Wrong. We get exactly the same violation as before; the reason, again, is separability. But there is one thing we can calculate in the second scenario which we could not in the first: we can calculate <x^2 + y^2>, since we now have both values, and indeed we obtain 1 as the result, which obeys the relationship.
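A quick Monte Carlo of this (assuming the angle is drawn uniformly over the first quadrant, which reproduces the quoted 0.8105... figure, equal to 8/pi^2):

```python
import math
import random

rng = random.Random(0)
N = 500_000
xs, ys = [], []
for _ in range(N):
    # assumption: angle chosen uniformly in the first quadrant
    theta = rng.uniform(0, math.pi / 2)
    xs.append(math.cos(theta))  # side x
    ys.append(math.sin(theta))  # side y

mx, my = sum(xs) / N, sum(ys) / N  # <x> and <y>, each -> 2/pi
sep = mx ** 2 + my ** 2            # combining the separate averages
joint = sum(x * x + y * y for x, y in zip(xs, ys)) / N  # <x^2 + y^2>
print(round(sep, 4))    # ~0.8106 (= 8/pi^2), not 1
print(round(joint, 4))  # 1.0: the identity holds pair by pair
```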

In our first experiment, x and y do not commute, so it is a mathematical error to use x and y in the same expression; that is why the violation was observed. In probability theory, an expectation value such as E(a,c) is undefined if A(a,lambda) and A(c,lambda) do not commute. Expectation values are only defined for E(a,c) if there is an underlying probability distribution P(a,c). But it is not possible to measure at angles "a" and "c" on the same particle pair, so there is no P(a,c) probability distribution. The same is the case in Bell-test experiments and QM, in which it is possible to measure "a" and "b" but not "c" simultaneously; the pairs measured in different runs do not correspond to each other, so we are left with calculating three different expectation values from three different probability distributions and plugging them into an inequality whose terms are defined on the same probability distribution. This is a mathematical error.

Concerning negative probabilities, Dr C says:
X is determined by the angle between A and B, a difference of 67.5 degrees X = COS^2(67.5 degrees) = .1464 This prediction of quantum mechanics can be measured experimentally.*
Y is determined by the angle between A and C, a difference 45 degrees Y = SIN^2(45 degrees) = .5000 This prediction of quantum mechanics can be measured experimentally.*
Z is determined by the angle between B and C, a difference 22.5 degrees Z = COS^2(22.5 degrees) = .8536 This prediction of quantum mechanics can be measured experimentally.*

...

(X + Y - Z) / 2

Substituting values from g. above:

= (.1464 + .5000 - .8536)/2

= (-.2072)/2

= -.1036
Note how he defines X, Y and Z as non-commuting, since only two of those angles can be measured at the same time, and yet he writes down an impossible equation which includes terms that can never be simultaneously valid. No wonder he obtains his result.
 
  • #32


Gordon Watson said:
["Joy" inserted above for clarity. GW]

1. The thread initiated by me -- https://www.physicsforums.com/showthread.php?t=475076 --

is an off-shoot from another thread discussing Joy Christian's work.2. I make no claim as to whether Joy Christian does or does NOT understand the LR program. But I would be very surprised if he does not understand it exactly, precisely, whatever.

3. IMHO, it is not that difficult; unless I too am missing some extreme subtlety, or a requirement is being inserted that goes beyond the Einstein and EPR program.

4. I would certainly expect that anyone, critically and carefully studying Bell's theorem, would be trying to ensure that their efforts did not breach the commonsense (the core Einstein and EPR principles) that attaches to the LR program.

5. However, in this widely rejected/neglected area of study (Einstein's baby, IMHO), slips are possible. So a better critique of JC's work, for those concerned by it, would be to identify JC's error specifically; my own critical opinion of JC's efforts not being relevant here.

6. The point that I would like to emphasize is this: The L*R program, discussed in the above thread (https://www.physicsforums.com/showthread.php?t=475076), is most certainly local and realistic, and in full accord with the Einstein and EPR program, as I understand it. (And I doubt that JC understands it any less than I do -- so why not help him find his slip -- IF slip there be. Because my "guess" is: it's fixable!)
See Carlos Castro's "There is no Einstein-Podolsky-Rosen Paradox in Clifford-Spaces". In C-space, the particles can exchange signals encoding their spin measurement values across a null interval, which isn't the sort of locality required by the LR program. Or can it be translated into that, because this is essentially the same as specifying a relationship produced via a common source? I don't know.

Because Christian is using tensors (Clifford algebra in his earlier papers, and the Kronecker delta and Levi-Civita symbol in the paper currently under discussion) to deal with a relationship between vectors (which is what Bell tests are actually measuring), maybe I was too quick to dismiss his stuff. Or maybe not. Again, I don't know.

These articles might also be relevant:

Bound entanglement and local realism

All the Bell Inequalities

Clearly, we need some input from experts, or at least people more knowledgeable in the field.
 
Last edited by a moderator:
  • #33


billschnieder said:
Nonseparability has been mentioned, but I doubt that its impact on this discussion has been fully understood. In the thread linked by Gordon Watson he mentioned the "triangle inequality"; I have a variation of it which may throw some light, in a simple and common-sense manner, on why "nonseparability" is so important to the issue being raised by Joy Christian. DrC may be interested in this because it blows the lid off his "negative probabilities" article.

A simple analogy is the x^2 + y^2 = z^2 relationship for right-angled triangles of sides x, y and z. Consider a process which generates right-angled triangles inscribed in a unit circle, where z is always 1, x = cos(angle), y = sin(angle), and the angle is randomly chosen each time. Our goal is to measure the lengths of the sides x and y. But assume that in the first experiment we can only make a single measurement per triangle. So we run our experiment a gazillion times and obtain the averages <x> and <y>. Do you think <x>^2 + <y>^2 will obey the relationship of being equal to 1? If you do, think again: <x>^2 + <y>^2 converges to 0.8105..., not 1, a violation. This is simply because x and y are non-separable in our relationship.

However, we can imagine that in our experiment we also had corresponding values for both x and y for each individual measurement. So we might think that using our new dataset, with all corresponding values included, will result in <x>^2 + <y>^2 = 1, right? Wrong. We get exactly the same violation as before. The reason is non-separability. But there is one thing we can calculate in our second scenario which we could not in the first. We can calculate <x^2 + y^2>, since we now have both values, and indeed we obtain 1 as the result, which obeys the relationship.

In our first experiment, x and y do not commute, therefore it is a mathematical error to use x and y in the same expression; that is why the violation was observed. In probability theory, an expectation value such as E(a,c) is undefined if A(a,lambda) and A(c,lambda) do not commute. Expectation values are only defined for E(a,c) if there is an underlying probability distribution P(a,c). But it is not possible to measure at angles "a" and "c" on the same particle pair, therefore there is no P(a,c) probability distribution. The same is the case in Bell-test experiments and QM, in which it is possible to measure "a" and "b" but not "c" simultaneously. The pairs measured in different runs do not correspond to each other, so we are left with calculating three different expectation values from three different probability distributions to plug into an inequality whose terms are all defined on the same probability distribution. This is a mathematical error.

Concerning negative probabilities, Dr C says:

---Quote---
X is determined by the angle between A and B, a difference of 67.5 degrees: X = COS^2(67.5 degrees) = .1464. This prediction of quantum mechanics can be measured experimentally.*
Y is determined by the angle between A and C, a difference of 45 degrees: Y = SIN^2(45 degrees) = .5000. This prediction of quantum mechanics can be measured experimentally.*
Z is determined by the angle between B and C, a difference of 22.5 degrees: Z = COS^2(22.5 degrees) = .8536. This prediction of quantum mechanics can be measured experimentally.*

...

(X + Y - Z) / 2

Substituting values from g. above:

= (.1464 + .5000 - .8536)/2

= (-.2072)/2

= -.1036
---End Quote---

Note how he defines X, Y and Z as being non commuting since only two of such angles can be measured at the same time, and yet he writes down an impossible equation which includes terms which can never be simultaneously valid. No doubt he obtains his result.

I, for one, will be very interested in studying this beautiful example. My current concern is to first show where BT fails. That will open the way for me (and others) to assess what I would presently call "analogies". My job then would be to use examples such as yours, showing how they fit into a "more formal" disproof of BT.

Until that time, I can hear Bell's supporters discussing "loopholes" against you, ad nauseam.

(The boot will be on the other foot for them then, as it were, considering all the loopholes that EPR-style supporters adduce to ignore BT and related experimental results. I want to be very clear here that LOOPHOLES are not only unnecessary but unwarranted, and have never been considered valid or relevant by me.)
 
Last edited:
  • #34


billschnieder said:
A simple analogy is the x^2 + y^2 = z^2 relationship for right-angled triangles of sides x, y and z. Consider a process which generates right-angled triangles inscribed in a unit circle, where z is always 1, x = cos(angle), y = sin(angle), and the angle is randomly chosen each time. Our goal is to measure the lengths of the sides x and y. But assume that in the first experiment we can only make a single measurement per triangle. So we run our experiment a gazillion times and obtain the averages <x> and <y>. Do you think <x>^2 + <y>^2 will obey the relationship of being equal to 1? If you do, think again: <x>^2 + <y>^2 converges to 0.8105..., not 1, a violation. This is simply because x and y are non-separable in our relationship.

However, we can imagine that in our experiment we also had corresponding values for both x and y for each individual measurement. So we might think that using our new dataset, with all corresponding values included, will result in <x>^2 + <y>^2 = 1, right? Wrong. We get exactly the same violation as before. The reason is non-separability. But there is one thing we can calculate in our second scenario which we could not in the first. We can calculate <x^2 + y^2>, since we now have both values, and indeed we obtain 1 as the result, which obeys the relationship.
You say you're varying θ randomly. So, <θ> = 45°, where <x> = cos<θ> = .707..., <y> = sin<θ> = .707...; (.707...)^2 + (.707...)^2 = 1. No violation.

billschnieder said:
In our first experiment, x and y do not commute, therefore it is a mathematical error to use x and y in the same expression; that is why the violation was observed. In probability theory, an expectation value such as E(a,c) is undefined if A(a,lambda) and A(c,lambda) do not commute. Expectation values are only defined for E(a,c) if there is an underlying probability distribution P(a,c). But it is not possible to measure at angles "a" and "c" on the same particle pair, therefore there is no P(a,c) probability distribution. The same is the case in Bell-test experiments and QM, in which it is possible to measure "a" and "b" but not "c" simultaneously. The pairs measured in different runs do not correspond to each other, so we are left with calculating three different expectation values from three different probability distributions to plug into an inequality whose terms are all defined on the same probability distribution. This is a mathematical error.
Bell's inequality is based on the fact that for x,y,z = ±1, you have |xz - yz| = 1 - xy. Substituting x = A(b,λ), y = A(c,λ), z = A(a,λ) and integrating wrt the measure ρ, you get 1 + P(b,c) ≥ |P(a,b) - P(a,c)| , (Bell's inequality), in view of Bell's (14), P(a,b) = - ∫dλρ(λ)A(a,λ)B(b,λ) . There's no mathematical error in Bell's stuff.
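The algebra in the previous paragraph can be checked numerically. The sketch below first verifies the pointwise identity |xz - yz| = 1 - xy over all eight sign combinations, then averages over a shared hidden variable. The model A(angle, lam) is a made-up sign function for illustration only (it is neither Bell's nor Christian's model), with B = -A encoding perfect anticorrelation; note that all three correlations are estimated from the same sample of lambda:

```python
import itertools
import math
import random

# 1) Pointwise identity for dichotomic values x, y, z = +/-1:
#    |xz - yz| = 1 - xy holds for every sign combination.
for x, y, z in itertools.product([-1, 1], repeat=3):
    assert abs(x * z - y * z) == 1 - x * y

# 2) Averaging over a hidden variable lambda turns the identity into
#    Bell's inequality 1 + P(b,c) >= |P(a,b) - P(a,c)|.
def A(angle, lam):
    # toy deterministic local model (an assumption for illustration)
    return 1 if math.cos(angle - lam) >= 0 else -1

def P(a1, a2, lams):
    # sample estimate of -<A(a1, lam) * A(a2, lam)>, i.e. B = -A
    return -sum(A(a1, l) * A(a2, l) for l in lams) / len(lams)

random.seed(1)
lams = [random.uniform(0, 2 * math.pi) for _ in range(100000)]
a, b, c = 0.0, math.pi / 8, math.pi / 4

# Because all three correlations use the SAME lambda sample, the
# inequality holds exactly (up to floating-point rounding).
assert 1 + P(b, c, lams) >= abs(P(a, b, lams) - P(a, c, lams)) - 1e-9
```

The fact that the inequality holds sample-by-sample when the same λ values feed all three correlations is exactly the point of contention in the posts above.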

billschnieder said:
Note how he (DrC) defines X, Y and Z as being non commuting since only two of such angles can be measured at the same time, and yet he writes down an impossible equation which includes terms which can never be simultaneously valid. No doubt he obtains his result.
I don't see any mathematical error in DrC's stuff either. It's an interesting numerical treatment, based on Einstein realism, which demonstrates the incompatibility with QM.
 
Last edited:
  • #35


ThomasT said:
You say you're varying θ randomly. So, <θ> = 45°, where <x> = cos<θ> = .707, <y> = sin<θ> = .707. (.707)^2 + (.707)^2 = 1. No violation.

This is inaccurate. Generating θ randomly around a circle gives us values in the range [0, 360]. So how do you get <θ> = 45 degrees? Shouldn't it be 180? Even if your 45 degrees were correct, which it is not, <x> is not the same as sin<θ>. You may be tempted to say sin(180) = 0 and cos(180) = -1, whose squares still add up to 1, but the error here is that you are assuming information is present in the experiment which is not. Remember that x is a length and our experimenter is measuring a length, not an angle. He is never given an angle, just a triangle, so he cannot determine <θ>. He only has the length, which is the absolute value of sin(θ). Secondly, were you to suggest that the mean value for x which he measured were <x> = 0 (cf. sin(180)), you would be suggesting that he actually measured negative lengths, which is not possible.

In fact <x> is 0.6366..., NOT 0.707 as you stated. You can verify it with a simple calculation; the Python code below does exactly that.
0.6366^2 + 0.6366^2 = 0.8105..., NOT 1

I hope you see that this simple example is not as stupid as you may have assumed at first. In fact your misunderstanding of this example highlights exactly the point I'm trying to make.

Code:
import numpy

# generate one million angles from 0 to 2*pi radians (0 to 360 degrees)
thetas = numpy.linspace(0, 2 * numpy.pi, 1000000)

# calculating |sin(<theta>)|: average the angles first, then take the sine
x1 = numpy.abs(numpy.sin(thetas.mean()))
print("%0.4f" % x1)
# Output: 0.0000

# calculating <|sin(theta)|>: take the sine of each angle first, then average
x2 = numpy.abs(numpy.sin(thetas)).mean()
print("%0.4f" % x2)
# Output: 0.6366

ThomasT said:
Bell's inequality is based on the fact that for x,y,z = ±1, you have |xz - yz| = 1 - xy. Substituting x = A(b,λ), y = A(c,λ), z = A(a,λ) and integrating wrt the measure ρ, you get 1 + P(b,c) ≥ |P(a,b) - P(a,c)| (Bell's inequality), in view of Bell's (14), P(a,b) = - ∫dλρ(λ)A(a,λ)B(b,λ). There's no mathematical error in Bell's stuff.

That is not my point. For the valid inequality |xz - yz| = 1 - xy, all three terms xz, yz and xy are defined within the same probability space. You cannot take terms from three different probability spaces and substitute them into the above equation. The problem is not with the inequality. It is a question of whether bipartite experiments, and QM's predictions of expectation values for bipartite experiments (which do not commute with each other), can be used as legitimate sources of terms to be substituted into the equation for comparison. I believe not.

ThomasT said:
I don't see any mathematical error in DrC's stuff either. It's an interesting numerical treatment, based on Einstein realism, which demonstrates the incompatibility with QM.

Given that you did not understand my original point, I did not expect that you would see the error either. The main point is simply that you cannot combine expectation values for non-commuting observables in the same expression, as is commonly done when comparing Bell's inequality with QM, and as DrC does in the text I quoted. If anybody thinks it is a valid mathematical procedure, let them say so and we can discuss that in a new thread.
 
Last edited:
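The two ways of averaging in the triangle example from the posts above can be run side by side. This sketch assumes uniformly random angles and takes absolute values as side lengths; it only illustrates that averaging before squaring differs from squaring before averaging, not who is right about Bell:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1_000_000)

# side lengths of each random right triangle inscribed in the unit circle
x = np.abs(np.cos(theta))
y = np.abs(np.sin(theta))

# average first, then square: (<|cos|>)^2 + (<|sin|>)^2 -> 2*(2/pi)^2 = 8/pi^2
print(x.mean() ** 2 + y.mean() ** 2)   # about 0.8105...

# square first, then average: cos^2 + sin^2 = 1 for every individual triangle
print((x ** 2 + y ** 2).mean())        # 1.0 up to floating point
```

The first quantity converges to 8/π² ≈ 0.8105..., matching the figure quoted in the thread, while the second is identically 1.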
