Bell's theorem and local realism

In summary: I think you are right that local realism is an assumption made in the theorem. The theorem says that quantum mechanics predicts correlations in certain pairs of highly entangled particles (say, A and B) that cannot be explained by complete knowledge of everything in the intersection of A's and B's past light cones. Bell's theorem refers to correlations between "classical" or "macroscopic" experimental outcomes, so as long as one believes that the experimental outcomes in a Bell test are "classical", the violation of the inequality does rule out local realism.
  • #106
stevendaryl said:
Bell's theorem is an answer to the question: "Can the correlations in EPR be explained by supposing that there are hidden local variables shared by the two particles?" The answer to that question is "no". It's not purely a question about locality, it's a question about a particular type of local model of correlations.
That particular type, that local model, is what I call locality, which makes it purely a question about locality.
Is this the same locality as that of relativity, and classical field theory in general? What do you think?

The fact that it isn't purely about locality is proved by the possibility of superdeterministic local explanations for the EPR. (On the other hand, if you're going to allow superdeterminism, then the distinction between local and nonlocal disappears, I guess.)
And it therefore spoils the supposedly proven fact. :wink:
That's why I insist there should be one unified and specific definition of locality, to avoid semantic confusion.
 
  • #107
atyy said:
How about this method of arguing that reality is at least assumed when using a Bell test to rule out locality? The Bell inequality is about the correlation between definite results. In quantum mechanics, we can put the Heisenberg cut wherever we want. So Bob can deny the reality that Alice had a result at spacelike separation. Bob is entitled to say that he obtained a result recording Alice's claim of a result at spacelike separation, but that result is about Alice's claim, and Bob obtained it at non-spacelike separation. So there is no spacelike separation, and no Bell test.
Yes. Basically, as long as the problem of the quantum/classical cut remains unresolved, this heuristic is valid.
 
  • #108
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.
 
  • #109
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.

So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal. However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality"). Is that your argument?

Maybe something like the terminology in http://arxiv.org/abs/quant-ph/9709026, which describes quantum mechanics as "nonlocal" and "causal"?
 
Last edited:
  • #110
atyy said:
So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal.

Exactly. The problem is that I tend to think QM's antirealism is so strong that I'm not sure it allows us to keep even the bit of realism needed for Bell.
However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality").
Hmmm, let's say I would favor this view of the situation. But subject to the above disclaimer. And probably biased by my admiration for both relativity and QM :-p
 
  • #111
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.
Some authors have argued that the correlations in Bell-type experiments have yet to be explained by any local, non-realist model (whatever that means). Is there even any such model? I recall only one such model being posted here previously, but it doesn't appear to be very popular and it's a difficult model to understand. I read it twice and still had trouble with it, even though the author tried explaining it on this forum. Moreover, if non-locality is already implied by Bell-type experiments, why give up both realism and locality when giving up locality alone is all that is necessary to account for the results?
 
  • #112
TrickyDicky said:
Let's give some context. It is not that the theorem introduces any "particle" concept as its premise. It is about the conclusions drawn from the theorem given a certain assumption that is virtually shared by the whole physics community, namely atomism, the atomic theory as the explanation of matter (the fundamental building blocks narrative).
[..]
Now I have to say that I disagree with Neumaier that classical field theory, like electrodynamics as understood at least since Lorentz, violates Bell's inequalities as a theory. The reason is that electrodynamics includes classical particles, so it is both local and realistic.
I did not see Neumaier phrase it like that. It is true that in order to create EM radiation, one needs a radiation source; but IMHO, for his argument it's irrelevant how you model that source. It suffices that EM radiation can be modeled in a precise way.

He gave a neat illustration of how it can be sufficiently "nonlocal" for the hidden-variable analysis in his unpolished paper http://lanl.arxiv.org/abs/0706.0155. However, how EM waves could be sufficiently "nonlocal" for doing the trick with distant polarizers is still far from clear to me, although the paper by Banaszek quant-ph/9806069 seems to give, unwittingly, a hint at the end.

PS: the "fundamental building blocks" according to Neumaier are (something like) waves.
 
Last edited:
  • #113
harrylin said:
I did not see Neumaier phrase it like that. It is true that in order to create EM radiation, one needs a radiation source; but IMHO, for his argument it's irrelevant how you model that source. It suffices that EM radiation can be modeled in a precise way.

He gave a neat illustration of how it can be sufficiently "nonlocal" for the hidden-variable analysis in his unpolished paper http://lanl.arxiv.org/abs/0706.0155. However, how EM waves could be sufficiently "nonlocal" for doing the trick with distant polarizers is still far from clear to me, although the paper by Banaszek quant-ph/9806069 seems to give, unwittingly, a hint at the end.

PS: the "fundamental building blocks" according to Neumaier are (something like) waves.

I had not read Neumaier's paper linked by you when I wrote that, and now I have just read the conclusions.
He seems to center his analysis on EM radiation alone, whereas I was referring to the whole theory of electrodynamics, so it's natural that his argument has nothing to do with what I said there.

There is a trivial way in which, say, a plane wave is nonlocal: it correlates its waveform across arbitrarily separated points.

His conclusion that "the present analysis demonstrates that a classical wave model for quantum mechanics is not ruled out by experiments demonstrating the violation of the traditional hidden variable assumptions", even if it were true (I don't know, since I didn't read the analysis), looks to me not very useful, since ruling out classical wave models as explanations of QM experiments doesn't need Bell's theorem.

His other conclusion, that "the traditional hidden variable assumptions therefore only amount to hidden particle assumptions, and the experiments demonstrating their violation are just another chapter in the old dispute between the particle or field nature of light conclusively resolved in favor of the field", I might agree with, as long as we use an extended notion of particle (basically any particle-like object).
 
  • #114
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.
What do you mean? Anyway, QM does not allow *anything* to go. Not at all. QM can't get the CHSH quantity S above 2√2, but alternative theories could, still without violating locality; S could go all the way to 4.

It's called Tsirelson's inequality. I know that some very respectable and serious physicists have published experimental violations of the Tsirelson inequality, and got that published in PRL or PRA - which says something about refereeing, editing, and general knowledge among physicists - but fortunately for QM, their experiments were flawed (loopholes!).
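
For concreteness, here is a minimal numerical sketch (just an illustration of my own, assuming the singlet-state correlation E(a,b) = -cos(a-b) and the standard choice of measurement angles) of where those three numbers sit:

[code=python]
import numpy as np

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a, b (radians)."""
    return -np.cos(a - b)

# Standard (optimal) CHSH settings: Alice uses a, a'; Bob uses b, b'.
a, a_p = 0.0, np.pi / 2
b, b_p = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p))

print(f"Quantum CHSH value S = {S:.4f}")  # ~2.8284 = 2*sqrt(2), the Tsirelson bound
print("Local realist bound  = 2")
print("Algebraic maximum    = 4")  # reachable by hypothetical non-signalling 'PR box' correlations
[/code]

So local realism caps S at 2, QM tops out at 2√2 (Tsirelson), and 4 is the algebraic maximum, which a hypothetical "PR box" could reach without allowing signalling.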
 
  • #115
bohm2 said:
Some authors have argued that the correlations in Bell-type experiments have yet to be explained by any local, non-realist model (whatever that means). Is there even any such model? I recall only one such model being posted here previously, but it doesn't appear to be very popular and it's a difficult model to understand. I read it twice and still had trouble with it, even though the author tried explaining it on this forum. Moreover, if non-locality is already implied by Bell-type experiments, why give up both realism and locality when giving up locality alone is all that is necessary to account for the results?

1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations has stood up for long.
2) You could say that QM does not "explain" those correlations, it only describes them.
3) Bohmian theory does explain them, but it is non-local, of course (Bell's theorem).
4) No experiment has yet been performed which was both successful in violating Bell-type inequalities AND simultaneously satisfied the standard requirements for a "loophole-free" experiment, namely an experiment which (if successful) can't be explained by an LHV theory. Possibly such an experiment will finally get done within about a year from now. They're getting pretty damned close.

For instance, experiments with photons suffer from photons getting lost: you don't have a binary outcome, you have a ternary outcome yes/no/disappeared (detection loophole). Experiments with atoms have the atoms so close together, and the measurements so slow, that it would be easy for one of the atoms to "know" how the other is being measured (locality loophole). Many experiments do not have fast, random switching of detector settings, so later "particles" can easily "know" how earlier particles were measured (memory loophole).
 
  • #116
atyy said:
So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal. However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality"). Is that your argument?

Maybe something like the terminology in http://arxiv.org/abs/quant-ph/9709026, which describes quantum mechanics as "nonlocal" and "causal"?

Belavkin's eventum mechanics provides a view of QM which is both local and causal, as long as you don't ask for a mechanistic, i.e. classical-like, explanation of "what is going on behind the scenes". You have to stop and accept quantum randomness. Irreducible. Intrinsic. Not like usual randomness ("merely statistical").

Sorry, here I give you a reference to an unpublished, unfinished manuscript of my own, but it does give you some references and a quick, easy (?) intro: http://arxiv.org/abs/0905.2723
 
  • #117
gill1109 said:
1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations has stood up for long.

There was some interesting work done years ago by an Israeli mathematical physicist, Itamar Pitowsky, about the possibility of evading Bell's theorem by using non-measurable sets. The basic idea was to construct (in the mathematical sense) a function [itex]F[/itex] of type [itex]S^2 \rightarrow \{+1,-1\}[/itex] ([itex]S^2[/itex] being a unit sphere, or alternatively the set of unit direction vectors in 3D space) such that

  1. The measure of the set of points [itex]\vec{a}[/itex] such that [itex]F(\vec{a}) = 1[/itex] is 1/2.
  2. For almost all points [itex]\vec{a}[/itex], the measure of the set of points [itex]\vec{b}[/itex] such that [itex]F(\vec{a}) = F(\vec{b})[/itex] is [itex]\cos^2(\theta/2)[/itex], where [itex]\theta[/itex] is the angle between [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex].

It is actually mathematically consistent to assume the existence of such a function. Such a function could be used for a hidden variable explanation of EPR, contrary to Bell. The loophole that this model exploits is that Bell implicitly assumed that everything of interest is measurable, while in Pitowsky's model, certain joint probabilities correspond to non-measurable sets.

The problem with Pitowsky's model turns out to be that a satisfactory physical interpretation of non-measurable sets is about as elusive as a satisfactory physical interpretation of QM. In particular, if your theory predicts that a certain set of events is non-measurable, and then you perform experiments to actually count the number of events, you will get some actual relative frequency. So the assumption, vital to making probabilistic models testable, that relative frequency approaches the theoretical probability, can't possibly hold for nonmeasurable sets. In that case, it's not clear what the significance of the theoretical probability is, in the first place.

In particular, as applied to the spin-1/2 EPR experiment, I think it's true that every finite set of runs of the experiment will have relative frequencies that violate Pitowsky's theoretical probabilities. That's not necessarily a contradiction, but it certainly shows that introducing non-measurable sets makes the interpretation of experiment statistics very strange.
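
A genuinely non-measurable F cannot be sampled on a computer, so as a rough illustration of the relative-frequency point, here is a sketch (my own toy stand-in, not Pitowsky's construction) comparing a *measurable* deterministic model, with outcomes sign(a·λ) and λ uniform on the sphere, against the cos²(θ/2) requirement:

[code=python]
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Uniform random points on the unit sphere (hidden variables lambda)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def same_outcome_frequency(theta, n_runs=200_000):
    """Empirical P(F(a) = F(b)) for the measurable stand-in F(x) = sign(x . lambda),
    with a fresh lambda each run and an angle theta between a and b."""
    lam = random_unit_vectors(n_runs)
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])
    return np.mean(np.sign(lam @ a) == np.sign(lam @ b))

for theta_deg in (0, 30, 60, 90, 120, 180):
    theta = np.radians(theta_deg)
    lhv = same_outcome_frequency(theta)   # tends to 1 - theta/pi for this model
    qm = np.cos(theta / 2) ** 2           # the cos^2(theta/2) requirement
    print(f"theta = {theta_deg:3d} deg   measurable model: {lhv:.3f}   cos^2(theta/2): {qm:.3f}")
[/code]

The measurable model's same-outcome frequency tends to 1 - θ/π, which misses the cos²(θ/2) target at most angles; Pitowsky's non-measurable F is meant to evade exactly this, but only at the price of detaching its formal probabilities from any relative frequency one could actually observe.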
 
  • #118
stevendaryl said:
There was some interesting work done years ago by an Israeli mathematical physicist, Itamar Pitowsky, about the possibility of evading Bell's theorem by using non-measurable sets.
I know. As a mathematician I can tell you that this is quite bogus. Does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up).

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103. Moreover it is stronger than the conventional result since it is a "finite N" result: a probability inequality for the observed correlations after N trials. The assumptions are slightly different from the usual ones: I put probability into the selection of settings, not into the particles.

No, sorry, all the people claiming that some mathematical nicety (e.g. measure theory, conventional definitions of integrability, or the topology of space-time) is the "way out" are barking up the wrong tree (IMHO).

Bell makes some conventional assumptions in order to write his proof out using conventional calculus. But you don't *have* to make those assumptions in order to get his main result; what you actually need is a whole lot weaker. Pitowsky only shows how Bell's line of proof would break down ... he does not realize that there are alternative lines of proof which would not break down even if one did not make measurability assumptions.

NB: the existence of non-measurable functions requires the axiom of choice, a somewhat arbitrary assumption about infinite collections of infinite sets. There exist consistent axiom systems for mathematics without the axiom of choice in which all sets of reals are measurable. So what are we talking about here? Formal word games, I think.
 
Last edited:
  • #119
gill1109 said:
I know. As a mathematician I can tell you that this is quite bogus. Does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up).

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103.

I think maybe I had read something along those lines, which was the reason I said that the nice (?) measure-theoretic properties of Pitowsky's model don't seem to imply anything about actual experiments.

Well, that's disappointing. It seemed to me that something like that might work, because non-measurable sets are weird in a way that has something of the same flavor as quantum weirdness.

An example (assuming the continuum hypothesis, this is possible) is to have an ordering (not the usual ordering) [itex]\leq[/itex] on the unit interval [itex][0,1][/itex] such that for every real number [itex]x[/itex] in the interval, there are only countably many [itex]y[/itex] such that [itex]y \leq x[/itex]. Since every countable set has Lebesgue measure 0, we have the following truly weird situation possible:

Suppose you and I both generate a random real between 0 and 1. I generate the number [itex]x[/itex] and later, you generate the number [itex]y[/itex]. Before you generate your number, I look at my number and compute the probability that you will generate a number less than mine (in the special ordering). Since there are only countably many possibilities, I conclude that the probability is 0. So I should have complete confidence that my number is smaller than yours.

On the other hand, by the perfect symmetry between our situations, you could make the same argument.

So one or the other of us is going to be infinitely surprised (an event of probability zero actually happened).
 
  • #120
stevendaryl said:
I think maybe I had read something along those lines, which was the reason I said that the nice (?) measure-theoretic properties of Pitowsky's model don't seem to imply anything about actual experiments.

Well, that's disappointing. It seemed to me that something like that might work, because non-measurable sets are weird in a way that has something of the same flavor as quantum weirdness.

An example (assuming the continuum hypothesis, this is possible) is to have an ordering (not the usual ordering) [itex]\leq[/itex] on the unit interval [itex][0,1][/itex] such that for every real number [itex]x[/itex] in the interval, there are only countably many [itex]y[/itex] such that [itex]y \leq x[/itex]. Since every countable set has Lebesgue measure 0, we have the following truly weird situation possible:

Suppose you and I both generate a random real between 0 and 1. I generate the number [itex]x[/itex] and later, you generate the number [itex]y[/itex]. Before you generate your number, I look at my number and compute the probability that you will generate a number less than mine (in the special ordering). Since there are only countably many possibilities, I conclude that the probability is 0. So I should have complete confidence that my number is smaller than yours.

On the other hand, by the perfect symmetry between our situations, you could make the same argument.

So one or the other of us is going to be infinitely surprised (an event of probability zero actually happened).
I think you are referring here to paradoxes from "model theory", namely that there exist countable models of the real numbers. Beautiful. It's a self-reference paradox, really just a hyped up version of the old paradox of the barber who shaves everyone in the village who doesn't shave himself. In some sense, it is just a word game. It's a useful tool in maths - one can prove theorems by proving theorems about proving theorems. Nothing wrong with that.

Maybe there is superficially a flavour of that kind of weirdness in quantum weirdness. But after studying this a long time (and analysing several such "solutions") I am certain that quantum weirdness is weirdness of a totally different nature. It is *physical*; it conflicts with our in-built, instinctive understanding of the world (which got there by evolution: it allowed our ancestors to successfully raise more kids than the others. Evolution is blind and even leads species into dead ends, again and again!). So I would prefer to see it as quantum wonderfulness, not quantum weirdness.
 
  • #121
gill1109 said:
I know. As a mathematician I can tell you that this is quite bogus. Does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up).

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103. Moreover it is stronger than the conventional result since it is a "finite N" result: a probability inequality for the observed correlations after N trials. The assumptions are slightly different from the usual ones: I put probability into the selection of settings, not into the particles.

To follow up a little bit, I feel that there is still a bit of an unsolved mystery about Pitowsky's model. I agree that his model can't be the way things REALLY work, but I would like to understand what goes wrong if we imagined that it was the way things really work. Imagine that in an EPR-type experiment, there was such a spin-1/2 function [itex]F[/itex] associated with the electron (and the positron) such that a subsequent measurement of spin in direction [itex]\vec{x}[/itex] always gave the answer [itex]F(\vec{x})[/itex]. We perform a series of measurements and compile statistics. What breaks down?

On the one hand, we could compute the relative probability that [itex]F(\vec{a}) = F(\vec{b})[/itex] and conclude that it should be given by [itex]\cos^2(\theta/2)[/itex] (because [itex]F[/itex] was constructed to make that true). On the other hand, we can always find other directions [itex]\vec{a'}[/itex] and [itex]\vec{b'}[/itex] such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).

So what that means is that for any run of experiments, there will be some statistics that don't come close to matching the theoretical probability. I think this is a fundamental problem with relating non-measurable sets to experiment. The assumption that relative frequencies are related (in a limiting sense) to theoretical probabilities can't possibly hold when there are non-measurable sets involved.
 
  • #122
stevendaryl said:
To follow up a little bit, I feel that there is still a bit of an unsolved mystery about Pitowsky's model. I agree that his model can't be the way things REALLY work, but I would like to understand what goes wrong if we imagined that it was the way things really work. Imagine that in an EPR-type experiment, there was such a spin-1/2 function [itex]F[/itex] associated with the electron (and the positron) such that a subsequent measurement of spin in direction [itex]\vec{x}[/itex] always gave the answer [itex]F(\vec{x})[/itex]. We perform a series of measurements and compile statistics. What breaks down?

On the one hand, we could compute the relative probability that [itex]F(\vec{a}) = F(\vec{b})[/itex] and conclude that it should be given by [itex]\cos^2(\theta/2)[/itex] (because [itex]F[/itex] was constructed to make that true). On the other hand, we can always find other directions [itex]\vec{a'}[/itex] and [itex]\vec{b'}[/itex] such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).

So what that means is that for any run of experiments, there will be some statistics that don't come close to matching the theoretical probability. I think this is a fundamental problem with relating non-measurable sets to experiment. The assumption that relative frequencies are related (in a limiting sense) to theoretical probabilities can't possibly hold when there are non-measurable sets involved.
If we do the Bell-CHSH type experiment picking settings at random as we are supposed to, nothing breaks down. At least, nothing breaks down if you use a sharper method of proof than Bell's old approach.

That's the point of my own work, going back to http://arxiv.org/abs/quant-ph/0110137. No measurability assumptions. The only assumption is that both outcomes are simultaneously defined - both the outcomes which would have been seen whichever setting had been in force, aka counterfactual definiteness. The experimenter tosses a coin and gets to see one or the other, at random. This works for Pitowsky's "model" too. It works for any LHV model. A function is a function, whether it is measurable or not. It works for stochastic LHV models as well as deterministic ones; it is just a matter of redefining what the hidden variable is.

The only escape is super-determinism, in which case I cannot actually effectively randomize the experimental settings.
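
Here is a sketch of that setup (just an illustration with an arbitrary outcome-generating rule, not the argument of the paper): in each run all four counterfactual outcomes exist, fair coins pick which pair is actually observed, and the observed CHSH statistic cannot systematically exceed 2, because the per-run combination of the four counterfactual outcomes is always ±2.

[code=python]
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # number of runs

# Counterfactual definiteness: in each run all four potential outcomes exist.
# Here they come from an arbitrary illustrative rule (independent random signs);
# any local hidden-variable rule, measurable or not, could be substituted.
A1 = rng.choice([-1, 1], size=N)  # Alice's outcome if she uses setting a1
A2 = rng.choice([-1, 1], size=N)  # Alice's outcome if she uses setting a2
B1 = rng.choice([-1, 1], size=N)  # Bob's outcome if he uses setting b1
B2 = rng.choice([-1, 1], size=N)  # Bob's outcome if he uses setting b2

# The per-run CHSH combination of the four counterfactual outcomes is always +/-2.
per_run = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
assert set(np.unique(per_run)) <= {-2, 2}

# Fair coin tosses decide which single setting pair is actually observed in each run.
a_is_1 = rng.integers(0, 2, size=N).astype(bool)
b_is_1 = rng.integers(0, 2, size=N).astype(bool)
A = np.where(a_is_1, A1, A2)
B = np.where(b_is_1, B1, B2)

def corr(mask):
    """Empirical correlation over the runs in which a given setting pair occurred."""
    return np.mean(A[mask] * B[mask])

S = (corr(a_is_1 & b_is_1) - corr(a_is_1 & ~b_is_1)
     + corr(~a_is_1 & b_is_1) + corr(~a_is_1 & ~b_is_1))
print(f"Observed CHSH S = {S:.3f} (stays within ~1/sqrt(N) of the interval [-2, 2])")
[/code]

Swap in any deterministic or stochastic LHV rule for the random signs and the conclusion is unchanged; the only probability used is in the 2N coin tosses, which is exactly the point.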
 
  • #123
TrickyDicky said:
I had not read Neumaier's paper linked by you when I wrote that, and now I have just read the conclusions.
[..]
ruling out classical wave models as explanations of QM experiments doesn't need Bell's theorem.
Sure. Over the last few days I did read some of his papers as I found them, and (as you may have guessed) that's not what he had in mind. He (re)discovered that QM is totally incompatible with classical particle theory but very close to classical wave theory. The naive particle concept must be dropped.
There is a trivial way in which, say, a plane wave is nonlocal: it correlates its waveform across arbitrarily separated points. [..]
If I'm not mistaken, all matter is similarly modeled in QFT as field excitations.
 
  • #124
stevendaryl said:
On the other hand, we can always find other directions [itex]\vec{a'}[/itex] and [itex]\vec{b'}[/itex] such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).
No this is a misunderstanding. My theorem says that for the set of correlations you actually did choose to measure, the chance that they'll violate CHSH by more than some given amount is incredibly small if N is pretty large. The theorem doesn't say anything about what you didn't do. It only talks about what you actually did experimentally observe. It assumes you are doing a regular CHSH type experiment - Alice and Bob are repeatedly and independently choosing between just two particular settings. So only four correlations are getting measured.
Note, Pitowsky has a non-measurable law of large numbers which says that the relative frequency of the event you are looking at will continue forever to fluctuate between its outer probability and its inner probability. Those two numbers can be 1 and 0 respectively. So what? My theorem talks about the chance of something happening for a given fixed finite value of N, conditional on the values of the hidden variables, etc. The probability in my theorem is *exclusively* in the 2N coin tosses determining Alice's and Bob's settings. If N goes to infinity, it doesn't matter at all whether the quantum averages converge or not. There are always subsequences along which they converge, by compactness. Along any such subsequence, in the long run CHSH will certainly be violated by more than epsilon at most finitely many times. (Here I am using the Borel-Cantelli lemma, which is how you prove the strong law of large numbers once you have an exponential bound like the one we have here.)
 
Last edited:
  • #125
gill1109 said:
I think you are referring here to paradoxes from "model theory", namely that there exist countable models of the real numbers.

No, not at all. Let [itex]\omega_1[/itex] be the smallest uncountable ordinal. Then for any ordinal [itex]\alpha < \omega_1[/itex] (with [itex]<[/itex] the usual ordering on ordinals), there are only countably many [itex]\beta < \alpha[/itex] but there are uncountably many [itex]\beta > \alpha[/itex]. So if we assume the continuum hypothesis, then every real in [itex][0,1][/itex] can be associated with an ordinal less than [itex]\omega_1[/itex]. This gives us a total ordering on reals such that for any [itex]x[/itex] there are only countably many smaller reals in [itex][0,1][/itex] but uncountably many larger reals.

Beautiful. It's a self-reference paradox, really just a hyped up version of the old paradox of the barber who shaves everyone in the village who doesn't shave himself. In some sense, it is just a word game. It's a useful tool in maths - one can prove theorems by proving theorems about proving theorems. Nothing wrong with that.

No, I don't think it's paradoxical in that sense. It's perfectly consistent mathematics (unlike the Liar Paradox, which is an actual logical contradiction). It's just weird.
 
  • #126
stevendaryl said:
No, not at all. Let [itex]\omega_1[/itex] be the smallest uncountable ordinal. Then for any ordinal [itex]\alpha < \omega_1[/itex] (with [itex]<[/itex] the usual ordering on ordinals), there are only countably many [itex]\beta < \alpha[/itex] but there are uncountably many [itex]\beta > \alpha[/itex]. So if we assume the continuum hypothesis, then every real in [itex][0,1][/itex] can be associated with an ordinal less than [itex]\omega_1[/itex]. This gives us a total ordering on reals such that for any [itex]x[/itex] there are only countably many smaller reals in [itex][0,1][/itex] but uncountably many larger reals.
I think you are wrong. The continuum hypothesis tells us that the unit interval has the same cardinality as Aleph_1, the first cardinal number larger than Aleph_0, the first infinite cardinal. This does not mean that the numbers in [0, 1] can be put in 1-1 correspondence with 1, 2, ... You are saying that there is a 1-1 map from [0, 1] to the numbers 1, 2, ..., hence that [0, 1] is countable.

The continuum hypothesis says there is no cardinality strictly between Aleph_0 (the first infinite cardinal, the cardinality of the set of natural numbers) and 2^Aleph_0 (the cardinality of the set of functions from Aleph_0 to {0, 1}, which is easily seen to be the same as the cardinality of the unit interval of the real line). So there is no infinite set which cannot be put into one-to-one correspondence with the natural numbers, which maps one-to-one into the unit interval, but which cannot be put into one-to-one correspondence with the whole unit interval.

Maybe you are mixing up cardinals and ordinals?
 
Last edited:
  • #127
gill1109 said:
No this is a misunderstanding. My theorem says that for the set of correlations you actually did choose to measure, the chance that they'll violate CHSH by more than some given amount is incredibly small if N is pretty large. The theorem doesn't say anything about what you didn't do. It only talks about what you actually did experimentally observe. It assumes you are doing a regular CHSH type experiment - Alice and Bob are repeatedly and independently choosing between just two particular settings. So only four correlations are getting measured.

I don't think there's a misunderstanding. I'm just saying that there is an apparent contradiction and I don't see how to resolve it.

Imagine generating a sequence of Pitowsky spin-1/2 functions:

[itex]F_1[/itex]

[itex]F_2[/itex]

.
.
.

For each such run, you let Alice and Bob pick a direction:

[itex]a_1, b_1[/itex]
[itex]a_2, b_2[/itex]

.
.
.

Then we look up the corresponding results:

[itex]R_{A,1} = F_1(a_1)[/itex], [itex]R_{B,1} = F_1(b_1)[/itex]
[itex]R_{A,2} = F_2(a_2)[/itex], [itex]R_{B,2} = F_2(b_2)[/itex]
.
.
.

The question is: what are the statistics for correlations between Alice's results and Bob's results?

On the one hand, your finite version of Bell's inequality can show that (almost certainly) the statistics can't match the predictions of QM. On the other hand, the functions [itex]F_j[/itex] were specifically constructed so that the probability of Bob getting [itex]F_j(b_j) = +1[/itex] given that Alice got [itex]F_j(a_j) = +1[/itex] is given by the QM relative probabilities. That seems to be a contradiction. So what goes wrong?
 
  • #128
gill1109 said:
I think you are wrong. The continuum hypothesis tells us that the unit interval has the same cardinality as Aleph_1, the first cardinal number larger than Aleph_0, the first infinite cardinal. This does not mean that the numbers in [0, 1] can be put in 1-1 correspondence with 1, 2, ... You are saying that there is a 1-1 map from [0, 1] to the numbers 1, 2, ..., hence that [0, 1] is countable.

The continuum hypothesis says there is no cardinality strictly between Aleph_0 (the first infinite cardinal, the cardinality of the set of natural numbers) and 2^Aleph_0 (the cardinality of the set of functions from Aleph_0 to {0, 1}, which is easily seen to be the same as the cardinality of the unit interval of the real line). So there is no infinite set which cannot be put into one-to-one correspondence with the natural numbers, which maps one-to-one into the unit interval, but which cannot be put into one-to-one correspondence with the whole unit interval.

Maybe you are mixing up cardinals and ordinals?

I didn't say that the unit interval can be put into a one-to-one correspondence with the naturals; I said that it can be put into a one-to-one correspondence with the countable ordinals. The set of countable ordinals goes way beyond the naturals. The naturals are the finite ordinals, not the countable ordinals.

Aleph_1 is (if one uses the Von Neumann ordinals) equal to the set of all countable ordinals. So there are uncountably many countable ordinals. The continuum hypothesis implies that Aleph_1 has the same cardinality as the continuum, and so it implies that the unit interval can be put into a one-to-one correspondence with the countable ordinals.
 
  • #129
stevendaryl said:
I didn't say that the unit interval can be put into a one-to-one correspondence with the naturals; I said that it can be put into a one-to-one correspondence with the countable ordinals. The set of countable ordinals goes way beyond the naturals. The naturals are the finite ordinals, not the countable ordinals.

Aleph_1 is (if one uses the Von Neumann ordinals) equal to the set of all countable ordinals. So there are uncountably many countable ordinals. The continuum hypothesis implies that Aleph_1 has the same cardinality as the continuum, and so it implies that the unit interval can be put into a one-to-one correspondence with the countable ordinals.

Hold it. Aleph_1 is the first uncountable *cardinal* not ordinal.

AFAIK, the continuum hypothesis does not say that the unit interval is in one-to-one correspondence with the set of countable *ordinals*. But maybe you know things about the continuum hypothesis which I don't know. Please give a reference.
 
  • #130
stevendaryl said:
I didn't say that the unit interval can be put into a one-to-one correspondence with the naturals; I said that it can be put into a one-to-one correspondence with the countable ordinals. The set of countable ordinals goes way beyond the naturals. The naturals are the finite ordinals, not the countable ordinals.

Aleph_1 is (if one uses the Von Neumann ordinals) equal to the set of all countable ordinals. So there are uncountably many countable ordinals. The continuum hypothesis implies that Aleph_1 has the same cardinality as the continuum, and so it implies that the unit interval can be put into a one-to-one correspondence with the countable ordinals.

The axiom of choice implies that every set can be put into a one-to-one correspondence with some initial segment of the ordinals. That means that it is possible to index the unit interval by ordinals [itex]\alpha[/itex]: [itex][0,1] = \{ r_\alpha | \alpha < \mathcal{C}\}[/itex] where [itex]\mathcal{C}[/itex] is the cardinality of the continuum. The continuum hypothesis implies that [itex]\mathcal{C} = \omega_1[/itex], the first uncountable ordinal ([itex]\omega_1[/itex] is the same as Aleph_1, if we use the Von Neumann representation for cardinals and ordinals). So we have:

[itex][0,1] = \{ r_\alpha | \alpha < \omega_1\}[/itex]

If [itex]\alpha < \omega_1[/itex], then that means that [itex]\alpha[/itex] is countable, which means that there are only countably many smaller ordinals. That means that if we order the elements of [itex][0,1][/itex] by using [itex]r_\alpha < r_\beta \leftrightarrow \alpha < \beta[/itex], then for every [itex]x[/itex] in [itex][0,1][/itex] there are only countably many [itex]y[/itex] such that [itex]y < x[/itex].
 
  • #131
stevendaryl said:
I don't think there's a misunderstanding. I'm just saying that there is an apparent contradiction and I don't see how to resolve it.

Imagine generating a sequence of Pitowsky spin-1/2 functions:

[itex]F_1[/itex]

[itex]F_2[/itex]

.
.
.

For each such run, you let Alice and Bob pick a direction:

[itex]a_1, b_1[/itex]
[itex]a_2, b_2[/itex]

.
.
.

Then we look up the corresponding results:

[itex]R_{A,1} = F_1(a_1)[/itex], [itex]R_{B,1} = F_1(b_1)[/itex]
[itex]R_{A,2} = F_2(a_2)[/itex], [itex]R_{B,2} = F_2(b_2)[/itex]
.
.
.

The question is: what are the statistics for correlations between Alice's results and Bob's results?

On the one hand, your finite version of Bell's inequality can show that (almost certainly) the statistics can't match the predictions of QM. On the other hand, the functions [itex]F_j[/itex] were specifically constructed so that the probability of Bob getting [itex]F_j(b_j) = +1[/itex] given that Alice got [itex]F_j(a_j) = +1[/itex] is given by the QM relative probabilities. That seems to be a contradiction. So what goes wrong?

Pitowsky can come up with one function or many, but he doesn't know in advance which arguments we are going to supply it with. In the j-th run one of his functions is "queried" once (well, once on each side of the experiment) and generates two outcomes ±1. His "probabilities" are irrelevant. If he is using non-measurable functions, he can't control what "probabilities" come out when these functions are queried infinitely often. I don't see any point in trying to rescue his approach. But you can try if you like. I think it is conceptually unsound.
 
  • #132
stevendaryl said:
The axiom of choice implies that every set can be put into a one-to-one correspondence with some initial segment of the ordinals. That means that it is possible to index the unit interval by ordinals [itex]\alpha[/itex]: [itex][0,1] = \{ r_\alpha | \alpha < \mathcal{C}\}[/itex] where [itex]\mathcal{C}[/itex] is the cardinality of the continuum. The continuum hypothesis implies that [itex]\mathcal{C} = \omega_1[/itex], the first uncountable ordinal ([itex]\omega_1[/itex] is the same as Aleph_1, if we use the Von Neumann representation for cardinals and ordinals). So we have:

[itex][0,1] = \{ r_\alpha | \alpha < \omega_1\}[/itex]

If [itex]\alpha < \omega_1[/itex], then that means that [itex]\alpha[/itex] is countable, which means that there are only countably many smaller ordinals. That means that if we order the elements of [itex][0,1][/itex] by using [itex]r_\alpha < r_\beta \leftrightarrow \alpha < \beta[/itex], then for every [itex]x[/itex] in [itex][0,1][/itex] there are only countably many [itex]y[/itex] such that [itex]y < x[/itex].
Thanks, you are right!

So the set of countable ordinals is very, very large. Your "ordering" of [0, 1] is not actually countable, even though every initial segment of it is. Well, that's how it has to be if we want both the axiom of choice and the continuum hypothesis to be true. But it is merely a matter of taste whether or not we want them to be true. The physics of the universe does not depend on these axioms about infinite sets being true or not. So maybe there are physical grounds to prefer not to have some of these axioms - we might get a mathematics which was more physically appealing by making different choices. There have been a number of very serious proposals along these lines: pick axioms of the infinite not on the grounds of mathematical expediency but on the grounds of physical intuition.
 
Last edited:
  • #133
Bohm described it very well. As Bell himself said, you can't get away with "no action at a distance"...

 
Last edited by a moderator:
  • #134
EEngineer91 said:
Bohm described it very well. As Bell himself said, you can't get away with "no action at a distance"...


When/where did Bell say that? Just like Bohr, there is a young Bell and an older and wiser Bell ... Young Bell was a fan of Bohmian mechanics; older Bell liked the CSL theory. Always (?) Bell was careful to distinguish his gut feelings about a matter from what logic would allow us to conclude.

Look: you can't simulate quantum correlations with a local hidden variables model without cheating. That's exactly what Bell's theorem says. If you *know* that there must be a hidden variables model explaining QM, then you *know* there is non-locality.

QM does not allow action-at-a-distance in the world of what we can see and feel and measure. If you want to simulate QM with hidden variables, you'll have to put action-at-a-distance into the hidden layer.
 
Last edited by a moderator:
  • #135
Please watch the video; that is not a young Bell saying this. He was always a fan of Bohm's work, but unfortunately he died early as well. The most important line in the video is at the end: "you can't get away with NO action at a distance"... Non-locality is fine; it just bugs the relativists and those who think c is a universal speed barrier to everything, when it is just a constant of electromagnetism.
 
  • #136
EEngineer91 said:
Please watch the video; that is not a young Bell saying this. He was always a fan of Bohm's work, but unfortunately he died early as well. The most important line in the video is at the end: "you can't get away with NO action at a distance"... Non-locality is fine; it just bugs the relativists and those who think c is a universal speed barrier to everything, when it is just a constant of electromagnetism.
will do

He is a bit subtle. He says: I cannot say that action at a distance is not needed; I can say that you can't say it is not needed. This is like Buddha talking about the self. He is saying that our usual categories of thought are *wrong*. Because of the words in our vocabulary and our narrow interpretation of what they mean, we ask stupid questions, and hence get stupid answers.

Beautiful! Exactly what I have been thinking for a long time...
 
  • #137
gill1109 said:
Hold it. Aleph_1 is the first uncountable *cardinal* not ordinal.

In the Von Neumann representation of ordinals and cardinals, a cardinal is an ordinal; [itex]\alpha[/itex] is a cardinal if it is an ordinal, and for any other ordinal [itex]\beta < \alpha[/itex], there is no one-to-one correspondence between [itex]\alpha[/itex] and [itex]\beta[/itex]. So in the Von Neumann representation, the first uncountable ordinal is also the first uncountable cardinal.

AFAIK, the continuum hypothesis does not say that the unit interval is in one-to-one correspondence with the set of countable *ordinals*.

Yes, it does imply that. With the Von Neumann representation of ordinals, any ordinal is the set of all smaller ordinals. So the set of all countable ordinals is itself an ordinal. It has to be uncountable (otherwise, it would be an element of itself, which is impossible). So it's the smallest uncountable ordinal, [itex]\omega_1[/itex]. The continuum hypothesis says that there is no cardinality between countable and the continuum, so the cardinality of the continuum has to equal that of [itex]\omega_1[/itex].

But maybe you know things about the continuum hypothesis which I don't know. Please give a reference.

I did some Googling, and I don't see the claim stated explicitly anywhere, although it's a trivial consequence of other statements.

http://en.wikipedia.org/wiki/Aleph_number
[itex]\aleph_1[/itex] is the cardinality of the set of all countable ordinal numbers...

the celebrated continuum hypothesis, CH, is equivalent to the identity

[itex]2^{\aleph_0}=\aleph_1[/itex]

Together, those statements imply that the continuum has the same cardinality as the set of countable ordinals. Having the same cardinality means that they can be put into one-to-one correspondence.
 
  • #138
gill1109 said:
Pitowsky can come up with one function or many, but he doesn't know in advance which arguments we are going to supply it with. In the j-th run one of his functions is "queried" once (well, once on each side of the experiment) and generates two outcomes ±1. His "probabilities" are irrelevant. If he is using non-measurable functions, he can't control what "probabilities" come out when these functions are queried infinitely often. I don't see any point in trying to rescue his approach. But you can try if you like. I think it is conceptually unsound.

Yes, that's my point: there seems to be a contradiction between the formally computed probabilities and the intuitive notion of probabilities as limits of relative frequencies. Maybe that means that the mathematical possibility of nonmeasurable sets is inconsistent with our use of probabilities in physics.

It's not so much that I'm trying to rescue Pitowsky's approach; from the very first, it seemed to me like a toy model to show the subtleties involved in Bell's proof that are easy to gloss over. At this point, I'm really trying to reconcile two different mathematical results that both seem pretty rigorous, but seem to contradict each other. Whether or not Pitowsky's functions have any relevance to the real world, we can reason about them: they are pretty well-defined, mathematically. I'm trying to understand what goes wrong in reasoning about them.
 
Last edited:
  • #139
stevendaryl said:
Together, those statements imply that the continuum has the same cardinality as the set of countable ordinals. Having the same cardinality means that they can be put into one-to-one correspondence.
Agree. This is what the continuum hypothesis and the axiom of choice tell us. But we are free not to believe either. Formal mathematics is consistent with them if and only if it is consistent without them. One could have other axioms instead, e.g. that all subsets of [0,1] are Lebesgue measurable. Maybe that would be a nicer axiom for physics applications. No more Banach-Tarski paradox. All kinds of advantages ...
 
  • #140
gill1109 said:
will do

He is a bit subtle. He says: I cannot say that action at a distance is not needed; I can say that you can't say it is not needed. This is like Buddha talking about the self. He is saying that our usual categories of thought are *wrong*. Because of the words in our vocabulary and our narrow interpretation of what they mean, we ask stupid questions, and hence get stupid answers.

Beautiful! Exactly what I have been thinking for a long time...

Yes, very subtle...but important.
 
