Bell's theorem and local realism

In summary: I think you are right that local realism is an assumption that is made in the theorem. The theorem says that quantum mechanics predicts correlations in certain pairs of highly entangled particles (say, A and B) that cannot be explained by complete knowledge of everything in the intersection of A's and B's respective past light cones. Bell's theorem refers to correlations between "classical" or "macroscopic" experimental outcomes. So as long as one believes that the experimental outcomes in a Bell test are "classical", the violation of the inequality does rule out local realism.
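A minimal sketch (Python) of the quantum prediction the summary refers to: for the singlet state the correlation is E(a, b) = -cos(a - b), and at the standard CHSH angle choices (our illustrative choice, not taken from this thread) the combination |S| reaches 2√2, above the local-realist bound of 2.

```python
import math

# Quantum prediction for the singlet-state correlation at analyzer angles a, b.
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the local-realist bound of 2
```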
  • #176
gill1109 said:
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions. And Slava Belavkin's "eventum mechanics", which is just QM with the Heisenberg cut seen as "real" and placed in the Hilbert space in a way which ensures causality, is even Copenhagen QM without a Schrödinger cat paradox. Finally, it can be made relativistically invariant using recent work of D. Bedingham (2011). Relativistic State Reduction Dynamics. Foundations of Physics 41, 686–704. arXiv:1003.2774

Besides the Copenhagen interpretation getting the quantum predictions: does this interpretation also account for the perfect anti-correlations when a = b? If the particles are in a superposition of spin up and spin down and the detectors are aligned, then a measurement at A (spin up) seems to have a non-local effect at B (spin down).
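The perfect anti-correlation at a = b follows directly from the singlet state itself; a small numerical check (Python with NumPy, purely illustrative):

```python
import numpy as np

# Pauli matrices; spin measurement along angle theta in the x-z plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|up,down> - |down,up>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(a, b):
    """Quantum expectation <psi| sigma_a (x) sigma_b |psi> = -cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return np.real(psi.conj() @ op @ psi)

print(correlation(0.3, 0.3))  # ≈ -1: perfect anti-correlation whenever a = b
```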
 
  • #177

EEngineer91 said:
Yet, for some reason, many physicists of today lambast action at a distance as some logical impossibility. Bell even brought up good holistic problems such as defining precisely "measurement" and "observation device", and the paradox of their "separate-ness" and fundamental "together-ness".

Some call action at a distance a logical impossibility because there is no known physical mechanism and it is bizarre; for others, non-locality is an artefact created by introducing into quantum mechanics notions that are foreign to it.
There are physical systems that are beyond the scope of the EPR definition of reality.
One could also think of complicated scenarios where unknown local physical variables may couple together in a way that gives the (false) impression of non-local results (in particular, Laloë, 2008).
And in this paper, Statistics, Causality and Bell's Theorem (Gill, May 2014), the author argues that Bell's theorem (and its experimental confirmation) should lead us to relinquish not locality but realism:
"Quantum randomness is both real and fundamental. Quantum randomness is non-classical, irreducible. It is not an emergent phenomenon. It is the bottom line. It is a fundamental feature of the fabric of reality."
 
  • #178
TrickyDicky said:
But the point here, or at least what I meant in the post about dropping realism, is that either we agree on calling non-local any theory able to get the quantum predictions, regardless of any other assumption like realism, or, in the non-realistic case (like QM's), it makes no sense to still call it local, unless we mean the Einstein sense, i.e. causal, but then it is better not to use the term local.
I think a theory can be non-local in the Bell sense and keep causality; it just won't be able to do it with particle-like objects in its ontology if the theory is realist. If it is not, i.e. instrumentalist like QM, it can make up anything without the need to take it seriously as an interpretation (as indeed happens).

While dropping realism in both cases above, you are still calling the theories non-local: 1) calling the theory non-local regardless of an assumption like realism; 2) or, in the non-realistic case (like QM's), saying it makes no sense to still call it local.

If locality and realism are not conjoined, then there could be a local, non-realist/non-linear model that reproduces the correlations.
 
  • #179
morrobay said:
Besides the Copenhagen interpretation getting the quantum predictions: does this interpretation also account for the perfect anti-correlations when a = b? If the particles are in a superposition of spin up and spin down and the detectors are aligned, then a measurement at A (spin up) seems to have a non-local effect at B (spin down).
The Copenhagen interpretation doesn't *explain* this. It merely *describes* it. Perhaps that's the bottom line.

If you want to *explain* it you have to come up with something weird. That's what Bell's theorem says.

MWI doesn't explain it either. It says that reality is not real: the particular branch of the many worlds you and I are stuck in at this moment is no more and no less real than all the others. The only reality is a unitarily evolving wave function of the universe. Call that an explanation? I don't. Still, it seems to make a lot of people happy.
 
  • #180
morrobay said:
And in this paper, Statistics, Causality and Bell's Theorem (Gill, May 2014), the author argues that Bell's theorem (and its experimental confirmation) should lead us to relinquish not locality but realism:
"Quantum randomness is both real and fundamental. Quantum randomness is non-classical, irreducible. It is not an emergent phenomenon. It is the bottom line. It is a fundamental feature of the fabric of reality."

And wise words they are. :smile: These too, from the same citation:

"It seems to me that we are pretty much forced into rejecting realism, which, please remember, is actually an idealistic concept: outcomes 'exist' of measurements which were not performed. However, I admit it goes against all instinct."

http://arxiv.org/pdf/1207.5103.pdf

Keep in mind that EPR rejected observer-dependent reality in favor of objective reality (hidden variables/realism) with no scientific or logical basis. They simply thought that subjective reality was unreasonable. That matches the comments of Gill above ("goes against all instinct"), and for quite the reason he presents: a) EPR did not know of Bell; and b) the assumption of realism is an unnecessary luxury we cannot afford in light of Bell.
 
  • #181
But is there even agreement on what a non-realistic theory is? For example, MWI is considered by some of its proponents to be real http://philsci-archive.pitt.edu/8888/1/Wallace_chapter_in_Oxford_Handbook.pdf . Also CSL is considered by its author to be real, eg. http://arxiv.org/abs/1209.5082. So if Belavkin's Eventum Mechanics http://arxiv.org/abs/quant-ph/0512188 is a generalization or close relative of CSL, why should it be considered unreal?

One resolution, which I sympathize with, is Matt Leifer's explanation that there are two definitions of reality http://arxiv.org/abs/1311.0857.
 
  • #182
atyy said:
But is there even agreement on what a non-realistic theory is? For example, MWI is considered by most to be real. Also CSL is considered by its author to be real, eg. http://arxiv.org/abs/1209.5082. So if Belavkin's Eventum Mechanics http://arxiv.org/abs/quant-ph/0512188 is a generalization or close relative of CSL, why should it be considered unreal?
I read on Wikipedia, in material written by those favourable to MWI, that MWI abandons realism... or at least, reality.

Yes, CSL is realist and non-local. The random disturbances in the model are supposed to exist in reality, and thereby determine what would have happened if the experimenter had done something else; and because it reproduces QM, it has to be non-local.

The mathematical framework of CSL can be seen as a special case of the mathematical framework of eventum mechanics. But the interpretation of those models favoured by the people who invented them differs precisely in what they thought should be considered part of reality and what shouldn't. It's subtle and confusing. In fact it corresponds exactly to what Bell says in that YouTube video: I can't tell you that there is action at a distance in QM, and I can't tell you that there isn't.
 
  • #183
gill1109 said:
The mathematical framework of CSL can be seen as a special case of the mathematical framework of eventum mechanics. But the interpretation of those models favoured by the people who invented them differs precisely in what they thought should be considered part of reality and what shouldn't. It's subtle and confusing.

The main place where Belavkin seems to differ from CSL in ontology, if any, is the derivation from filtering, which, if you take a Bayesian interpretation, may involve non-real things (subjective probability). Is that why you say eventum mechanics is not real while CSL is real, even though mathematically the final equations are a generalization of CSL's?
 
  • #185
atyy said:
One resolution, which I sympathize with, is Matt Leifer's explanation that there are two definitions of reality http://arxiv.org/abs/1311.0857.
Leifer says "Scientific realism is the view that our best scientific theories should be thought of as describing an objective reality that exists independently of us". I guess most people are realists, according to that definition. Or would like to be! But the important issue is *what should be thought of as belonging to that reality*? The MWI people somehow think of reality consisting only of a unitarily evolving wave-function of the universe. What we personally experienced along our own path so far, is apparently an illusion.
I like to think of detector clicks as being part of reality. Then work from there, and see what else can be put in while still making sense. If I want to keep locality I have to give up counter-factual definiteness and I have to give up local hidden variables (I'm not going to buy conspiracy theories). So finally it all comes down to "what's in a name". Local / non-local, realist / non-realist, these maybe are only semantic squabbles.
 
  • #186
gill1109 said:
Leifer says "Scientific realism is the view that our best scientific theories should be thought of as describing an objective reality that exists independently of us". I guess most people are realists, according to that definition. Or would like to be! But the important issue is *what should be thought of as belonging to that reality*? The MWI people somehow think of reality consisting only of a unitarily evolving wave-function of the universe. What we personally experienced along our own path so far, is apparently an illusion.
I like to think of detector clicks as being part of reality. Then work from there, and see what else can be put in while still making sense. If I want to keep locality I have to give up counter-factual definiteness and I have to give up local hidden variables (I'm not going to buy conspiracy theories). So finally it all comes down to "what's in a name". Local / non-local, realist / non-realist, these maybe are only semantic squabbles.

Yes, a lot of it is semantic. I actually like the semantics that Leifer proposes at the end, so that one can consider the wave function in MWI both real and not real. So we can eat our cake and have it. (I'm not sure whether I agree with him that MWI is technically correct, but that's a different issue.)

http://arxiv.org/abs/1311.0857
"We have arrived at the conclusion that noncontextuality must be derived in terms of an analysis of the things that objectively exist. This implies a realist view of physics, or in other words "bit from it", which seems to conflict with "it from bit". Fortunately, this conflict is only apparent because "it" is being used in different senses in "it from bit" and "bit from it". The things that Wheeler classifies as "it" are things like particles, fields and spacetime. They are things that appear in the fundamental ontology of classical physics and hence are things that only appear to be real from our perspective as classical agents. He does not mention things like wavefunctions, subquantum particles, or anything of that sort. Thus, there remains the possibility that reality is made of quantum stuff and that the interaction of this stuff with our question asking apparatus, also made of quantum stuff, is what causes the answers (particles, fields, etc.) to come into being. "It from bit" can be maintained in this picture provided the answers depend not only on the state of the system being measured, but also on the state of the stuff that comprises the measuring apparatus. Thus, we would end up with "it from bit from it", where the first "it" refers to classical ontology and the second refers to quantum stuff."
 
  • #187
gill1109 said:
If I want to keep locality...

It would be nice if you explained how you define locality here, so we might understand why you want to keep it at such a price.
 
  • #188
I suppose it is OK to have several definitions of reality when discussing Bell's theorem. Even if we accept experimental results at spacelike separation as real, what the theorem excludes is variables defined in the non-overlapping past light cones being sufficient to describe the results. So, in the sense that variables defined in Hilbert space are not defined in spacetime, those could be considered not real for Bell's theorem.

MWI's primary reality is Hilbert space, not spacetime, so it could be argued that it is not real in the sense of Bell excluding "local realistic variables". On the other hand, such a definition would seem to make even Bohmian Mechanics not real. However, if one allows things defined in Hilbert space to be real, then it would seem MWI and BM are both real, since the wave function really did evolve in a certain way (counterfactual definiteness).
 
  • #189
gill1109 said:
My opinion is that it will get nowhere. Of course it is not conspiratorial "at the Planck scale". But at the scale of a real world Bell-CHSH type experiment it would have to have become conspiratorial.

I think we mean different things by conspiracies. I agree that a theory that "explains" the Bell correlations by fine-tuning the initial parameters is cheap and conspiratorial. However, if you can get the correlations by some physical mechanism that does not depend on any fine-tuning, then we are dealing with a law of physics, not a conspiracy.

As an example, if you consider a mechanical clock, the correlation between the displayed time and the alarm is a type of conspiracy. There is nothing in the physics of the mechanism that makes such a correlation inevitable. On the other hand, the direction of rotation of the two geared wheels at the beginning and the end of a row is the same for an odd number of wheels and different for an even number. I would say that this type of correlation is not conspiratorial because it is generic: it applies to every type of mechanism regardless of the detailed way in which it was built.
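The gear-train parity claim can be checked in a couple of lines (a toy sketch; the function name is ours):

```python
# Direction of the last wheel in a gear train: adjacent meshing wheels
# counter-rotate, so first-vs-last direction depends only on parity.
def last_wheel_direction(first_direction, n_wheels):
    # Each meshing pair flips the sense of rotation: n_wheels - 1 flips total.
    flips = n_wheels - 1
    return first_direction if flips % 2 == 0 else -first_direction

print(last_wheel_direction(+1, 5))  # +1: odd train, same direction
print(last_wheel_direction(+1, 4))  # -1: even train, opposite direction
```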

't Hooft makes clear that the theory he is pursuing has to be non-conspiratorial, the correlations should appear as a result of some generic property of the evolution of the CA.

In fact, I am absolutely certain that this approach will get nowhere. Possibly a tiny part of QM can be described in this way. But it cannot form a basis for all of conventional QM, because of Bell's theorem. Or ... it can, but this will lead to "Bell's fifth position": a loophole-free and successful Bell-type experiment will never take place, because the QM uncertainty relations themselves will prevent establishing the right initial conditions: the right quantum state in the right small regions of space-time.

Given the fact that this class of theories denies the statistical independence (or free-will, or freedom) assumption, it has no a priori problem with Bell. The source and detectors are part of the same CA and there are subtle correlations between them.

't Hooft's project is to find a so-called ontological basis for QM (which is supposed to consist of CA states). In this basis all variables would commute, so there is no uncertainty. Nevertheless, the theory is still QM so it should apply to virtually everything, just like the standard model.


It would also imply that a quantum computer can never be built ... or rather, not scaled-up. As one makes computers with more and more qubits in them, quantum decoherence will take over faster, and you'll never be able to factor large integers rapidly or whatever else you want to do with them.

I don't think this is true. The idea is that a quantum computer would never outrun a classical Planck-scale computer.
 
  • #190
ueit said:
't Hooft makes clear that the theory he is pursuing has to be non-conspiratorial, the correlations should appear as a result of some generic property of the evolution of the CA.

There was a brief discussion about 't Hooft's ideas in the group "Beyond the Standard Model", but it didn't really go anywhere. Here's my objection, which I'm perfectly happy to be talked out of.

Consider a twin-pair EPR experiment with experimenters Alice and Bob. The usual assumption in discussions of hidden variables is that there are three independent "inputs": (1) Alice's setting, (2) Bob's setting, and (3) whatever hidden variables are carried along with the twin pairs. 't Hooft's model ultimately amounts to saying: (1) and (2) are not actually independent variables. Alice has some algorithm in mind for selecting her setting, and if we only knew enough about Alice's state, and the state of whatever else she's basing her decision on, then we could predict what her choice would be. Similarly for Bob. And if Alice's and Bob's choices are predictable, then it's not hard to generate a hidden variable model of the twin pairs that gives the right statistics. (The difficulty with hidden variables is that you have to accommodate all possible choices Alice and Bob might make. If you only have to accommodate one choice, it's much easier.)
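The point that known settings make a "hidden-variable" account trivial can be sketched as a toy Monte Carlo (Python; the sampling scheme is our illustrative choice, not 't Hooft's construction): because a and b are assumed known when the pair is created, the outcomes can simply be drawn so that E[AB] = -cos(a - b).

```python
import math, random

def make_pair(a, b, rng):
    """Outcome pair drawn so that E[A*B] = -cos(a - b).
    Legitimate only because the settings a, b are assumed known in advance."""
    # P(A =  B) = (1 - cos(a - b)) / 2
    # P(A = -B) = (1 + cos(a - b)) / 2
    A = rng.choice([+1, -1])
    B = A if rng.random() < (1 - math.cos(a - b)) / 2 else -A
    return A, B

rng = random.Random(0)
a, b = 0.0, math.pi / 3
n = 200_000
corr = sum(A * B for A, B in (make_pair(a, b, rng) for _ in range(n))) / n
print(corr)  # ≈ -cos(pi/3) = -0.5, the quantum prediction for this one setting pair
```

The catch, as the objection below notes, is that the model only works because it accommodates a single, pre-known setting pair.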

Okay, that's plausible. Except that Alice can bring into play absolutely any other fact about the universe in making her decision about her setting. She could say: If the next batter in the Cubs game gets a hit, I'll choose setting 1, otherwise, I choose setting 2. If the hidden variable relies on knowing what Alice and Bob will choose, then potentially, it would be necessary to simulate the entire universe (or the relevant part of the backwards light cone).

A possible alternative might be to just let Alice's and Bob's results get made independently, locally, and then run time backwards and make another choice if later a conflict is discovered. That would be a real conspiracy theory, but it would be computationally more tractable, maybe.
 
  • #191
morrobay said:
[..] One could also think of complicated scenarios where local unknown physical variables may couple together in a way that will give the (false) impression of non-local results. ( In part. Laloe, 2008)[..].
http://journals.aps.org/pra/abstract/10.1103/PhysRevA.77.022108
Very interesting - macroscopic Bell-type experiments, without counterfactual definiteness.
Thanks!

But what did you mean by "(false) impression"?
 
  • #192
stevendaryl said:
There was a brief discussion about 't Hooft's ideas in the group "Beyond the Standard Model", but it didn't really go anywhere. Here's my objection, which I'm perfectly happy to be talked out of.

Consider a twin-pair EPR experiment with experimenters Alice and Bob. The usual assumption in discussions of hidden variables is that there are three independent "inputs": (1) Alice's setting, (2) Bob's setting, and (3) whatever hidden variables are carried along with the twin pairs. 't Hooft's model ultimately amounts to saying: (1) and (2) are not actually independent variables. Alice has some algorithm in mind for selecting her setting, and if we only knew enough about Alice's state, and the state of whatever else she's basing her decision on, then we could predict what her choice would be. Similarly for Bob. And if Alice's and Bob's choices are predictable, then it's not hard to generate a hidden variable model of the twin pairs that gives the right statistics. (The difficulty with hidden variables is that you have to accommodate all possible choices Alice and Bob might make. If you only have to accommodate one choice, it's much easier.)

I would say that none of the three variables is independent. This is my understanding of his model:

The CA is an array of Planck-sized cubes filling the entire universe. Each cube has some properties (say, color). At each tick of the clock, the color of each cube changes following some algorithm, the inputs being the color of the cube itself and of its neighboring cubes.

Now, this produces all sorts of patterns, and those patterns correspond to quantum particles and ultimately to the macroscopic objects that are used to perform a Bell test.

The important thing here is that those patterns can only appear in some configurations (because of the CA algorithm). These configurations correspond to the predicted quantum statistics of the Bell test. In other words it is mathematically impossible to get results that are in contradiction with QM.
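The update scheme described above is the standard cellular-automaton recipe; a one-dimensional toy version (Wolfram's Rule 30, chosen purely as an illustration, and in no way 't Hooft's actual Planck-scale rule):

```python
# Minimal 1-D cellular automaton: each cell's next state depends only on
# itself and its two neighbors (a toy stand-in for the Planck-cell update).
RULE = 30  # illustrative choice of update rule

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> neighborhood) & 1)  # look up rule bit
    return out

row = [0, 0, 1, 0, 0]
for _ in range(3):
    print(row)
    row = step(row)
```

Even this trivial rule generates complicated global patterns from local, deterministic updates, which is the flavor of the claim being made.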

Okay, that's plausible. Except that Alice can bring into play absolutely any other fact about the universe in making her decision about her setting. She could say: If the next batter in the Cubs game gets a hit, I'll choose setting 1, otherwise, I choose setting 2. If the hidden variable relies on knowing what Alice and Bob will choose, then potentially, it would be necessary to simulate the entire universe (or the relevant part of the backwards light cone).

I think that you see this backwards. It is the CA patterns from which Alice and her "decision" emerge, not the other way around. Just as you cannot make a decision which leads to a violation of conservation laws, you cannot make decisions contrary to the rules of the CA. Say that you are asked if you want tea or coffee, and a brain scan shows that in order to choose tea, some electrons in your brain would need to violate momentum conservation. Guess what you will choose!

A possible alternative might be to just let Alice's and Bob's results get made independently, locally, and then run time backwards and make another choice if later a conflict is discovered. That would be a real conspiracy theory, but it would be computationally more tractable, maybe.

How are we supposed to "run time backwards"? I do not understand this.
 
  • #193
ueit said:
I think that you see this backwards. It is the CA patterns from which Alice and her "decision" emerge, not the other way around. Just as you cannot make a decision which leads to a violation of conservation laws, you cannot make decisions contrary to the rules of the CA. Say that you are asked if you want tea or coffee, and a brain scan shows that in order to choose tea, some electrons in your brain would need to violate momentum conservation. Guess what you will choose!

I know that Alice's choice isn't going to violate the laws of physics. But as I said, Alice can certainly make a meta-choice: "If in the baseball game the batter gets a hit, I'm going to drink coffee. Otherwise, I'm going to drink tea." That doesn't make the choice any less deterministic, but it means that predicting her choice would involve more than knowing what's inside her brain. You would also have to know what's going on in the baseball game miles away.

Potentially, the choice of Alice and Bob's setting in an EPR experiment could depend on the rest of the universe. So to the extent that their settings and their results are co-determined, it would require arranging things with distant baseball teams, as well as Alice and Bob. Potentially, the entire universe would have to be fine-tuned to get the right statistics for EPR-type experiments.

Suppose Alice announces: "I will measure the spin in the x-direction if the next batter gets a hit. Otherwise, I will measure the spin in the y-direction." Bob announces: "I will measure the spin in the x-direction if the juggler I'm watching drops the ball. Otherwise, I will measure the spin in the y-direction." So we generate a twin-pair, and Alice measures the spin in one direction, and Bob measures the spin in a possibly different direction. 't Hooft is saying that the four variables: Alice's direction, Bob's direction, Alice's result, Bob's result, are not a case of the first two causing the last two, but of all four being determined by the initial state of the cellular automaton. But because of the particular way that Alice and Bob choose their settings, he also has to include the baseball player and the juggler in the conspiracy. Potentially the state of the entire rest of the universe might be involved in computing whether Alice measures spin-up.

How are we supposed to "run time backwards"? I do not understand this.

I didn't say that WE are the ones doing it. The universe could work this way: Alice's result is generated under an assumption (a pure guess) as to what Bob's setting and result will be. Bob's result is generated under an assumption as to what Alice's setting and result will be. If it later turns out, after they compare results, that the guesses were wrong, you just fix Alice's and Bob's memories so that they have false memories of getting different results. I don't see how this is any less plausible than t'Hooft's model.
 
  • #194
stevendaryl said:
... 't Hooft is saying that the four variables: Alice's direction, Bob's direction, Alice's result, Bob's result, are not a case of the first two causing the last two, but of all four being determined by the initial state of the cellular automaton. But because of the particular way that Alice and Bob choose their settings, he also has to include the baseball player and the juggler in the conspiracy. Potentially the state of the entire rest of the universe might be involved in computing whether Alice measures spin-up.

This is a point I have tried to make in the past about superdeterministic programs à la 't Hooft: every particle in every spot in the universe must have a copy of the complete (and very large) playbook if there is to be local-only interaction determining the individual outcomes. That is the only way, once you consider all the different "choice" permutations (juggler, player, or a near-infinite number of other combinations), that the conspiracy can operate. After all, the outcomes must otherwise appear random (when considered individually). I presume that random element is in the playbook too.
 
  • #195
I saw an article by Zeilinger et al. which shows experimentally that a large class of non-local realistic theories is incompatible with QM:

http://www.nature.com/nature/journal/v446/n7138/abs/nature05677.html

"giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned."

This point is also in the EPR article: QM disturbs the system but permits prediction with certainty, since the wave functions before and after the measurement are respectively non-separable and separable.

I suppose EPR overlooked this in their quantum formalism, since it seems contradictory to disturb the system in an uncontrolled way and yet be able to predict with certainty...
 
  • #197
stevendaryl said:
I know that Alice's choice isn't going to violate the laws of physics. But as I said, Alice can certainly make a meta-choice: "If in the baseball game the batter gets a hit, I'm going to drink coffee. Otherwise, I'm going to drink tea." That doesn't make the choice any less deterministic, but it means that predicting her choice would involve more than knowing what's inside her brain. You would also have to know what's going on in the baseball game miles away.

Potentially, the choice of Alice and Bob's setting in an EPR experiment could depend on the rest of the universe. So to the extent that their settings and their results are co-determined, it would require arranging things with distant baseball teams, as well as Alice and Bob. Potentially, the entire universe would have to be fine-tuned to get the right statistics for EPR-type experiments.

Suppose Alice announces: "I will measure the spin in the x-direction if the next batter gets a hit. Otherwise, I will measure the spin in the y-direction." Bob announces: "I will measure the spin in the x-direction if the juggler I'm watching drops the ball. Otherwise, I will measure the spin in the y-direction." So we generate a twin-pair, and Alice measures the spin in one direction, and Bob measures the spin in a possibly different direction. 't Hooft is saying that the four variables: Alice's direction, Bob's direction, Alice's result, Bob's result, are not a case of the first two causing the last two, but of all four being determined by the initial state of the cellular automaton. But because of the particular way that Alice and Bob choose their settings, he also has to include the baseball player and the juggler in the conspiracy. Potentially the state of the entire rest of the universe might be involved in computing whether Alice measures spin-up.

You have to understand that in a CA there are no free parameters. Everything is related to everything else. The fact that Alice "decides" to make a "meta-choice" is quite irrelevant. Her state was already related to that baseball game, to Bob's juggler, and to whatever else you may think of. It might look somewhat unintuitive, but this feature is shared with very respectable physical theories, like general relativity or classical electrodynamics.

In fact, cellular automata are used exactly for that: simulations of various field theories. From the point of view of Bell's theorem, more specifically, from the point of view of the "freedom" assumption, the CA proposal is in the same class as all local field theories.

From the point of view of their mathematical formulation, all these theories are as superdeterministic and conspiratorial as the CA. The only difference resides in their domain. GR or classical electrodynamics do not describe everything, and especially not human brains. The CA does (hopefully).

I maintain that for systems which are fully described by these theories, the freedom assumption does not hold. And it is easy to see why, and why this should not be perceived as a conspiracy.

Let's assume, for the sake of the argument, that our galaxy is described completely by GR (we ignore supernovae and other events involving other forces). Let's focus now on Earth and on another planet situated symmetrically in the opposite arm of the galaxy; call it Earth_B. Our experiment involves only our observation of the trajectories of these two planets.
GR is a local theory, therefore the trajectory of Earth during our observation can be perfectly predicted from the local space curvature. The same holds for Earth_B. I need make no reference to Earth_B when describing what Earth is doing, and I couldn't care less about Earth while describing Earth_B. They are so far apart that no signal can travel between them during our experiment, and even if one could, its effect would be irrelevant at such a distance.

So, we should dismiss any "conspiracies" and proclaim the trajectories of the two planets statistically independent, right? Or, if you want, we may let them depend on their solar systems, or even on the whole arm of the galaxy. They are really independent now, right?

But when the two trajectories are compared, we see a perfect correlation between them. How can we explain that? It must be a non-local effect, or the universe must go forward and back in time, or our logic sucks, right?

Obviously, none of these solutions is true. The fact that was forgotten is that, at the beginning of the experiment, the states of the two planets (together with the local space curvature) were already correlated, and they have been so since the Big Bang.

So, the states of Alice and Bob, of the particle source, of the baseball players, and of the juggler are correlated even before the experiment begins. And they will remain so.
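The correlated-initial-conditions point can be sketched in a few lines: two systems evolved by purely local, independent dynamics, but seeded from a common past, stay perfectly correlated with no signal exchanged (toy Python sketch; the update rule is an arbitrary choice of ours):

```python
import random

def local_evolution(initial, steps):
    """Deterministic local dynamics: depends only on this system's own state."""
    state = initial
    for _ in range(steps):
        state = (3 * state + 1) % 1000  # arbitrary toy update rule
    return state

# A common past (the "Big Bang") fixes both initial states identically.
shared_initial = random.Random(42).randrange(1000)

earth   = local_evolution(shared_initial, 100)
earth_b = local_evolution(shared_initial, 100)

print(earth == earth_b)  # True: perfect correlation, no signal exchanged
```

This illustrates only the common-cause part of the argument; whether such a mechanism can reproduce the setting-dependent Bell correlations is exactly what is disputed in this thread.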

I didn't say that WE are the ones doing it. The universe could work this way: Alice's result is generated under an assumption (a pure guess) as to what Bob's setting and result will be. Bob's result is generated under an assumption as to what Alice's setting and result will be. If it later turns out, after they compare results, that the guesses were wrong, you just fix Alice's and Bob's memories so that they have false memories of getting different results. I don't see how this is any less plausible than t'Hooft's model.

I find 't Hooft's proposal much more acceptable.
 
  • #198
ueit said:
You have to understand that in a CA there are no free parameters. Everything is related to everything else. The fact that Alice "decides" to make a "meta-choice" is quite irrelevant. Her state was already related to that baseball game, to Bob's juggler, and to whatever else you may think of. It might look somewhat unintuitive, but this feature is shared with very respectable physical theories, like general relativity and classical electrodynamics.

I understand that, but as I said, Alice's choice, while not free, could involve essentially the rest of the universe (or at least everything in the backward light cone of her decision event). So for a cellular automaton to take advantage of this determinism, it would have to take into account everything else in the universe. As Dr. Chinese said, it would be necessary for every particle in the universe to have in essence a "script" for what everything else in the universe was going to do. That's not impossible, but it's not a very attractive model, it seems to me.
 
  • #199
ueit said:
In fact, cellular automata are used for exactly that: simulations of various field theories. From the point of view of Bell's theorem, and more specifically of the "freedom" assumption, the CA proposal is in the same class as all local field theories.

From the point of view of their mathematical formulation all these theories are as superdeterministic and conspiratorial as CA.

No, that's not true. The evolution of a classical field does not depend on knowing what's happening in distant regions of spacetime. Classical E&M is not superdeterministic. It's local and deterministic.
 
  • #200
stevendaryl said:
No, that's not true. The evolution of a classical field does not depend on knowing what's happening in distant regions of spacetime. Classical E&M is not superdeterministic. It's local and deterministic.

Just about any nonlocal theory can be turned into a local theory by invoking superdeterminism. So superdeterminism makes the distinction between local and nonlocal almost meaningless.
 
  • #201
ueit said:
You have to understand that in a CA there are no free parameters. Everything is related to everything else. The fact that Alice "decides" to make a "meta-choice" is quite irrelevant. Her state was already related to that baseball game, to Bob's juggler, and to whatever else you may think of. It might look somewhat unintuitive, but this feature is shared with very respectable physical theories, like general relativity and classical electrodynamics.

...

The fact that was forgotten was that, at the beginning of the experiment, the states of the two planets (together with the local space curvature) were correlated already, and they have been so since the Big-Bang.

So the states of Alice, Bob, the particle source, the baseball players, and the juggler are correlated even before the experiment begins. And they will remain so.

First, your counter-example fails the locality loophole test. A shift in the position of an object outside the light cone of a gravitational detector will not produce the correlations of a system that is quantum non-local. Relativistic dynamics are, of course, strictly local.

Second, the question is not whether there is a correlation (when such is asserted and assumed), but exactly how the "answer" is being supplied after an interaction with the environment. I.e., how is it that the entangled partner "knows" to give a spin-up response 75% of the time when a distant spin partner is planning a spin-down response after a last-second angle-setting instruction is received? The point being that a superdeterministic theory must have an explanation of how it is "more complete" than QM.

All I am hearing is that playbooks are *hidden* inside every particle and have *all* the answers with no logical explanation of how that occurs.
 
  • #203
stevendaryl said:
No, that's not true. The evolution of a classical field does not depend on knowing what's happening in distant regions of spacetime. Classical E&M is not superdeterministic. It's local and deterministic.

1. If you make the states of the CA to correspond to those of an electromagnetic field, you get a discrete approximation to classical electrodynamics.

2. The local value of the field does depend on the position/momenta of all field sources (point charges) in the universe.

Assume a universe which only contains point-charges which is completely described by classical electromagnetism.

The trajectory of any charge is determined by the local em field (Lorentz force).
The local em field is a function of all charges' positions and momenta.

It follows that the trajectory of any charge is a function of all other charges' positions and momenta. Therefore the assumption that there exists an object (a charge or a group of charges) that evolves independently of the rest of the charges in the universe is false. In other words, the freedom/free-will/statistical independence assumption of Bell's theorem is false.

Similar reasoning applies to general relativity; you only need to replace point charges by point masses and the local EM field by space curvature. The freedom assumption is also false in a universe described by GR.
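The dependence chain above can be caricatured in code. This is only a sketch under crude, explicitly non-physical assumptions (one dimension, an instantaneous Coulomb-like force, no retardation, unit masses and charges); the point is just that perturbing the most distant charge changes every trajectory.

```python
# Toy illustration of the dependence chain: a 1D "universe" of point
# charges interacting via an instantaneous repulsive 1/r^2 force
# (retardation ignored; this is a sketch, not real electrodynamics).
def simulate(positions, velocities, steps=100, dt=0.01, k=1.0):
    x = list(positions)
    v = list(velocities)
    n = len(x)
    for _ in range(steps):
        for i in range(n):
            f = 0.0
            for j in range(n):
                if j != i:
                    d = x[i] - x[j]
                    f += k * d / (abs(d) ** 3)  # repulsive 1/r^2 force
            v[i] += f * dt
        for i in range(n):
            x[i] += v[i] * dt
    return x

# Perturb only the initial position of the most distant charge...
run1 = simulate([0.0, 1.0, 100.0], [0.0, 0.0, 0.0])
run2 = simulate([0.0, 1.0, 100.5], [0.0, 0.0, 0.0])

# ...and the final position of the charge at the origin changes:
# no charge evolves independently of the rest of the universe.
assert run1[0] != run2[0]
```

The change is tiny (the distant force scales as 1/r^2), but it is strictly nonzero, which is all the argument needs.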
 
  • #204
ueit said:
1. If you make the states of the CA to correspond to those of an electromagnetic field, you get a discrete approximation to classical electrodynamics.

2. The local value of the field does depend on the position/momenta of all field sources (point charges) in the universe.

No, it doesn't. The coupled Maxwell-Lorentz equations are local. What that implies is that if you want to compute [itex]\vec{E}(\vec{r},t)[/itex], it is sufficient to know the values of [itex]\vec{E}, \vec{B}[/itex] and the positions of charged particles in the region of spacetime consisting of all points [itex]\vec{r'}, t'[/itex] such that
  • [itex]0 < t - t' < \delta t[/itex],
  • [itex]|\vec{r'}-\vec{r}| < c \delta t[/itex].

You don't need to know anything about points more distant than [itex]c \delta t[/itex]. The evolution of the electric field only depends on facts about local conditions, not facts about the whole universe.
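The domain-of-dependence claim is easy to see in a discretized sketch. Take a 1D wave equation as a stand-in for a field component, with units chosen so that c = dx/dt = 1; each update reads only nearest neighbours, so no disturbance can travel faster than one cell per step.

```python
# Minimal locality sketch: a 1D wave equation discretized so that each
# update uses only nearest-neighbour values. With c = dx/dt = 1 the
# update stencil is exactly the numerical light cone.
def step(u, u_prev):
    n = len(u)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        # depends only on u[i-1], u[i], u[i+1]: strictly local
        u_next[i] = 2 * u[i] - u_prev[i] + (u[i + 1] - 2 * u[i] + u[i - 1])
    return u_next

N = 200
u_prev = [0.0] * N
u = [0.0] * N
u[1] = 1.0  # a disturbance near the left boundary at t = 0

for t in range(50):
    u, u_prev = step(u, u_prev), u

# After 50 steps the disturbance cannot have reached site 100: the
# field there is exactly zero, untouched by the distant source.
assert u[100] == 0.0
assert any(abs(x) > 0 for x in u[:60])
```

To evolve any one site you never need data from outside its numerical light cone, which is precisely the sense in which the field theory is local.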

Assume a universe which only contains point-charges which is completely described by classical electromagnetism.

The trajectory of any charge is determined by the local em field (Lorentz force).
The local em field is a function of all charges' positions and momenta.

That's not true. The trajectory of a charge depends on local values of the fields. The evolution of the fields depends only on NEARBY charges. Distant charges are irrelevant (if you know the local values of fields in the recent past).

It follows that the trajectory of any charge is a function of all other charges' positions and momenta. Therefore the assumption that there exists an object (a charge or a group of charges) that evolves independently of the rest of the charges in the universe is false. In other words, the freedom/free-will/statistical independence assumption of Bell's theorem is false.

That's just not true. You're glossing the distinction between a local theory and a nonlocal theory.
 
  • #205
ueit said:
It follows that the trajectory of any charge is a function of all other charges' positions and momenta. Therefore the assumption that there exists an object (a charge or a group of charges) that evolves independently of the rest of the charges in the universe is false. In other words, the freedom/free-will/statistical independence assumption of Bell's theorem is false.

Sorry, one does not follow from the other.

In a local classical universe, you are saying that everything is predetermined. Perhaps. But that is a far cry from the superdeterministic universe you (or 't Hooft) are describing. In one, a decision to perform a particular measurement, while predetermined, has no bearing on the purely local outcomes. In the other, it does, and that has the effect of returning results at odds with the "true" statistics and instead consistent with QM predictions (which are otherwise wrong).
 
  • #206
Since when does non-locality equate with "no objective reality"?
 
  • #207
DrChinese said:
First, your counter-example fails the locality loophole test. A shift of position of an object outside the light cone of a gravitational detector will not present the correlations of one which is quantum non-local. Relativistic dynamics are, of course, strictly local.

It was not my intention to make such a claim.

Second, the question is not whether there is a correlation (when such is asserted and assumed), but exactly how the "answer" is being supplied after an interaction with the environment. I.e., how is it that the entangled partner "knows" to give a spin-up response 75% of the time when a distant spin partner is planning a spin-down response after a last-second angle-setting instruction is received? The point being that a superdeterministic theory must have an explanation of how it is "more complete" than QM.

I'll give you my take on this matter, using as an example classical electromagnetism.

1. Assume a universe which only contains point-charges which is completely described by classical electromagnetism.

2. I use a simple model of a "pre-entangled" pair: two rotating, classical, oppositely charged spheres. The spin is the classical magnetic moment associated with the rotation of each sphere.

3. The position/momenta of all charges in the universe in the past (including those of the would-be detectors) determines the local EM field near the "pre-entangled" pair.

4. When the EM force generated by the local EM field is strong enough, the spheres will fly apart, eventually reaching the detectors.

5. The actual orientation of the spin magnetic moment of each sphere depends on the local EM field at the moment of the splitting.

6. From (3) and (5) it follows that the actual orientation of the spin magnetic moment of each sphere depends on the position/momenta of all charges in the universe in the past (including those of the would-be detectors).

7. The entire universe is deterministic, therefore the position/momenta of all charges in the universe in the future (say at the moment of detection) is a function of the position/momenta of all charges in the universe in the past.

8. From (6) and (7) it follows that the spin magnetic moment of each sphere and the position/momenta of all charges in the universe in the future at the moment of detection are not independent parameters.

9. The detector orientation at the time of measurement is nothing but the position/momenta of the charges that constitute the detector.

10. From (8) and (9) it follows that the spin magnetic moment of each sphere and the detector orientation at the time of measurement are not independent parameters.

I hope the point above can give you a "feeling" of how local but deterministic theories can provide correlations in Bell tests.
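As a cartoon of steps (1)-(10), one can make both the hidden spin and the detector setting deterministic functions of the same past state (a seed standing in for the configuration of all charges). This toy does not reproduce the quantum statistics; it only illustrates how the statistical-independence assumption can fail in a deterministic universe.

```python
import random

# Hypothetical cartoon of steps (1)-(10): one deterministic "universe
# state" (the seed) fixes BOTH the hidden spin orientation of the pair
# and the later detector setting. Nothing here models real physics.
def universe(seed):
    rng = random.Random(seed)            # the shared deterministic past
    hidden_spin = rng.uniform(0, 360)    # step (5): fixed by the past state
    # steps (7)+(9): the detector setting also evolves from that same
    # past, so here it comes out as a function of the same state
    setting = [0, 45, 90][int(hidden_spin) % 3]
    return hidden_spin, setting

runs = [universe(s) for s in range(1000)]

# Step (10): spin and setting are not independent parameters; knowing
# the hidden spin determines the setting exactly in this toy.
assert all(setting == [0, 45, 90][int(spin) % 3] for spin, setting in runs)
```

The functional tie between `hidden_spin` and `setting` is the "freedom" assumption failing; whether such a tie can actually produce the quantum correlations is, of course, the contested point.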

All I am hearing is that playbooks are *hidden* inside every particle and have *all* the answers with no logical explanation of how that occurs.

Not from me.
 
  • #208
ueit said:
I'll give you my take on this matter, using as an example classical electromagnetism.

1. Assume a universe which only contains point-charges which is completely described by classical electromagnetism.

2. I use a simple model of a "pre-entangled" pair: two rotating, classical, oppositely charged spheres. The spin is the classical magnetic moment associated with the rotation of each sphere.

3. The position/momenta of all charges in the universe in the past (including those of the would-be detectors) determines the local EM field near the "pre-entangled" pair.

4. When the EM force generated by the local EM field is strong enough, the spheres will fly apart, eventually reaching the detectors.

5. The actual orientation of the spin magnetic moment of each sphere depends on the local EM field at the moment of the splitting.

But that isn't good enough. The relevant facts about the detectors are not their positions at the time of splitting. What's relevant for the predictions of QM are the positions of the detectors at the time of detection. The detectors could very well change positions while the particles are in flight (after they have split).

It is true that if you knew the positions and velocities of every particle in the universe, then you could in principle predict the positions the detectors would be in at the time of detection. But that's a LOT more complicated than allowing the actual orientation of the spin magnetic moment to depend on the local EM field at the moment of splitting. As a matter of fact, the local EM field would be pretty much irrelevant. (If the detectors are electrically neutral, then they have a negligible effect on a distant EM field.) What it would take for a local deterministic model to reproduce the predictions of quantum mechanics is a supercomputer that can simulate the rest of the universe. And it would have to come up with the result of its computation essentially instantly.

This approach seems completely implausible to me.
 
  • #209
ueit said:
... I hope the point above can give you a "feeling" of how local but deterministic theories can provide correlations in Bell tests.

Most definitely not, and we know that from Bell! Certainly there are many more things that determine and ultimately affect the outcomes of experiments in a local classical (deterministic) universe. The orientation of angle settings, for example. But the orientation of a particle is not determined in any way by randomizing devices outside its light cone which select the settings, as was done in the experiment of Weihs et al. (1998).

The point is that superdeterminism is NOT anything like any Laplacian demon operating in a clockwork universe. In such a universe, we would not get a value from experiment that matches QM. In superdeterminism, there is a mechanism in place that PREVENTS the selected sample from matching the true universe of counterfactual values. And I say that NO superdeterministic theory can ever reproduce all of the results of QM.

Of course, I can't prove that a la Bell, but it wouldn't surprise me if someone else eventually could. I do not believe that ANY program a la 't Hooft's CA can ever succeed. The only superdeterministic program that can succeed, in my opinion, is one in which:

a) There is a (VERY large) playbook handed out to every particle in the universe as it is created.
b) This playbook must be created ex nihilo since even photons from a laser beam have their own unique copies.
c) There is an absolute time reference frame in the entire universe. This is required so that Bell test results can synchronize properly.
d) That playbook instructs particles how to answer for their quantum observables at all times, including during Bell tests. So fundamental particles such as electrons will be guided by the playbook for a very, very, very long time (which is why the playbook is VERY large).
e) And of course, the playbook is safely hidden inside every particle, along with the clock used to determine which page of the playbook is to be referenced at any moment.
f) The only apparent purpose of all these playbooks is to get around Bell's Theorem. Apparently, J.S. Bell is actually the most important being in all history since the playbook is a giant conspiracy to disprove his theorem. (This is humor.)

So a playbook might look like this:

Playbook for electron 21ZNA9-004958:
...
August 10, 4:11:34.00599834: Act spin up in X direction.
August 10, 4:11:34.00601577: Act spin down in Y direction.
August 10, 4:11:34.00624384: Act spin up in Z direction.
August 10, 4:11:34.00653403: Emit a photon and give it a playbook of its own, presumably copied from the electron's playbook.
...
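For what it's worth, the playbook sketched above is trivial to write down as a data structure: a hypothetical per-particle lookup table keyed by an absolute universal time, which is exactly what makes points (c) and (e) so conspiratorial.

```python
# The playbook from (a)-(e), as a hypothetical data structure: a table
# mapping an absolute universal time to a scripted measurement outcome.
# Times and outcomes below are invented for illustration.
playbook = {
    "4:11:34.00599834": ("X", "up"),
    "4:11:34.00601577": ("Y", "down"),
    "4:11:34.00624384": ("Z", "up"),
}

def measure(particle_playbook, universal_time):
    # consult the hidden clock and look up the scripted answer;
    # returns None if no entry exists for that instant
    return particle_playbook.get(universal_time)

assert measure(playbook, "4:11:34.00601577") == ("Y", "down")
```

Note that the lookup needs the absolute time from point (c): without a universal clock shared by both wings of the experiment, the two playbooks could not stay in sync.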
 
  • #210
DrChinese said:
I do not believe that ANY program a la 't Hooft's CA can ever succeed. The only superdeterministic program that can succeed, in my opinion, is one in which: [deleted]

Well, the playbook idea that you sketched could be implemented by one of 't Hooft's machines, couldn't it?
 
