Does the Bell theorem assume reality?

In summary, the conversation revolves around the different interpretations and assumptions of Bell's theorem in relation to reality and nonlocality. Roderich Tumulka's paper is mentioned as a comprehensive analysis of the four different notions of reality, with the conclusion that only the mildest form of realism, (R4), is relevant to Bell's theorem. There is also discussion about the role of hidden variables and counterfactuals in Bell's theorem. Ultimately, while the validity of (R4) can be questioned philosophically, it is a necessary assumption within the scientific framework.
  • #106
DrChinese said:
This is Bell's condition that the setting at A does not affect the outcome at B, and vice versa. You could call that the Locality condition. The other one is the counterfactual condition, or Realism. Obviously, the standard and accepted interpretation of Bell is that no Local Realistic theory can produce the QM results. So both of these - Locality and Realism - must be present explicitly as assumptions.

It seems to me that there are two steps involved. One, having nothing to do with locality, is Reichenbach's Common Cause Principle (whether or not Bell intended this). It's the assumption that if two things are correlated, then there exists a "common cause" for both. I gave the example earlier of twins: You randomly select a pair of 15-year-old twins out of the population, and then you separately test them for their ability to play basketball. Doing this for many pairs, you will find (probably--I haven't done it) that their abilities are correlated. The probability that they both are good at basketball is unequal to the square of the probability that one of them is good at basketball. Reichenbach's Common Cause Principle would imply that there is some common causal factor affecting both twins' basketball-playing abilities. Maybe it's genetics, maybe it's parenting style, maybe it's where they live, maybe it's what school they went to, etc. If we let ##\lambda## be the collection of all such causal factors, then it should be the case that, controlling for ##\lambda##, there is no correlation between twins' basketball-playing ability.

To me, that's where factorizability comes in. It doesn't have anything to do with locality, yet, because the common factors might conceivably include something happening on a distant star a billion light-years away. Locality is the additional assumption that the common causal factors ##\lambda## must be in the intersection of the backwards light cones of the two tests of the twins' basketball-playing ability.

Factorizability is not particularly about locality, but locality dictates what can go into the common factors.
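The conditioning-on-##\lambda## claim above can be checked numerically. Here is a minimal sketch (all the numbers in the toy model are invented purely for illustration): each twin pair shares a common cause ##\lambda##, each twin is independently "good" at basketball with a probability depending only on ##\lambda##, and the correlation disappears once we restrict attention to a narrow band of ##\lambda## values.

```python
import random

random.seed(0)

# Toy model (numbers invented for illustration): each twin pair shares a
# common cause lam; given lam, each twin is independently "good" at
# basketball with the same probability.
def sample_pair():
    lam = random.random()            # shared causal factor
    p_good = 0.2 + 0.6 * lam         # skill probability given lam
    a = random.random() < p_good     # twin 1 good?
    b = random.random() < p_good     # twin 2 good?
    return lam, a, b

pairs = [sample_pair() for _ in range(200_000)]

p_a = sum(a for _, a, _ in pairs) / len(pairs)
p_ab = sum(a and b for _, a, b in pairs) / len(pairs)
print(f"P(both good) = {p_ab:.3f}  vs  P(good)^2 = {p_a ** 2:.3f}")

# Controlling for lam (here: a narrow bin of lam values) removes the correlation.
sel = [(a, b) for lam, a, b in pairs if 0.50 < lam < 0.55]
pa = sum(a for a, _ in sel) / len(sel)
pb = sum(b for _, b in sel) / len(sel)
pab = sum(a and b for a, b in sel) / len(sel)
print(f"Given lam: P(A,B) = {pab:.3f}  vs  P(A)P(B) = {pa * pb:.3f}")
```

The first printout shows the unconditional correlation; the second shows that, within a fixed-##\lambda## slice, the joint probability factors to within sampling noise.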
 
  • #107
stevendaryl said:
Factorizability is not particularly about locality, but locality dictates what can go into the common factors.

An example that maybe illustrates the issue of factorizability is a pair of correlated coins. You have two identical coins. Examined separately, they seem unremarkable---they each seem to have a 50/50 chance of producing heads or tails when flipped. But they have a remarkable correlation: No matter how far separated the two coins are, the ##n^{th}## flip of one coin always produces the same result as the ##n^{th}## flip of the other coin. We can characterize the situation by:
  1. ##P_1(H) = \frac{1}{2}, P_1(T) = \frac{1}{2}##
  2. ##P_2(H) = \frac{1}{2}, P_2(T) = \frac{1}{2}##
  3. ##P(H, H) = \frac{1}{2}, P(H, T) = 0, P(T, H) = 0, P(T, T) = \frac{1}{2}##
If the coins were uncorrelated, then the probability of both giving a result of ##H## would be the product of the individual probabilities, 1/4. Instead, it's 1/2.

So the probabilities don't factor:
##P(H,H) \neq P_1(H) P_2(H)##

Reichenbach's common cause principle would tell us that there is something funny going on with these coins. It would suggest that immediately prior to flipping the coins for the ##n^{th}## time, there is some hidden state information affecting the coins, influencing the results of one or the other or both flips. In other words, there is some state variable ##\lambda_n## such that if we knew the value of ##\lambda_n##, then we could predict the result of the ##n^{th}## coin flip.

This is not a locality assumption. A priori, ##\lambda_n## might be the conjunction of conditions in the neighborhoods of both coins.

Of course, this toy example doesn't violate Bell's inequality, because there actually is a "hidden variable" explanation for the correlations. For example, we could propose that the result of the ##n^{th}## coin flip is determined by the binary expansion of some fixed real number such as ##\pi##. That would explain the correlations without the need for nonlocal interactions.
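That hidden-variable story is easy to simulate. A minimal sketch follows, using a fixed pseudorandom bit string as a stand-in for the binary expansion of ##\pi## (the particular source of the bits doesn't matter; what matters is that it was fixed before the coins were separated):

```python
import random

# Local "hidden variable" account of the correlated coins: both coins
# carry the same pre-agreed bit string lam (a stand-in for, say, the
# binary expansion of pi), fixed before the coins were separated.
rng = random.Random(1234)
n_flips = 100_000
lam = [rng.randrange(2) for _ in range(n_flips)]  # shared hidden variable

coin1 = lam          # nth flip of coin 1 reads off bit n
coin2 = list(lam)    # nth flip of coin 2 reads off the same bit n

p1_h = sum(coin1) / n_flips
p2_h = sum(coin2) / n_flips
p_hh = sum(x == 1 and y == 1 for x, y in zip(coin1, coin2)) / n_flips

# Each coin looks fair on its own, but the joint does not factor:
print(f"P1(H) = {p1_h:.3f}, P2(H) = {p2_h:.3f}, P(H,H) = {p_hh:.3f}")
# P(H,H) ~ 1/2, not P1(H)*P2(H) ~ 1/4 -- with no nonlocal interaction.
```

Knowing ##\lambda_n## (bit ##n## of the shared string) lets you predict both flips exactly, which is just Reichenbach's principle at work.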
 
  • #108
stevendaryl said:
It seems to me that there are two steps involved. One, having nothing to do with locality, is Reichenbach's Common Cause Principle (whether or not Bell intended this). It's the assumption that if two things are correlated, then there exists a "common cause" for both. I gave the example earlier of twins: You randomly select a pair of 15-year-old twins out of the population, and then you separately test them for their ability to play basketball. Doing this for many pairs, you will find (probably--I haven't done it) that their abilities are correlated. The probability that they both are good at basketball is unequal to the square of the probability that one of them is good at basketball. Reichenbach's Common Cause Principle would imply that there is some common causal factor affecting both twins' basketball-playing abilities. Maybe it's genetics, maybe it's parenting style, maybe it's where they live, maybe it's what school they went to, etc. If we let ##\lambda## be the collection of all such causal factors, then it should be the case that, controlling for ##\lambda##, there is no correlation between twins' basketball-playing ability.

To me, that's where factorizability comes in. It doesn't have anything to do with locality, yet, because the common factors might conceivably include something happening on a distant star a billion light-years away. Locality is the additional assumption that the common causal factors ##\lambda## must be in the intersection of the backwards light cones of the two tests of the twins' basketball-playing ability.

Factorizability is not particularly about locality, but locality dictates what can go into the common factors.

I understood Bell's (2) - factorizing - as being his attempt to say that the outcome of A does not depend on the nature of a measurement at B. I also read it as saying there are initial conditions common to both, not so different than what you say. And I think in all cases, those initial conditions occur prior to the measurements of A and B (by unstated assumption).
 
  • #109
DrChinese said:
In the referenced paper, the (R3) requirement is:
There is some (“hidden”) variable λ that influences the outcome in a probabilistic way, as represented by the probability P(A, B|a, b, λ).

But for Bell's argument to work out, it really requires all three of these - this is usually ignored, but to me it is the crux of the realism assumption:

P(A,B|a,b,λ)
P(A,C|a,c,λ)
P(B,C|b,c,λ)

We are assuming the existence of a counterfactual.

A counterfactual event or a counterfactual probability?

Is assuming the existence of the probability of an event the same concept as assuming the existence of the event itself? - or assuming the "counterfactual" existence of the event?

There are two different interpretations of physical probability. One interpretation is that a unique event that will or will not occur at time t has a probability associated with it that is "real" before time t and becomes either 0 or 1 at time t. The other interpretation is that the probability of such an event is only "real" in the sense that there is "really" a large collection of "identical" (to a certain level of detail in their description) events that will or will not occur at time t, and the probability is (really) a statistical property of the outcomes of that collection of events.

(It seems to me that both interpretations lead to hopeless logical tangles!)
 
  • #110
DrChinese said:
I understood Bell's (2) - factorizing - as being his attempt to say that the outcome of A does not depend on the nature of a measurement at B. I also read it as saying there are initial conditions common to both, not so different than what you say. And I think in all cases, those initial conditions occur prior to the measurements of A and B (by unstated assumption).

Okay, but unless you already have a complete set of causal factors, then locality does not imply factorizability. To go back to my twin basketball players example, let's make up the following binary variables: ##A##: the first twin is good at basketball. ##a##: the first twin makes his first basket attempt in the tryouts. ##B##: the second twin is good at basketball. ##b##: the second twin makes his first basket attempt.

I'm guessing that ##P(A, B | a, b) \neq P(A | a) P(B | b)##. Knowing that the first twin made his first basket attempt might very well tell you something about whether the second twin is good at basketball. But that failure to factor doesn't mean that anything nonlocal is going on. It means that you haven't identified all the causal factors.
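A quick simulation bears this out. In the hypothetical toy model below (all the conditional probabilities are invented for illustration), both "good at basketball" and "makes the first attempt" depend on a shared ##\lambda## that is deliberately left out of the conditioning:

```python
import random

rng = random.Random(7)

# Toy model (probabilities invented for illustration): both "good at
# basketball" (A, B) and "makes the first basket attempt" (a, b) depend
# on shared causal factors lam, which we leave out of the conditioning.
def sample():
    lam = rng.random()                       # shared causal background
    A = rng.random() < 0.2 + 0.6 * lam       # first twin good at basketball
    B = rng.random() < 0.2 + 0.6 * lam       # second twin good at basketball
    a = rng.random() < (0.8 if A else 0.3)   # first twin makes first attempt
    b = rng.random() < (0.8 if B else 0.3)   # second twin makes first attempt
    return A, B, a, b

data = [sample() for _ in range(300_000)]

sel_ab = [(A, B) for A, B, a, b in data if a and b]
p_AB_given_ab = sum(A and B for A, B in sel_ab) / len(sel_ab)

sel_a = [A for A, B, a, b in data if a]
p_A_given_a = sum(sel_a) / len(sel_a)
sel_b = [B for A, B, a, b in data if b]
p_B_given_b = sum(sel_b) / len(sel_b)

# Conditioning on (a, b) alone does not factor the joint probability,
# because lam was left out -- yet nothing nonlocal is going on.
print(f"P(A,B|a,b)   = {p_AB_given_ab:.3f}")
print(f"P(A|a)P(B|b) = {p_A_given_a * p_B_given_b:.3f}")
```

The two printed numbers differ, exactly because the conditioning variables ##a, b## are not a complete set of causal factors.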
 
  • #111
From the article:
(R4) Every experiment has an unambiguous outcome, and records and memories of that outcome agree with what the outcome was at the space-time location of the experiment.
The notion here is that the "unambiguous outcome", as it applies to the experiment, is that the "result" is either +1 or -1, and not something probabilistic. As applied to QM, it means that the result has "collapsed" to a certainty. For QM, I don't think that's a given.

On the other hand, most of us are willing to accept this as a practical reality. If we ignore the "unambiguous" part, what we have is the basis for science, scientific method, and scientific discovery.

On another point, I think that it is important to note that in actual experiments, the result is +1, -1, or not detected. In any situation where the Bell Inequality is being tested, the experimenter needs to verify that the "not detected" case is not so large as to ruin Bell's arithmetic. Otherwise, your hidden variable can be used to select which particles are easiest to detect with a given measurement angle.
 
  • #112
Stephen Tashi said:
A counterfactual event or a counterfactual probability?

There is no event, certainly. It is an expression of realism. The realist claims that there is reality independent of the act of observation, and the results at A are independent of the nature of a measurement on B. Because every possible measurement result on A can be predicted in advance, together these *imply* that every possible measurement result (on A) pre-exists. Similar logic applies to B. Therefore, every possible combination of A & B - measured at any angles - must be counterfactually real, i.e. simultaneously real. That is the idea of an objective reality.

And yet, clearly Bell shows that is not possible.
 
  • #113
Demystifier said:
A large portion of physicists thinks that Bell's theorem shows that reality does not exist. Another large portion of physicists thinks that reality is not an assumption of Bell's theorem, so that Bell's theorem just proves nonlocality, period. A third large portion of physicists thinks that both reality and locality are assumptions of Bell's inequalities, so that the Bell theorem proves that either reality or locality (or both) are wrong. So who is right?

I don’t see where the problem is if one avoids the term “reality” which is charged with a lot of cherished philosophical beliefs. One should simply use the term "objective local theory" as, for example, done by A. J. Leggett in “Testing the limits of quantum mechanics: motivation, state of play, prospects” (J. Phys.: Condens. Matter 14 (2002) R415–R451):

As is by now very widely known, in an epoch-making 1964 paper the late John Bell demonstrated that under such conditions the two-particle correlations predicted by QM are incompatible with a conjunction of very innocuous and commonsensical-looking postulates which nowadays are usually lumped together under the definition of an ‘objective local’ theory; crudely speaking, this class of theories preserves the fundamental postulates of local causality in the sense of special relativity and a conventional concept of the ‘arrow’ of time, and in addition makes the apparently ‘obvious’ assumption that a spatially isolated system can be given a description in its own right. The intuitive plausibility (to many people) of the class of objective local theories is so high that once Bell had demonstrated that under suitable conditions (including the condition of space-like separation) no theory of this class can give experimental predictions which coincide with those made by QM, a number of people, including some very distinguished thinkers, committed themselves publicly to the opinion that it would be QM rather than the objective local postulates which would fail under these anomalous conditions.
 
  • #114
akvadrako said:
I don't have much to say about the other points, so I'll just comment on this one. How could a mixed state not imply multiple copies of an observer, given unitary evolution? It would seem to require that the observer is both entangled with a qubit representing a future measurement and not entangled with it.
A mixed state can result from simple classical ignorance of a pure state source which may pump out, say, one of four pure states. It would be described by a mixed state due to the classical ignorance, but this has nothing to do with entanglement or multiple copies of the observer, i.e. even in Many Worlds in such a case there wouldn't be multiple copies. It's the difference between a proper and improper mixture.

akvadrako said:
In more general terms, I would say SLU (self-locating uncertainty) always applies to all observers, because there is a lot about their environment they are uncertain about.
I don't see how in this case. The Minkowski observer sees the vacuum state ##\rho_{\Omega}## as a pure state, a Rindler boosted observer sees it as a mixed state. In this case there is no environment and the "mixture" is unrelated to post-measurement entanglement.
 
  • #115
Demystifier said:
My translation of this is the following: Sure, there is objective reality, but it's just not described by (QBist) QM. The things which are described by QM do not involve objective reality. Objective reality, since it exists, is non-local as proved by Bell, but QM as a theory with a limited scope is a local theory.
Bell's theorem doesn't prove reality is non-local, that's only one way out of the theorem. As I mentioned above retrocausal or acausal theories are another way out.

Also Fuchs explicitly thinks nature is local, as does any of the rest of the QBist authors I've seen talks from. Fuchs even has a cartoon of non-locality he calls the "tickle tickle world" (see @18:30):


Demystifier said:
Regarding this, I think there are two types of QBists. One type says that there is no ##\lambda## in Nature. Those deny the existence of objective reality. Another type says that there is objective reality, so there is ##\lambda## in Nature, but there is no ##\lambda## in a specific theory of Nature that we call QBist QM.
Everyone I've seen is the former. And it's not so much denying objective reality as denying that reality is fully mathematizable. In their view, there is a world out there, it's just not amenable to a complete mathematical specification. Finding out why that is and what that means is sort of what Fuchs intends as the future for QBism.
 
  • #117
DarMM said:
Bell's theorem doesn't prove reality is non-local, that's only one way out of the theorem. As I mentioned above retrocausal or acausal theories are another way out.

In a retrocausal or acausal contextual theory, the context is formed from a quantum system at different points in spacetime. These would not be simultaneous as you would expect either in a conventional local classical theory, or in a non-local theory.

As a result, there is no counterfactual scenario. So the realistic assumption in Bell is explicitly rejected.
 
  • #118
stevendaryl said:
Well, I don't have Bell's paper in front of me, so that doesn't help. However, Wikipedia has derivations of the Bell inequality and the related CHSH inequality.

I don't know what 14a and 14b refer to. I see this paper of Bell's, posted by Dr. Chinese: http://www.drchinese.com/David/Bell_Compact.pdf
but it doesn't have a 14a and 14b.
Well, I think that Einstein et al were reasoning along the lines of: if it is possible, by measuring a property of one particle, to find out the value of a corresponding property of another, far distant particle, then the latter property must have already had a value. Specifically, Alice's measuring her particle's spin along the z-axis immediately tells her what Bob will measure for the spin of his particle along the z-axis. (EPR was originally about momenta, rather than spins, but the principle is the same.) So to EPR, this means that either (1) Alice's measurement affects Bob's measurement (somehow, Bob's particle is forced to be spin-down along the z-axis by Alice's measurement of her particle), or (2) Bob's particle already had the property of being spin-down along the z-axis, before Alice even performed her measurement.

So EPR's "elements of reality" when applied to the measurement of anti-correlated spin-1/2 particles would imply (under the assumption that Alice and Bob are going to measure spins along the z-axis) that every particle already has a definite value for "the spin in the z-direction". If you furthermore assume that Alice and Bob are free to choose any axis they like to measure spins relative to (I don't know if the original EPR considered this issue), then it means that for every possible direction, the particle already has a value for the observable "the spin in that direction".

Bell captured this intuition by assuming that every spin-1/2 particle produced in a twin-pair experiment has an associated parameter ##\lambda## which captures the information of the result of a spin measurement in an arbitrary direction. The functions ##A(\overrightarrow{a}, \lambda)## and ##B(\overrightarrow{b}, \lambda)## are assumed to give the values for Alice's measurement along axis ##\overrightarrow{a}## and Bob's measurement along axis ##\overrightarrow{b}##, given ##\lambda##.

So it seems to me that ##\lambda## directly captures EPR's notion of "elements of reality". ##\lambda## is just the pre-existing value of the spin along an arbitrary direction.
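To make this concrete, here is a minimal sketch of a hidden-variable model in exactly this form, with ##\lambda## a hidden angle carried by each pair. The particular response functions are an invented example, not Bell's; the point is that any such local deterministic model stays within the CHSH bound of 2, while the quantum singlet correlation ##E(a,b) = -\cos(a-b)## reaches ##2\sqrt{2}##:

```python
import math
import random

rng = random.Random(42)

# A toy local hidden-variable model in Bell's form: each pair carries a
# hidden angle lam, and the outcomes are deterministic local functions
# A(a, lam) and B(b, lam) of the setting and lam alone.
def A(a, lam):
    return 1 if math.cos(a - lam) >= 0 else -1

def B(b, lam):
    return -A(b, lam)        # builds in perfect anti-correlation at a == b

def E_model(a, b, n=200_000):
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)   # hidden variable, per pair
        total += A(a, lam) * B(b, lam)
    return total / n

def E_qm(a, b):
    return -math.cos(a - b)  # singlet-state prediction of QM

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
def S(E):
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(f"|S| for the lambda model: {abs(S(E_model)):.3f}   (local bound: 2)")
print(f"|S| for QM:               {abs(S(E_qm)):.3f}")
```

At these standard CHSH angles the ##\lambda## model sits right at the local bound of 2, while the QM expression gives ##2\sqrt{2} \approx 2.828##.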
(14a) and (14b) were introduced at post #95.
 
  • #119
DrChinese said:
In a retrocausal or acausal contextual theory, the context is formed from a quantum system at different points in spacetime. These would not be simultaneous as you would expect either in a conventional local classical theory, or in a non-local theory.

As a result, there is no counterfactual scenario. So the realistic assumption in Bell is explicitly rejected.
I've been thinking*, and there's a possible link between QBism and these views. Bear with me, because I might be talking nonsense here, and there are plenty of scare quotes because I'm not sure of the reality of various objects in these views.

In these views, say you have a classical device ##D_1## (the emitter) and another classical device ##D_2## (the detector). Just as spacetime in Relativity is given a specific split into space and time by the "context" of a given inertial observer, in these views we have a spacetimesource which is split into
  1. Space
  2. Time
  3. A conserved quantity, ##Q##
by the combined spatiotemporal contexts of those two devices.

That conserved quantity might be angular momentum, or it might be something else, depending on what ##D_1## and ##D_2## are. Then some amount of ##Q## is found at earlier times in ##D_1## and at later times in ##D_2##, not because it's transmitted, but simply because that's the "history" that satisfies the 4D constraints.

Quantum particles and fields only come in as a way of evaluating the constraint via a path integral, they're sort of a dummy variable and don't fundamentally exist as such.

So ultimately we have two classical objects which define not only a reference frame but also a contextual quantity they "exchange". This is quite interesting, because it means that if I have an electron gun and a z-axis angular momentum detector, then it was actually those two devices that define the z-axis angular momentum ##J_z## itself that they exchange, hence there is obviously no counterfactual:
"X-axis angular momentum ##J_x## I would have obtained had I measured it"
since that would have required a different device, thus a different decomposition of the spacetimesource and a completely different 4D scenario to constrain. Same with Energy and so on. ##J_z## also wasn't transmitted by an electron, it's simply that integrating over fermionic paths is a nice way to evaluate the constraint on ##J_z## defining the 4D history.

However, and here is the possible link: zooming out, the properties of the devices themselves are no different; they are simply contextually defined by other classical systems around them. "Everything" has the properties it is required to have by the surrounding context of its environment, which in turn is made of objects for which this is also true. In a sense, the world is recursively defined. Also, since an object is part of the context for its constituents, the world isn't reductive either: the part requires the whole to define its properties.

It seems to me that in such a world although you can mathematically describe certain fixed scenarios, it's not possible to obtain a mathematical description of everything in one go, due to the recursive, non-reductive nature of things. So possibly it could be the kind of ontology a QBist would like? Also 4D exchanges are fundamentally between the objects involved, perhaps the sort of non-objective view of measurements QBism wants.

Perhaps @RUTA can correct my butchering of things! :nb)

*Although this might be completely off: I've read the Relational Block World book and other papers on the view, as well as papers on retrocausal views like the Transactional interpretation, but I feel they haven't clicked yet.
 
  • #120
Who do they think they are trying to convince? We're not real. :bow:
 
  • #121
AgentSmith said:
Who do they think they are trying to convince? We're not real. :bow:

I think the concept of real is like the concept of time - it's one of those things that's hard to pin down. Time is what a clock measures - real is the common-sense idea that what we experience comes from something external to us that actually exists. All of these can be challenged by philosophers, and such definitions are often circular, but I think in physics pretty much all physicists would accept you have to start somewhere and hold views similar to the above.

For what it's worth, I think Gell-Mann and Hartle are on the right track:
https://www.sciencenews.org/blog/context/gell-mann-hartle-spin-quantum-narrative-about-reality

The above, while for a lay audience, contains the link to the actual paper.

Thanks
Bill
 
  • #122
DarMM said:
Bell's theorem doesn't prove reality is non-local, that's only one way out of the theorem. As I mentioned above retrocausal or acausal theories are another way out.
I agree about the retrocausal, but I'm not sure what you mean by acausal. Bell's theorem, especially some later versions of it, does not depend on the assumption of determinism. So I guess by acausal you mean something different from non-deterministic, but I am not sure what exactly that would be. Perhaps influences with a finite speed larger than c? That was ruled out by a theorem of Gisin.

Or perhaps by acausal you mean the idea that things just happen, without a cause? This, indeed, is very much Copenhagen in spirit. But it violates the Reichenbach common cause principle, which is at the root of all scientific explanations. So acausal in that sense is rather non-scientific in spirit, it is a form of mysticism. It's a perfectly legitimate position, of course, but one needs to say clearly what is at stake when one adopts that position.

Moreover, if we accept that acausality in this sense is a way to save locality, then Bohmian mechanics can also be interpreted as local. Particles just happen to move along those funny Bohmian trajectories, without a cause. Furthermore, in this sense, any quantitative physical theory can be interpreted as acausal. For instance, classical mechanics is acausal too; particles just happen to move along those classical trajectories, without a cause. And so on, and so on ... I don't think that such an acausal perspective helps to explain anything. I think it is worse than mysticism; it's nonsense.
 
  • #123
DarMM said:
A mixed state can result from simple classical ignorance of a pure state source which may pump out, say, one of four pure states. It would be described by a mixed state due to the classical ignorance, but this has nothing to do with entanglement or multiple copies of the observer, i.e. even in Many Worlds in such a case there wouldn't be multiple copies. It's the difference between a proper and improper mixture.

I'm a bit surprised by this disagreement. Unitary QM is deterministic, so if only one of four states will occur it must be due to initial conditions. Those different initial conditions correspond to different copies of the observer. Those copies could exist on different branches due to previous interactions but SLU doesn't depend on an environment or interaction. It just requires that indistinguishable observers are created in different locations; those could be spacetime points, quantum branches or independent pre-quantum universes.
 
  • #124
N88 said:
(14a) and (14b) were introduced at post #95.

That post does not say what they are. It does introduce the character strings "14a" and "14b", but that doesn't help by itself.
 
  • #125
DrChinese said:
The realist claims that there is reality independent of the act of observation, and the results at A are independent of the nature of a measurement on B. Because every possible measurement result on A can be predicted in advance, together these *imply* that every possible measurement result (on A) pre-exists.

I don't understand what it means for something to "pre-exist". Does the prediction of a possible measurement imply predicting a definite outcome for it? Or does such a realist only claim the existence of definite probabilities for various outcomes?

There is a common language notion of "real" that is identical to the common language notion of "actual". There is also a common language notion that statements involving hypothetical events can be "really" true. For example, if my local grocery store has tangelos today then its stock of tangelos is "real" in the sense of being "actual". If I say "If Alice goes to my local grocery store, she will find tangelos for sale" then, by common notions, this is "really" true. And it is not "really" true that "if Alice goes to my local grocery store, she will not find tangelos for sale". However, from the standard mathematical point of view, a false premise "really" implies any conclusion. So "if Alice goes to my local grocery store today then 2+2=5" is true when Alice does not go to the store. Likewise, "If Alice goes to my local grocery store today then she will not find tangelos" is true when Alice does not go to the store.

The writing of a mathematical equation does not constitute a statement until words are supplied to interpret it. The use of symbols representing conditional probabilities does not, by itself, say anything definite about a notion of truth for "if...then..." type statements that is different from the standard mathematical notion of truth about them. It seems to me that the physical notions involving "counterfactuals" require establishing some context for equations that goes beyond the purely mathematical interpretation.
 
  • #126
Stephen Tashi said:
I don't understand what it means for something to "pre-exist".

Well, suppose I take a pair of shoes and split them up, putting one in one box and sending it to Alice, and putting the other in another box and sending it to Bob. Alice and Bob don't see which shoe was sent to which person.

Alice would give the subjective probability of 50/50 for her finding a left shoe or a right shoe when she opens the box. She would give the same odds for Bob finding a left or right shoe. However, knowing how the boxes were produced, when she opens her box, she immediately knows which shoe Bob will find, even though he has not yet opened his box. So in that case, we would say that Bob's result, left or right, is pre-determined. Even though he hasn't looked, Alice knows what he will see.

The original EPR argument was that measurement of correlated particles must have a similar explanation.
 
  • #127
stevendaryl said:
Well, suppose I take a pair of shoes and split them up, putting one in one box and sending it to Alice, and putting the other in another box and sending it to Bob. Alice and Bob don't see which shoe was sent to which person.
...
The original EPR argument was that measurement of correlated particles must have a similar explanation.

So, do you define "realism" to be the belief that such hidden variables really (i.e. actually) exist?
 
  • #128
Stephen Tashi said:
...

So, do you define "realism" to be the belief that such hidden variables really (i.e. actually) exist?

In the context of EPR, the idea of realism - much as stevendaryl says - was as follows: since a measurement result of Alice could be predicted with certainty (by prior reference to Bob), there must be an element of reality to whatever led to that result. They *assumed* objective realism as part of that conclusion, specifically that Alice's result did not depend on the choice of measurement on Bob.

In the context of Bell, that idea was expressed slightly differently: that there were probabilities of outcomes at 3 different pairs of measurement angle settings, and that they were independent of each other. The angle settings being AB, BC, and AC. This matches the EPR assumption, although it is a bit looser.

You could ALSO say that there were "real" hidden variables, that is your choice. Doesn't really change much if you do. The point is that Bell showed the EPR assumption to lead to a contradiction with QM. Unless of course there were FTL influences at work.
 
  • #129
akvadrako said:
I'm a bit surprised by this disagreement
So, in the case of a proper mixture we simply have classical uncertainty. There is no more need for multiple observers or SLU than with a coin in a box that could be head or tails, it's pure standard "I don't know" uncertainty from plain Kolmogorov probability.

In the Unruh case, no observer measures or becomes entangled with anything and yet a supposedly ontic object ##\rho_\Omega## is a pure state for a Minkowski observer, but a mixed state (of simple "I don't know if the coin is heads or tails" form) for another. So we have a supposedly ontic object having some purely epistemic content in an accelerating frame.

Unless you interpret even coin tosses or classical probability in general in a multiple physical copies of the observer sense.
 
  • #130
DarMM said:
Unless you interpret even coin tosses or classical probability in general in a multiple physical copies of the observer sense.

That depends. Is there only one result possible, but the observer just isn't smart enough to figure it out? That would seem to be epistemic uncertainty without multiple observers. Though the observer being just ignorant of a result doesn't seem to have much physical relevance.

On the other hand, if there are multiple outcomes compatible with the observer's state, then I would say worlds corresponding to each outcome exist and SLU applies. In regards to your example, I looked up the Unruh effect but didn't find much relevant to this aspect. However, I don't see how the details could matter. One can imagine 4 worlds, as viewed from outsiders, each containing a stationary observer with the same state and accelerating observers with different states.
 
  • #131
Demystifier said:
I agree about the retrocausal, but I'm not sure what do you mean by acausal
So to be clear:

Retrocausal is propagation into the past light cone.

Acausal describes physics where you take a 4D chunk of spacetime with some matter in it and declare that the events that occur are those satisfying a specific constraint, given conditions on hypersurfaces at opposite ends. This basically occurs in Classical Mechanics in the least-action formalism; however, because the resulting least-action trajectory obeys the Euler-Lagrange equations, you can convert this to a 3+1D picture of what is going on, i.e. of a particle moving in response to a potential.

Acausal views of QM declare that the set of events results from a constraint different from the least-action principle, but one where the resulting set of events can't be understood in a 3+1D way, i.e. as initial conditions evolving in time under a PDE or something similar.

Reichenbach's common cause principle doesn't hold, because later events don't result from previous ones acting as their causes; rather, the set of events as a whole is selected by a constraint. However, things still have a fairly clear scientific explanation.
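[Editor's note: a minimal sketch, not from the thread, of the point above for a discretized free particle. The "events" (positions ##x_i##) are picked out by a global constraint on the whole chunk of history, namely least action with both endpoints fixed, yet the same path also satisfies a local 3+1D evolution law, the discrete Euler-Lagrange equation ##x_{i+1} - 2x_i + x_{i-1} = 0##. All names and parameter values here are illustrative choices.]

```python
# Sketch (illustrative, not from the thread): global least-action constraint
# vs. local time evolution for a discretized free particle.
import random

N, dt, m = 50, 0.1, 1.0   # grid steps, time step, mass (arbitrary choices)
x0, xN = 0.0, 2.0          # fixed boundary data at the two end hypersurfaces

def action(path):
    """Discretized free-particle action S = sum over segments of (m/2) v^2 dt."""
    return sum(0.5 * m * ((path[i + 1] - path[i]) / dt) ** 2 * dt
               for i in range(len(path) - 1))

# The straight line is the path selected by the global constraint.
straight = [x0 + (xN - x0) * i / N for i in range(N + 1)]

# Any interior perturbation (endpoints held fixed) raises the action.
random.seed(0)
for _ in range(5):
    wiggled = straight[:]
    for i in range(1, N):
        wiggled[i] += random.uniform(-0.1, 0.1)
    assert action(wiggled) > action(straight)

# The same path also obeys the local discrete Euler-Lagrange equation,
# which is why this classical case converts to a 3+1D picture.
for i in range(1, N):
    assert abs(straight[i + 1] - 2 * straight[i] + straight[i - 1]) < 1e-9
print("global constraint and local evolution select the same path")
```

In the acausal reading of QM described above, the second property is precisely what fails: the selecting constraint has no equivalent local evolution law.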
 
  • Like
Likes Demystifier and DrChinese
  • #132
stevendaryl said:
That post does not say what they are. It does introduce the character strings "14a" and "14b", but that doesn't help by itself.

PS: I might be missing your point? See [[inserts]] next:

Post # 95 says: "4. Now IF we number Bell's 1964 math from the bottom of p.197: [[starting with Bell's ]] (14), (14a), (14b), (14c), [[and finishing with Bell's]] (15): THEN Bell's realism enters between (14a) and (14b) via his use of his (1)."

That is, we facilitate discussion of Bell's key move --- (14a) to (14b) --- by properly identifying the place where it occurs: between Bell 1964:(14) and Bell 1964:(15). HTH

N88 said:
The point I seek to make is that Bell's inequality is a mathematical fact of limited validity.

1. It is algebraically false.

2. It is false under EPRB (yet Bell was seeking a more complete specification of EPRB).

3. So IF we can pinpoint where Bell's formulation departs from #1 and #2, which I regard as relevant boundary conditions, THEN we will understand the reality that Bell is working with.

4. Now IF we number Bell's 1964 math from the bottom of p.197: (14), (14a), (14b), (14c), (15): THEN Bell's realism enters between (14a) and (14b) via his use of his (1).

So the challenge for me is to understand the reality that he introduces via the relation ...

##B(b,\boldsymbol{\lambda})B(b,\boldsymbol{\lambda}) = 1. \qquad(1)##

... since this is what is used --- from Bell's (1) --- to go from (14a) to (14b).

And that challenge arises because it seems to me that Bell breaches his "same instance" boundary condition; see that last line on p.195. That is, from LHS (14a), I see two sets of same-instances: the set over ##(a,b)## and the set over ##(a,c)##. So, whatever Bell's realism [which is the question], it allows him to introduce a third set of same-instances, that over ##(b,c)##.

It therefore seems to me that Bell is using a very limited classical realism: almost as if he had a set of classical objects that he can non-destructively test repeatedly, or he can replicate identical sets of objects three times; though I am open to -- and would welcome -- other views.

Thus, from my point of view: neither nonlocality nor any weirdness gets its foot in the door: for [it seems to me], it all depends on how we interpret (1).

PS: I do not personally see that Bell's use of (1) arises from "EPR elements of physical reality." But I wonder if that is how Bell's use of his (1) is interpreted?

For me: "EPR elements of physical reality" correspond [tricky word] to beables [hidden variables] which I suspect Bell may have been seeking in his quest for a more complete specification of EPRB. However, toward answering the OP's question, how do we best interpret the reality that Bell introduces in (1) above?

Or, perhaps more clearly: the reality that Bell assumes it to be found in Bell's move from (14a) to (14b). HTH.
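[Editor's note: a small sketch, not from the thread, bearing on N88's worry about the "third set of same-instances" over ##(b,c)##. Under the hidden-variable assumption, each ##\lambda## fixes an answer ##A(x,\lambda)=\pm 1## for every setting at once, so one can simply enumerate all ##2^3## pre-assignments and check that each one satisfies Bell's (15), ##1 + P(b,c) \geq |P(a,b) - P(a,c)|##, with ##B = -A##. The variable names are illustrative.]

```python
# Sketch (illustrative): every pre-assignment of +/-1 outcomes to the three
# settings a, b, c satisfies Bell's 1964 inequality (15), with B = -A.
from itertools import product

for Aa, Ab, Ac in product([+1, -1], repeat=3):
    P_ab = Aa * (-Ab)   # product of outcomes when Alice measures a, Bob b
    P_ac = Aa * (-Ac)
    P_bc = Ab * (-Ac)
    assert 1 + P_bc >= abs(P_ab - P_ac)
print("all 8 pre-assignments satisfy Bell's inequality")
```

Since averaging over any distribution ##\rho(\lambda)## preserves the inequality (the average of absolute values bounds the absolute value of the average), it holds for the expectation values as well; no repeated non-destructive testing of one object is required, only that the three answers coexist for each ##\lambda##.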
 
  • #133
N88 said:
PS: I might be missing your point? See [[inserts]] next:

My point is that I don't know what "14a" and "14b" are.
 
  • #134
Lord Jestocost said:
"From a classical standpoint we would imagine that each particle emerges from the singlet state with, in effect, a set of pre-programmed instructions for what spin to exhibit at each possible angle of measurement, or at least what the probability of each result should be…

From this assumption it follows that the instructions to one particle are just an inverted copy of the instructions to the coupled particle…

Hence we can fully specify the instructions to both particles by simply specifying the instructions to one of the particles for measurement angles ranging from 0 to π…
"

see: https://www.mathpages.com/home/kmath521/kmath521.htm

Thanks for this. Since the above assumption leads to an unphysical result, I prefer an alternative classicality. Let's call it "Einstein-causality" --- or [maybe better] "Einstein-classicality" after this from Bell:

Einstein argued that EPR correlations ‘could be made intelligible only by completing the quantum mechanical account in a classical way,' Bell (2004:86).

Therefore let Bell's λ denote a particle's total angular momentum, a term common to CM and QM, ..., ... . Then that particle's interaction with a polarizer is DETERMINED via [in classical terms] spin, torque and precession. So the interaction of its pairwise correlated twin is likewise DETERMINED similarly: and in a law-like fashion, the law being readily discerned.

Thus, under this "Einstein-causality" we have "correlation-at-a-distance" and the QM results delivered painlessly. More importantly: we avoid 'spooky-action', nonlocality, AAD, etc: which is what I presume QFT provides?

PS, in so far as the OP and I are interested in Bell and his assumed realism: regarding the "Einstein-causality" above, are you able to also comment on the above from the QFT point-of-view?
 
  • #135
  • Like
Likes Demystifier
  • #136
N88 said:
Please see the top two unnumbered relations on Bell (1964:198).

If you do not have it, Bell (1964) is readily and freely available online: http://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf

Okay, I've looked at that paper several times, but there were no equations labeled 14a and 14b, so I assumed you meant another paper.
 
  • #137
stevendaryl said:
Okay, I've looked at that paper several times, but there were no equations labeled 14a and 14b, so I assumed you meant another paper.

He just meant the ones following 14, before 15. I think. :smile:
 
  • #138
DrChinese said:
He just meant the ones following 14, before 15. I think. :smile:
Yes, thanks. (14a), (14b), and (14c) are Bell's three UNNUMBERED math expressions [now numbered] ... following 14, before 15.

PS: DrChinese, I would welcome your POV on Bell's transformation of (14a) to (14b). Thank you again.
 
  • #139
stevendaryl said:
Okay, I've looked at that paper several times, but there were no equations labeled 14a and 14b, so I assumed you meant another paper.

Bell is assuming that ##A(\overrightarrow{a}, \lambda)## is a function returning ##\pm 1##, and the interpretation is that if a particle has the hypothesized hidden variable ##\lambda##, and you measure the spin along direction ##\overrightarrow{a}## (or polarization), then you will get the result ##A(\overrightarrow{a}, \lambda)##.

So ##A(\overrightarrow{a}, \lambda) A(\overrightarrow{b}, \lambda) - A(\overrightarrow{a}, \lambda) A(\overrightarrow{c}, \lambda)## can be written:

1. ##A(\overrightarrow{a}, \lambda) A(\overrightarrow{b}, \lambda) - A(\overrightarrow{a}, \lambda) A(\overrightarrow{c}, \lambda)##
##= A(\overrightarrow{a}, \lambda) (A(\overrightarrow{b}, \lambda) - A(\overrightarrow{c}, \lambda))##

At this point, we can use that ##A(\overrightarrow{b}, \lambda) = \pm 1##, which implies that ##A(\overrightarrow{b}, \lambda) A (\overrightarrow{b}, \lambda) = 1##. So we can rewrite

##A(\overrightarrow{c}, \lambda) = A(\overrightarrow{b}, \lambda) A (\overrightarrow{b}, \lambda) A(\overrightarrow{c}, \lambda)##

since the first two factors multiplied together yield +1. So we can substitute this for ##A(\overrightarrow{c}, \lambda)## on the right-hand side of equation 1 to get:
2. ##A(\overrightarrow{a}, \lambda) A(\overrightarrow{b}, \lambda) - A(\overrightarrow{a}, \lambda) A(\overrightarrow{c}, \lambda)##
##= A(\overrightarrow{a}, \lambda) (A(\overrightarrow{b}, \lambda) - A(\overrightarrow{b}, \lambda) A (\overrightarrow{b}, \lambda) A(\overrightarrow{c}, \lambda))##
##= A(\overrightarrow{a}, \lambda) A(\overrightarrow{b}, \lambda)(1 - A(\overrightarrow{b}, \lambda) A(\overrightarrow{c}, \lambda))##
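[Editor's note: the algebraic step above can be checked by brute force, since each ##A## takes only the values ##\pm 1##. A short sketch, not from the thread:]

```python
# Sketch (illustrative): verify stevendaryl's identity for all +/-1 values.
# Because A(b, lambda) = +/-1 implies A(b)A(b) = 1, both sides of equation 2
# agree for every possible assignment of outcomes.
from itertools import product

for Aa, Ab, Ac in product([+1, -1], repeat=3):
    lhs = Aa * Ab - Aa * Ac
    rhs = Aa * Ab * (1 - Ab * Ac)
    assert lhs == rhs
print("identity holds for all +/-1 assignments")
```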
 
  • #140
N88 said:
Therefore let Bell's λ denote a particle's total angular momentum, a term common to CM and QM, ..., ... . Then that particle's interaction with a polarizer is DETERMINED via [in classical terms] spin, torque and precession. So the interaction of its pairwise correlated twin is likewise DETERMINED similarly: and in a law-like fashion, the law being readily discerned.

Thus, under this "Einstein-causality" we have "correlation-at-a-distance" and the QM results delivered painlessly. More importantly: we avoid 'spooky-action', nonlocality, AAD, etc: which is what I presume QFT provides?

I think that QFT is a red herring. QFT explains (or fails to explain) EPR in the same way that QM does.

But I don't understand what you're saying here. Let's go through again why Einstein thought there was spooky action at a distance (or hidden variables):
  • We produce a pair of correlated particles.
  • Assume that Alice measures some property of her particle before Bob measures the corresponding property of his particle.
  • For the properties that Bell was discussing, there are two possible results, which we can map to ##\pm 1##.
  • Immediately before Alice performs her measurement, she would give the subjective probability of Bob's two results as 50/50.
  • Immediately after she performs her measurement, she knows with certainty what Bob's result will be (assuming he measures spin or polarization along the same axis that Alice did).
So Alice's subjective likelihood of Bob getting +1 changed instantaneously from 50% to 100% (or 0%, whichever it is). Einstein reasoned that there were two possible explanations for this sudden change:
  1. Somehow Alice's measurement affected Bob's particle, even though it was far away. This would be "spooky action at a distance".
  2. Alternatively, maybe Bob's measurement result was pre-determined before Alice performed her measurement, and her measurement only informed her of that fact. This would be a hidden variable.
Bell's argument showed that interpretation 2 is not possible. So spooky action at a distance it is. Of course, you can argue that there are more than two possibilities, but of the two Einstein considered, spooky action at a distance seems to be the one that is not ruled out by experiment.
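[Editor's note: a concrete toy version of option 2, not from the thread, helps make the last paragraph vivid. Each pair carries a shared hidden angle ##\lambda## fixed at the source, and both outcomes are pre-determined sign functions of it. The model reproduces the perfect anticorrelation at equal settings, but its correlation is linear in the angle difference (##-1 + 2\theta/\pi##) rather than QM's ##-\cos\theta##; that gap is exactly what Bell's argument exploits. The function names are illustrative.]

```python
# Sketch (illustrative): a local hidden-variable model of the singlet pair.
import math

def E(theta, M=20000):
    """Correlation <A*B> when Alice's and Bob's settings differ by theta."""
    total = 0
    for k in range(M):
        lam = (k + 0.5) * 2 * math.pi / M          # shared hidden variable
        A = 1 if math.cos(lam) > 0 else -1          # Alice's pre-set outcome
        B = -1 if math.cos(lam - theta) > 0 else 1  # Bob's, anticorrelated
        total += A * B
    return total / M

assert abs(E(0.0) + 1.0) < 1e-3           # equal settings: perfect anticorrelation
assert abs(E(math.pi / 4) + 0.5) < 1e-2   # model gives -1 + 2*theta/pi = -0.5
# QM predicts -cos(pi/4) ~ -0.707: the local model undershoots the QM correlation.
assert abs(E(math.pi / 4)) < math.cos(math.pi / 4)
```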
 
  • Like
Likes Mentz114 and DrChinese