Relativity & Quantum Theory: Is Locality Violated?

  • Thread starter UglyDuckling
In summary, Special Relativity is not violated, because no usable information is transferred between the two spatially separated systems.
  • #141
Vanesch,

Yes, it's certainly interesting how QM can intrude (or at the least, walk hand in hand) with some forms of philosophy.

I myself lean towards property dualism. :)

I cannot prove that a stone has qualia... but I also can't prove that an invisible spaghetti monster lives on the planet Uranus. So I choose to assume that neither is the case. :)

Thanks for the replies though. This is a 'pet topic' of mine.
 
  • #142
Hurkyl said:
No, this is my theory. In my theory, the probability distribution is a fundamental physical constant.

The probability distribution *of what*? You'll maybe say something like "the outcomes"... but what *are* these? What are they *made of*? What's actually *going on* in this scenario?

Is it really such a mysterious surprise that there is a difference between saying "X happens" and providing a *theory* about X?


And besides, this is exactly what you were asking for: you were asking for a theory about "irreducibly-random events", and further clarified by stating "a universe where the outcomes aren't based on any micro-hidden-details, but are genuinely irreducibly random".

A theory can provide some account of beables involved in the production of some "observable", and still include irreducibly random events. I'm not asking for a deterministic theory -- just some kind of theory with some kind of state descriptions and some kind of dynamics.


So, in fact, the very conditions you've put forth require that there are no other physical variables that affect the outcomes of the coin flips -- the only property these coins have is their joint probability distribution!

Maybe the problem is that the example is too silly. But the main point here is that my conditions *don't* "require that there are no other physical variables that affect the outcomes..." Indeed, I don't see how you can propose a theory without talking about, well, *some* variables. An example would be orthodox QM: the collapse postulate involves irreducible randomness, yet it is part of a theory which proposes a definite state description and definite dynamics. So what you need is an example of something like that -- but something that doesn't violate Bell Locality.



This seems, to me, to completely fulfill the requirements you set forth. If you disagree, it would help greatly if you could put forth any theory that had irreducibly-random events in a universe where the outcomes aren't based on any micro-hidden-details, but are genuinely irreducibly random.

Orthodox QM w/ the collapse postulate. Only, make the collapse occur along the future light cone out from the measurement event, so it doesn't violate Bell Locality. That would be a local theory w/ irreducible randomness. (But of course it isn't empirically viable.)
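A minimal way to write down what I mean (my own sketch notation, with e the measurement event and P̂ the projector onto the observed outcome): the state ascribed at a spacetime point x is

$$\text{state at } x \;=\; \begin{cases} \hat{P}\psi / \lVert \hat{P}\psi \rVert, & x \in J^{+}(e),\\ \psi, & \text{otherwise,} \end{cases}$$

so the "collapse front" is the future light cone J+(e) itself -- a Lorentz invariant structure, which is why no frame-dependent simultaneity surface is needed.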



I can, right now, create two devices that will display either "H" or "T" when a button is pressed. I can give these to Alice and Bob, and they can press the buttons in whatever way they want, and when they compare notes, they will find they both got the same sequences of "H"s and "T"s.

Sure you can do this. But what does this have to do with Bell Locality? Perhaps you've forgotten that Bell Locality speaks not of subjective probabilities but of probabilities assigned by a theory. And it doesn't count to say "my theory is that there's nothing actually going on" -- i.e., "my theory is that there is no theory."
 
  • #143
vanesch said:
I think you are here thinking of "a mechanism that can explain the randomness"... and as such should not be intrinsically random itself; in other words, a deterministic mechanism.

You're half right. I'm thinking of what might as well be called "a mechanism that can explain"... but the point is not to explain the randomness (in terms of an underlying deterministic mechanism) but just to explain the results. This is nothing special. It's called having a theory about what's going on (as opposed to just shrugging and saying "this particle detector over here ended up firing at a certain moment; oh well; I guess that's just an irreducible inexplicable fact").


What's wrong with, say, having "stochastic" classical mechanics, where you have to add, at spacelike separated events, intrinsically random variables to the local equations of motion, but in such a way that these random variables are correlated?

I have no objection to injecting random variables into the dynamics. What I object to is picking a random number and then simultaneously injecting it into the dynamics of two spatially separated regions.


Of course, you cannot find any DETERMINISTIC mechanism that can explain this correlation from an underlying "deterministic mechanics with lack of knowledge of the initial state"; but what if the intrinsically random variables are FUNDAMENTAL? If you have no underlying mechanism, how are you going to require any statistical independence?

Maybe this is what's causing the disagreement here. You seem to think that "irreducible randomness" means there's "no underlying mechanism." I'm willing to accept candidate descriptions of underlying mechanisms (i.e., *theories*!) which involve irreducible randomness.

For example, I think Orthodox QM is perfectly well a "theory" in the sense I'm insisting on. It provides an account of what's going on behind the scenes to produce something like a detector firing. And what I don't like about that theory is *not* that half of its dynamics isn't deterministic. The "problem" (in this context) is that this half of the dynamics is nonlocal. It says that an irreducibly random event at one place has an instantaneous effect at other places.

Do people think that the whole notion of "causal influence" is absent/meaningless in a non-deterministic theory? Maybe that's the source of the trouble here...
 
  • #144
ttn said:
The "problem" (in this context) is that this half of the dynamics is nonlocal. It says that an irreducibly random event at one place has an instantaneous effect at other places.

Do people think that the whole notion of "causal influence" is absent/meaningless in a non-deterministic theory? Maybe that's the source of the trouble here...

The problem is that "orthodox quantum mechanics" doesn't say that, and does have a perfectly good relativistic notion of causality. It says that you only see the correlation after the fact, and that the correlation isn't because the one event caused the other, because there is no causality between spacelike separated events. It all hangs together and gives correct predictions, but you have to take to heart the old saying "Correlation is not causation."
 
  • #145
selfAdjoint said:
The problem is that "orthodox quantum mechanics" doesn't say that, and does have a perfectly good relativistic notion of causality. It says that you only see the correlation after the fact, and that the correlation isn't because the one event caused the other, because there is no causality between spacelike separated events. It all hangs together and gives correct predictions, but you have to take to heart the old saying "Correlation is not causation."

In orthodox QM, the state description of one object (or, in one region) *changes* as a result of a measurement on some other object (in some other region). One can of course always say "Oh, you shouldn't take those state descriptions literally." Well, OK. But not taking the descriptions as literal does not change the fact that *if* those descriptions are literally correct ("complete") descriptions of the real world, then the real world contains relativity-violating causal influences. That's just what the theory *says*, and frankly I find it ridiculous that so many otherwise reasonable people are able to get lulled into the cognitive trap of simply denying the literal truth of the theory as a way of avoiding this implication. The fact is, orthodox QM provides a candidate description of the world and dynamics for it, and this combination -- this theory -- is nonlocal (in the specific sense of violating Bell Locality).

A given person may or may not be troubled by this fact, depending on whether or not he thinks this theory is true. (If you don't think OQM is true, then there's no reason to worry about its conflicting with relativity.) But then, you have to face the obvious next question: what theory *is* true? Anybody who is not interested in asking and answering this question is, in my opinion, no physicist.

As to "correlation is not causation", the point of this old saying is that you can't infer, from the mere observed correlation of A and B, that A causes B. But this has precisely nothing to do with what we're talking about here. Yes, it's notoriously difficult to validly infer causal relationships from observation. But the whole beauty of Bell Locality is that we don't have to try to do this, because the criterion isn't *about* observed correlations (or "subjective probabilities" as I keep saying). It's about the predicitons of *theories*. And it is notoriously UN-difficult -- notoriously *easy* -- to validly infer causal relationships from *theories*, because (by virtue of what it means to be a theory in the first place) theories simply *tell* us what causal relationships exist.

This is an absolutely fundamental point to this discussion, and there's no point talking about anything else until this is clear. Nobody is saying that you can look at some empirical fact (like the fact that Alice's and Bob's outcomes are correlated in a certain way) and infer that there is some spooky superluminal causation going on. The claim is that you can look at a *candidate theory* (like OQM or Bohmian Mechanics or whatever) and infer that, according to this theory, there is or is not spooky superluminal causation going on. To be confused about this point is to be confused about the distinction between Bell Locality and Signal Locality. (A violation of the latter *can* be directly inferred from observed correlations; a violation of the former cannot.)
 
  • #146
ttn said:
In orthodox QM, the state description of one object (or, in one region) *changes* as a result of a measurement on some other object (in some other region).

Orthodoxly, the state function doesn't exist in any region of spacetime. What exists is a measurement, the result of a preparation and an action. Those are events in spacetime. And the time ordering of spacelike separated events is indeterminate. So what you're saying here violates both QM and relativity.
 
  • #147
selfAdjoint said:
Orthodoxly, the state function doesn't exist in any region of spacetime. What exists is a measurement, the result of a preparation and an action. Those are events in spacetime. And the time ordering of spacelike separated events is indeterminate. So what you're saying here violates both QM and relativity.

This point has already been covered. There are simply two possible versions of "orthodox QM". One (perhaps closer to von Neumann's ideas, although Bohr's insistence on the completeness doctrine makes me think this is close to Bohr's views too) is that the wave function provides a literal and complete description of physical states, with two different dynamical laws depending on whether or not a measurement is being made. The other possible view is to take the whole quantum formalism as an empty black-box algorithm for predicting measurement outcomes. This latter may or may not constitute a theory (in the relevant sense) depending on whether its advocate is claiming *ignorance* about real goings-on at the sub-microscopic level, or, rather, is claiming that there are no such goings-on (i.e., that *the only things that exist are readings on measurement devices* -- a view that to me is too preposterous to take seriously, since it contradicts pretty much everything discovered by scientists in the last 200 years).
 
  • #148
ttn said:
I have no objection to injecting random variables into the dynamics. What I object to is picking a random number and then simultaneously injecting it into the dynamics of two spatially separated regions.

Why?


Maybe this is what's causing the disagreement here. You seem to think that "irreducible randomness" means there's "no underlying mechanism." I'm willing to accept candidate descriptions of underlying mechanisms (i.e., *theories*!) which involve irreducible randomness.

Yes, but you place extra limits on how this irreducible randomness can be applied. For instance, what's indeed wrong with using the SAME (or correlated) *intrinsic random numbers* at two spatially separated events? If these numbers have no ontology attached to them, and are fundamental quantities, I don't see why this should be forbidden. Since there's no *mechanism* in them, there's no "causality" involved in this. It "just happens that way". This is the essence of an intrinsically fundamental stochastic theory, no? "Things just happen this way".

For example, I think Orthodox QM is perfectly well a "theory" in the sense I'm insisting on. It provides an account of what's going on behind the scenes to produce something like a detector firing. And what I don't like about that theory is *not* that half of its dynamics isn't deterministic. The "problem" (in this context) is that this half of the dynamics is nonlocal. It says that an irreducibly random event at one place has an instantaneous effect at other places.

Well, this is only true in the case that one assigns some reality to the concept of the wavefunction; not if it is an "algorithm to calculate probabilities", right?

Do people think that the whole notion of "causal influence" is absent/meaningless in a non-deterministic theory? Maybe that's the source of the trouble here...

I think that there is a total absence of the notion of causal influence in THE STOCHASTIC ELEMENT of an *intrinsically* non-deterministic theory. That doesn't mean that the theory as a whole does not have elements of causality to it: the "deterministic part" (the evolution equations of the ontologically postulated objects) does have such a thing, of course. But the "random variables" that are supposed to describe the intrinsic randomness of the whole don't have - a priori - to obey any kind of rules, no?
 
  • #149
ttn said:
The probability distribution *of what*? You'll maybe say something like "the outcomes"... but what *are* these? What are they *made of*? What's actually *going on* in this scenario?
In my theory, the outcomes "H" and "T" are fundamental things. They cannot be analyzed any further. The thing I call a "probability distribution" is a fundamental quantity.

The term "probability" is justified for the following reasion:

Suppose we had many pairs of coins whose "joint probability distribution" factors into the "probability distributions" on the pairs. (Where, again, P(HH) = P(TT) = 1/2 and P(TH) = P(HT) = 0 for the two coins in a pair)

Then, this "probability distribution" gives a value of nearly one to the set of outcomes where roughly half of the pairs of coins flip to HH and rest flip to TT.

So, my "probability distribution" really does give probabilities -- it satisfies the frequentist interpretation of probabilities.


ttn said:
Is it really such a myserious surprise that there is a difference between saying "X happens" and providing a *theory* about X?
...
Maybe the problem is that the example is too silly.
And yet, there is a theory that says nothing more than "X happens". The class of theories is very broad, and includes lots of dumb, ridiculous, uninteresting, unrealistic, and impractical things.

And yet, they're all still theories.


I'm not asking for a deterministic theory -- just some kind of theory with some kind of state descriptions and some kind of dynamics.
Fine -- flesh the rest out however you want. Let's make the coins Special Relativistic point particles, each of which can be in one of three fundamental states: "H", "T", and "unflipped". They initially start out as "unflipped", and via an interaction called "flipping" can transition to "H" or "T". The transition is nondeterministic, and is governed by the joint probability distribution P(TT) = P(HH) = 1/2, P(TH) = P(HT) = 0. This joint probability distribution is a fundamental constant of the theory.


More details would just obscure the point -- the theory is simple and clear. It doesn't have messy details to work through and understand, and it's manifestly Lorentz invariant.
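If it helps, here is a minimal sketch of the theory as a simulation (the class and function names are mine, just placeholders; the shared joint_outcome slot is simulation bookkeeping for the fundamental joint distribution, not a new beable of the theory):

```python
import random

class CoinPair:
    """Two coins created together. The joint distribution over their flip
    outcomes is a fundamental constant of the theory:
    P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0."""
    def __init__(self):
        self.joint_outcome = None  # drawn on the first flip of either coin

class Coin:
    """A Special Relativistic point particle with one internal state:
    "unflipped", "H", or "T"."""
    def __init__(self, pair):
        self.pair = pair
        self.state = "unflipped"

    def flip(self):
        """A "flipping" interaction: nondeterministic transition to H or T."""
        if self.pair.joint_outcome is None:
            # the single irreducibly random draw governing the whole pair
            self.pair.joint_outcome = random.choice(["H", "T"])
        self.state = self.pair.joint_outcome

pair = CoinPair()
alice_coin, bob_coin = Coin(pair), Coin(pair)
alice_coin.flip()  # Alice flips in her lab...
bob_coin.flip()    # ...Bob flips in his, possibly spacelike separated
print(alice_coin.state, bob_coin.state)  # always "H H" or "T T"
```

Note that the sketch has to record the joint draw in one shared place -- which is, of course, exactly the feature under dispute below.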

If you really needed me to, I'm sure I could flesh this out to work with many coins and additional interactions, such as ways to revert a coin back to the "unflipped state" and a pairwise interaction on coins called "entanglement", but that would just obscure what's going on, and I don't think I'd be doing anything more than a nondeterministic variation of classical mechanics without the axiom that spatially separated probabilities are statistically independent.


ttn said:
Sure you can do this. But what does this have to do with Bell Locality?
It has to do with what you had asked in the paragraph I quoted.




Incidentally, another way to go is to assert that observations are random variables. And not the silly stuff we learned as kids: I'm saying that observations are like the random variables defined in mathematical statistics.

Observations are probability distributions on a space of outcomes, nothing more. In particular, observations never actually "take on" the value of a particular outcome.
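(For reference, the textbook object: a random variable is a measurable function X from a probability space to a space of outcomes,

$$X : (\Omega, \mathcal{F}, P) \to (S, \Sigma), \qquad P_X(E) = P\big(X^{-1}(E)\big) \text{ for } E \in \Sigma.$$

What I'm proposing is to keep only the distribution P_X as the physical object, and never to speak of a realized value X(ω).)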
 
  • #150
ttn said:
This point has already been covered. There are simply two possible version of "orthodox QM". One (perhaps closer to von Neumann's ideas, although Bohr's insistence on the completeness doctrine makes me think this is close to Bohr's views too) is that the wave function provides a literal and complete description of physical states

You say you have been over it, but you keep coming back to the same issues. What does "literal and complete" mean? A map can provide a complete description of a town; every street, every address, is there; but the map is still not the town, and the space of quantum states is still not in spacetime.
 
  • #151
Hurkyl said:
Fine -- flesh the rest out however you want. Let's make the coins Special Relativistic point particles, each of which can be in one of three fundamental states: "H", "T", and "unflipped". They initially start out as "unflipped", and via an interaction called "flipping" can transition to "H" or "T". The transition is nondeterministic, and is governed by the joint probability distribution P(TT) = P(HH) = 1/2, P(TH) = P(HT) = 0. This joint probability distribution is a fundamental constant of the theory.

More details would just obscure the point -- the theory is simple and clear. It doesn't have messy details to work through and understand, and it's manifestly Lorentz invariant.

You're filling in pointless details and still missing what's crucial. What is the *dynamics* for this theory? Where are the two coins located? Under what circumstances exactly do they make this transition from "unflipped" to either "H" or "T"?

Can you have the particles located at separate locations, and have the transitions occur under a local free choice of some experimenter, and still explain the correlations in a manifestly Lorentz invariant way?
 
  • #152
vanesch said:
Why [do I object to a random number being injected into the dynamics at two spatially separated locations]?

Because this violates what I consider a reasonable criterion of locality!




Yes, but you place extra limits on how this irreducible randomness can be applied. For instance, what's indeed wrong with using the SAME (or correlated) *intrinsic random numbers* at two spatially separated events?

Well, my "instinct" is that any such intrinsic random number (that is the output of a non-deterministic dynamics) exists at some particular place -- it comes into existence at some particular spacetime event. So then, to inject it into the dynamics at some spacelike separated event, is blatantly nonlocal.



If these numbers have no ontology attached to them,

I don't know what that means. The "random numbers" we're talking about are supposed to be part of a physics theory, right?


Since there's no *mechanism* in them, there's no "causality" involved in this. It "just happens that way". This is the essence of an intrinsically fundamental stochastic theory, no? "Things just happen this way".

This is the ambiguity I can't accept. There's no "mechanism" (by hypothesis) in the production of this, as opposed to that, particular random number. But the random numbers are still part of a *physics theory* which presumably is a candidate for the mechanism of something. So it's not as if these random numbers have nothing to do with familiar notions of causality, ontology, etc. Your case here seems to trade on a slide from the random numbers "not having a mechanism" in the first sense, to some much broader claim that the random numbers sit totally outside the normal context of a physics theory and hence are totally unanalyzable in normal terms -- just things you have to blindly accept, no matter how they act or how non-locally they come into existence and/or affect other things.

Re: "things just happen this way", sure -- at a particular event. But "this just happened this way here, and therefore that just happened the same way over there" I can't accept.


Well, this is only true in the case that one assigns some reality to the concept of the wavefunction; not if it is an "algorithm to calculate probabilities", right?

Yes, as I think we've agreed before. If one takes the wf as part of a mere algorithm, then *nothing* is true. If one isn't willing to actually assert a *theory*, then of course there's no particular fact of the matter about whether one's theory is local or nonlocal, etc...



I think that there is a total absence of the notion of causal influence in THE STOCHASTIC ELEMENT of an *intrinsically* non-deterministic theory. That doesn't mean that the theory as a whole does not have elements of causality to it: the "deterministic part" (the evolution equations of the ontologically postulated objects) does have such a thing, of course. But the "random variables" that are supposed to describe the intrinsic randomness of the whole don't have - a priori - to obey any kind of rules, no?

No, I don't agree with this. Both parts have a causal aspect to them. I mean, presumably there are some dynamical equations even for the random part (e.g., the collapse postulate in OQM). Otherwise things would be entirely *too* random, yes? Even the randomness is, so to speak, governed by some laws. And more importantly, the randomness is still randomness *about something* -- it's randomly determined values for some allegedly real physical quantities or whatever. And that is the whole meaning of "causality" -- real physical things acting in accordance with their identity. In a non-deterministic theory, their identity is, by hypothesis, such as to produce evolution which isn't "fixed" by initial conditions. But the evolution is still governed by some (stochastic) laws. Otherwise, what exactly is one claiming is a theory?

Well, we're getting pretty distant from the main point, and even from the important and interesting tangent point. The basic question here, as I see it, is whether we should, from the POV of relativity, be troubled by a theory in which some random number produced by the dynamics can "come into existence" or "affect things" at spacelike separated events. My intuitive understanding of relativistic causality bristles at this. Some others' apparently doesn't. Frankly, I don't think either side has yet made a strong argument for its position... so I think we should focus any subsequent discussion on that.

But even that is beside what I consider the main point. I'd like to make sure we don't completely lose sight of the claim I started here with -- namely, that no Bell Local theory can agree with experiment. That, I think, is a surprising claim that deserves to be clarified and scrutinized -- even if, in the end, some people don't think it's an *interesting* claim because they don't think Bell Locality is a correct transcription of relativity's prohibition on superluminal causation (which is what all this stuff about stochastic theories is about).
 
  • #153
ttn said:
You're filling in pointless details and still missing what's crucial. What is the *dynamics* for this theory? Where are the two coins located? Under what circumstances exactly do they make this transition from "unflipped" to either "H" or "T"?
They're Special Relativistic point particles. Do what you will with them. Maybe they have mass and electric charge, who knows. I don't think that's relevant to the issue at hand.

The state of the coin has absolutely no effect on anything. In my toy theory we cannot even observe the state, although it is there.

I don't know what would cause a "flipping" interaction to occur. That is also irrelevant to the issue at hand. We don't need to know -- they just do, and Alice and Bob are both able to control when it happens.

But let's have fun and define something silly. Let's say... a "flipping" interaction occurs when:

In the coin's rest frame, we take the three vectors:
(1) Electric field at the origin
(2) Magnetic field at the origin
(3) Force on the coin due to gravity
and if they are all nearly perpendicular to each other, a "flipping" interaction occurs and the coin transitions nondeterministically to either the "H" or the "T" state. ("Nearly" meaning the angles are within ε radians of perpendicular, where ε is some fundamental constant.) :-p
 
  • #154
Hurkyl said:
They're Special Relativistic point particles. Do what you will with them. Maybe they have mass and electric charge, who knows. I don't think that's relevant to the issue at hand.

The state of the coin has absolutely no effect on anything. In my toy theory we cannot even observe the state, although it is there.

I don't know what would cause a "flipping" interaction to occur. That is also irrelevant to the issue at hand. We don't need to know -- they just do, and Alice and Bob are both able to control when it happens.


OK, it's that last point that is important. So Alice and Bob both have little black boxes with buttons. They choose at some random moment to push the button, and then a screen displays either "H" or "T". And it is found that when they both push the buttons (at the same time, as seen from some particular frame, say) they always get the same outcome, even though the pushings are spacelike separated.

And you're telling me you're willing to just shrug and accept this, without being bothered in the slightest that there's something nonlocal going on?
 
  • #155
ttn said:
And you're telling me you're willing to just shrug and accept this, without being bothered in the slightest that there's something nonlocal going on?
Well, you asked for a theory that can have correlations and yet still respect Lorentz invariance!


Anyways, the only "nonlocality" going on here is the failure of the statistical independence hypothesis, and I have no problem with that. Why do we even have that hypothesis in the first place? I suspect there is no good theoretical reason: either people added it by hand because it fit the data, or worse, people implicitly assumed it while giving heuristic arguments in its favor.



Incidentally, does it bother you at all that you're asking nonlocal questions?
 
  • #156
Hurkyl said:
Anyways, the only "nonlocality" going on here is the failure of the statistical independence hypothesis,

That just begs the very question at issue here.


and I have no problem with that. Why do we even have that hypothesis in the first place? I suspect there is no good theoretical reason: either people added it by hand because it fit the data, or worse, people implicitly assumed it while giving heuristic arguments in its favor.

I don't buy that at all. What about the tons of empirical evidence for physical locality that is nicely summarized by some requirement like Bell Locality? Your point is that we might be being misled by such evidence since we didn't have any examples in the history of science of an irreducibly stochastic (true) theory. So we're duped into thinking that Bell Locality is a reasonable formalization of "local causality" when really it's only reasonable for deterministic theories.

That's a reasonable objection, and a good assignment for further thought and discussion. But it's hardly the same as saying (as I think you are saying above) that there was never any good reason at all to accept something like Bell Locality. The fact is there is a very strong reason -- the objection is just that maybe the reason isn't quite 100% conclusive.



Incidentally, does it bother you at all that you're asking nonlocal questions?

I don't know what you mean.
 
  • #157
ttn said:
So we're duped into thinking that Bell Locality is a reasonable formalization of "local causality" when really it's only reasonable for deterministic theories.

Yes, that's my point of view. Actually, it is difficult to imagine what it actually means to have an *irreducibly* stochastic theory! The concept itself is rather strange. We never thought of probability that way, at least in physical science; in the human sciences and theology, it was of course considered - even essential - and was variously called "the will of the gods", "destiny", "karma", "providence" or whatever - it is almost at this level that one should indeed consider an irreducibly stochastic theory. Funny that in Greek mythology, even the gods were subjected to the irreducible randomness of "destiny"!

In the physical sciences, however, probability was always a "way to quantify our ignorance about what was exactly going on" - implicitly assuming that *if* we could know, somehow, what was going on in detail, then we'd know for sure what was going to happen - call it underlying determinism. And I think that this is what Bell's definition of locality really means, and why it is so plausible. It is hard for a scientist to adhere to something like "destiny" as an irreducible element of his theory.
 
  • #158
vanesch said:
Yes, that's my point of view. Actually, it is difficult to imagine what it actually means to have an *irreducibly* stochastic theory! The concept itself is rather strange. We never thought of probability that way, at least in physical science; in the human sciences and theology, it was of course considered - even essential - and was variously called "the will of the gods", "destiny", "karma", "providence" or whatever - it is almost at this level that one should indeed consider an irreducibly stochastic theory. Funny that in Greek mythology, even the gods were subjected to the irreducible randomness of "destiny"!

In the physical sciences, however, probability was always a "way to quantify our ignorance about what was exactly going on" - implicitly assuming that *if* we could know, somehow, what was going on in detail, then we'd know for sure what was going to happen - call it underlying determinism. And I think that this is what Bell's definition of locality really means, and why it is so plausible. It is hard for a scientist to adhere to something like "destiny" as an irreducible element of his theory.


I respect this possible objection to the propriety of Bell Locality as a formalization of what relativity is supposed to require of physical theories. But I still don't see any good argument behind the worry -- and I simply cannot understand why you and others don't find anything problematic (from the POV of local causality) with randomness that injects itself into the dynamics of spacelike separated events.

Think about this silly example of the two spatially separated coin-flipping boxes. Alice makes a *free choice* to press the button (that makes her box transition from the "ready" state to either H or T, at random). And the "outcome" of this free choice -- the causal effect of it -- leaps suddenly into existence not only in the spacetime region where Alice's choice triggered it, but at a distant location as well. There are several possible ways of talking about this, granted. For example, you could say that the same one stochastic transition has a simultaneous effect at the two places. Or you could say that the transition has a direct effect only near Alice, but then somehow that effect is the cause of a further effect at the distant location. The point is: *however* you talk about it, Alice's free choice initiates a causal sequence that results in a physical change over by Bob (as demonstrated by the *assumed* change in the "propensity" for various outcomes on Bob's device -- this being what is meant by the randomness being irreducible: either the state description attributed to something near Bob changes and the relevant laws applying there stay the same, or vice versa).

You all seem so willing to just shrug and talk about things just popping into existence for no reason at all, from which it's only a small step to shrugging at things popping into existence simultaneously at distant locations in a correlated way: if you're not going to ask for an explanation of why the one event happened (as opposed to some other), then why ask for an explanation of why two events are correlated?

The whole attitude here strikes me as blatantly unscientific. Way too much "just shrugging", to put it nicely. But even setting that kind of bother completely aside, I *still* cannot get myself to accept the reasonableness of your worry. That the correlated Heads-Tails game involves "spooky action at a distance" (according to the postulated theory in which the outcomes are genuinely random) is, to me, just obvious. So the fact that such a scenario involves a violation of Bell Locality is, to me, a nice *confirmation* of the reasonableness of that criterion.

I know that several of you see it differently. What's frustrating is that we're not making any progress on the point at issue, because *both sides* are simply taking it as "obvious" that this H/T type situation is -- or isn't -- causally local. Perhaps someone else can think of a way to make progress on this. But if not, hopefully we can return to the original question (Can a Bell Local theory exist which is consistent with experiment?) and simply leave this aside for later.
 
  • #159
ttn said:
1. Because this violates what I consider a reasonable criterion of locality!

2. Well, my "instinct" is that any such intrinsic random number (that is the output of a non-deterministic dynamics) exists at some particular place -- it comes into existence at some particular spacetime event. So then, to inject it into the dynamics at some spacelike separated event, is blatantly nonlocal.

3. I'd like to make sure we don't completely lose sight of the claim I started here with -- namely, that no Bell Local theory can agree with experiment. That, I think, is a surprising claim that deserves to be clarified and scrutinized -- even if, in the end, some people don't think it's an *interesting* claim because they don't think Bell Locality is a correct transcription of relativity's prohibition on superluminal causation (which is what all this stuff about stochastic theories is about).

I think Hurkyl and Vanesch have brought things into focus. In 1. 2. and 3., you state that by your definition, the experiments are evidence of non-locality and cannot be interpreted otherwise. Clearly, that is a leap I am not making and neither are the others. To me, it's circular reasoning because you assume (by your definition) that which you want to prove.

You then extend your conclusion so that Lorentz invariance must be dropped as well. So I think that it is actually you who is making the ol' switcheroo between Bell Locality and Lorentz invariance. But I acknowledge that it is *possible* that Lorentz invariance could be respected in a Bell non-local world.
 
  • #160
ttn said:
I respect this possible objection to the propriety of Bell Locality as a formalization of what relativity is supposed to require of physical theories. But I still don't see any good argument behind the worry -- and I simply cannot understand why you and others don't find anything problematic (from the POV of local causality) with randomness that injects itself into the dynamics of spacelike separated events.

Where does the randomness originate? That is a fair question, in my mind, but... You can't ding a theory that works as well as oQM because it doesn't explain it. Theories are supposed to be useful. If there is more utility to be extracted, then great... show us. But there is no specific problem as is.

I like the Beatles, and oQM doesn't explain that either. (Doesn't that bother you?) In other words, what you are asking is a "nice to have" but it is not essential. But I always have the door open for a better theory (i.e. one with more utility). In my opinion, there is no way that Bohmian Mechanics can be considered to have more utility than oQM at this time.
 
  • #161
(I'm dropping continuity with my previous posts -- this is an entirely different line of reasoning, and stands on its own merits... actually it might be two related but separate lines of reasoning)

Hurkyl said:
Incidentally, does it bother you at all that you're asking nonlocal questions?
ttn said:
I don't know what you mean.
One thing that struck me when reading your threads is that the issues you raise can only be noticed by some external observer capable of observing all of the "beables" in two space-like separated regions of space-time.

The beables in Alice's laboratory are sufficient to completely describe what's going on there: she has a 50% chance of seeing a heads.

The beables in Bob's laboratory are sufficient to completely describe what's going on there: he has a 50% chance of seeing a heads.

If Alice and Bob perform their observations and take the results to Charlie's laboratory for comparison, then the beables in Charlie's laboratory are sufficient to completely describe what's going on there: there's a 50% chance that they both saw heads, and a 50% chance that they both saw tails.

In all of these cases, we're asking for descriptions of localized events: in the first, it's the event where Alice presses her button. In the second, it's the event where Bob presses his button. In the third, it's the event where Alice and Bob meet.


However, your issue is not well-localized: it involves space-like separated events in both Alice's and Bob's laboratories.

(incidentally, all of the beables in Alice's and Bob's laboratories are still sufficient to completely describe what's going on in this non-localized situation)



We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?" If I could wave my arm and instantly cause gravitational waves over in China, then the answer to this question would be no, because there would be an observable effect that cannot be described by the Chinese beables.

But the beables in Bob's laboratory are enough to completely describe his experiment.


Let's recall the frequentist interpretation of probabilities: if we repeatedly perform identical experiments, the probability of an outcome is defined to be the limiting ratio of the number of times we see that outcome divided by the number of experiments we performed.

Let's suppose our experiment is: "Dave creates the two boxes and gives them to Alice and Bob. Bob takes the box to his laboratory, and then presses the button to see if he gets heads or tails."

As far as I can tell, if Alice presses her button and gets heads, then in this perspective it is still appropriate to say that Bob has a 50% chance of getting heads from his box. (Although it would be correct to say that Bob has a 100% chance of seeing heads, given that Alice saw heads)
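Spelling out the arithmetic under the stated joint distribution:

$$P(\text{Bob} = H) = P(HH) + P(TH) = \tfrac{1}{2} + 0 = \tfrac{1}{2},$$

$$P(\text{Bob} = H \mid \text{Alice} = H) = \frac{P(HH)}{P(HH) + P(HT)} = \frac{1/2}{1/2 + 0} = 1.$$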
 
  • #162
DrChinese said:
To me, it's circular reasoning because you assume (by your definition) that which you want to prove.

That's not really fair. I'm not "just assuming" that the experiments prove non-locality, and then saying "Hey, look, I proved that the experiments prove nonlocality." Rather, I'm arguing that Bell's mathematical definition of local causality is prima facie reasonable as a formalization of relativity's prohibition on superluminal causation. And with that as the definition of locality, it has been rigorously proved that no local theory can agree with experiment. Yes, this leaves semi-open the question of whether this definition of locality really is or really isn't "what relativity really requires." That is indeed a difficult question, but it's a separable one -- and even the restricted claim (no Bell Local theory agrees with experiment) is *stronger* than the claim that most people erroneously think is the lesson of Bell's theorem (namely: no Bell Local *hidden variable* theory agrees with experiment).

So there is a new and important step forward here, even if, as I think we'd all agree, it doesn't answer absolutely every possible sub-question/objection.



You then extend your conclusion so that Lorentz invariance must be dropped as well. So I think that it is actually you who is making the ol' switcheroo between Bell Locality and Lorentz invariance. But I acknowledge that it is *possible* that Lorentz invariance could be respected in a Bell non-local world.

I don't understand the first bit here. I don't think I ever claimed that the failure of Bell Locality requires the failure of Lorentz Invariance. In fact, this paper would be a counterexample to such a claim:

http://www.arxiv.org/abs/quant-ph/0602208

(this is a more readable version of a more technical paper that is referenced in the above)
 
  • #163
DrChinese said:
Where does the randomness originate? That is a fair question, in my mind, but...

My intuitive sense of the right way to answer this (for OQM) is to say: since it's a "measurement" that triggers the collapse, we should think of the randomness as originating ("being injected") at the spacetime event of the triggering measurement. And then it's clear enough, in OQM, that this has a causal effect on spacelike separated events.

But this is all nothing but fleshing out the statement: OQM violates Bell Locality.

I think vanesch and hurkyl would disagree with the first part: you shouldn't (I think they'd say) think of the random # as being injected at that particular spacetime point; rather, think of it as a new universal constant that pops into existence and is immediately accessible everywhere. This, to me, is a very weird way of thinking -- but more to the point, it seems to beg the question in regard to the word "immediately". To make precise what is meant by that information being "immediately" available everywhere, you'd have to specify some spacelike hypersurface... i.e., break Lorentz invariance. Of course, we know from Tomonaga-Schwinger QFT that the empirical predictions come out the same way no matter how you foliate the spacetime. So we're in the curious situation that the empirical predictions are Lorentz invariant, even though the theory itself isn't. But this is the same situation we're in for Bohmian mechanics, where there's an underlying nonlocality that is hidden at the level of signalling / empirical outcomes.
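(For reference, the Tomonaga-Schwinger equation evolves the state functional across an arbitrary spacelike hypersurface σ,

$$i\hbar c\,\frac{\delta \Psi[\sigma]}{\delta \sigma(x)} = \mathcal{H}_{\mathrm{int}}(x)\,\Psi[\sigma],$$

and its integrability condition -- that the Hamiltonian densities at spacelike separated points commute -- is what guarantees the predictions don't depend on the choice of foliation.)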


You can't ding a theory that works as well as oQM because it doesn't explain it. Theories are supposed to be useful. If there is more utility to be extracted, then great... show us. But there is no specific problem as is.

I think this comment completely misses the point. I'm not "dinging" theories on the grounds that they don't get the answers right (i.e., "work well"). Everybody knows OQM "works", i.e., gets the answers right. Likewise, everyone should know that there exist other theories (like Bohm's) that "work" equally well -- i.e., predict precisely the same answers. So empirical adequacy just isn't even on the table here as a relevant issue. I only care about theories that are, to begin with, empirically correct. I'm then raising a further and separate question: are the theories *locally causal*? And the answer turns out to be "no", not only for OQM and Bohm's theory but, as proved by Bell, for *any* possible empirically viable theory.


I like the Beatles, and oQM doesn't explain that either. (Doesn't that bother you?) In other words, what you are asking is a "nice to have" but it is not essential. But I always have the door open for a better theory (i.e. one with more utility). In my opinion, there is no way that Bohmian Mechanics can be considered to have more utility than oQM at this time.

I never claimed it did. I would only deny the reverse claim: that OQM can be considered to have more utility than Bohmian Mechanics at this time. The two theories make all the same experimental predictions. They're both "equally right" (by that standard of assessment). They are on a completely equal footing (by that standard).

Of course, there are some other standards on which Bohm wins hands down, e.g., not being plagued by the measurement problem. But that's a point for another day.
 
  • #164
Hurkyl said:
One thing that struck me when reading your threads is that the issues you raise can only be noticed by some external observer capable of observing all of the "beables" in two space-like separated regions of space-time.

That's not true. Just construct the relevant x-t diagram later. Or do you think it's always wrong to draw an x-t diagram, because it includes events at spacelike separated points, which no one observer at those events could be aware of? :smile:


The beables in Alice's laboratory are sufficient to completely describe what's going on there: she has a 50% chance of seeing a heads.

The beables in Bob's laboratory are sufficient to completely describe what's going on there: he has a 50% chance of seeing a heads.

That is completely misleading, though. Because (by your own hypothesis) HT and TH never occur, and they should occur 50% of the time if you mean what you say above *straight* (i.e., not as statements of the marginals of some joint distribution).


If Alice and Bob perform their observations and take the results to Charlie's laboratory for comparison, then the beables in Charlie's laboratory are sufficient to completely describe what's going on there: there's a 50% chance that they both saw heads, and a 50% chance that they both saw tails.

Sure, but that's only consistent with what you say above if the 50/50 H/T outcome for Bob was correlated with the 50/50 H/T outcome for Alice. And then Bell's question is: is this correlation locally explicable? And the answer is: yes, but only by assuming "hidden variables" which determine in advance the outcome. Here's what he says:

"It is important to note that to the limited degree to which *determinism* plays a role in the EPR argument, it is not assumed but *inferred*. What is held sacred is the principle of 'local causality' -- or 'no action at a distance'. Of course, mere *correlation* between distant events does not by itself imply action at a distance, but only correlation between the signals reaching the two places. These signals ... must be sufficient to *determine* whether the particles go up or down. For any residual undeterminism could only spoil the perfect correlation.

"It is remarkably difficult to get this point across, that determinism is not a *presupposition* of the analysis. There is a widespread and erroneous conviction that for Einstein [*] determinism was always *the* sacred principle... [but, as Einstein himself made clear, it isn't]."

There is from the [*] the following footnote: "And his followers [by which Bell clearly means himself]. My own first paper on this subject (Physics 1, 195 (1965)) starts with a summary of the EPR argument *from locality to* deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables."

This footnote is extremely important, because, decades later, "the commentators" are still almost universally confused about this. It is precisely this point that I have been at pains to clarify in this thread (and in some other parts of my life!). Oh, the above quotes are all from the beautiful paper "Bertlmann's Socks and the nature of reality", reprinted in Speakable and Unspeakable.



However, your issue is not well-localized: it involves space-like separated events in both Alice's and Bob's laboratories.

What "issue"? The whole *point* is that space-like separated events that are *correlated* can only be *locally* explained by stuff in the overlapping past light cones. You seem to be dancing around the edges of the MWI line that those "definite correlated events" aren't even *real* -- didn't really *happen*. But I think we've already covered that issue completely; I at least have no more energy for retrying that case.


We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?"

But this is precisely the Bell Locality condition! That condition can be stated: are all the beables here [i.e., say, in the past light cone of some spacetime event where some "outcome" appears] sufficient to define the probabilities for various possible "outcomes" -- with "sufficient" defined as follows: throwing some additional information about spacelike separated regions into the mix doesn't *change* the probabilities.
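In symbols (a sketch of the standard formulation, with λ a complete specification of beables in the relevant past light cone, a and b the two experimenters' settings, and A and B their outcomes), Bell Locality requires

$$P(A \mid a, \lambda) \;=\; P(A \mid a, b, B, \lambda),$$

and, together with the mirror-image condition for B, this is equivalent to the factorization

$$P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\,P(B \mid b, \lambda).$$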

Your own example of the H/T devices *violates* this condition. Knowing (what according to your minimalist theory is) all there is to know in the past light cone of Alice's exercise is *not* sufficient (with the above definition) to define the probabilities for the possible outcomes. For example, if we specify in addition that Bob pushed his button and got "H", then the probability for Alice to get "H" changes from 50% to 100% -- even though that 50% was based on a *complete specification of beables* in the past light cone of Alice's event.

So your own theory is nonlocal, as I've been saying all along. Of course, this doesn't mean that the mere fact of perfect correlation between Alice's and Bob's outcomes, proves that nature is nonlocal. The correlation *can* be explained locally by adding "hidden variables", i.e., by considering a different theory than the one *you* proposed.
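To make the contrast concrete, here is a rough sketch (the names and structure are mine, purely illustrative) of the two candidate theories side by side: a local "hidden variable" account, in which a bit fixed at the source determines both outcomes in advance, versus the minimalist account, in which one irreducibly random draw happens at flip time:

```python
import random

def hidden_variable_theory():
    """Local account: a beable lam is fixed in the overlapping past light
    cones; each outcome depends only on beables locally available."""
    lam = random.choice(["H", "T"])  # fixed at the source, carried along
    alice_outcome = lam              # read off locally by Alice's box
    bob_outcome = lam                # read off locally by Bob's box
    return alice_outcome, bob_outcome

def minimalist_theory():
    """The minimalist theory: no predetermining beable; one irreducibly
    random draw governs both spacelike separated outcomes at flip time."""
    joint = random.choice(["H", "T"])
    return joint, joint

# Operationally the two are indistinguishable -- identical statistics --
# which is why the dispute is about the theories, not the data.
print(hidden_variable_theory(), minimalist_theory())
```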



But the beables in Bob's laboratory are enough to completely describe his experiment.

No, they aren't. Not in the sense defined above.



As far as I can tell, if Alice presses her button and gets heads, then in this perspective it is still appropriate to say that Bob has a 50% chance of getting heads from his box.

If that's what your *theory* says, then your theory is going to be empirically *false* because it'll predict that sometimes Bob gets tails, even though (unknown of course to him) Alice has gotten heads.
 
  • #165
DrChinese said:
1. I think this is the crux of your issue. This is a specific claim of oQM, and is not strictly prohibited by relativity.

2. This is definitely not correct. You cannot objectively demonstrate that the outcome at B is in any way dependent on a measurement at A. If you could, you could perform superluminal signalling. All you can actually demonstrate is that the correlated results follow the HUP.

3. This is really part of the interpretation one adopts.

The way I see it, the crux of the matter is the following.

Classically, we are used to equating "correlation between events" with "causality". In quantum mechanics, this link is broken. There may be correlation without a cause/effect relationship.

Would that be a fair statement?

Pat
 
  • #166
nrqed said:
The way I see it, the crux of the matter is the following.

Classically, we are used to equating "correlation between events" with "causality". In quantum mechanics, this link is broken. There may be correlation without a cause/effect relationship.

Would that be a fair statement?

Pat

Yeah, I think that is a possibility. And maybe that's even what ttn is arguing at some level. I don't think anyone is really saying that we understand everything that is happening - I certainly don't. For example, and relating to your comment: we define cause/effect relationships to have the cause preceding the effect. In a world in which the laws of physics are time symmetric, is this really a reasonable definition?

If you reverse the flow of time (and therefore the sequence of events), what was formerly a cause might now appear as a random effect. So perhaps the future actually influences the past in some way (this need not violate special relativity, which should operate the same regardless of the direction of time).

So correlations might then appear that seem non-local at the end of the day - as you suggest.
 
  • #167
ttn said:
I think vanesch and hurkyl would disagree with the first part: you shouldn't (I think they'd say) think of the random # as being injected at that particular spacetime point; rather, think of it as a new universal constant that pops into existence and is immediately accessible everywhere. This, to me, is a very weird way of thinking -- but more to the point, it seems to beg the question in regard to the word "immediately".

Even *that* would be "deterministic" because what you now introduce is a physical scalar FIELD over spacetime with a constant (but unknown - hence probabilistically described) value and if only you KNEW the value of that constant field, you would know with certainty what the outcome would be - and hence the theory is being *underlying deterministic* with an unknown beable (the constant scalar field).

It is DAMN HARD to imagine an *irreducibly stochastic* theory, because it means that *one cannot assign any physical existence to the random quantities*. Because from the moment one does, these become "beables" and hence if their values are known, we have changed the thing into a "deterministic theory with unknown beables to which we assign probabilities".

And from the moment that you get rid of that, so that the random quantities are NOT physical, and "just happen", you cannot talk about "their locality" or anything.
 
  • #168
ttn said:
since it's a "measurement" that triggers the collapse, we should think of the randomness as originating ("being injected") at the spacetime event of the triggering measurement.
Oh, that's what you mean by "injecting randomness".

ttn said:
I think vanesch and hurkyl would disagree with the first part: you shouldn't (I think they'd say) think of the random # as being injected at that particular spacetime point; rather, think of it as a new universal constant that pops into existence and is immediately accessible everywhere.
That's not what I think at all! The randomness was always there -- it just manifested itself in the measurement. (Of course, some sort of measurement is the only way for anything physical to manifest itself.)

Well, I almost told the truth -- one of my pet thoughts on QM was that things like "position" and "momentum" do not map onto the fundamental elements of reality... but we can make it look like they do with a bit of randomization. So our devices for measuring such things are actually randomized in some sense. But I haven't thought too much about this and don't give it much weight anymore.



ttn said:
That is completely misleading, though. Because (by your own hypothesis) HT and TH never occur, and they should occur 50% of the time if you mean what you say above *straight* (i.e., not as statements of the marginals of some joint distribution).
If you really meant that parenthetical -- that the 50/50 statements are *not* marginals of some joint distribution -- then you are way off track. In statistics, you absolutely, positively, cannot ask questions like:

P(Alice sees "H" and Bob sees "H")

or

P(Bob sees "H" | Alice sees "H")

without there being some joint distribution governing both random variables.

In other words, if they aren't marginals of some joint distribution, then you cannot even ask if they're statistically independent -- such a question would be mathematical gibberish!
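For the record, the definition in play: two random variables with a joint distribution are statistically independent iff

$$P(X = x,\, Y = y) = P(X = x)\,P(Y = y) \quad \text{for all } x, y.$$

For the coins, P(HH) = 1/2 while P(Alice=H)·P(Bob=H) = 1/4, so the pair is, by design, *not* independent -- and even posing the question required the joint distribution to exist.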


ttn said:
Sure, but that's only consistent with what you say above if the 50/50 H/T outcome for Bob was correlated with the 50/50 H/T outcome for Alice. And then Bell's question is: is this correlation locally explicable? And the answer is: yes, but only by assuming "hidden variables" which determine in advance the outcome.
You don't need hidden variables: for example, the unitary evolution of QM explains the correlation just fine.


Hurkyl said:
We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?"
ttn said:
But this is precisely the Bell Locality condition! That condition can be stated: are all the beables here [i.e., say, in the past light cone of some spacetime event where some "outcome" appears] sufficient to define the probabilities for various possible "outcomes" -- with "sufficient" defined as follows: throwing some additional information about spacelike separated regions into the mix doesn't *change* the probabilities.
No! That part in red is what I did not say.

All of the beables here are sufficient to fully describe what happens here. They're just not sufficient to fully describe any correlations between things that are here with things that are over there. To fully describe those, you need the whole collection of beables that are here and there. (But you don't need any beables from a third place)

That red part is the statistical independence hypothesis.


ttn said:
Your own example of the H/T devices *violates* this condition. Knowing (what according to your minimalist theory is) all there is to know in the past light cone of Alice's exercise is *not* sufficient (with the above definition) to define the probabilities for the possible outcomes. For example, if we specify in addition that Bob pushed his button and got "H", then the probability for Alice to get "H" changes from 50% to 100% -- even though that 50% was based on a *complete specification of beables* in the past light cone of Alice's event.
It's not the probability that changed: it's the question you asked.

P(Alice sees "H") is always 50%. It's just that once you learned Bob saw an "H", you started asking for P(Alice sees "H" | Bob sees "H").

But as you said, if you make the statistical independence hypothesis, then that conditional probability is the same as the marginal probability, and so you would be justified in saying the probability changed.

But if you do not make the statistical independence hypothesis, then you cannot conclude that P(Alice sees "H") has changed when Bob sees his "H".
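
A hedged sketch of that last point, assuming the perfectly correlated HH/TT law of the example: the marginal never moves; learning Bob's result only changes which question is being asked.

```python
# The perfectly correlated joint law of the example: only HH and TT occur.
joint = {("H", "H"): 0.5, ("T", "T"): 0.5}

# Marginal: P(Alice sees "H"), irrespective of Bob.
p_alice_h = sum(p for (a, _), p in joint.items() if a == "H")

# Conditional: P(Alice sees "H" | Bob sees "H") = P(HH) / P(Bob sees "H").
p_bob_h = sum(p for (_, b), p in joint.items() if b == "H")
p_alice_h_given_bob_h = joint.get(("H", "H"), 0.0) / p_bob_h

print(p_alice_h)              # 0.5 -- this never changes
print(p_alice_h_given_bob_h)  # 1.0 -- a different question, not a changed answer
```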
 
  • #169
vanesch said:
It is [DARN] HARD to imagine an *irreducibly stochastic* theory, because it means that *one cannot assign any physical existence to the random quantities*. Because from the moment one does, these become "beables" and hence if their values are known, we have changed the thing into a "deterministic theory with unknown beables to which we assign probabilities".
I don't think there's a problem with a random variable evolving deterministically: the result is still a random variable.

But remember what a random variable is: a probability measure on a set of outcomes. So the beables are the probabilities.
 
  • #170
Hurkyl said:
I don't think there's a problem with a random variable evolving deterministically: the result is still a random variable.

But remember what a random variable is: a probability measure on a set of outcomes. So the beables are the probabilities.

Yes, while the lesser mathematicians among us (like me) think of this as "a number (or other object, such as a function) whose value we don't know, but for which we have a probability distribution". A "deterministic evolution of a random variable" is then seen as the deterministic evolution of the original object whose value we didn't know; the evolution drags the uncertainty along with it, and this results in the "dragged-along" probability distribution of the result.
And it is when you picture *this*, with these objects being real beables, that you arrive at Bell's condition. It is difficult to imagine random quantities which do NOT "materialise" this way.
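
A minimal sketch of that "dragging along", assuming an arbitrary toy map f as the dynamics: evolving each possible value deterministically just pushes the probability weights forward.

```python
# A "random variable" represented by its probability distribution over values.
dist = {0.0: 0.5, 1.0: 0.5}

def f(x):
    """Arbitrary deterministic dynamics -- a toy stand-in for time evolution."""
    return 2.0 * x + 1.0

# Pushforward: the deterministic evolution drags the probabilities with it.
evolved = {}
for value, prob in dist.items():
    evolved[f(value)] = evolved.get(f(value), 0.0) + prob

print(evolved)  # {1.0: 0.5, 3.0: 0.5} -- still a random variable
```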
 
  • #171
vanesch said:
It is difficult to imagine random quantities which do NOT "materialise" this way.


They better "materialize" some way, or they will have no dynamical consequences. Nobody thinks that the irreducible randomness at some point brings into existence a new (physical) scalar field, or a big set of polished bronze numerals reading "0.732752" (or whatever the random number was). But if what's being explained by the underlying stochastic theory is some kind of measurement outcome, then obviously the generated random numbers have to have a real physical effect on *something* which then in turn physically influences the macroscopic measurement devices (which *nobody*, except crazy MWI-people, denies are beables).

All of the points in the last few posts have been pointless semantic distractions. It's clear that any random numbers generated by a stochastic theory have to manifest themselves in some physical way -- otherwise they would be irrelevant to empirical observations and there'd be no point at all in hypothesizing the theory in question. The only question can be: where (at what spacetime event) do such numbers arise?

To answer this question is to admit non-locality (in the kind of examples we've been discussing). If Bob makes the "first" measurement and there is something random that controls his outcome, then the subsequent effect of that number (or of its various causal descendants near Bob) on Alice's distant outcome constitutes nonlocality.

And to *not* answer this question is to admit non-locality. If Bob makes the "first" measurement and this new random number comes into existence *not* at some spacetime event near Bob, but (say) simultaneously along some spacelike surface through Bob's measurement event, said popping into existence constitutes nonlocality.
 
  • #172
Hurkyl said:
You don't need hidden variables: for example, the unitary evolution of QM explains the correlation just fine.

You've got to be kidding? First off, the unitary evolution is deterministic. Second, it *doesn't* "explain the correlation just fine" since it predicts that Alice's box never ever reads definitely "H" or definitely "T" -- in direct contradiction with what Alice (by assumption in *your* example) sees.

I will grant, however, that if you are going to begin by throwing out the empirical data that was supposed to define this situation (Alice and Bob each see H or T w/ 50/50 probability, but the two outcomes are always paired HH or TT) then, yeah, sure unitary-only QM can explain the correlations. Just like Ptolemy's theory of the solar system can explain the last 100 days of data for the price of tea in China...




Hurkyl said:
All of the beables here are sufficient to fully describe what happens here.

Are you still talking about unitary-only QM?

I don't know what to say. If you think the above, you simply haven't understood Bell Locality at all. The whole point of this condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a local way? For your example of the irreducibly-random theory which purports to explain the HH/TT correlation, Bell Locality is violated: a complete specification of beables along some spacelike surface prior to both measurement events does *not* screen off the correlation.
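
To spell out the condition at stake (standard notation, with $\lambda$ standing for the complete specification of beables on that spacelike surface), screening off means

$$P(A, B \mid \lambda) = P(A \mid \lambda)\,P(B \mid \lambda).$$

In your coin example nothing in $\lambda$ fixes the outcomes, so $P(A{=}H,\,B{=}H \mid \lambda) = 1/2$ while $P(A{=}H \mid \lambda)\,P(B{=}H \mid \lambda) = 1/4$ -- the correlation is not screened off.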


Hurkyl said:
They're just not sufficient to fully describe any correlations between things that are here with things that are over there. To fully describe those, you need the whole collection of beables that are here and there. (But you don't need any beables from a third place)

The version of Bell Locality that actually gets *used* in the derivation is equivalent to this weaker condition. The probability of one event is conditionalized not just on a complete specification of beables in the past light cone of that event, but across a spacelike hypersurface that also crosses the past light cone of the *other* event. That is, we do not presuppose what is nowadays sometimes called "separability".




Hurkyl said:
It's not the probability that changed: it's the question you asked.

P(Alice sees "H") is always 50%. It's just that once you learned Bob saw an "H", you started asking for P(Alice sees "H" | Bob sees "H").

But as you said, if you make the statistical independence hypothesis, then that conditional probability is the same as the marginal probability, and so you would be justified in saying the probability changed.

But if you do not make the statistical independence hypothesis, then you cannot conclude that P(Alice sees "H") has changed when Bob sees his "H".

I'm sorry, but every time you start analyzing probabilities and such, you turn into a mathematician -- i.e., you completely forget about the physical situation we're talking about here. The whole question of locality is whether goings-on near Alice are *alone* sufficient to account for all that there is to account for near Alice (her outcomes). What you have now lapsed into calling the "statistical independence hypothesis" is the *physical* requirement that a *local physics theory* shouldn't have its probabilities for one event *depend* on happenings at spacelike separation, when a *complete specification of beables* in the past light cone of the first event is already given.

Yes, one can *deduce* from this "statistical independence" -- a complete specification of beables in the past of the two events should screen off any correlations between the outcomes. But this is not an arbitrary hypothesis; it is a *consequence* of the basic requirement, which is *locality*.
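
Spelled out (ordinary probability theory, with $\lambda$ again the complete beable specification):

$$P(A, B \mid \lambda) = P(A \mid B, \lambda)\,P(B \mid \lambda) = P(A \mid \lambda)\,P(B \mid \lambda),$$

where the second equality is just the locality requirement (conditioning on the spacelike-separated outcome $B$ adds nothing once $\lambda$ is given), and the factorized form is precisely the "statistical independence" that follows.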

Let me ask you a serious question: have you ever read Bell's papers on this stuff?
 
  • #173
vanesch said:
Even *that* would be "deterministic", because what you now introduce is a physical scalar FIELD over spacetime with a constant (but unknown, and hence probabilistically described) value; if only you KNEW the value of that constant field, you would know with certainty what the outcome would be -- and hence the theory is *deterministic at the underlying level*, with an unknown beable (the constant scalar field).

I don't agree; this is not deterministic. There could be irreducible stochasticity in the initial assignment of a value to the "scalar field."

I see no reason to postulate the existence of any physical scalar fields. The point is too simple to deserve such fanciness: you could have a theory in which there is irreducible randomness (the production of some random number from some kind of probability distribution), but in which that number (whatever it turns out to be) is then "available" at other spacetime events to affect beables. And my point is simple: if it is only available at spacetime points in the future light cone, the theory is local; if it's available also outside the future light cone, the theory is nonlocal.
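
If it helps, here is a toy sketch of that distinction (Python; the event coordinates and the c = 1 convention are illustrative assumptions, not a real model): the random number drawn at Bob's event is "available" at another event only if that event lies in Bob's future light cone.

```python
import random

C = 1.0  # toy units in which the speed of light is 1

def in_future_light_cone(source, target):
    """Is the event `target` = (t, x) on or inside the future light cone of `source`?"""
    dt = target[0] - source[0]
    dx = abs(target[1] - source[1])
    return dt >= 0 and dx <= C * dt

bob_event = (0.0, 0.0)                      # the random number is produced here
random_outcome = random.choice(["H", "T"])  # the irreducibly random result

alice_spacelike = (0.0, 5.0)   # spacelike separated from Bob's event
alice_later = (10.0, 5.0)      # inside Bob's future light cone

for event in (alice_spacelike, alice_later):
    if in_future_light_cone(bob_event, event):
        print(event, "may be affected by", random_outcome, "-- a local influence")
    else:
        print(event, "cannot depend on it without nonlocality")
```

On the local rule, the spacelike-separated wing never sees the number, so a number injected only at Bob's event cannot by itself produce the perfect HH/TT correlation.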


vanesch said:
It is DAMN HARD to imagine an *irreducibly stochastic* theory, because it means that *one cannot assign any physical existence to the random quantities*. Because from the moment one does, these become "beables" and hence if their values are known, we have changed the thing into a "deterministic theory with unknown beables to which we assign probabilities".

I don't understand this attitude at all. Beables are beables. I'm happy to permit, under the banner of "irreducibly stochastic theories", theories in which the evolution of beables is non-deterministic. But as I said before, what would be the *point* of the randomness if it didn't affect the beables? It would then have no effect on *anything* because there *is* (by definition) nothing but the beables! You seem to want to parse "irreducibly stochastic theories" as something in which, in addition to the beables, there are these other "things" that "exist", except that they are "random" in the sense that they don't exist in any particular measure/degree/value/whatever. But "random" isn't the same as "indefinite".

You say that as soon as one assigns physical existence to the random quantities, the theory becomes deterministic. I could not disagree more strongly. First, if you *don't* assign physical existence to the random quantities, what the heck is the point? They then play absolutely no role in the dynamics. And second, whether you do or don't assign physical existence to the random quantities has no bearing whatever on whether the theory is deterministic. A theory in which there is randomness which affects things is *not deterministic*. For example: orthodox QM (with the collapse postulate) is *not* a deterministic theory, even though there is irreducible randomness (which of the eigenstates the initial state collapses to) and the "outcome" of this "random choice" manifests itself immediately in the beables (the wave function is now that eigenstate).
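
Written out in the usual textbook form (for a measurement whose eigenstates are $|i\rangle$, performed on a state $|\psi\rangle$):

$$P(i) = |\langle i \mid \psi \rangle|^2, \qquad |\psi\rangle \longmapsto |i\rangle.$$

The choice of $i$ is irreducibly random, yet the post-collapse state -- a beable, on this reading -- is immediately affected by it.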


vanesch said:
And from the moment that you get rid of that, so that the random quantities are NOT physical, and "just happen", you cannot talk about "their locality" or anything.

But then there'd be no need to talk about their locality or anything, since they would play no role whatsoever in the evolution of beables (and hence no role whatsoever in the explanation of empirical observations).
 
  • #174
DrChinese said:
If you reverse the flow of time (and therefore the sequence of events), what was formerly a cause might now appear as a random effect. So perhaps the future actually influences the past in some way (this need not violate special relativity, which should operate the same regardless of the direction of time).

So correlations might then appear that seem non-local at the end of the day - as you suggest.


Please don't tell me I've spent all this time trying to explain things to you, only to have *this* appear as your considered view.

Sure, you can explain EPR/Bell data with a theory in which the causes of certain events come from the future. Do you seriously think such a theory would be "locally causal"?
 
  • #175
ttn said:
All of the points in the last few posts have been pointless semantic distractions.
Pointless semantic distractions?? Does that mean you no longer care to assert that what I put forth as an alternative to "locality" is actually Bell Locality?


ttn said:
It's clear that any random numbers generated by a stochastic theory have to manifest themselves in some physical way
The random numbers that are "generated" are the manifestation -- they are not any sort of dynamical entities, and they do not have any sort of effect on anything. They are nothing more than the result when you insist that a stochastic theory produce an actual outcome.

(A stochastic theory, of course, doesn't like to produce outcomes... it prefers to simply stick with a probability distribution on the outcome space)
 