What do violations of Bell's inequalities tell us about nature?

In summary: violations of Bell's inequalities do not imply that nature is nonlocal, though it's tempting to assume that it is, given that nonlocal hidden-variable models of quantum entanglement are viable.

What do observed violations of Bell's inequality tell us about nature?

  • Nature is non-local
    Votes: 10 (31.3%)
  • Anti-realism (quantum measurement results do not pre-exist)
    Votes: 15 (46.9%)
  • Other: Superdeterminism, backward causation, many worlds, etc.
    Votes: 7 (21.9%)
  • Total voters: 32
  • #176
ttn said:
Well, thank you for at least *attempting* to address the challenge! But I don't think this really does it. What we are looking for is a local-but-non-realist explanation of the actual results that the experiment will give (in the special case where a=b, or equivalently for those particle pairs for which a happens to equal b).

Well, I can certainly stipulate that
If Bob measured the spin along axis [itex]\hat{b}[/itex] and got result [itex]R_b[/itex] (either +1 or -1), and the message from Alice said "I measured the spin along axis [itex]\hat{a}[/itex] and got result [blah, blah, blah].", then Bob will read the "blah, blah, blah" as equal to [itex]R_b[/itex] with probability [itex]\sin^2\!\left(\dfrac{\theta}{2}\right)[/itex].

I agree that this is a bizarre, comical way of resolving the conundrum, but it's got the same flavor as theories such as MWI that deny that measurements have definite values. That's why, in spite of your insistence that Bell's theorem is only about locality, I insist that some kind of realism assumption is required to derive nonlocality.

My claim is that, if the underlying *physics* model is local and non-realist, then it will predict that at least sometimes, when a=b, the results will not be perfectly correlated.

Why? We can specify that Bob's probability of misreading Alice's message depends on Bob's state, as well as the state of Alice's message. That's perfectly local. We can certainly make our probabilities such that it becomes a certainty in certain circumstances.
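(For concreteness, here is a minimal Python sketch of the reading rule stipulated above; the misreading probability depends only on data available at Bob's end, and the numbers are purely illustrative.)

[code]
import math
import random

def run_trials(theta, n=100_000):
    # Toy version of the stipulated rule: Bob has a definite local result
    # r_bob, and he reads Alice's message as equal to r_bob with probability
    # sin^2(theta/2), using only information available at his end.
    same = 0
    for _ in range(n):
        r_bob = random.choice([+1, -1])                        # Bob's local outcome
        reads_same = random.random() < math.sin(theta / 2) ** 2
        perceived = r_bob if reads_same else -r_bob            # what Bob thinks Alice reported
        same += (perceived == r_bob)
    return same / n

print(run_trials(0.0))           # ~0.0: for a = b, Bob always reads the opposite result
print(run_trials(math.pi / 2))   # ~0.5
[/code]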

Also, what you said above about the sense in which the proposed model is non-realist makes no sense.

Maybe some other word than "realist" is called for, but the point is that Bob will be constructing a history of what happened based on his reading of the messages from Alice, but that history does not reflect anything real (at least as far as the parts referring to Alice's results).

At best, this is yet another distinct sense of "realism". But it has nothing to do with the deterministic non-contextual hidden variables sense of "realism" that DrC and others who voted "anti-realism" in the poll think is relevant here.

That's not clear to me.
 
  • #177
ttn said:
You can say it doesn't if you want. But that theory will still be nonlocal.

Well, that's just a terminology thing. I don't agree with that terminology. I think it's misleading.

The wave function, at least for people who think (following Bohr) that the wave function provides a complete description of (microscopic) physical reality.

I don't think that there is general agreement that the wave function is objectively real. According to Everett's definitions, the wave function of an electron (say) is relative to the observer.

It's true, there are people who don't think the wf in ordinary QM should be understood as a beable, as corresponding to some physical reality. The question for them is: what, then, does?

That's a very good question, and the answer seems to be: we don't know.
 
  • #178
stevendaryl said:
Well, I can certainly stipulate that
[...]

Heck, if you're going to just stipulate stuff, why not just stipulate the existence of a local theory that explains all the QM predictions?


I agree that this is a bizarre, comical way of resolving the conundrum, but it's got the same flavor as theories such as MWI that deny that measurements have definite values. That's why, in spite of your insistence that Bell's theorem is only about locality, I insist that some kind of realism assumption is required to derive nonlocality.

More seriously, I agree with you here -- both about its being bizarre / comical / unserious and about its being substantially similar to MWI. (Yes, that was supposed to be funny, but I actually mean it, too.)

I'm trying to gradually extract myself from this thread, so the last thing I want to do here is get into a big side discussion of MWI. But one of the crucial points is what you more or less said here: if "explaining the QM predictions locally" means explaining what Aspect et al. think actually occurred in their lab, then there's a really important sense in which MWI doesn't do this at all. It says that, actually, something quite radically different happened than what Aspect et al. thought. And then it tells an elaborate fairy tale about how, nevertheless, it predicts that Aspect et al. should be deluded into thinking what they thought. That is, instead of explaining what one (perhaps naively) thinks needs explaining, it instead (allegedly) explains how the subjective delusion (that the outcomes predicted by QM actually happened) arises in consciousness. Whatever else anybody wants to say about it, that's ... not the same thing.

And second, I think it is highly dubious to say that MWI is a local theory. It's not clear what the local beables are supposed to be, and I stand with Bell in thinking that theories without local beables certainly cannot be meaningfully asserted to be local, or even nonlocal. It's like calling Beethoven's 5th symphony local. There is at least one attempt I know of to be clear and explicit about local beables for MWI, but on that version it actually turns out that the theory is nonlocal.

http://arxiv.org/abs/0903.2211


Why? We can specify that Bob's probability of misreading Alice's message depends on Bob's state, as well as the state of Alice's message. That's perfectly local. We can certainly make our probabilities such that it becomes a certainty in certain circumstances.

I'm sorry, but as soon as you start talking about Bob's misreading of Alice's message -- instead of what Alice's actual outcome was -- I lose interest.

Maybe some other word than "realist" is called for, but the point is that Bob will be constructing a history of what happened based on his reading of the messages from Alice, but that history does not reflect anything real (at least as far as the parts referring to Alice's results).

Yes, I get that that's what you have in mind, and you're absolutely right that it's quite relevant to MWI. But surely you can see how it's a form of simply "cheating" to play this kind of game. I don't mean that such ideas are necessarily not worth considering (though personally I find them rather silly). But it is *really* changing the underlying rules of the discussion when instead of explaining the facts, you say that everybody is deluded about the facts and start trying to explain the delusions. See how far that kind of game gets you in other fields in science, for example: "Actually my design for the bridge *was* good -- you are just deluded into thinking that it collapsed and killed all those people."


That's not clear to me.

People think that, to derive a Bell inequality, you need several assumptions including at least (a) locality and (b) deterministic non-contextual hidden variables. Let's call (b) "realism" for short. People then see that experiments violate the inequality. (Note here they are not at all thinking "ooh, maybe we are all only *deluded* into thinking the inequality is violated, because we are actually *deluded* about any of the individual measurements having had any definite outcome at all!". That thought doesn't enter a normal person's mind! They take the data at face value.) So they say we have to reject (a) or (b). They say that it's crazy to reject (a) since (a) is required by relativity. Whereas, they say, only senile old fools like Einstein ever believed in (b), and indeed there are a bunch of no-go theorems basically providing independent proof that we shouldn't believe in (b), so, they say, the choice is obvious: Bell's theorem shows that we should reject (b).

People who voted "anti-realism" in the poll are of course free to explain their reasoning if this doesn't capture it, but I'm pretty sure that's the main idea for most of these people.
 
  • #179
stevendaryl said:
Well, that's just a terminology thing. I don't agree with that terminology. I think it's misleading.

Well fine, but it's not as if it's arbitrary or undefined terminology. I've explained repeatedly exactly what I mean by "locality", referring to Bell's formulation, etc. So if you want to use the word "local" to mean something distinct, that's of course no problem. But don't mistake doing that for constructing an argument that Bell's formulation somehow fails to capture the concept it's intended to capture.



I don't think that there is general agreement that the wave function is objectively real. According to Everett's definitions, the wave function of an electron (say) is relative to the observer.

Of course there's not general agreement. But the point is that it doesn't matter. There's a theory -- let's call it QM1 -- which is orthodox QM with the wf interpreted as a beable. That theory is nonlocal. Then there's another theory -- let's call it QM2 -- which is orthodox QM with the wf interpreted as *not* a beable. That theory is nonlocal.


That's a very good question, and the answer seems to be: we don't know.

And neither do they. That is, maybe somebody will come up with a new candidate account of what the beables are. That will be a new theory. Perhaps it will be a local theory. If so, then it will make empirical predictions that respect Bell's inequality. That's what the theorem says. Note that we don't have to wait around for the people to actually come up with their theory (or figure out how they think QM should be interpreted, or anything like that) in order to know this. That's the beauty of the theorem.
 
  • #180
ttn said:
Heck, if you're going to just stipulate stuff, why not just stipulate the existence of a local theory that explains all the QM predictions?

My model is such a theory for the EPR-type predictions, so I don't need to stipulate its existence.

if "explaining the QM predictions locally" means explaining what Aspect et al. think actually occurred in their lab, then there's a really important sense in which MWI doesn't do this at all. It says that, actually, something quite radically different happened, than what Aspect et al though. And then it tells an elaborate fairy tale about how, nevertheless, it predicts that Aspect et al should be deluded into thinking what they thought.

That's what, to me, the non-realist branch of the "nonlocal or nonrealistic" choice means--that the world is VERY different from what our senses would lead us to expect.

And second, I think it is highly dubious to say that MWI is a local theory. It's not clear what the local beables are supposed to be,

Well, one possibility, not for quantum mechanics, but for quantum field theory, is to take as the "beables" not the wave function, but the field operators. They obey a perfectly local evolution equation.

I'm sorry, but as soon as you start talking about Bob's misreading of Alice's message -- instead of what Alice's actual outcome was -- I lose interest.

Well, in that case, you should rephrase your claims about what Bell's theorem shows as: "Among the theories that I am interested in, the only ones consistent with quantum mechanics are nonlocal".
 
  • #181
ttn said:
Of course there's not general agreement. But the point is that it doesn't matter. There's a theory -- let's call it QM1 -- which is orthodox QM with the wf interpreted as a beable. That theory is nonlocal. Then there's another theory -- let's call it QM2 -- which is orthodox QM with the wf interpreted as *not* a beable. That theory is nonlocal.

QM2 is not a theory of "beables" at all, local or otherwise.
 
  • #182
ttn said:
(Note here they are not at all thinking "ooh, maybe we are all only *deluded* into thinking the inequality is violated, because we are actually *deluded* about any of the individual measurements having had any definite outcome at all!". That thought doesn't enter a normal person's mind! They take the data at face value.)

Well, what normal people would believe is not that relevant. Normal people don't care about quantum mechanics, one way or the other.

It seems to me that if you are willing to accept that an electron can be in a superposition of states, then there is no principled reason to reject the possibility of a person, or a galaxy being in a superposition of states. So if you reject the latter possibility out of hand, then it means that you are already assuming that there is some unknown fact of the matter about questions like "where is the electron right now?"

So it appears to me that you are assuming from the start that there are some hidden variables underlying quantum mechanics, and the only question is whether they are local or nonlocal. I agree, if you have a hidden-variables model, it has to be nonlocal. But the people who take the "nonrealist" branch of the question "nonrealist or nonlocal" are rejecting the existence of hidden variables.
 
  • #183
ttn said:
I've never even heard of "relational blockworld". I looked at one of the papers and couldn't make any sense of it -- it's just page after page of philosophy, metaphor, what the theory *doesn't* say, etc. So... you'll have to explain to me how it explains the EPR correlations -- in particular the perfect correlations when a=b. Recall that the explanation should be local (and that the "no conspiracies" assumption should be respected... something tells me this could be an issue in a "blockworld" interpretation...).

I'm sure you haven't heard about a lot of things.

Yet, here it is! And it is fairly well developed for such a theory. It is what I refer to as a time-symmetric class of theory. A context is not limited to the past and/or present, and that is how it is able to account locally for Bell correlations. You don't have to agree with it, and in fact it makes predictions which may prove false. But it is a working theory.

I have no doubt that you will deny the existence of this, as this would be ipso facto evidence that your main contention is incorrect, namely that Bell+Aspect implies non-locality. As I have said many times, QM+Bell implies local hidden variable theories are non-starters.
 
  • #184
DrChinese said:
Yet, here it is! And it is fairly well developed for such a theory. It is what I refer to as a time-symmetric class of theory. A context is not limited to the past and/or present, and that is how it is able to account locally for Bell correlations. You don't have to agree with it, and in fact it makes predictions which may prove false. But it is a working theory.

I looked at the papers that you gave URLs for, and, as I said, I couldn't see a succinct, precise definition of what the model is. They mention how the quantum mechanical commutation relations have a similarity to the commutation relation between various symmetry operators in relativity (translations, rotations, boosts). But that doesn't seem to be a model, it's just an observation.
 
  • #185
stevendaryl said:
I looked at the papers that you gave URLs for, and, as I said, I couldn't see a succinct, precise definition of what the model is. They mention how the quantum mechanical commutation relations have a similarity to the commutation relation between various symmetry operators in relativity (translations, rotations, boosts). But that doesn't seem to be a model, it's just an observation.

I think this review article gives a better summary of the chief ideas:
http://chaos.swarthmore.edu/research/Dan.pdf
 
  • #186
I've read your paper and some other stuff from Bell and this thread now. Let's see:

ttn said:
I'm sorry, but... what the heck are you talking about? Are you really saying that ordinary QM doesn't allow you to calculate what the probabilities of various possible measurement outcomes are, in terms of the state [itex]\psi[/itex] of the system in question? That's the one thing that orthodox QM is unquestionably, uncontroversially good for!

Maybe the issue has to do with what I assume(d) was just a typo? Namely: it's not [itex]p(A,B,\lambda)[/itex] but rather [itex]p(A,B|\lambda)[/itex] -- or, as I indicated before, slightly more precisely, [itex]p_{\lambda}(A,B)[/itex].

I don't think that helps. If I understood your paper correctly, then [itex]p(A,B|\lambda)[/itex] is a conditional probability, and then [itex]\lambda[/itex] needs to be an element of a probability space. Otherwise, how would you apply the rules of probability theory if your objects aren't well-defined probability measures? I think it would be a good idea to make it clearer in your paper what the objects you are talking about mean and what spaces they belong to, in terms of short, precise mathematical statements instead of rather long, vague paragraphs of text. Maybe it can be made rigorous, but at the moment I don't see it. I just see that you derive the factorization property that is used in the derivation of Bell's inequality, but in order to derive Bell's inequality, you have to perform an integration over [itex]\lambda[/itex], which isn't possible if [itex]\lambda[/itex] is supposed to be the wave-function. So even if you don't want to integrate over it, it should be possible in principle. (At least from what I understood, the factorization property you derive is supposed to be the same that is used in the derivation of Bell's inequality, right?)


----


But here's another thing I noticed: if I understood it correctly, the beables of a theory are supposed to be things that are ascribed a physical reality. Then I think that in QM, the individual measurement outcomes aren't beables. Neither is the wave-function. For an advocate of the Copenhagen interpretation, the only beables of QM would be the probability distributions.

For example, the fact that after a measurement the position probability distribution is peaked over a sharp value doesn't mean that the particle has suddenly acquired the real physical property of having a definite position, even though it didn't have it a moment before. It merely means that we have come to know more about the probability distribution itself than we did before. The same thing applies to spin. An individual measurement tells us nothing about nature. Only the totality of many measurements allows us to make a statement about the world.

Also, the wave-function is not a beable. It's just a tool that is used to calculate the probability distributions, much like the 4-potential in electrodynamics is just a tool to calculate the field strength. A Copenhagenist wouldn't ascribe physical reality to the wave-function.


If the individual measurements and the wave-function are not beables, then I think you wouldn't come across these technical difficulties about measures in infinite-dimensional spaces. However, I have not studied what the theory would then assert about the locality of QM.
 
  • #187
stevendaryl said:
I looked at the papers that you gave URLs for, and, as I said, I couldn't see a succinct, precise definition of what the model is. They mention how the quantum mechanical commutation relations have a similarity to the commutation relation between various symmetry operators in relativity (translations, rotations, boosts). But that doesn't seem to be a model, it's just an observation.

It's not my theory, so I won't attempt to explain it or defend it. It is serious work, though. Unlike every other candidate QM theory/model/interpretation I am aware of, it goes out on a limb to make a specific prediction which is different from orthodox QM. So give 'em credit for bravery if nothing else. Time will tell if their cosmological model predictions pan out; there is a lot of active research in that particular area (dark matter).

Thanks for the link to the Peterson paper, you are correct that it offers a view which is more relevant for this thread.
 
  • #188
DrChinese said:
It's not my theory, so I won't attempt to explain it or defend it. It is serious work, though. Unlike every other candidate QM theory/model/interpretation I am aware of, it goes out on a limb to make a specific prediction which is different from orthodox QM. So give 'em credit for bravery if nothing else. Time will tell if their cosmological model predictions pan out; there is a lot of active research in that particular area (dark matter).

Thanks for the link to the Peterson paper, you are correct that it offers a view which is more relevant for this thread.

As I said, I don't really understand what the model really is, except for the vague idea that it assumes that relativity is at the heart of quantum indeterminacy in some way. In this respect, it reminds me of Cramer's "Transactional Interpretation", which also explains the seeming nondeterminism of quantum measurement results in terms of details that can involve future as well as past. Cramer also assumed that non-relativistic quantum mechanics contained a remnant of relativity.
 
  • #189
rubi said:
I don't think that helps. If I understood your paper correctly, then p(A,B|λ) is a conditional probability, and then λ needs to be an element of a probability space. Otherwise, how would you apply the rules of probability theory if your objects aren't well-defined probability measures?
Did you read the other paper I linked above by M. P. Seevinck? If I'm understanding your question (I might not be), I believe Seevinck alludes to that beginning in section IV: INTRODUCING MATHEMATICS: FORMALIZING SUFFICIENCY
Then how are we to mathematically implement Bell’s idea of "λ being sufficiently specified so as to declare redundant some of the conditional variables” in [itex]P_{a,b}(A,B|\lambda)[/itex], where the latter are in fact to range over both the labels a, b and the random variables A, B? This we will perform next...
Not throwing out the baby with the bathwater: Bell’s condition of local causality mathematically ‘sharp and clean’
http://mpseevinck.ruhosting.nl/seevinck/Bell_LC_final_Seevinck_corrected.pdf
 
  • #190
rubi said:
You have a green ball and a red ball and put them in two identical boxes. You send these boxes to two different people. These people know that you started with a green and a red ball. So the probability to get green/red is 1/2. When person 1 opens his box, he will get a definite result. Let's say he gets red. Then he knows immediately that person 2 has the green ball in his box, even if that box hasn't been opened yet. This is definitely a nonlocal correlation, but nobody would consider this as an action at a distance.

Let's use Alice and Bob as the two different people and calculate the probability using the conditional probability formula.

P(Alice-green,Bob-red)=(0.5)(1)=0.5 and P(Alice-red,Bob-green)=(0.5)(1)=0.5 and the probability that Alice and Bob get different colors is 0.5+0.5=1. Standard calculation.

Question. How does Bell mathematically calculate using his formula for the joint probability by factoring to obtain a product of individual probabilities and incorporating λ to explain this perfect anti-correlation of opposite colors? Define λ for this case if possible and explain how it allows one to get to this same probability, P=1.

Thanks
 
  • #191
DrChinese said:
I'm sure you haven't heard about a lot of things.

Yet, here it is! And it is fairly well developed for such a theory. It is what I refer to as a time-symmetric class of theory. A context is not limited to the past and/or present, and that is how it is able to account locally for Bell correlations. You don't have to agree with it, and in fact it makes predictions which may prove false. But it is a working theory.

I have no doubt that you will deny the existence of this, as this would be ipso facto evidence that your main contention is incorrect.

I have no doubt that "the theory exists" in the sense that people have written some papers about it, etc. But whether it is genuinely a "working theory" or not is a different question. To me it is telling that even you -- who raised it and apparently thinks it's a counterexample to my claims -- cannot or will not explain anything about how it works and in particular how it explains the perfect correlations in a local but non-realist way. Surely if you understood this you'd be bursting at the seams to prove me wrong...


As I have said many times, QM+Bell implies local hidden variable theories are non-starters.

That's true. But it also commits the fallacy of the superfluous adjective.
 
  • #192
rlduncan said:
Let's use Alice and Bob as the two different people and calculate the probability using the conditional probability formula.

P(Alice-green,Bob-red)=(0.5)(1)=0.5 and P(Alice-red,Bob-green)=(0.5)(1)=0.5 and the probability that Alice and Bob get different colors is 0.5+0.5=1. Standard calculation.

Question. How does Bell mathematically calculate using his formula for the joint probability by factoring to obtain a product of individual probabilities and incorporating λ to explain this perfect anti-correlation of opposite colors? Define λ for this case if possible and explain how it allows one to get to this same probability, P=1.

Thanks

In this case, a "hidden variable" [itex]\lambda[/itex] is just a specification of who gets the green ball. So there are two possible values of [itex]\lambda[/itex]: [itex]A_g[/itex] and [itex]B_g[/itex]. So

P(Alice-green, Bob-red [itex]\vert\ \lambda = A_g[/itex]) = 1
P(Alice-green, Bob-red [itex]\vert\ \lambda = B_g[/itex]) = 0
P(Alice-red, Bob-green [itex]\vert\ \lambda = A_g[/itex]) = 0
P(Alice-red, Bob-green[itex]\vert\ \lambda = B_g[/itex]) = 1
 
  • #193
rubi said:
I've read your paper and some other stuff from Bell and this thread now.

Thanks for taking the time to do that and for sharing your comments here.

I don't think that helps. If I understood your paper correctly, then [itex]p(A,B|\lambda)[/itex] is a conditional probability, and then [itex]\lambda[/itex] needs to be an element of a probability space. Otherwise, how would you apply the rules of probability theory if your objects aren't well-defined probability measures? I think it would be a good idea to make it clearer in your paper what the objects you are talking about mean and what spaces they belong to, in terms of short, precise mathematical statements instead of rather long, vague paragraphs of text. Maybe it can be made rigorous, but at the moment I don't see it. I just see that you derive the factorization property that is used in the derivation of Bell's inequality, but in order to derive Bell's inequality, you have to perform an integration over [itex]\lambda[/itex], which isn't possible if [itex]\lambda[/itex] is supposed to be the wave-function. So even if you don't want to integrate over it, it should be possible in principle.

I agree that some things in the "Bell's concept..." paper are not as mathematically precise as one could wish. This is in part because it's a pedagogical paper (for physics students and teachers) and partly because I think there is a point at which mathematical precision actually gets in the way of clear understanding. Perhaps you would like the scholarpedia "Bell's theorem" entry more -- it's a bit more technical (with two of the four authors being mathematicians) and covers the same ideas.

But as to the λ thing, it seems this is an example where the "vague paragraphs of text" are actually important to take in. The λ refers to the possible microstates of the particle pair that can be produced by a given sort of preparation procedure. For QM, the preparation procedure produces a pair in the spin singlet state, period. So ρ(λ) is a delta function! Do you think there's some problem integrating over that when the time comes?
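(To make that concrete: schematically, with the preparation fixing λ to the singlet state [itex]\psi_s[/itex], the usual average over λ just collapses,

[tex]\int d\lambda \, \rho(\lambda) \, P(A,B|\lambda) = \int d\lambda \, \delta(\lambda - \psi_s) \, P(A,B|\lambda) = P(A,B|\psi_s),[/tex]

so the integration step in the derivation goes through trivially.)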



(At least from what I understood, the factorization property you derive is supposed to be the same that is used in the derivation of Bell's inequality, right?)

Yes, absolutely.


But here's another thing I noticed: if I understood it correctly, the beables of a theory are supposed to be things that are ascribed a physical reality. Then I think that in QM, the individual measurement outcomes aren't beables. Neither is the wave-function. For an advocate of the Copenhagen interpretation, the only beables of QM would be the probability distributions.

Bohr's Copenhagen interpretation insisted that the directly perceivable macroscopic classical world existed. (He literally insisted repeatedly on this, in the context of saying that all empirical data was ultimately statements about such macroscopic things.) So strictly speaking, the Copenhagen interpretation involves dividing the world into two realms -- the classical/macro realm and the quantum/micro realm. The former just unproblematically exists, essentially by postulate, and so there are lots and lots of beables there. The beable-status of the micro-world for Copenhagen is indeed more controversial, as has already been discussed on this thread. But (as discussed) that doesn't matter. In short, the "measurement outcomes" most certainly *are* beables for Copenhagen. Bohr himself insisted on it specifically. And to deny such outcomes beable status -- for *any* theory -- is frankly borderline crazy. We are talking here about concrete things like which way a certain pointer in a certain lab pointed at a certain time. To deny "physically real" status to such things is ... well ... to approach solipsism.


For example, the fact that after a measurement the position probability distribution is peaked over a sharp value doesn't mean that the particle has suddenly acquired the real physical property of having a definite position, even though it didn't have it a moment before. It merely means that we have come to know more about the probability distribution itself than we did before. The same thing applies to spin. An individual measurement tells us nothing about nature. Only the totality of many measurements allows us to make a statement about the world.

What exactly you say is going on physically at the micro-level of course depends on which theory you're talking about and in particular what objects have beable status for the theory in question. But I fundamentally disagree about the last part of what you write. An individual measurement absolutely does tell us something about nature. Think about what an individual measurement means, concretely and physically. It means (at least) that some macroscopic object (think: pointer) moved a certain way. That is something we can directly perceive. It is pre-eminently physical, a fact about nature. (This was also one of the points that I guess you glossed over in the "long, vague paragraphs of text".)


Also, the wave-function is not a beable. It's just a tool that is used to calculate the probability distributions, much like the 4-potential in electrodynamics is just a tool to calculate the field strength. A Copenhagenist wouldn't ascribe physical reality to the wave-function.

Some do, some don't. But (as discussed above) this doesn't matter. You can give it either status you want, and "Copenhagen QM" is still nonlocal.
 
  • #194
stevendaryl said:
I think this review article gives a better summary of the chief ideas:
http://chaos.swarthmore.edu/research/Dan.pdf

I just spent an hour with it and still have essentially no idea what the point is. Certainly there is no genuine physical theory here. Sigh.
 
  • #195
ttn said:
I just spent an hour with it and still have essentially no idea what the point is. Certainly there is no genuine physical theory here. Sigh.

I wouldn't say "certainly", but it's not clear to me what the point is, either. I understand the derivation that quantum commutation relations can be interpreted as commutation relations of generators of Poincare symmetries, but I don't understand what's supposed to follow from that.
 
  • #196
ttn said:
I agree that some things in the "Bell's concept..." paper are not as mathematically precise as one could wish. This is in part because it's a pedagogical paper (for physics students and teachers) and partly because I think there is a point at which mathematical precision actually gets in the way of clear understanding. Perhaps you would like the scholarpedia "Bell's theorem" entry more -- it's a bit more technical (with two of the four authors being mathematicians) and covers the same ideas.

I agree that technical expositions often aren't very pedagogical. But I think that at some point, there should be some place where the axioms of the theory are written down in a precise way and the theorems are proven rigorously. I believe that much of the power of physics comes from the fact that the mathematical language that underpins physics is very pedantic. I think it's desirable to know the limitations of our theories. This can often lead to a deeper understanding and even new discoveries.

I will take a look at the scholarpedia article.

But as to the λ thing, it seems this is an example where the "vague paragraphs of text" are actually important to take in. The λ refers to the possible microstates of the particle pair that can be produced by a given sort of preparation procedure. For QM, the preparation procedure produces a pair in the spin singlet state, period. So ρ(λ) is a delta function! Do you think there's some problem integrating over that when the time comes?

If [itex]\rho(\lambda)[/itex] were a delta function, then it would be possible to formulate this using the Dirac measure. But in QM, you can always multiply a state by a complex number and it still describes the same physical situation. So the distribution would really have to be something like an indicator function. I'm not sure if this can be done. Maybe if you modify the theory a little and switch to the projective space. Then [itex]\lambda[/itex] isn't the wave function itself, but rather an equivalence class of wave functions.

However, I have to admit that I misunderstood this at first. I thought you were considering arbitrary distributions of the [itex]\lambda[/itex]'s, as one usually does in the derivation of Bell's theorem, and this looked like a hopeless task. If you restrict the distributions you consider to only special cases, it looks much more feasible.

Bohr's Copenhagen interpretation insisted that the directly perceivable macroscopic classical world existed. (He literally insisted repeatedly on this, in the context of saying that all empirical data was ultimately statements about such macroscopic things.) So strictly speaking, the Copenhagen interpretation involves dividing the world into two realms -- the classical/macro realm and the quantum/micro realm. The former just unproblematically exists, essentially by postulate, and so there are lots and lots of beables there.

Well, I think that the term "Copenhagen interpretation" is used more loosely today. I would probably consider myself a quantum instrumentalist. I don't assume that the classical world exists (that is, I'm agnostic, but I tend to neglect its existence when problems emerge). In fact, the measurement apparatus itself and everything else should also behave quantum mechanically. We just don't include it in our models most of the time (but we could do it, and it leads to very useful results; see decoherence). The quantum-classical split doesn't have an ontological status in my opinion. We use classical theories only to interpret the results of measurements. They aren't part of the quantum theory itself. That means that if we obtain a value from a position measurement of a quantum particle, for example, and we were then to use a classical theory for the further description of the system (instead of quantum mechanics), it would probably be best to assign the obtained value to the position variable of that classical theory in order to model the situation best. That doesn't mean that the quantum particle has suddenly acquired a position. It just means that we have mentally assigned a classical position to it in order to get a more intuitive understanding of the situation. We do this for the sole reason that we have more intuition for classical theories than for quantum theories. If this view of quantum theory makes me a non-Copenhagenist, so be it, but I think it is shared by most physicists, at least in a similar form. I don't insist on being an advocate of any particular interpretation.

So, long story short, classical theories aren't part of the quantum description. They are only an interpretational tool. Quantum mechanics doesn't assign definite values to observables. It assigns only probability distributions, from which we can calculate expectation values. Nothing more. Quantum mechanics doesn't tell us that the outcome of a position measurement will be "5". It just tells us that if we prepare the system identically and perform the same measurement 100 times, then if we calculate the mean value, we will get "5". In fact, QM doesn't even have a mechanism to predict individual outcomes of an experiment. It's not a theory like classical mechanics where you just don't know the exact positions and momenta and thus supplement it with a probability distribution. In fact, QM is solely probability. It's not a theory about an underlying reality. Individual outcomes aren't even observables of the pure quantum theory. So how can they be beables of the theory? You said yourself that a beable is
"whatever a certain candidate theory says one should take seriously, as corresponding to something that is physically real."
But quantum mechanics doesn't say that one should take the individual outcomes seriously, because it's not a theory of individual outcomes. It doesn't predict individual outcomes. They aren't an element of the theory (unless you artificially add them like in Bohmian mechanics, but I only talk about standard QM here). Individual outcomes are external things that aren't part of the theory. And if they aren't part of the theory, they can't be beables of the theory. In your own paper, you quoted Bell saying that beables are always to be viewed with respect to a particular theory (in our case QM). They aren't global things that apply to all theories. I think that this is even the most relevant difference between Bohmian mechanics and standard QM. Assigning beable status to individual outcomes would probably cast standard QM almost into being Bohmian mechanics.

In short, the "measurement outcomes" most certainly *are* beables for Copenhagen. Bohr himself insisted on it specifically. And to deny such outcomes beable status -- for *any* theory -- is frankly borderline crazy. We are talking here about concrete things like which way a certain pointer in a certain lab pointed at a certain time. To deny "physically real" status to such things is ... well ... to approach solipsism.

As I said, you could also include the measurement apparatus and thus the pointer into the quantum description, like the decoherence people do. Then decoherence tells you that the quantum state of the pointer will be sharply peaked over certain values after a very short time, and the peak is getting sharper and sharper every nanosecond, but in fact it will never reach an exact eigenstate, so technically it's always in a superposition unless you wait an infinitely long amount of time, even though the peak will become so sharp that it practically makes no sense to talk about superpositions anymore. In that sense, the pointer of the measurement apparatus -- if described quantum mechanically -- behaves no differently than a quantum particle. We can compute only probability distributions. It's just that macroscopic objects have sharply peaked quantum states, just like particles shortly after their measurements. Sharply peaked quantum states are the classical limit of quantum theory, so to speak, but they aren't classical. They are only classical enough in the sense that the corresponding classical theory would provide a good approximation to the quantum description. I really don't have a problem with that. In particular, I don't see why this would approach solipsism. I really have spent a considerable amount of time thinking about this kind of stuff. I haven't always thought about it this way.

You have to view it this way: a physical model is to nature like the word "banana" is to the yellow thing that you can buy in the supermarket. Ceci n'est pas une pipe (google it if you don't recognize it). Theories only describe our world. Some theories have just turned out to be useful. It's the theories that we classify by words like "local", "realistic" and others. It's not nature itself. If I say that standard QM doesn't have a beable corresponding to individual outcomes, this means that standard QM isn't concerned with individual outcomes. It doesn't make predictions about them. It only describes some aspects of the world, just like Newtonian gravity doesn't describe nuclear physics. Still, we can classify these theories using words like "local". You wouldn't say that Newtonian gravity can't be classified as "local" or "nonlocal" just because it has no means to describe nuclear physics. QM has no means to describe individual outcomes. Maybe that doesn't satisfy you, but it's enough for virtually every application I can think of, and it doesn't prevent it from being classified. Maybe there is a deeper theory that can talk about individual outcomes and has them as beables. But as yet, there is only Bohmian mechanics, and I don't think that it has any particular advantage over standard QM.

What exactly you say is going on physically at the micro-level of course depends on which theory you're talking about and in particular what objects have beable status for the theory in question. But I fundamentally disagree about the last part of what you write. An individual measurement absolutely does tell us something about nature. Think about what an individual measurement means, concretely and physically. It means (at least) that some macroscopic object (think: pointer) moved a certain way. That is something we can directly perceive. It is pre-eminently physical, a fact about nature. (This was also one of the points that I guess you glossed over in the "long, vague paragraphs of text".)

But an individual measurement tells us very little about nature. It could as well be a measurement error. With only one data point, we are completely unable to tell. The actual value of the measurement is almost useless. We need a larger dataset to gain real information. The standard deviation is equally important as the measurements themselves. Measurements are always imperfect, and physics somehow has to deal with this imperfection. There is no way out of this. There will never be a perfect measurement apparatus, and thus physics can't possibly live without statistics. This is a fundamental fact that can't be overcome. The only interesting values about a measured dataset are its statistical properties. If you measure a single data point, say the position of an atom, to be "5", then this just tells you that the position of the atom might have been "5", but it might as well not have been "5", because the apparatus just gave you a wrong number due to the intrinsic imperfection of physical measurements. Even worse, if you measure the value "5", then this value is almost certainly wrong, because a measurement error of "0" would be infinitely unlikely. We can never reliably reproduce individual outcomes, but we can reproduce their statistics. That's, by the way, also one of the main reasons why I'm willing to give up the beable status of individual measurements so easily.
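(A quick numerical illustration of this point, with made-up numbers for the true value and the instrument noise:)

[code]
import random
import statistics

true_value = 5.0   # hypothetical "true" position
noise = 0.3        # hypothetical instrument noise (standard deviation)

single = true_value + random.gauss(0, noise)
many = [true_value + random.gauss(0, noise) for _ in range(10_000)]

print(single)                  # one reading: near 5, but essentially never exactly 5
print(statistics.mean(many))   # ~5.0: the mean is reproducible
print(statistics.stdev(many))  # ~0.3: the spread is itself part of the information
[/code]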

Some do, some don't. But (as discussed above) this doesn't matter. You can give it either status you want, and "Copenhagen QM" is still nonlocal.

But maybe giving up the individual outcomes as beables makes it local. In fact, I could imagine that this would make Bell's locality definition equivalent to the definition that quantum field theorists use, which would be really cool in my opinion.
 
  • #197
stevendaryl said:
In this case, a "hidden variable" λ is just a specification of who gets the green ball. So there are two possible values of λ: Ag and Bg. So

P(Alice-green, Bob-red | λ=Ag) = 1
P(Alice-green, Bob-red | λ=Bg) = 0
P(Alice-red, Bob-green | λ=Ag) = 0
P(Alice-red, Bob-green| λ=Bg) = 1

I may be wrong, but this does not seem correct. There are only two outcomes for this case:
1) Alice-green, Bob-red
2) Alice-red, Bob-green.

So let A=Alice gets green and B=Bob gets red. From the multiplication rule for probabilities:

P(A,B) = P(A)*P(B|A), where P(A) = 0.5 and P(B|A) = 1 (if Alice got green, then it’s a 100% certainty Bob got red). So

P(A,B) = P(A)*P(B|A) = (0.5)(1) = 0.5 Eq(1)

which is the correct result for this special case. (The same may be written for A=Alice gets red and B=Bob gets green.)

Now according to Bell’s logic:

P(AB|λ) = P(A|λ) * P(B|λ) Eq(2)

If A and B are independent events, then P(B|A) = P(B) and Eq(1) reduces to
P(A,B) = P(A)*P(B) = (0.5)(0.5) = 0.25, which is incorrect, and Bell certainly understood this.

So let me rephrase. What is λ in Eq.(2)? What are the values for the terms P(A|λ) and P(B|λ)? I am assuming that P(AB|λ) = 0.5, the same answer as calculated in Eq.(1). I can’t reason logically that Eq(1) and Eq(2) are equivalent. Any clarification would be appreciated.

P.S. I pose these questions in part to understand the challenge by ttn to explain how to account for the perfect correlations when the analyzers point to the same angle.
 
  • #198
rlduncan said:
I may be wrong, but this does not seem correct. There are only two outcomes for this case:
1) Alice-green, Bob-red
2) Alice-red, Bob-green.

So let A=Alice gets green and B=Bob gets red. From the multiplication rule for probabilities:

P(A,B) = P(A)*P(B|A), where P(A) = 0.5 and P(B|A) = 1 (if Alice got green, then it’s a 100% certainty Bob got red). So

P(A,B) = P(A)*P(B|A) = (0.5)(1) = 0.5 Eq(1)

which is the correct result for this special case. (The same may be written for A=Alice gets red and B=Bob gets green.)

Now according to Bell’s logic:

P(AB|λ) = P(A|λ) * P(B|λ) Eq(2)

If A and B are independent events, then P(B|A) = P(B) and Eq(1) reduces to
P(A,B) = P(A)*P(B) = (0.5)(0.5) = 0.25, which is incorrect, and Bell certainly understood this.

So let me rephrase. What is λ in Eq.(2)? What are the values for the terms P(A|λ) and P(B|λ)? I am assuming that P(AB|λ) = 0.5, the same answer as calculated in Eq.(1). I can’t reason logically that Eq(1) and Eq(2) are equivalent. Any clarification would be appreciated.

P.S. I pose these questions in part to understand the challenge by ttn to explain how to account for the perfect correlations when the analyzers point to the same angle.

Actually what stevendaryl wrote was exactly right. Once you conditionalize on λ (a *complete* description of the state of the balls prior to measurement) all the probabilities are either 0 or 1 since there is no fundamental randomness here according to the theory in question (which is "common sense" or "classical physics" or some such).

The relation to the case of quantum particles (my challenge) is as follows. Suppose you say that you have a different theory, namely, that neither ball has any definite color while they're still in the boxes. They only acquire a definite color through some random process when looked at. So now λ does *not* include the real pre-observation colors of the balls because there is (according to this alternative theory) no such thing. But now, if the model is *local* -- i.e., if each ball (when observed) switches to red or green with 50/50 probability independently of anything happening far away -- then the theory will predict that 25% of the time the balls are both red, etc.
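(A minimal simulation of that local, indefinite-color model, just to make the 25% explicit; the code and numbers are only illustrative.)

[code]
import random

def local_indefinite_colors(n=100_000):
    # Each ball has no color until looked at; each observation independently
    # yields red or green with probability 1/2, with no influence on the far side.
    both_red = 0
    anticorrelated = 0
    for _ in range(n):
        alice = random.choice(["red", "green"])   # decided locally at Alice's side
        bob = random.choice(["red", "green"])     # decided locally at Bob's side
        both_red += (alice == bob == "red")
        anticorrelated += (alice != bob)
    return both_red / n, anticorrelated / n

print(local_indefinite_colors())   # ~ (0.25, 0.5): no perfect anti-correlation
[/code]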

So if there is some experimental data showing that, actually, the balls are always different colors, we face a choice. Either they had those colors all along and the colors are simply revealed to us when we look (hidden variables!), or the colors are indefinite until observation happens but there is some nonlocality in the way observation makes the colors pop into existence (e.g., Alice's observation randomly makes her ball become either red or green *and makes Bob's distant ball become the opposite color*).

This is exactly what you should be thinking about to appreciate why locality --> "realism", i.e., why the *only* way to explain the perfect correlations *locally* is to posit "hidden variables".
 
  • #199
rubi said:
...

If I understand you correctly, you are saying that QM cannot account for the fact that something like a pointer (or a table or a cat or a planet) exists with definite properties.

You're getting lost in questions like whether/how the "cut" can be pushed around so that macroscopic stuff is described on the wave function side, whether QM *predicts* exactly what the outcome of an experiment will be, whether we can become omniscient about the state of some micro-thingy from only a single measurement on it, etc.

But none of that is relevant to the main point here. We don't need QM or any other fancy theory to tell us that pointers point in particular directions, that there's a table in front of me, a cat on the bed, etc. Physical facts like that are just available to direct sense perception. We know them more directly, with more certainty, than we can possibly ever know anything about obscure microscopic things. Now here is the simple plain fact. To whatever extent you are right that QM cannot account for these sorts of facts (and personally I think you are not right at all, i.e., I think Copenhagen QM *does* account for them, and it was one of Bohr's few valid insights to recognize that it is *crucial* that it be able to account for them), it ceases to be an empirically adequate theory.
 
  • #200
rlduncan said:
I may be wrong, but this does not seem correct. There are only two outcomes for this case:
1) Alice-green, Bob-red
2) Alice-red, Bob-green.

In English, we explain this case as follows: (Let me change it slightly from previously)

There are two boxes, one is labeled "Alice", to be sent to Alice, and the other labeled "Bob" to be sent to Bob. We flip a coin, and if it is heads, we put the green ball in Alice's box, and the red ball in Bob's box. If it is tails, we put the red ball in Alice's box, and the green ball in Bob's box.

In this case, the hidden variable [itex]\lambda[/itex] has two possible values, [itex]H[/itex], for "heads" and [itex]T[/itex] for "tails". Then our probabilities are
(letting [itex]A[/itex] mean "Alice gets green" and [itex]B[/itex] mean "Bob gets red".)

  • [itex]P(H) = P(T) = \frac{1}{2}[/itex]
  • [itex]P(H T) = 0[/itex]
  • [itex]P(A \vert H) = P(B \vert H) = 1[/itex]
  • [itex]P(A \vert T) = P(B \vert T) = 0[/itex]

We can compute other probabilities as follows:
  • [itex]P(AB) = P(AB \vert H) \cdot P(H) + P(AB \vert T) \cdot P(T) = \frac{1}{2}[/itex]
  • [itex]P(A) = P(A \vert H) \cdot P(H) + P(A \vert T) \cdot P(T) = \frac{1}{2}[/itex]
  • [itex]P(B) = P(B \vert H) \cdot P(H) + P(B \vert T) \cdot P(T) = \frac{1}{2}[/itex]
  • [itex]P(A \vert B) = P(AB)/P(B) = 1[/itex]

Bell's criterion for the case of [itex]A[/itex] and [itex]B[/itex] being causally separated is not

[itex]P(A \vert B) = P(A)[/itex]

(which is false). Instead, it's

[itex]P(A \vert B \lambda) = P(A \vert \lambda)[/itex]
where [itex]\lambda[/itex] is a complete specification of the relevant information in the common past of [itex]A[/itex] and [itex]B[/itex], which is true.
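(If it helps, here is the same bookkeeping done by brute force in Python; the numbers are exactly the ones listed above.)

[code]
# Hidden variable: the coin flip, lambda in {"H", "T"}, each with probability 1/2.
# A = "Alice gets green", B = "Bob gets red".
p_lambda = {"H": 0.5, "T": 0.5}
p_A = {"H": 1.0, "T": 0.0}   # P(A | lambda)
p_B = {"H": 1.0, "T": 0.0}   # P(B | lambda)

# Factorized conditional probabilities, averaged over lambda:
P_AB = sum(p_A[l] * p_B[l] * p_lambda[l] for l in p_lambda)   # 0.5
P_A = sum(p_A[l] * p_lambda[l] for l in p_lambda)             # 0.5
P_B = sum(p_B[l] * p_lambda[l] for l in p_lambda)             # 0.5

print(P_AB, P_A, P_B)   # 0.5 0.5 0.5
print(P_AB / P_B)       # 1.0 = P(A|B), even though P(A|B, lambda) = P(A|lambda)
[/code]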
 
  • #201
ttn said:
But none of that is relevant to the main point here. We don't need QM or any other fancy theory to tell us that pointers point in particular directions, that there's a table in front of me, a cat on the bed, etc. Physical facts like that are just available to direct sense perception. We know them more directly, with more certainty, than we can possibly ever know anything about obscure microscopic things.

But we are discussing the "locality" of the theory QM. In order to do that, we need to identify its beables. As you point out in your paper, what the beables are depends on the particular theory in question. In standard QM, the individual outcomes can't be beables, because they don't exist. Just like you cannot assign beable status to nuclear properties in Newtonian gravity, you cannot assign beable status to individual outcomes in QM, because these theories don't account for these facts. QM doesn't even try to describe individual outcomes.

This is different in Bohmian mechanics, where the description is supplemented by position variables. If there is a position variable, then you can of course assign beable status to it if you want. But there is no such thing in standard QM. The beables of standard QM are the probability distributions and the mean values and so on. Maybe it helps you to put it this way: The prediction of QM for an individual outcome is the mean value. Of course, it's often wrong, but that just means that QM isn't good at predicting individual outcomes. Like Newtonian gravity isn't good at predicting the apsidal precession of Mercury.

Now here is the simple plain fact. To whatever extent you are right that QM cannot account for these sorts of facts (and personally I think you are not right at all, i.e., I think Copenhagen QM *does* account for them, and it was one of Bohr's few valid insights to recognize that it is *crucial* that it be able to account for them) it ceases to be an empirically adequate theory.

Well, as I said, I'm not particularly talking about Bohr's exact point of view. I think nobody really shares Bohr's viewpoint exactly. In principle, everything should be put on the quantum side, so there is no classical side. The classical picture is only a useful tool.

If you don't think that quantum mechanics is an empirically adequate theory, just because it is unable to make predictions about every element of the world you observe, then you must also classify Newtonian gravity as empirically inadequate, because it doesn't predict radioactivity. Maybe Bohmian mechanics is "empirically adequate" for you, but that just means that "empirically adequate" isn't a good criterion to single out useful theories, because even though individual outcomes might exist in BM, it's still not able to make more accurate predictions about these than standard QM is.

I'm perfectly fine with theories that don't describe every aspect of the world. In fact, if you want this, then you'd have to wait until someone finds the theory of everything (if it exists at all). Up to now, every theory we have has some weakness where it doesn't describe nature accurately. I don't think that's a problem. The theories are still useful, and we can classify them into categories like "local" and "realistic" and "empirically adequate" and so on if we like to.

If I understand you correctly, you are saying that QM cannot account for the fact that something like a pointer (or a table or a cat or a planet) exists with definite properties.

Yes, I'm saying this. QM can only predict its mean value, its standard deviation (which might be very small for macroscopic objects and this is how the classical limit emerges) and other statistical properties.
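(As a rough illustration of how small such spreads can be for macroscopic objects: for a harmonic-oscillator ground state the position spread is [itex]\sqrt{\hbar/(2m\omega)}[/itex]; plugging in made-up macroscopic numbers, say m = 1 kg and ω = 1 rad/s:)

[code]
import math

hbar = 1.054571817e-34   # J*s
m = 1.0                  # kg, a roughly macroscopic "pointer"
omega = 1.0              # rad/s, assumed oscillation frequency

sigma_x = math.sqrt(hbar / (2 * m * omega))
print(sigma_x)           # ~7e-18 m: utterly negligible on macroscopic scales
[/code]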

I don't see an ontological problem with this. The world might just not be like we might naively imagine it to be. In fact, I'm completely agnostic with respect to whether there is more to reality than what my senses tell me. Maybe the world has classical properties, maybe it doesn't. I haven't found a way to decide this question one way or the other, and standard QM doesn't help me to do so.
 
  • #202
rubi said:
But we are discussing the "locality" of the theory QM. In order to do that, we need to identify its beables. As you point out in your paper, what the beables are depends on the particular theory in question. In standard QM, the individual outcomes can't be beables, because they don't exist.

That last is what I (and Bohr) disagree(s) with. The individual outcomes absolutely do exist according to Copenhagen QM. They weren't *predictable* (with certainty) prior to the measurement, but once the measurement happens, one of the results *really occurs*. Yes, which one occurs is *random*; the theory does not predict this. But it does not deny that individual measurements have actual individual outcomes! That would be insane. Or more precisely, as I said before, that would mean that the theory is way wronger than anybody thought.

Concretely: Bob goes into his lab where there is a Stern-Gerlach apparatus. At noon (EST) he hits a button that makes a particle come out of a particle source, go through the magnets, and get detected by one or the other of two photodetectors. Each photodetector is wired up so that, instead of an audible "click", a little white flag with the words "I found the particle!" printed on it pops up into the air. Now on this particular occasion, at noon, it turns out that the flag on the lower detector pops up. That is -- if anything ever was -- a physical fact out there in nature. And if you are really saying that ordinary QM denies that any such thing happens, then ordinary QM is just simply *wrong*. It fails to describe the facts correctly.

Now for the record, as I've said, I think here it is you who is trivially wrong, not Copenhagen QM. I loathe Copenhagen QM. I think it's a terrible, indeed embarrassing, theory. But it's terrible/embarrassing because it doesn't really give any coherent *physical* account of the microscopic parts of the world; because it involves artificially dividing the world into these two realms, macro and micro; because the idea of distinct laws for these separate realms, and then special exceptions to those laws for the at-best-vaguely-defined situations called "measurements", is ridiculous for any theory with pretensions to fundamentality; etc. But despite all these (really serious) problems, I do concede that Copenhagen QM is at least an empirically adequate theory, in the sense that it says true things about what the directly observable aspects of the world are like and in particular makes the right statistical predictions for how things like the goofy little flags should work in the appropriate circumstances. It's like Ptolemy's theory of the solar system -- it makes the right predictions, but it just can't be the correct fundamental theory.


Just like you cannot assign beable status to nuclear properties in Newtonian gravity, you cannot assign beable status to individual outcomes in QM, because these theories don't account for these facts. QM doesn't even try to describe individual outcomes.

I think you are just taking "QM" to refer *exclusively* to the parts of the theory that pertain only to the so-called microscopic world. That is, you are not treating the usual textbook measurement axioms (and the associated ontological commitments!) as part of the theory. But (unless you are an Everettian, but let us here talk just about "ordinary QM") those parts of the theory really are absolutely crucial. Without them, the theory doesn't say anything at all about experimental outcomes (even the statistics thereof). That is, if you leave those parts out, you are truly left with a piece of math that is totally divorced from the physical world of ordinary experience, i.e., totally divorced from empirical data/evidence/science. Indeed, I think it would be accurate to say that this math is literally meaningless since there is nothing coherent left for it to refer to. Bohr, at least, understood quite well that, at the end of the day, the theory better say something about pointers, tables, cats, planets, flags, etc. I think Bohr was dead wrong insofar as he seems to have thought that this is *all* you could say anything about. To use one of Bell's apt words, Bohr thought the microscopic world was in some sense "unspeakable". That is dead wrong. It was a result of various empiricist/positivist strands of philosophy that were popular at the time, but that practically nobody outside of physics departments takes seriously anymore.


This is different in Bohmian mechanics, where the description is supplemented by position variables. If there is a position variable, then you can of course assign beable status to it if you want. But there is no such thing in standard QM.

Not in the micro-realm, that's true. But Copenhagen QM's full description of the world -- its full ontology -- is *not* simply the wave function for the micro-realm. It is the wave function for the micro-realm *and classical objects/properties for the macro-realm*.


The beables of standard QM are the probability distributions and the mean values and so on. Maybe it helps you to put it this way: The prediction of QM for an individual outcome is the mean value.

No, that is wrong, unless you are just speaking extremely loosely/imprecisely. The prediction of QM for an individual outcome is: the outcome will be one of the eigenvalues of the appropriate operator, with the probabilities of each possibility being given by the expectation value of the projector onto that eigenstate. Yes, you can of course calculate a probability-weighted average of these possible outcome values, the expectation/mean value. But QM absolutely does *not* predict that that mean value will be the outcome. If it did predict that, again, it would be simply, empirically, false. For example, here comes a particle (prepared in the "spin up along x" state) to a SG device that will measure its spin along the z direction. The expectation value is zero. But the actual outcome is never zero, it is always either +hbar/2 or -hbar/2. I know you understand all this, but what you said above is really, badly wrong, at least as written.


Of course, it's often wrong, but that just means that QM isn't good at predicting individual outcomes. Like Newtonian gravity isn't good at predicting the apsidal precession of Mercury.

No, that is not at all the right way to think about it. It's not that QM is always (or almost always) wrong. It's rather that it only makes probabilistic predictions. It says (in the example just above) that there's a 50% chance that the outcome will be +hbar/2 and a 50% chance that the outcome will be -hbar/2. When you find out that, in fact, for a given particle, the outcome was -hbar/2, you do not say "QM was wrong". You say "Cool, that's perfectly consistent with what QM said." If you want to know whether QM's predictions are right, then yes, you need to run the experiment a million times and look at the statistics to make sure it really is +hbar/2 about half the time, etc. But it is not at all that the prediction for the individual event was *wrong*. The prediction for the individual event was probabilistic, which is absolutely consistent with what in fact ends up happening in the individual event.



Well, as I said, I'm not particularly talking about Bohr's exact point of view. I think nobody really shares Bohr's viewpoint exactly. In principle, everything should be put on the quantum side, so there is no classical side. The classical picture is only a useful tool.

But if you do that (and again here leaving aside the possible Everettian "out") you get nonsense. That is, you get something that is just as wrong -- just as inconsistent with what we see with our naked eyes actually happening in the lab -- as the denial that there is any physically real definite macro-state.



I'm perfectly fine with theories that don't describe every aspect of the world.

Me too.



Yes, I'm saying this. QM can only predict its mean value, its standard deviation (which might be very small for macroscopic objects and this is how the classical limit emerges) and other statistical properties.

This is simply not true. QM can *also* predict the *possible* definite outcome values. In general, there are several of these, i.e., many different possible outcomes with nonzero probabilities. Despite the flaws in the theory, it is right about these.


I don't see an ontological problem with this. The world might just not be like we might naively imagine it to be.

Are you really equating *direct sense perception* -- surely the foundation of all properly empirical science -- with "naive imagination"?



In fact, I'm completely agnostic with respect to whether there is more to reality than what my senses tell me.

Well, I think it's pretty naive to think that our senses tell us everything that is true of the world. (For example, that would mean the world disappears every time you blink.) But this isn't even what's at issue here. The question is just whether what your senses tell you is at least part of what's real. When that one flag pops up, and you see this, it really popped up -- and any theory that says otherwise is ipso facto rendered false.
 
  • #203
stevendaryl said:
In English, we explain this case as follows: (Let me change it slightly from previously)

There are two boxes, one is labeled "Alice", to be sent to Alice, and the other labeled "Bob" to be sent to Bob. We flip a coin, and if it is heads, we put the green ball in Alice's box, and the red ball in Bob's box. If it is tails, we put the red ball in Alice's box, and the green ball in Bob's box.

In this case, the hidden variable [itex]\lambda[/itex] has two possible values, [itex]H[/itex], for "heads" and [itex]T[/itex] for "tails". Then our probabilities are
(letting [itex]A[/itex] mean "Alice gets green" and [itex]B[/itex] mean "Bob gets red".)

  • [itex]P(H) = P(T) = \frac{1}{2}[/itex]
  • [itex]P(H T) = 0[/itex]
  • [itex]P(A \vert H) = P(B \vert H) = 1[/itex]
  • [itex]P(A \vert T) = P(B \vert T) = 0[/itex]

We can compute other probabilities as follows:
  • [itex]P(AB) = P(AB \vert H) \cdot P(H) + P(AB \vert T) \cdot P(T) = \frac{1}{2}[/itex]
  • [itex]P(A) = P(A \vert H) \cdot P(H) + P(A \vert T) \cdot P(T) = \frac{1}{2}[/itex]
  • [itex]P(B) = P(B \vert H) \cdot P(H) + P(B \vert T) \cdot P(T) = \frac{1}{2}[/itex]
  • [itex]P(A \vert B) = P(AB)/P(B) = 1[/itex]

Bell's criterion for the case of [itex]A[/itex] and [itex]B[/itex] being causally separated is not

[itex]P(A \vert B) = P(A)[/itex]

(which is false). Instead, it's

[itex]P(A \vert B \lambda) = P(A \vert \lambda)[/itex]
where [itex]\lambda[/itex] is a complete specification of the relevant information in the common past of [itex]A[/itex] and [itex]B[/itex], which is true.
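
For anyone who wants to check these numbers directly, here is a short Python simulation of the coin-flip model above (the names and the trial count are just illustrative). It verifies numerically that [itex]P(A \vert B)[/itex] is about 1 while [itex]P(A)[/itex] is about 1/2, and that conditioning on [itex]\lambda[/itex] screens off the correlation, i.e. [itex]P(A \vert B \lambda) = P(A \vert \lambda)[/itex]:

[code]
import random

def run_trial():
    # lambda is the coin flip: "H" puts the green ball in Alice's box, "T" in Bob's box
    lam = random.choice(["H", "T"])
    A = (lam == "H")   # event A: "Alice gets green"
    B = (lam == "H")   # event B: "Bob gets red"
    return A, B, lam

N = 100000
n_A = n_B = n_AB = n_H = n_AH = 0
for _ in range(N):
    A, B, lam = run_trial()
    n_A += A
    n_B += B
    n_AB += (A and B)
    if lam == "H":
        n_H += 1
        n_AH += A

print("P(A)   ~", n_A / N)      # ~ 0.5
print("P(B)   ~", n_B / N)      # ~ 0.5
print("P(AB)  ~", n_AB / N)     # ~ 0.5
print("P(A|B) ~", n_AB / n_B)   # ~ 1.0, so P(A|B) != P(A) ...
print("P(A|H) ~", n_AH / n_H)   # ... but P(A|H) = P(A|B,H) = 1: lambda screens off B
[/code]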

stevendaryl (and ttn) thank you for the replies. I will need some time to digest them.

rlduncan
 
  • #204
rlduncan said:
stevendaryl (and ttn) thank you for the replies. I will need some time to digest them.

Cool. =)

Here is something else closely related to this for you and others to consider. Assuming we adopt Bell's definition of locality, and restricting our attention to the case where Alice and Bob measure along parallel axes (which is completely equivalent to the red/green balls), we have that

P(A,B|λ) = P_alice(A|λ) P_bob(B|λ).

Here λ is a complete specification of the state of the particles/balls according to some candidate theory (QM, Bohm, whatever). A=±1 and B=±1 are the outcomes on each side (+1 means spin up, for spinny particles, or +1 means "red" for the balls... note this is different from how stevendaryl used the same symbols.)

Now consider one of the particular joint outcomes that never happens, say A=+1 and B=+1. Let's allow that, for the same preparation procedure (producing what QM calls the "singlet state", or some random coin flippy thing that decides which ball goes where, etc.), there are perhaps many different λs that are sometimes produced. Still, if we run the experiment a bajillion times, we *never* see the joint outcome A=+1, B=+1. So it must be that the probability P(+1,+1|λ) = 0 for *all* possible λs that this preparation procedure sometimes produces.

Plugging into the factorization condition above (that remember follows from Bell's definition of locality) we then have that, for all λ,

0 = P(+1,+1 | λ) = P_alice(+1|λ) P_bob(+1|λ).

OK, so these two probabilities multiply to zero. So at least one of them has to equal zero.

You can now easily see that the general class of λs has to break into two sub-classes:

{λa}: those λs for which P_alice(+1|λ)=0, i.e., those λs for which Alice's measurement is guaranteed *not* to yield A=+1, i.e., those λs for which Alice's measurement is guaranteed to instead yield A=-1. Now if, for any λ in {λa}, P_bob(+1|λ) were anything other than 100%, we would occasionally see the joint outcome A=-1, B=-1. Since in fact we never see this, it must be that, for all λ in {λa}, P_alice(-1|λ)=100% and P_bob(+1|λ) = 100%. That is, λ being in {λa} means that both particles carry pre-measurement non-contextual "hidden variables" that pre-determine the outcomes A=-1 and B=+1.

{λb}: those λs for which P_bob(+1|λ)=0, i.e., those λs for which Bob's measurement is guaranteed *not* to yield B=+1, i.e., those λs for which Bob's measurement is guaranteed to instead yield B=-1. Now if, for any λ in {λb}, P_alice(+1|λ) were anything other than 100%, we would occasionally see the joint outcome A=-1, B=-1. Since in fact we never see this, it must be that, for all λ in {λb}, P_alice(+1|λ)=100% and P_bob(-1|λ) = 100%. That is, λ being in {λb} means that both particles carry pre-measurement non-contextual "hidden variables" that pre-determine the outcomes A=+1 and B=-1.

Please appreciate that this is merely a formalization of the EPR argument *from locality to* these deterministic hidden variables. In terms of the red and green balls, it shows that the *only way* to locally explain why Alice's and Bob's balls are always *different colors* is to say that there was some definite, though perhaps unknown, fact of the matter about the colors (perhaps varying randomly from one trial to the next) even prior to the observations. This of course is just the ordinary/obvious/everyday way of explaining what is going on with the balls. If somebody wanted to be weird, they could deny that the balls have definite colors until looked at later, but this would require nonlocality -- in particular, one person looking at his ball would fix not only the color of that ball but would *also* have to fix the color of the distant ball. That is what the simple little theorem above proves. "Realism" (meaning here pre-determined values, "hidden variables") is *required* if you want to try to explain the perfect correlations *locally*.
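
To make that argument concrete, here is a small Python sketch (a toy model of my own, not anything derived from QM): a local, factorizable model in which [itex]\lambda[/itex] pre-determines both outcomes reproduces the perfect anti-correlation exactly, while a local model that leaves either outcome genuinely random given [itex]\lambda[/itex] unavoidably produces some same-outcome pairs:

[code]
import random

def deterministic_local_model(n_trials):
    # lambda pre-determines both outcomes: class "a" gives (A,B)=(-1,+1), class "b" gives (+1,-1)
    same = 0
    for _ in range(n_trials):
        lam = random.choice(["a", "b"])
        A, B = (-1, +1) if lam == "a" else (+1, -1)
        same += (A == B)
    return same

def stochastic_local_model(n_trials, p=0.9):
    # each wing separately follows its lambda only with probability p,
    # with no communication between the wings: P(A,B|lambda) = P(A|lambda) P(B|lambda)
    same = 0
    for _ in range(n_trials):
        lam = random.choice(["a", "b"])
        if lam == "a":
            A = -1 if random.random() < p else +1
            B = +1 if random.random() < p else -1
        else:
            A = +1 if random.random() < p else -1
            B = -1 if random.random() < p else +1
        same += (A == B)
    return same

N = 100000
print("same-outcome pairs, deterministic model:", deterministic_local_model(N))  # always 0
print("same-outcome pairs, stochastic model:   ", stochastic_local_model(N))     # nonzero
[/code]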
 
  • #205
Thanks ttn. I agree, and it was well presented, which helps a lot.
 
  • #206
ttn said:
That last is what I (and Bohr) disagree(s) with. The individual outcomes absolutely do exist according to Copenhagen QM. They weren't *predictable* (with certainty) prior to the measurement, but once the measurement happens, one of the results *really occurs*. Yes, which one occurs is *random*; the theory does not predict this. But it does not deny that individual measurements have actual individual outcomes! That would be insane. Or more precisely, as I said before, that would mean that the theory is way wronger than anybody thought.

Every theory is wrong in the strict sense. Wrong or right are not useful properties to classify models. A model is always different from reality, because it is only a model.

Do you really believe that the world is split into a classical and a quantum world? I don't think there is any physicist today who really believes this. There is only one world without a split. Decoherence has shown us how the classical world emerges from quantum mechanics. If you believe in the quantum-classical split, then you neglect 40 years of research. The world is either quantum or classical (or something completely different). And I believe it's quantum rather than classical. The quantum-classical split is obsolete. (I'm not saying that there aren't any open problems.)

Yes, quantum mechanics does restrict the set of possible measurement values. But that doesn't mean that it predicts that there should be reality ascribed to these outcomes. To put it in as few words as possible: QM does not assert that after a measurement a particle acquired the real property of having a position. Instead it asserts that after a measurement, the probability distribution associated with the position observable is very sharply peaked about a certain value. The particle never has the property of having a position. Not before the measurement, not after it, and not even in the instant when it is measured. It has only an associated probability distribution that might in certain situations be sharply peaked. The classical picture we perceive is an emergent phenomenon that QM predicts if you include the measurement apparatus and the environment in the quantum description and then coarse-grain it by computing partial traces and so on. Are the measured values "real" or is this reality an emergent phenomenon? I think the latter is the better way to think about it.

If this is peculiar to you and you want to discard QM because of this, then that is your personal choice. If you think QM is too weird to be true and one should not pursue it further but rather look for different theories, then that is also your personal choice. But there's no point in arguing one way or the other. At the moment, it's a matter of belief, much like it is a matter of belief whether there is a god or not. The question of whether there are real counterparts to the peaks of the QM probability distributions cannot be answered. What does "real" mean anyway, if I can only gain knowledge about reality through my senses? Is there any reality beyond what my senses tell me? I think these kinds of questions are irrelevant to physics. I have chosen to stay agnostic with respect to them and I don't feel uncomfortable about it.

Concretely: Bob goes into his lab where there is a Stern-Gerlach apparatus. At noon (EST) he hits a button that makes a particle come out of a particle source, go through the magnets, and get detected by one or the other of two photodetectors. Each photodetector is wired up so that, instead of an audible "click", a little white flag with the words "I found the particle!" printed on it pops up into the air. Now on this particular occasion, at noon, it turns out that the flag on the lower detector pops up. That is -- if anything ever was -- a physical fact out there in nature. And if you are really saying that ordinary QM denies that any such thing happens, then ordinary QM is just simply *wrong*. It fails to describe the facts correctly.

I'm not saying that QM denies that such a thing happens. I'm saying that if you describe that system completely quantum mechanically (including the apparatus), then QM will predict that the probability distribution of a little white flag appearing will be sharply peaked. Of course, it's completely impractical to include the apparatus in the QM description for such simple experiments. It's only of academic interest. But in principle it's the right way to look at it and it can be done. People study such models. If you aren't familiar with this, you might want to get the book by Maximilian Schlosshauer for starters.

Now for the record, as I've said, I think here it is you who is trivially wrong, not Copenhagen QM. I loathe Copenhagen QM. I think it's a terrible, indeed embarrassing, theory. But it's terrible/embarrassing because it doesn't really give any coherent *physical* account of the microscopic parts of the world; because it involves artificially dividing the world into these two realms, macro and micro; because the idea of distinct laws for these separate realms, and then special exceptions to those laws for the at-best-vaguely-defined situations called "measurements", is ridiculous for any theory with pretensions to fundamentality; etc. But despite all these (really serious) problems, I do concede that Copenhagen QM is at least an empirically adequate theory, in the sense that it says true things about what the directly observable aspects of the world are like and in particular makes the right statistical predictions for how things like the goofy little flags should work in the appropriate circumstances. It's like Ptolemy's theory of the solar system -- it makes the right predictions, but it just can't be the correct fundamental theory.

As I said, the quantum-classical split is obsolete. It's obviously wrong. There is only the quantum part of the theory left. Everything else can in principle be described using decoherence. (At least we believe this to be the case; it's still being actively researched.) If that makes modern quantum researchers non-Copenhagenists, then it's okay. Let's call them quantum instrumentalists. I think that's a fair description.

I think you are just taking "QM" to refer *exclusively* to the parts of the theory that pertain only to the so-called microscopic world. That is, you are not treating the usual textbook measurement axioms (and the associated ontological commitments!) as part of the theory. But (unless you are an Everettian, but let us here talk just about "ordinary QM") those parts of the theory really are absolutely crucial. Without them, the theory doesn't say anything at all about experimental outcomes (even the statistics thereof).

Now we are progressing. QM doesn't say anything about the outcomes. Yes, that's true. But the quantum part of the theory does say everything about the statistics. Just compute [itex]|\psi(x)|^2[/itex], [itex]\int x|\psi(x)|^2\mathrm d x[/itex] and so on. You don't need any classical supplement of the theory in order to compute these things.
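
For instance, here is a small numerical sketch of exactly this kind of computation (Python/NumPy, with an arbitrary Gaussian wave packet chosen purely as an example): the mean and the standard deviation of position come straight out of [itex]|\psi(x)|^2[/itex], with no reference to any individual outcome:

[code]
import numpy as np

# example wave function: a normalized Gaussian packet (x0 and sigma are arbitrary choices)
x0, sigma = 1.0, 0.5
x = np.linspace(-10, 10, 20001)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2))

prob = np.abs(psi) ** 2                        # |psi(x)|^2, the position probability density
norm = np.trapz(prob, x)                       # should be ~1
mean_x = np.trapz(x * prob, x)                 # integral of x |psi(x)|^2 dx
std_x = np.sqrt(np.trapz((x - mean_x) ** 2 * prob, x))

print("norm      ~", norm)    # ~ 1.0
print("<x>       ~", mean_x)  # ~ x0 = 1.0
print("std dev   ~", std_x)   # ~ sigma = 0.5
[/code]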

That is, if you leave those parts out, you are truly left with a piece of math that is totally divorced from the physical world of ordinary experience, i.e., totally divorced from empirical data/evidence/science.

Math is always divorced from the physical world. What's wrong is the claim that it is also divorced from empirical data. The math of quantum mechanics accurately predicts the statistical properties of the empirical data. Just compute the things I wrote above. I hope you don't deny this; that would be almost delusional.

Indeed, I think it would be accurate to say that this math is literally meaningless since there is nothing coherent left for it to refer to. Bohr, at least, understood quite well that, at the end of the day, the theory better say something about pointers, tables, cats, planets, flags, etc. I think Bohr was dead wrong insofar as he seems to have thought that this is *all* you could say anything about. To use one of Bell's apt words, Bohr thought the microscopic world was in some sense "unspeakable". That is dead wrong. It was a result of various empiricist/positivist strands of philosophy that were popular at the time, but that practically nobody outside of physics departments takes seriously anymore.

Please let's get rid of the ridiculous quantum-classical split. It's so obvious that it's wrong, especially after the success of decoherence. I'm not a Copenhagenist in the sense you describe it. I think no serious researcher is nowadays.

Not in the micro-realm, that's true. But Copenhagen QM's full description of the world -- its full ontology -- is *not* simply the wave function for the micro-realm. It is the wave function for the micro-realm *and classical objects/properties for the macro-realm*.

You are fighting a straw man here. I don't want to describe the macroworld classically. If I want to talk about outcomes quantum mechanically, I have to include the measurement apparatus in the quantum description. Otherwise, I'm not using pure quantum mechanics; I'd be using a strange mixture of classical and quantum mechanics. I don't care about this mixture theory. If you want to use QM in order to talk about some aspects of nature, then you have to include these aspects of nature in your QM model. If these aspects of nature are pointers and cats, then you have to include pointers and cats in your QM model. There is no reason why the measurement apparatus shouldn't itself behave quantum mechanically. After all, it's made of the same atoms that your quantum system is made of.

No, that is wrong, unless you are just speaking extremely loosely/imprecisely. The prediction of QM for an individual outcome is: the outcome will be one of the eigenvalues of the appropriate operator, with the probabilities of each possibility being given by the expectation value of the projector onto that eigenstate.

There is no prediction of QM for individual outcomes. Yes, QM restricts the space of measurement values, but that's not a prediction about what the outcome of an experiment will be. Here's a classical analogue: classical mechanics says that the values of the position variable can be between [itex]-\infty[/itex] and [itex]\infty[/itex]. But the prediction for the position is a function [itex]x(t) = x_0 \sin(\omega t)[/itex]. An analogue to this function [itex]x(t)[/itex] is missing in QM. Yes, the range of measurable values is restricted, but QM is unable to predict a particular value. That's because the existence of an underlying position value is not assumed. Of course, QM needs to specify the range of values that the probability distributions range over. Otherwise, it would be nonsense to talk about probabilities in the first place. The range is just part of a complete specification of the predicted probability distribution.

Yes, you can of course calculate a probability-weighted average of these possible outcome values, the expectation/mean value. But QM absolutely does *not* predict that that mean value will be the outcome. If it did predict that, again, it would be simply, empirically, false.

Yes, you are right. QM doesn't predict that the measured value will be the mean value. That's because QM doesn't predict at all, what the measured value will be. But if you insist on squeezing a prediction about an individual outcome out of QM, then the best thing you can possibly do is take the mean value. I repeat: You should only do this if you insist and it will give you wrong predictions, albeit they might sometimes be close to the measured values if the standard deviation is small enough.

For example, here comes a particle (prepared in the "spin up along x" state) to a SG device that will measure its spin along the z direction. The expectation value is zero. But the actual outcome is never zero, it is always either +hbar/2 or -hbar/2. I know you understand all this, but what you said above is really, badly wrong, at least as written.

Yes, I agree it is badly written. I was just trying to say that if you really, really want a prediction about an individual outcome from pure QM, then the best thing QM offers is the mean value, although that value can give you badly wrong predictions sometimes. After all, it's not its purpose to predict individual outcomes. But there is no other way to squeeze information about individual outcomes out of QM. Just knowing their range is not a prediction, unless perhaps the range contains only one point. In that case, I would agree that it would be a prediction of an individual outcome. But what operator corresponding to an observable in QM has only one point in its spectrum?
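
For concreteness, here is a small numerical sketch of the Stern-Gerlach example quoted above (Python/NumPy, purely illustrative): the expectation value of [itex]S_z[/itex] in the "spin up along x" state is zero, while every individual outcome drawn from the Born-rule probabilities is [itex]\pm\hbar/2[/itex] and never zero:

[code]
import numpy as np

hbar = 1.0545718e-34                              # J*s
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]])     # spin-z operator in the z basis
up_x = np.array([1.0, 1.0]) / np.sqrt(2)          # "spin up along x" state

expectation = up_x @ Sz @ up_x                    # <Sz> = 0
p_up = abs(np.array([1.0, 0.0]) @ up_x) ** 2      # Born rule: P(+hbar/2) = 1/2

rng = np.random.default_rng(0)
outcomes = rng.choice([+hbar / 2, -hbar / 2], size=10, p=[p_up, 1 - p_up])

print("expectation value <Sz>: ", expectation)    # 0.0
print("ten individual outcomes:", outcomes)       # each one is +hbar/2 or -hbar/2, never 0
[/code]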

No, that is not at all the right way to think about it. It's not that QM is always (or almost always) wrong. It's rather that it only makes probabilistic predictions.

Exactly. It doesn't even try to make predictions about individual outcomes. That's why it is not wrong about them. It can't be. "Si tacuisses, philosophus mansisses" ("if you had kept silent, you would have remained a philosopher") holds also for physical models, it seems. :)

It says (in the example just above) that there's a 50% chance that the outcome will be +hbar/2 and a 50% chance that the outcome will be -hbar/2. When you find out that, in fact, for a given particle, the outcome was -hbar/2, you do not say "QM was wrong". You say "Cool, that's perfectly consistent with what QM said." If you want to know whether QM's predictions are right, then yes, you need to run the experiment a million times and look at the statistics to make sure it really is +hbar/2 about half the time, etc. But it is not at all that the prediction for the individual event was *wrong*. The prediction for the individual event was probabilistic, which is absolutely consistent with what in fact ends up happening in the individual event.

Yes, it's consistent with QM, because the value lies in the spectrum of the associated observable. But it wasn't a prediction. The value hbar/2 for measurement number 1327, carried out at 12:07 pm, wasn't predicted by QM.

But if you do that (and again here leaving aside the possible Everettian "out") you get nonsense. That is, you get something that is just as wrong -- just as inconsistent with what we see with our naked eyes actually happening in the lab -- as the denial that there is any physically real definite macro-state.

Here's where we don't agree. I think that what you call a "physically real definite macro-state" should in principle also have a quantum mechanical description. We just normally don't include it in the quantum model. And if we did describe it quantum mechanically, we would get probability distributions for the pointers of the apparatuses, instead of definite values. They might be sharply peaked, but that's not the point.

This is simply not true. QM can *also* predict the *possible* definite outcome values. In general, there are several of these, i.e., many different possible outcomes with nonzero probabilities. Despite the flaws in the theory, it is right about these.

But the possible outcome values, that is, the spectrum of the observables, aren't predictions of the outcomes themselves, in the sense that it's not possible to know what value the next outcome will take (unless you have a one-point spectrum). The set of possible outcome values is a prediction of QM, but still QM doesn't predict that the quantum objects ever acquire definite values.

Are you really equating *direct sense perception* -- surely the foundation of all properly empirical science -- with "naive imagination"?

I think we can't trust our senses. Our brain forges a picture of the world for us that doesn't necessarily have anything to do with the "real world", whatever that is. Look at the keyboard in front of you. Does it really have a color, or is the color you see only an emergent phenomenon? The same goes for things like position. Is the position of a particle really a "real thing" or an emergent phenomenon our brain tricks us into believing? I think these questions are meaningless and are more religion or philosophy than actual physics. I'm just saying that it's naive to believe that the "real world" is exactly what we imagine it to be, especially after all the really strange things that physics has discovered. In fact, all the information we acquire first runs through billions of neurons before it is sent to the visual cortex. Who knows to what extent the information is distorted? After all, we know very little about the inner workings of the brain and our mind. I'm fine with the idea that my perception of the world doesn't necessarily need to have exact counterparts in "reality" (again, whatever that is). What if string theory turns out to be "right" and our world is really 11-dimensional? Could you accept that? For me that'd be just as weird as QM and GR together, but I would manage to get accustomed to it.

When that one flag pops up, and you see this, it really popped up -- and any theory that says otherwise is ipso facto rendered false.

A flag popping up is consistent with QM. It's just not a prediction.
 
  • #207
rubi said:
Decoherence has shown us how the classical world emerges from quantum mechanics.

Decoherence is not enough to explain the emergence of classicality; this has already been discussed here.
 
  • #208
"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity. (Einstein, Podolsky, Rosen 1935, p. 777)"

That's all just an element of reality, not reality itself.
 
  • #209
audioloop said:
Decoherence is not enough to explain the emergence of classicality; this has already been discussed here.

I know that it doesn't solve all the problems. But I think splitting the world into "quantum" and "classical" is wrong (though useful for practical purposes). If we want to use quantum mechanics to describe the world, then we have to live with the fact that it doesn't predict individual outcomes and we can't squeeze a completely classical picture out of it. It's just maybe a limitation of the theory.

I really don't want to end up in a philosophical debate over this. I only wanted to make my point clear that individual outcomes aren't beables of standard QM. If the property "position" doesn't exist, then it can't be a beable. You need to supplement the theory with additional elements (like a quantum-classical split) in order to even be able to talk about something like "definite position" in QM at all. Without such a supplement, usual QM stands on its own and predicts only statistics.
 
  • #210
Rubi... It's clear we are not on the same page here. I've basically already said, as clearly as I know how, what I think is wrong with your position. I will try here to briefly clarify some points of apparent miscommunication, but there is no point continuing to argue about the central point at issue here. Our views have both been made clear.

Do you really believe that the world is split into a classical and a quantum world? I don't think there is any physicist today who really believes this. There is only one world without a split.

I think we agree here. No, I don't believe the world is split. But Bohr absolutely did! And this notion of a "shifty split" (as Bell called it) is built into the structure of ordinary QM. This is closely related to what is usually called the measurement problem. It sounds like we agree that that's a problem, and that therefore a new theory (which doesn't suffer the problem, and which doesn't divide the world in two) is needed.


Decoherence has shown us how the classical world emerges from quantum mechanics.

No it has not. That's of course a complicated and controversial statement, so take this simply as a report of my view (rather than an attempt to argue for it).


Yes, quantum mechanics does restrict the set of possible measurement values. But that doesn't mean that it predicts that there should be reality ascribed to these outcomes.

My point has been that even ordinary QM ascribes reality to the outcomes *after they occur*. That is all that it does, true -- it denies the existence of any pre-measurement values ("hidden variables") in general. But that is all that is necessary here. Remember the context in which this came up. Bell's definition of locality is in terms of "beables". There was a question about whether the *outcomes* (often called "A" and "B") count as beables according to QM. I say: they do. They are beables that evolve *stochastically* -- you cannot predict in advance what "A" or "B" might be. But once they be, they be.



To put it in as few words as possible: QM does not assert that after a measurement a particle acquired the real property of having a position.


That's debatable, but irrelevant. Take "A" and "B" to refer (in the usual spin-based EPR-Bell scenario) not to "what the spins of the particles really are after the measurement" (I agree that we probably shouldn't interpret ordinary QM as claiming that any such thing exists, even after the measurement) but rather "where the flash occurred behind the SG magnets -- up here, or down there" (or if you prefer, "which of the two goofy flags popped up"). Those latter sorts of directly-perceivable, uncontroversially-real physical facts -- those latter sorts of *beables* -- are what phrases like "the actual outcomes" or symbols like "A" and "B" refer to.



As I said, the quantum-classical split is obsolete. It's obviously wrong. There is only the quantum part of the theory left. Everything else can in principle be described using decoherence. (At least we believe this to be the case; it's still being actively researched.) If that makes modern quantum researchers non-Copenhagenists, then it's okay. Let's call them quantum instrumentalists. I think that's a fair description.

What the instrumentalists (or whatever you want to call them) miss is that, if you simply abandon the separate classical/macro realm that Bohr (awkwardly) had to just posit, there are no local beables left in the theory, and hence nothing like tables, chairs, pointers, cats, etc. to be found in the theory. Yes, there is a big wave function, on a big configuration space, some of whose axes correspond in some way to the degrees of freedom that (classically) one would associate with the particles composing the tables, chairs, etc. But there are no actual particles, or any other physically real stuff in 3D space, for the tables and chairs to be made of.

This, incidentally, is why, by getting rid of the whole macro/classical realm and all its associated laws, MWI solves the measurement problem beautifully, but introduces a new (and much more severe!) problem (that Copenhagen QM did *not* suffer from). I guess one might call that new problem the "reality" problem, though it would be nice to find a less wacky sounding name...


Now we are progressing. QM doesn't say anything about the outcomes. Yes, that's true. But the quantum part of the theory does say everything about the statistics. Just compute [itex]|\psi(x)|^2[/itex], [itex]\int x|\psi(x)|^2\mathrm d x[/itex] and so on. You don't need any classical supplement of the theory in order to compute these things.

You need the classical part of the theory (again, in the context of Copenhagen QM here...) in order to give these quantities you compute something to be *about*. It's lovely to be able to calculate something that you *call* "the probability for the top flag to pop up", but if there is no actually existing physically real top flag, which actually really physically pops up or not, then what in the world are you even talking about? I mean that question literally, and the answer, literally, is: nothing.

Here's where we don't agree. I think that what you call a "physically real definite macro-state" should in principle also have a quantum mechanical description. We just normally don't include it in the quantum model. And if we did describe it quantum mechanically, we would get probability distributions for the pointers of the apparatuses, instead of definite values. They might be sharply peaked, but that's not the point.

No, if you describe that stuff QMically (in the sense you mean), you get a big Schroedinger cat state. Yes, yes, you want to consider the reduced density matrix and then *interpret* that as meaning one or the other of the decohered options, with certain probabilities. But surely you can see that a swindle occurs here, in going from the *and* (which is uncontroversially present in the wave function) to the *or* which you get out only after waving your arms and saying magic words.
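
Here is a little sketch of what I mean (Python/NumPy, with a toy two-state "pointer"; the labels are mine): the pointer's reduced density matrix after the interaction is perfectly diagonal, yet it still contains *both* pointer readings with weight 1/2 each. Nothing in that object, by itself, turns the *and* into an *or*:

[code]
import numpy as np

# post-measurement entangled state of system (s) and pointer (p):
# (|up>_s |UP>_p + |down>_s |DOWN>_p) / sqrt(2), i.e. the "and"
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

rho = np.outer(psi, psi)                     # full density matrix of the pure entangled state

# partial trace over the system, leaving the pointer's reduced density matrix
rho_4 = rho.reshape(2, 2, 2, 2)              # indices: s, p, s', p'
rho_pointer = np.einsum('ijik->jk', rho_4)   # sum over the system index

print(rho_pointer)
# [[0.5 0. ]
#  [0.  0.5]]
# perfectly diagonal, i.e. no interference terms, but still a 50/50 sum over *both*
# pointer readings: the formalism by itself never selects one of them as "the" outcome
[/code]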



I think we can't trust our senses.

Then it is impossible to base conclusions (like, for example, the conclusion that classical mechanics failed to correctly predict things like the H spectrum and all the other stuff that convinced us to abandon classical mechanics in favor of QM!) on empirical data, period.
 