What Is an Element of Reality?

In summary, Laloë discusses the meaning of "element of reality" and how it applies to quantum mechanics. He discusses simple experiments and why no conclusion can be drawn from them yet. He then discusses correlations and how they point to a common cause for the observed colors. He concludes that the only possible explanation is that there is a common property in both peas that determines the color.
  • #141
ttn said:
1. BUT NONLOCAL HIDDEN VARIABLE THEORIES ARE NOT RULED OUT. That is why the existence of Bohmian mechanics doesn't cause the universe to disappear in a puff of logic. :-p

2. Then you must be confused about what Bell proved. Bell's theorem shows that, if you try to "complete" QM by adding local hidden variables, the theory you get cannot both respect the Bell Locality condition and agree with experiment. So, as lots of people say, if you want a local theory, you'd better stick with QM and its completeness doctrine, and not go down the hidden variables road. But that strategy obviously presupposes that QM itself is local -- otherwise, saying "you should stick with QM and not pursue hidden variable theories, on pain of nonlocality" just makes no sense.

3. And the final piece: Bell states openly that, he thinks, nonlocality is a fact, period -- that it's *not* something which merely afflicts hv theories. As he says, you *cannot* dismiss the operations on one side as causal influences on the other. How can he believe this? What else would he need to believe to make this claim, given the above paragraph? Obviously he would have to think that orthodox QM was *also* nonlocal. IF it wasn't, there'd be no grounds for claiming that all possible alternatives -- i.e., nature -- were nonlocal.

1. I have reviewed Goldstein's summary of BM at http://plato.stanford.edu/entries/qm-bohm/. Now I am more confused than ever! I just don't see where there are hidden variables at time t=0. He mentions that contextuality is assumed and is no big deal. Yet nowhere is the simultaneous reality of non-commuting observables asserted.

2. You see Locality as the essential assumption, and I see the "reality" of A, B and C as the essential assumption.

3. I think all of the QM interpretations somehow violate "local causality" which I define as: causes must precede effects, and there is no FTL propagation of causes.
 
  • #142
ttn said:
You seriously think it "doesn't harm physics" to bring all this blatantly non-physics loony business *into* physics? I think it does tremendous harm.

I guess what you call "loony business" are concepts which you would rather classify as philosophical or spiritual, and not as materialistic science. But it is a long-standing tradition that concepts which "didn't have their place in science" came into science when science grew sophisticated enough to deal with them. The human body was something beyond scientific investigation; organic chemistry was something beyond scientific investigation; the fact that humans could descend from apes and in the end from bacteria was something unheard of; the movement of the heavens was beyond scientific investigation. All these domains have now been "invaded" by science. So why not the concepts of consciousness, "I experience", etc.?
After all, if something like MWI *is* an approximate description of reality, that would then be the first time that such "unheard of" concepts are integrated into the scientific machinery, albeit very coarsely.
So I don't accept the argument that this "loony business" has nothing to do with science, because I put it on the same level as saying that human blood has nothing to do with chemistry.

Hmmm... If you really honestly don't believe any of this (because deep down you're a sane person and no sane person could take this stuff seriously!) then why advocate it as if you did believe it, as if it was a serious physics theory?

You understood my statement "on the wrong side of its meaning" (like I knew a professor who told somebody that he "almost got zero". When the student asked how much he had, the professor said: "minus five" :smile:.)

When I say that I "don't take this seriously" I only mean that this stuff is still too speculative and naive. After all, we don't even have all of *material physics* in a single theory. So there are still many changes to come, and I think that something like MWI is only the beginning! Probably nature is way, way weirder than this simplistic view in MWI; I didn't mean that we would get "back to good old positivism". In fact, if science is to progress, I really think that one day it will have to tackle issues which, for me, seem currently out of its scope, such as consciousness (and I'm not talking about the brain processes that are at its basis). So maybe, just maybe, quantum theory is our first indication of it.

If you're convinced "we will never know" how nature really works, then by all means don't spend your time trying to figure it out. But some people are not convinced of that; we are trying to find out how nature works. Personally I don't think "trying to figure out how nature works" is something that physicists should have to *apologize* for! We should be trying to do it, and we should be doing so proudly.

This is why I think, contrary to what you implied above, it does harm physics to take MWI seriously or to pretend that you do. It makes others feel embarrassed that they *can't* take that seriously, or that they believe physics should be taken seriously, etc. In short, it sends off a vibe into the rest of the physics community that "physics isn't serious", or "physics can never really figure out how nature works, so we just make up stupid stories that we don't really take seriously anyway", or "physics is all a big inside joke, but don't tell the government or they'll stop funding us", or god knows what.

Well, you misunderstood me, clearly. When I say that we will never know exactly what reality is about, I really believe that, because we have already had to change our views very profoundly a few times, and we will have to do so again in the future, implying that at any moment we are only working in an approximate scheme (whether or not we have given up on continuing).
As such, I think that Newtonian physics is farther from reality than, say, Maxwell's classical field view. And Maxwell's classical field view is farther from reality than, say, general relativity. Or farther from reality than QM, with its MWI viewpoint. And MWI is farther from reality than *fill in our next paradigm*. I think, however, that MWI is much closer to reality than Newton's viewpoint, for instance. But I'm aware that it will still change many times, and it is probably still too early to get it out of the speculative realm. In fact, as long as we don't have a serious theory of quantum gravity, I think it is too early to know. However, if the theory of quantum gravity DOES still adhere to the superposition principle, I think you cannot get by without MWI. So all string theorists and (I think) all loop quantum gravitists must be somehow "MWI" inspired. Superstring theory makes no sense without MWI. But I'm very attentive to positions such as Penrose's, who thinks that gravity will play a crucial role in the measurement process in QM.

Well, as someone who has a lot of respect for people like Newton and Maxwell and Boltzmann and Einstein and Bell -- people who did take physics seriously and didn't think it was doomed to become a failure or a joke -- I don't think it's good to be spreading this kind of attitude. It will cause future Newtons and Boltzmanns and Bells to go into a field that does take itself seriously like, oh god, cultural studies, or basket weaving, or who knows what.

I'm afraid you could say the same about people trying to apply chemistry to organic materials, which at a certain point were supposed to be "outside the scope of science", and tell them to stick to minerals if serious people were supposed to take chemistry seriously in the future.
I really don't see why bringing in ideas such as subjective experience is blasphemy to physics. After all, one day we will have to find out! Before being able to make zargon ray machines of the second generation!
However, it might be that we are still far from that achievement, and that you're right that quantum theory as it stands today doesn't touch upon it yet, and we need a few more millennia of "positivist materialist science" before we get there. Nevertheless, I think it is funny that if you take QM very literally, you almost naturally arrive at MWI-like situations.

However, this discussion showed me something. I think I really should learn more about Bohm's theory. All I know about it is in fact the 1-particle description.

So if you're willing, you can teach me some Bohmian mechanics ! Then I'm the guy making the nasty remarks, and you're the guy who has to explain and justify :-)

cheers,
Patrick.
 
  • #143
vanesch said:
So if you're willing, you can teach me some Bohmian mechanics !

I am interested in knowing more about how BM applies to EPR too.

As to the "paradigm" each of us uses... Who is to say what is best, really? I have seen this point argued ad infinitum in plenty of areas of science, and the answer is always inconclusive.

The fact is, any model (map) can serve a purpose. And a good idea can come from anywhere, even the most unexpected place. So there is not a lot of point to argue that one model is "better" or "more promising" because that can't really be objectively agreed upon.

For example, in the early sixties, who would have thought that the next big thing would emerge from a little city in England called Liverpool? Or that a Swiss clerk would emerge as the most powerful force in theoretical physics?
 
  • #144
DrChinese said:
I am interested in knowing more about how BM applies to EPR too.

As to the "paradigm" each of us uses... Who is to say what is best, really? I have seen this point argued ad infinitum in plenty of areas of science, and the answer is always inconclusive.

The fact is, any model (map) can serve a purpose. And a good idea can come from anywhere, even the most unexpected place. So there is not a lot of point to argue that one model is "better" or "more promising" because that can't really be objectively agreed upon.

This is true. I think the only thing that is not acceptable is a logically inconsistent view: you know, one that gives a different answer according to the way you arrive at it.

That's why I think that Copenhagen QM has a problem if we take it that the wave function is something "real". And the reason for that is that we arbitrarily have to decide which physical processes are measurements (which induce a collapse), and which physical processes are interactions (which evolve unitarily), and that we can in fact, choose between both for a great many of them, leading to different behaviours of our "real" wavefunction.

However, Copenhagen QM is perfectly all right if we see it as an abstract mathematical machinery out of which come predictions of probability (the "shut-up-and-calculate" approach). In this case, we take physical theories just to be epistemological: they just allow us to know answers, but they don't describe what is "really there". However, there is a danger that we apply the Born rule too early: we have to apply it only at the end, when we are really extracting probabilities that will be compared to experimental relative frequencies.

Nevertheless, a more ontological approach to a physical theory seems to be desirable: you would like to map things of the formalism to "things out there".
The question then is for how much deviation you allow from the existing formalism, and what principles you use in order to guide you to a formalism that IS ontologically mappable.

I would then think that an MWI-like approach is what comes closest to the existing formalism: you take two basic principles which are at the basis of the theory (namely locality (in its minimalistic version: information locality) and the superposition principle) as "real", in that the machinery has to apply them too.
However, the result sounds "crazy" and you arrive at the use of "loony business" such as talk about subjective experience and so on.

If you give yourself more liberty with the formalism, I guess you can arrive at Bohmian mechanics. I have to say that this discussion gave me some incentive to learn more about it; for instance, I was of the opinion that Bohmian mechanics had a serious problem with QFT, but this may not be the case, I don't know. The question is then whether Bohmian mechanics is formally equivalent to the Hilbert-space description or not. If it is not, then we're actually talking about two different theories, which will probably make different predictions in certain cases, and not about two different ways of viewing the same theory (namely quantum mechanics).

Whatever way you take, I suppose, as you say, that it is a matter of preference, as long as it is compatible with established results and is internally consistent.

cheers,
Patrick.

EDIT: Although predicting things is difficult, especially if it concerns the future, and - as you say - inspiration can come from the most unexpected place, I would nevertheless allow myself to ponder whether, if Bohm's view had been predominant, and not von Neumann's, one would have arrived at anything like quantum field theory...
 
  • #145
vanesch said:
I guess what you call "loony business" are concepts which you would rather classify as philosophical or spiritual, and not as materialistic science. But it is a long standing tradition that concepts which "didn't have their place in science" came into science when science grew sophisticated enough to deal with it. The human body was something beyond scientific investigation ; organic chemistry was something beyond scientific investigation ; the fact that humans could decend from apes and in the end from bacteria was something unheard of ; the movement of the heavens was beyond scientific investigation. All these domains have now been "invaded" by science. So why not the concepts of consciousness, "I experience" etc... ?

Oh, I have nothing against bringing new concepts into science. I guess I got the impression from your earlier comment that the point of bringing consciousness in the way you want to in MWI, however, wasn't really to clarify or build any new science, but, rather, merely because it was a convenient and safe place to "hide" the Born rule where it couldn't be refuted because it didn't make any contact with actual physics (i.e., physical objects). I think that is a pretty bad motive for bringing in new concepts into physics; so it's not that I'm against doing so per se, but only against doing it for what seems like really fishy, unscientific reasons.



So if you're willing, you can teach me some Bohmian mechanics ! Then I'm the guy making the nasty remarks, and you're the guy who has to explain and justify :-)

Well, maybe, but it really is easier to wear the critic's hat...
 
  • #146
vanesch said:
This is true. I think the only thing that is not acceptable are logically inconsistent views: you know, things that give a different answer according to the way you arrive at it.

I don't really want to get into a big debate over pure philosophy here, but I'm not sure I agree with this. In particular, I think you are using a too-narrow definition of "logical consistency" to mean merely narrowly-conceived internal consistency. I believe in scientific realism, part of which is the idea that the physical world we perceive is more or less real -- there really is a world that "looks" that way. Or putting it negatively, we aren't massively deluded (e.g., brains in vats) about everything we rationally believe. But MWI requires us to believe that we are massively deluded about pretty much everything we ever thought we believed. So why shouldn't I count *that* as a violation of "logical consistency"? To say that the only bad theory is one that makes two different predictions for the same one event, is, I think, to have a way-too-open mind about theories. How exactly to sniff out quality theories is a notoriously difficult question; but there must be some sort of standards that go beyond mere blatant internal contradictions. If there aren't, then it's going to be impossible, e.g., to argue against the persistent "local realism" crowd who wants to suppose weird conspiracy theory type explanations for why 17 different experiments don't *really* prove that Bell's inequalities are violated... (not to mention a lot of other *way crazier* or even way more insidious junk.)


I would then think that an MWI like approach is what comes closest to the existing formalism:

The quantum formalism has two parts: unitary evolution and measurement/collapse. When you say MWI stays close to the existing formalism, you are already making a judgment call about which parts of that existing formalism are important and which aren't. But remember the reason for the collapse postulate -- without it, QM cannot predict the experimental outcomes which are actually *observed* and on which basis we believe in QM in the first place! So it wouldn't be all that crazy to think that *that* was the more important of the two aspects of QM. Of course, that doesn't leave you with much of a theory of anything (there being nothing to apply the collapse postulate *to* if you dump wave functions evolving unitarily...). But my point is broader: looked at from a "forest" perspective, wave functions and superpositions are the *least familiar*, most strange and new and suspicious looking parts of QM. The part that says "the probability of this needle pointing left is 50%" is by contrast very grounded, very familiar, very easy to understand. So I think one perspective on what MWI is doing is to dump the most familiar/intuitive/grounded aspects of QM and raise up in importance just those aspects that are least well understood and most unfamiliar. Again, maybe there's no blatant internal contradiction involved in doing this, but it's the kind of move which wouldn't be taken very seriously in other branches of science. That is, it would only be taken seriously if/when some other more obvious, better grounded moves had first been tried and found to fail.

If you give yourself more liberty with the formalism, I guess you can arrive at Bohmian mechanics.

I'm not sure what liberty you need. Basically, if you say, "QM is a theory about particles like electrons" -- and then you take that *seriously* and start to think about how the theory might be attached to a particle ontology -- you get Bohmian mechanics. I think it's far less "liberal" to believe that QM is about particles (if one knows the historical origins of QM especially!) than to think "hey maybe all those experimental results we thought argued for QM actually didn't have outcomes at all!"

I have to say that this discussion gave me some incentive to learn more about it ; for instance, I was of the opinion that Bohmian mechanics had a serious problem with QFT ; but this may not be the case, I don't know.

There are some problems, if you want to call them that. First, there have only been a handful of people working with Bohm's theory for 50 years. So obviously it hasn't been developed *nearly* as much as the standard approach. But several versions of Bohm-type QFT's exist, and I don't think there is any serious obstacle here. The second point is that people will often say that Bohm-type QFT's and such are "obviously wrong" since they are nonlocal and hence seem to violate relativity. But this objection is very wrong-headed. Really, it's just the same old debate about nonlocality from nonrelativistic QM all over again. Yes, Bohmian theories are rather obviously nonlocal (in the Bell locality sense...) and this remains true in the context of relativistic theories (Dirac equation, or QFT). But (drumroll please) so are all the *orthodox* relativistic theories, if you take them seriously, collapse postulate and all. And if all you mean is that regular QFT is *information local*, well, so are the "obviously nonlocal" Bohmian versions. So anyway, one has to be very careful here and keep an open mind about things like extra space-time structure like preferred frames, etc.
 
  • #147
ttn said:
I believe in scientific realism, part of which is the idea that the physical world we perceive is more or less real -- there really is a world that "looks" that way. Or putting it negatively, we aren't massively deluded (e.g., brains in vats) about everything we rationally believe. But MWI requires us to believe that we are massively deluded about pretty much everything we ever thought we believed. So why shouldn't I count *that* as a violation of "logical consistency"? To say that the only bad theory is one that makes two different predictions for the same one event, is, I think, to have a way-too-open mind about theories.

I didn't say that every logically consistent theory is a good theory! I said that a good theory needs at LEAST this, and that the rest is open to debate, taste, etc... I think it is a good idea to be very open-minded a priori. And as for being deluded not being a right thing: hey, I believed in Santa Claus until I was 8 years old! So I have the habit of being deluded. Nothing special about it. :smile:

How exactly to sniff out quality theories is a notoriously difficult question; but there must be some sort of standards that go beyond mere blatant internal contradictions.

But all other standards are open to debate. You'll see that I have different standards.

If there aren't, then it's going to be impossible, e.g., to argue against the persistent "local realism" crowd who wants to suppose weird conspiracy theory type explanations for why 17 different experiments don't *really* prove that Bell's inequalities are violated... (not to mention a lot of other *way crazier* or even way more insidious junk.)

I agree with you that you need other criteria; only, it is open to debate which criteria. If you allow some criteria to be imposed FIRST, then the LR crowd can put forward local realism as an inviolable criterion. You're closer to the LR crowd with your viewpoint than you think! Your viewpoint is that we cannot be "deluded"; I guess this means: a theory that EXPLAINS correctly all results, but which has an underlying ontology which seems to differ from the intuitive picture you have of it, is a bad theory? The problem is that this is so dependent on what you consider as your intuition that it is a dangerous viewpoint. The LR crowd cannot accept the violation of Bell locality; it is against their intuition too. So if those 17 experiments have to go, and QM has to go, so be it, in order to stick to their intuition. In your case, if SR has to go, then so be it, in order to stick to your intuition.

My criterion is that one should look for the fundamental principles which are at the base of the formalism and then stick to them at all costs. In (relativistic) quantum theory, these are the superposition principle and Lorentz invariance (from which follows the need for information locality).
All my reservations come from the fact that another great principle, namely general covariance (of general relativity), has a serious clash with the superposition principle. So I expect a serious review of the fundamental principles when we have a quantum theory of gravity. Such a revision will then imply a fundamental revision of all one can deduce, including MWI and everything. But usually, in such a paradigm shift, things only get weirder, not "more intuitive".

The quantum formalism has two parts: unitary evolution and measurement/collapse. When you say MWI stays close to the existing formalism, you are already making a judgment call about which parts of that existing formalism are important and which aren't. But remember the reason for the collapse postulate -- without it, QM cannot predict the experimental outcomes which are actually *observed* and on which basis we believe in QM in the first place! So it wouldn't be all that crazy to think that *that* was the more important of the two aspects of QM. Of course, that doesn't leave you with much of a theory of anything (there being nothing to apply the collapse postulate *to* if you dump wave functions evolving unitarily...). But my point is broader: looked at from a "forest" perspective, wave functions and superpositions are the *least familiar*, most strange and new and suspicious looking parts of QM. The part that says "the probability of this needle pointing left is 50%" is by contrast very grounded, very familiar, very easy to understand. So I think one perspective on what MWI is doing is to dump the most familiar/intuitive/grounded aspects of QM and raise up in importance just those aspects that are least well understood and most unfamiliar. Again, maybe there's no blatant internal contradiction involved in doing this, but it's the kind of move which wouldn't be taken very seriously in other branches of science. That is, it would only be taken seriously if/when some other more obvious, better grounded moves had first been tried and found to fail.

Well, as far as I know, they have failed as long as we stick to something that looks like special relativity !

I'm not sure what liberty you need. Basically, if you say, "QM is a theory about particles like electrons" -- and then you take that *seriously* and start to think about how the theory might be attached to a particle ontology -- you get Bohmian mechanics. I think it's far less "liberal" to believe that QM is about particles (if one knows the historical origins of QM especially!) than to think "hey maybe all those experimental results we thought argued for QM actually didn't have outcomes at all!"

I'm not denying this. But to me, basically QM is NOT a theory about particles like electrons. To me, QM is the formalism that goes with the superposition principle, and together with special relativity, it gives rise to things like quantum fields, which "look sometimes like particles".

But ok, if Bohmian theory also works for QFT, I'm interested to look at it...
Let's continue in the other thread then !

cheers,
Patrick.
 
  • #148
Born rule and MWI

Hey Patrick,

I read the very interesting discussion that you had with Travis (hey Travis! :-p ) back in Feb, and I have a question for you regarding how the Born rule fits into the MWI.

In your journal, you wrote:
vanesch said:
The problem is that MOST of these "yous" will accumulate experiences which are NOT in agreement with the statistics of the Born rule. It is the problem MWI proponents never could solve (although they give a lot of "plausibility arguments"), and that's because they left out explicitly the Born rule.

This, I think, is a very interesting topic. Here's one way I might flesh it out with an example. Suppose we have N identically prepared spin 1/2 particles, prepared so that a spin measurement along the x-axis will yield spin up with, say, probability = p. (So we get spin down with probability 1-p.) If we measure the spins of each of these N particles, then we end up with 2^N worlds. At the end of these N measurements, the observer plans to calculate p by simply counting the number of particles that were observed to be spin up and dividing by N. The difficulty, as you point out, is that if p is not equal to 0.5, a significant number of these worlds will contain observers who conclude that the Born rule is incorrect! In fact, as N gets larger, the fraction of worlds in which the observed value of p deviates from the Born-rule prediction by more than some arbitrary cutoff (at least I think this is true -- I haven't actually shown this rigorously) just gets larger.
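
(A quick numerical sketch of this claim, added here for concreteness -- the values p = 0.3, N = 100 and the 0.1 cutoff are arbitrary choices of mine, not anything from the posts. It groups the 2^N branches by their number k of "up" records and compares equal branch-counting against the Born weighting.)

```python
# Rough sketch: compare "count every branch equally" against the Born
# weighting, for N spin measurements with Born probability p of "up".
from math import comb

p, N, cutoff = 0.3, 100, 0.1     # hypothetical parameters

equal_weight_bad = 0.0   # fraction of branches (each counted once) whose
born_weight_bad = 0.0    # observed frequency k/N misses p by more than the
                         # cutoff, and the same fraction under the Born measure
for k in range(N + 1):
    n_branches = comb(N, k)                           # branches with k "up" records
    born_w = comb(N, k) * p**k * (1 - p)**(N - k)     # Born weight of those branches
    if abs(k / N - p) > cutoff:
        equal_weight_bad += n_branches / 2**N
        born_weight_bad += born_w

# For these parameters the first number comes out close to 1, the second is small.
print("equal counting:", round(equal_weight_bad, 3), "of branches see the Born rule fail")
print("Born weighting:", round(born_weight_bad, 3), "of the measure sees it fail")
```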

iiuc, Everett addresses this situation by defining a "probability measure" to each world which is, simply enough, prescribed by the Born rule. The conceptual difficulty here is that, in keeping with the "spirit" of the MWI, each individual world should be thought of as being on an "equal footing," so to speak. So what we have here are, in a sense, two different ways of counting worlds: by one method, we simply count the number of worlds via treating each distinct experimental outcome as one distinct world, so that, in the above example, we get 2^N worlds; by the other method, we give each of these worlds a "weighting" which is proportional to its Born-prescribed probability. The former method is in keeping with the "spirit" of the MWI by putting each physically distinct world on an equal footing, but suffers from the difficulty that most observers will think the Born rule to be false. In the latter method, the set of worlds in which the Born rule is violated is of measure zero (as Everett, Hartle, and others have shown); however, we are in a way breaking with the "spirit" of the MWI.

If we want to rescue Everett's notion of attaching a probability measure to each experimental outcome, one thing we could do would be to say that -- to take a single spin measurement as an example -- instead of splitting into TWO worlds, we instead split into P_tot = P_up + P_down worlds, where p = P_up/P_tot is the Born-rule-given probability of observing spin up. P_tot, P_up, and P_down would, according to this scenario, be integers. And since in general p can take on any value between 0 and 1, P_up and P_down could typically need to be extremely large integers! In fact we can already see a problem here: if we assume P_up and P_tot to be integers, then we are effectively saying p must be rational (i.e., it cannot be an arbitrary real number in [0,1]).

A second problem is that, in the simplest scenario at least, there is no PHYSICAL distinction between one observer who ended up in one of the P_up worlds, and another observer who ended up in another of the P_up worlds. Now there is no strong reason to object to this scheme -- it's just that it seems, well, ill-motivated. Why should one observer split into P different directions, when all of them that split into the P_up direction are physically identical? It seems so unfrugal. (Is that a word? :rolleyes: )

This is about as far as I have gotten in thinking about this particular topic. So let me turn to my question for you. Earlier you wrote:

vanesch said:
Well, I'm not a real MWI-er. What you write (get rid of the Born rule) is indeed part of the program of MWI, and I think I have found a mathematical proof that this cannot work, which I hope to publish soon.
I think that there is some real need for the Born rule. On the other hand, I want to get rid of the projection postulate, and up to there, I follow MWI.

I've been doing a little background reading on the MWI -- for instance, I managed to get a hold of Everett's original dissertation from amazon -- but I have not yet run across anyone trying to "get rid of the Born rule." What does this mean, exactly? And what is your argument that "this cannot work?" I am intrigued.

straycat

PS The book that I got from amazon has a very interesting (not published elsewhere, afaict) decades-old article by Neill Graham, one of Bryce DeWitt's former students, addressing this very topic. He lays out the basic issue very nicely, I think, although I'll admit that I cannot follow his "solution."
 
  • #149
straycat said:
iiuc, Everett addresses this situation by defining a "probability measure" to each world which is, simply enough, prescribed by the Born rule.

But there is a problem in doing so. Indeed, in order to be able to apply a Born rule, you have to choose a preferred basis (the measurement basis). It is different if you apply the Born rule for "position" or for "momentum". It is different if you apply the Born rule in the z-spin basis, or in the x-spin basis. The only thing that distinguishes the measurement basis from any other basis, from an observer's point of view, is that in the measurement basis he appears in a Schmidt decomposition (a sum of |observer state> |system "measurement" state> components), while this is not the case in other bases. But that's of course observer-related! So you can only define that measure AFTER having already chosen a basis. Honestly, you're then just applying the projection postulate.
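
(To make the basis-dependence concrete, here is a small sketch of my own, using the 90/10 state that appears later in this post: the Schmidt weights come out of an SVD of the coefficient matrix, and the Born probabilities for the spin come out differently depending on whether you pick the z basis or the x basis.)

```python
# Sketch for the state sqrt(0.9)|me+>|z+> + sqrt(0.1)|me->|z->.
import numpy as np

# amplitudes c[i, j] for |observer_i>|spin_j> in the (me+/me-, z+/z-) basis
c = np.array([[np.sqrt(0.9), 0.0],
              [0.0,          np.sqrt(0.1)]])

# Schmidt decomposition = singular value decomposition of the coefficient matrix
schmidt = np.linalg.svd(c, compute_uv=False)
print("Schmidt weights (squared):", schmidt**2)      # -> [0.9, 0.1]

# Born probabilities for the spin alone, via the reduced density matrix
rho_spin = c.T @ c.conj()                            # trace out the observer
print("P(z+), P(z-):", np.diag(rho_spin).real)       # z basis: 0.9 and 0.1

x_plus = np.array([1, 1]) / np.sqrt(2)               # x-basis spin states
x_minus = np.array([1, -1]) / np.sqrt(2)
print("P(x+):", (x_plus.conj() @ rho_spin @ x_plus).real)    # 0.5
print("P(x-):", (x_minus.conj() @ rho_spin @ x_minus).real)  # 0.5
```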

The conceptual difficulty here is that, in keeping with the "spirit" of the MWI, each individual world should be thought of as being on an "equal footing," so to speak. So what we have here are, in a sense, two different ways of counting worlds: by one method, we simply count the number of worlds via treating each distinct experimental outcome as one distinct world, so that, in the above example, we get 2^N worlds; by the other method, we give each of these worlds a "weighting" which is proportional to its Born-prescribed probability.

The problem with this statement (which I endorse !) is that MWI says:
i d/dt psi = H psi AND THAT'S ALL. The original program (as I understood it) was that we could then derive all physical consequences from it, also the "observed" probabilities by observers. This was in a reaction to the Projection Postulate, which has a lot of problems, namely distinguishing "measurements" from "physical processes".

So when do we assign this "measure" to different terms in the wave function ? This is bluntly introducing the Born rule again!

Let us look at an example:
Imagine that I've been measuring the z-component of an electron spin ; so now the wavefunction is:

|psi> = Sqrt[0.9] |me+> |z+> + Sqrt[0.1] |me-> |z->

In an MWI setting, the idea somehow is that "a real me" only observes ONE of these branches. So there must be some kind of splitting of "me", so that there is now a "me" that will "have measured" the plus component, and that will now work in the first branch and a "me" that will have measured the minus component and will now work in the second branch.
There are NO PROBABILITIES involved in this game. BOTH happen.
What *I* am claiming is that in order for probabilities to arise (namely that I have 90% chance to be me+ and 10% chance to be me-), you HAVE TO MAKE AN EXTRA ASSUMPTION, and that extra assumption amounts to the Born rule: I explicitly consider this a "measurement process" and assign probabilities to the outcomes according to the Born rule. The way I prefer to do it (as I tried to explain in my journal), is by saying that although there are now 2 different body states (nothing surprising, there are also 2 different electron spin states), my "consciousness" has TO CHOOSE, ACCORDING TO THE BORN MEASURE, which body state it will live in. So I put the "measurement process" into the "inhabiting of a body state by my consciousness". But then we get easily into metaphysical discussions, and I don't want to here again. Say that it is some shortcut to make the picture work.

Remember, in a true MWI view, ALL "branches" or "worlds" (simply, Schmidt decomposition terms in the wave function and a choice of a system which is the "observer") "happen" (because simply all terms are present in the wave function), and there is no a priori probability of anything. It is very difficult to assign probabilities to terms in the wavefunction without introducing (again) a distinction between a "physical process" (you don't assign probabilities) and a "measurement" (you assign probabilities). And this was the difference (measurement/process) Everett wanted to get rid of in the first place.
The only way to introduce probabilities in a natural way is by "inhabiting" these terms with observers, according to a kind of probability measure, and the whole point of MWI, as I understood it, was that this probability measure would emerge "naturally" (by counting of something, say). Clearly, simply counting the terms doesn't work, so you can think of more complicated schemes: but then you have to POSTULATE these schemes somehow, and this postulate usually comes down to the Born rule.

cheers,
patrick.
 
  • #150
straycat said:
Hey Patrick,

I read the very interesting discussion that you had with Travis (hey Travis! :-p ) back in Feb, and I have a question for you regarding how the Born rule fits into the MWI.

Well it's about TIME you join in the fun, straycat! Took you long enough! :)

Zz.
 
  • #151
vanesch said:
But there is a problem in doing so. Indeed, in order to be able to apply a Born rule, you have to choose a preferred basis (the measurement basis). It is different if you apply the Born rule for "position" or for "momentum". It is different if you apply the Born rule in the z-spin basis, or in the x-spin basis.

If I understand correctly Bohmian mechanics - which I view as a MWI variant in a certain way, in that unitary evolution is also postulated without exception, even during a "measurement process" - then Bohmian mechanics solves the issues in the following way:

- the basis is postulated to be the position basis.

- there is a mechanism of assigning probabilities through the initial distribution of the "token" (the true particle positions, postulated to be initially distributed by the Born rule) and its associated dynamics (the guiding equation).
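
(A toy one-particle illustration of these two ingredients -- my own sketch with arbitrary parameters, units hbar = m = 1, and a free Gaussian packet, not part of the post: positions are drawn from |psi|^2 at t = 0 and then pushed along by the guiding equation v = Im(psi'/psi); the ensemble stays |psi|^2-distributed as the packet spreads, which is the equivariance property the Born-distributed initial condition relies on.)

```python
import numpy as np

sigma0 = 1.0                          # initial packet width (arbitrary)

def psi(x, t):
    """Analytic freely spreading Gaussian wave packet (hbar = m = 1)."""
    s = sigma0 * (1 + 1j * t / (2 * sigma0**2))
    return (2 * np.pi * s**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma0 * s))

def velocity(x, t, dx=1e-5):
    """Guiding equation: v = Im(psi'/psi), derivative by finite differences."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return np.imag(dpsi / psi(x, t))

rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma0, size=20000)   # Born-distributed initial positions

t, dt, t_end = 0.0, 0.01, 4.0
while t < t_end:
    x += velocity(x, t) * dt              # simple Euler step along the guidance field
    t += dt

# Equivariance check: the ensemble spread should approximately match the spread
# of |psi|^2, which for this packet is sigma0 * sqrt(1 + t^2 / (4 sigma0^4)).
print("ensemble std :", x.std())
print("|psi|^2 std  :", sigma0 * np.sqrt(1 + t_end**2 / (4 * sigma0**4)))
```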

cheers,
Patrick.
 
  • #152
vanesch said:
But there is a problem in doing so. Indeed, in order to be able to apply a Born rule, you have to choose a preferred basis (the measurement basis). It is different if you apply the Born rule for "position" or for "momentum". It is different if you apply the Born rule in the z-spin basis, or in the x-spin basis.

If I understand your point correctly, you are asking the question: why should the Born rule be applied for, eg, x-spin as opposed to, say, z-spin?

But it seems to me that you may as well ask: why should the Born rule be applied for a measurement on the electron going through the SG apparatus, as opposed to some measurement on some *other* electron. The answer, of course, is that this "other" electron was not fed into the SG apparatus. Likewise, the reason that the Born rule is applied for x instead of z-spin is that the SG apparatus was set up to measure x-spin, not z-spin.

This makes me think about a way that I decided long ago to conceptualize a typical Alice-Bob EPR experiment -- the delayed-choice variety, that is. Suppose Alice is allowed to vary the orientation of her polarizer (or SG apparatus) at the last instant to any angle she wants. This is free choice, right? Well, I have always preferred to simplify things by removing any considerations of "consciousness," "free will," etc, as follows. Instead of putting the angle of the polarizer directly under the control of Alice's hands, and thus under the control of her "free-willed" brain, we instead put the angle under the control of a computer that determines the angle as a function of the input from a geiger counter. So going back to the earlier discussion, when you ask: why should the Born rule be applied to x-spin and not z-spin? We see that in this particular setup, the question: "which angle do we use when we apply the Born rule?" is answered the same way that we answer the question: "is the electron up or down?" In both cases, that is, the answer to the question is not arbitrary: it is the product of measurement.

vanesch said:
But that's of course observer-related!

Yes, I completely agree with you here. (I am thinking of the SG apparatus as being, in a way, an extension of the observer.) As a slight aside, I have often thought that the choice of the observer in QM is sort of like the choice of a frame of reference in GR. IOW, any choice is "valid." But for the analysis of any given experiment, you have to make your choice and stick with it.

vanesch said:
The problem with this statement (which I endorse !) is that MWI says:
i d/dt psi = H psi AND THAT'S ALL. The original program (as I understood it) was that we could then derive all physical consequences from it, also the "observed" probabilities by observers. This was in a reaction to the Projection Postulate, which has a lot of problems, namely distinguishing "measurements" from "physical processes".

I think that I understand the purpose of the original program in the same way. And I agree with you that it has not quite achieved that goal!

vanesch said:
In an MWI setting, the idea somehow is that "a real me" only observes ONE of these branches. So there must be some kind of splitting of "me", so that there is now a "me" that will "have measured" the plus component, and that will now work in the first branch and a "me" that will have measured the minus component and will now work in the second branch.
There are NO PROBABILITIES involved in this game. BOTH happen.

OK, I'm with you.

vanesch said:
What *I* am claiming is that in order for probabilities to arise (namely that I have 90% chance to be me+ and 10% chance to be me-), you HAVE TO MAKE AN EXTRA ASSUMPTION, and that extra assumption amounts to the Born rule:

Yes: there is an extra assumption. And Everett makes this extra assumption in his original program. But I do not agree that it is *necessary* to make this extra assumption (see below).

vanesch said:
I explicitly consider this a "measurement process" and assign probabilities to the outcomes according to the Born rule. The way I prefer to do it (as I tried to explain in my journal), is by saying that although there are now 2 different body states (nothing surprising, there are also 2 different electron spin states), my "consciousness" has TO CHOOSE, ACCORDING TO THE BORN MEASURE, which body state it will live in.

But the problem here, as I discussed in my earlier post, is that a significant measure of your "other consciousnesses" will conclude that the Born rule is false. It seems to me that this (postulating "my consciousness") does not quite address the issue.

Don't get me wrong. I can sort of see the motivation of saying "my consciousness follows the Born rule." But it seems desirable to me to avoid falling back on that sort of explanation if at all possible, and I think it is.

vanesch said:
The only way to introduce probabilities in a natural way is by "inhabiting" these terms with observers, according to a kind of probability measure, and the whole point of MWI, as I understood it, was that this probability measure would emerge "naturally" (by counting of something, say).

Yes, I think the difficulty that we are talking about can only be addressed in the way you just mention: we want the probability measure to arise *naturally* by counting something. The question is, what are we counting? Well the obvious answer is to count worlds, of course!

vanesch said:
Clearly, simply counting the terms doesn't work,

It does not work if we count terms (worlds) in the standard way, eg, a spin 1/2 measurement results in two worlds (two terms). Actually, it is interesting to point out that for a spin measurement, if p = 0.5, then counting the terms DOES work. In a more general sense, if we have a measurement with N outcomes, then counting the terms (worlds) DOES work if and only if the probability of each measurement outcome is equal to 1/N.

But why not try to *make* it work, via some sort of small modification of the standard workings of the MWI? That way, we could avoid making any sort of "extra" (perhaps metaphysical!) postulate. Why not?

David
 
  • #153
ZapperZ said:
Well it's about TIME you join in the fun, straycat! Took you long enough! :)

Zz.


Holy cow, Zz, 1771 posts you have :bugeye: ! it will take me quite some time to catch up!

straycat
 
  • #154
Three possibilities arise in the spin measurement experiments. Simply put they are:

1) The spins are correlated from the beginning and it is just the way they are measured that appears to be spooky. When they are measured they may be rotated to the measuring position. This implies they are within a hemisphere of the measuring position. Then the second particle is in the other hemisphere and can be rotated to the opposite position, or give a null measurement.

2) There is a field connection between the two particles (possibly a potential scalar/vector/vector-spin field) that allows FTL signals. Thus, when one is measured, a signal (possibly a spin wave or torsion wave signal) is sent to the other and this signal destroys the field connection.

3) There is, however, a third way to look at the problem. The two particles could really just be a single system (one particle), that breaks in two when measured (its wave function collapses to a two particle system). In this way the signal from one side of the system to the other is totally internal and FTL here may not violate relativity since the signal does not travel through space but through the internal structure of the single system.

juju
 
  • #155
Hey Patrick,

Let me see if I can flesh out what I meant by:

straycat said:
But why not try to *make* it work, via some sort of small modification of the standard workings of the MWI?

by focusing on something you said:

vanesch said:
The only way to introduce probabilities in a natural way is by "inhabiting" these terms with observers, according to a kind of probability measure, and the whole point of MWI, as I understood it, was that this probability measure would emerge "naturally" (by counting of something, say).

The first question here is "what are we counting?" According to the standard workings of the MWI, as I understand it, if a measurement result produces N distinct "worlds," then these N worlds are distinguished from one another by virtue of the fact that each of these N worlds corresponds to a physically distinct *observer* state. IOW, if the unitary evolution of the observer produces just one observer-state after some length of time, then there is no measurement and there is no "split." But if the unitary evolution of the observer takes one observer-state into N physically distinct observer-states, then we have effectively split into N "worlds." Therefore, in answer to the question: "what are we counting?" the answer is that we are counting the number of physically distinct observer-states (that evolve from a single observer-state, according to our unitary operator). Since we are counting physical states, I will call this a "physical measure" of our worlds.

So far, so good -- this jives so far with what Everett did, iiuc. But Everett's next step was to assign the Born rule-generated "probability measure" to each branch. What we would like to do is to throw this in the trash, and instead simply allow the probability measure to emerge, as you say, *naturally*, by simply *defining* "probability" as being equivalent to the physical measure.

Hold on, you say. That just doesn't work! Well, of course it doesn't, unless we do some tweaking. Let me give an example. Suppose we have a spin measurement with probability of spin up p = 1/4, so (1-p) = 3/4. According to the standard workings of the MWI, the unitary operator takes our observer from a single state into two physically distinct states, one in which the observer has recorded "up", the other in which he has recorded "down." Let's imagine tweaking it like this: suppose that the unitary operator produces, not two physically distinct states, but rather four physically distinct states. In ONE of these, the observer has a physical record of up; in THREE of these, the observer has a physical record of "down." Of course, it needs to be determined in what way the three "down" observer-states are physically distinct. But I see no reason that this problem is insurmountable.
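
(Toy bookkeeping for this "four worlds" idea, with p = 1/4 as in the example -- my own sketch, which glosses over how the three "down" copies are made physically distinct: replicate each outcome into a number of worlds proportional to its Born weight, count every world once, and naive counting reproduces the Born frequency, though only because p is rational.)

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 4)                       # Born probability of "up"
P_up, P_tot = p.numerator, p.denominator
worlds_per_trial = ["up"] * P_up + ["down"] * (P_tot - P_up)   # 1 "up" copy, 3 "down" copies

N = 6                                    # number of successive spin measurements
# every world is a sequence of N labelled outcomes; all worlds counted equally
all_worlds = list(product(worlds_per_trial, repeat=N))

up_fraction = sum(w.count("up") for w in all_worlds) / (N * len(all_worlds))
print("number of worlds:", len(all_worlds))                       # P_tot ** N
print("fraction of 'up' records, counting worlds equally:", up_fraction)   # = 1/4
```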

So to make an argument in favor of this approach, I would say that it achieves what Everett set out to do in the first place, but failed to do.

What is the argument *against* this approach?

David
 
  • #156
straycat said:
The first question here is "what are we counting?" According to the standard workings of the MWI, as I understand it, if a measurement result produces N distinct "worlds," then these N worlds are distinguished from one another by virtue of the fact that each of these N worlds corresponds to a physically distinct *observer* state. IOW, if the unitary evolution of the observer produces just one observer-state after some length of time, then there is no measurement and there is no "split." But if the unitary evolution of the observer takes one observer-state into N physically distinct observer-states, then we have effectively split into N "worlds."

The way I understood the motivation behind Everett's work was that he didn't want to introduce some special physics for what is an "observation" in contrast to "a physical process described by a hamiltonian", because that is the main difficulty in a Copenhagen-like view.
What I tried to point out earlier is that if you have a global quantum state |psi>, then this "splitting in N terms" is only possible if we CHOOSE to define some part of the system as being "the observer": we split the total Hilbert space into a product space H_observer x H_system_under_observation. But mind you that this is a completely arbitrary thing to do if there is no concept of what is an observer ! ONCE we have split Hilbert space in this arbitrary way, it is possible to apply Schmidt decomposition, and have |psi> written as a unique sum of product terms |obsstate> x |suo_state> in such a way that both state series are a basis in the respective product hilbert spaces.

Something funny now happens: It turns out that we now have to work with only ONE of these terms in the future, and that the probability of taking that term is given by its norm squared. That's essentially Born's rule.
We STILL need something of the kind in a MWI setting, otherwise there is no way to pick ANY term as the "observed one". This is the essentially probabilistic aspect of quantum theory which has to be imported in one way or another.
The simplest thing is to just POSTULATE it. I wasn't aware that Everett ever did that. I thought that he hoped that this would somehow EMERGE from the unitary evolution -- indeed, DeWitt had an argument, but it only works in the limit of infinite evolution time.
The problem is that in order to POSTULATE this rigorously, one has to define what are observers, and what are their associated hilbert spaces.
BUT IF WE DO THAT, THERE'S NO PROBLEM WITH COPENHAGEN EITHER !
And Everett just complicates matters without solving anything.

Therefore, in answer to the question: "what are we counting?" the answer is that we are counting the number of physically distinct observer-states (that evolve from a single observer-state, according to our unitary operator). Since we are counting physical states, I will call this a "physical measure" of our worlds.

So far, so good -- this jives so far with what Everett did, iiuc. But Everett's next step was to assign the Born rule-generated "probability measure" to each branch.

As I said before, I wasn't aware Everett did this! I thought he wanted somehow to have this measure EMERGE from the unitary evolution. I have to say the purpose of his program escapes me completely if he did postulate that.

Hold on, you say. That just doesn't work! Well, of course it doesn't, unless we do some tweaking. Let me give an example. Suppose we have a spin measurement with probability of spin up p = 1/4, so (1-p) = 3/4. According to the standard workings of the MWI, the unitary operator takes our observer from a single state into two physically distinct states, one in which the observer has recorded "up", the other in which he has recorded "down." Let's imagine tweaking it like this: suppose that the unitary operator produces, not two physically distinct states, but rather four physically distinct states. In ONE of these, the observer has a physical record of up; in THREE of these, the observer has a physical record of "down." Of course, it needs to be determined in what way the three "down" observer-states are physically distinct. But I see no reason that this problem is insurmountable.

Well, this is cheating! This IS nothing else but introducing the Hilbert norm as the probability measure, in a disguised way. But in that case you don't need the disguise: just postulate it. In order to do so, you need to specify what are observer subspaces and what are system subspaces. If you can do that, there is no problem with the von Neumann view, and Everett can go.

So to make an argument in favor of this approach, I would say that it achieves what Everett set out to do in the first place, but failed to do.

What is the argument *against* this approach?

That with all you need to bring in (define what are observers, as distinguished from what are physical systems under observation), there's no point in having Everett's program in the first place. If it is clear what observers are (people? Conscious people? Computers? Printers on paper? Memory cells? Macromolecules?) then there's no problem with von Neumann to be solved in the first place. Just let us keep the projection postulate then, it is easier.

The whole point was that we DIDN'T have to define what exactly was an observer (THE difficulty with von Neumann). But if we don't, you cannot specify your measure either.

cheers,
Patrick.
 
  • #157
vanesch said:
The way I understood the motivation behind Everett's work was that he didn't want to introduce some special physics for what is an "observation" in contrast to "a physical process described by a hamiltonian", because that is the main difficulty in a Copenhagen-like view.
What I tried to point out earlier is that if you have a global quantum state |psi>, then this "splitting in N terms" is only possible if we CHOOSE to define some part of the system as being "the observer": we split the total Hilbert space into a product space H_observer x H_system_under_observation. But mind you that this is a completely arbitrary thing to do if there is no concept of what is an observer ! ONCE we have split Hilbert space in this arbitrary way, it is possible to apply Schmidt decomposition, and have |psi> written as a unique sum of product terms |obsstate> x |suo_state> in such a way that both state series are a basis in the respective product hilbert spaces.

My understanding is that according to Everett, we do, as you say, have to CHOOSE some part of the system as being "the observer." But this is not problematic because we are NOT LIMITED in our choice of the observer. That is, we do not have to restrict ourselves to choosing "microchips" or "people" or "really smart monkeys" or any such thing; rather, ANYTHING can play the role of "the observer." This was the whole point of using the word "relative" in the phrase "relative state formulation;" the entire conceptual framework is built around calculating stuff *relative to a given observer*. You can't calculate anything relative to an observer if you don't first pick an observer.

In fact, the use of the word "relative" is similar in spirit to its use in GR. In GR, you can't talk about the length of an object without first specifying the frame of reference that you're working in. Therefore, you talk about the length of an object "relative" to your chosen FoR.

vanesch said:
The whole point was that we DIDN'T have to define what exactly was an observer (THE difficulty with von Neumann). But if we don't, you cannot specify your measure either.

Once again, it's just like GR. In GR, there is NO SUCH THING as a "preferred" or "privileged" or "special" FoR: they are all equally "valid." Likewise, in QM, there is NO SUCH THING as a "preferred/special/privileged" observer: ANY subsystem of a composite system is as "valid" an observer as any other. But if you want to calculate lengths of objects, you have to pick a FoR first; likewise, if you want to do a Schmidt decomposition, you have to pick an observer first.

In my mind, this is true for any version of QM: Copenhagen, Everett, whatever. Everett's contribution, I think, was that his relative state formulation does a better job than Copenhagen of illustrating the above point.

Just to restate the similarity to the GR viewpoint, consider the following sentence from page 455 of Everett's original paper [1]: "To any arbitrarily chosen state for one subsystem there will correspond a unique *relative state* for the remainder of the composite system." Everything, therefore, is conceptualized RELATIVE to some subsystem-state, which we call "the observer." Note the word "arbitrary." Although he didn't say this -- he probably thought it was obvious, but he would have been wrong -- he could have phrased it "To any arbitrarily chosen state for *any arbitrarily chosen* subsystem ..." In GR, the choice of FoR is arbitrary, but that doesn't mean we don't choose one. So what's wrong with choosing an observer in QM?

I must say that I learned to appreciate the MWI much more after reading Everett's original paper. And for the reasons that I gave above, I like the name "relative state formulation" better than "MWI." The sentence that I quoted above, and the paragraph from which I took it, are to me the most important sentence/paragraph in the entire paper. Like I said, I think that the entire notion of "relativity of states" is fundamentally inherent to the CI. The difficulty with the CI is just that Copenhagenists get caught up in trying to calculate how many neurons it takes to collapse the wavefunction, when in fact *any* arbitrarily chosen subsystem will work just fine.

More on probabilities, Born, etc later.

David

[1] Hugh Everett, "Relative State" Formulation of Quantum Mechanics, Reviews of Modern Physics, Vol. 29, No. 3, July 1957, pp. 454-462.
 
  • #158
straycat said:
Well, I have always preferred to simplify things by removing any considerations of "consciousness," "free will," etc, as follows. Instead of putting the angle of the polarizer directly under the control of Alice's hands, and thus under the control of her "free-willed" brain, we instead put the angle under the control of a computer that determines the angle as a function of the input from a geiger counter. So going back to the earlier discussion, when you ask: why should the Born rule be applied to x-spin and not z-spin? We see that in this particular setup, the question: "which angle do we use when we apply the Born rule?" is answered the same way that we answer the question: "is the electron up or down?" In both cases, that is, the answer to the question is not arbitrary: it is the product of measurement.

No, not really ! It is only the case if you consider the computer to be an "observer". But I can just as well consider it part of the system, and then the only thing I can say is that my computer is now in an entangled state with the x-spin state of the electron -- if we prefer to write our Hilbert-space state in a basis which is a product of "computer states" and "spin states". But I'm free to choose any other basis in my H_computer x H_spin Hilbert space. I'm not obliged to take a product basis, and I'm also not obliged to take the x-spin basis for the spin. Even if I work in a product basis, I can work with, say, the momentum states of the computer particles and the y-spin states of the electron. My state |psi> is perfectly expressible in that basis.
IT IS ONLY WHEN WE ASSIGN A SPECIAL STATUS TO THE COMPUTER that we want psi to be written in a series of terms such that each term contains ONE computer state of a single computer basis (that's the Schmidt decomposition !). But if the "computer", say, were just a photon, we wouldn't mind working in any other basis that suits us.
And the important point is to note that the application of any Born measure is DEPENDENT ON THE CHOICE OF BASIS WE MAKE.
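Just to put numbers on that last point (the state and the two bases below are made up purely for illustration): one and the same |psi> is assigned different Born weights depending on the basis in which we expand it.

[code]
import numpy as np

psi = np.array([0.8, 0.6 + 0.0j])              # one fixed, normalized state

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2),
           np.array([1.0, -1.0]) / np.sqrt(2)]

born = lambda basis: [round(abs(np.vdot(b, psi))**2, 3) for b in basis]
print("Born weights in the z basis:", born(z_basis))   # [0.64, 0.36]
print("Born weights in the x basis:", born(x_basis))   # [0.98, 0.02]
[/code]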

Yes, I completely agree with you here. (I am thinking of the SG apparatus as being, in a way, an extension of the observer.)

See, you have to assign "special observer status" to something, here the SG apparatus.

As a slight aside, I have often thought that the choice of the observer in QM is sort of like the choice of a frame of reference in GR. IOW, any choice is "valid." But for the analysis of any given experiment, you have to make your choice and stick with it.

If it were so, there wouldn't be any issue to solve. The point is that the Born rule GIVES DIFFERENT OUTCOMES depending on your choice of basis !

But the problem here, as I discussed in my earlier post, is that a significant measure of your "other consciousnesses" will conclude that the Born rule is false. It seems to me that this (postulating "my consciousness") does not quite address the issue.

No, you didn't get my proposal. There is only ONE consciousness of "patrick". Each time, it has to choose which body state to inhabit. The other body states physically evolve "normally" but only ONE possesses my consciousness.
There are now 2 ways to continue:

1) Solipsist: there is, in the whole universe only ONE SINGLE CONSCIOUSNESS: namely mine. After all, there is only ONE SINGLE PHYSICAL PROCESS I'm absolutely aware of to be a true observation: namely MY observations.
2) There are many consciousnesses out there, which each, independently, jump to their next body state by using the Born rule. So each time there is a split, only ONE is chosen. This means that most people I'm interacting with right now are bodystates which DO NOT have a consciousness. But their body, as a physical structure, will act in exactly the same way as if they were "inhabited".

So, the result, for myself, is the same: the bodystates of others I'm aware of are not conscious :-)

Indeed, there are many bodystates out there which, if they were inhabited by a consciousness, would observe disrespect of the Born rule. But they aren't, and as such, it doesn't even make sense to talk about their "world" because now there is nothing that requires that sums of product states be considered as separate worlds.

Don't get me wrong. I can sort of see the motivation of saying "my consciousness follows the Born rule." But it seems desirable to me to avoid falling back on that sort of explanation if at all possible, and I think it is.

I fully agree with you. However, given the CURRENT STATE OF AFFAIRS, I prefer the above picture, because at least it gives me a coherent view of quantum theory. As I've been repeating often here, it is just a story! But I cannot find any other that strictly sticks to current quantum theory, assigns some ontology to the formalism (not just "shut up and calculate"), and doesn't introduce extra *physical* assumptions which modify the QM predictions.
And it allows me to justify unethical behaviour towards other bodystates :-)))

Ok, this is not entirely true. Bohmian mechanics also allows for such a story. Only, there is too much "symmetry loss" to be paid to my taste: why should we stick to a lot of symmetries for the wave function part, and throw them all overboard to construct the guiding equation ?

cheers,
Patrick.
 
  • #159
vanesch said:
And the important point is to note that the application of any Born measure is DEPENDENT ON THE CHOICE OF BASIS WE MAKE.

Yes, just like the length of an object is dependent on the choice of frame of reference we make.

vanesch said:
The point is that the Born rule GIVES DIFFERENT OUTCOMES depending on your choice of basis !

Yes, it certainly does, just like GR gives different outcomes for length depending on the choice of frame!


vanesch said:
IT IS ONLY WHEN WE ASSIGN A SPECIAL STATUS TO THE COMPUTER ...
See, you have to assign "special observer status" to something, here the SG apparatus.

But we DON'T assign any special status to the computer or the SG apparatus, any more than we assign special status to whatever frame of reference that we used to solve a problem in GR.

Do you see the parallel I'm drawing here between "relativity" of states and general "relativity"? (This was the main point of my previous post.)

vanesch said:
So, the result, for myself, is the same: the bodystates of others I'm aware of are not conscious :-)

Gee, I'm feeling sort of woozy ...

David
 
  • #160
OK, I'm going through Everett's paper to see where probabilities are introduced. On page 460, he states: "In order to establish quantitative results, we must put some sort of measure (weighting) on the elements of a final superposition. ... We must have a method for selecting a typical element from a superposition of orthogonal states. We therefore seek a general scheme to assign a measure to the elements of a superposition of orthogonal states [itex] \sum_{i} a_{i} \phi_{i} [/itex]. We require a positive function [itex]m[/itex] of the complex coefficients of the elements of the superposition, so that [itex]m(a_{i})[/itex] shall be the measure assigned to the element [itex]\phi_{i}[/itex]."

Everett then goes on to discuss standard requirements of probability measures (things like additivity requirements, normalization, etc), and he demonstrates that [itex]m[/itex] is restricted to the form [itex]m(a_{i}) = a_{i}^{*} a_{i} = |a_{i}|^{2}[/itex]. So it's sort of made to look as if it couldn't have been any other way, that is, the probability measure MUST be given by the Born rule, and there is no other option.
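For what it's worth, the core of that restriction can be sketched in a few lines (this is my paraphrase of the usual argument, not Everett's exact wording). Demand that the measure depend only on the modulus, [itex]m(a_{i}) = f(|a_{i}|)[/itex], and that it be additive when a group of branches is lumped into one, i.e. [itex]f(\sqrt{\sum_{i} |a_{i}|^{2}}) = \sum_{i} f(|a_{i}|)[/itex]. Substituting [itex]g(x) = f(\sqrt{x})[/itex] turns this into [itex]g(\sum_{i} x_{i}) = \sum_{i} g(x_{i})[/itex], whose positive solutions are [itex]g(x) = c x[/itex], so [itex]m(a_{i}) = c |a_{i}|^{2}[/itex]; normalization then fixes [itex]c = 1[/itex].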

So I suppose that Everett did not quite simply "assume the Born rule" outright. But it seems to me that he did the next closest thing: he assumed that the unitary evolution of the composite state is given by the familiar wave equation, and he furthermore assumed that the probability measure of an eigenstate must be a function of its coefficient (and not a function of, I dunno, something else).

So to recap what I said a few posts back, the difficulty I see with this scheme is that the "physical measure" (as I defined a few posts ago) and the "probability measure" are not equal, and I would seek to find some sort of modification whereby they CAN be equated, along the lines of the "tweaking" that I suggested earlier. Perhaps this would require a different unitary operator in place of the Hamiltonian?

David
 
  • #161
straycat said:
Do you see the parallel I'm drawing here between "relativity" of states and general "relativity"? (This was the main point of my previous post.)

No, I don't, because in GR, when calculating the result of AN OBSERVATION, this result is independent of the frame in which you care to carry out its computation. But in QM, it IS DEPENDENT, if you consider "choice of the basis in which we apply the Born rule".

Let us look at it with an example.
Imagine I have a system S, which has a 3-dim hilbert state space.
Its basis can be |a>, |b> and |c>, but also, |1>, |2>, |3>, linked by a unitary base transformation.

Now imagine that I have an "observer" O which gets entangled with system S through a measurement in the basis {a,b,c}:

Before, we had:
|O_virgin> |OO_virgin> ( u1 |a> + u2 |b> + u3 |c> )
and after this "measurement" we have:

u1 |O_a> |a> + u2 |O_b> |b> + u3 |O_c> |c>

Now, "observer" OO gets entangled with system S in the 123 base:

as we had:
|a> = xa1 |1> + xa2 |2> + xa3 |3> etc...

we obtain:

u1 xa1 |O_a> |OO_1> |1> + u1 xa2 |O_a> |OO_2> |2> + u1 xa3 |O_a> |OO_3> |3>
+ u2 xb1 |O_b> |OO_1> |1> + u2 xb2 |O_b> |OO_2> |2> + u2 xb3 |O_b>|OO_3>|3>
+ u3 xc1 |O_c> |OO_1> |1> + u3 xc2 |O_c> |OO_2> |2> + u3 xc3 |O_c> |OO_3> |3>

But we could have written that in another way too, if we first decomposed according to OO and then according to O:

u1 xa1 |OO_1> (xa1* |O_a> |a> + xb1* |O_b> |b> + xc1* |O_c>|c>) +
u1 xa2 |OO_2> (xa2* |O_a> |a> + xb2* |O_b> |b> + xc2* |O_c>|c>) +
u1 xa3 |OO_3> (xa3* |O_a> |a> + xb3* |O_b> |b> + xc3* |O_c>|c>) +

u2 xb1 |OO_1> (xa1* |O_a> |a> + xb1* |O_b> |b> + xc1* |O_c>|c>) +
u2 xb2 |OO_2> (xa2* |O_a> |a> + xb2* |O_b> |b> + xc2* |O_c>|c>) +
u2 xb3 |OO_3> (xa3* |O_a> |a> + xb3* |O_b> |b> + xc3* |O_c>|c>) +

u3 xc1 |OO_1> (xa1* |O_a> |a> + xb1* |O_b> |b> + xc1* |O_c>|c>) +
u3 xc2 |OO_2> (xa2* |O_a> |a> + xb2* |O_b> |b> + xc2* |O_c>|c>) +
u3 xc3 |OO_3> (xa3* |O_a> |a> + xb3* |O_b> |b> + xc3* |O_c>|c>) +

= (u1 xa1 + u2 xb1 + u3 xc1) xa1* |OO_1> |O_a> |a>
+ (u1 xa1 + u2 xb1 + u3 xc1) xb1* |OO_1> |O_b> |b>
+ (u1 xa1 + u2 xb1 + u3 xc1) xc1* |OO_1> |O_c> |c>
+ (u1 xa2 + u2 xb2 + u3 xc2) xa2* |OO_2> |O_a> |a>
+ (u1 xa2 + u2 xb2 + u3 xc2) xb2* |OO_2> |O_b> |b>
+ (u1 xa2 + u2 xb2 + u3 xc2) xc2* |OO_2> |O_c> |c>
+ (u1 xa3 + u2 xb3 + u3 xc3) xa3* |OO_3> |O_a> |a>
+ (u1 xa3 + u2 xb3 + u3 xc3) xb3* |OO_3> |O_b> |b>
+ (u1 xa3 + u2 xb3 + u3 xc3) xc3* |OO_3> |O_c> |c>

Let us be clear: this state is identical to the previous state ! It is just another way of writing it, here in the basis |a> |b> |c> where the other one was in the basis |1> |2> |3>. There is physically no difference, and systems O and OO interacted in identical ways with the system under study.

If we first assign "observer status" to O, then there are 3 probability measures, namely |u1|^2, |u2|^2 and |u3|^2, to be assigned to 3 "worlds" in which OO appears "entangled with a 1 - 2 - 3" state. If we then assign "observer status" to OO, we find an overall probability for O to have observed "a" and OO to have observed "1" of |u1|^2 |xa1|^2.
However, if we first assign "observer status" to OO, and then to O, then the overall probability of O having observed "a" and OO having observed "1" equals |(u1 xa1 + u2 xb1 + u3 xc1) xa1* |^2, which is in general not the same as in the first case.

In von Neumann's approach, this is clear: because O and OO are incompatible measurements, first measuring O and then measuring OO is not the same as the opposite, because of the projection postulate. But in an MWI, where all is "entanglement", it matters in which basis we work ; if it were just a "point of view", the result shouldn't depend on it !

What we have done here is simply to show that the "Born measure" is different, for identical "observer states", according to whether we work in one basis or another.

cheers,
Patrick.
 
  • #162
straycat said:
So I suppose that Everett did not quite simply "assume the Born rule" outright. But it seems to me that he did the next closest thing: he assumed that the unitary evolution of the composite state is given by the familiar wave equation, and he furthermore assumed that the probability measure of an eigenstate must be a function of its coefficient (and not a function of, I dunno, something else).

Yes, indeed, this is Gleason's theorem. But again, there IS an extra assumption, which you point out: that the probability measure is only a function of the coefficient ; this is a property called non-contextuality.
But the very fact that you need this extra assumption, ABOUT A PROBABILITY MEASURE, kills the nice idea that from unitary evolution alone, you can deduce the probability measure in a natural way. You have to postulate its EXISTENCE before you can postulate any property about it (such as non-contextuality). That very existence of a probability measure kills (to my understanding) the original Everett program, because in order to postulate the existence of such a measure, you have to say WHEN you can apply it, which amounts to saying WHEN a physical system is a measurement system.

cheers,
Patrick.
 
  • #163
vanesch said:
Let us be clear: this state is identical to the previous state ! It is just another way of writing it, here in the basis |a> |b> |c> where the other one was in the basis |1> |2> |3>. There is physically no difference, and systems O and OO interacted in identical ways with the system under study.

Oops, what I wrote here is wrong. The two states are not identical, so my example fails...

sorry about that.

cheers,
Patrick.
 
  • #164
vanesch said:
Yes, indeed, this is Gleason's theorem.
ahhh, cool.

vanesch said:
But again, there IS an extra assumption, which you point out: that the probability measure is only function of the coefficient ; this is a property called non-contextuality.
Yes, so it seems that Everett did, in fact, throw in an extra assumption: essentially, he (slightly indirectly) assumed the Born rule. I was thinking today about whether it would be possible to assume a *different* rule. For example, we could assume that the probability measure m is a function, not of the coefficient, but of the number of branches (ie, the number of base states) at any given split. I think that to satisfy the requirements of a probability measure, we simply need each probability measure m to be a real valued number in [0,1], and we want the sum of the measures at any given "branching point" to equal 1. So we could, for example, assume that each trajectory gets followed with probability p = 1/N. This, to me, is the "natural" way for probabilities to emerge (it is basically equivalent to the "physical measure" I mentioned earlier).

The problem with setting m = 1/N, of course, is that it does not agree with experiment! But my point is that there is no *theoretical* reason we couldn't do it. I suppose that to make the scheme work with experiment, we would need a different unitary operator, ie one that takes one observer-state into N observer-states in a different fashion than the standard Hamiltonian.

vanesch said:
That very existence of a probability measure kills (to my understanding) the original Everett program, because in order to postulate the existence of such a measure, you have to say WHEN you can apply it, which amounts to saying WHEN a physical system is a measurement system.
But it seems to me quite clear when you apply it: you apply it at the very instant that the observer-state becomes entangled with the system-under-observation state.

This notion seems to me to be related to what I was trying to say earlier about the word "relativity" meaning the same thing in "relative state formulation" and "general relativity." Let me see if I can clarify this with an example. Suppose we are doing a simple EPR experiment: we have a pair of entangled, unpolarized electrons e_A and e_B emitted in opposite directions so that their spins are measured by Alice and Bob, respectively. Alice will measure the x-spin, and Bob the y-spin. Their SG apparati are equidistant from the emission site, and situated a distance L from one another. Once Alice observes the spin state of e_A, she immediately signals Bob with the result using a beam of light that encodes the result. Bob does likewise for Alice.

So the question is: WHEN do you apply the Born rule? I claim that to answer this question, you FIRST have to pick an observer. So let's say we pick Alice. Alice becomes entangled with the spin state of e_A at the instant that e_A interacts with Alice's SG apparatus. It is not until some time T = L/c later that she receives Bob's light signal; thus, she becomes entangled with the spin state of e_B AFTER she becomes entangled with e_A. So if we were to draw the tree-diagram (or whatever you call it) that tells us when worlds split, then you would see that it FIRST splits into two branches corresponding to e_A=up and e_A=down, and THEN, at an amount of time T later, each of these branches splits further into two more branches corresponding to e_B=up and e_B=down.

Let's say that we decide to pick Bob instead of Alice as the observer. By symmetry, the tree diagram will look the same, except that the order of the splitting is reversed: in this case, the first split corresponds to the measurement of e_B, and the second split corresponds to the measurement of e_A.

So the point is that WHEN you apply the Born rule is RELATIVE to the observer. Once you have picked the observer, there is no ambiguity. In this respect, I think that Everett has achieved what he set out to do.
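To make that bookkeeping explicit, here is a tiny sketch (the numbers are arbitrary, and T = L/c is set to 1): each observer's tree splits in the order in which *that* observer becomes entangled with the outcomes.

[code]
from dataclasses import dataclass

@dataclass
class Event:
    label: str
    t_alice: float   # time at which Alice becomes entangled with this outcome
    t_bob: float     # time at which Bob becomes entangled with this outcome

T = 1.0              # light travel time L/c between the two apparatuses
events = [Event("spin of e_A", 0.0, T),
          Event("spin of e_B", T, 0.0)]

for observer, clock in (("Alice", lambda e: e.t_alice),
                        ("Bob",   lambda e: e.t_bob)):
    order = [e.label for e in sorted(events, key=clock)]
    print(observer + "'s tree splits on:", "  then  ".join(order))
[/code]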

David
 
  • #165
straycat said:
So the question is: WHEN do you apply the Born rule? I claim that to answer this question, you FIRST have to pick an observer.

I agree with you. The problem is in the "picking of an observer". What is an observer, and what is not ?
That's how I'm led to talk about consciousness and things like that, because otherwise you have to specify physical interactions and systems which classify as "observer" and others which classify as "physical systems" with a hamiltonian. Once you feel free to do so, however, there is no problem with von Neumann either ! But in the current state of affairs, there is no indication of what the physical distinction is between an "observer" (something which, apparently, doesn't tolerate being in an entangled state with the rest of the world and has to "pick a branch" to "live in", instead of just happily assuming its entangled state like all good electrons do), and a physical system with a hamiltonian. So the very thing that "picks branches" and "lives in it" must be something quite peculiar, not a physical process, and in fact only a subjective experience, because from the outside, ALL physical systems happily get into entanglement and have hamiltonians (unless this turns out not to be true, for instance in gravity).
Now something that is based upon "subjective experience", "lives in" etc... and is not physically observable from the outside makes me think a lot of "consciousness". But it doesn't matter what we call it or what it is ; only SOMETHING must qualify as "observer", and must be associated with a physical structure (body?).
ONCE you do that, however, von Neumann is OK, no ?

cheers,
Patrick.
 
  • #166
vanesch said:
I agree with you. The problem is in the "picking of an observer". What is an observer, and what is not ?
That's how I'm led to talk about consciousness and things like that, because otherwise you have to specify physical interactions and systems which classify as "observer" and others which classify as "physical systems" with a hamiltonian.
...
But in the current state of affairs, there is no indication of what the physical distinction is between an "observer" (something which, apparently, doesn't tolerate being in an entangled state with the rest of the world and has to "pick a branch" to "live in", instead of just happily assuming its entangled state like all good electrons do), and a physical system with a hamiltonian.

But why do you persist in trying to divide the world this way, into things that do and do not "tolerate being in an entangled state with the rest of the world"? There is no such distinction. ANYTHING can play the role of observer, and ANYTHING can play the role of being in an "entangled state."

There are two main issues we have been talking about in this thread.

1) What physical objects qualify as "observer," and what do not? I claim that any physical object is a valid choice for either role. Therefore, there is no need to postulate "consciousness" or any such thing as a distinguishing property of the former.

2) The second issue has to do with assigning the probability measure m = a*a to each branch of the tree diagram. Does this issue relate to your postulating "consciousness"?

David
 
  • #167
Definition of probability

It seems to me that if we want to define a "probability measure," we need to define what probability *is*. The best way, imho, is to make it an observable. Here is how it might be done in general terms:

Suppose that we have a system in a state |x> which we know from experience can evolve, under set experimental conditions, into one of I states, |x_i>, with i being an integer in [1, 2, ..., I]. For example, a spin-1 particle, when put through a SG apparatus, can evolve into one of three states: +1, 0, or -1 (so we have I = 3). We want to know: what is the "probability" associated with each of these three outcomes?

The way we do this in practice is to prepare N identically-prepared systems |x>, do the experiment N times, count up the number of times n_i that we observe the |x_i> outcome, and say that the probability of |x_i> is p_i = n_i / N. Theoretically, we do this for N = infty, although practically, we just do this for some large finite N.
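A minimal sketch of that operational definition (the underlying outcome probabilities below are made up purely for illustration; in the lab they would come from the apparatus):

[code]
import numpy as np

# Simulate N runs of a three-outcome experiment and estimate p_i = n_i / N.
rng = np.random.default_rng(0)
N = 100_000
outcomes = rng.choice(["+1", "0", "-1"], size=N, p=[0.25, 0.5, 0.25])

values, counts = np.unique(outcomes, return_counts=True)
print(dict(zip(values, np.round(counts / N, 4))))   # frequencies -> the p_i as N grows
[/code]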

So now let's go back to the "physical measure" of the number of worlds that we get via the MWI. For a given I and N, we end up with I^N worlds. Our goal is to define a "probability measure" m_i that we can associate with each state |x_i>, and we will use it as a way to predict p_i. Once we find a way to calculate m_i, we'll call it "straycat's rule" :wink: (in place of the Born rule). I claim that *our goal* is to define m_i in such a way that each individual observer, at the end of a large number of measurements, will conclude that "straycat's rule" is correct: that is, that the predicted value m_i equals the observed value p_i. Actually, I can't say *each* observer. What I really want is for the **physical measure** of observers who conclude that straycat's rule is false to approach zero in the limit of a large number of measurements.

I'm pretty sure that it wouldn't be too difficult to show mathematically that the only way to define m_i with this property is to set m_i equal to 1/I. So this is "straycat's rule".

So to return to the spin-1 example, this means that each outcome, +1, 0, or -1, is associated with probability 1/3. This does not agree with experiment, so straycat's rule doesn't work! There are two ways to fix this:

1) Use Born rule instead of straycat's rule. But then we have to deal with the argument that the physical measure of the number of worlds in which the observer believes that Born's rule is WRONG is nonzero - in fact, it can be manipulated into being pretty big! And this leads us into postulating things like "consciousness," which we KNOW deep down will get us nowhere! C'mon, you know this!

2) Use straycat's rule, but consider the possibility that we did not determine I correctly. Suppose that we prepared our spin-1 particle such that the probabilities of +1, 0, and -1 are, respectively, for example, 1/6, 2/6, 3/6. Then we could say that I = 6, and after a single measurement, we have 6 worlds, one/two/three of which correspond to the observation of +1/0/-1. Note that this is well-defined because, by the definition of physical measure, these six "worlds" correspond to six *distinct* physical observer-states.

Option #2, of course, has some big unanswered questions, especially: how do we represent the "physical state" of the observer, and how do we calculate the number of distinct physical states that it can evolve into --that is, how do we calculate I? There are probably lots of schemes that could be devised and tested to do this. The advantage of option #2 over option #1, though, is that it leaves the door open for some *genuine* theorizing, as opposed to leading us down the path toward some metaphysical theory involving consciousness. Unless, of course, a metaphysical theory is what we truly seek, deep down?

So to sum up, I seek a scheme such that the *physical measure* of the number of worlds such that the observer determines that straycat's rule is false *approaches zero* in the limit of a large number of measurements. Compare this to the existing situation in the MWI, in which the physical measure of the number of worlds that contain an observer who concludes that Born's rule is false does NOT approach zero.
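Here is a quick numerical illustration of that contrast (my own toy setup, not anything from Everett): N identical two-outcome measurements with Born weights p and 1 - p, so 2^N branches in all.

[code]
from math import comb

# Each of the 2^N branches is a definite outcome sequence; a branch with k
# "up" results has counting weight 2**-N ("physical measure") and Born weight
# p**k * (1-p)**(N-k).  Compare the two measures of the set of branches whose
# observed frequency k/N lies within eps of p (the "Born-confirming" branches).
p, eps = 0.9, 0.05
for N in (10, 100, 1000):
    ks = [k for k in range(N + 1) if abs(k / N - p) <= eps]
    counting = sum(comb(N, k) for k in ks) / 2**N
    born = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in ks)
    print(N, " counting measure:", f"{counting:.3g}", " Born measure:", f"{born:.3g}")
# Branch counting concentrates near frequency 1/2, so the counting measure of
# the Born-confirming branches goes to 0 while their Born measure goes to 1.
[/code]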

David
 
  • #168
straycat said:
Option #2, of course, has some big unanswered questions, especially: how do we represent the "physical state" of the observer, and how do we calculate the number of distinct physical states that it can evolve into --that is, how do we calculate I? There are probably lots of schemes that could be devised and tested to do this.

Let me just point out one idea on how to start, which is based in classical mechanics. Suppose we have a system |O> that is in some classically well-defined state. We calculate its time-dependent evolution using the laws of motion. If we are using Newton's laws of motion, for example, and the initial state is genuinely well-defined, then we know that there is only one unique time-dependent evolution for |O>. So we must have I = 1.

But it turns out that such is not the case in general relativity! That is, it is possible to define a system that starts out in a *single, well-defined* state |O>, such that there is *more than one* valid solution to its evolution in time. The situation I'm thinking of is a paper [1] co-authored by Kip Thorne investigating the trajectory of a billiard ball. He found that if he allowed his manifolds to be non-simply connected, he could find *more than one* trajectory of the billiard ball, such that each *individual* trajectory is one valid solution to the equations of motion. So in this case, we can have I > 1!

My point here is that my "option #2" above does in fact have room for development. We could represent the state of the observer using nothing other than the classical framework of GR, and as long as we admit multiply-connected manifolds, then there is the possibility that a single well-defined state can have I > 1 distinct "options" for its future evolution. (And by "straycat's rule," each option is "equiprobable.")

And we don't need to postulate "consciousness."

David

[1] Echeverria, Klinkhammer, and Thorne, "Billiard balls in wormhole spacetimes with closed timelike curves: Classical theory," Physical Review D, Vol. 44, No. 4, 15 August 1991, pp. 1077-1099.
 
  • #169
straycat said:
Suppose that we have a system in a state |x> which we know from experience can evolve, under set experimental conditions, into one of I states, |x_i>, with i being an integer in [1, 2, ..., I]. For example, a spin-1 particle, when put through a SG apparatus, can evolve into one of three states: +1, 0, or -1 (so we have I = 3). We want to know: what is the "probability" associated with each of these three outcomes?

What you seem to miss (or what I'm not getting) is: WHY should we even talk about probabilities in the first place ? After all, unitary quantum theory just says that our spin-1 particle, after going through the SG apparatus, is now in a state of 3 superposed positions, and everything "looking at its position" is simply in an entangled state with the position states of the atom.
This is one single quantum state:

a|mybrain+>|myeye+>|detector+>|atompos+>|atomspin+> +
b|mybrain0>|myeye0>|detector0>|atompos0>|atomspin0> +
c|mybrain->|myeye->|detector->|atompos->|atomspin->

This is what unitary quantum theory tells us. So what should make us "split this in multiple worlds with multiple probabilities" ?
Why should we suddenly consider "mybrain+" as some different (?) observer than "mybrain-" ? What makes us now say that "mybrain+" observed the state of "myeye+" ? I can simply say that the physical structure which is mybrain is simply entangled with other physical structures, and I can in fact not draw any conclusion about any probabilistic "observation", no ?
So, SOMETHING must somehow have a property that it can only occur in a product state with the rest of the universe, because otherwise - as far as I understand - there is no indication at all why we should observe a probabilistic world in which only one branch "seems to be realized", no ?

The way I solve it (I am aware that it is a "shortcut" !) is to say that somehow there is something, a token, a "marble", which I call "consciousness" which can be associated with certain (one single ? solipsist ; all? back to animism :-) physical structures, but only with one single state which occurs in product form with the rest of the universe. So when entanglement occurs, it has to choose which term to pick, in a probabilistic way. It is this choice, and this probability, which determines the entire probabilistic structure of "observations".
I don't see how you can, without postulating such a "token" or "choice mechanism", go from the entangled state to a conclusion about probabilistic observations.
You also do that: you rewrite your entangled state in several terms (all with equal Hilbert norm), and then you DISTRIBUTE "observers" over their states, you being one of them. But what makes you think, in the first place, that different "observers" have to be distributed over these different terms ? Why cannot you happily assume the purely physical superposition of the wavefunction terms ? Why are different terms corresponding to DIFFERENT observers, which then, by themselves, have "different histories" and can calculate different observation probabilities ? Why cannot one "observer" just "observe" its entangled state ?

This step, from the wavefunction as a sum of terms, to picking ONE term and claiming it has something to do with the observations of an observer, is an extra postulate, and in doing so, you HAVE assigned "observer status" to certain physical structures. Mind you, that's EXACTLY what I do too :-) Only, I claim that this mechanism has somehow to be postulated OUTSIDE OF UNITARY QM. You should be aware of it. I don't see how you can do otherwise.

Once you ARE aware of it, that you need to assign "observer status" (I call it: give them a consciousness) to certain states of physical structures (at least to one structure, or even to all, if you want to), I don't see the difficulty in assigning directly the Born rule to the observations by that observer: you don't need to try to postulate other tricks from which you can then extract the Born rule: indeed, you ARE anyway postulating things, so go directly to the result you need.
You then also see that it is impossible, for an observer, to find out if another physical structure is an observer or not (has a consciousness or not, that's a well-known philosophical problem :-) Do electrons "observe" ? :-) Ok, for electrons, it is a bit hard because they don't have much memory space :-)

If you limit yourself to only ONE observer (one physical structure, which is associated with a token that chooses probabilistically, according to the Born rule, which term to pick in the wave function - just as well saying that it is YOU), you can then just as well go back to good old von Neumann formalism, using the projection postulate, where there is only ONE measurement apparatus in the universe, namely you. (well, me !).

cheers,
Patrick.
 
  • #170
vanesch said:
What you seem to miss (or what I'm not getting) is: WHY should we even talk about probabilities in the first place ?

Well, the way I defined it in my previous post, probability is simply an observable that is specific to a given branch, in the same way that the result of a spin measurement is an observable, the result of which is specific to a given branch. IOW, if you do N measurements on N identically prepared particles, then the sequence of N results is an observed quantity; likewise, the resulting probability is also an observed quantity.

In the scheme I have been promoting, the next step is to talk about the "measure" of worlds in which such-and-such a rule (Born rule, straycat's rule) is true. I would like to say that if the measure of worlds in which such-and-such rule is false is zero, then we can just ignore them.

But the fact remains that even if we follow my scheme (let's imagine I could make straycat's rule actually work), then there will still exist worlds in which "straycat's rule" is false. And the question is: why do we (or I should say, "I") exist in one of those worlds in which it works? I want to argue that it is because the "number of worlds" in which it works is way more than the "number" in which it does not.

But if I want to talk about the "number of worlds," then I need to define a measure. I prefer to use the "physical measure" because that just seems more "natural" to me. But can I really make a rigorous justification for this? As much as I'd like to, I actually DON'T have a rigorous argument for this! The best I have is that "it seems more natural."

So I completely agree with you that ANY assignment of a "probability measure" to each branch constitutes an "extra assumption." I submit that it would be worthwhile to explore what classes of measures *other than* Born's rule might actually be able to fit into an actual theory that fits actual experience. Maybe Born's rule will turn out to belong to some narrow class of measures that are "unbeautiful," and some other class of measures will turn out to possess some kind of symmetry that is appealing. I don't know, I'm just rambling here.

As you said earlier, Everett made an extra assumption:
vanesch said:
But again, there IS an extra assumption, which you point out: that the probability measure is only a function of the coefficient ; this is a property called non-contextuality.

What I want to do is replace it with a different assumption:
straycat said:
I claim that *our goal* is to define m_i in such a way that each individual observer, at the end of a large number of measurements, will conclude that "straycat's rule" is correct: that is, that the predicted value m_i equals the observed value p_i. Actually, I can't say *each* observer. What I really want is for the **physical measure** of observers who conclude that straycat's rule is false to approach zero in the limit of a large number of measurements.

Why do we want this? Essentially, I am *assuming* the "physical measure" as the probability measure.

How might we argue that any one definition of measure is "better" than any other? I don't know!

It occurs to me that the biggest difference between the Born rule and straycat's rule could be summed up like this: Born assumes that m is a function of a, whereas I assume that m is a function of the total number of branches. Might we perhaps argue that only certain types of variables should be "allowed" into the argument? Perhaps a locality criterion, that the argument must represent "locally accessible" information? I would think that the "number of branches" at a given branch point is, in fact, a "local" variable -- that is, if we define a space of observer-states in which the observer "lives." Would the coefficients a_i be "local variables"? I don't know - just rambling again.

vanesch said:
Why cannot you happily assume the purely physical superposition of the wavefunction terms ? Why are different terms corresponding to DIFFERENT observers, which then, by themselves, have "different histories" and can calculate different observation probabilities ? Why cannot one "observer" just "observe" its entangled state ?

Well, you *can* happily assume the superposition! As Everett writes in his paper: "there is no such transition, nor is such a transition necessary for the theory to be in accord with our experience. ... It is unnecessary to suppose that all but one [world] are somehow destroyed ..."

vanesch said:
Why cannot one "observer" just "observe" its entangled state ?

Everett again: "Arguments that the world picture presented by this theory is contradicted by our experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the Earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is."

I think that you know this argument, though. In fact, I think I remember that you made this argument in your discussion with Travis. (?) To me, the question is not: does it make sense to talk about "observers"? Rather, the question is: how do we justify ignoring the observers who conclude that our Theories of Nature are just plain wrong? What we are trying to do is to define a measure such that the measure of such observers approaches zero. Thus, the question is: how do we justify one measure over another? It would be nice to derive it in some way. But if we can't, then we accept the status quo: the adoption of a measure is an independent postulate.

David
 
  • #171
straycat said:
Well, you *can* happily assume the superposition! As Everett writes in his paper: "there is no such transition, nor is such a transition necessary for the theory to be in accord with our experience. ... It is unnecessary to suppose that all but one [world] are somehow destroyed ..."

Everett again: "Arguments that the world picture presented by this theory is contradicted by our experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the Earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is."

My point was not that I somehow think that our experience contradicts MWI ; although one can understand why Everett needed to defend himself here when he wrote that down.

My point was that, taking for granted (as I do) that there is strict unitary evolution, there is still NOTHING in the entire postulate system that relates this "statefunction of the universe" to any actual "observation", and that you need to say, somehow, that an observation is somehow related to ONE term. That is far from evident, a priori, and the comparison with Copernicus is some nice rhetoric, but misses the mark: in classical mechanics you CAN calculate the accelerations that an individual on the surface of the Earth will experience. But simply given your wave function, somehow you NEED TO MAKE AN EXTRA STATEMENT that what you are going to observe, as an observer, will have something to do only with ONE TERM in the Schmidt decomposition of the physical states of the observer and the rest. I don't see how you can somehow DEDUCE this. You could just as well make a statement that the observer will, say, always find the average of the values associated to 3 terms, no ? So that "an observer" corresponds to some random choice of 3 terms in the Schmidt decomposition (I'm just making this up here). Or to all of them. If you measure A, you always measure its expectation value, say. As I'm making these rules up while I'm writing, they will probably contain elementary errors, but they illustrate the fact that the very choice of a single term is something that is an extra assumption. And THEN, there are the further extra assumptions, which we've been talking about, of how to assign probabilities. But the very first assumption is that somehow, ONE term has to be picked out. This, to me, is far from evident when you only have the unitary part.
But I'm not fighting it, so Everett's defense doesn't address my point. I'm simply saying that in order to do so, you need an extra assumption.

cheers,
Patrick.
 
  • #172
I am here resurrecting a very old thread in which vanesch and I discussed the issue of locality at great length. I was recently skimming through it, and noticed that (in what seemed at the time to be an important development) vanesch actually made a significant error that I didn't catch at the time. If vanesch is still around, maybe he'd like to comment. But I at least wanted to set the record straight.

vanesch said:
My claim is that causality only has a meaning as "information transfer". This can be "internal information transfer" also, even if we cannot perform real experiments in the lab because the internal quantity we're talking about is not directly accessible (such as a hidden variable) ; but one thing is necessary to be able to send information, and that is making free choices at the sending end. Upon my decision to act at A, whether or not something happens at B determines whether there is information transfer and hence a causal link.
Some semantics: my "choice at A" _causes_ "an effect at B". In order to cause something, I have to have a choice in causing it, otherwise I just see it as a "description of what is happening" and not of "what causes what".
Let us call this view on causality "information - causality".
From "information - causality" follows then naturally "information - locality".
I told you why I think that is the right definition, it comes from a paradox you can obtain in SR if you don't stick to it.

You could also define a "correlation causality" and it leads to "Bell Locality".
"Correlation causality" states that you can only have statistical correlations if there is a direct dependence of the result at A on the result at B (in a statistical sense) or if they have both a common origin (state L). Bell Locality is the mathematical expression of this causality if we assume that the direct influence cannot take place ("locality"), that the only link between the two factors is through L (common cause).

But I don't see anything in special relativity that requires Bell locality.

I will now try to find the link between "information locality" (required by SR) and Bell Locality (required by, eh, what ? We'll see :-).

My second claim is that "Bell Locality" is the above notion, applied to an underlying deterministic model ; that the notion that a "correlation implies a direct causal link or an indirect common cause link" finds its origin in a deterministic underlying model.
I think it is THIS point which is hard to get past, because THIS is the real paradigm shift needed to let go of determinism. And I think it was this paradigm shift that Bell couldn't conceive of, namely that you could have correlations which do NOT imply a direct causal link or an indirect common-cause link.
I don't know what I can do more than reiterate Patrick's theorem :smile:
"Any stochastic theory satisfying Bell locality leads to a deterministic theory satisfying Bell locality".
I think it is a small step to show:
"From Bell locality follows information locality."

Indeed, the factorized form of P(A,B ; a,b) = P(A ; a) x P(B ; b) means that the choice of a cannot influence the probability of B.


I guess what I still should try to prove is that from information-local determinism follows Bell locality.

So now we have, from determinism, that P(A,B ; a,b,K) equals 1 or 0 ; so do the individual probabilities P(A ; a, b, K) and P(B ; a, b, K) ;
and from information locality follows that P(A ; a, K) and P(B ; b, K) do not depend on the "other" choices b and a respectively.

This means, in fact, that A = A(a) and B = B(b): for a given value (choice) of a, there is ONE A value that is the outcome, with certainty ; all other A values have probability 0.

So P(A(a), B(b) ; a,b,K) = 1 = P(A(a) ; a,K) x P(B(b) ; b,K)

So at least for the P=1 values, we can write the product form.
But this is also true for the P=0 values, because if A1 != A(a) OR B1 != B(b), then P(A1, B(b) ; a,b,K) = 0 = P(A1 ; a,K) x P(B(b) ; b, K) (namely 0 x 1)
and idem for the two other cases A(a), B1 and A1,B1.

So we have that determinism and information-locality leads to Bell locality.
So I came to a full circle:

(1) From Bell locality follows Bell local determinism. (Patrick's theorem)
(2) From Bell locality follows information locality
(3) From information locality and determinism follows Bell locality

Together:

BELL LOCALITY <===> information locality and determinism

QED

cheers,
Patrick.

OK, first some terminology. What Patrick is here calling "information locality" is this mathematical condition on probabilities:

P(A|a,b,L) = P(A|a,L)

That is, the probability for a given outcome on one side doesn't depend on the distant setting b. And of course vice versa: P(B|a,b,L) = P(B|b,L).

In the literature, this condition is usually called "parameter independence" or PI for short. (That is Shimony's name for it. Jarrett, who introduced the idea, called it something else.) A similar condition is called "outcome independence" or OI for short. This condition says that

P(A|a,b,B,L) = P(A|a,b,L)

and, conversely,

P(B|a,b,A,L) = P(B|a,b,L)

That is, the probabilities for a given outcome on one side (given that we're conditionalizing on both settings) don't depend on the distant outcome.

Now what I noticed about this old "proof" of Patrick's is that he smuggled in OI. He brings in PI explicitly, out in the open. But OI is brought in as well, without being identified as a premise. This happens right at the beginning where he says

So now we have, from determinism, that P(A,B ; a,b,K) equals 1 or 0 ; so do the individual probabilities P(A ; a, b, K) and P(B ; a, b, K) ;
and from information locality follows that P(A ; a, K) and P(B ; b, K) do not depend on the "other" choices b and a respectively.

You see, the "individual probabilities" should have been written initially as
P(A|a,b,B,K) and P(B|a,b,A,K). (He uses "K" instead of "L" to denote the complete specification of the state of the pair.) By simply omitting the in-principle-possible dependence on the distant outcomes (B for A and vice versa), Patrick tacitly assumes outcome independence (OI).

The rest of the proof then amounts to nothing but showing that applying parameter independence (PI) leads back to Bell's Locality condition. But it is a well-known and obvious fact that Bell Locality is equivalent to the conjunction of OI and PI. For, by OI

P(A|a,b,B,L) = P(A|a,b,L)

and then by PI

P(A|a,b,L) = P(A|a,L)

so that, using both of them, we have P(A|a,b,B,L) = P(A|a,L). And that is precisely Bell Locality.
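For concreteness, here is a quick numerical check (my own illustration, using the standard singlet-state probabilities) that orthodox QM satisfies PI but violates OI, and hence violates Bell Locality, while still not allowing signalling:

[code]
import numpy as np

# Singlet state, spin measurements along directions a, b (angles in radians);
# outcomes A, B in {+1, -1}.  Standard QM joint probability:
#   P(A, B | a, b) = (1 - A*B*cos(a - b)) / 4
def P_joint(A, B, a, b):
    return (1 - A * B * np.cos(a - b)) / 4

def P_A(A, a, b):                              # Alice's marginal
    return sum(P_joint(A, B, a, b) for B in (+1, -1))

def P_A_given_B(A, B, a, b):                   # conditioned on Bob's outcome
    return P_joint(A, B, a, b) / sum(P_joint(Ap, B, a, b) for Ap in (+1, -1))

a, b1, b2 = 0.0, np.pi / 3, np.pi / 2
print("PI: ", P_A(+1, a, b1), "=", P_A(+1, a, b2))                 # 0.5 = 0.5
print("OI: ", P_A_given_B(+1, +1, a, b1), "vs", P_A(+1, a, b1))    # 0.25 vs 0.5
[/code]

Alice's marginal stays 1/2 whatever Bob's setting is (so no signalling), but conditioning on Bob's outcome changes it -- and that conditional dependence is exactly the OI violation.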

So what? This shows that it is simply not the case that, as Patrick claimed, Bell Locality is equivalent to "information locality" (which, remember, is his name for PI) and determinism. Rather, Bell Locality is equivalent to PI and OI and determinism. But that is a really rather pointless conclusion, given that Bell Locality is also equivalent to PI and OI (without determinism). So really all that was shown here is two unrelated things:

1. Any time you have a stochastic theory, it's possible to introduce hidden variables and make it into a deterministic theory with the same predictions. (This is true for any theory, whether it violates Bell Locality or not. A Bell Nonlocal stochastic theory can be made into a Bell Nonlocal deterministic theory by adding hidden variables. A Bell Local stochastic theory can be made into a Bell Local deterministic theory by adding hidden variables.)

2. Bell Locality is equivalent to the conjunction of OI and PI.

These two distinct points are quite different from Patrick's conclusion from all of this -- namely, that what Bell *really* cared about was determinism, and that all is well (for the consistency of QM and SR) if you merely let go of the attachment to determinism. That just ain't so. Determinism really has nothing to do with it (given that any theory that isn't deterministic can be made into one that is by adding hv's).

The real question is whether orthodox QM (which violates Bell Locality and isn't deterministic) can be made into a theory that respects Bell Locality by adding hidden variables. Whether that new theory is deterministic or not is completely irrelevant. Of course, if you can do it at all, then you can do it with a deterministic theory (because any non-deterministic theory can be made into a deterministic one by adding more hv's). But that simply isn't the important issue here. The important thing is Bell Locality: can QM be replaced by something which actually respects Bell Locality? The answer is no (as Bell's Theorem proves) -- at least, not if the QM predictions are correct (and experiment suggests they are).

I hope this clarifies some things, or at least makes people realize they may have concluded the wrong thing way back when...
 
  • #173
ttn said:
I am here resurrecting a very old thread in which vanesch and I discussed the issue of locality at great length. I was recently skimming through it, and noticed that (in what seemed at the time to be an important development) vanesch actually made a significant error that I didn't catch at the time. If vanesch is still around, maybe he'd like to comment. But I at least wanted to set the record straight.

Vanesch is still around, but was on a holiday with not much internet access (dial-up at my mother-in-law's ) :smile:

I noticed this so-called "outcome independence" already before in some of the posts on Bell's stuff, and I think it is an abuse of probability theory, so I agree with what you write, but I don't consider it an error on my part, because "outcome independence" is something that is BUILT INTO KOLMOGOROV PROBABILITY.

OK, first some terminology. What Patrick is here calling "information locality" is this mathematical condition on probabilities:

P(A|a,b,L) = P(A|a,L)

That is, the probability for a given outcome on one side doesn't depend on the distant setting b. And of course vice versa: P(B|a,b,L) = P(B|b,L).

We agree fully here ; well, except for a nitpicking detail in notation which will turn out to be crucial:
I wrote: P(A ; a,b,L) etcetera, and that was not because I have a defective keyboard ; it is because I meant: a Kolmogorov probability measure, which is PARAMETRISED by a, b and L, and of which I mean the probability measure of A (with these parameters). It is a bit as if I wrote: f(x ; a) = x^(a-th prime number), and I say that f(x ; a) is a polynomial. It is in fact a family of polynomials, and a picks out the polynomial ; but clearly f(x,a) as a function of 2 variables x and a is NOT a polynomial, or an analytic function ! It's not even defined for non-integer a.

That's why I wrote the ; and not a |, because | has a specific meaning within a Kolmogorov probability measure: conditional probability.
So I'd write: P(B|A ; a,b,L) which means the conditional probability measure of B on condition A, for the parametrised probability measure with parameters a,b, and L, and is equal (by definition) to the ratio of two measures:
the one of the intersection of A and B, and the one of A.

In the literature, this condition is usually called "parameter independence" or PI for short. (That is Shimony's name for it. Jarrett, who introduced the idea, called it something else.) A similar condition is called "outcome independence" or OI for short. This condition says that

P(A|a,b,B,L) = P(A|a,b,L)

and, conversely,

P(B|a,b,A,L) = P(B|a,b,L)

That is, the probabilities for a given outcome on one side (given that we're conditionalizing on both settings) don't depend on the distant outcome.

But as I was not talking about conditional probabilities, but only of probabilities (measures) of A and B, it does not make sense, in a Kolmogorov probability system (which we have, once we have fixed a,b and L), to do so. Let us fix for the moment the family. Once a, b and L are fixed, our Kolmogorov measure is fixed. Now, within this measure, it can, or cannot be true that P(A) = P(A|B), of course. I don't see where I did use that. But P(A) has a perfectly well-defined meaning, and so does P(A|B).
Moreover, in a DETERMINISTIC theory, there are no other probabilities but 1 and 0, for ALL possible measurable sets. I think that's all I needed.
Once we have fixed the probability measure (by fixing a,b and L: choosing the probability distribution amongst the family), then P(X) will be an element of {0,1}. I think that's what is meant by determinism, no ?
So P(A) will be 0 or 1 (depending on the choice of a, b and L) ; and so will P(A|B) if it is defined (if P(B) is not 0).

You see, the "individual probabilities" should have been written initially as
P(A|a,b,B,K) and P(B|a,b,A,K). (He uses "K" instead of "L" to denote the complete specification of the state of the pair.) By simply omitting the in-principle-possible dependence on the distant outcomes (B for A and vice versa), Patrick tacitly assumes outcome independence (OI).

This is correct, but it is part of the definition of a probability measure. I'm not talking about conditional probabilities, I'm just talking about the probability measures of A and of B, once the measure is fixed (by fixing a,b and L).

The rest of the proof then amounts to nothing but showing that applying parameter independence (PI) leads back to Bell's Locality condition. But it is a well-known and obvious fact that Bell Locality is equivalent to the conjunction of OI and PI.

That simply means then that my theorem was not my original work
But I think that nowhere did I explicitly need to assume that the conditional probability P(B|A) = P(B). I just work with P(A) and with P(B) and with P(A,B), which is the measure of the intersection of A and B. These are 3 measures which come out of the Kolmogorov probability which is fixed once we have fixed a, b and L. And once we assume this distribution to be DETERMINISTIC, this means that these three numbers cannot be anything else but a 0 or a 1.
Now, information locality means that the probability of A at Alice does not depend on what I (Bob) do with my choice of b. It hasn't got anything to do with what I got as an outcome because Alice doesn't know that. So information locality really means that P(A) (the only thing Alice can learn) is not dependent on what I can choose (the parameter b). It hasn't got anything to do with a conditional probability P(A|B) because Alice doesn't care what I measure, and I cannot INFLUENCE it. I can only influence the parameter b, to send a message to Alice. If I'm not supposed to send a message to Alice, it is THIS probability (P(A)) which should be independent of my choice, and not P(A|B) - which Alice doesn't know about anyway.

Now, given the fact that the distribution (for a given choice of a, b and L) is deterministic, we have then the following possibilities for the measure with a given a, b and L, for each thinkable measurable set A and B:

P(A) = 1 ; P(B) = 1
P(A) = 0 ; P(B) = 1
P(A) = 1 ; P(B) = 0
P(A) = 0 ; P(B) = 0

Normally from the individual probability measures of A and B, we cannot determine the measure of the intersection of A and B, but in this degenerate case we can of course, and we have respectively:
P(A,B) = 1
P(A,B) = 0
P(A,B) = 0
P(A,B) = 0

Which can be written, in a trivial way, in the product form P(A) x P(B) in each case, so P(A,B) = P(A) x P(B) ; no matter what A and what B.

And that's all there was to show.
It is thanks to determinism that we got these degenerate probabilities which allowed us to infer the measure of the intersection of A and B. Nowhere did I need conditional probabilities, and hence I made no hypothesis about them.

cheers,
Patrick.
 
  • #174
vanesch said:
I noticed this so-called "outcome independence" already before in some of the posts on Bell's stuff, and I think it is an abuse of probability theory, so I agree with what you write, but I don't consider it an error on my part, because "outcome independence" is something that is BUILT INTO KOLMOGOROV PROBABILITY.

I don't really understand this. I just don't know any details about formal Kolmogorov probability theory. In what way are the variables one "conditions on" there (I gather that's technically the wrong word, but I don't know what the right one is) different from regular variables in regular conditional probabilities?

And how can it be that outcome independence is somehow built into the axioms of probability theory? What does this mean for OQM since that theory violates OI?


That simply means, then, that my theorem was not my original work.

You shouldn't be too upset. The whole scheme of analyzing Bell Locality into Outcome Independence and Parameter Independence was torn to shreds by Maudlin.


But I think nowhere did I need to explicitly assume that the conditional probability P(B|A) = P(B).

I don't know now. You'll have to explain the difference between conditionalizing on a variable and regarding it as a parameter (or whatever the right terminology is) in the Kolmogorov framework.

But as far as I know, Bell Locality is still the condition that

P(A|a,b,B,L) = P(A|a,L).
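
To make my question concrete, here is a toy stochastic model (made-up numbers, not OQM and not your deterministic case): the settings and L index a whole family of Kolmogorov measures over the outcome pairs, while an outcome is something you condition on inside one of them. In this example Alice's marginal does not depend on b (parameter independence holds), yet conditioning on B does change it (outcome independence fails), so the full Bell Locality condition above fails too.

```python
# Toy family of probability measures indexed by (a, b, L), with made-up numbers.
# Each value maps an outcome pair (A, B) to its probability.
measures = {
    ("a1", "b1", "L"): {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4},
    ("a1", "b2", "L"): {(+1, +1): 0.25, (+1, -1): 0.25, (-1, +1): 0.25, (-1, -1): 0.25},
}

def marginal_A(key, A):
    # P(A | a, b, L): Alice's marginal within one measure of the family.
    return sum(p for (x, _), p in measures[key].items() if x == A)

def conditional_A_given_B(key, A, B):
    # P(A | a, b, B, L): condition on Bob's outcome inside the same measure.
    joint = sum(p for (x, y), p in measures[key].items() if x == A and y == B)
    return joint / sum(p for (_, y), p in measures[key].items() if y == B)

# Parameter independence: the marginal is the same for b1 and b2 (0.5 and 0.5).
print(marginal_A(("a1", "b1", "L"), +1), marginal_A(("a1", "b2", "L"), +1))
# Outcome dependence: conditioning on B = +1 shifts it to 0.8, so OI fails here.
print(conditional_A_given_B(("a1", "b1", "L"), +1, +1))
```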


I just work with P(A), with P(B), and with P(A,B), which is the measure of the intersection of A and B. These are 3 measures which come out of the Kolmogorov probability measure, which is fixed once we have fixed a, b and L.

Just to repeat my request above, can you clarify how this applies to orthodox QM? Because surely in OQM we don't have

P(A,B;a,b,L) = P(A;a,b,L) * P(B;a,b,L).

Right? Somehow you've got to "conditionalize" (or whatever) one of the two factors on the right on the other outcome (just like Bayes' rule requires). You seem to be saying that there is no need or ability to do this, yet OQM requires it... :frown:
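
For concreteness, here is a minimal sketch using the textbook singlet-state probabilities (my numbers, not anything from your post) showing that at fixed settings the OQM joint distribution does not factor into the two marginals:

```python
import numpy as np

def singlet_joint(theta, A, B):
    """Textbook spin-1/2 singlet probabilities P(A, B | a, b), where theta is
    the angle between the analyzer directions a and b, and A, B are +1 or -1."""
    return 0.5 * (np.sin(theta / 2) ** 2 if A == B else np.cos(theta / 2) ** 2)

theta = np.pi / 3                                          # 60 degrees between settings
p_joint = singlet_joint(theta, +1, +1)                     # P(A=+1, B=+1) = 0.125
p_a = sum(singlet_joint(theta, +1, B) for B in (+1, -1))   # Alice's marginal = 0.5
p_b = sum(singlet_joint(theta, A, +1) for A in (+1, -1))   # Bob's marginal   = 0.5
print(p_joint, p_a * p_b)                                  # 0.125 vs 0.25: no product form
```

To recover the joint you have to condition one factor on the other outcome, which is exactly the "outcome dependence" at issue.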



And once we assume this distribution to be DETERMINISTIC, these three numbers cannot be anything else but 0 or 1.
Now, information locality means that the probability of A at Alice does not depend on what I (Bob) do with my choice of b.

I hate to make a fuss over terminology, but could you use the technical term "parameter independence" if that's what you mean? Or "signal locality" if that's what you mean? (And btw, these are not the same: violating signal locality requires parameter dependence *and* sufficient control over the prepared initial state of the system.)

It has nothing to do with what I got as an outcome, because Alice doesn't know that.

That's right. I mean, that's why OQM is consistent with signal locality. Bob can't send a signal to Alice by making a measurement, because his measurement collapses his particle to some definite but unpredictable state. This causes Alice's particle also to collapse to some definite state, but how that relates to what Bob got is unknown to her. So she can't learn what he did by measuring something on her particle. In other words, the randomness associated with the collapse masks the non-locality of the collapse. OQM violates Bell Locality, but it doesn't permit superluminal signalling.
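
As a quick check of that last point (again just a sketch with the textbook singlet probabilities, nothing from the thread itself), Alice's marginal statistics come out the same no matter what Bob chooses for b:

```python
import numpy as np

def singlet_joint(theta, A, B):
    # Same textbook singlet probabilities as in the sketch above (A, B in {+1, -1}).
    return 0.5 * (np.sin(theta / 2) ** 2 if A == B else np.cos(theta / 2) ** 2)

# Let Bob rotate his analyzer (sweep the relative angle): Alice's marginal P(A=+1)
# never budges, so she cannot read Bob's setting off her own data.
for theta in np.linspace(0, np.pi, 5):
    p_alice_plus = sum(singlet_joint(theta, +1, B) for B in (+1, -1))
    print(f"relative angle = {theta:.2f} rad  ->  P(A=+1) = {p_alice_plus:.2f}")
```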


So information locality really means that P(A) (the only thing Alice can learn) is not dependent on what I can choose (the parameter b). It has nothing to do with a conditional probability P(A|B), because Alice doesn't care what I measure, and I cannot INFLUENCE it. I can only influence the parameter b, to send a message to Alice. If I'm not supposed to be able to send a message to Alice, it is THIS probability (P(A)) which should be independent of my choice, and not P(A|B) - which Alice doesn't know about anyway.

Here you're sliding back and forth between "signal locality" and "what relativity requires." Remember, Bohmian Mechanics is also consistent with signal locality, yet somehow you (and most others) think that this theory is inconsistent with relativity. No double standards.


Normally, from the individual probability measures of A and B, we cannot determine the measure of the intersection of A and B, but in this degenerate case we can, of course, and we have respectively:
P(A,B) = 1
P(A,B) = 0
P(A,B) = 0
P(A,B) = 0

In each case this can be written, trivially, in the product form P(A) x P(B), so P(A,B) = P(A) x P(B), no matter what A and B are.

And that's all there was to show.
It is thanks to determinism that we got these degenerate probabilities, which allowed us to infer the measure of the intersection of A and B.

I still don't understand what you think this proves. Is it that a deterministic theory automatically respects "outcome independence"? I suppose that's true, especially if you *define* determinism in terms of

P(A|a,b,L)

and

P(B|a,b,L)

equalling either 0 or 1. But then, what's actually relevant is not that those probabilities equal {0,1}, but simply that you've written them without any "outcome dependence"! And obviously a theory with no outcome dependence will respect OI. But that has nothing to do with whether it's deterministic.

Nowhere did I need conditional probabilities, and hence I made no hypothesis about them.

As far as I can tell, this is true by fiat only. You define "determinism" in a way that precludes outcome dependence from the very beginning. But this is misleading and unnecessary, since we know that Bell Locality = OI and PI *regardless* of whether or not we also have determinism.
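
For reference, the standard derivation behind that last claim (textbook material, not something new in this thread) takes two lines and nowhere uses determinism:

```latex
\begin{align*}
P(A,B \mid a,b,L)
  &= P(A \mid a,b,B,L)\, P(B \mid a,b,L) && \text{(product rule)} \\
  &= P(A \mid a,b,L)\, P(B \mid a,b,L)   && \text{(outcome independence)} \\
  &= P(A \mid a,L)\, P(B \mid b,L)       && \text{(parameter independence)}
\end{align*}
```

The last line is exactly the Bell Locality factorization, and nothing in the chain requires the probabilities to be 0 or 1.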
 
  • #175
ttn, excuse me for breaking in, but I read this post pretty carefully, and I see you making a distinction between "signal locality":

Bob can't send a signal to Alice by making a measurement, because his measurement collapses his particle to some definite but unpredictable state. This causes Alice's particle also to collapse to some definite state, but how that relates to what Bob got is unknown to her. So she can't learn what he did by measuring something on her particle. In other words, the randomness associated with the collapse masks the non-locality of the collapse. OQM violates Bell Locality, but it doesn't permit superluminal signalling.

and "what relativity requires". But Einstein developed special relativity by considering observers (who might as well be called Alice and Bob) comparing their measurements in different inertial frames via signals limited to the speed c. So if QM obeys signal locality, why doesn't it satisfy what relativity requires?
 