Question regarding the Many-Worlds interpretation

In summary: MWI itself is not clear on what to count. Are all branches equal? Are there some branches which are more "real" than others? Do you only count branches which match the experimental setup? Do you only count branches which match the observer's expectations? All of these questions lead to different probabilities. So the idea of counting branches to get a probability just doesn't work with the MWI. But we can still use the MWI to explain why we observe "x" more often than "y". In the grand scheme of things, there are more branches where we observe "x" because "x" is the more stable and long-lived state. So even though
  • #351
S.Daedalus said:
All I've heard you say in that direction is something about 'hypothesis testing'. And yes: you can form, test, and validate the hypothesis that the relative frequencies are Born-distributed. But if you do so, it's wholly independent of the MWI: it neither implies nor contradicts this hypothesis.
Right, we cannot distinguish between interpretations experimentally, all interpretations give the same observations. That's why they are called interpretations.

But 'where are the branches' has a clear-cut answer in Copenhagen: the collapse gets rid of them.
"But this is an issue, Copenhagen has no way to generate multiple branches!"
(Don't take this seriously; that's how some arguments here look to me.)


Okay, seriously, I don't think further discussion with me here will help anyone.
 
  • #352
mfb said:
Right, we cannot distinguish between interpretations experimentally, all interpretations give the same observations. That's why they are called interpretations.
But that's the point: the MWI you propose does not account for our observations.
 
  • #353
mfb said:
I just don't think new posts would add anything new, or make anything better. We need someone who can explain that better than me.

"is not required" comes from the attempts to apply Copenhagen to MWI. It is like asking "where are the additional branches in Copenhagen?"
mfb, the argument against "is not required" is not Copenhagen but experiment. You constantly ignore the fact that there are experimentally observed statistical frequencies which require an interpretation. You ignore the fact that we can use tr(Pρ) to calculate something formally which approximates these observed statistical frequencies.

The simple questions for the MWI are:
  1. Why does this work in so many cases?
  2. What replaces the Born rule probabilities and explains the statistical frequencies of 90% - 10% in my original question?

The answer "there are no probabilities" may eliminate the concept of probability from an interpretation, but it does not eliminate the entity tr(Pρ) from the formalism. So if you want to interpret the formalism and its application to the real world, you should be able to explain why tr(Pρ) works FAPP even though it is not required.
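The role of tr(Pρ) can be made concrete with a toy calculation (my own numerical sketch, not from the thread; the 90%/10% state is a hypothetical choice matching the frequencies mentioned earlier):

```python
import numpy as np

# Hypothetical qubit state |psi> = sqrt(0.9)|0> + sqrt(0.1)|1>
psi = np.array([np.sqrt(0.9), np.sqrt(0.1)])
rho = np.outer(psi, psi.conj())          # density matrix |psi><psi|

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto |0>
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])  # projector onto |1>

p0 = np.trace(P0 @ rho).real             # Born weight for outcome 0
p1 = np.trace(P1 @ rho).real             # Born weight for outcome 1
print(p0, p1)                            # ≈ 0.9 0.1

# Simulated "observed statistical frequencies" for comparison
rng = np.random.default_rng(0)
outcomes = rng.random(100_000) < p1
print(outcomes.mean())                   # ≈ 0.1
```

Whatever the interpretation, this is the quantity that matches the observed statistical frequencies FAPP; the question raised here is why it does.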

In order to make this claim against your position I do not rely on Copenhagen but on "shut up and calculate"; I am asking why this works, so please explain.
 
  • #354
S.Daedalus said:
But 'where are the branches' has a clear-cut answer in Copenhagen: the collapse gets rid of them.

I asked this question earlier in this thread, and got no answer: How does getting rid of the other branches change anything, such as the predictiveness of the Born rule, on THIS branch?
 
  • #355
stevendaryl said:
I asked this question earlier in this thread, and got no answer: How does getting rid of the other branches change anything, such as the predictiveness of the Born rule, on THIS branch?
Because you end up with a state in some subspace, over which Gleason's theorem provides a measure, which gives you the Born rule. Also, because it introduces alternatives in the first place: this thing happens rather than that one, making an appeal to probability coherent.
 
  • #356
S.Daedalus said:
Because you end up with a state in some subspace, over which Gleason's theorem provides a measure, which gives you the Born rule. Also, because it introduces alternatives in the first place: this thing happens rather than that one, making an appeal to probability coherent.

I'm saying that the nonexistence of other "branches" is not something that is observable in this branch (well, not in practice--to observe the effects of other branches would require detecting interference among branches, and if the branches involve macroscopically different states, then this is impossible). So whatever rule of thumb you are using to extract predictive content from QM doesn't actually require collapse. You can do the same procedure whether or not collapse happens, and you'll get the same results.
 
  • #357
stevendaryl said:
I'm saying that the nonexistence of other "branches" is not something that is observable in this branch (well, not in practice--to observe the effects of other branches would require detecting interference among branches, and if the branches involve macroscopically different states, then this is impossible). So whatever rule of thumb you are using to extract predictive content from QM doesn't actually require collapse. You can do the same procedure whether or not collapse happens, and you'll get the same results.
As I said, the applicability of Gleason's theorem to extract the Born probabilities depends on having a state that is really in one of the subspaces, rather than a superposition. This is where the collapse comes in; without it, Gleason's theorem simply doesn't talk about the probabilities. There are no observable differences, no, but there are important conceptual ones that mean you must reason differently if assuming different ontologies. Assuming a collapse, probabilities follow; without it, I don't see how.
 
  • #358
S.Daedalus said:
As I said, the applicability of Gleason's theorem to extract the Born probabilities depends on having a state that is really in one of the subspaces, rather than a superposition. This is where the collapse comes in; without it, Gleason's theorem simply doesn't talk about the probabilities. There are no observable differences, no, but there are important conceptual ones that mean you must reason differently if assuming different ontologies. Assuming a collapse, probabilities follow; without it, I don't see how.

There is something fishy about this argument. There is no observable difference between situation A and situation B, but the fact that there is a mathematical technique that works for situation A, but not for situation B is an argument that A must be the case?

The statement of Gleason's theorem does not mention the collapse hypothesis. It's about a measure on subspaces of a Hilbert space.
 
  • #359
stevendaryl said:
There is something fishy about this argument. There is no observable difference between situation A and situation B, but the fact that there is a mathematical technique that works for situation A, but not for situation B is an argument that A must be the case?

This business about collapse destroying all the branches but one reminds me of a philosophical argument about Star Trek's teleporter.

Suppose that teleporters are invented some day, and the way they work is this:
  1. At the transmitting end, a laser, or X-ray, or some kind of beam scans every atom in your body and records its state.
  2. As a side-effect, it blasts your body into its component atoms.
  3. At the receiving end, a matter assembler takes the information and builds a new body with the same atomic states. (I'm disregarding quantum mechanics here.)

For all intents and purposes, the traveler leaves the transmitter, and is transported at the speed of light to the receiver. But now let's change things so that step 2 doesn't happen. The original "you" is NOT destroyed in the process. Would you still consider this a way of traveling at the speed of light? From the point of view of the original "you", what happens is that you enter a booth, are scanned, and then walk out of the booth in the same location you started in. Except you are poorer by the cost of the teleportation fees. You'd feel ripped off. But for some reason, you wouldn't be comforted by the offer to have your body torn into its component atoms, reducing the situation to the previous case.
 
  • #360
stevendaryl said:
There is something fishy about this argument. There is no observable difference between situation A and situation B, but the fact that there is a mathematical technique that works for situation A, but not for situation B is an argument that A must be the case?
No. There's a mathematical argument that explains why things look the way they do in situation A, but not in situation B; so from the point of view of explaining why things look a certain way, A gives the better explanation.

The statement of Gleason's theorem does not mention the collapse hypothesis. It's about a measure on subspaces of a Hilbert space.
No, but it mentions subspaces of Hilbert space. And the general state won't be in any of the relevant subspaces. That's why you need the collapse.
 
  • #361
S.Daedalus said:
No. There's a mathematical argument that explains why things look the way they do in situation A, but not in situation B; so from the point of view of explaining why things look a certain way, A gives the better explanation.


No, but it mentions subspaces of Hilbert space. And the general state won't be in any of the relevant subspaces. That's why you need the collapse.

You're drawing conclusions that aren't actually in the theorem, and don't follow from the theorem, as far as I can see.
 
  • #362
stevendaryl said:
You're drawing conclusions that aren't actually in the theorem, and don't follow from the theorem, as far as I can see.
Yes, I do, and I never claimed to be doing anything else. I am talking about how and where the theorem applies and why; to expect that the theorem itself should supply this kind of information would be a bit much, no? Rather, it's the assumptions made by the interpretation that determine whether the theorem applies.
 
  • #363
S.Daedalus said:
Yes, I do, and I never claimed to be doing anything else. I am talking about how and where the theorem applies and why; to expect that the theorem itself should supply this kind of information would be a bit much, no? Rather, it's the assumptions made by the interpretation that determine whether the theorem applies.

But the "collapse" hypothesis seems irrelevant to the conclusion.

Here's a no-collapse, no-preferred basis variant of Many Worlds that I'll call "Many Observers Interpretation":

  1. Define an "observer" to be any mutually commuting set of Hermitian operators.
  2. Define an "observation" to be an assignment of an eigenvalue to every operator associated with an observer.
  3. Postulate an ensemble of observations, with the measure given by the Born rule.

I don't see how collapse is needed. We have a probability distribution on observations; we don't need observations to be mutually exclusive. If I'm an observer making observation [itex]O_1[/itex], I don't see how it's relevant to me that there might be another identical observer making observation [itex]O_2[/itex].
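Steps 1-3 above can be sketched numerically (a toy two-qubit example of my own; the operators and state are illustrative assumptions, not anything proposed in the thread):

```python
import numpy as np

# An "observer" in the sense of step 1: a mutually commuting set of
# Hermitian operators, here sigma_z on qubit 1 and sigma_z on qubit 2.
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
A = np.kron(sz, I2)
B = np.kron(I2, sz)
assert np.allclose(A @ B, B @ A)          # the set commutes

# An "observation" (step 2) assigns an eigenvalue to each operator; the
# joint eigenvectors here are the basis states |00>, |01>, |10>, |11>.
psi = np.array([0.6, 0.0, 0.0, 0.8])      # hypothetical state 0.6|00> + 0.8|11>

# Step 3: the ensemble of observations, weighted by the Born measure.
weights = np.abs(psi)**2
print(weights)                            # ≈ [0.36 0. 0. 0.64]
```

Nothing in this construction refers to a collapse; the measure is simply postulated over the set of observations.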
 
  • #364
stevendaryl said:
But the "collapse" hypothesis seems irrelevant to the conclusion.

Here's a no-collapse, no-preferred basis variant of Many Worlds that I'll call "Many Observers Interpretation":

  1. Define an "observer" to be any mutually commuting set of Hermitian operators.
  2. Define an "observation" to be an assignment of an eigenvalue to every operator associated with an observer.
  3. Postulate an ensemble of observations, with the measure given by the Born rule.

I don't see how collapse is needed.
Well, you're just postulating the Born rule by fiat, so it's not.
 
  • #365
S.Daedalus said:
Well, you're just postulating the Born rule by fiat, so it's not.

My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.
 
  • #366
stevendaryl said:
My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.
It is. But not to deriving the Born rule using Gleason's theorem.
 
  • #367
stevendaryl said:
My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.

It seems to me that Gleason's theorem just as much implies that the Born rule is the only sensible measure on my "Many Observers" interpretation. Collapse basically amounts to saying that you pick one of the observations (according to the measure), call that one "actual", and then remove all the others (or call them "counterfactuals"). But I don't see how this additional step does anything for the Born rule. The measure existed before the collapse (it has to, since the measure is used to select one outcome for the collapse).
 
  • #368
S.Daedalus said:
It is. But not to deriving the Born rule using Gleason's theorem.

I just don't understand, when the statement of the theorem and its proof make no reference to a "collapse", how you can say that a "collapse" is needed for the theorem to be applicable. That doesn't make sense to me.
 
  • #369
stevendaryl said:
It seems to me that Gleason's theorem just as much implies that the Born rule is the only sensible measure on my "Many Observers" interpretation. Collapse basically amounts to saying that you pick one of the observations (according to the measure) and call that one "actual" and then removing all the others (or calling them "counterfactuals").
No, collapse means taking (for instance) a pure, superposed state, and reducing it to a proper mixture. Since the latter is a state that is in one of the eigenspaces of the observable we're measuring, and Gleason's theorem provides a measure on subspaces, we can thereafter associate a probability of being in one of the eigenspaces with the state, which we could not before, as we had a superposed state that was not in any of the eigenspaces. If you keep this superposed state, then Gleason gives you just as much a measure on the subspaces; it's just that the state is not in any of them.
 
  • #370
stevendaryl said:
I just don't understand, when the statement of the theorem and its proof make no reference to a "collapse", how you can say that a "collapse" is needed for the theorem to be applicable. That doesn't make sense to me.
The theorem provides a measure on subspaces; it's just that, in general, the state is not in any of the subspaces that interest us, i.e. those in which an observable has a certain value. The collapse is needed to get it in there.
 
  • #371
S.Daedalus said:
The theorem provides a measure on subspaces; it's just that, in general, the state is not in any of the subspaces that interest us, i.e. those in which an observable has a certain value. The collapse is needed to get it in there.

Why do we need the wave function to be in one of the subspaces?
 
  • #372
stevendaryl said:
Why do we need the wave function to be in one of the subspaces?
For one, because then we can use Gleason's theorem to furnish a probability interpretation. :-p But also, because we want experiments to be repeatable, i.e. if we observed the value [itex]o_i[/itex] measuring the observable [itex]\mathcal{O}[/itex], then making the same measurement again, we want to observe [itex]o_i[/itex] again; but we will only do so if the state is in the subspace spanned by the states [itex]|o_i\rangle[/itex] such that [itex]\mathcal{O}|o_i\rangle = o_i|o_i\rangle[/itex].
 
  • #373
S.Daedalus said:
For one, because then we can use Gleason's theorem to furnish a probability interpretation

I just don't think that's correct. Gleason's theorem says that if we are going to use the wavefunction to assign probabilities to subspaces (with certain assumptions), then pretty much the only sensible choice is the Born rule. It doesn't say anything about the collapse of the wave function into that subspace.
 
  • #374
S.Daedalus said:
For one, because then we can use Gleason's theorem to furnish a probability interpretation. :-p But also, because we want experiments to be repeatable, i.e. if we observed the value [itex]o_i[/itex] measuring the observable [itex]\mathcal{O}[/itex], then making the same measurement again, we want to observe [itex]o_i[/itex] again; but we will only do so if the state is in the subspace spanned by the states [itex]|o_i\rangle[/itex] such that [itex]\mathcal{O}|o_i\rangle = o_i|o_i\rangle[/itex].

You don't need collapse to get the result that repeated observations give the same value for an observable.
 
  • #375
stevendaryl said:
You don't need collapse to get the result that repeated observations give the same value for an observable.

Maybe I should expand on this point.

Let's let [itex]|u\rangle[/itex] and [itex]|d\rangle[/itex] be the "up" and "down" states of an electron. Let [itex]|UUDDDU...\rangle[/itex] be the state of the observer/detector when it has measured spin "up" for the first two times that the electron's spin was measured, and spin "down" for the next three times, etc. We assume that the detector's interaction with the electron causes its state to become correlated with that of the electron. That is:

[itex]|\rangle \otimes |u\rangle \rightarrow |U\rangle \otimes |u\rangle[/itex]
[itex]|\rangle \otimes |d\rangle \rightarrow |D\rangle \otimes |d\rangle[/itex]

So if you start off with the electron in a superposition of "up" and "down", then we have:

[itex]|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)[/itex]
[itex]\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)[/itex]

Then the observer/detector measures the spin again, and it evolves further to
[itex]\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)[/itex]

You don't need collapse to guarantee that repeated measurements of the same observable give consistent results.
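The evolution above can be checked with a small simulation (my own sketch; the amplitudes α = 0.8, β = 0.6 are arbitrary choices). It tracks the joint state as a map from (detector memory, electron state) to amplitude, and shows that branches with inconsistent measurement records never acquire any amplitude:

```python
# Toy model of the correlating interaction described above.
alpha, beta = 0.8, 0.6                   # example amplitudes, |a|^2 + |b|^2 = 1

# Joint state as {('memory string', electron state): amplitude};
# '' is the blank-memory (ready) detector state.
state = {("", "u"): alpha, ("", "d"): beta}

def measure(state):
    """One correlation step: |M>|u> -> |MU>|u> and |M>|d> -> |MD>|d>."""
    return {(mem + ("U" if e == "u" else "D"), e): amp
            for (mem, e), amp in state.items()}

for _ in range(3):
    state = measure(state)

print(state)
# Only {('UUU','u'): 0.8, ('DDD','d'): 0.6} survives; no branch ever
# carries a mixed record like 'UUD', so records stay self-consistent
# without any collapse postulate.
```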
 
  • #376
stevendaryl said:
I just don't think that's correct. Gleason's theorem says that if we are going to use the wavefunction to assign probabilities to subspaces (with certain assumptions), then pretty much the only sensible choice is the Born rule. It doesn't say anything about the collapse of the wave function into that subspace.
You continue to miss my point. I'm not saying that Gleason says anything about collapse; but it just gives a measure on subspaces, and for that measure to apply, the state must be in one of them. It's like, you have a distribution of marbles in a hat, but in order for that to apply, you must draw a marble from that hat; if you don't, the distribution just doesn't, and can't, tell you anything. The superposed state simply does not correspond to a marble in the hat.

I've laid out the argument in more detail in these two posts; I think this is as clear as I can make it.

stevendaryl said:
Maybe I should expand on this point.

Let's let [itex]|u\rangle[/itex] and [itex]|d\rangle[/itex] be the "up" and "down" states of an electron. Let [itex]|UUDDDU...\rangle[/itex] be the state of the observer/detector when it has measured spin "up" for the first two times that the electron's spin was measured, and spin "down" for the next three times, etc. We assume that the detector's interaction with the electron causes its state to become correlated with that of the electron. That is:

[itex]|\rangle \otimes |u\rangle \rightarrow |U\rangle \otimes |u\rangle[/itex]
[itex]|\rangle \otimes |d\rangle \rightarrow |D\rangle \otimes |d\rangle[/itex]

So if you start off with the electron in a superposition of "up" and "down", then we have:

[itex]|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)[/itex]
[itex]\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)[/itex]

Then the observer/detector measures the spin again, and it evolves further to
[itex]\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)[/itex]

You don't need collapse to guarantee that repeated measurements of the same observable give consistent results.
Well, this only works if you assume that your memory changes upon each new measurement; that is, after having observed [itex]U[/itex], the state evolves to your [itex]\alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)[/itex], in which you have a chance of [itex]|\beta|^2[/itex] of now observing [itex]D[/itex], but also of believing you had observed [itex]D[/itex] before. This is in fact a version of Everett that Bell considered (and rejected, for its obvious empirical incoherence: if your memory were subject to such 'fakeouts', you could never rationally build up belief in a theory). There's also the problem that it's typically simply false to consider a state as having a definite property while being in a superposition.
 
  • #377
stevendaryl said:
So if you start off with the electron in a superposition of "up" and "down", then we have:

[itex]|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)[/itex]
[itex]\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)[/itex]
This is what we observe, but for the MWI to work a proof is required. Currently we rely on decoherence to provide the proof that this is approximately true, i.e. that states like |U>⊗|d> are "dynamically suppressed".

stevendaryl said:
Then the observer/detector measures the spin again, and it evolves further to
[itex]\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)[/itex]
Again this is what we observe, and for which a proof is required.

The 1st step means that a preferred basis is singled out i.e. that off-diagonal terms are suppressed, the 2nd step means that the preferred basis (branching) is stable i.e. that off-diagonal terms stay suppressed.
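The off-diagonal suppression can be illustrated with a toy model (my own sketch, emphatically not the realistic proof the paragraph above asks for): tracing out an environment multiplies the off-diagonal terms of the system's reduced density matrix by the overlap of the environment pointer states, and that overlap shrinks with every scattering event.

```python
import numpy as np

alpha, beta = np.sqrt(0.9), np.sqrt(0.1)  # toy amplitudes

def reduced_rho(overlap):
    """System density matrix after entangling with an environment whose
    pointer states have inner product <E_u|E_d> = overlap; tracing out
    the environment scales the off-diagonal terms by that overlap."""
    return np.array([[alpha**2,             alpha * beta * overlap],
                     [alpha * beta * overlap, beta**2]])

# Suppose each scattered environment particle multiplies the overlap by
# 0.5 (a toy value): the interference terms decay exponentially while
# the diagonal (branch weights) stays fixed.
for n in [0, 1, 5, 20]:
    print(n, reduced_rho(0.5**n)[0, 1])
```

In this caricature the preferred basis is put in by hand; showing that realistic dynamics single it out and keep it stable is exactly the hard part.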

But besides the fact that a sound proof for realistic systems seems to be out of reach, it is unclear whether Gleason's theorem can tell us anything in the MWI context (I think this is what mfb wanted to stress). The theorem says that the only valid probability measure is the Born measure. But no theorem tells us why we should interpret a measure as a probability! The question is "probability for what?"
For the state being in a subspace? No, the state is still in a superposition.
For observing "UU"? Why should a prefactor of a specific subspace be a probability?

I know that many do not like why-questions in physics, but this why-question is key for the whole MWI debate!

There is one fact regarding the MWI which is really very disappointing: the whole story starts with a clear and minimalistic setup. But the ideas to prove (or to motivate) the Born rule have become awfully complicated over the last decades. That means that the MWI misses the point!
 
  • #378
mfb said:
It [the MWI] allows one to formulate theories that predict amplitudes, and gives a method to do hypothesis testing based on those predictions.
I still have questions more tangible than this thread's apparent dead end of "But what about the probabilities?". One is: how do I measure these amplitudes? The result of a measurement is an eigenvalue. The result of many measurements is a string of eigenvalues. How do I deduce amplitudes from this?
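A toy answer for the magnitude part (my own sketch; the 10% "down" rate is a hypothetical choice): from a string of eigenvalues one can only estimate frequencies, and hence |amplitude| via the Born rule; relative phases require repeating the experiment in other measurement bases (state tomography).

```python
import numpy as np

# Simulate a record of eigenvalue outcomes for |psi> = a|up> + b|down>
# with the hypothetical value |b|^2 = 0.1.
rng = np.random.default_rng(1)
record = rng.random(100_000) < 0.1       # True = "down" outcome
freq_up = 1.0 - record.mean()            # observed relative frequency of "up"
est_abs_a = np.sqrt(freq_up)             # Born-rule estimate of |a|
print(est_abs_a)                         # near sqrt(0.9) ≈ 0.949
```

Note that this inference already presupposes reading relative frequencies as Born probabilities, which is the very point under dispute.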
 
  • #379
stevendaryl said:
The terminology of the universe "splitting" is not really accurate. A better way to think of it, in my opinion, is that the set of possibilities is always the same, and the "weight" associated with each possibility just goes up or down with time.
The naive approach to branches is that every state in the (improper) mixture after decoherence defines a branch. So before the measurement we have only one branch and afterwards we have many.

Do you suggest that we should take every one-dimensional subspace of the full Hilbert space as a branch?
 
  • #380
S.Daedalus said:
Well, this only works if you assume that your memory changes upon each new measurement, that is, after having observed [itex]U[/itex], the state evolves to your [itex]\alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)[/itex], in which you have a chance of [itex]|\beta|^2[/itex] of now observing [itex]D[/itex], but also believing of having observed [itex]D[/itex] before;
You are talking about a different measurement which an additional external observer would have to perform. |β|² is the probability for an outcome in a measurement of the composite system electron + steven's observer/detector.

This is very similar to the starting point of Everett: if you introduce a second observer who observes the composite system, the QM calculations for the different observers contradict each other if you make the collapse assumption. So Copenhagen has to limit the applicability of QM in order to be consistent.
 
  • #381
S.Daedalus said:
You continue to miss my point. I'm not saying that Gleason says anything about collapse; but it just gives a measure on subspaces, and for that measure to apply, the state must be in one of them.

You keep saying that, but that just doesn't follow.
 
  • #382
S.Daedalus said:
Well, this only works if you assume that your memory changes upon each new measurement,

If your memory DOESN'T change with each new measurement, then how could you possibly compute relative frequencies to compare with the theory?

that is, after having observed [itex]U[/itex], the state evolves to your [itex]\alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)[/itex], in which you have a chance of [itex]|\beta|^2[/itex] of now observing [itex]D[/itex]

In my notation, [itex]D[/itex] isn't an "observation", it's a state of the observer in which the observer remembered measuring the spin and finding it spin-down.

but also of believing you had observed [itex]D[/itex] before. This is in fact a version of Everett that Bell considered (and rejected, for its obvious empirical incoherence: if your memory were subject to such 'fakeouts', you could never rationally build up belief in a theory). There's also the problem that it's typically simply false to consider a state as having a definite property while being in a superposition.

I don't know what you're talking about. What definite state are you talking about?
 
  • #383
stevendaryl said:
You keep saying that, but that just doesn't follow.
I have a probability distribution over marbles in a hat as follows: P(Green)=0.5, P(Blue)=0.3, P(Red)=0.2. I draw a marble from a nearby urn. What's the probability that it's green?
 
  • #384
stevendaryl said:
If your memory DOESN'T change with each new measurement, then how could you possibly compute relative frequencies to compare with the theory?
I think he means that an observer with state |U> could get into a state |DD> by measuring the state of the electron again and thus would have to change his knowledge of the past. I have explained why this is wrong in my previous post.
 
  • #385
kith said:
You are talking about a different measurement which an additional external observer would have to perform. |β|² is the probability for an outcome in a measurement of the composite system electron + steven's observer/detector.
I took the setup to be essentially that of a quantum system and a detector with a memory; in order to decide the contents of the memory, you have to do a measurement on the complete system. In the first measurement, for instance, you observe the detector to find that it has observed the system in the spin-up state, and that it consequently has its register in the 'having observed spin up' state. Now the detector measures again, and once you observe it and its memory, you (may) find that it has now observed it in the state spin-down, and consequently, has the register in the 'have observed spin down' state; but furthermore, it will also now 'remember' having observed the system in the spin down state before, since the register for the last observation will now also be in the 'have observed spin down'-state. (Which of course will be completely in line with your own memory, provided memory works as a kind of self-measurement of some sort of 'register'; this is basically Bell's 'Bohmian mechanics without the trajectories' reading of Everett).
 
