Question regarding the Many-Worlds interpretation

In summary: MWI itself is not clear on what to count. Are all branches equal? Are some branches more "real" than others? Do you only count branches which match the experimental setup? Do you only count branches which match the observer's expectations? All of these questions lead to different probabilities, so the idea of counting branches to get a probability just doesn't work in the MWI. But we can still use the MWI to explain why we observe "x" more often than "y": in the grand scheme of things, there are more branches where we observe "x" because "x" is the more stable and long-lived state.
  • #176
bhobba said:
In BM probabilities enter into it due to lack of knowledge about initial conditions. In MWI we have full knowledge of what it considers fundamental and real - the quantum state.

Thanks
Bill

Yes... exactly why it seems to be wrong.
 
  • #177
Quantumental said:
Yes... exactly why it seems to be wrong.

I am glad you used the word 'seems'. Wallace in his book argues it's not an issue.

I am simply not convinced by his arguments - but it is arguable.

Added Later:

I think that's what MFB is getting at - Wallace's argument is summed up on page 115 of his book:
'Mathematically, formally, the branching structure of the Everett interpretation is a stochastic dynamical theory. And nothing more needs to be said'.

Yeah - the theory is as the theory is, so what's your beef? My beef is that in other deterministic theories with stochastic dynamics we know where the 'stochasticity' (is that a word?) comes from - here we don't.

BTW Wallace gives all sorts of reasons - that's just one, and some are quite subtle. For example, against the equal-probability rule he brings up the problem of actually deciding what counts as an equal probability. We have an observation with two outcomes, so you would naturally say it's 50-50. But on one of those outcomes you can do another observation with two outcomes, giving 3 outcomes in total - so what is it: 1/3 for each, or 1/2, 1/4 and 1/4? This boils down to the question of what an elementary observation is - a very subtle issue in QM.

It's tied in with one of the quirks of QM as a generalized probability model: in normal probability theory a pure state, when observed, always gives the same thing - a die, once thrown, always shows the same face - whereas in QM a pure state can be observed to give another pure state. That in turn is tied up with having continuous transformations between pure states (as an aside, Hardy believes this is the distinguishing feature of QM). His arguments are full of stuff like that - disentangling them is no easy task. Basically, on some reasonableness assumptions, he shows the Born rule is the only way a rational agent can assign 'likelihoods' to outcomes.
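To make the counting ambiguity concrete, here is a toy enumeration (my own construction, not anything from Wallace's book):

```python
# Toy sketch of the counting ambiguity: outcomes A and B, then B is
# measured again into B1 and B2. Which branches count as "elementary"?

# Rule 1: split the count equally at each branching event.
stepwise = {"A": 1/2, "B1": 1/2 * 1/2, "B2": 1/2 * 1/2}

# Rule 2: treat the three final branches as one observation with
# three outcomes and count them equally.
flat = {"A": 1/3, "B1": 1/3, "B2": 1/3}

print(stepwise)  # {'A': 0.5, 'B1': 0.25, 'B2': 0.25}
print(flat)      # {'A': 0.333..., 'B1': 0.333..., 'B2': 0.333...}
# Equal counting is not even well defined until you say what an
# elementary observation is; Born weights don't have this ambiguity.
```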

Thanks
Bill
 
Last edited:
  • #179
Quantumental said:
Bhobba: I thought this had been dealt with by several people already?

It has been DEBATED by several people already. Like just about any issue of philosophy, proving it one way or the other is pretty much impossible.

Now without going through the papers you mention, which doesn't thrill me greatly, suffice to say that in the book I have Wallace goes to great lengths, over quite a few chapters, discussing objections - even the old one about the frequentist interpretation of probability, which is pretty much as ancient as they come.

If you however would, in your own words, like to post a specific objection then I will happily give my view.

Overall I am not convinced by Wallace's arguments. For example, merely saying a stochastic theory is - well, stochastic - and hence of zero concern strikes me as a cop-out of the first order. But that is not his only argument - like I say, the issue is subtle and requires a lot of thought to disentangle.

I personally have no problem with the decision-theoretic derivation of the Born rule - its assumptions are quite reasonable. My issue is that trying to justify the likely outcomes of a deterministic theory on the basis of what a rational agent would require strikes me as not resolving the basic issue at all. Yes, it's a reasonable way to justify the Born rule - and so is Gleason's theorem, for that matter - but it does not explain why a rational being, agent or whatever would have to resort to decision theory in the first place, which assumes for some reason you can't predict the outcome with certainty. Why does a deterministic theory have that feature in the first place? Blank-out. Logically it's impossible to assign values of true and false across a Hilbert space - Gleason guarantees that - so you have probabilities built right into its foundations. Without some device like BM's pilot wave to create contextuality there is no escaping it, so you are caught between the devil and the deep blue sea if you want a deterministic theory.

Again I want to emphasize it doesn't invalidate the interpretation or anything like that. It's very, very elegant and beautiful; it's just a question the interpretation doesn't answer. But then again all interpretations are like that - they have awkward questions they have difficulty with.

Thanks
Bill
 
Last edited:
  • #180
I would like to ask a different question which seems to be crucial for establishing the whole MWI program.

MWI as of today relies on decoherence. That means that the different branches in the representation of a state vector are defined via a preferred basis. These basis states should be
i) dynamically selected,
ii) "peaked" in classical phase space (as observed classically), and
iii) these branches should be stable w.r.t. time evolution (in the full Hilbert space)
In terms of density matrices this is mostly described as reduced density matrices becoming nearly diagonal statistical mixtures with (approximately) zero off-diagonal terms.

My question is to which extent (i-iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.
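For concreteness, the kind of statement I have in mind for (ii) looks like this in a two-qubit toy model (my own sketch, certainly not a realistic Hamiltonian):

```python
import numpy as np

# Minimal sketch: one system qubit, one "environment" qubit. The
# environment records the system's basis state; the overlap
# eps = <E0|E1> controls how complete the record is.
a, b = 1/np.sqrt(2), 1/np.sqrt(2)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def reduced_rho(eps):
    E0 = ket0
    E1 = eps * ket0 + np.sqrt(1 - eps**2) * ket1   # <E0|E1> = eps
    psi = a * np.kron(ket0, E0) + b * np.kron(ket1, E1)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # (s, e, s', e')
    return np.einsum('aebe->ab', rho)              # trace out the environment

print(reduced_rho(1.0))  # no record: off-diagonals survive (still coherent)
print(reduced_rho(0.0))  # perfect record: off-diagonals vanish (diagonal mixture)
```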
 
  • #181
tom.stoer said:
My question is to which extent (i-iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.

This is the preferred basis problem, and books on decoherence do prove that - with a caveat known around here as the factorization problem, which is basically: does it work for any decomposition, not just the obvious one into environment, measuring apparatus, and system being measured?

That's up in the air right now - it's not known one way or the other, but it really seems to only gain traction on these forums; all the textbooks I know don't even mention it. I have also seen a paper showing that for a simple model the result doesn't depend on the factorization - but beyond that it's not known. I also have to say there are other issues in the quantum-classical transition for which theorems exist only for special cases. The general consensus seems to be it's just a matter of crossing the t's and dotting the i's - but one never knows.

If you want to go into it further, it has been discussed many times on this forum, so it's really only a search away.

Thanks
Bill
 
  • #182
mfb said:
Gleason's theorem. Every other assignment would lead to results we would not call "probability".
Ok. So we could actually weaken the Born rule postulate of Copenhagen and replace it by something like "for every observable, there is a probability distribution of possible measurement outcomes (<->eigenvalues) which is determined by ρ" and get the quantitative statement from this by applying Gleason's theorem?
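Concretely, I imagine something like this (a qubit sketch with numbers I made up):

```python
import numpy as np

# Sketch: the weakened postulate "rho determines a probability
# distribution over eigenvalues" plus Gleason's theorem forces
# p_i = Tr(rho P_i) for the eigenprojectors P_i.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])          # a valid qubit state (unit trace, positive)

sigma_x = np.array([[0, 1], [1, 0]])  # the observable to measure
eigvals, eigvecs = np.linalg.eigh(sigma_x)

for lam, v in zip(eigvals, eigvecs.T):
    P = np.outer(v, v.conj())         # projector onto the eigenvector
    print(f"outcome {lam:+.0f}: p = {np.trace(rho @ P).real:.2f}")
# The probabilities are non-negative and sum to 1 for any orthonormal
# basis - that basis-independence is exactly Gleason's input.
```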
 
  • #183
tom.stoer said:
My question is to which extent (i-iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.
I remember a Zurek paper which tried to derive the position basis as the preferred basis from the Coulomb interaction. I don't know if this satisfies all your criteria. Also I haven't found it on the arXiv at first glance.
 
  • #184
kith said:
Ok. So we could actually weaken the Born rule postulate of Copenhagen and replace it by something like "for every observable, there is a probability distribution of possible measurement outcomes (<->eigenvalues) which is determined by ρ" and get the quantitative statement from this by applying Gleason's theorem?

Can't quite understand what you are trying to say. But what Gleason's theorem says is that the Born rule and non-contextuality (i.e. the probabilities are basis independent) are equivalent. But why would you choose a vector space formalism to describe states if a fundamental thing like the expected values of the outcomes of observations depended on your basis? It's a very strange thing for anything fundamental to be basis dependent, since bases are arbitrary, freely chosen, man-imposed things.

And indeed, interpretations like BM where non-contextuality is violated are basically saying the Hilbert space formalism is not fundamental - the pilot wave is. The MWI is most definitely NOT like that, so it's quite reasonable for Gleason to apply. The same with Copenhagen. In that interpretation the state is the fundamental thing describing a system, but it is a state of knowledge, like probabilities in probability theory - it doesn't exist in a real sense, unlike in the MWI where it is very real. But because it considers the state fundamental, it too would more or less have to accept Gleason.

Thanks
Bill
 
  • #185
bhobba said:
Cant quite understand what you are trying to say.
I'm trying to find out where exactly probabilities enter in Copenhagen, how they could enter in the MWI and if the latter is even necessary. There's obviously no agreement on this among the people in this thread.
 
  • #186
kith said:
I'm trying to find out where exactly probabilities enter in Copenhagen, how they could enter in the MWI and if the latter is even necessary. There's obviously no agreement on this among the people in this thread.

When you make an observation you reasonably expect it to have an expected value. No assumption at that point is made about determinism or probabilities. But what Gleason shows is that the expected value contains actual probabilities - not certainties. That's how probabilities enter into it - it's inevitable from the Hilbert space formalism, no escaping it. Why does a deterministic theory like MWI contain probabilities? That's the question. Wallace basically says it's not an issue - the theory is a stochastic model and that's how stochastic models behave. I respectfully disagree. He also has other arguments, but you need to read the book - it's a subtle and complex issue.

Thanks
Bill
 
  • #187
bhobba said:
I am not quite following your point here.

The reason probabilities come into it is Born's rule, i.e. given an observable O, its expected value is Tr(OP), where P is the state of the system.

How can probabilities not be involved?
Why (and how?) do you add Born's rule? If you add it, you have "probabilities" hanging around, obviously (but how do they work?), but I don't see why you do that.
I think the factorization problem is similar to the question "when/where do collapses happen in the Copenhagen interpretation?". I agree that more work is necessary here, but I don't think it is specific to MWI, and I do not expect any issue arising from that.
 
  • #188
mfb said:
Why (and how?) do you add Born's rule? If you add it, you have "probabilities" hanging around, obviously (but how do they work?), but I don't see why you do that.

Any observation must have an expected outcome. The Born rule allows you to figure out what it is so you can check experiment against theory.

I may be starting to glimpse your point - is it that because all you can predict is probabilities, it does not mean it's not deterministic?

mfb said:
I think the factorization problem is similar to the question "when/where do collapses happen in the Copenhagen interpretation?". I agree that more work is necessary here, but I don't think it is specific to MWI, and I do not expect any issue arising from that.

It's a general decoherence issue, but not the only one where more work needs to be done in the area of the quantum-to-classical transition. And I don't expect any issues either - but there are those who argue otherwise. Maybe you can have better luck with them than me - some are rather 'taken' with the idea that there is an issue.

Thanks
Bill
 
Last edited:
  • #189
bhobba said:
Any observation must have an expected outcome.
Why?

I may be starting to glimpse your point - is it that because all you can predict is probabilities, it does not mean it's not deterministic?
How can you test probabilities? There is no measurement that can be described as probability. Either you measure result A or result B, but never 10% A and 90% B.
If you cannot measure them, what is the point in predicting probabilities?
 
  • #190
mfb said:
Why? How can you test probabilities? There is no measurement that can be described as probability. Either you measure result A or result B, but never 10% A and 90% B.
If you cannot measure them, what is the point in predicting probabilities?

Obviously any single measurement is like that, but the same measurement repeatedly done yields an expected result and probabilities.

And to forestall the next objection: the frequentist approach to probability is perfectly valid and non-circular when done properly, as found in standard textbooks like Feller. I don't really feel like having a sojourn into that one again, because it's an old issue that has been solved ever since Kolmogorov devised his axioms.
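For anyone who wants to see that stabilization on screen, here is a two-minute simulation (p = 0.7 is just a number I picked):

```python
import numpy as np

# Simulation sketch: the relative frequency of an outcome over repeated,
# identically prepared measurements stabilizes as the run gets longer -
# which is all "expected result" means operationally here.
rng = np.random.default_rng(0)
p = 0.7                               # Born probability of outcome "x"
for n in (10, 1000, 100000):
    outcomes = rng.random(n) < p      # one run of n repetitions
    print(n, outcomes.mean())         # frequency -> 0.7 as n grows
```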

Thanks
Bill
 
Last edited:
  • #191
mfb said:
How can you test probabilities? There is no measurement that can be described as probability. Either you measure result A or result B, but never 10% A and 90% B.
If you cannot measure them, what is the point in predicting probabilities?
That was exactly my point when I started this thread: the difference between the (expected) result of a measurement and a sequence of measurements. Whereas the result of a single measurement of observable A is described by the expectation value, experimentally the probability is related to a statistical frequency, which is not related to a single expectation value but to a sequence of projections.

So the basic fact is that we DO observe statistical frequencies in experiments (for identical preparations) which we identify with matrix elements interpreted as probabilities. The "interpreted as" is the non-trivial step!

The expectation value of a single measurement follows from a single calculation, which I called the top-down perspective, using the full state in the full Hilbert space; the statistical frequency in a sequence of experiments is not observed top-down b/c a physical observer "within" one branch has no access to the full Hilbert space. Instead this is what I called the bottom-up perspective of individual observers within specific branches.

I started with the fact that in any collapse interpretation the relation between both perspectives is trivial b/c the collapse postulate forces the top-down and the bottom-up perspective to become identical.

Then I continued with sequences of measurements (and therefore branchings) in the MWI. Here the above-mentioned relation is no longer available b/c we still have the expectation value calculated on the full Hilbert space and the statistical frequency recorded in the individual branches, but we no longer have a collapse forcing the two perspectives to be identical.

The probability to be in a specific branch, either |x> or |y>, after a measurement cannot be deduced from the formalism, as far as I can see. It seems to be a proven fact that the only way to assign probabilities consistently is given by the Born rule (Gleason), but the fact that we interpret these matrix elements as probabilities related to statistical frequencies is by no means obvious and still subject to discussion.
 
Last edited:
  • #192
Bhobba:

1. What is your response to the factorization problem? J-M Schwindt put up a paper about it late last year, of which Demystifier wrote the following summary:

To define separate worlds of MWI, one needs a preferred basis, which is an old, well-known problem of MWI. In modern literature one often finds the claim that the basis problem is solved by decoherence. What J-M Schwindt points out is that decoherence is not enough. Namely, decoherence solves the basis problem only if it is already known how to split the system into subsystems (typically, the measured system and the environment). But if the state in the Hilbert space is all that exists, then such a split is not unique. Therefore, MWI claiming that the state in the Hilbert space is all that exists cannot resolve the basis problem, and thus cannot define separate worlds. Period! One needs some additional structure not present in the states of the Hilbert space themselves.
 
  • #193
Quantumental said:
What is your response to the factorization problem?

Already answered that - see post 181.

It has been discussed many times in many threads, easy to do a search and form your own view.

Thanks
Bill
 
Last edited:
  • #194
bhobba said:
Already answered that - see post 181.

It has been discussed many times in many threads, easy to do a search and form your own view.

I don't see how you can just assume that it'll be fixed somehow by decoherence.

It seems to me like you are forced to postulate additional structure and dynamics that aren't present in the formalism.
 
  • #195
Quantumental said:
I don't see how you can just assume that it'll be fixed somehow by decoherence. It seems to me like you are forced to postulate additional structure and dynamics that aren't present in the formalism.

Why do you think I am just assuming it?

Didn't you see my comment about what a simple model proved?

Why do you think more complex models will not confirm what the simple model showed?

Thanks
Bill
 
  • #196
bhobba said:
Why do you think I am just assuming it?

Didn't you see my comment about what a simple model proved?

Why do you think more complex models will not confirm what the simple model showed?

Thanks
Bill

I just don't see how it's really relevant.
The main point of Schwindt's paper is that the state vector of the universe does not contain any information at all, because all unit vectors look the same. The information is only in the choice of factorization. And... how, even hypothetically, could decoherence suddenly choose one?

I think even Wallace concedes this in The Emergent Multiverse when he postulates additional structure, because he realizes that nothing but Hilbert space won't work.
 
  • #197
Quantumental said:
I just don't see how it's really relevant.

Then I simply do not agree with you.

It's obviously relevant that a simple model singled out a basis regardless of how it was decomposed. To spell out the detail that should be obvious: the statement 'Namely, decoherence solves the basis problem only if it is already known how to split the system into subsystems (typically, the measured system and the environment)' is incorrect for that simple model. It's an open question whether it's true for more realistic models, or even for the entire universe, but most experts in the field - judging by the fact that it doesn't even rate a mention in the textbooks on the matter - seem to think it is, as do I. If you can't see that, shrug.

And the claim that a state vector of anything - the universe, whatever - contains no information at all directly contradicts the foundational axioms of QM.

There are two of them, as detailed by Ballentine, and information is what both of them are about.

Because it's so at odds with those axioms, can you state them and explain exactly how a state vector can contain no information?

And all unit vectors looking the same? Sounds like a nonsense statement to me.

Thanks
Bill
 
Last edited:
  • #198
mfb said:
How can you test probabilities? There is no measurement that can be described as probability. Either you measure result A or result B, but never 10% A and 90% B. If you cannot measure them, what is the point in predicting probabilities?
I still don't get your main point. In the Copenhagen interpretation, you postulate probability distributions. From them you get expectation values which can be shown to be consistent with experimental data by hypothesis testing. You seem to suggest that the MWI can do this too. But how do you derive these hypotheses without talking about probability distributions?
 
Last edited:
  • #199
kith said:
I still don't get your main point. In the Copenhagen interpretation, you postulate probability distributions. From them you get expectation values which can be shown to be consistent with experimental data by hypothesis testing. You seem to suggest that the MWI can do this too. But how do you derive these hypotheses without talking about probability distributions?

It's confusing to me as well. The two axioms of QM from Ballentine are:

1. Observables are Hermitian operators whose eigenvalues are the possible outcomes of an observation.

2. A positive operator P of unit trace exists, called the state, such that the expected value of the observable O is Tr(OP).

Axiom 2 is the Born rule and in fact, via Gleason and the assumption of non-contextuality, follows from axiom 1.

It would seem probabilities are built right into the very definition of a quantum state.
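To see that concretely, here is a quick numerical check (my own toy numbers) that the axiom-2 expectation Tr(OP) is an eigenvalue average weighted by a genuine probability distribution:

```python
import numpy as np

# Sketch: the expectation Tr(OP) decomposes into eigenvalues weighted by
# non-negative numbers summing to 1 - the sense in which probability is
# built into the very definition of the state.
P = np.diag([0.9, 0.1])               # a state: positive, unit trace
O = np.array([[1.0, 0.5],
              [0.5, -1.0]])           # a Hermitian observable

expectation = np.trace(O @ P).real

lams, vecs = np.linalg.eigh(O)
probs = [np.trace(P @ np.outer(v, v.conj())).real for v in vecs.T]
print(expectation, sum(l * p for l, p in zip(lams, probs)))  # equal
print(probs)  # non-negative and sums to 1: a genuine distribution
```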

bhobba said:
Because it's so at odds with those axioms, can you state them and explain exactly how a state vector can contain no information?

To Quantumental:

The above are the axioms of QM. As you can see, the very definition of state is about information. Now there is a bit of an issue about its meaning with regard to the entire universe, but just to ensure we are on the same page: is that the issue you are talking about? It has a simple solution in the context of MWI, but if you can explain the problem as you see it, that would really help.

Thanks
Bill
 
Last edited:
  • #200
bhobba said:
Obviously any single measurement is like that, but the same measurement repeatedly done yields an expected result and probabilities.
It does not, it still leads to a single result - using the initial example, a string of x and y.
What was the probability of that string? Why did you get this specific string, and not something much more likely, like y only?
If you get the most likely single measurement result, you even reject your initial hypothesis! Why? This question has an answer, but not a probabilistic one. In terms of hypothesis testing, you don't have to assign probabilities to anything. You can do it, but it is not necessary. You can use MWI as well.

You can consider every set of measurements as a single measurement, you don't get rid of that issue just by repeating things.

kith said:
I still don't get your main point. In the Copenhagen interpretation, you postulate probability distributions. From them you get expectation values which can be shown to be consistent with experimental data by hypothesis testing. You seem to suggest that the MWI can do this too. But how do you derive these hypotheses without talking about probability distributions?
The corresponding MWI hypotheses are hypotheses about amplitudes.
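A sketch of what I mean, as I would improvise it (my own construction, not a standard recipe): phrase the hypothesis purely in terms of squared amplitudes, compute the total squared amplitude (branch measure) of outcome-strings at least as extreme as the observed one, and reject the hypothesis if that measure is tiny. No probability language is needed anywhere.

```python
from math import comb

def tail_measure(n_x, n, amp2):
    """Total squared amplitude of branches with at most n_x occurrences
    of x, over n independent branchings with amplitude-squared amp2 for x."""
    return sum(comb(n, k) * amp2**k * (1 - amp2)**(n - k)
               for k in range(n_x + 1))

# Hypothesis about amplitudes: |<x|psi>|^2 = 0.9.
# Observed branch: 50 x's in 100 trials - far below the typical region.
print(tail_measure(50, 100, 0.9))  # astronomically small -> reject the hypothesis
```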
 
  • #201
mfb said:
It does not, it still leads to a single result

I am sorry, I just do not get your point. Obviously an observation on a system prepared the same way, done many times, will yield many outcomes. That's utterly trivial. And one of the fundamental assumptions of QM, as detailed for example on page 43 of Ballentine, is that such a sequence has a stable expectation if the number of repetitions is large enough. And since all QM can predict is expectations, that is the only thing that can be calculated and checked against experiment. If you can't make a statistical statement about the outcome of observations, just what can you predict to check against experiment?

Added Later:

Ahhhh. What a dummkopf - MWI is about single outcomes. We can choose, or not, to correlate them in a statistical manner - it's our choice. Got it.

Thanks
Bill
 
Last edited:
  • #202
bhobba said:
And one of the fundamental assumptions of QM, as detailed for example on page 43 of Ballentine, is that such a sequence has a stable expectation if the number of repetitions is large enough.
Well, you never get this exact expectation value. You can get something close to it - but you can also get something far away. And then you are back at hypothesis testing, which you can do with all interpretations.

bhobba said:
And since all QM can predict is expectations, that is the only thing that can be calculated and checked against experiment. If you can't make a statistical statement about the outcome of observations, just what can you predict to check against experiment?
You can predict amplitudes. And you get the right theory for branches with a total measure of 1-epsilon.
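To put a number on "a total measure of 1-epsilon" (my own quick check, with a made-up amplitude-squared of 0.9 for x):

```python
from math import comb

def typical_measure(n, amp2=0.9, delta=0.05):
    """Total squared amplitude of outcome-strings whose x-frequency is
    within delta of amp2, over n branchings."""
    lo, hi = int(n * (amp2 - delta)), int(n * (amp2 + delta))
    return sum(comb(n, k) * amp2**k * (1 - amp2)**(n - k)
               for k in range(lo, min(hi, n) + 1))

for n in (10, 100, 1000):
    print(n, typical_measure(n))  # the typical-branch measure climbs toward 1
```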
 
  • #203
mfb said:
Well, you never get this exact expectation value. You can get something close to it - but you can also get something far away. And then you are back at hypothesis testing, which you can do with all interpretations.

I am pretty sure I now get your point, but I want to mention that what you are talking about above is the supposed issue with the frequentist interpretation of probability. Modern textbooks like Feller solve it, and how it is solved is well known; it constantly surprises me that people still seem to worry about it. But that will take us too far afield from the topic of this thread.

Thanks
Bill
 
  • #204
I'm not worried about it - I am showing that you have to think about it, and if you do, you can use the same result in MWI.
 
  • #205
Bhobba seems to get it now, I unfortunately don't, I'm afraid. To me, what you say here:

mfb said:
It does not, it still leads to a single result - using the initial example, a string of x and y.
What was the probability of that string? Why did you get this specific string, and not something much more likely, like y only?

Just seems like putting the cart before the horse. Why would the string 'all y' be more likely? First of all, being a single string, even assuming a uniform measure over the set of all strings, it would have to be just as likely as the string you got. Second, assuming that the uniform measure is the right one - and furthermore, that it is meaningful to associate a measure with strings of outcomes at all - is what most of the problems seem to me to be about.

As it appears to me, in collapse quantum mechanics, by virtue of the collapse postulate and the Born rule, you can associate a probability with a certain outcome---which, as you say, isn't something that shows up in a single measurement. But since in ordinary QM, probabilities are tied to experimentally finding one result over another, this translates into a straightforward statement about strings of observations---in your example, the relative frequencies of x's and y's in the string being within some margin.

Now of course, this is as such still a problem related to the philosophy of probability---what sense does it make to say that the relative frequencies had this particular value with a certain probability---however, the issue that I think is being debated in this thread is that in the MWI, you can't even get this far: you don't even get to associate some probability with the string of observations, so you can't even wonder about how this relates to the observed relative frequencies.
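To put numbers on how far apart branch counting and Born weighting pull here (my own back-of-the-envelope, with a made-up Born value of 0.9 for x):

```python
from math import comb

n = 100
total = 2**n   # all outcome-strings, counted uniformly

# Strings whose x-frequency is near the Born value 0.9...
near_born = sum(comb(n, k) for k in range(85, 96))
# ...versus strings whose x-frequency is near 50/50.
near_half = sum(comb(n, k) for k in range(45, 56))

print(near_born / total)  # ~1e-13: Born-typical strings are a vanishing minority
print(near_half / total)  # ~0.73: the uniform count concentrates at 50/50
```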
 
  • #206
S.Daedalus said:
Bhobba seems to get it now, I unfortunately don't, I'm afraid.
neither do I

S.Daedalus said:
Why would the string 'all y' be more likely? First of all, being a single string, even assuming a uniform measure over the set of all strings, it would have to be just as likely as the string you got. Second, assuming that the uniform measure is the right one - and furthermore, that it is meaningful to associate a measure with strings of outcomes at all - is what most of the problems seem to me to be about.
that's a reformulation of my problem

S.Daedalus said:
As it appears to me, in collapse quantum mechanics, by virtue of the collapse postulate and the Born rule, you can associate a probability with a certain outcome---which, as you say, isn't something that shows up in a single measurement. But since in ordinary QM, probabilities are tied to experimentally finding one result over another, this translates into a straightforward statement about strings of observations---in your example, the relative frequencies of x's and y's in the string being within some margin.
that's what I think as well; in the case of a collapse interpretation these problems are "solved"

S.Daedalus said:
... the issue that I think is being debated in this thread is that in the MWI, you can't even get this far: you don't even get to associate some probability with the string of observations, so you can't even wonder about how this relates to the observed relative frequencies.
I think so, too (thank god - it was me who started this trouble ;-)

don't give up, tom ...
 
  • #207
S.Daedalus said:
Bhobba seems to get it now, I unfortunately don't, I'm afraid.

I am pretty sure it's the Bayesian hypothesis-testing thing, which simply requires a utility giving a confidence in how sure you are the result is correct. Probabilities do not even have to enter into it, although it can be shown to be logically equivalent to them if you want to go down that path.

It's not my preferred method - that's the standard probability one - but you can do it that way if you like.

Added Later:

Just to elaborate a little: in such an approach, what Gleason, or the decision-theory proof of Wallace, would give is a fraction, or percentage, representing a confidence you have. It is that confidence you use in testing hypotheses, such as which branch was taken after decoherence. E.g. after decoherence you have the mixed state Σ_i p_i |u_i><u_i|. The p_i represent a confidence you have that you experience world |u_i><u_i|, and they are used in testing hypotheses based on that assumption.
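As a sketch of how such confidences would get used in hypothesis testing (my own construction, with made-up numbers):

```python
# Two hypotheses about the branch weights after decoherence:
# H1: (p_x, p_y) = (0.9, 0.1)   H2: (p_x, p_y) = (0.5, 0.5)
record = "x" * 88 + "y" * 12      # a hypothetical record of 100 outcomes

def confidence(record, p_x):
    # Multiply the per-outcome confidences along the recorded string.
    c = 1.0
    for outcome in record:
        c *= p_x if outcome == "x" else 1.0 - p_x
    return c

c1, c2 = confidence(record, 0.9), confidence(record, 0.5)
print(c1 / (c1 + c2))  # posterior confidence in H1 with a flat prior: ~1
```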

Thanks
Bill
 
Last edited:
  • #208
S.Daedalus said:
Why would the string 'all y' be more likely?
I used the Born rule you like so much :). This was about collapse interpretations.
First of all, being a single string, even assuming a uniform measure over the set of all strings, it would have to be just as likely as the string you got.
Wait, what? Uniform measure (of what?) in Copenhagen?
Second, assuming that the uniform measure is the right one
Who does that?

this translates into a straightforward statement about strings of observations---in your example, the relative frequencies of x's and y's in the string being within some margin.
And we are back to hypothesis testing. There was never any disagreement about the possibility to do hypothesis testing in collapse interpretations, I don't see the point.
 
  • #209
I went back to your longer post, hoping to get some more insight into what, exactly, it is you're advocating. But it seems to me that the account you propose still doesn't get off the ground. You claim that we should make the hypothesis "the squared amplitude of the wave going through is 10% of the initial squared amplitude", but there are two problems with that. One is that doing so assumes a satisfactory treatment of measurement; otherwise, it would simply be contentless. The other, which is the problem we're discussing here, is that in the MWI there are no grounds for proposing this hypothesis.

Basically, you're assuming that something like my toy account of observing one thing to the exclusion of the other can be made to work; but this is exactly what's at stake. Maybe a better way to see this is that on your account, we could have been merely lucky so far: in the set of all sequences of observations, those that obey the Born rule up to some point N and violate it thereafter outnumber those that obey the Born rule to within some margin of error along their entire length. So it is at least as much in accord with present evidence that the apparent holding of the Born rule is just a fluke; it is just as sensible to consider, for example, the uniform measure on the set of all strings, and expect the Born rule to cease holding any minute now. In the MWI, there are simply no grounds for deciding that, while given the collapse, we have only the measure given by Gleason's theorem---which has a natural interpretation as a measure on events, i.e. experiment outcomes---as a candidate. Here, we can cogently form the above hypothesis, because we have both a clear meaning for the event '10% squared amplitude passing through', and a reason to consider that Gleason's theorem has something to say about it.

Perhaps it's clearer if you imagine that the world actually branched, and that no matter the Born rule, you'd have exactly 50% probability to find yourself in either branch. This is clearly a way things could work in the MWI, Gleason's theorem notwithstanding; thus the latter has no intrinsic connection to probabilities in the MWI. But then, in this universe, there's a version of you, in some branch that has up to some point obeyed Born-rule statistics, making exactly your argument; however, in this universe he'd be wrong. And that means your argument doesn't lead to a necessary conclusion.
 
  • #210
S.Daedalus said:
This is clearly a way things could work in the MWI, Gleason's theorem notwithstanding; thus the latter has no intrinsic connection to probabilities in the MWI

Unfortunately it's not, because in QM you have the issue of deciding exactly what an elementary observation is. Suppose you have an observation with two outcomes, 1 and 2, so you would naturally assign 1/2 and 1/2. Now outcome 2 can be observed to give 3 and 4, and that would naturally give 1/2 and 1/2 again, so you have 1/2, 1/4, and 1/4. But this can be represented as an observation with 3 outcomes, so you would assign 1/3, 1/3, 1/3. It's one of the quirks of QM, and it's undoubtedly part of the deep reason we have Gleason - it's the only thing consistent with basis independence.

Wallace examines this issue in a lot of detail, as well as many criticisms that have been leveled against his proof. It's a minefield of deep stuff.

Thanks
Bill
 
