Question regarding the Many-Worlds interpretation

In summary: MWI itself is not clear on what to count. Are all branches equal? Are some branches more "real" than others? Do you only count branches which match the experimental setup? Do you only count branches which match the observer's expectations? All of these questions lead to different probabilities, so the idea of counting branches to get a probability just doesn't work with the MWI. But we can still use the MWI to explain why we observe "x" more often than "y": in the grand scheme of things, there are more branches where we observe "x", because "x" is the more stable and long-lived state.
  • #211
S.Daedalus said:
You claim that we should make the hypothesis "the squared amplitude of the wave going through is 10% of the initial squared amplitude", but there are two problems with that. One is that doing so assumes a satisfactory treatment of measurement; otherwise, it would simply be contentless.
I don't see an issue here.
The other, which is the problem we're discussing here, is that in the MWI, there are no grounds for proposing this hypothesis.
You don't need that. And at the same time, there are - just consider previous experiments. That will lead to wrong predictions in some branches, but they will drop this wrong hypothesis after testing it (again, with a measure 1-eps), so this is not an issue.

Maybe a better way to see this is that on your account, we could have been merely lucky so far: on the set of all sequences of observations, those that obey the Born rule up to some point N, thereafter violating it, outnumber those that obey the Born rule to within some margin of error on their entire length.
Why are you interested in counting? You are objecting to things you proposed yourself.

Perhaps it's clearer if you imagine that the world actually branched, and no matter the Born rule, you'd have exactly 50% probability to find yourself in either branch. This is clearly a way things could work in the MWI.
Certainly not. MWI is a deterministic theory. "Probability to find yourself in a branch" is not a meaningful concept. In addition, see bhobba's post.
 
  • #212
bhobba said:
Unfortunately it's not, because in QM you have the issue of deciding exactly what an elementary observation is. Suppose you have an observation with two outcomes 1 and 2, so you would naturally assign 1/2 and 1/2. Now 2 can be observed to give 3 and 4, and that would naturally give 1/2 and 1/2, so you have 1/2, 1/4, and 1/4. But this can be represented as an observation with 3 outcomes, so you would assign 1/3, 1/3, 1/3. It's one of the quirks of QM, and it's undoubtedly part of the deep reason we have Gleason - it's the only thing consistent with basis independence.
Again, Gleason just works on the wrong level. And for any length of time, you can simply consider the number of branches, and count them evenly; I don't see the problem. (And of course, the fact that it's somewhat ambiguous to attach a probability to a branch in the MWI is kind of the whole reason for this exercise.)
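(To make the counting ambiguity quoted above concrete, here is a toy sketch - the tree, the function names, and the numbers are purely illustrative and have nothing to do with the QM formalism itself. The same three final outcomes get 1/2, 1/4, 1/4 or 1/3, 1/3, 1/3 depending on how you carve up what counts as the "elementary" observation.)

```python
from fractions import Fraction

# Toy model of bhobba's example: outcome "2" of a two-outcome observation is
# further resolved into outcomes "3" and "4". None marks a final outcome.
tree = {"1": None, "2": {"3": None, "4": None}}

def split_per_branching(tree, weight=Fraction(1)):
    """Share the weight equally at each branching point: assigns 1/2 to "1" and 1/4 each to "3", "4"."""
    out = {}
    share = weight / len(tree)
    for label, sub in tree.items():
        if sub is None:
            out[label] = share
        else:
            out.update(split_per_branching(sub, share))
    return out

def split_over_final_outcomes(tree):
    """Treat it as one observation with all final outcomes: assigns 1/3 to each of "1", "3", "4"."""
    def leaves(t):
        for label, sub in t.items():
            yield from ([label] if sub is None else leaves(sub))
    final = list(leaves(tree))
    return {label: Fraction(1, len(final)) for label in final}

print(split_per_branching(tree))        # 1 -> 1/2, 3 -> 1/4, 4 -> 1/4
print(split_over_final_outcomes(tree))  # 1 -> 1/3, 3 -> 1/3, 4 -> 1/3
```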

mfb said:
I don't see an issue here.
Well, if you're essentially saying 'if this-and-that happens, we'll measure such-and-such', you'll need to minimally know what 'measure' means, no?

You don't need that. And at the same time, there are - just consider previous experiments.
Previous experiments, in the absence of a clear account of the notion of probability within the MWI, would count against it.

That will lead to wrong predictions in some branches, but they will drop this wrong hypothesis after testing it (again, with a measure 1-eps), so this is not an issue.
This again assumes the conclusion that you have a well-defined account of probability, otherwise, you could not make that statement.

Certainly not. MWI is a deterministic theory. "Probability to find yourself in a branch" is not a meaningful concept.
Good, now what does 'probability to observe a certain outcome' mean in the MWI? Or better, probability to observe a certain sequence of outcomes. Because the way I see it, sequence of outcomes = branch, so if you've got a probability for one, you have one for the other; and if you have no probability for the sequence of outcomes, you have no theory.
 
  • #213
S.Daedalus said:
Well, if you're essentially saying 'if this-and-that happens, we'll measure such-and-such', you'll need to minimally know what 'measure' means, no?
The whole branch (well, the whole "branch tree") has a wavefunction which can be identified with a photon that passed through the polarizer (or whatever). Everything in that branch agrees on that.

Previous experiments, in the absence of a clear account of the notion of probability within the MWI, would count against it.
Apparently it is possible, as physical theories do get formulated and tested successfully.

This again assumes the conclusion that you have a well-defined account of probability, otherwise, you could not make that statement.
The statement is completely independent of any notion of probability.

Good, now what does 'probability to observe a certain outcome' mean in the MWI? Or better, probability to observe a certain sequence of outcomes. Because the way I see it, sequence of outcomes = branch, so if you've got a probability for one, you have one for the other; and if you have no probability for the sequence of outcomes, you have no theory.
There is no probability (and no need for it) in MWI. This is the last time I repeat it, it gets pointless.
 
  • #214
mfb said:
There is no probability (and no need for it) in MWI. This is the last time I repeat it, it gets pointless.
But then, what are its predictions?
 
  • #215
It allows one to formulate theories that predict amplitudes, and gives a method for doing hypothesis testing based on those predictions.
 
  • #216
mfb said:
There is no probability (and no need for it) in MWI. This is the last time I repeat it, it gets pointless.

mfb said:
It allows one to formulate theories that predict amplitudes, and gives a method for doing hypothesis testing based on those predictions.

Sorry to say that, but I am totally confused.

Experimentally I get a result string like "xyxxyxx..." with a statistical frequency calculated based on the result string.
I used to calculate probabilities using Born's rule and the "shut-up-and-calculate" interpretation with something like <ψ|P|ψ> or Tr(Pρ) or whatever.
Now I expect to get these probabilities assigned to branches (= result strings) in the MWI as well (the formalism is the same, only the interpretation changes).
Last but not least I would like to compare calculated probabilities with the observed statistical frequencies (here my intention is to become an MWI follower).
→ 1st question: what am I doing or thinking wrong?

Without having Born's rule at hand I do not have a pre-defined concept of probability (I have to derive it somehow).
My first proposal is to assign 50% to "x" and 50% to "y".
Comparing calculated probabilities with observed statistical frequencies I find out that this is wrong! It should be 90% and 10% for my polarizer.
→ 2nd question: How do I derive this 90% and 10% probabilities in the MWI (or using decoherence)?

(I hope I know the answer, I just look at the diagonal matrix elements in ρ)

I am still confused about the "50-50" vs. "90-10" issue.
So I look at ρ, I see 90% and 10%, and I am convinced that when looking at the full Hilbert space which is used to calculate ρ, then 90% and 10% is correct.
Then I check my result string (I find "x"), I identify my branch within ρ (it is something like |"x", ...><"x", ...|), and now I say that everything within that branch is my world and everything outside that branch is not observable to me; so I am wondering why the pre-factor 90% has anything to say about the results I observe.
→ 3rd question (my original question): living within one branch, why should I interpret the pre-factor as a probability? Why not 50%-50% or whatever, or anything else?

You say that "there is no probability".
I observe statistical frequencies.
→ eh? (sorry, but I am not able to be more specific)
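(For concreteness, a minimal numerical sketch of the Tr(Pρ) computation referred to above, using the 90/10 polarizer from this thread. The specific state, the two-dimensional polarization space, and the sampling step are illustrative assumptions; sampling a result string already presupposes that the diagonal of ρ gives relative frequencies, which is exactly the point in dispute.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-photon polarization state with |<x|psi>|^2 = 0.9 (the 90/10 polarizer above).
theta = np.arccos(np.sqrt(0.9))
psi = np.array([np.cos(theta), np.sin(theta)])   # components along "x" and "y"
rho = np.outer(psi, psi.conj())                  # pure-state density matrix

P_x = np.diag([1.0, 0.0])                        # projector onto the "x" outcome
born_x = np.real(np.trace(P_x @ rho))            # Tr(P rho) = 0.9
print(rho.diagonal().real)                       # diagonal of rho: [0.9, 0.1]

# A result string "xyxxy..." sampled with those weights, and its statistical frequency:
results = rng.choice(["x", "y"], size=10_000, p=[born_x, 1 - born_x])
print(born_x, np.mean(results == "x"))
```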
 
  • #217
mfb said:
It allows one to formulate theories that predict amplitudes, and gives a method for doing hypothesis testing based on those predictions.
But we don't observe amplitudes. We observe relative frequencies. That they have anything to do with the amplitudes is what the whole thing is about.
 
  • #218
S.Daedalus said:
Again, Gleason just works on the wrong level.

Hmmmm. In math there are many theorems that on the surface seem 'wrong'. I think most people learn early on that the math isn't wrong, it's their intuition.

Thanks
Bill
 
  • #219
mfb said:
There is no probability (and no need for it) in MWI. This is the last time I repeat it, it gets pointless.

This was the key point at which the penny dropped for me about what mfb was saying. One does not have to introduce probability - simply hypothesis testing using Bayesian methods, which require a utility function expressing a confidence.

You can introduce probability if you like, just like you can in Bayesian hypothesis testing - but it's not required. I prefer doing that, just as in statistics I prefer the frequentist interpretation, where the confidence level (say .1%) is given the interpretation of 1 in 1000; but there is no need to do it, and in discussions of such things you will find hard-core Bayesians getting really worked up about the issue. It's not something to get worked up about IMHO, since it can be shown the two views are logically equivalent - same here. Once you realize that, the confidence view has a lot of conceptual advantages in disentangling MWI - one chooses the view appropriate for the situation, and here the Bayesian approach is better.

It's a subtle point, but once you understand it things are a lot clearer.

Thanks
Bill
 
  • #220
S.Daedalus said:
But we don't observe amplitudes. We observe relative frequencies. That they have anything to do with the amplitudes is what the whole thing is about.

You are missing the point.

What you observe is the outcomes of observations - you can count them if you like and get an average - but you do not have to.

On those outcomes one can do hypothesis testing as per Bayesian statistics:
http://en.wikipedia.org/wiki/Bayesian_statistics

'In general, Bayesian methods are characterized by the following concepts and procedures:

The use of random variables to model all sources of uncertainty in statistical models. This includes not just sources of true randomness, but also uncertainty resulting from lack of information.

The sequential use of the Bayes' formula: when more data become available after calculating a posterior distribution, the posterior becomes the next prior.

For the frequentist a hypothesis is a proposition (which must be either true or false), so that the frequentist probability of a hypothesis is either one or zero. In Bayesian statistics, a probability can be assigned to a hypothesis that can differ from 0 or 1 if the true value is uncertain.'

The key point to realize is that Bayesian probabilities are slightly different from the usual concept of probabilities you may be more familiar with - it's a degree of confidence. It is also important to realize this confidence may be the result of true randomness, but can also be the result of lack of information. In the MWI the idea is that the different outcomes of observations are not true randomness, but lack of information, namely which world you are in. It's logically equivalent to the frequentist interpretation (which in this case would correspond to the Ensemble interpretation of QM), but there is no requirement to introduce these concepts.

Although MWI is a deterministic theory it gives a set of data, and as such, if one can introduce some kind of utility function expressing a confidence in what data you will get, then one can do hypothesis testing. This does not make it any less a deterministic theory, just as doing hypothesis testing on the results one obtains in experiments does not. Rather, one interprets the different outcomes not as true randomness, but as lack of information, namely exactly what world am I in.
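(A minimal sketch of the kind of sequential Bayesian updating being described here - "the posterior becomes the next prior". The two hypotheses and their numbers are just the thread's 90/10 polarizer versus a naive 50/50 branch count; nothing in the sketch is specific to the MWI.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Two candidate hypotheses about how often "x" is observed (illustrative numbers):
# the Born-rule value for the thread's polarizer vs. naive equal branch counting.
hypotheses = {"H_born": 0.9, "H_equal": 0.5}
posterior = {h: 0.5 for h in hypotheses}        # start with an undecided prior

outcomes = rng.random(200) < 0.9                # one observed sequence, True = "x"

for saw_x in outcomes:
    # Bayes' rule, one outcome at a time: the posterior becomes the next prior.
    for h, p in hypotheses.items():
        posterior[h] *= p if saw_x else (1 - p)
    total = sum(posterior.values())
    posterior = {h: w / total for h, w in posterior.items()}

print(posterior)  # the confidence concentrates on H_born for this sequence
```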

I must say this has been one of the most enjoyable threads I have participated in since the penny dropped about what mfb was on about - every post has allowed me to sharpen in my mind what's going on, and the MWI is now much clearer to me.

Thanks
Bill
 
  • #221
bhobba said:
Hmmmm. In math there are many theorems that on the surface seem 'wrong'. I think most people learn early on that the math isn't wrong, it's their intuition.
I don't think I've indicated anywhere that I don't believe in the theorem as a piece of mathematics; as such, it's clearly true. But it's also true that a priori a piece of mathematics need not tell you anything about the world, and it's just this connection between the world and Gleason that's missing in the MWI.

bhobba said:
You are missing the point.
I don't think I am. The hypothesis to be tested is something like 'after the polarizer, the squared amplitude will be 10% of what it was before'. But now you need to specify something observable that corresponds to the squared amplitude, in order to make this hypothesis even coherent, to make it a statement connecting theoretical and experimental terms. Otherwise, it's simply not clear what 'the squared amplitude' corresponds to in the real world, in the string of observations I make.

What kind of observation should increase my confidence in the hypothesis? If you're saying that any observation in which the measurement results are distributed such that around 10% of them correspond to some detection event, then you are assuming that the squared amplitude translates to a relative frequency of observations---but this is exactly what you set out to establish.

Although MWI is a deterministic theory it gives a set of data, and as such, if one can introduce some kind of utility function expressing a confidence in what data you will get, then one can do hypothesis testing.
I think this is basically the problem. You're saying that---somehow---you can from the MWI extract a confidence level in the proposition 'the squared amplitude corresponds to relative frequencies of observations'. But I think the MWI makes absolutely no statement in that direction; it just does not tell you what sequence of observations you should expect to make, rather, it tells you that you should expect to make all these sequences of observations, without in any way discriminating between them.

Or, to put it in different terms, yes, you can just observe outcomes and notice, oh, they're distributed according to the squared amplitude! But the problem is that the MWI lacks an explanation for this, and without one, seems simply inadequate.
 
  • #222
S.Daedalus said:
I think this is basically the problem. You're saying that---somehow---you can from the MWI extract a confidence level in the proposition 'the squared amplitude corresponds to relative frequencies of observations'.

There is no somehow about it - we have Gleason. Basis independence means Gleason - it's that simple. It means a confidence level - not at this point having anything to do with probabilities, but purely on the basis that it's a number from 0 to 1 - that is given by the Born Rule.

Wallace has a lot to say about this. He points out, correctly, that his rational-agent approach also implies it, because a rational agent should have no preference for any particular basis - these are man-made things and should have no bearing on your confidence. If you choose any confidence measure other than Born, it's not basis independent: you have chosen one that singles out a particular basis - physically, to me, that's rather - well - weird. It's like in GR - we don't believe the physics depends on a coordinate system, and using that you get GR. GR is pretty weird in its conclusions - but it's really the same thing - coordinate systems are arbitrary; nature should not depend on them.

In fact, and this is purely a personal comment based on my personal philosophy, I believe this is the KEY lesson of modern physics - at rock bottom nature is about symmetry and the very closely related concept of invariance. Nature should be invariant to man-made choices like coordinate systems or vector space bases. The weirdness comes from having states that form a Hilbert space. Normal probability vectors are not like that - they are positive vectors whose entries add up to 1 and do not have the extra structure of a Hilbert space - that's the root cause of the whole issue.
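For reference, the standard statement being invoked (the textbook form of Gleason's theorem, for Hilbert spaces of dimension three or more): any assignment of a number between 0 and 1 to projectors that is additive over mutually orthogonal projectors and equals 1 on the identity must take the form
$$
\mu(P) = \operatorname{Tr}(\rho\, P)
$$
for some density operator ρ - i.e. the Born rule is the only such basis-independent measure.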

Thanks
Bill
 
  • #223
bhobba said:
There is no somehow about it - we have Gleason. Basis independence means Gleason - it's that simple.
But Gleason does not mean that the outcomes of observations in the MWI are distributed according to Born's rule (and if it did, you wouldn't need the whole Bayesian detour). The connection between Gleason and the distribution of outcomes in the standard theory is the collapse; without the collapse, this connection is missing, and there does not seem to be anything in the MWI that could take up that role.
 
  • #224
S.Daedalus said:
But Gleason does not mean that the outcomes of observations in the MWI are distributed according to Born's rule (and if it did, you wouldn't need the whole Bayesian detour).

Errr - come again.

What Gleason says is that the only measure (i.e. a number between 0 and 1) that can be defined, if you want basis independence, is the Born Rule. That measure can represent a confidence level or probabilities - it's purely up to you how to interpret it.

The Bayesian detour is purely for conceptualization in a deterministic theory. It is a confidence level in the face of lack of knowledge - not due to fundamental randomness - this is a key point. What world you will be in prior to observation is something you do not know - you lack the knowledge to determine it until you have done the observation. But prior to doing the observation you can know a confidence level - and Born is the only one compatible with basis independence.

Once you have a series of observational outcomes, you can give a frequentist probability interpretation - no problem. But for understanding what's going on in the MWI it's best not to view it that way, or you run into issues.

Thanks
Bill
 
  • #225
tom.stoer said:
Experimentally I get a result string like "xyxxyxx..." with a statistical frequency calculated based on the result string.
Usually you don't. You can get something close to the calculated frequency.

Now I expect to get these probabilities assigned to branches (= result strings) in the MWI as well (the formalism is the same, only the interpretation changes).
If you use probabilities, you do not change the interpretation. You try to combine MWI and Copenhagen, which does not work.

Without having Born's rule at hand I do not have a pre-defined concept of probability (I have to derive it somehow).
You don't have to derive it.
My first proposal is to assign 50% to "x" and 50% to "y".
Why not 37% and 63%? Or 3.14% and 96.86%?
Anyway, probabilities are not present in MWI.

You say that "there is no probability".
I observe statistical frequencies.
→ eh? (sorry, but I am not able to be more specific)
You are observing measurement results, nothing else.

S.Daedalus said:
But we don't observe amplitudes. We observe relative frequencies.
See bhobba's reply.

S.Daedalus said:
What kind of observation should increase my confidence in the hypothesis? If you're saying that any observation in which the measurement results are distributed such that around 10% of them correspond to some detection event, then you are assuming that the squared amplitude translates to a relative frequency of observations---but this is exactly what you set out to establish.
No I don't assume that. I just choose the definition of "hypothesis verified" to be such that it gets confirmed in a large measure, if the hypothesis is right. Why? Because that's the best we can do.
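(A small sketch of what "confirmed in a large measure" amounts to here, using the 90/10 example: the total Born weight of the length-N outcome strings whose relative frequency of "x" lies within a tolerance of 0.9 approaches 1 as N grows. The function name and the numbers are purely illustrative.)

```python
from math import comb

def weight_of_near_born_branches(N, p=0.9, eps=0.05):
    """Total Born weight of length-N strings whose frequency of "x" is within eps of p."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N + 1)
               if abs(k / N - p) <= eps)

for N in (10, 100, 1000):
    print(N, weight_of_near_born_branches(N))
# The weight of "hypothesis-confirming" branches tends to 1 as N grows,
# even though such branches can be a small fraction of all branches by count.
```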
 
  • #226
mfb said:
No I don't assume that. I just choose the definition of "hypothesis verified" to be such that it gets confirmed in a large measure, if the hypothesis is right. Why? Because that's the best we can do.

This is the key point, and what happened with my "I get it" moment. You do not have to introduce probabilities. Since in the MWI you will have a copy in each world, you do not know prior to the observation which one you will experience - you know it will be one, but which one? All you can know is a confidence level. It may SEEM that the confidence level should be equal for any world, but what Gleason shows is that if you assume that, you are singling out a preferred basis. The only rule that respects this is Born. This is weird and counterintuitive - sure - but if you want a view of fundamental physics in accord with the lessons we have learned (i.e. basis and coordinate systems are not what fundamental laws depend on), it's what's required.

Thanks
Bill
 
  • #227
Following this discussion is driving me insane. All the arguments I see here in favor of MWI reproducing QM's probability predictions seem circular.

I have no apparent problem with probability in a deterministic system arising from the ignorance of an observer; however, if MWI is deterministic then I would expect it to contain all the information necessary to explain the probabilities to be observed, even if the observer is ignorant themselves. I can't see that reasoning being put forward in this thread.

Let us set up a QM polarization experiment that gives 90% probability of outcome A, and 10% of outcome B.

1) I am going to assume that the QM predictions are correct and there is a 90% chance that I observe A.

2) Now I am going to assume MWI causes the world to be split into 2 equal measures. Each universe after the split is equal. In each universe I observe A and B.

3) Following from 2, I assume that, all things being equal, MWI predicts a 50% chance of both A and B.

4) I'm assuming 1 and 3 contradict each other.

5) I am assuming one of my assumptions is incorrect.

So, which of the above is incorrect?

The only answer I can see is that 2 is incorrect, and that the world is not split equally. Is this the standard way of thinking about MWI?

The only way I would personally interpret this is that the world is split into a large or infinite number of branches, in 90% of which I experience A, and in 10% of which I experience B.

I suppose the infinite maths might work out if we imagine infinite universes occupying an infinitely divisible space from 0 to 1, where all the universes from 0 to 0.9 experience A, and those from 0.9 to 1 experience B.

This is the only type of thinking that makes any sense to me, and it seems incredibly simple; however, I haven't noticed an explanation like this in this thread (although maybe I am completely missing the point of this thread). I think I may have elsewhere read people attempting a similar interpretation, but in a much less straightforward way.
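(A minimal way to make that continuum picture concrete - purely an illustration of the idea in this post, not a claim about what the MWI formalism itself licenses: label the "universes" by points u of [0, 1), let those with u < 0.9 see A and the rest see B, and the "amount" of universes seeing A is just the length 0.9 of that subinterval.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Label the continuum of "universes" by a point u in [0, 1); those with u < 0.9
# experience outcome A, the rest experience B (the 90/10 polarizer example).
u = rng.random(100_000)                 # a sample of points from the unit interval
outcomes = np.where(u < 0.9, "A", "B")
print((outcomes == "A").mean())         # fraction experiencing A, close to 0.9
```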

Anyway, after this thread, I am completely confused.
 
  • #228
lukesfn said:
Now I am going to assume MWI causes the world to be split into 2 equal measures. Each universe after the split is equal. In each universe I observe A and B.

That assumption, via Gleason, means you are abandoning basis independence. Why do you want to choose one basis over another? These are man-made things introduced for calculational convenience - why do you think nature should depend on such a choice?

lukesfn said:
The only answer I can see is that 2 is incorrect, and that the world is not split equally. Is this the standard way of thinking about MWI?

I don't think there is any standard way of thinking about it; there seem to be a number of slightly different takes on it (and certainly Wallace freely admits that), but assuming a rational agent is assigning confidence levels to experiencing a particular world after an observation, it's via the Born rule, with a number of different arguments showing that.

Thanks
Bill
 
  • #229
lukesfn said:
Following this discussion is driving me insane. All the arguments I see here in favor of MWI reproducing QM's probability predictions seem circular.

I have no apparent problem with probability in a deterministic system arising from the ignorance of an observer; however, if MWI is deterministic then I would expect it to contain all the information necessary to explain the probabilities to be observed, even if the observer is ignorant themselves. I can't see that reasoning being put forward in this thread.

Let us set up a QM polarization experiment that gives 90% probability of outcome A, and 10% of outcome B.

1) I am going to assume that the QM predictions are correct and there is a 90% chance that I observe A.

2) Now I am going to assume MWI causes the world to be split into 2 equal measures. Each universe after the split is equal. In each universe I observe A and B.

The terminology of the universe "splitting" is not really accurate. A better way to think of it, in my opinion, is that the set of possibilities is always the same, and the "weight" associated with each possibility just goes up or down with time.
 
  • #230
bhobba said:
...assuming a rational agent is assigning confidence levels to experiencing a particular world after an observation, it's via the Born rule, with a number of different arguments showing that.

You're not just assuming a rational agent, but a rational agent who has heard and understood the relevant arguments.
 
  • #231
bhobba said:
That assumption, via Gleason, means you are abandoning basis independence. Why do you want to choose one basis over another? These are man-made things introduced for calculational convenience - why do you think nature should depend on such a choice?
I need to break this down further, then.

assumption 1) 1 universe is replaced by 2 universes with 2 outcomes.
assumption 2) both universes are equal, apart from the different outcome.
assumption 3) 2 equal universes implies equal confidence in experience, so apply 50/50 probability.

Here, for me, the easiest assumption to sacrifice is 1.

At least, I need to break 2. Once you apply a different amplitude to each universe, that seems to define them as unequal.

bhobba said:
I don't think there is any standard way of thinking about it; there seem to be a number of slightly different takes on it (and certainly Wallace freely admits that), but assuming a rational agent is assigning confidence levels to experiencing a particular world after an observation, it's via the Born rule, with a number of different arguments showing that.
What other ways are there of looking at it other than an unequal split?

What about my way of looking at a continuous space of universes that can be split unevenly? Is that consistent with the MWI mathematics? Is that a common interpretation?

edit:
stevendaryl said:
The terminology of the universe "splitting" is not really accurate. A better way to think of it, in my opinion, is that the set of possibilities is always the same, and the "weight" associated with each possibility just goes up or down with time.
Splitting, doubling, multiplying, branching, or possibilities all sound exactly the same to me. I will think of it in whatever way makes sense to me. I think the problem is that people have a realization when they start thinking about the world differently, then come and try to explain it to you using the same word, which now has a different meaning to you, without making enough effort to explain their new meaning.
 
  • #232
bhobba said:
Errr - come again.

What Gleason says is that the only measure (i.e. a number between 0 and 1) that can be defined, if you want basis independence, is the Born Rule.
A measure on the linear subspaces of Hilbert space. But those are only important because of the collapse postulate. It does not give you a measure on branches. It just doesn't talk about them at all, as far as I can see.

mfb said:
No I don't assume that. I just choose the definition of "hypothesis verified" to be such that it gets confirmed in a large measure, if the hypothesis is right. Why? Because that's the best we can do.
But the hypothesis that the relative frequencies of observations are given by the squared amplitude is independent of the MWI; it does not follow from it. And without it, the MWI does not constitute an alternative to standard QM. At least, that is what the problem of probability is about. Certainly, you can form this hypothesis, test it, and find it confirmed; but if you accept the validity of the MWI, then all you've succeeded in doing is uncovering an explanandum that your theory fails to account for.
 
  • #233
bhobba said:
The key point to realize is that Bayesian probabilities are slightly different from the usual concept of probabilities you may be more familiar with - it's a degree of confidence. It is also important to realize this confidence may be the result of true randomness, but can also be the result of lack of information. In the MWI the idea is that the different outcomes of observations are not true randomness, but lack of information, namely which world you are in.
I have thought about this before but couldn't reconcile it with the idea of branch splitting. If the two branches already exist before the measurement, we have a lack of information about which of them we are in, because they look the same to us. The outcome of our experiment then simply improves our knowledge.

But that's not how the MWI is understood by most people in this thread. Initially, we have one branch which splits into two due to decoherence, so the lack of knowledge should refer to which branch we will end up in, not which branch we are in.
 
  • #234
lukesfn said:
What about my way of looking at a continuous space of universes that can be split unevenly? Is that consistent with the MWI mathematics? Is that a common interpretation?

Well Wallace didn't elaborate and my knowledge at a more detailed level of MW really comes from that book. He simply said he would be really surprised if what he advocates is the same as what other colleagues thought.

And yes what you suggest above is consistent and I have heard others suggest that.

Thanks
Bill
 
  • #235
stevendaryl said:
You're not just assuming a rational agent, but a rational agent who has heard and understood the relevant arguments.

Hmmmm. Interesting point. Not sure if those advocating such an interpretation make a distinction, or even if Bayesian theory makes a distinction. I suspect it must, for the Bayesian approach to work, i.e. a rational agent in possession of all the relevant facts and rational enough to figure out the confidence level. But the other thing about the Bayesian approach is that the confidence gets updated as more observations occur, so if they don't know it initially they eventually will.

Thanks
Bill
 
  • #236
kith said:
so the lack of knowledge should refer to which branch we will end up in, not which branch we are in.

I meant which branch you are in after the observation.

Thanks
Bill
 
  • #237
lukesfn said:
splitting, doubling, multiplying, branching, or possibilities all sound exactly the same to me.

Well, none of them is accurate. There is no doubling, multiplying or branching going on, really. Those all misleadingly imply discreteness, that there is a number of possibilities and you can just count them. But in general, the wave function is compatible with an infinite number of possibilities. It doesn't make sense to "count" the number of branches.
 
  • #238
bhobba said:
Well Wallace didn't elaborate and my knowledge at a more detailed level of MW really comes from that book. He simply said he would be really surprised if what he advocates is the same as what other colleagues thought.

And yes what you suggest above is consistent and I have heard others suggest that.

Thanks
Bill
Thanks,

Perhaps I should try to stop following this thread around this point. At least I have an interpretation of MWI that makes some kind of sense to me and that might be in line with how some more knowledgeable others think. It certainly doesn't make sense to me to say that one branch is more likely to be experienced than the other, without a reason other than to say that is what is necessitated by the amplitudes. Honestly, I am uncomfortable with MWI on a few different levels, and find a bunch of other interpretations a lot easier to swallow; however, I would expect that it should at least make sense to me.
 
  • #239
lukesfn said:
Honestly, I am uncomfortable with MWI on a few different levels, and find a bunch of other interpretations a lot easier to swallow; however, I would expect that it should at least make sense to me.

MW is NOT my preferred interpretation either - I hold to the Ensemble Interpretation with decoherence. It just interested me enough to investigate it further. And indeed it has proved quite interesting.

Thanks
Bill
 
  • #240
bhobba said:
I meant which branch you are in after the observation.
But obviously, you're in all of them; anything else assumes tacitly some sort of 'you-ness' that gets randomly distributed over the branches after the split, making one of them be the 'you'-you, with the others being mere impostors, leading to something like the Albert-Loewer 'single mind' theory.

Anyway, for another argument that the MWI alone does not single out the quantum probabilities, consider the following: whatever set of axioms you have, if you enlarge it, and get a consistent theory, which has a certain consequence, then the original axioms could not entail something inconsistent with that consequence, right?

So, now enlarge the axioms of many worlds with a preferred observable that is always definite, and a dynamics for this observable---getting something like Bohmian mechanics. But in Bohmian mechanics, the probabilities only come out the right way if you assume the initial particle distribution was consistent with the Born rule; from there on, because the Schrödinger equation entails the continuity of probability, you know that it always holds.
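For reference, the "continuity" being appealed to here is the standard equivariance property of Bohmian mechanics (the textbook statement, not something specific to this thread): the Born density obeys the same continuity equation as the flow of the particle positions,
$$
\frac{\partial |\psi|^{2}}{\partial t} + \nabla\cdot\left(|\psi|^{2}\,\mathbf{v}\right) = 0,
\qquad
\mathbf{v} = \frac{\hbar}{m}\,\operatorname{Im}\frac{\nabla\psi}{\psi},
$$
so a particle distribution that equals |ψ|² at one time equals |ψ|² at all times, while any other initial distribution, transported by the same velocity field, in general remains different from |ψ|².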

But in principle, you could substitute another probability distribution as an initial condition, and thus get probabilities different from the quantum case. However, since Bohmian mechanics is just no-collapse QM plus the particle trajectories, and you can get non-QM probabilities in the theory, the no-collapse part alone can't entail the quantum probabilities.

(The situation is actually not quite analogous, as in Bohmian mechanics, you have a sense of 'discovering' some underlying fact, i.e. observing a particle here rather than there, and thus, you should be able to appeal to Gleason's theorem; however, you lack noncontextuality. The broad similarity is, however, that both are no-collapse theories, and in both, you have to do extra work in order to get the QM probabilities, with some believing that it's flat-out impossible to do so in the MWI case. At any rate, this provides a way to associate probabilities with the outcomes in experiments in no-collapse QM that is just as consistent as the Born rule.)
 
  • #241
bhobba said:
MW is NOT my preferred interpretation either - I hold to the Ensemble Interpretation with decoherence. It just interested me enough to investigate it further. And indeed it has proved quite interesting.

Thanks
Bill

I don't actually see much difference between an ensemble interpretation and a Many Worlds interpretation.
 
  • #242
stevendaryl said:
Well, none of them is accurate. There is no doubling, multiplying or branching going on, really. Those all misleadingly imply discreteness, that there is a number of possibilities and you can just count them. But in general, the wave function is compatible with an infinite number of possibilities. It doesn't make sense to "count" the number of branches.

I would be inclined to agree with you. However, in the context I was using them, they were definitely meant to be discrete, and I was trying to show that it is wrong to think of them as such. It isn't helpful to say the word is wrong in that context.

However, on second thought, perhaps split is a good word, because it is easy to conceptualize splitting an apparently continuous entity unevenly, since we do it every time we slice a piece of pie or take a bite of food.

edit:
stevendaryl said:
I don't actually see much difference between an ensemble interpretation and a Many Worlds interpretation.
I thought that the ensemble interpretation is mostly a plea of ignorance that doesn't help explain anything... I would love somebody to be able to explain to me otherwise; however, this is off topic.
 
  • #243
S.Daedalus said:
But obviously, you're in all of them

Not quite.

There is a copy of you in all of them and each copy experiences a different outcome. What copy you are and what you experience is the same thing.

Thanks
Bill
 
  • #244
lukesfn said:
I thought that the ensemble interpretation is mostly a plea of ignorance that doesn't help explain anything... I would love somebody to be able to explain to me otherwise; however, this is off topic.

Actually there is a lot of similarity with the decoherence ensemble interpretation, the details of which you can read about here (as well as decoherence):
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

The key difference is that the ensemble decoherence interpretation leaves up in the air exactly how an improper mixed state becomes a proper one (I get around that because my version of QM has as a basic postulate that observationally equivalent systems are equivalent, but that is another story). The MWI has no such issue, because all outcomes actually occur and are equally real; it's just that each copy of you only experiences one outcome. Slightly different issues - with MW you have this extravagance of exponentially increasing worlds and no problem about what is chosen, because they all are, and with mine you only have one outcome but the problem of how that outcome is chosen.
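For readers unfamiliar with the term, the "improper mixed state" here is the reduced density matrix obtained by tracing out the environment after decoherence, schematically
$$
|\Psi\rangle = \sum_i c_i\,|s_i\rangle\otimes|e_i\rangle,
\qquad
\rho_S = \operatorname{Tr}_E\,|\Psi\rangle\langle\Psi| \approx \sum_i |c_i|^{2}\,|s_i\rangle\langle s_i|
\quad\text{for}\quad \langle e_i|e_j\rangle \approx \delta_{ij},
$$
which is formally identical to a proper (ignorance) mixture but does not by itself say that one of the outcomes actually obtains.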

Thanks
Bill
 
  • #245
bhobba said:
Slightly different issues - with MW you have this extravagance of exponentially increasing worlds

Viewed slightly differently, the number of "possible worlds" remains constant, but associated with each possible world is an amplitude that changes with time. All worlds exist at all times.
 
