Question regarding the Many-Worlds interpretation

In summary: MWI itself is not clear on what to count. Are all branches equal? Are some branches more "real" than others? Do you only count branches which match the experimental setup? Do you only count branches which match the observer's expectations? All of these questions lead to different probabilities. So the idea of counting branches to get a probability just doesn't work with the MWI. But we can still use the MWI to explain why we observe "x" more often than "y": in the grand scheme of things, there are more branches where we observe "x" because "x" is the more stable and long-lived state.
  • #71
kith said:
A simple definition is to take the von Neumann measurement scheme and instead of singling out one term in the final sum by collapse, you interpret each term as a branch. The physical justification to talk about such a sum in the first place is given by decoherence.
So, tracing out the environment E, the system "cat + pointer" is described by

##\rho_{\text{red}} = \text{tr}_E\,(\rho) = p_a\,P_a \otimes P_1 + p_d\,P_d \otimes P_2 + \ldots##

##p_{a,d}## is the probability to find the cat a(live) or d(ead)
##P_{a,d}## is the corresponding projector onto a huge cat subspace
##P_{1,2}## is the corresponding projector onto a huge pointer subspace with positions 1 or 2
"..." indicates further branches which are suppressed due to decoherence

Correct?
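For concreteness, here is a minimal numerical sketch of this structure (a toy example: a two-level "cat" entangled with a two-level environment record, with placeholder weights 0.9/0.1):

```python
# Toy model: |psi> = a|alive>|e0> + d|dead>|e1> with orthogonal environment
# records e0, e1 (perfect decoherence). Tracing out the environment leaves
# p_a and p_d on the diagonal and no interference terms, as in the reduced
# density matrix above.
import numpy as np

a, d = np.sqrt(0.9), np.sqrt(0.1)                 # placeholder amplitudes
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi = a * np.kron(alive, e0) + d * np.kron(dead, e1)
rho = np.outer(psi, psi)                          # full (pure) density matrix

# partial trace over the environment: reshape to indices (s, e, s', e')
rho_red = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_red)    # [[0.9, 0.0], [0.0, 0.1]]: the p_a and p_d branches
```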
 
  • #72
Yes, looks about right.
 
  • #73
OK.

So for my original experiment with polarizations ##\sigma## and result strings ##s(\sigma)## like "xyxxyy..." the reduced density matrix looks like

##\rho_{\text{red}} = \sum_\sigma p_\sigma\,P_\sigma \otimes P_{s(\sigma)} + \ldots##

The remaining problem is to show that the probabilities ##p_\sigma## obtained from the reduced density matrix correspond to the correct quantum mechanical probabilities.
 
  • #74
The basic measurement being referenced has two states, but the probabilities of the states aren't 50-50. Don't two states mean 2 branches at each measurement? And if two states mean two branches, then how can state counting help with probabilities in any way? I have a hard time with the view that calculating the probability isn't fundamental to the state preparation and measurement processes but, rather, is somehow a function of interpretation.
 
  • #75
meBigGuy said:
The basic measurement being referenced has two states, but the probabilities of the states aren't 50-50. Don't two states mean 2 branches at each measurement? And if two states mean two branches, then how can state counting help with probabilities in any way?
As far as I understand, branch counting isn't relevant at all (but I will come back to it later). Instead, there is the claim that the Born rule follows automatically. In the above-mentioned example we would have something like

##( a_x|x\rangle + a_y|y\rangle ) (\langle x|a_x^\ast + \langle y|a_y^\ast) \; \stackrel{\text{decoherence}}{\longrightarrow}\; |a_x|^2 |x\rangle\langle x| + |a_y|^2 |y\rangle\langle y| + \ldots ##

(I know that using reduced density matrices is by no means complete and that the r.h.s. is only an approximation following from the partial trace, but it allows for a rather compact notation; for the same reason I omitted further subsystems like pointers etc.)

So there are several claims:
- there is a preferred factorization of the Hilbert space singled out by dynamics (not fixed by hand *)
- this factorization is stable w.r.t. further interaction and time evolution (dynamical superselection rule for subspaces **)
- this allows for a description of subsystems in terms of (reduced) density matrices with off-diagonal terms ≈ 0
- in addition dynamics automatically fixes the preferred pointer basis (corresponding to polarization states)
- the coefficients ##|a_i|^2## on the r.h.s. are determined by dynamics (Born rule derived rather than postulated)
- MWI follows directly from (*) and (**); every term in the sum corresponds to one branch

Questions:
- is this correct? (up to sloppy notation)
- do N polarization experiments simply result in ##2^N## terms with subscripts "xxx..."?
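For the second question, I would expect the decohered sum to look like

##\rho \;\stackrel{\text{decoherence}}{\longrightarrow}\; \sum_{s \in \{x,y\}^N} \left( \prod_{i=1}^{N} |a_{s_i}|^2 \right) |s\rangle\langle s| + \ldots##

i.e. ##2^N## diagonal terms, one per result string ##s = s_1 s_2 \ldots s_N##, each weighted by the product of the single-measurement coefficients.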
 
  • #76
tom.stoer said:
As far as I understand, branch counting isn't relevant at all (but I will come back to it later).

Historically, Everett, Wheeler, Deutsch and other MWI supporters agreed that probabilities must emerge from branch counting, if done properly. This only changed when more recent results (from the 1990s) showed that the number of branches depends on the coarse-graining level you apply and that there is no objective way to count. This was a big problem for the theory, and derivations shifted to different approaches that are much less well founded in logical deduction than branch counting, relying instead on philosophical principles and new postulates.

So I think branch counting is very relevant; it's just not used anymore. But being unable to identify (and count) branches is also a huge problem for identifying stable single measurement outcomes, especially for a measurement on a finite or countable basis.

Instead, there is the claim that the Born rule follows automatically. In the above-mentioned example we would have something like
##( a_x|x\rangle + a_y|y\rangle ) (\langle x|a_x^\ast + \langle y|a_y^\ast) \; \stackrel{\text{decoherence}}{\longrightarrow}\; |a_x|^2 |x\rangle\langle x| + |a_y|^2 |y\rangle\langle y| + \ldots ##

It depends on what you mean by "automatically", but in general the reduced density matrix "probabilities" are not used for arguing about probabilities in MWI. Even more, density matrices are entirely avoided in recent arguments about decoherence, or only used as an illustrative tool, not as a fundamental contribution to derivations. The reason is that density matrices make implicit use of the measurement postulate. This may not be obvious, but the one reason to describe an ensemble with a density matrix is that the measurement postulate allows you to mix quantum and classical probabilities. Since the measurement reduces the state to probabilities anyway, we can encode an ensemble in the compact form of a density matrix. If there were no measurement postulate, the only way to encode an ensemble would be to list the single states and their corresponding probabilities.

So there are several claims:
- there is a preferred factorization of the Hilbert space singled out by dynamics (not fixed by hand *)
- this factorization is stable w.r.t. further interaction and time evolution (dynamical superselection rule for subspaces **)

This is a tough one. It is sometimes claimed that there are preferred factorizations, but in general there are no such factorizations. Even worse: there are no factorizations at all! As a simple example, look at the Fock space of any particle and try to single out one of those particles and describe it in a tensor factor space. That does not work. It gets even more problematic if you consider the direct sum of several Fock spaces.

So the whole "factorization business" used in decoherence is very questionable, as it cannot really be used to talk about the local information we have about a system.

- this allows for a description of subsystems in terms of (reduced) density matrices with off-diagonal terms ≈ 0
- in addition dynamics automatically fixes the preferred pointer basis (corresponding to polarization states)

The preferred basis problem is only solved by decoherence in some pathological cases, but unfortunately not in general. There has been a lengthy discussion of this in the community, and it is now generally accepted that we need more than decoherence to fix a basis.

- the coefficients ##|a_i|^2## on the r.h.s. are determined by dynamics (Born rule derived rather than postulated)

No, the Born rule enters this argument through the back door. By using a density matrix to describe the reduced state and comparing that state to an ensemble, you have implicitly used the Born rule.

Cheers,

Jazz
 
  • #77
tom.stoer said:
So there are several claims:
- there is a preferred factorization of the Hilbert space singled out by dynamics (not fixed by hand *)
Given only the full Hilbert space and the full Hamiltonian, I don't see how a preferred factorization could possibly emerge. I think this claim is quite certainly false. However, I haven't read many MWI papers, and not a single one which makes that claim explicitly. I don't know in detail how people today define the MWI.

tom.stoer said:
- the coefficients ##|a_i|^2## on the r.h.s. are determined by dynamics (Born rule derived rather than postulated)
The decoherence argument is that the processes of relaxation (= substantial change in ##|a_i|##) and decoherence happen on very different time scales. Of course, this doesn't answer the fundamental question of why we should interpret the coefficients as probabilities.
 
  • #78
Jazzdude said:
Even more, density matrices are entirely avoided in recent arguments about decoherence, or only used as an illustrative tool, not as a fundamental contribution to derivations. The reason is that density matrices make implicit use of the measurement postulate.
Jazzdude said:
The preferred basis problem is only solved by decoherence in some pathological cases, but unfortunately not in general. There has been a lengthy discussion of this in the community, and it is now generally accepted that we need more than decoherence to fix a basis.
Can you recommend some papers to read about this?

Also a little off topic: you mentioned that you are not an Everettian. Do you have a preferred interpretation?
 
  • #79
Jazz, thanks for the information; your conclusions seem to be much more pessimistic (and quite different) than those from "standard" review papers (Zurek, Zeh et al.). If I understand you correctly, then branch counting (which was my original idea) does not work for weights or probabilities, but in addition there are no other derivations either (and even if branch counting would work, I am still convinced that the contradictions from post #1 are not resolved).

Is there any objective review paper summarizing the current status including open issues for MWI and decoherence?
 
  • #80
Let me explain why I thought that branch counting is relevant. It depends on the perspective.

Top-down
- one starts with a Hilbert space
- dynamics selects a preferred basis and mutually orthogonal subspaces
- subspaces, identified with MWI branches, are "dynamically disconnected"
- weights for subspaces are related to Born's rule
(let's assume that all these ideas are correct; then everything is fine - and from a top-down perspective on the full Hilbert space, branch counting is irrelevant)

Bottom-up
from the perspective of one observer "within" one branch, the relative weights of the branches are irrelevant and not observable; for repeated experiments as described above, one observer can write down the results & branchings, and in most cases (as defined by branch counting) he will find himself in branches with very low probability (as defined by QM calculations); but the observer, from his bottom-up perspective, does not see the full Hilbert space, and for him the calculated probabilities = weights for branches are irrelevant; only branch counting, which is directly related to the results of the experiments, is relevant.

So the contradiction I indicated in my first post is relevant for a single observer (in the sense in which I understood decoherence, branches and observers in MWI). Most observers with knowledge about QM will wonder why they observe and live in a branch with low probability. Of course one single observer can always argue that he may be the untypical, poor observer living in a branch with low probability, but that in total everything is fine. But if most observers have to argue that they live in untypical branches, then this is a major issue.

That's why I still think that branch counting and QM probabilities should not contradict each other.
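To illustrate the clash numerically, here is a toy computation with the 90%/10% example (my own sketch: naive branch counting treats all ##2^N## result strings as equally many branches, while the Born measure weights a string by ##0.9^{\#x}\,0.1^{\#y}##):

```python
# Naive branch counting vs. Born measure for N = 20 two-outcome
# measurements with p(x) = 0.9. One branch = one result string.
from math import comb

N, p = 20, 0.9

# fraction of *branches* whose x-frequency is near 1/2 (k = 8..12 of 20)
count_near_half = sum(comb(N, k) for k in range(8, 13)) / 2**N

# Born *measure* of the branches whose x-frequency is at least 0.8
weight_near_p = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                    for k in range(16, N + 1))

print(count_near_half)   # ~0.74: most branches look like 50/50
print(weight_near_p)     # ~0.96: most of the measure looks like 90/10
```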

Of course this becomes irrelevant if branching can be defined differently.
 
  • #81
tom.stoer said:
Bottom-up
from the perspective of one observer "within" one branch, the relative weights of the branches are irrelevant and not observable; for repeated experiments as described above, one observer can write down the results & branchings, and in most cases (as defined by branch counting) he will find himself in branches with very low probability (as defined by QM calculations); but the observer, from his bottom-up perspective, does not see the full Hilbert space, and for him the calculated probabilities = weights for branches are irrelevant; only branch counting, which is directly related to the results of the experiments, is relevant.

So the contradiction I indicated in my first post is relevant for a single observer (in the sense in which I understood decoherence, branches and observers in MWI). Most observers with knowledge about QM will wonder why they observe and live in a branch with low probability. Of course one single observer can always argue that he may be the untypical, poor observer living in a branch with low probability, but that in total everything is fine. But if most observers have to argue that they live in untypical branches, then this is a major issue.
Well, it depends on your goal.

Do you want to get physics right in most branches, or in most of the measure?
The first won't succeed, but the second does (as most of the measure will see "right" distributions).

In a probabilistic interpretation, do you want to get physics right in most possible experimental results, or with the largest probability?
The first won't succeed, but the second does.

"Most branches get 'wrong' results" is as much an issue for MWI as "most possible result are very unlikely" is for probabilistic interpretations. I don't think it is one.
 
  • #82
mfb said:
Well, it depends on your goal.

Do you want to get physics right in most branches, or in most of the measure?
The first won't succeed, but the second does (as most of the measure will see "right" distributions).

In a probabilistic interpretation, do you want to get physics right in most possible experimental results, or with the largest probability?
The first won't succeed, but the second does.

"Most branches get 'wrong' results" is as much an issue for MWI as "most possible result are very unlikely" is for probabilistic interpretations. I don't think it is one.
Excellent summary.

First of all it's not a problem in collapse interpretations b/c there is only one observer.

The problem in MWI is that we want to explain physical and observable results from the top-down perspective (which gets it right), but that our own perspective as observers is the bottom-up perspective. And in this perspective there is no measure; there is only the one branch which I am observing. I introduce the labelling "xyxxyxxxx..." for branches (and I know that other observers do the same thing in their branch - but I don't need this for my argumentation).

Now it's rather simple: if the measure is correct and if the measure affects the branching in some way, then there is a chance that the frequency of "x" and "y" I observe is the correct one (correct in terms of the 90% and 10%). If the measure does not affect the branching, then the measure is not observable for me, and all that remains is the result string "xyxxyyx...". If the branching is not affected by the measure in some way, then "x" and "y" in each result string are equally probable, and therefore the measured statistical frequency is wrong (when compared to the QM result based on 90% and 10%).

So I cannot agree to
mfb said:
In a probabilistic interpretation, do you want to get physics right in most possible experimental results, or with the largest probability?
The first won't succeed, but the second does.
I want to get physics right in most experimental results in all cases and for all interpretations b/c this is what I as a single observer do observe.

There is a big difference between the branching in MWI (as I have described it) and the collapse: the collapse to either "x" or "y" is governed by the QM probability (we do not know how, it's postulated, but it gets it right); the collapse to |x> happens with 90% probability, so in 90% of the cases the single observer will observe "x" and everything is fine.

In the MWI as described by me, the branching is not affected by the measure of the branch, so there is one observer in the "x" branch and one observer in the "y" branch. Repeating the experiment, each observer will calculate the probability for his branch and will find ##1/2^N##, which does not agree with the QM result. But b/c he cannot change from the bottom-up to the top-down perspective, all he can do is conclude that he was not lucky and is sitting in the "wrong branch", unfortunately.
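To make this concrete: with naive two-way branching, the number of branches whose result string contains k times "x" is ##\binom{N}{k}##, so the branch count peaks at ##k \approx N/2##, whereas the Born weight of those branches,

##\binom{N}{k}\,(0.9)^k\,(0.1)^{N-k}\,,##

peaks at ##k \approx 0.9\,N##. For ##N = 10##, the strings with nine times "x" make up only ##\binom{10}{9}/2^{10} \approx 1\%## of the branches but carry ##\approx 39\%## of the Born weight.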

And here's my point: repeating this very often, the observer will no longer trust this model b/c in most cases he is sitting in the wrong branch; and if this happens to most observers (and it happens to most observers in the experiment I described), most observers will no longer trust this model.

My conclusion is not that MWI is wrong, but that at least this model of branch counting is wrong (and it's this simple branching which is always used to explain the MWI, so it's not irrelevant). My conclusion is that a measure multiplying a branch is not observable from the bottom-up perspective and therefore does not help (b/c the top-down perspective, which may get it right theoretically, is not available for experimentalists). So there must be something more complex (e.g. a measure affecting further branching).

Another conclusion is that these contradictions cannot be resolved by wording, discussions etc.; b/c MWI says that there is nothing else but the QM state vector, it should be possible to derive the correct observation from the formalism = via calculations. So if the above mentioned simple branching model is wrong, that's not a problem for me but for MWI (b/c it's what they quite frequently use to explain MWI); they should be able to get the model right.

The claims that I mentioned above are in summary what I expect from MWI + decoherence:
- to get a correct (classical) and dynamically stable factorization
- to get the classically stable preferred pointer basis
- to get both the top-down and the bottom-up perspective right (correct statistical frequency for me as an observer!)
- to define branching, branch structure, branch counting and measures, or to derive them from the theory
(please note that this is well-defined in the collapse interpretations; yes, it's an additional axiom and not derived from the theory, but it works FAPP; as long as this is not defined or derived for MWI, I am not convinced that MWI works FAPP = for the bottom-up perspective)

A final remark: I think I understood that different branches in the MWI are nothing else but infinite-dimensional "dynamically selected and dynamically stable, orthogonal superselection sectors" in the whole Hilbert space. If this is true and can be proven, then I do not have any philosophical problem with the MWI. On the contrary, I would become a follower immediately. But what I learn from this discussion is that we are far from proving these claims (which is different from what I read or hear e.g. in Zeh's articles and talks!)
 
  • #83
Maybe there is more than one way to interpret what a single branch is.

Perhaps what some refer to as a single branch could, if you look at the mathematical definition, also be interpreted as an (infinite?) number of identical branches.

Maybe it just makes things simpler to say there are 2 branches with different weights, rather than that there are 2 sets of an infinite number of branches, with the infinities balanced just right so that they produce the right odds.

I have very little understanding about anything though.
 
  • #84
tom.stoer said:
Jazz, thanks for the information; your conclusions seem to be much more pessimistic (and quite different) than those from "standard" review papers (Zurek, Zeh et al.).

I think I'm much more realistic than pessimistic. I've been working in this field for more than 10 years, and I have had to read a lot of exaggerated claims that are kept up despite very good evidence against them. Many physicists who work in this field have become quite delusional, following the same route and trying to improve their arguments even though we know that they most likely won't lead anywhere. Of course this is partly due to the lack of alternatives.

Interestingly, especially Zurek and Zeh keep claiming that their approaches solve all/most problems while the community disagrees or at least remains very skeptical. I would recommend that you look elsewhere if you want something less subjective. See below.

If I understand you correctly, then branch counting (which was my original idea) does not work for weights or probabilities, but in addition there are no other derivations either (and even if branch counting would work, I am still convinced that the contradictions from post #1 are not resolved).

Yes, branch counting does not work in general, and in those instances where you can make it work, it predicts the wrong statistics.

Other "derivations" really don't deserve the name. The assumptions that go in there are just as good as postulating the Born rule in the first place. Zurek for example argues that, if a probability law exists (and certain stability assumptions about states in the environment are true), then it must be the Born law. This is only slightly better than Gleason's theorem and doesn't really tell us, why there should be probabilities in the first place, specifically in the realist understanding of the quantum state he refers to.

Others believe the problem is not one of physics, but one of behaving sensibly inside a quantum universe. They argue that if you wanted to bet on which reality you end up in, the best bet would be in agreement with the Born rule prediction. Of course, what they postulate without further explanation is that the "value" assigned to a reality is proportional to the squared magnitude of the amplitude.

David Wallace has invested some time in these approaches and published at least two relevant papers (one with Deutsch and one alone), but these days he doesn't believe in the idea anymore. He says that the assumptions are too strong (partly because another publication has shown that other natural betting games give completely different results), and that the Born rule should instead follow from a proof showing that an observer who *assumes* the Born rule is correct will never find good statistical evidence against his assumption.

Is there any objective review paper summarizing the current status including open issues for MWI and decoherence?

David Wallace has written a good overview paper, but it's from 2007, does not include the most recent publications, and is slightly biased towards MWI. Still, it's a very relevant read:

David Wallace (2007), "The quantum measurement problem: State of play"
http://arxiv.org/abs/0712.0149

And then there's Hanneke Janssen's thesis (2008) "Reconstructing Reality: Environment-Induced Decoherence, the Measurement Problem, and the Emergence of Definiteness in Quantum Mechanics"
http://philsci-archive.pitt.edu/4224/

I hope this will help to give you a perspective on the state of the art. The last 5 years have not changed anything significant. There have been papers, followed by rebuttals, iterated over a few times. No breakthroughs.

Cheers,

Jazz
 
  • #85
kith said:
Can you recommend some papers to read about this?

Also a little off topic: you mentioned that you are not an Everettian. Do you have a preferred interpretation?

I think both aspects are well covered in the two papers I linked in my answer to tom.stoer just above. There's much more, but those two are the most complete and coherent collections of arguments I can think of right now. And David Wallace really manages to give a good and nearly objective overview. He is probably the least dogmatic well-published Everettian.

And I must say that I don't really subscribe to an interpretation. I'm a realist, and that implies that I believe in some physical state and corresponding process that explains it all. What I am looking forward to is a physical theory that explains observation and our experience exactly. And I think that this will necessarily come with predictions that are experimentally verifiable.

The obvious candidate for a realist theory of observation would in fact be MWI. I have worked on it for a rather long time and came to understand that it wouldn't work out. But I still believe that the idea of a universal quantum state with unitary evolution is a very good starting point. In order to avoid going the MWI route from there, we have to adjust the research questions slightly. I think one of the reasons why MWI is flawed is the requirement for factor spaces of some sort, but the idea of looking into subjectively relevant information (i.e. the relative state of the universe with respect to the observer in MWI) is basically right.

My alternative to MWI therefore asks the following questions:

1) What does quantum theory look like from the inside?
If you are a mechanism, realized within a quantum universe, what can you say about the universe and your environment?
MWI answers this with relative states, but those are not local. But locality is fundamental for the way we learn about our environment, because we do that by interaction. That leads to the next question:

2) How can we define and describe subsystems of the universe that are not factor spaces?
The obvious alternative to factor spaces would be parts of the universe that are spatially related. Asking for the state of the part of the universe that is inside some horizon around an observer would be sensible, for example. This is clearly not a trivial question.

3) Why can we describe the state of the local universe around us so successfully with a pure quantum state?
Considering everything we know about how to reduce global quantum states to subsystems, there should be neither a factor space representation nor a pure state representation (but rather a density matrix) for the world we see around us. So there's definitely something missing in our understanding.

4) How would information exchange with the environment affect this local pure state?
Let's say we have found a way to describe the local quantum state from 3); then how would it react if a photon comes in and interacts locally? The photon was not part of the local description before, so it provides new information and a possible source of subjective randomness. The appearance of the new information is also abrupt and would probably require a discontinuous update of the local state.

5) What would be the laws for (4)? Is this related to quantum jumps and possibly even the Born rule?
Random jumps of the perceived quantum state suggest such a relationship. So it may be possible to derive the Born rule from the answers to questions 1-4.

6) How is the construction of the quantum state space compatible with relativity?
When we make QT covariant, we really mean that the Hamiltonian and the interactions it generates are covariant (and therefore Einstein-local). The state space contains non-local multi-particle states, however, and those have a preferred rest frame in which the Schroedinger equation updates non-locally entangled properties instantly. This is not a problem for observations, because the non-locality only leads to correlations that are independent of the order of measurements on entangled systems, and the indeterministic nature of the outcomes effectively hides all non-local aspects. This specifically makes the preferred rest frame undetectable.
If we however uncover a collapse mechanism as suggested in 5), or even only insist on a realistic picture of quantum theory, does this break relativity on some level? If not, how can the quantum state space be constructed without reference to a preferred frame?

7) If there is a mechanism associated with the collapse and the Born rule, can it be controlled?
Such control would make 6) a practical problem and might reveal the preferred rest frame, if it exists. Deterministic control over the collapse process may also result in several new physical results related to no-go theorems. For example, no-cloning and no-signaling rely on indeterministic collapses.


I believe that studying and answering these questions will advance our understanding of quantum theory. Interestingly, it seems that none of them has been answered until now, and that is surely partly to be blamed on the rather dogmatic view that quantum theory cannot be understood in that way. And I'm afraid that simply asking these questions will be enough reason for some to criticize me.

Cheers,

Jazz
 
  • #86
Jazzdude: have you read Wallace's "emergent multiverse" ?
 
  • #87
Jazzdude said:
1) What does quantum theory look like from the inside?
If you are a mechanism, realized within a quantum universe, what can you say about the universe and your environment? MWI answers this with relative states, but those are not local. But locality is fundamental for the way we learn about our environment, because we do that by interaction.
Thanks for sharing your thoughts. My personal opinion is that we won't get rid of the non-separability of QM in a satisfactory way. I mean, non-locality is also present in the inside view. Or how do you take Bell tests into account?
 
  • #88
Quantumental said:
Jazzdude: have you read Wallace's "emergent multiverse" ?

Yes. There's nothing new in there if you have been working in the field for many years. It's mostly an exposition of the idea for someone who has not looked into it a lot. The much better read is "Many Worlds?", also edited by Wallace, because it also contains some critical contributions.

Cheers,

Jazz
 
  • #89
kith said:
Thanks for sharing your thoughts. My personal opinion is that we won't get rid of the non-separability of QM in a satisfactory way. I mean, non-locality is also present in the inside view. Or how do you take Bell tests into account?

I think we have a misunderstanding there. That's not what I meant to imply. Of course the non-separability is fundamental, and Bell is very important and very right! And I wasn't meaning to imply changing anything about this.

What I meant to say was that we have to find a way to describe a system locally, in some way. This is clearly an incomplete description, but it's what we do anyway when we describe the world around us. The observable universe does not live in a tensor factor space of the whole universe; still, we are able to describe it. Clearly, we cannot just remove everything outside the horizon of observation, because of the non-separability, as you rightly stated. But we have to understand how that local description relates to the whole.

Consider doing an experiment that's limited to some spatial region. We get the right experimental predictions from assuming that there is a pure state that can be assigned to the experiment. And the question is why we can do that, and how that local description that we use so successfully relates to the real state of the whole universe.

Does that make it clearer?

Cheers,

Jazz
 
  • #90
tom.stoer said:
First of all it's not a problem in collapse interpretations b/c there is only one observer.
That is a big issue for collapse interpretations. How do you define a probability if something either happens or does not? There is no way to observe "ah, this happened with 10% probability"; it either happened or it did not. Performing the measurement many times does not help; all you do is calculate the "probability" of the whole experimental result (like XYYXYYYYYXXY), and you get the same issue again.
To get a meaningful probability value, you need something like the ensemble interpretation - hypothetical exact repetitions of the experiment. But then you have multiple (imaginary) observers and counting them won't help ;).
 
  • #91
mfb, this is no issue in the collapse interpretation b/c you do not have to derive the probability for the collapse; it's postulated.

First of all we know that the QM probability

##p_i = |\langle i|\psi\rangle|^2##

is correct, simply b/c it agrees with our observation.

In the MWI case the problem is to derive this QM probability from branch counting (or something else in the formalism). You have two "probability measures" and two "perspectives": one known from other interpretations and experiments, another one from branch counting.

In the collapse case the collapse to i=x,y happens with the correct QM probability by definition! And when comparing it with experiment, I see that it agrees perfectly. I cannot explain why it works (b/c I cannot explain why and how the collapse happens), but it has worked for nearly a century. In the collapse case there is no branch counting, and the two perspectives are identical! So I would never have the idea not to take this probability into account.

In the MWI case there are fewer axioms, so there must be more theorems ;-)
 
  • #92
tom.stoer said:
mfb, this is no issue in the collapse interpretation b/c you do not have to derive the probability for the collapse; it's postulated.
Right, but how do you measure this probability experimentally? No measurement result will ever be a probability.

First of all we know that the QM probability

##p_i = |\langle i|\psi\rangle|^2##

is correct, simply b/c it agrees with our observation.
As shown in my previous post, this is already a non-trivial interpretation of the measurement results.

In the MWI case the problem is to derive this QM probability from branch counting (or something else in the formalism).
You don't have to do this, in the same way you don't care about highly improbable events (as calculated with Born) in collapse interpretations.
 
  • #93
Jazzdude: what puzzles me is that you seem to reject Zeh, Zurek, Wallace, Saunders and Deutsch like it's commonplace. But it really isn't.

More and more people have become sympathetic and some have become downright proponents of MWI in the last 5 years.

Look at skeptics like Matt Leifer, Scott Aaronson and Peter J. Lewis: while none of them are downright MWI'ers, they all have written well about it lately.

Peter J. Lewis has written extensively about the Born rule issue in MWI and so on, but look at his review of Wallace's book: http://ndpr.nd.edu/news/38878-the-emergent-multiverse-quantum-theory-according-to-the-everett-interpretation/

Note that I am *not* a proponent of MWI, as I think the Born rule issue is still an issue, but I'd love to understand better how you dismiss the decoherence approach in terms of bases.
 
  • #94
mfb said:
Right, but how do you measure this probability experimentally? No measurement result will ever be a probability.
@mfb, it's trivial.

In the collapse interpretation I have
1) a statistical frequency written down as a result string "xyxxyx..." by one observer
2) a probability which is postulated for the collapse and which can be calculated directly
Both (1) and (2) agree; that's why QM and "Copenhagen" work.

In the MWI I have
1) statistical frequencies written down as a set of result strings by a set of observers
2) no probability postulate

So if MWI shall be correct, then
1) the probability must be derived
2) it must not only work out top-down but also bottom-up

I think the problem I indicated in the very beginning can be fixed by replacing
a) the sum over branches with a branch-specific measure with
b) a sum over branches where the probability is replaced by an infinite number of sub-branches (with the same result string!)
then there is no probability anymore, but the correct statistical frequency is carried by the measure = by branch counting
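As a concrete sketch of (b), for rational weights: with ##p_x = 0.9## and ##p_y = 0.1##, replace each two-way branching by ten equally counted sub-branches, nine labelled "x" and one labelled "y". After N measurements there are ##10^N## sub-branches, and the fraction carrying a result string with k times "x" is

##\binom{N}{k}\,\frac{9^k}{10^N} = \binom{N}{k}\,(0.9)^k\,(0.1)^{N-k}##

so pure counting would reproduce the Born statistics.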

If this is correct, then my conclusion changes slightly: if MWI shall be correct, then
1) the measure must be derived from branch counting
(it will work out top-down and bottom-up automatically)

Anyway, having one axiom less than in collapse models, MWI has to deliver what it promised as a theorem.
 
  • #95
tom.stoer said:
@mfb, it's trivial.

In the collapse interpretation I have
1) a statistical frequency written down as a result string "xyxxyx..." by one observer
2) a probability which is postulated for the collapse and which can be calculated directly
Both (1) and (2) agree; that's why QM and "Copenhagen" work.
It is not trivial if you don't use handwaving.
Perform your 10%/90% experiment 1000 times. I am highly confident (from an everyday perspective) that you will not get exactly 100 x and 900 y. Does this mean your theory is wrong? Certainly not.

How do you test "probabilities" predicted by your theory? How do you distinguish between correct and wrong predictions?

I know this can be done. And if you write down a formal way to do this, you can do exactly the same for MWI, if you are interested in hypothesis testing.

I think the problem I indicated in the very beginning can be fixed by replacing
a) the sum over branches with a branch-specific measure with
b) a sum over branches where the probability is replaced by an infinite number of sub-branches (with the same result string!)
then there is no probability anymore, but the correct statistical frequency is carried by the measure = by branch counting

If this is correct, then my conclusion changes slightly: if MWI shall be correct, then
1) the measure must be derived from branch counting
(it will work out top-down and bottom-up automatically)

Anyway, having one axiom less than in collapse models, MWI has to deliver what it promised as a theorem.
(a) is fine.
Having fewer axioms is better in terms of Occam's razor.
 
  • #96
mfb said:
It is not trivial if you don't use handwaving.
Perform your 10%/90% experiment 1000 times. I am highly confident (from an everyday perspective) that you will not get exactly 100 x and 900 y. Does this mean your theory is wrong? Certainly not.

How do you test "probabilities" predicted by your theory? How do you distinguish between correct and wrong predictions?
It's about statistical hypothesis tests, levels of significance and all that.

mfb said:
And if you write down a formal way to do this, you can do exactly the same for MWI, if you are interested in hypothesis testing.
Provided it works for both top-down and bottom-up.

mfb said:
(a) is fine.
Having fewer axioms is better in terms of Occam's razor.
(a) is nice in theory (top-down) but not in practice (bottom-up), as I tried to explain several times.

And yes, having fewer axioms is fine in terms of Ockham's razor - provided that the required theorems can be proven. But what I read here seems to indicate that there is by no means agreement on these bold claims.

So my conclusion is that MWI has no philosophical acceptance problem but a physical problem. It is not clear whether the required theorems regarding probabilities / measures / branching and branch counting / the Born rule etc. follow from the formalism.
 
  • #97
tom.stoer said:
It's about statistical hypothesis tests, levels of significance and all that.

Maybe this would be considered argumentative, but I don't personally regard those standard tools of statistical analysis as having a whole lot of first-principles theoretical support. They are simply rules of thumb for using statistical data. You can use the same sorts of rules of thumb regardless of whether you believe the statistics arise from randomness, or ignorance of hidden variables, or many-worlds-type multiplicity. If they are just rules of thumb, then the only justification you need for them is that they seem to work pretty well, and that justification is empirical, not theoretical. You don't need a different justification for each interpretation of quantum mechanics.
 
  • #98
stevendaryl said:
You don't need a different justification for each interpretation of quantum mechanics.
I don't think this is true.

You have the Born rule as an axiom in collapse interpretations. You do not have this axiom in MWI; instead you have branch counting. So you need a theoretical derivation and experimental tests; otherwise it's not physics.

The claim that MWI is fully equivalent to other interpretations can't be true if their axioms differ and if the gap cannot be closed by a theorem.
 
  • #99
stevendaryl said:
Maybe this would be considered argumentative, but I don't personally regard those standard tools of statistical analysis as having a whole lot of first-principles theoretical support. They are simply rules of thumb for using statistical data. You can use the same sorts of rules of thumb regardless of whether you believe the statistics arise from randomness, or ignorance of hidden variables, or many-worlds-type multiplicity. If they are just rules of thumb, then the only justification you need for them is that they seem to work pretty well, and that justification is empirical, not theoretical. You don't need a different justification for each interpretation of quantum mechanics.

or simply, probabilistically infected data.
Convenient coincidences.

http://plato.stanford.edu/entries/probability-interpret/
 
  • #100
tom.stoer said:
I don't think this is true.

You have the Born rule as an axiom in collapse interpretations. You do not have this axiom in MWI; instead you have branch counting. So you need a theoretical derivation and experimental tests; otherwise it's not physics.

The claim that MWI is fully equivalent to other interpretations can't be true if their axioms differ and if the gap cannot be closed by a theorem.

I don't think it's really true that you have "branch counting" in MWI. The branches are not discrete; there are infinitely many of them. With an infinite collection of possibilities, there is no way to count the numbers. You have to have a measure on the sets of possibilities. I'm not sure whether it is possible to derive the measure to use from first principles, but if it has to be an additional axiom, I don't see how that's any worse than the standard collapse interpretations.

I also think that you're glossing over the conceptual problems with the standard (Copenhagen) interpretation. You say you have the Born rule, but as others have pointed out, there is no way to absolutely test the correctness of that rule. The best you can do is have a rule of thumb for saying when the discrepancy between relative frequencies and the probabilities predicted by the Born rule is great enough to falsify your theory. What such a rule of thumb amounts to is ASSUMING that our actual history is fairly typical of the possible histories described by the theory. Without such an assumption, a probabilistic theory is not testable. The only difference (as far as the meaningfulness of probabilities) that I can see between the standard collapse interpretation and the Many Worlds interpretation is that in the first case the set of possibilities are considered to be theoretical possibilities, while in the second case they are considered to be actual alternatives.
 
  • #101
stevendaryl said:
I don't think it's really true that you have "branch counting" in MWI. The branches are not discrete; there are infinitely many of them. With an infinite collection of possibilities, there is no way to count the numbers. You have to have a measure on the sets of possibilities.
It doesn't matter whether they are continuous or discrete. It's key that you have some well-defined measure.

stevendaryl said:
I'm not sure whether it is possible to derive the measure to use from first principles, but if it has to be an additional axiom, I don't see how that's any worse than the standard collapse interpretations.
An axiom would not really make sense. A theorem is required, but w/o a sound proof it's unclear how MWI is viable.
 
  • #102
tom.stoer said:
It's about statistical hypothesis tests, levels of significance and all that.
That's exactly the hand-waving I mentioned.

I am not interested in precise numbers, but can you suggest an experimental test which can verify or disprove some hypothesis about the squared amplitudes of quantum-mechanical systems?




Here is what I would suggest:

Probabilistic interpretations:

Find a test that you can repeat as often as you like (like shooting photons with a specific polarization at a polarizer, rotated by some specific angle). For each photon, detect if it passed the polarizer. Let's assume this detection is 100% efficient and has no background.

Let's test the hypothesis "the squared amplitude* of the wave going through is 10% of the initial squared amplitude". I will call this hypothesis A. In probabilistic interpretations, this translates via an additional axiom to "as an expectation value, 10% of the photons pass through".
I will call this event x, and the opposite event y.

*and let's ignore mathematical details, it should be clear how this is meant

We decide to test 100,000 photons. If every photon has a 10% probability for x and all photons are independent, we expect the result "x" 10,000 times (as an expectation value), with a standard deviation of (roughly) 100. This is standard probability theory, nothing physical so far.

To distinguish our hypothesis from other hypotheses (like "20% probability of x" - hypothesis B), we look for measurement results which are in agreement with A, but not with B or a large class of other possible hypotheses.
A natural choice is "we see agreement with hypothesis A if we see x between 9800 and 10200 times".
Mathematics tells us that with hypothesis A, we should see agreement with a probability of ~95%, while with hypothesis B, the probability is basically 0.
We can perform the test. If we get a result between 9800 and 10200 we are happy that hypothesis A passed the test and that we could reject hypothesis B and many others.


There are hypotheses we could not reject with that test. Consider hypothesis C: "the number of x-events will be even with 95% probability". Can we test this? Sure. Make another test with "we see agreement with hypothesis C if the number of x is even". If we get 10044, we do not reject C; if we get 10037, we do.

Actually, it is completely arbitrary which events we consider as "passing the test" versus "failing", as long as the sum of probabilities of all events in the class "passing the test" is some reasonably large number (like 95% or whatever you like).

To test more and more hypotheses with increasing precision, we can perform multiple experiments, which is basically the same as one larger experiment.

The result?
A true hypothesis will most likely (->as determined by the probabilities of the true hypothesis) pass the tests, while a wrong hypothesis will most likely (->as determined by the probabilities of the true hypothesis) fail.

Most possible results will reject the true hypothesis. Consider the first test, for example: only a fraction of ##\sim 10^{-16000}## of all possible results will pass the test. Even the most probable single result (no x at all) is part of the "reject the test" fraction of the possible measurements.
This small fraction of measurements passing the test is not fixed and depends on the test design, but for large tests it is always extremely small.

How can we "hope" that we hit one of those few events (in order to confirm the correct hypothesis? Well, we cannot. We just know that they get a large amplitude, and call this a large "probability". The "probability" to accept the true hypothesis and reject as many others as possible can go towards 1.
--> We cannot get physics right with certainty, we cannot even get it right with most possible measurement results, but we can get it right with a high probability (like "1-epsilon").
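(A quick numerical check of the acceptance probabilities in the test above; a sketch using the exact binomial distribution:)

```python
# Acceptance probability of the test "9800 <= #x <= 10200" for
# n = 100,000 photons, under hypotheses A (10%) and B (20%).
from scipy.stats import binom

n = 100_000

def accept(p):
    return binom.cdf(10_200, n, p) - binom.cdf(9_799, n, p)

print(accept(0.10))   # ~0.965: the "~95%" from above, A almost always passes
print(accept(0.20))   # ~0.0 (underflows): B essentially never passes
```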

MWI

The QM formalism stays the same, and we can make hypotheses about amplitude.
We can define and perform the same tests as above. Again, most results will reject the true hypothesis - the true hypothesis will get rejected in most branches. But at the same time, most of the measure (we can let this fraction go towards 1 for many tests) will see passed tests for the true hypothesis only.
--> We cannot get physics right in all branches, we cannot even get it right with most branches, but we can get it right within branches with a large measure (like "1-epsilon").

That's all I want to get from tests.
 
  • #103
tom.stoer said:
It doesn't matter whether they are continuous or discrete. It's key that you have some well-defined measure.

It's the same measure as is used in the collapse interpretation.

An axiom would not really make sense. A theorem is required, but w/o a sound proof it's unclear how MWI is viable.

I don't agree. It's an analogous situation in both the collapse interpretation and the MWI. In the collapse interpretation, there is a set of "possible" histories (possible according to the theory), and then there is our actual history. The Born rule only makes a testable prediction if we assume that our actual history is "typical" of the set of all possible histories. In MWI, the only difference is that the alternative histories are not considered just theoretical possibilities, but are ACTUAL. They're just not our history. To get a prediction from MWI, you have to have a notion of a typical history, and assume that ours is typical. I don't see much difference, as far as the meaningfulness of probabilistic predictions.
 
  • #104
tom.stoer said:
An axiom would not really make sense. A theorem is required, but w/o a sound proof it's unclear how MWI is viable.

Why doesn't it make sense to have an axiom giving the measure to use?
 
  • #105
The_Duck said:
Why doesn't it make sense to have an axiom giving the measure to use?

Here's a "toy" universe that has some of the properties of MWI:

The universe is deterministic, except for a mysterious, one-of-a-kind perfect coin. When you flip it, it's completely impossible to predict ahead of time whether it will end up "heads" or "tails".

Behind the scenes, this is what really happens: Whenever someone flips the coin, God (or if you want to be non-religious about it, the Programmer---you can assume that the world is really a simulation conducted inside a supercomputer) stops everything for a moment, and makes two copies of the world that are identical in every respect, except that in one copy, the coin lands head-up, and in the other copy, the coin lands tails-up.

As time goes on, some worlds will have histories in which the coin has landed heads-up half the time, and tails-up the other half the time. Other worlds will have different relative frequencies.

Now, a person living in one of the worlds can come up with a measure on possible histories by using the assumption that every coin flip has probability 50/50 of landing heads or tails. He can define a "typical world" as one in which the relative frequencies approach 1/2 in the limit. He can prove that, according to his measure, the set of worlds that are "typical" has measure 1, and the set that are "atypical" has measure 0. So if he assumes that his own world is typical, he can make probabilistic predictions.

But not everybody will live in a world where the relative frequency for heads approaches 1/2. Some people will live in a world where the relative frequency approaches 1/3, or 1/5, or any other number you choose. So you can't deduce from the many-worlds theory (I'm talking about the theory of the many worlds created by God or the programmer, not Everett's Many Worlds) what the relative frequency must be, because it's different in different possible worlds. You can assume that you live in a world with a particular relative frequency, but that's an additional assumption; it doesn't follow from the theory.
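A counting sketch of the toy model (my own illustration; note that plain world counting happens to work here, precisely because the branching is uniform 50/50):

```python
# After N flips of the perfect coin there are 2^N worlds, one per
# heads/tails history. The fraction of worlds whose heads-frequency lies
# within eps of 1/2 tends to 1, but the atypical worlds never disappear.
from math import comb

def typical_fraction(N, eps=0.05):
    lo, hi = int((0.5 - eps) * N), int((0.5 + eps) * N)
    return sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N

for N in (10, 100, 1000, 10000):
    print(N, typical_fraction(N))   # 0.45, 0.73, 0.998, ~1.0
```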
 
