Are World Counts the Key to Understanding the Born Rule?

In summary: Wallace's key point is that world counting is incoherent, because it requires knowing the number of branches, and there is no such thing. The models of splitting usually considered in discussions of Everett, typically two or three discrete splitting events that each produce a smallish number of branches, bear little or no resemblance to the true complexity of realistic, macroscopic quantum systems, and any precise definition of "branching" is very sensitive to the details of the model of splitting being considered. Wallace therefore claims that world counting is incoherent in all contexts, not just in some.
  • #1
RobinHanson
This discussion is moved from the thread "my paper on the Born rule" (post #40). The key claim is:

Ontoplankton said:
In section 5.3: http://philsci-archive.pitt.edu/archive/00001742/

Wallace and Greaves and many others seem to accept the claim that if there are naturally distinguishable branches/worlds in the Everett approach, then it is natural to assign probabilities proportional to world counts, producing a difficult conflict with the Born rule. They claim, however, that world counting is incoherent. Page 21 of Wallace's paper cited above gives the most elaboration I've seen defending this view.

How correct is their claim? Are world counts incoherent in all contexts, or only in some? In particular, are they coherent in the situation of most interest to me? Counting arguments suggest that, relative to the parent world where we started testing the Born rule, our world is very unusual: it has seen measurement statistics close to the Born rule, while the vast majority of worlds should instead see near-uniform measurement statistics.
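To make that counting argument concrete, here is a minimal sketch (Python; it assumes, purely for illustration, that each binary measurement doubles the worlds and that all worlds count equally):

```python
from math import comb

N = 100       # number of identical binary measurements
p_born = 0.1  # Born-rule weight of outcome "1" (illustrative value)

# If each measurement doubles the worlds, there are 2^N equally counted
# worlds, and C(N, k) of them record exactly k "1"-outcomes.
count_mode = max(range(N + 1), key=lambda k: comb(N, k))
born_mode = max(range(N + 1),
                key=lambda k: comb(N, k) * p_born**k * (1 - p_born)**(N - k))

print(count_mode)  # 50: the typical counted world sees near-uniform frequencies
print(born_mode)   # 10: the typical Born-weighted world sees frequency ~ p_born
```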

In posts to follow, I'll quote Wallace's argument, and offer my own opinions.
 
  • #2
As promised, here is the longest discussion I've seen on this issue. In "Quantum Probability from Subjective Likelihood: improving on Deutsch's proof of the probability rule", David Wallace writes:

Non-branch-indifferent strategies require us to know the number of branches, and there is no such thing. Why? Because the models of splitting often considered in discussions of Everett — usually involving two or three discrete splitting events, each producing in turn a smallish number of branches — bear little or no resemblance to the true complexity of realistic, macroscopic quantum systems. In reality:
  • Realistic models of macroscopic systems are invariably infinite-dimensional, ruling out any possibility of counting the number of discrete descendants.
  • In such models the decoherence basis is usually a continuous, over-complete basis (such as a coherent-state basis) rather than a discrete one, and the very idea of a discretely-branching tree may be inappropriate. (I am grateful to Simon Saunders for these observations.)
  • Similarly, the process of decoherence is ongoing: branching does not occur at discrete loci, rather it is a continual process of divergence.
  • Even setting aside infinite-dimensional problems, the only available method of ‘counting’ descendants is to look at the time-evolved state vector’s overlap with the subspaces that make up the (decoherence-) preferred basis: when there is non-zero overlap with one of these subspaces, I have a descendant in the macrostate corresponding to that subspace. But the decoherence basis is far from being precisely determined, and in particular exactly how coarse-grained it is depends sensitively on exactly how much interference we are prepared to tolerate between ‘decohered’ branches. If I decide that an overlap of 10^-10^10 is too much and change my basis so as to get it down to 0.9 × 10^-10^10, my decision will have dramatic effects on the “head-count” of my descendants.
  • Just as the coarse-graining of the decoherence basis is not precisely fixed, nor is its position in Hilbert space. Rotating it by an angle of 10 degrees will of course completely destroy decoherence, but rotating it by an angle of 10^-10^10 degrees assuredly will not. Yet the number of my descendants is a discontinuous function of that angle; a judiciously chosen rotation may have dramatic effects on it.
  • Branching is not something confined to measurement processes. The interaction of decoherence with classical chaos guarantees that it is completely ubiquitous: even if I don’t bother to turn on the device, I will still undergo myriad branching while I sit in front of it. (See Wallace (2001, section 4) for a more detailed discussion of this point.)
The point here is not that there is no precise way to define the number of descendants; the entire decoherence-based approach to the preferred-basis problem turns (as I argue in Wallace (2003a)) upon the assumption that exact precision is not required. Rather, the point is that there is not even an approximate way to make such a definition.

In the terminology of Wallace (2003a), the arbitrariness of any proposed definition of number of descendants makes such a definition neither predictive nor explanatory of any detail of the quantum state’s evolution, and so no such definition should be treated as part of macroscopic reality. (By contrast, the macroscopic structure defined by decoherence (which can be specified by how the weights of a family of coarse-grained projectors in the decoherence basis change over time) is fairly robust, and so should be treated as real. It is only when we start probing that structure to ridiculously high precision — as we must do in order to count descendants — that it breaks down.)
 
  • #3
Analogy with Entropy

Informally, entropy is often defined in terms of "the number of states of a system consistent with macroscopic knowledge of that system." Under such a definition, the exact number of states is in a sense very sensitive to the details of the exact model one uses, including the exact system boundary. It is also very sensitive to how big a difference is enough to call something a different state. Systems with an apparent infinity of degrees of freedom are particularly troublesome. Tiny changes can easily change the number of states by a factor of 10^10 or more.

Nevertheless, the concept of entropy is very useful, allowing robust predictions of system behavior. When the number of states is of the order of 10^10^25, a factor 10^10 makes little difference. We have standard and apparently reasonable ways to deal with ambiguities. It is hard to escape the impression that real systems really do have entropy.
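In Boltzmann's formulation the robustness is just the logarithm at work; with the numbers above:

```latex
S = k_B \ln \Omega, \qquad
\Delta S = k_B \ln 10^{10} \approx 23\,k_B
\;\ll\; S \approx k_B \ln 10^{10^{25}} \approx 2.3 \times 10^{25}\,k_B .
```

A factor of 10^10 in the state count shifts the entropy by roughly one part in 10^24.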

The driving concept of a state is a difference that makes enough of a difference in long-term system evolution. So the standard quantum entropy definition just takes the projection on some basis, ignoring the fact that quantum states can also vary in relative phases. Apparently the relative phases are not differences that make enough difference regarding long term system evolution.

The driving concept of a world is a component of a quantum state whose future evolution is independent enough from the evolution of other states. While there might be ambiguities in classifying borderline cases, it is not clear to me that these ambiguities are so severe as to make the concept of world count "incoherent" or "meaningless."

In particular, it is not clear that those ambiguities can go very far toward reducing the apparent strangeness of our seeing Born rule frequencies in measurements in our world, when in simple models the typical world sees near uniform frequencies.
 
  • #4
Ah, I saw that a moderator moved this thread.

I agree with you that "world counting" is always a possibility, at least grossly, with the idea that if we make it precise, with one or the other arbitrary postulate, the numbers will not vary a lot. However, I have more difficulty, a priori, with assigning equal probabilities to each of the worlds. In thermodynamics, there's a good reason to do so, and that is ergodicity: the state is in any case rattling back and forth erratically in phase space, so any "initial condition" is about equivalent and will soon cover the entire phase space. But in "world counting" there's no such thing as ergodicity! We're not hopping randomly (at least, that's how I understand it) through all the worlds, so that we get an "average impression" of all the worlds, in which case it could be justified to say that they all have an equal probability. We just "happen to be" in one of the worlds; but how is this "happen to be" distribution done? Uniformly?

As a silly illustration of what I'm trying to say: imagine that there are 10^10 ants on the planet and 5 humans (and let us take it for the sake of argument that ants are conscious beings). You happen to be one of the humans, and you could find that rather strange, because there is a large probability for you to be an ant. There are many more "ant worlds" than "human worlds". Does such an assignment of probability even make sense? (I do realize the silliness of my proposal :smile: )
 
  • #5
RobinHanson said:
They claim, however, that world counting is incoherent. ...How correct is their claim?

Well, I think that I would actually agree with Wallace's statement that "Realistic models of macroscopic systems are invariably infinite-dimensional." However, I do not agree with Wallace that this implies that world-counting is necessarily incoherent.

In trying to think about how to explain my point of view, I find myself returning to my high school calculus days, when I first encountered the concept of the limit. Basically, the limit is a conceptual bridge from "discrete" to "continuous." If we want to define the area under a curve, for example, then we start out approximating the area as being composed of discrete chunks, and we add them up. Then we take the limit as the number of chunks approaches infinity. Of course, for this scheme to work, we require that the area calculation remain stable as we take our limit.

So I have been borrowing some of these concepts from my high school days in considering how to make the APP (outcome counting) into a coherent scheme. If we assume outcome counting, and we also assume that Wallace is correct to assert that the number of branches is formally infinite, then the way to reconcile these two facts would go something like this:

1) There exists some sort of meaningful method of approximating the infinite number of worlds associated with a measurement via a finite representative "sampling" of these worlds.

2) We apply outcome counting to the finite worlds generated via the above approximation method, and use this to give us probabilities.

3) There exists some method of calculating probabilities in the limit as the number of sampled worlds becomes infinite (ie, as our model becomes more fine-grained).

4) We require that the calculated probabilities must remain stable as we take this limit.

Wallace's argument, it seems to me, is simply an assertion that this cannot be done. But there are many examples of situations where such limits CAN be taken, and DO remain stable. Your (Robin's) discussion of entropy is one such situation. I argued earlier in Patrick's thread that the FPI presents another such situation. So the interesting question, to me: how, exactly, do we go about the above procedure, in the attempt to make the APP into a coherent scheme?
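As a toy version of steps 1 through 4 (a sketch under arbitrary modeling assumptions: a single continuous pointer variable and a uniform discretization, with one world per bin):

```python
import numpy as np

# Toy model: a continuous "pointer" variable x in [0, 1); outcome A is
# x < 1/3, outcome B is x >= 1/3.  Step 1: approximate the continuum of
# worlds by the n bins of a uniform discretization.  Step 2: outcome
# counting sets P(A) = (# A-worlds) / (# worlds).  Steps 3-4: refine the
# graining and check that the probability stabilizes.
def app_probability(n_bins):
    x = (np.arange(n_bins) + 0.5) / n_bins  # one representative world per bin
    return np.mean(x < 1/3)

for n in (10, 100, 1000, 10000):
    print(n, app_probability(n))  # converges to 1/3 as the graining refines
```

Whether a refinement scheme like this can be made to reproduce the Born rule, rather than a uniform measure over the pointer variable, is of course exactly the open question.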

Alternatively, we may wish to discuss whether Wallace is correct to assert that the number of worlds is formally infinite. I think your paper, Robin, assumes that world number remains finite. What is your argument for this? Are you essentially trying to avoid the incoherency arguments posed by Wallace? Would your scheme perhaps be made to work even if we assume an infinite number of worlds?

David
 
  • #6
straycat said:
1) There exists some sort of meaningful method of approximating the infinite number of worlds associated with a measurement via a finite representative "sampling" of these worlds.
2) We apply outcome counting to the finite worlds generated via the above approximation method, and use this to give us probabilities.
3) There exists some method of calculating probabilities in the limit as the number of sampled worlds becomes infinite (ie, as our model becomes more fine-grained).
4) We require that the calculated probabilities must remain stable as we take this limit.

My fear is that your requirement of the stability of probabilities as you increase the number of worlds is nothing other than the non-contextuality requirement. Then you will hit Gleason's theorem: he showed that the only way of assigning probabilities in that case is the Hilbert norm.
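For reference, Gleason's theorem says that in a Hilbert space of dimension at least 3, any probability assignment over projectors that is additive across each orthogonal decomposition (the non-contextuality requirement, i.e. μ(Σ P_i) = Σ μ(P_i) for mutually orthogonal P_i) must take the form

```latex
\mu(P) = \operatorname{Tr}(\rho P)
\quad \text{for every projector } P \text{ and some fixed density operator } \rho ,
```

which is exactly the Born rule.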

But I have difficulties with arguments that rely on the fact that the number of degrees of freedom is infinite. After all, we don't know if there is no discrete underlying structure, say at 10^(-5000) times the Planck scale. That's still finite. Infinity in a theory is, to me, a mathematical approximation of "very large".
 
  • #7
RobinHanson said:
Wallace and Greaves and many others seem to accept the claim that if there are naturally distinguishable branches/worlds in the Everett approach, then it is natural to assign probabilities proportional to world counts, producing a difficult conflict with the Born rule.

The following is maybe not 100% on-topic here, but, I wouldn't accept this claim myself.

I guess the intuition behind assigning uniform probabilities is that, if we don't have any better criteria, we might as well treat each world symmetrically. But for me this intuition is undermined by the fact that the number of worlds increases by some huge factor each second. If we treat all worlds one second from now as equally important as each other, then why not treat each world one second from now as equally important as this world? But that would mean the part of the universe that is not in the far future is negligible. There would be more "world-moments" where it was the year one billion but we mistakenly thought it was 2005, than there would be "world-moments" where it actually was 2005. It would mean utilitarians should be going around advocating more world-splitting where people were happy, and less world-splitting where people were unhappy.

So to put it in different terms... That each world has an unimaginably large number of descendant-worlds after even one second, suggests to me that the relevant variable is not world-count, but that we should imagine worlds as having a different "thickness". Earlier worlds should be much "thicker" if we want to avoid a lot of paradoxes. And once we've allowed that, there's no reason why we couldn't see some worlds as "thicker" than others that result from the same splitting.

Of course, just because a decision rule has weird consequences doesn't mean we can ignore it, and it may be useful to work out how we should behave in a universe that really gets many times as "big" (in some decision-theoretical sense) each second. ("One man's modus tollens is another's modus ponens"...) And someone who was sufficiently convinced that equal probabilities are right, but thought the consequences were impossible, could just see this as refuting the Everett interpretation itself. But for me thinking about these things destroys the intuition that all worlds matter equally, or should be seen as equally probable. And the Everett interpretation itself has a lot going for it.

So I don't agree that the program Deutsch started tries to solve the problem by introducing a new or different decision theory, because I think the old decision theory doesn't say anything clear about this situation.
 
  • #8
vanesch said:
My fear is that your requirement of the stability of probabilities as you increase the number of worlds is nothing else than the non-contextuality requirement. Then you will hit Gleason's theorem: he showed that the only way of assigning probabilities in that case is the Hilbert norm.

What exactly is the non-contextuality requirement? You've mentioned this many times, but I've never really understood it. :rolleyes:

vanesch said:
But I have difficulties with arguments that rely on the fact that the number of degrees of freedom is infinite. After all, we don't know if there is no discrete underlying structure, say at 10^(-5000) times the Planck scale. That's still finite. Infinity in a theory is, to me, a mathematical approximation of "very large".

I'm happy with the position that we just don't know right now. I sort of lean toward infinity, but I'm not wedded to that position.

Keep in mind that it is quite possible for the "underlying structure" to be discrete in some ways, continuous in others, at the same time. If spacetime is multiply connected, as in some models of quantum foam, then one could imagine that the *topological* degrees of freedom (like the genus of a surface, etc) are discrete and finite -- and yet spacetime is still fundamentally continuous.

David
 
  • #9
Hey Onto,

Ontoplankton said:
So to put it in different terms... That each world has an unimaginably large number of descendant-worlds after even one second, suggests to me that the relevant variable is not world-count, but that we should imagine worlds as having a different "thickness". Earlier worlds should be much "thicker" if we want to avoid a lot of paradoxes.

So if a world of thickness T splits into two worlds with thicknesses T1 and T2, then we require: T = T1 + T2.

I think that the term "world" should perhaps be given a meaning different than the term "branch," like this. Suppose we have an experiment with two possible outcomes. Everett says that we have one world that splits into two. Personally, I have always imagined that, in fact, we had "two worlds all along," so it would be more appropriate to say that we had one branch that split into two. The thickness of each branch would then be defined roughly as the number of worlds that are represented by that branch.
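In symbols, the bookkeeping generalizes to any number of sub-branches, and identifying relative thickness with the Born weight is one way (an assumption, not something forced by the formalism) of satisfying the conservation rule:

```latex
T = \sum_i T_i , \qquad
T_i = T\,|\alpha_i|^2 \;\Rightarrow\; \sum_i T_i = T \sum_i |\alpha_i|^2 = T .
```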

David
 
  • #10
I posed the topic of this thread as whether we agree with the claim that world counts are incoherent. So far none of the three people who have commented in this thread have agreed with this claim. Before we go too far in the direction of changing the topic: does anyone out there agree with this main claim?
 
  • #11
Ontoplankton said:
... if we don't have any better criteria, we might as well treat each world symmetrically. But for me this intuition is undermined by the fact that the number of worlds increases by some huge factor each second. ... There would be more "world-moments" where it was the year one billion but we mistakenly thought it was 2005, than there would be "world-moments" where it actually was 2005. ... for me thinking about these things destroys the intuition that all worlds matter equally, or should be seen as equally probable.

vanesch said:
I agree with you that "world counting" is always a possibility, at least grossly, ... However, I have more difficulty, a priori, with assigning equal probabilities to each of the worlds. ... in "world counting" there's no such thing as ergodicity! ... imagine that there are 10^10 ants on the planet and 5 humans (and let us take it for the sake of argument that ants are conscious beings). You happen to be one of the humans, and you could find that rather strange, because there is a large probability for you to be an ant. There are many more "ant worlds" than "human worlds". Does such an assignment of probability even make sense?

OK, apparently there is a lot more interest here in the topic of how to assign probabilities, assuming that world counts are coherent. So let me make a few comments.

1. While utilities are about what we want, probabilities are about what we know about the way the world is. If you lump them both together into "decision theory," you risk confusing the choice we have regarding what we care about, with a choice about how to assign probabilities. If the physics is clear, then I think there should be a right answer about how to assign probabilities. It may not be obvious at first what that right answer is, but it is out there nonetheless. Even if there is some freedom about how to assign a prior, we should get enough evidence in this sort of context to get a strong posterior.

2. The kind of uncertainty we are talking about here is "indexical." This is not uncertainty about the state of the universe, but rather uncertainty about which part of the universe we are. There is a philosophy literature on how to assign probabilities regarding indexical uncertainty. See Nick Bostrom's book: http://www.anthropic-principle.com/book/

3. The issue is how to assign a prior, not a posterior. If we have reliable evidence that the year is 2005, then that will beat a strong prior that the year is one billion. If you don't think we have strong evidence, you get into simulation-argument territory (http://www.simulation-argument.com/). It is not clear to me that a prior favoring future moments is irrational.
 
  • #12
RobinHanson said:
OK, apparently there is a lot more interest here in the topic of how to assign probabilities, assuming that world counts are coherent.

Perhaps the topic could be: assuming that world counting is coherent, then what is the best way to make this into a workable theory? afaik 2 of the 3 people in the world who have published bona fide attempts to do this are here in PF (I mean Robin and Weissman - with Graham being the one not here), and I am working on a fourth. Perhaps we could compare and contrast these various different strategies?

David
 
  • #13
All of the following is still about whether it's rational to base probabilities on world-counting, not about whether world-counting is incoherent. Maybe it belongs in the other thread, but since it's still a response to statements made here, I'm posting it here.

RobinHanson said:
1. While utilities are about what we want, probabilities are about what we know about the way the world is. If you lump them both together into "decision theory," you risk confusing the choice we have regarding what we care about, with a choice about how to assign probabilities.

Maybe it's a good idea to use Wallace's terminology here. Before a quantum experiment, an observer can take two different points of view:

1. The pre-experiment observer is certain that he will become all post-experiment observers. Given the determinism of the theory there is nothing to be uncertain about; the observer only has to decide what utility weight to assign to each of his future selves. Wallace calls this the "Objective Determinism", or "OD" viewpoint.

2. The pre-experiment observer is uncertain which of the post-experiment observers he will become. He quantifies this uncertainty by assigning probabilities. Wallace calls this the "Subjective Uncertainty", or "SU" viewpoint.

I used to think that "OD" was obviously the right way to think about this. But Wallace uses an argument for the validity of "SU" taken from an earlier paper by Saunders. As I understand it, it goes as follows: The "person-stage" that is the pre-experiment observer is a part of many different "four-dimensional" persons. There is one person who observes first A and then B2, and another person who observes first A and then B1 (where A is everything the pre-experiment observer has experienced, and B1 and B2 are possible outcomes of the experiment). Before the experiment, each of these persons is indexically uncertain whether he is the person that will observe B1 or the person that will observe B2.

If I'm reading your post correctly, you're saying that the pre-experiment observer's problem is one of choosing the right probabilities (to quantify his indexical uncertainty), rather than one of choosing a utility function with potentially different weightings for each possible future. So this would mean you agree with Wallace that "SU" is a valid perspective to take, and you disagree with Greaves that "OD" is the only valid perspective.

But according to Wallace, the "SU" perspective allows you to make a strong argument for the validity of the Born rule that you can't make based on the "OD" perspective. (see page 19 of http://users.ox.ac.uk/~mert0130/papers/decprob.pdf ). He claims that it's enough to prove "branching indifference" ("An agent is rationally compelled to be indifferent about processes whose only consequence is to cause the world to branch, with no rewards or punishments being given to any of his descendants."). He claims that from the "SU" viewpoint branching indifference is easily seen to be true; see the middle of page 19 for the argument. (As I understand it: if you add extra uncertainty about some branching process whose outcomes all have the same utility, this never changes the expected utility of any quantum process of which that branching process is a part.)

Do you think this argument works? If so, it should prove that the Born rule gives you the only rational prior. (In particular, it should prove that uniform probabilities on worlds are irrational.)
 
  • #14
Ontoplankton said:
Wallace's terminology ... an observer can take two different points of view:
1. The pre-experiment observer is certain that he will become all post-experiment observers. ... "Objective Determinism" ...
2. The pre-experiment observer is uncertain which of the post-experiment observers he will become. ... "Subjective Uncertainty" ...
If I'm reading your post correctly, you're saying that the pre-experiment observer's problem is one of choosing the right probabilities (to quantify his indexical uncertainty), rather than one of choosing a utility function with potentially different weightings for each possible future. So this would mean you agree with Wallace that "SU" is a valid perspective to take, and you disagree with Greaves that "OD" is the only valid perspective. But according to Wallace, the "SU" perspective allows you to make a strong argument for the validity of the Born rule that you can't make based on the "OD" perspective. (see page 19 of http://users.ox.ac.uk/~mert0130/papers/decprob.pdf ). He claims that it's enough to prove "branching indifference" ... if you add extra uncertainty about some branching process whose outcomes all have the same utility, this never changes the expected utility of any quantum process of which that branching process is a part. ... Do you think this argument works? If so, it should prove that the Born rule gives you the only rational prior. (In particular, it should prove that uniform probabilities on worlds are irrational.)

No, I think Wallace's argument confuses utilities and probabilities. Even if some branching doesn't influence certain decisions, your probabilities might still be influenced. It is just a mistake to bring up "indifference" when thinking about probabilities.

Indexical uncertainty is certainly possible, so I don't see how one can claim the perspective of such uncertainty is invalid.

By the way I should admit to being at least a bit bothered by your pointing out that there are far more future "world-selves" than past world-selves, and so a uniform prior over such selves greatly favors the future. Not sure how strong an issue it is though.
 
  • #15
Ontoplankton said:
But according to Wallace, the "SU" perspective allows you to make a strong argument for the validity of the Born rule that you can't make based on the "OD" perspective. (see page 19 of http://users.ox.ac.uk/~mert0130/papers/decprob.pdf ). He claims that it's enough to prove "branching indifference" ("An agent is rationally compelled to be indifferent about processes whose only consequence is to cause the world to branch, with no rewards or punishments being given to any of his descendants."). He claims that from the "SU" viewpoint branching indifference is easily seen to be true; see the middle of page 19 for the argument. (As I understand it: if you add extra uncertainty about some branching process whose outcomes all have the same utility, this never changes the expected utility of any quantum process of which that branching process is a part.)

Do you think this argument works?

This is a good paper for us to use in this thread. Having read page 19, I do not see how or why SU compels us to accept branching indifference. In particular I agree with Robin:

Even if some branching doesn't influence certain decisions, your probabilities might still be influenced.

But I'm going to sit down and read through the entire paper and see if there is something I've missed ...
 
  • #16
straycat said:
But I'm going to sit down and read through the entire paper and see if there is something I've missed ...

OK, I'm going through Wallace's arguments in favor of branch indifference, which is iiuc equivalent to measurement neutrality. The APP, ie "outcome counting," would, btw, represent a violation of branch indifference. On page 20, Wallace poses the following objection to any violation of branch indifference -- ie, he poses this objection to the APP:

1. The epistemic objection: to take decisions in such a universe, an agent who was not branch indifferent would have to be keeping microscopically detailed track of all manner of branch-inducing events (such as quantum decays) despite the fact that none of these events have any detectable effect on him. This is beyond the plausible capabilities of any agent.

This objection seems unreasonable. My main objection to Wallace's above argument is that there is nothing in the APP to exclude the possibility that some sort of pattern to the branching structure might exist, such that the coarse-grained probabilities (ie, the Born rule) can be approximated without knowing the precise fine-grained structure. Indeed, this is what makes the APP interesting to me: we can theorize relationships between the fine-grained structure and the coarse-grained probabilities. Since we know the coarse-grained probabilities are the Born rule, we are encouraged to search for a fine-grained structure, based on the APP, that is equivalent to a coarse-grained Born rule.

A secondary objection to Wallace's argument is that the "plausible capabilities of any agent" are immaterial. When Mother Nature wrote Her most basic laws, I do not think she bothered to ask whether they were too complicated for us to understand!

David
 
  • #17
RobinHanson said:
No, I think Wallace's argument confuses utilities and probabilities. Even if some branching doesn't influence certain decisions, your probabilities might still be influenced. It is just a mistake to bring up "indifference" when thinking about probabilities.

It looks like you're right: Wallace justifies "branching indifference" in terms of utilities, and he then uses it to prove "equivalence", which is phrased in terms of probabilities.

But maybe this doesn't matter for the argument; maybe it still allows him to resolve the two lacunae he mentions at the start of section 9.

I think Wallace's position is that probabilities should make sense both from the "SU" viewpoint where you're uncertain who you will become, and the "OD" viewpoint where you just look at the deterministic physics and determine how much you care about each branch. In his coinflip example on p19, if the extra branching event doesn't change the expected utility of the coinflip, then how is it possible that it changes the probability of either outcome?

Indexical uncertainty is certainly possible, so I don't see how one can claim the perspective of such uncertainty is invalid.

In the situation where the experiment has already taken place but you don't know the outcome, I agree that indexical uncertainty is obviously the right way to look at it. In that case there are multiple physical observers, and each of them doesn't know which one he is.

But before the experiment has taken place, it's less obvious to me. In that case there is only one physical person, who knows for certain that he will split into multiple physical persons when the experiment takes place. To get something like indexical uncertainty there, you have to believe that there are in a sense already multiple persons before the experiment starts, each of which will observe a different outcome. (In other words the one physical person who exists before the experiment is a stage of many different person-histories, and each of these person-histories is uncertain (pre-experiment) which person-history it is.)
 
  • #18
RobinHanson said:
If we have reliable evidence that the year is 2005, then that will beat a strong prior that the year is one billion.

If the number of worlds is multiplied by some number like 10^(10^30) between now and then, the prior will be some number like 10^(10^30) to 1. It's hard to see how even reliable evidence could beat that.
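In odds form, Bayes' theorem makes the difficulty explicit. With a world-count prior like that, the likelihood ratio supplied by any realistic evidence E is hopelessly outgunned:

```latex
\frac{P(\text{year } 10^9 \mid E)}{P(\text{year } 2005 \mid E)}
= \underbrace{\frac{P(\text{year } 10^9)}{P(\text{year } 2005)}}_{\approx\, 10^{10^{30}}}
\times
\frac{P(E \mid \text{year } 10^9)}{P(E \mid \text{year } 2005)} .
```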
 
  • #19
straycat said:
A secondary objection to Wallace's argument is that the "plausible capabilities of any agent" are immaterial. When Mother Nature wrote Her most basic laws, I do not think she bothered to ask whether they were too complicated for us to understand!

Wallace is assuming subjective probability here: probabilities aren't features of nature, they're features of how we should rationally deal with our uncertainty. If we were 99% sure we should assign equal probabilities and 1% sure we should assign Born probabilities, and assuming it really is completely beyond our capacities to figure out what we want to do in the case where world counting matters (i.e. we would have no way to estimate the expected utility of one action as higher than that of some other action), then I think the 99% would just drop out of our calculations and in practice we would be making decisions based on the 1% chance we should assign Born probabilities.

Having said that, I'm also not sure whether decision-making is always impossible if world counts matter.
 
  • #20
Ontoplankton said:
In his coinflip example on p19, if the extra branching event doesn't change the expected utility of the coinflip, then how is it possible that it changes the probability of either outcome?

But how do you know that the extra branching doesn't change the expected utility of the coinflip? That's an assumption, but we could equally assume the APP, which tells us that the extra branching event does necessarily change the probability (= expected utility) of the coinflip.
I keep returning to the same conclusion that Patrick made in his paper: from a logical standpoint, we are free to postulate either the APP, or the Born rule, or some other rule. In my mind, the APP is more "beautiful" than the Born rule because of its symmetry; this, however, does not in and of itself tell us that the APP is "right" and the Born rule is "wrong." It's simply: more beautiful (to me, at least).

It's the same argument that one might make for the principle of relativity. It just so happened that if we make the principle of relativity a requirement, then we end up with all sorts of new and wonderful things (ie, GR). Who would have guessed? So I'm speculating that if we assert the APP, and figure out how to make it compatible with known physics, then who knows where else it might take us?

David
 
  • #21
Ontoplankton said:
In his coinflip example on p19, if the extra branching event doesn't change the expected utility of the coinflip, then how is it possible that it changes the probability of either outcome?

If you don't care whether the coin comes up heads or tails, you don't care about the probability of heads vs. tails. That is not the same as knowing it is 50/50.

Ontoplankton said:
In the situation where the experiment has already taken place but you don't know the outcome, I agree that indexical uncertainty is obviously the right way to look at it. In that case there are multiple physical observers, and each of them doesn't know which one he is.
But before the experiment has taken place, it's less obvious to me. In that case there is only one physical person, who knows for certain that he will split into multiple physical persons when the experiment takes place. To get something like indexical uncertainty there, you have to believe that there are in a sense already multiple persons before the experiment starts, each of which will observe a different outcome.

Beliefs are relative to information (states of knowledge), not places in the universe. Places may be correlated with information, but at any one place one can counterfactually consider what one would believe with other information. Before you split you can consider what you might believe in the states of knowledge after you split.

In any case, to me the key question is not about predicting the future but about how surprised we should be to see that our experimental frequencies have confirmed the Born rule. For this question our initial indexical uncertainty about which world we are in is relevant.
 
  • #22
I think I've been misreading Wallace slightly.

The argument, I think, is this:

By assumption, heads and tails are equally probable without the extra branching process.

Therefore, you're indifferent between a bet on heads and a bet on tails when there is no extra branching process.

Under the "SU" viewpoint, you can see the extra branching process as just some irrelevant uncertainty; therefore, if there is an extra branching process after heads, you're still indifferent between betting on heads and betting on tails.

Therefore, heads and tails are still equally probable if there is an extra branching process after heads.

(The reason it isn't a mistake to bring up indifference is that two events are equally probable if and only if you're indifferent between betting on one and betting on the other.)
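A toy version of the conflict (a sketch assuming, for illustration, that the extra process splits the heads branch into n equal sub-branches, with nothing at stake on any of them):

```python
from fractions import Fraction

# A fair flip: heads and tails start with equal weight.  Some extra process
# then splits the heads branch into n sub-branches, with no rewards or
# punishments attached to any of them.
def leaf_count_p_heads(n):
    return Fraction(n, n + 1)  # world counting over the final branches

print(leaf_count_p_heads(1))     # 1/2
print(leaf_count_p_heads(1000))  # 1000/1001: counting shifts the betting odds
# Branching indifference forbids exactly this shift: it requires P(heads)
# to stay 1/2 no matter how finely the heads branch later splits.
```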
 
  • #23
Ontoplankton said:
Wallace ... argument...:

By assumption, heads and tails are equally probable without the extra branching process. Therefore, you're indifferent between a bet on heads and a bet on tails when there is no extra branching process.

Under the "SU" viewpoint, you can see the extra branching process as just some irrelevant uncertainty; therefore, if there is an extra branching process after heads, you're still indifferent between betting on heads and betting on tails.

The assumption that branching is irrelevant to probabilities directly implies that probabilities do not depend on world count. It is far from a mild assumption.
 
  • #24
I think the assumption being made is closer to:

"If you're uncertain about the outcome of some future process, and if for each possible outcome of that process you don't care whether or not the outcome happens, then you don't care whether or not the process happens."
 
  • #25
Ontoplankton said:
I think the assumption being made is closer to:
"If you're uncertain about the outcome of some future process, and if for each possible outcome of that process you don't care whether or not the outcome happens, then you don't care whether or not the process happens."

You can be indifferent to the outcome of a process while caring whether that process happens. I might not care if I have a boy or a girl, but I might care that I have one of them.
 
  • #26
I should have said: "if for each possible outcome of that process you don't care whether or not the outcome happens compared to the default outcome that would happen if the process didn't happen at all". If you're indifferent between having a boy and no child, and indifferent between having a girl and no child, then you're indifferent between having a child and not having a child.
 
  • #27
Ontoplankton said:
I think the assumption being made is closer to:
"If you're uncertain about the outcome of some future process, and if for each possible outcome of that process you don't care whether or not the outcome happens, then you don't care whether or not the process happens."

Well that assumption is clearly false. The process itself happening can change the chances of other things you do care about, even if you don't care about the particular outcomes of that process.
 
  • #28
I don't have any solid arguments at this point. I'm not sure that your position falls under what Wallace would call "SU", but obviously that's just semantics.

Still, there is something that bothers me. The extra branching process could happen later than the coinflip, right? In that case, you're letting probabilities be influenced by events that happen later in time. Does that make sense?
 
  • #29
FYI, in the latest issue of the British Journal for the Philosophy of Science, Hilary Putnam has an article (http://bjps.oxfordjournals.org/cgi/reprint/56/4/615) wherein he rejects the many worlds view because of the conflict between world counting and the Born rule.
 
  • #30
Ontoplankton said:
Still, there is something that bothers me. The extra branching process could happen later than the coinflip, right? In that case, you're letting probabilities be influenced by events that happen later in time. Does that make sense?

Well, according to the APP/outcome counting, if the extra branching process happens later than the coinflip, then the extra process does not influence the probability that the coinflip comes up heads or tails. That is, if the tree diagram splits in two directions corresponding to heads vs tails, and then splits into other directions b/c of some other process, then you still have 50-50 odds for heads vs tails. (Note that I am making the assumption here that probabilities are transitive.)
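A sketch of the two counting conventions on the tree just described (a fair flip, then a later 3-way split of the heads branch):

```python
from fractions import Fraction

# (a) Flat leaf counting: 3 heads-leaves vs 1 tails-leaf.
flat_p_heads = Fraction(3, 3 + 1)  # 3/4: a later split shifts the odds

# (b) Transitive outcome counting (the assumption above): count equally at
# each split in turn, so the later split subdivides the heads probability
# without shifting any weight between heads and tails.
transitive_p_heads = Fraction(1, 2) * Fraction(1, 3) * 3  # still 1/2

print(flat_p_heads, transitive_p_heads)
```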

David
 
  • #31
Ontoplankton said:
Still, there is something that bothers me. The extra branching process could happen later than the coinflip, right? In that case, you're letting probabilities be influenced by events that happen later in time. Does that make sense?

I don't see a problem with it.
 
  • #32
straycat said:
according to the APP/outcome counting, if the extra branching process happens later than the coinflip, then the extra process does not influence the probability that the coinflip comes up heads or tails

If post-experiment branching doesn't matter, then isn't that already enough to complete the erasure proof in section 8?
 
  • #33
Quantum Arson: You can win the lottery by resolving to burn a lot of stuff if you win, because that will generate a lot of entropy, so it will have happened in more worlds.
 
  • #34
RobinHanson said:
FYI, in the latest issue of the British Journal for the Philosophy of Science, Hilary Putnam has an article (http://bjps.oxfordjournals.org/cgi/reprint/56/4/615) wherein he rejects the many worlds view because of the conflict between world counting and the Born rule.

Unfortunately, Wash U does not have an online subscription to BJPS :mad: .

D
 
  • #35
Ontoplankton said:
Quantum Arson: You can win the lottery by resolving to burn a lot of stuff if you win, because that will generate a lot of entropy, so it will have happened in more worlds.

No, this wouldn't work, because the APP, as I understand it, is transitive.

David
 
