Many Worlds Interpretation and the act of measuring

In summary: The image is of a cat in a box, which is an example of the 'measurement problem.' We can't make a measurement without influencing what we measure, and that's why there's only a 50% chance of the cat being alive. After the experiment is finished (box is opened), the measurement has been made and we can say for certain what happened.
  • #141
stevendaryl said:
Suppose you have a million people, and they each flip a coin 20 times to figure out the probability of heads and tails. Then typically,
  1. 1 person will see all heads. This person will assume that the probability is 1 of getting heads.
  2. 20 people will see 19 heads and 1 tail. These people will assume that the probability is 95% of getting heads.
  3. 190 people will see 18 heads and 2 tails. These people will assume that the probability is 90% of getting heads.
  4. etc.
Around 176,000 people will correctly come up with 50% probability. A much larger number will come up with a probability between 0.4 and 0.6.

But it definitely will not be the case that everyone comes up with the observed probability of 50% heads.

Would you make your inductive inference from 20 flips? No reasonable scientist would. This is a straw man.

My point stands, as does my choice.
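stevendaryl's quoted counts can be checked exactly against the binomial distribution; a minimal Python sketch (assuming fair, independent flips):

```python
from math import comb

n_people = 1_000_000
n_flips = 20

def expected_count(k):
    """Expected number of people who see exactly k heads in n_flips
    fair, independent flips: n_people * C(n_flips, k) / 2**n_flips."""
    return n_people * comb(n_flips, k) / 2 ** n_flips

print(round(expected_count(20)))  # all heads: 1
print(round(expected_count(19)))  # 19 heads, 1 tail: 19
print(round(expected_count(10)))  # exactly 50% heads: 176197
# People whose estimate lands between 0.4 and 0.6 (8 to 12 heads):
print(round(sum(expected_count(k) for k in range(8, 13))))  # 736824
```

With only 20 flips the spread of estimates is wide: the modal observer does infer 0.5, but roughly a quarter of observers end up outside [0.4, 0.6], which is what makes both sides of the exchange above precise.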
 
  • #142
RUTA said:
Would you make your inductive inference from 20 flips? No reasonable scientist would. This is a straw man.

No, it is not. Replacing 20 by 20,000 or 20,000,000 does not change the conclusion.

My point stands, as does my choice.

Do you see that replacing 20 flips by 20,000 or 20,000,000 won't change the conclusion? Your choice seems to me to be mathematically inconsistent. A contradiction.
 
Last edited:
  • #143
TrickyDicky said:
But then your example goes against what you are arguing for. And option 1 from RUTA is the only consistent one, certainly not option 2.

I don't know what you're saying. My point is that
  1. "Empirically determining the probability" means a FINITE number of observations. Nothing that requires an infinite number of observations could be called "empirical".
  2. If you only have finitely many observations, then there is the possibility that the statistical results are a fluke.
In other words, what RUTA was demanding, that everyone everywhere at all times must come to the same conclusions about the probabilities, is just not possible to guarantee. It's mathematically inconsistent.
 
  • #144
After being AFK for a while, I am just going to leave this new paper here: http://arxiv.org/abs/1504.01063. It explains what is so wrong with the current approach to probabilities in MWI.
 
  • #145
so problem solved, MWI is bull****, time to move on. :biggrin:
 
  • #146
stevendaryl said:
I don't know what you're saying. My point is that
  1. "Empirically determining the probability" means a FINITE number of observations. Nothing that requires an infinite number of observations could be called "empirical".
  2. If you only have finitely many observations, then there is the possibility that the statistical results are a fluke.
In other words, what RUTA was demanding, that everyone everywhere at all times must come to the same conclusions about the probabilities, is just not possible to guarantee. It's mathematically inconsistent.
The reason you don't understand what I'm saying seems to lie in your ignoring the difference between abstract mathematical probability theory and statistics in this discussion. But without making that basic distinction it is impossible to say anything meaningful about the problem with MWI.
 
  • #147
stevendaryl said:
I don't know what you're saying. My point is that
  1. "Empirically determining the probability" means a FINITE number of observations. Nothing that requires an infinite number of observations could be called "empirical".
  2. If you only have finitely many observations, then there is the possibility that the statistical results are a fluke.
In other words, what RUTA was demanding, that everyone everywhere at all times must come to the same conclusions about the probabilities, is just not possible to guarantee. It's mathematically inconsistent.

You're making an assumption (#2) about the nature of reality that I deny. It's that simple.
 
  • Like
Likes bhobba
  • #148
TrickyDicky said:
The reason you don't understand what I'm saying seems to lie in your ignoring the difference between abstract mathematical probability theory and statistics in this discussion

I understand what you're saying, but it isn't relevant to the claim being discussed.
 
  • #149
RUTA said:
You're making an assumption (#2) about the nature of reality that I deny. It's that simple.

Yes, I understand that you are denying it. But it is logically inconsistent of you.
 
  • #150
Quantumental said:
After being AFK for a while, I am just going to leave this new paper here: http://arxiv.org/abs/1504.01063. It explains what is so wrong with the current approach to probabilities in MWI.

That's the paper that we've been discussing for the last week or so.
 
  • #151
Quantumental said:
After being AFK for a while, I am just going to leave this new paper here: http://arxiv.org/abs/1504.01063. It explains what is so wrong with the current approach to probabilities in MWI.

Gave it a quick squizz - here is one bit:
'But as Peter Lewis points out, ‘to say that the state has branches is just to say that it can be written as a sum of more-or-less independent terms, where each term is taken as a description of a state of affairs ... so the state of affairs is the branch ... it makes no sense to conceive of the same state of affairs in a different branch.’

and

'The Relevance-Limiting Thesis: It is never epistemically rational for an agent who learns only self-locating information to respond by altering a non-self-locating credence'

If anyone can explain it to me, be my guest.

Probability in MW is simple - they define it as per decision theory, which is a variant of Bayesian probability. It's this: the probability P of something is such that a rational agent is willing to bet on it at 1/P to 1 odds (see page 132 of Wallace's text). Nothing hard about it. It's widely used by actuaries, for example. If there were anything the matter with it, those guys would have found it long ago.
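The betting definition above can be illustrated with a tiny sketch (my own toy illustration, not Wallace's formal axioms): an agent who assigns probability p accepts a bet returning 1/p per unit staked, and that bet has zero expected value exactly at the agent's own p.

```python
def expected_value(p, stake=1.0):
    """Net expected value of a bet accepted at 1/p-to-1 odds: the
    bettor stakes `stake` and receives stake/p back if the event occurs."""
    payout = stake / p
    return p * (payout - stake) + (1 - p) * (-stake)

# The bet is exactly fair at the agent's own probability assignment:
for p in (0.1, 0.5, 0.9):
    assert abs(expected_value(p)) < 1e-12
```

Any other betting quotient gives a nonzero expected value, which is the hook that lets decision-theoretic axioms recover the usual probability calculus.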

MW is conceptually simple. After decoherence you have a mixed state ∑_i p_i |b_i><b_i|, and each |b_i><b_i| is interpreted as a separate world. Nothing hard about it.
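A toy numpy sketch of that decomposition (a two-outcome example; the weights are values I picked for illustration):

```python
import numpy as np

p = np.array([0.7, 0.3])       # branch weights (illustrative values)
b0 = np.array([1.0, 0.0])      # pointer-basis states |b_0>, |b_1>
b1 = np.array([0.0, 1.0])

# Post-decoherence mixed state: rho = sum_i p_i |b_i><b_i|
rho = p[0] * np.outer(b0, b0) + p[1] * np.outer(b1, b1)

# Each projector is read, in MW, as a separate world; its weight is
# recovered as <b_i| rho |b_i>:
weights = [float(b0 @ rho @ b0), float(b1 @ rho @ b1)]
print(weights)                            # [0.7, 0.3]
assert abs(sum(weights) - 1.0) < 1e-12    # trace(rho) = 1
```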

If MW is so wrong then it should be explainable, clearly and simply, why each |b_i><b_i| can't be interpreted as a separate world. One issue is the factorisation problem - agreed - but that requires more work. Another is how to explain the randomness of the environment in decoherence models - again, more work needs to be done. These statements that MW has definitely been disproved are, to be blunt, sensationalism of dubious value.

Thanks
Bill
 
Last edited:
  • #152
stevendaryl said:
Yes, I understand that you are denying it. But it is logically inconsistent of you.

There's no logical reason preventing 1 million people, each having flipped a coin 20 million times, from concluding that the probability of heads is 0.5. It is merely an assumption about the nature of reality (not logical necessity) to claim that some of those people will necessarily conclude the probability is other than 0.5. Using your assumption, one would have to be very skeptical about the many probabilistic formulae we claim to have tested empirically. Using my assumption, it is safe to conclude that empirical deviations from a probabilistic formula actually discredit the formula and aren't merely statistical anomalies.
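RUTA's numbers can be quantified with a standard concentration bound; a sketch using Hoeffding's inequality (the tolerance 0.001 is my choice for illustration):

```python
from math import exp

n_flips = 20_000_000
n_people = 1_000_000
eps = 0.001   # allowed deviation of an observed frequency from 0.5

# Hoeffding: P(|p_hat - 0.5| >= eps) <= 2 * exp(-2 * n * eps**2)
per_person = 2 * exp(-2 * n_flips * eps ** 2)
anyone_off = n_people * per_person   # union bound over all observers

print(per_person)   # roughly 8.5e-18
print(anyone_off)   # roughly 8.5e-12
```

So with 20 million flips each, the chance that even one of the million observers infers a probability outside (0.499, 0.501) is below 10^-11: overwhelmingly unlikely, though, per stevendaryl's point, not logically impossible.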
 
  • Like
Likes bhobba
  • #153
RUTA said:
There's no logical reason preventing 1 million people, each having flipped a coin 20 million times, from concluding that the probability of heads is 0.5. It is merely an assumption about the nature of reality (not logical necessity) to claim that some of those people will necessarily conclude the probability is other than 0.5. Using your assumption, one would have to be very skeptical about the many probabilistic formulae we claim to have tested empirically. Using my assumption, it is safe to conclude that empirical deviations from a probabilistic formula actually discredit the formula and aren't merely statistical anomalies.

Probabilities are defined rigorously by the Kolmogorov axioms. It is well known that measuring probabilities is problematic because the law of large numbers converges only almost surely. But we can always put bounds on it that can be reduced to well below what any rational person would accept as being, for all practical purposes, zero probability.

Thanks
Bill
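Those bounds can be made concrete by inverting a Hoeffding-type inequality to get the number of trials needed for any chosen tolerance (a sketch; the example tolerance and confidence are illustrative):

```python
from math import ceil, exp, log

def flips_needed(eps, delta):
    """Smallest n guaranteeing 2*exp(-2*n*eps**2) <= delta: the observed
    frequency is then within eps of the true probability except with
    probability at most delta (Hoeffding bound)."""
    return ceil(log(2 / delta) / (2 * eps ** 2))

# Pin a probability down to +/- 0.01 with failure chance below 1e-9:
n = flips_needed(0.01, 1e-9)
print(n)  # on the order of 10^5 flips
assert 2 * exp(-2 * n * 0.01 ** 2) <= 1e-9   # the bound holds at this n
```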
 
  • #154
bhobba said:
Probabilities are defined rigorously by the Kolmogorov axioms. It is well known that measuring probabilities is problematic because the law of large numbers converges only almost surely. But we can always put bounds on it that can be reduced to well below what any rational person would accept as being, for all practical purposes, zero probability.

Thanks
Bill

This says nothing about the way the probabilities are instantiated in reality. That requires an additional assumption.
 
  • #155
RUTA said:
This says nothing about the way the probabilities are instantiated in reality. That requires an additional assumption.

Are you and stevendaryl referring to the section on the Principal Principle in Adlam's paper (http://arxiv.org/abs/1504.01063, section IV.B, p15)?
 
  • #156
RUTA said:
This says nothing about the way the probabilities are instantiated in reality. That requires an additional assumption.

Yes. Applying Kolmogorov's axioms requires some 'reasonableness' assumptions. They are usually so obvious that books like Feller's, which detail applied probability, don't actually state them - you pick them up by doing problems.

Thanks
Bill
 
  • #157
atyy said:
Are you and stevendaryl referring to the section on the Principal Principle in Adlam's paper (http://arxiv.org/abs/1504.01063, section IV.B, p15)?

I agree with points 1-3 in the list at the end of page 15, beginning of page 16. However, what I disagree with is an additional assumption she makes later:
...we require that the reason it is rational to do this [make credence a function of probability] is that having such credences is a good way of arriving at true beliefs

We might wish for that to be true, but we can't require it. As I said, if you are unlucky enough to get a run of 20 (or 20,000) heads in a row while tossing coins, then you will come to a false conclusion about whether you have a fair coin. Equating relative frequencies with probabilities is not a guaranteed way of arriving at the truth. The best you can say is that you probably will arrive at something close to the truth. I don't see that MWI makes things any worse.

I agree that there is something a little mysterious and unsatisfactory about the accounts of probability within MWI, but in my opinion, the problems reflect problems with making sense of probabilities, in any case.

Presumably, a von Neumann-style "collapse" interpretation has no conceptual difficulties with probabilities. You perform a measurement, and you get one result out of a set of possibilities, each with a certain weight. This weight has the empirical content that if you repeat the measurement many times under identical conditions, the relative frequencies will approach the weight. That sounds unmysterious.

However, if you conceptually replace a single "run" of the universe by an ensemble of infinitely many independent runs, then this ensemble will include all possible finitely specifiable outcomes. Even though the evolution of a single system in the ensemble is nondeterministic (at least if we treat the "index" of individual systems as irrelevant), the evolution of the entire ensemble is deterministic: every possible outcome happens. The ensemble model, with deterministic evolution, is in some sense equivalent to the original nondeterministic single-system model. I can't see how there can be problems with the concept of probability that apply to the one but don't also apply to the other in a transformed way.
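The single-run vs. ensemble equivalence can be made concrete in a few lines (a toy sketch; n = 4 flips keeps the full ensemble small enough to enumerate):

```python
import random
from itertools import product

n = 4  # flips per run

# Nondeterministic single-system model: one run, sampled at random.
single_run = tuple(random.choice("HT") for _ in range(n))

# Deterministic ensemble model: every finitely specifiable outcome occurs.
ensemble = set(product("HT", repeat=n))

assert len(ensemble) == 2 ** n   # all 16 possible sequences are present
assert single_run in ensemble    # any single run is one ensemble member
```

Sampling the single-system model repeatedly just revisits members of the fixed ensemble, which is the sense in which the two pictures carry the same probabilistic content.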

Of course, MWI is not simply an ensemble version of Copenhagen, because there are interference effects and because of the basis problem, and so forth. But conceptually, it seems to me that the problems with probability are not unique to MWI.
 
  • Like
Likes atyy
  • #158
atyy said:
Are you and stevendaryl referring to the section on the Principal Principle in Adlam's paper (http://arxiv.org/abs/1504.01063, section IV.B, p15)?

I'm pointing out a (naïve) issue with checking a universal probability. For example, if half the civilizations in the universe always found heads when flipping a coin and the other half always found tails, the universal probability would be 0.5 for heads/tails, but no civilization would discover it. There are assumptions you have to make as to how a probabilistic rule is instantiated in reality. My assumption is that every civilization will deduce the correct universal probability. That's all I'm saying. The reason it bears on Many Worlds is that, apparently, and I'm waiting for someone to clarify this for me, the simple counting of branches for a frequentist view of MW produces "too many" of the branches deducing empirically the wrong universal probability. This spawned a "subjectivist" view of probability in MW (per Deutsch and Wallace) which is predicated on inconsistent assumptions per Dawid and Thebault (http://philsci-archive.pitt.edu/9542/1/Decoherence_Archive.pdf).
 
  • Like
Likes atyy
  • #159
atyy said:
Are you and stevendaryl referring to the section on the Principal Principle in Adlam's paper (http://arxiv.org/abs/1504.01063, section IV.B, p15)?

If you can understand that paper you are a better man than me Gunga Din.

Thanks
Bill
 
  • #160
RUTA said:
This spawned a "subjectivist" view of probability in MW (per Deutsch and Wallace) which is predicated on inconsistent assumptions per Dawid and Thebault (http://philsci-archive.pitt.edu/9542/1/Decoherence_Archive.pdf).

Can't agree with that one. Actuaries for example make extensive use of the decision theory view of probability.

I believe probabilities are a tricky issue when analysed carefully - but the modern axiomatic view resolves them.

Thanks
Bill
 
  • #161
bhobba said:
If you can understand that paper you are a better man than me Gunga Din.

Thanks
Bill

I didn't read her paper, but I read the one by Dawid and Thebault http://philsci-archive.pitt.edu/9542/1/Decoherence_Archive.pdf and its arguments looked sound. I don't study MW (for or against), so I was hoping someone here would clarify/correct my naïve understanding of the Kent --> Deutsch/Wallace --> Dawid/Thebault chain of argument posted in #158.
 
  • #162
bhobba said:
Can't agree with that one. Actuaries for example make extensive use of the decision theory view of probability.

I believe probabilities are a tricky issue when analysed carefully - but the modern axiomatic view resolves them.

Thanks
Bill

The term "subjectivist" is the language in Dawid's paper with a footnote that it should be "epistemic," but ... . So do you disagree with the "subjectivist" approach of Deutsch/Wallace? Or do you agree with that and disagree with Dawid's arguments against it?
 
  • #163
RUTA said:
I'm pointing out a (naïve) issue with checking a universal probability. For example, if half the civilizations in the universe always found heads when flipping a coin and the other half always found tails, the universal probability would be 0.5 for heads/tails, but no civilization would discover it. There are assumptions you have to make as to how a probabilistic rule is instantiated in reality. My assumption is that every civilization will deduce the correct universal probability. That's all I'm saying. The reason it bears on Many Worlds is that, apparently, and I'm waiting for someone to clarify this for me, the simple counting of branches for a frequentist view of MW produces "too many" of the branches deducing empirically the wrong universal probability. This spawned a "subjectivist" view of probability in MW (per Deutsch and Wallace) which is predicated on inconsistent assumptions per Dawid and Thebault (http://philsci-archive.pitt.edu/9542/1/Decoherence_Archive.pdf).

I'll read Dawid and Thebault, but in the meantime, have you seen http://philsci-archive.pitt.edu/4222/ ?
 
  • #164
RUTA said:
The term "subjectivist" is the language in Dawid's paper with a footnote that it should be "epistemic," but ... . So do you disagree with the "subjectivist" approach of Deutsch/Wallace? Or do you agree with that and disagree with Dawid's arguments against it?

Ok - first my own view. I am not into Bayesian, subjectivist stuff - for me probability is simply Kolmogorov's axioms and their frequentist implementation via the law of large numbers. I find MW simply too weird to accept. I hold to the ignorance ensemble interpretation.

That said, Bayesian views - the decision-theoretic version, the Cox axioms version, and probably others as well - all conform to Kolmogorov's axioms, so they are equally valid. Because of that it's impossible for them to have a logical fault. I think people who attack it on those grounds are simply doing philosophical sophistry for its own sake. When I read http://arxiv.org/abs/1504.01063 I thought: this isn't science - it's just philosophical waffle. The paper you mentioned is another matter - I would need to look at it a bit more carefully.

Thanks
Bill
 
  • #165
bhobba said:
That said, Bayesian views - the decision-theoretic version, the Cox axioms version, and probably others as well - all conform to Kolmogorov's axioms, so they are equally valid. Because of that it's impossible for them to have a logical fault. I think people who attack it on those grounds are simply doing philosophical sophistry for its own sake. When I read http://arxiv.org/abs/1504.01063 I thought: this isn't science - it's just philosophical waffle. The paper you mentioned is another matter - I would need to look at it a bit more carefully.

The Bayesian view - say de Finetti's - is beautifully coherent. What makes it very hard to know whether the Deutsch-Wallace version of MWI is correct is that they take the decision theory without the Bayesian part, because they want to derive probability without the Bayesian axioms. E.g. Wallace, http://arxiv.org/abs/quant-ph/9906015: "all the practical consequences of such predictions follow from the remaining, non-probabilistic, axioms of quantum theory, together with the non-probabilistic part of classical decision theory." !

BTW, I'm a frequentist if you are wondering, because I prefer being incoherent :p
 
  • #166
atyy said:
that they take the decision theory without the Bayesian part

They are all logically equivalent. Like I said, actuaries have been using decision theory for years - it's simply another way of looking at probability.

Thanks
Bill
 
  • #167
bhobba said:
They are all logically equivalent. Like I said, actuaries have been using decision theory for years - it's simply another way of looking at probability.

No, they are not - if you look at Deutsch, he is saying he is only taking the decision theory part without the probability - so what he is doing is not what actuaries have been doing for years.

One way to see that it is not the same is that Wallace's defence of Deutsch depends on MWI. As far as I know, actuaries do not assume MWI.

Wallace, http://arxiv.org/abs/quant-ph/0312157: "His work has not so far met with wide acceptance, perhaps in part because it does not make it at all obvious that the Everett interpretation is central (and his proof manifestly fails without that assumption)."
 
Last edited:
  • #168
atyy said:
No, they are not - if you look at Deutsch, he is saying he is only taking the decision theory part without the probability

That is NOT what Wallace says. He specifically defines it as I said before - and in his book, page 472, proves it is equivalent to Bayesian probability - Theorem 4, the Diachronic Representation Theorem.

Thanks
Bill
 
  • #169
bhobba said:
That is NOT what Wallace says. He specifically defines it as I said before - and in his book, page 472, proves it is equivalent to Bayesian probability - Theorem 4, the Diachronic Representation Theorem.

Yes, Wallace intends to show that decision theory + MWI gives Bayesian probability - so it is not the same as what actuaries have been doing - actuaries do not assume MWI. They assume decision theory + Bayesian theory.
 
  • #170
atyy said:
Yes, Wallace intends to show that decision theory + MWI gives Bayesian probability - so it is not the same as what actuaries have been doing - actuaries do not assume MWI. They assume decision theory + Bayesian theory.

No.

From the axioms of decision theory, Theorem 4 proves the link. It's got nothing to do with MW - only decision theory axioms.

It's simply a different view of probability from different axioms.

Thanks
Bill
 
  • #171
bhobba said:
No.

From the axioms of decision theory, Theorem 4 proves the link. It's got nothing to do with MW - only decision theory axioms.

It's simply a different view of probability from different axioms.

Well, at some step there is something in Wallace's understanding of the Deutsch proof that requires MWI. So while I accept that there are decision-theoretic axioms that give rise to probability, I do not believe that is what Deutsch-Wallace are doing.

Wallace, http://arxiv.org/abs/quant-ph/0312157: "His work has not so far met with wide acceptance, perhaps in part because it does not make it at all obvious that the Everett interpretation is central (and his proof manifestly fails without that assumption)."
 
Last edited:
  • #172
atyy said:
Wallace, http://arxiv.org/abs/quant-ph/0312157: "His work has not so far met with wide acceptance, perhaps in part because it does not make it at all obvious that the Everett interpretation is central (and his proof manifestly fails without that assumption)."

My knowledge of the detail of MW comes from Wallace's book - not Deutsch - so I don't know his arguments.

But for me Wallace looks sound - although, as I have said before, when I went through his arguments, I thought it was simply Gleason in disguise because contextuality doesn't quite work - indeed it's encoded in the non-contextuality theorem on page 475.

Thanks
Bill
 
  • #173
bhobba said:
My knowledge of the detail of MW comes from Wallace's book - not Deutsch - so I don't know his arguments.

But for me Wallace looks sound - although, as I have said before, when I went through his arguments, I thought it was simply Gleason in disguise because contextuality doesn't quite work - indeed it's encoded in the non-contextuality theorem on page 475.

OK, I was thinking of the earlier Wallace argument. I don't have access to his book - does http://arxiv.org/abs/0906.2718 look close enough?
 
  • #174
atyy said:
OK, I was thinking of the earlier Wallace argument. I don't have access to his book - does http://arxiv.org/abs/0906.2718 look close enough?

Looks about the same - including interesting comments about Gleason.

Thanks
Bill
 
  • #175
atyy said:
I'll read Dawid and Thebault, but in the meantime, have you seen http://philsci-archive.pitt.edu/4222/ ?

No, I haven't read that or any Deutsch or Wallace. I was hoping you guys could save me having to do that. I started reading this link and see that it's 47 pp, so it would take me a while to get through it. Have you read it? Can you summarize anything?
 
