Question regarding the Many-Worlds interpretation

In summary: MWI itself is not clear on what to count. Are all branches equal? Are there some branches which are more "real" than others? Do you only count branches which match the experimental setup? Do you only count branches which match the observer's expectations? All of these questions lead to different probabilities, so the idea of counting branches to get a probability just doesn't work with the MWI. But we can still use the MWI to explain why we observe "x" more often than "y": in the grand scheme of things, there are more branches where we observe "x", because "x" is the more stable and long-lived state. So even though …
  • #246
stevendaryl said:
Viewed slightly differently, the number of "possible worlds" remains constant, but associated with each possible world is an amplitude that changes with time. All worlds exist at all times.

Sure. Slightly different ways of viewing it certainly exist - its basically what you feel the most comfortable with.

Thanks
Bill
 
  • #247
bhobba said:
Not quite.

There is a copy of you in all of them and each copy experiences a different outcome. What copy you are and what you experience is the same thing.
This is probably a bit too metaphysical, but if you split into two identical copies, one having experience A, and the other experience B, in order to be able to talk about 'you' being one of these copies (as opposed to the other), you have to introduce some means that picks out your 'you'-ness, i.e. that makes it so that your continuous experience is with copy B, rather than A, say. This is the route Albert and Loewer have considered (but rejected), and under such a constraint, it's indeed possible to make sense of probability in the 'MWI' (though of course it's not really the MWI anymore, but essentially a hidden-variable theory with an observer's mind being the hidden variable), by simply postulating that whatever this 'you'-index is, it 'jumps' stochastically into one of the two copies (as opposed to the other), thus providing a basis for a probabilistic interpretation. But I think that many worlds traditionally does not boil down to such a view; rather, one would typically hold that both copies are indeed equally much you, with no additional distinguishing features (though how this works is a bit of a subtle issue).

Anyway, it seems most people have made up their minds, but I've decided nevertheless to flesh out the reason why Gleason does not help in the MWI a little. I won't really be telling anybody anything new, but I think it helps looking at the story in a different way from how it's usually told.

Let's start classical. Consider a set of classical objects---marbles, say. This set is partitioned into subsets, according to some distinguishing characteristic of the marbles---say, their colour; say furthermore that we have four colours, red, green, blue, and pink. A subset of the total marble set corresponds to a proposition; whether or not a given marble belongs to that subset corresponds to the truth value of that proposition, i.e. 'the marble is red' is true if the marble belongs to the subset of red marbles.

Now, let's assume there's some fixed quantity of marbles---say 100---, and that in each subset, there's a fixed number of marbles, as well---50 red ones, 30 green ones, 19 blue ones and 1 pink marble. This gives us a way to associate a measure with the subsets---the set of red marbles gets measure 0.5, and so on, while the full set obviously gets measure 1. Now, we can make sense of the construction 'the probability that the marble has colour c is p'. If we don't know anything about the marble---draw it at random---, then, for instance, the probability that the marble is green is 30%. (This doesn't consider any subtleties about probability, and isn't meant to; even without the precise definitions, I suppose it's obvious to anybody that we can grasp the extension of the concept of probability.) If we know something---say, that the marble is neither green nor pink---we can adjust our probabilities accordingly. In fact, writing [itex]P_R[/itex] for the proposition that the marble is red, and [itex]P_B[/itex] for the proposition that it is blue, we might represent our knowledge of the situation by the quantity
[tex]r=p_1P_R+p_2P_B.[/tex]
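For concreteness, here's a small Python sketch of that counting arithmetic (the colours and counts are just the ones assumed above; the conditional weights [itex]p_1, p_2[/itex] come out as 50/69 and 19/69):

[code]
# Classical marble example: the counting measure and conditioning on
# 'the marble is neither green nor pink'.
counts = {"red": 50, "green": 30, "blue": 19, "pink": 1}
total = sum(counts.values())                      # 100 marbles in all

# Unconditional probabilities from the counting measure.
probs = {c: n / total for c, n in counts.items()}
print(probs["green"])                             # 0.3 -- 'the marble is green'

# Conditioning: drop green and pink, renormalise over what remains.
allowed = {c: n for c, n in counts.items() if c not in ("green", "pink")}
remaining = sum(allowed.values())                 # 69 marbles
p1 = allowed["red"] / remaining                   # ~0.725, coefficient of P_R
p2 = allowed["blue"] / remaining                  # ~0.275, coefficient of P_B
print(p1, p2, p1 + p2)                            # conditional measure sums to 1
[/code]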
Now let's move over to quantum mechanics. Here, we don't have a set, but a Hilbert space, and we don't have subsets, but (closed) subspaces. And we also don't have propositions in the classical sense, but we can associate to every subspace a special operator, namely the projector on that subspace. So, we can kinda do the same thing we did before, and imagine the Hilbert space partitioned into subspaces, and any state belonging to a certain subspace has a certain property---assume there's four, and call them R, G, B, P. But here's the first problem: the counting measure doesn't really make sense anymore. Luckily, there's a way out: as Gleason showed, there's a unique measure attributable to closed subspaces, and it's given by the squared amplitudes (we don't need to get into any technicalities here, as most people are familiar with them anyway). So, we can play the same game as before---well, almost. We can play the same game if we have some assurance that the state is in one of the subspaces we're considering, that is, determinately has any of the properties R, G, B, P. Then, we can proceed as before, and (again, given knowledge that the state does not have property G or P) write, for instance

[tex]\rho=|c_1|^2P_R+|c_2|^2P_B,[/tex]

which, with [itex]P_R=|\psi_R\rangle\langle \psi_R|[/itex] and [itex]P_B=|\psi_B\rangle\langle \psi_B|[/itex], corresponds to the proper mixture

[tex]\rho=|c_1|^2|\psi_R\rangle\langle \psi_R|+|c_2|^2|\psi_B\rangle\langle \psi_B|.[/tex]
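As a concrete illustration (a minimal Python/numpy sketch, with [itex]|\psi_R\rangle[/itex] and [itex]|\psi_B\rangle[/itex] taken as orthonormal basis vectors and the weights chosen arbitrarily), the measure the proper mixture assigns to each subspace is just [itex]\mathrm{Tr}(\rho P)[/itex], and the whole space gets measure 1:

[code]
import numpy as np

# Two orthonormal 'property' states (an assumed, illustrative basis).
psi_R = np.array([1.0, 0.0])
psi_B = np.array([0.0, 1.0])

# Projectors onto the corresponding one-dimensional subspaces.
P_R = np.outer(psi_R, psi_R)
P_B = np.outer(psi_B, psi_B)

# Arbitrary weights |c1|^2, |c2|^2 (any non-negative pair summing to 1).
w1, w2 = 0.7, 0.3

# The proper mixture rho = |c1|^2 P_R + |c2|^2 P_B.
rho = w1 * P_R + w2 * P_B

# Gleason-style measure of each subspace: mu(P) = Tr(rho P).
print(np.trace(rho @ P_R))   # 0.7
print(np.trace(rho @ P_B))   # 0.3
print(np.trace(rho))         # 1.0 -- the full space gets measure 1
[/code]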

Up to this point, anything proceeds analogously to the classical case, thanks to Gleason. However, in general, we don't have the assurance that the state will have any of the properties determinately; in general, we're faced with a superposition such as

[tex]|\psi\rangle=c_1|\psi_R\rangle + c_2|\psi_B\rangle.[/tex]

The question now is, how do we get from [itex]|\psi\rangle[/itex] to [itex]\rho[/itex]? And on the standard interpretation, the answer to this is the collapse; [itex]\rho[/itex] is just the state after a measurement has occurred on [itex]|\psi\rangle[/itex], but before anybody had a look, so to speak. This is the key point: we must get to [itex]\rho[/itex] before Gleason is of any help.
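To see the gap in numbers (same illustrative basis and weights as in the sketch above): the pure state [itex]|\psi\rangle[/itex] and the mixture [itex]\rho[/itex] agree on the diagonal---that is, on the squared-amplitude weights---but the pure state carries off-diagonal interference terms that nothing in the unitary evolution alone removes.

[code]
import numpy as np

psi_R = np.array([1.0, 0.0])
psi_B = np.array([0.0, 1.0])
c1, c2 = np.sqrt(0.7), np.sqrt(0.3)   # illustrative amplitudes

# Superposition |psi> = c1|psi_R> + c2|psi_B> and its density matrix.
psi = c1 * psi_R + c2 * psi_B
rho_pure = np.outer(psi, psi)

# The proper mixture with the same weights.
rho_mixed = c1**2 * np.outer(psi_R, psi_R) + c2**2 * np.outer(psi_B, psi_B)

print(rho_pure)    # off-diagonal terms c1*c2 ~ 0.458 are present
print(rho_mixed)   # diagonal only: 0.7 and 0.3
# The diagonals (the would-be Born weights) agree; the question is what,
# without collapse, is supposed to take rho_pure into rho_mixed.
[/code]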

However, on the many worlds interpretation, no such mechanism is available. The state never collapses; we're left with [itex]|\psi\rangle[/itex] at all times. Hence, Gleason doesn't say anything about the probabilities we ought to expect---in contrast to the case of the collapse. Appealing to Gleason's theorem, then, to the best of my ability to tell, is simply a nonstarter in the case of many worlds; it simply doesn't apply.
 
  • #248
mfb said:
It allows to formulate theories that predict amplitudes, and gives a method to do hypothesis testing based on those predictions.
The Hamiltonian of the system can be used in principle to predict the amplitudes. Does every Hamiltonian define one of these theories or what kind of theories do you have in mind?

S.Daedalus said:
This is probably a bit too metaphysical, but if you split into two identical copies, one having experience A, and the other experience B, in order to be able to talk about 'you' being one of these copies (as opposed to the other), you have to introduce some means that picks out your 'you'-ness, i.e. that makes it so that your continuous experience is with copy B, rather than A, say. This is the route Albert and Loewer have considered (but rejected), and under such a constraint, it's indeed possible to make sense of probability in the 'MWI' (though of course it's not really the MWI anymore, but essentially a hidden-variable theory with an observer's mind being the hidden variable), by simply postulating that whatever this 'you'-index is, it 'jumps' stochastically into one of the two copies (as opposed to the other), thus providing a basis for a probabilistic interpretation. But I think that many worlds traditionally does not boil down to such a view; rather, one would typically hold that both copies are indeed equally much you, with no additional distinguishing features (though how this works is a bit of a subtle issue).
Maybe this is related to the point mfb is trying to make: if you want to talk about probabilities, you can do so by using the 'jumping you'. But if you consider both yous to be equally real, it doesn't make sense to talk about probabilities.

S.Daedalus said:
The question now is, how do we get from [itex]|\psi\rangle[/itex] to [itex]\rho[/itex]?
Decoherence?
 
  • #249
S.Daedalus said:
[tex]|\psi\rangle=c_1|\psi_R\rangle + c_2|\psi_B\rangle.[/tex]

The question now is, how do we get from [itex]|\psi\rangle[/itex] to [itex]\rho[/itex]? And on the standard interpretation, the answer to this is the collapse; [itex]\rho[/itex] is just the state after a measurement has occurred on [itex]|\psi\rangle[/itex], but before anybody had a look, so to speak. This is the key point: we must get to [itex]\rho[/itex] before Gleason is of any help.

That was actually the chief mathematical result that Everett derived in his first paper on the Many Worlds Interpretation (which he didn't call that--that name was coined by DeWitt). He showed that density matrices arise naturally from pure wave functions in cases of entanglement.
 
  • #250
kith said:
The Hamiltonian of the system can be used in principle to predict the amplitudes. Does every Hamiltonian define one of these theories or what kind of theories do you have in mind?
Different Hamiltonians that lead to different evolutions of wavefunction amplitudes are different theories.
 
  • #251
stevendaryl said:
That was actually the chief mathematical result that Everett derived in his first paper on the Many Worlds Interpretation (which he didn't call that--that name was coined by DeWitt). He showed that density matrices arise naturally from pure wave functions in cases of entanglement.

Here's a sketch from memory:

Suppose that you have a system in state [itex]|\Psi \rangle[/itex] that is made up of two subsystems. We can write [itex]|\Psi \rangle[/itex] as a superposition of product states of the two subsystems:

[itex]|\Psi \rangle = \sum_{i, a} C_{i a} |\varphi_i \rangle | \chi_a \rangle [/itex]

Now suppose that we have an operator [itex]O[/itex] that depends only on one of the subsystems. In other words:

[itex]\langle \varphi_{i'} | \langle \chi_{a'} | O | \chi_a \rangle | \varphi_i \rangle = O_{a' a} \delta_{i i'}[/itex]

In that case, the expectation value of [itex]O[/itex] in state [itex]\Psi[/itex] is given by:

[itex]\langle \Psi |O|\Psi \rangle = \sum_{i, a, a'} C^*_{i a'} C_{i a} O_{a' a} [/itex]

This is the same result you would get using a density matrix [itex]\rho[/itex] with components

[itex]\rho_{a a'} = \sum_i C^*_{i a'} C_{i a}[/itex]

As long as you are only talking about measurements of one of the two subsystems, you can treat the system as if it were in a mixture, rather than a pure state.
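Here's a quick numerical check of that claim (a minimal Python/numpy sketch; the subsystem dimensions, the random joint state and the random Hermitian [itex]O[/itex] are arbitrary placeholders, not anything specific to Everett's argument):

[code]
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 2   # dimensions of the two subsystems (arbitrary)

# Random normalised joint state, stored as the coefficient matrix C_{ia}.
C = rng.normal(size=(d1, d2)) + 1j * rng.normal(size=(d1, d2))
C /= np.linalg.norm(C)
Psi = C.reshape(-1)              # |Psi> = sum_{i,a} C_{ia} |phi_i>|chi_a>

# Random Hermitian operator acting on the second subsystem only.
A = rng.normal(size=(d2, d2)) + 1j * rng.normal(size=(d2, d2))
O = (A + A.conj().T) / 2
O_full = np.kron(np.eye(d1), O)  # identity on the first subsystem

# Expectation value in the pure joint state.
exp_pure = np.vdot(Psi, O_full @ Psi)

# Reduced density matrix rho_{aa'} = sum_i C_{ia} C*_{ia'} (trace over i).
rho = C.T @ C.conj()
exp_mixed = np.trace(rho @ O)

print(np.allclose(exp_pure, exp_mixed))   # True
[/code]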
 
  • #252
kith said:
Decoherence?

stevendaryl said:
That was actually the chief mathematical result that Everett derived in his first paper on the Many Worlds Interpretation (which he didn't call that--that name was coined by DeWitt). He showed that density matrices arise naturally from pure wave functions in cases of entanglement.
This is why I specified proper mixture: improper mixtures that arise via tracing out a part of the system look mathematically indistinguishable from 'true' mixtures that arise from uncertainty about what the actual state is, but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

(In fact, I've sometimes thought that the confusion behind probability in the MWI looks a lot like the confusion between proper and improper mixtures; there's an argument due to Albert and Barrett that's structurally very similar to d'Espagnat's argument regarding improper mixtures, producing a contradiction between considering each term of the superposition as a world in itself and the predictions of quantum mechanics taking into account the complete quantum state.)
 
  • #253
S.Daedalus said:
but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

Sorry - but since it is observationally equivalent to a proper mixture a perfectly valid interpretation is to simply assume it is - nothing can prove you wrong. See the paper I posted previously about decoherence where this is examined in detail. You may not like the ignorance interpretation - but valid it is.

Thanks
Bill
 
  • #254
Seems that I am the only one for whom "the penny did not drop".

I still have the same problems - or even more.

In experiments we can make fundamental individual observations and we can derive statistical frequencies. In many interpretations of QM we can both derive matrix elements and interpret them as probabilities to be compared with the statistical frequencies.

Now I expected that MWI - being a different interpretation of the same underlying formalism - allows for the same calculations and experimental tests. But what I read is very confusing (for me):
1) MWI is talking about branches and relies on decoherence to identify them, but is not able to count them or to derive a corresponding measure
2) My simple question regarding the "probability being in a certain branch" which I can identify via a result string seems to become meaningless
3) I still have the feeling that my concerns regarding the "missing link" between the experimentally inaccessible top-down perspective of the full Hilbert space with all its branches and the accessible bottom-up approach restricted to the branch I am observing right now have not been understood
4) We have the above mentioned statistical frequencies, but I learn that MWI does not provide the corresponding probabilities - that there are no probabilities at all
5) It is often claimed that the Born rule can be derived, but what does it mean if there are no probabilities?

I guess the answers are there, written down in numerous posts, but I am not able to identify them.
 
  • #255
bhobba said:
Sorry - but since it is observationally equivalent to a proper mixture a perfectly valid interpretation is to simply assume it is - nothing can prove you wrong. See the paper I posted previously about decoherence where this is examined in detail. You may not like the ignorance interpretation - but valid it is.
Locally, that's true, but once Alice and Bob get together and compare their measurement results, or a measurement is carried out on both parts of the system, you get results that falsify the idea that the parts of the system are in some definite state, and we just don't know which---correlations that we can't account for with such a model in the first case, and interference results in the second. These are perfectly valid observations, so I don't see how it's true that the two states are 'observationally equivalent'.

tom.stoer said:
Seems that I am the only one for whom "the penny did not drop".
If by the penny dropping you mean understanding how (Born) probabilities arise in many worlds, then no, count me as one as confused as you are.
 
  • #256
bhobba said:
Sorry - but since it is observationally equivalent to a proper mixture a perfectly valid interpretation is to simply assume it is - nothing can prove you wrong.

You keep asserting this, but it's not true. To make this statement true, you have to employ the measurement postulate, which is what we are trying to motivate or prove. It comes in in the construction of ensemble descriptions as density matrices, which is only a sensible construction with the measurement postulate in mind. If you don't have the measurement postulate then the only complete way to describe an ensemble is a list of states along with the probability of finding each.

Cheers,

Jazz
 
  • #257
tom.stoer said:
Seems that I am the only one for whom "the penny did not drop".

I see it differently. I think you just don't get confused by shifting the argument between different levels all the time and drawing different conclusions. I have yet to see a way to make MWI work that doesn't rely on obfuscation of the real issues.

Cheers,

Jazz
 
  • #258
Jazzdude said:
I see it differently. I think you just don't get confused by shifting the argument between different levels all the time and drawing different conclusions. I have yet to see a way to make MWI work that doesn't rely on obfuscation of the real issues.

Cheers,

Jazz
So you say MWI can't answer these valid questions?
 
  • #259
S.Daedalus said:
This is why I specified proper mixture: improper mixtures that arise via tracing out a part of the system look mathematically indistinguishable from 'true' mixtures that arise from uncertainty about what the actual state is, but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

This is a very good thought. If you want to get ensembles out you have to put ensembles in. For example you can assume that the environment is in an unknown (but single) state that you model by an ensemble and then see how interaction with the environment splits up a well defined single observed state into a real ensemble. But then you have to face another problem: The contributing states cannot be recovered from a density matrix representation. Since we're talking about definite single (but unknown) states, we want to track their individual histories. That means you have to rather use a more complete representation of a quantum ensemble. That would be a list of states with associated probabilities.

If you perform a calculation like sketched above for a single qubit with an unknown environment, you can express the qubit ensemble as a density function on the Bloch sphere that evolves in time as described by a generalized diffusion equation. Unfortunately, the Born rule cannot be recovered from the dynamics, because linearity doesn't allow it.
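To make the non-recoverability point concrete, here is a minimal Python sketch (the two ensembles are only illustrative): two different lists of states with probabilities that produce exactly the same density matrix, so the density matrix alone cannot tell you which list you started from.

[code]
import numpy as np

def density_matrix(ensemble):
    """Build rho from a list of (probability, state vector) pairs."""
    dim = len(ensemble[0][1])
    rho = np.zeros((dim, dim), dtype=complex)
    for p, psi in ensemble:
        psi = np.asarray(psi, dtype=complex)
        rho += p * np.outer(psi, psi.conj())
    return rho

zero, one = [1, 0], [0, 1]
plus  = [1 / np.sqrt(2),  1 / np.sqrt(2)]
minus = [1 / np.sqrt(2), -1 / np.sqrt(2)]

# Ensemble A: |0> or |1>, each with probability 1/2.
rho_A = density_matrix([(0.5, zero), (0.5, one)])
# Ensemble B: |+> or |->, each with probability 1/2.
rho_B = density_matrix([(0.5, plus), (0.5, minus)])

print(np.allclose(rho_A, rho_B))   # True: same rho, different lists of states
[/code]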

Cheers,

Jazz
 
  • #260
tom.stoer said:
So you say MWI can't answer these valid questions?

Yes, that's what I'm convinced of.
 
  • #261
Nevertheless I would like to give mfb et al. a chance!
 
  • #262
S.Daedalus said:
Locally, that's true, but once Alice and Bob get together and compare their measurement results, or a measurement is carried out on both parts of the system, you get results that falsify the idea that the parts of the system are in some definite state, and we just don't know which---correlations that we can't account for with such a model in the first case, and interference results in the second. These are perfectly valid observations, so I don't see how it's true that the two states are 'observationally equivalent'.

That was dealt with in the thread you mentioned. The answer is in the reference I gave there - the book on Consistent Quantum Theory by Griffiths - it has to do with the necessity of requiring a consistent framework.

Thanks
Bill
 
  • #263
bhobba said:
That assumption, via Gleason, means you are abandoning basis independence. Why do you want to choose one basis over another? These are man-made things introduced for calculational convenience - why do you think nature should depend on such a choice?

Measurement does single out a basis, that's the whole point of it and the heart of the preferred basis problem. The Hilbert space structure is motivated by unitary evolution, not measurement. And using the construction of the space for the theory that describes an observed phenomenon as the reason for this phenomenon is highly circular!

Cheers,

Jazz
 
  • #264
Jazzdude said:
You keep asserting this, but it's not true.

It is true - simple as that. There is no way to observationally tell the difference. If you know of one do tell.

Thanks
Bill
 
  • #265
bhobba said:
It is true - simple as that. There is no way to observationally tell the difference. If you know of one do tell.

Thanks
Bill

You're misunderstanding the point. If we perform a measurement then the measurement postulate comes in and you cannot distinguish the two. But that's not what we're arguing about. We're talking about a situation where the measurement postulate is NOT there and we try to derive or motivate it, by means of an interpretation or theory. In this case you cannot assume the equivalence, because it's precisely what we are intending to show!

Cheers,

Jazz
 
  • #266
Jazzdude said:
Measurement does single out a basis, that's the whole point of it and the heart of the preferred basis problem.

That is at odds with standard textbooks like Decoherence and the Quantum-to-Classical Transition by Schlosshauer. The factoring problem naysayers must provide a proof it is purely an artifact of decomposition - so far they haven't.

The truth of the matter is detailed in the paper I linked to before, and gave in the thread previously mentioned:
https://www.physicsforums.com/showthread.php?t=707290

As I posted in that thread:
'Basically he is holding to the decoherence ensemble interpretation, as do I. Rather than me going through its pros and cons, here is a good paper on it:
http://philsci-archive.pitt.edu/5439...iv_version.pdf
'Postulating that although the system-apparatus is in an improper mixed state, we can interpret it as a proper mixed state superficially solves the problem of outcomes, but does not explain why this happens, how or when. This kind of interpretation is sometimes called the ensemble, or ignorance interpretation. Although the state is supposed to describe an individual quantum system, one claims that since we can only infer probabilities from multiple measurements, the reduced density operator [itex]\rho_{SA}[/itex] is supposed to describe an ensemble of quantum systems, of which each member is in a definite state.'

The bottom line is, the naysayers are correct - without additional assumptions decoherence does not solve the measurement problem. That's true. But there is another part to it - with additional assumptions it does. That's the key point and a number of interpretations like decoherent histories, MWI, and the Ensemble ignorance interpretation do have additional assumptions and that's what makes them viable.

The guy that wrote the paper in the thread above was wrong - he needed additional assumptions that he didn't make explicit - however with those additional assumptions it's valid.

Thanks
Bill
 
  • #267
bhobba said:
That is at odds with standard textbooks like Decoherence and the Quantum-to-Classical Transition by Schlosshauer. The factoring problem naysayers must provide a proof it is purely an artifact of decomposition - so far they haven't.

That's not what I have quoted or referred to. My reply was specifically about your claim that it makes no sense to have a special basis in the process of observation.

And the factoring problem is already one step too far. In most cases there are no sensible factors at all. If you are able to specify a tensor factor space for an observed electron, let me know. Practically all the interesting (for description of observation) systems do not have the structure of a tensor factor space of the universal Hilbert space.

Cheers,

Jazz
 
  • #268
Jazzdude said:
You're misunderstanding the point. If we perform a measurement then the measurement postulate comes in and you cannot distinguish the two. But that's not what we're arguing about. We're talking about a situation where the measurement postulate is NOT there and we try to derive or motivate it, by means of an interpretation or theory. In this case you cannot assume the equivalence, because it's precisely what we are intending to show!

You are missing my point, and the point of the decoherence adherents. There is no circularity in explicitly stating it gives the appearance of wave function collapse and, because of that, actual collapse is a non-issue. By this is meant that since you can't tell the difference, it's not something to worry about.

The Wikipedia article on it explains it quite well:
http://en.wikipedia.org/wiki/Quantum_mind%E2%80%93body_problem
'Decoherence does not generate literal wave function collapse. Rather, it only provides an explanation for the appearance of wavefunction collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wavefunction are decoupled from a coherent system, and acquire phases from their immediate surroundings. A total superposition of the universal wavefunction still exists (and remains coherent at the global level), but its fundamentality remains an interpretational issue. "Post-Everett" decoherence also answers the measurement problem, holding that literal wavefunction collapse simply doesn't exist. Rather, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Moreover, our observation tells us that this mixture looks like a proper quantum ensemble in a measurement situation, as we observe that measurements lead to the "realization" of precisely one state in the "ensemble".'

To put it another way, except for people like the person that wrote the article claiming the issue had been solved (and he was wrong), decoherence adherents freely admit it only gives the appearance of collapse; for them that's good enough - but for others like you it isn't.

Thanks
Bill
 
  • #269
bhobba said:
You are missing my point, and the point of the decoherence adherents. There is no circularity in explicitly stating it gives the appearance of wave function collapse and, because of that, actual collapse is a non-issue. By this is meant that since you can't tell the difference, it's not something to worry about.

I've not been missing your point. I'm intimately familiar with the arguments used in decoherence, and I still disagree. The way you lay it out the argument depends highly on the construction of the density matrix to encode quantum ensembles. And this construction is only motivated if you assume that upon observations quantum probabilities mix with classical (ensemble) probabilities. It doesn't matter then how you construct improper ensembles or if they're sensible constructs because the real ensembles are already problematic.

If you can motivate the construction of a density matrix encoded ensemble without referring to outcomes probabilities and/or the measurement postulate (which includes references to observations of the same) then please share your wisdom.

The Wikipedia article on it explains it quite well: ...

It's no surprise that you find something like that on Wikipedia. There are still many decoherence misinterpretations found in literature. And yes, there are supporters of this view, but that doesn't make it any more correct.

Cheers,

Jazz
 
  • #270
tom.stoer said:
Nevertheless I would like to give mfb et al. a chance!
I think it is all written in the thread now. It would be pointless to repeat it.
 
  • #271
Jazzdude said:
That's not what I have quoted or referred to. My reply was specifically about your claim that it makes no sense to have a special basis in the process of observation.

I never claimed that - in fact I don't even know what you mean by that. To be clear my claim is that decoherence solves the preferred basis problem as stated on page 113 of the reference I gave before by Schlosshauer. He gives 3 issues the measurement problem must solve:

1. The preferred basis problem
2. The problem of non observability of interference
3. The problem of why we have outcomes at all.

The statement he makes is my position:
'it is reasonable to conclude decoherence is capable of solving the first two, whereas the third problem is linked to matters of interpretation'

And that is exactly it - the first two have had considerable work done that indicates decoherence will likely solve them - in fact for a number of models given in the book it does. To solve the third one you need further assumptions and the detail of those assumptions is peculiar to each interpretation.

Thanks
Bill
 
  • #272
Jazzdude said:
I've not been missing your point. I'm intimately familiar with the arguments used in decoherence, and I still disagree. The way you lay it out the argument depends highly on the construction of the density matrix to encode quantum ensembles. And this construction is only motivated if you assume that upon observations quantum probabilities mix with classical (ensemble) probabilities. It doesn't matter then how you construct improper ensembles or if they're sensible constructs because the real ensembles are already problematic.

This is going around in circles.

One more time - then that's the end of it for this thread for me. However based on past experience it will be rehashed.

Decoherence adherents, unless they are being disingenuous like the paper cited before, do not claim it solves the measurement problem. What they claim is that it's a non-issue, because it's observationally the same as a proper mixture and gives the appearance of wavefunction collapse.

Thanks
Bill
 
  • #273
S.Daedalus said:
This is why I specified proper mixture: improper mixtures that arise via tracing out a part of the system look mathematically indistinguishable from 'true' mixtures that arise from uncertainty about what the actual state is,

The MWI interpretation basically amounts to the claim that all mixtures are "improper" in that sense.

but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

I have not followed that thread, but as I understand it, MWI absolutely depends on there being no observational difference between "proper" and "improper" mixtures.
 
  • #274
bhobba said:
Decoherence adherents, unless they are being disingenuous like the paper cited before, do not claim it solves the measurement problem. What they claim is that it's a non-issue, because it's observationally the same as a proper mixture and gives the appearance of wavefunction collapse.
I just don't see in what sense that's true; it's trivial to distinguish a proper from an improper mixture. Take the state
[tex]|\Phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle).[/tex]
Locally, both Alice and Bob describe it by the mixture
[tex]\rho_A=\rho_B=\frac{1}{2}(|0\rangle\langle 0| + |1\rangle\langle 1|),[/tex]
and all of their observations will be in line with this assignment. But if they now believe that therefore, their respective states are in an actual mixture of [itex]|0\rangle[/itex] and [itex]|1\rangle[/itex], they must also believe that the global state corresponds to
[tex]\rho_{AB}=\rho_A\otimes\rho_B=\frac{1}{4}(|00\rangle\langle 00| + |01\rangle\langle 01| + |10\rangle\langle 10| + |11\rangle\langle 11|),[/tex]
simply because that is the state the system would be in if it were the case that each local state were a proper mixture, one for instance generated by producing the states [itex]|0\rangle[/itex] or [itex]|1\rangle[/itex] equiprobably at random. But of course, this state is observationally very different from [itex]|\Phi^+\rangle[/itex]: for instance, both Alice and Bob would expect their measurements to be entirely uncorrelated, but in fact, they will be perfectly correlated. This amounts to a falsification of the belief that their states can be described by a proper mixture, i.e. that they can be given an ignorance interpretation. The states can't be identified, even though the local observations are equivalent.

Alternatively, you can just measure [itex]|\Phi^+\rangle \langle\Phi^+|[/itex]: clearly, while [itex]|\Phi^+\rangle[/itex] is an eigenstate, and thus, the measurement will return +1 determinately, [itex]\rho_{AB}[/itex] is not, and the outcome will be random; the assumption of being able to give an ignorance interpretation to their states leads Alice and Bob to make false predictions.
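For anyone who wants to check the numbers, here's a minimal Python/numpy sketch of the above (the only observables used are the [itex]Z\otimes Z[/itex] correlation and the projector onto [itex]|\Phi^+\rangle[/itex]):

[code]
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The entangled state |Phi+> and its density matrix.
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_ent = np.outer(phi_plus, phi_plus)

# The 'ignorance' assignment: product of the two local mixtures.
rho_loc = np.eye(2) / 2
rho_prod = np.kron(rho_loc, rho_loc)

# Correlation of the local 0/1 measurements: <Z (x) Z>.
Z = np.diag([1.0, -1.0])
ZZ = np.kron(Z, Z)
print(np.trace(rho_ent @ ZZ))    # 1.0  -- perfectly correlated outcomes
print(np.trace(rho_prod @ ZZ))   # 0.0  -- uncorrelated outcomes

# Measuring the projector |Phi+><Phi+| itself:
P = np.outer(phi_plus, phi_plus)
print(np.trace(rho_ent @ P))     # 1.0  -- +1 with certainty
print(np.trace(rho_prod @ P))    # 0.25 -- a random outcome
[/code]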

So in what sense could you consider these states equivalent?
 
  • #275
Jazzdude said:
That's not what I have quoted or referred to. My reply was specifically about your claim that it makes no sense to have a special basis in the process of observation.

I have had a look at the responses here and there has been a misunderstanding of contexts.

The original quote you gave was in reference to the assumption of non-contextuality in the proof of Gleason's theorem, which is that the measure is basis independent. That didn't make sense in that context, and I assumed it meant that decoherence didn't single out a specific basis.

Thanks
Bill
 
  • #276
stevendaryl said:
The MWI interpretation basically amounts to the claim that all mixtures are "improper" in that sense.

I have not followed that thread, but as I understand it, MWI absolutely depends on there being no observational difference between "proper" and "improper" mixtures.

So I took a look at one of your comments in that thread:

Mathematically, this is the same object that one would use to describe a system that is prepared either in the state [itex]|M_1\rangle[/itex] or [itex]|M_2\rangle[/itex] with a respective probability of [itex]|c_1|^2[/itex] or [itex]|c_2|^2[/itex]. However---and this is where the argument goes wrong, I believe---in case this object is arrived at by tracing out the degrees of freedom of another subsystem, one can't interpret it in the way that the system is in fact in either of the states [itex]|M_1\rangle[/itex] or [itex]|M_2\rangle[/itex], and we just don't know which.

Is that supposed to be a criticism of MWI? MWI essentially claims that it's incorrect to attribute QM probabilities to ignorance. It's incorrect to attribute mixed states to ignorance. So it's no criticism of MWI that it has a conclusion that is different from the conclusion of a theory that attributes mixed states to ignorance--that's the whole point of MWI. The real issue isn't whether the mixed states are due to ignorance (they are, in some interpretations, and they are not, in other interpretations). The issue is whether and how both interpretations are consistent with what we observe.
 
  • #277
S.Daedalus said:
I just don't see in what sense that's true; it's trivial to distinguish a proper from an improper mixture.

What exactly about the answer based on Decoherent Histories I gave in the thread you linked to didn't you understand?

Added Later:

You seemed to understand the issue in that thread:
'Now as I said, you can augment the scheme so as to avoid this problem---by, say, going the modal route, or by just working within the framework of consistent histories as Griffiths does; but then you are adding an extra interpretation to the quantum formalism and not, as Hobson claims to do, resolving the problem 'from within' (something which by the way runs headlong into multiple insolubility theorems of the measurement problem from within quantum mechanics formulated over the years, starting with Fine (1970)). And of course, all of these interpretations do have their own problems (contradictory inferences in consistent histories, Kochen-Specker contradictions/inconsistent value state assignments in modal theories, etc.).'

The answer is still the same, and I have never shied away from it, but for some reason it seems to be brought up all the time: you need further assumptions. MWI is one route, Decoherent Histories is another.

Thanks
Bill
 
  • #278
stevendaryl said:
Is that supposed to be a criticism of MWI?
No, that was a criticism of Hobson's approach. It's also relevant in the context of this thread because people keep attempting to appeal to Gleason's theorem in order to recover the Born probabilities in the MWI, but this fails for a similar reason, namely that proper and improper mixtures are not the same thing.

In a way, getting a proper from an improper mixture is what the measurement problem is all about. Collapse interpretations solve it by fiat: someone snaps their fingers, and out comes the desired proper mixture. Many people find this dissatisfying, and with good reason. But then proposing a solution that ends up depending on the very same sleight of hand is no progress at all.
 
  • #279
bhobba said:
What exactly about the answer based on Decoherent Histories I gave in the thread you linked to didn't you understand?
Well, you didn't really give me something to understand, you merely gave a reference. And furthermore, many worlds and consistent histories are two distinct interpretations, with different problems; that a problem can be solved in one doesn't mean it can be solved in the other. Besides, from what I know about it, I don't see how consistent histories permits the identification of improper and proper mixtures; after all, it still must account for Bell tests somehow. But at any rate, if you think that you can show how, in the scenario I outlined above, Alice and Bob can't simply meet up and compare their measurement records in order to find out that their states couldn't have been proper mixtures, I'm all ears.

Response to the bit you added later: yes, I do think that the problem of probability is (at the very least) less severe in consistent histories, but that doesn't mean that it solves the problems of the MWI; you can't fix the MWI by just switching to a different interpretation. It's within the MWI framework that we need to solve the problem here; you can add all manner of things in order to get the right probabilities (though some people wouldn't even believe that), but the question is whether you can (as is often claimed) resolve it within the MWI itself.
 
  • #280
S.Daedalus said:
I just don't see in what sense that's true; it's trivial to distinguish a proper from an improper mixture. Take the state
[tex]|\Phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle).[/tex]
Locally, both Alice and Bob describe it by the mixture
[tex]\rho_A=\rho_B=\frac{1}{2}(|0\rangle\langle 0| + |1\rangle\langle 1|),[/tex]
and all of their observations will be in line with this assignment. But if they now believe that therefore, their respective states are in an actual mixture of [itex]|0\rangle[/itex] and [itex]|1\rangle[/itex], they must also believe that the global state corresponds to
[tex]\rho_{AB}=\rho_A\otimes\rho_B=\frac{1}{4}(|00\rangle\langle 00| + |01\rangle\langle 01| + |10\rangle\langle 10| + |11\rangle\langle 11|),[/tex]
simply because that is the state the system would be in if it were the case that each local state were a proper mixture, one for instance generated by producing the states [itex]|0\rangle[/itex] or [itex]|1\rangle[/itex] equiprobably at random. But of course, this state is observationally very different from [itex]|\Phi^+\rangle[/itex]: for instance, both Alice and Bob would expect their measurements to be entirely uncorrelated, but in fact, they will be perfectly correlated. This amounts to a falsification of the belief that their states can be described by a proper mixture, i.e. that they can be given an ignorance interpretation. The states can't be identified, even though the local observations are equivalent.

Alternatively, you can just measure [itex]|\Phi^+\rangle \langle\Phi^+|[/itex]: clearly, while [itex]|\Phi^+\rangle[/itex] is an eigenstate, and thus, the measurement will return +1 determinately, [itex]\rho_{AB}[/itex] is not, and the outcome will be random; the assumption of being able to give an ignorance interpretation to their states leads Alice and Bob to make false predictions.

So in what sense could you consider these states equivalent?

I think you miss the point of the use of decoherence in saying that proper and improper mixed states are observationally indistinguishable. There is a mathematical difference between the two, because the improper mixed state contains "interference terms" that are absent in the proper mixed state. But in order to observe these interference effects, you have to perform a measurement that has different outcomes (or different probabilities of outcomes) if the interference terms are present. Basically, what that amounts to is performing a measurement that "unmixes" the states. But when the subsystems involve many, many states (an observer, or the environment), this is a practical impossibility, on the order of putting a broken pane of glass back together. Mixing is irreversible in practice for the same reason that the classical physics of 10^23 particles is irreversible in practice, even though both are reversible in theory.
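To put some numbers behind the 'observationally indistinguishable' claim in the local setting (a minimal Python sketch; the random local Hermitian observables just stand in for anything Alice alone could measure): every operator of the form [itex]O\otimes I[/itex] gives identical statistics for the entangled pure state and for the corresponding product of proper mixtures, and only a joint, interference-sensitive observable separates them.

[code]
import numpy as np

rng = np.random.default_rng(1)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_ent = np.outer(phi_plus, phi_plus)        # improper-mixture source
rho_prod = np.eye(4) / 4                      # 'proper mixture' assignment

def random_hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

# Observables on Alice's side alone, O (x) I: identical expectations.
for _ in range(5):
    O_A = np.kron(random_hermitian(2), np.eye(2))
    a = np.trace(rho_ent @ O_A).real
    b = np.trace(rho_prod @ O_A).real
    print(np.isclose(a, b))                   # True every time

# But a joint observable such as X (x) X separates the two states.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
XX = np.kron(X, X)
print(np.trace(rho_ent @ XX).real)            # 1.0
print(np.trace(rho_prod @ XX).real)           # 0.0
[/code]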

I don't think your mathematical demonstration is correct. I don't think you combine mixed states that way.
 
