Physicist disentangles 'Schrodinger's cat' debate

  • #1
StevieTNZ
As per the title, the article is Physicist disentangles 'Schrodinger's cat' debate

The paper is http://arxiv.org/ftp/arxiv/papers/1301/1301.1673.pdf

To be quite honest, I don't quite understand what he is trying to say. It looks as though he's trying to make calculations on entangled pairs by describing each system of the pair as a separate state, to show it's not in a superposition, but then goes on to describe the MS as being in a coherent superposition.

I'm sure once a few people read the article they may be able to clarify what the author is trying to say, and how he derives his conclusion. I'm utterly confused by his approach.
 
  • #2
I think it suggests that it solves the measurement problem...
 
  • #3
Another no-interpretation interpretation - it happens because it happens. At least there is no contradiction.
 
  • #4
audioloop said:
I think it suggests that it solves the measurement problem...

There is no measurement problem if you assume entanglement selects an outcome.
 
  • #5
The author of that paper (Art Hobson) also holds the view that decoherence solves the measurement problem. A rebuttal regarding the later paper was given by Ruth Kastner and was published very recently on arXiv.
 
  • #6
Maui said:
There is no measurement problem if you assume entanglement selects an outcome.

The author says this, but is his conclusion correct based on his premises?
 
  • #7
Of course not.
 
  • #8
StevieTNZ, thanks for posting the paper! (I read the article before and could not find the paper).

Maui said:
There is no measurement problem if you assume entanglement selects an outcome.

That's also the impression I got of the paper after glancing through it. I haven't digested it more thoroughly, so I've got nothing further to add at this moment.
 
  • #9
Why does he say that (4) implies <Q> = 0?
 
  • #10
IMHO, the author demonstrates quite clearly that he has not really understood decoherence theory and its implications. I wonder why this gets so much attention. To be honest, I'm even surprised this got through peer review.

Cheers,

Jazz
 
  • #11
Maui said:
There is no measurement problem if you assume entanglement selects an outcome.

Indeed there isn't - but that assumption entails the core issue and is a massive can of worms. The key issue is - can an improper mixed state be considered a proper one? Observationally they are equivalent - but are they equivalent? That's the can of worms.

Basically he is holding to the decoherence ensemble interpretation, as do I. Rather than me going through its pros and cons, here is a good paper on it:
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf
'Postulating that although the system-apparatus is in an improper mixed state, we can interpret it as a proper mixed state superficially solves the problem of outcomes, but does not explain why this happens, how or when. This kind of interpretation is sometimes called the ensemble, or ignorance interpretation. Although the state is supposed to describe an individual quantum system, one claims that since we can only infer probabilities from multiple measurements, the reduced density operator SA is supposed to describe an ensemble of quantum systems, of which each member is in a definite state.'

The bottom line is the conclusion:
'Decoherence theorists have generally come to accept the criticisms above, and accept that decoherence alone does not solve the problems of outcomes, and therefore leaves the most essential question untouched.'

I however make the assumption that observationally equivalent systems are equivalent so the problem is solved. Whether you are happy with that or not only you can decide.

Thanks
Bill
 
  • #12
Reading the paper today in order to avoid doing actual work, it seems to me that really all Hobson is doing is to attempt to handwave away the distinction between proper and improper mixtures, and then attaching an ignorance interpretation to the latter. Basically, he starts with an entangled state like

[tex]|\Psi_{MO}\rangle=c_1|M_1\rangle|O_1\rangle + c_2|M_2\rangle|O_2\rangle,[/tex]

which can be interpreted as the post-measurement state of a measurement system M and an object system O that started out in a superposition of its two accessible states [itex]|O_1\rangle[/itex] and [itex]|O_2\rangle[/itex], with the index of M indicating which state was detected.

This can be described equivalently by the density matrix

[tex]\rho_{MO}=|\Psi\rangle \langle\Psi| = |c_1|^2|M_1O_1\rangle\langle O_1 M_1| + |c_2|^2|M_2 O_2\rangle\langle O_2 M_2|[/tex][tex] + c_1c_2^\star|M_1 O_1\rangle\langle O_2 M_2| + c_2c_1^\star|M_2 O_2\rangle\langle O_1 M_1|[/tex]

In order to now get a mathematical object that describes either of the systems on its own, one uses the procedure known as 'tracing out' the other subsystem, that is, one performs the partial trace over, say, the object system to get the description of the measurement system:

[tex]\rho_M=\mathrm{tr}_O\rho_{MO}=|c_1|^2|M_1\rangle\langle M_1| + |c_2|^2|M_2\rangle\langle M_2|[/tex]

Mathematically, this is the same object that one would use to describe a system that is prepared either in the state [itex]|M_1\rangle[/itex] or [itex]|M_2\rangle[/itex] with a respective probability of [itex]|c_1|^2[/itex] or [itex]|c_2|^2[/itex]. However---and this is where the argument goes wrong, I believe---in case this object is arrived at by tracing out the degrees of freedom of another subsystem, one can't interpret it in the way that the system is in fact in either of the states [itex]|M_1\rangle[/itex] or [itex]|M_2\rangle[/itex], and we just don't know which. This is the basis of the distinction between 'improper' (arrived at by tracing) and 'proper' (arrived at by epistemic uncertainty) mixtures, due originally to Bernard d'Espagnat, I believe.
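The tracing-out procedure is easy to verify numerically. Here is a minimal sketch of my own (not from the paper), assuming [itex]c_1 = c_2 = 1/\sqrt{2}[/itex] and the standard computational basis for both two-level systems:

```python
import numpy as np

# Post-measurement state |Psi_MO> = c1 |M1 O1> + c2 |M2 O2>
c1 = c2 = 1 / np.sqrt(2)
M1, M2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
O1, O2 = M1, M2

psi = c1 * np.kron(M1, O1) + c2 * np.kron(M2, O2)
rho_MO = np.outer(psi, psi.conj())               # full 4x4 density matrix

# Partial trace over O: view rho_MO with indices (m, o, m', o')
# and sum over o = o', leaving the 2x2 reduced state of M.
rho_M = np.trace(rho_MO.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho_M, 3))    # diag(|c1|^2, |c2|^2); off-diagonals vanish
```

The off-diagonal (coherence) terms of the full state disappear under the trace, which is exactly why the reduced state looks like a classical mixture.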

Now, Hobson is well aware of the distinction, but---as best I can gather---he believes it doesn't matter, as locally, we can't tell a difference between the two. This much is true. But nevertheless, there is an important and simple distinction (that has direct experimental consequences). For consider now the state [itex]|\Psi\rangle[/itex] to describe an EPR pair, say, electrons entangled regarding their spin properties, and in the possession of Martha and Oliver (sorry for introducing nonstandard nomenclature here).

Then, both parties would quite reasonably consider the part of the state in their possession to be described by a mixture such as [itex]\rho_M[/itex] above. But interpreting their state as actually being a proper mixture---that is, as actually being in a definite state they just don't happen to know---has important consequences, not for the outcomes obtained in their local experiments, but for the correlations between these outcomes. For they would judge that the total state must be just the tensor product of their local states, which looks like this:

[tex]\rho_{MO}^\prime=\rho_M\otimes\rho_O=(|c_1|^2|M_1\rangle\langle M_1| + |c_2|^2|M_2\rangle\langle M_2|)\otimes(|c_1|^2|O_1\rangle\langle O_1| + |c_2|^2|O_2\rangle\langle O_2|),[/tex]

which is a state corresponding to an epistemic mixture of the possibilities [itex]|M_1 O_1\rangle[/itex], [itex]|M_2 O_2\rangle[/itex], [itex]|M_1 O_2\rangle[/itex] and [itex]|M_2 O_1\rangle[/itex]---four possibilities, while there were only two in the original state [itex]\rho_{MO}[/itex], corresponding to the fact that entanglement means that whenever Martha detects the state [itex]|M_1\rangle[/itex], Oliver detects [itex]|O_1\rangle[/itex], and whenever she detects [itex]|M_2\rangle[/itex], Oliver detects [itex]|O_2\rangle[/itex]. In assuming that their local states are epistemic mixtures of two distinct possibilities, that is, that they really are in one of two possible states---the analogy of obtaining a definite measurement outcome---, Martha and Oliver can no longer account for the correlations between their measurements.
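The two-versus-four counting can be made concrete numerically. A sketch of my own (again taking [itex]c_1 = c_2 = 1/\sqrt{2}[/itex]); the diagonal of each density matrix lists the joint-outcome probabilities in the order M1O1, M1O2, M2O1, M2O2:

```python
import numpy as np

c1 = c2 = 1 / np.sqrt(2)
psi = np.array([c1, 0, 0, c2])                   # c1|M1 O1> + c2|M2 O2>
rho_MO = np.outer(psi, psi)                      # the entangled state

# Proper-mixture reading: each subsystem "really is" in state 1 or 2
rho_M = np.diag([abs(c1)**2, abs(c2)**2])
rho_O = np.diag([abs(c1)**2, abs(c2)**2])
rho_prod = np.kron(rho_M, rho_O)                 # tensor product of mixtures

# Joint-outcome probabilities, in the order M1O1, M1O2, M2O1, M2O2
print(np.diag(rho_MO))     # [0.5, 0, 0, 0.5] -> two correlated possibilities
print(np.diag(rho_prod))   # [0.25, 0.25, 0.25, 0.25] -> four, correlations gone
```

The entangled state never produces the anticorrelated outcomes M1O2 or M2O1; the product of the local mixtures assigns them probability 1/4 each.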

Now, in response to this, Hobson appears to put up some handwaving about how it's the correlations that are in a superposition, not the states, but it's unclear to me what that's supposed to mean. And regardless, the problem lies in his assertion that in an entangled state such as [itex]|\Psi_{MO}\rangle[/itex], one can always say that (in his example) 'one photon measures the other', leading thus to definite states; but for any pair of entangled photons, I can do either of two things: I can measure the photons individually, obtaining definite outcomes; or, I can measure the entanglement of the state, which is incompatible with the assertion that the photons are in definite states (at least in standard QM). The fact that I can make that choice and must use different prescriptions to account for my observations is exactly what the measurement problem is all about; in Hobson's proposal, once two photons are entangled, and one has 'measured' the other, I would never observe any of the phenomena that make quantum mechanics so interesting---superposition, entanglement, interference, etc.
 
  • #13
Jazzdude said:
IMHO, the author demonstrates quite clearly that he has not really understood decoherence theory and its implications. I wonder why this gets so much attention. To be honest, I'm even surprised this got through peer review.

I don't think it's so much that he doesn't understand it as that he is making a key assumption and is not up-front about it. Decoherence does not solve the measurement problem without further assumptions - claims otherwise are wrong.

Thanks
Bill
 
  • #14
S.Daedalus said:
Reading the paper today in order to avoid doing actual work, it seems to me that really all Hobson is doing is to attempt to handwave away the distinction between proper and improper mixtures, and then attaching an ignorance interpretation to the latter.

Bingo - you got it in one.

However he may have something slightly different in mind but is not spelling it out. He states:
'That phenomenon must be taken into account to resolve the measurement problem, he said. That means with Schrodinger's cat, the cat is no longer predicted to be both dead and alive. It is instead dead if the nucleus decays, and alive if the nucleus does not decay, just as one would expect.'

There are some interpretations, like the transactional interpretation and its variants, that suggest it's an influence traveling back in time from the observing apparatus.

Thanks
Bill
 
  • #15
I think the proof of the theorem must be correct.
I understand the [itex]2Re(\beta \gamma^*)[/itex] and [itex]2Im(\beta \gamma^*)[/itex] terms,
but not what follows. Why do the expectation values of the Qs and Ps have to be 0?
I suppose it is basic.
 
  • #16
S.Daedalus said:
Now, in response to this, Hobson appears to put up some handwaving about how it's the correlations that are in a superposition, not the states, but it's unclear to me what that's supposed to mean.
This was one of Kastner's criticisms of Hobson's argument:
The author’s second argument ('improper mixtures are epistemic' argument) concerns the reduced state of a component system, which is an improper mixture. The author wishes to apply an epistemic interpretation to this reduced state – i.e., he wants to argue that S is locally collapsed, which means that it really must be in one local state or another. However, R. I. G. Hughes (1992) provides a well-known proof that given a composite system in a pure entangled state, applying an epistemic interpretation to the improper mixed state of a component system fails. Hughes shows that taking the improper mixture as representing ignorance of which state the subsystem is actually in results in a contradiction: i.e., it implies that the composite system is in a mixed state, contrary to the assumed composite pure state. Now, the author apparently wants to use the fact that the subsystem S is not in a single space superposition to argue that it is locally collapsed. But as Hughes shows, if S approaches its local measuring device in a collapsed state, describable by an epistemic (proper) mixed state, that contradicts the prepared composite pure state.
Measurement: still a problem in standard quantum theory
http://arxiv.org/ftp/arxiv/papers/1308/1308.4272.pdf
 
  • #17
From the above paper:
'rather than by invoking observation to try to explain the observation that we see collapsed states when we perform measurements.'

I want to add the above doesn't disprove it either. It's basically philosophical mumbo jumbo (by which I mean a play on words). To make decoherence work as explaining the measurement problem, the simplest and easiest assumption is that observationally equivalent systems are equivalent. You are not using observation to explain observation; you are saying that because observation can't tell a difference, there is no difference.

And the claim that it's wrong because you are contradicting an initial assumption that it's in a pure state is incorrect. The system, environment, and observational apparatus start out in a pure state and by unitary evolution must remain in a pure state, but because they have become entangled, by the phenomenon of tracing over the environment, the observational apparatus and system are now in an improper mixed state. This is the key point - no contradiction.

In fact my favorite approach to QM makes that assumption right from the outset:
http://arxiv.org/pdf/0911.0695v1.pdf
Axiom 1. (Information capacity) An elementary system has the information carrying capacity of at most one bit. All systems of the same information carrying capacity are equivalent.

It's not that such a position is logically incorrect or anything like that; it's simply that to be correct you should be upfront about it rather than tacitly assume it.

Thanks
Bill
 
  • #18
After discussing this article with Bruce Rosenblum, one of the authors of Quantum Enigma, he replies

Reading the first couple of pages and the conclusion, I cannot distinguish this paper from nonsense. How can it get published in Physical Review? It just takes a couple of referees not wishing to argue further. Listing 36 references and thanking 15 (some of whom are expert) individuals for "useful exchanges" helps.

So much for peer review.
 
  • #19
That seems to be a trend in these physics journals.
I guess not enough real science is being conducted, so they have to grab anything that might get someone to buy their articles.

Capitalism101
 
  • #20
bhobba said:
However he may have something slightly different in mind but is not spelling it out. He states:
'That phenomenon must be taken into account to resolve the measurement problem, he said. That means with Schrodinger's cat, the cat is no longer predicted to be both dead and alive. It is instead dead if the nucleus decays, and alive if the nucleus does not decay, just as one would expect.'

There are some interpretations, like the transactional interpretation and its variants, that suggest it's an influence traveling back in time from the observing apparatus.

Thanks
Bill
Hm, taking the following at face value: "[the cat is] dead if the nucleus decays, and alive if the nucleus does not decay", this seems rather like what Everett started out from, the realization that the quantum state has a propositional content that is only definite relative to some reference; i.e. the cat's state is only definite relative to the nucleus having some definite state. The problem is, of course, that the quantum state then does not contain any information about which one of the alternatives is definite.

Today, most people believe that Everett had something like the 'many worlds' view in mind, i.e. that each outcome in some sense 'occurs' in a different world, or that the world splits in two after such a measurement. But some, like Tim Maudlin, believe he was trying to do something more subtle, instead considering the nature of facts to be fundamentally relational, like for instance tensed facts---'it is raining today'---depend on the specification of a value of 'today' for determining their truth.

Besides, if he were trying to do something transactional, I should think that Ruth Kastner would have been far more happy with his article than she seems to be!

bhobba said:
I want to add the above doesn't disprove it either. It's basically philosophical mumbo jumbo (by which I mean a play on words). To make decoherence work as explaining the measurement problem, the simplest and easiest assumption is that observationally equivalent systems are equivalent. You are not using observation to explain observation; you are saying that because observation can't tell a difference, there is no difference.

And the claim that it's wrong because you are contradicting an initial assumption that it's in a pure state is incorrect. The system, environment, and observational apparatus start out in a pure state and by unitary evolution must remain in a pure state, but because they have become entangled, by the phenomenon of tracing over the environment, the observational apparatus and system are now in an improper mixed state. This is the key point - no contradiction.
Well, I guess most people would hold the contradiction coming in at a later point. Say you have a Bell state distributed to two parties Alice and Bob, i.e.
[tex]|\Phi^+\rangle_{AB}=\frac{1}{\sqrt{2}}(|0_A0_B\rangle+|1_A1_B\rangle),[/tex]
then both would justifiably consider their 'local state' to be the improper mixture
[tex]\rho_A=\rho_B=\frac{1}{2}|0\rangle\langle 0|+\frac{1}{2}|1\rangle\langle 1|.[/tex]
Attaching now an epistemic interpretation to these states, both would believe that their system really is in either the state [itex]|0\rangle[/itex] or [itex]|1\rangle[/itex]. But this straightforwardly entails the belief that the global state really is either of [itex]|00\rangle[/itex], [itex]|11\rangle[/itex], [itex]|01\rangle[/itex] or [itex]|10\rangle[/itex]. But this would be a state that produces different experimental results from the original Bell state, i.e. if they were simply to combine their two photons and perform measurements on the combination, they would invariably observe that they get results that can't be explained by the state being actually in either of the four possible combinations; but from this, they must conclude that their local states couldn't possibly have been in a definite state, either.
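A short numerical version of this argument (my own sketch; the joint-outcome projectors are my choice of illustration):

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+>_{AB}
rho_bell = np.outer(bell, bell)

# Ignorance reading: each side is "really" |0> or |1>, probability 1/2 each
rho_prod = np.kron(np.eye(2) / 2, np.eye(2) / 2)

# Projectors onto the joint outcomes |00> and |11>
P00 = np.diag([1.0, 0.0, 0.0, 0.0])
P11 = np.diag([0.0, 0.0, 0.0, 1.0])

def p_same(rho):
    """Probability that Alice and Bob obtain the SAME outcome."""
    return np.trace((P00 + P11) @ rho).real

print(p_same(rho_bell))    # 1.0 -> perfect correlation
print(p_same(rho_prod))    # 0.5 -> the product state loses the correlations
```

Locally each reduced state is the same maximally mixed state in both cases; only the correlations betray the difference.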

You've really got to explain these two things in a consistent manner in order to claim a solution to the measurement problem: 1) local measurements on the Bell state always produce definite outcomes, and 2) 'global' measurements (which can of course be done perfectly locally if one just takes the whole Bell state as a specific state of a four-level quantum system) produce results incompatible with the idea that the system is in some definite state. Hobson's approach really only attacks 1), and thus, just falls short (as far as I can see, at least).

Now, he also makes some noises in the direction of so-called modal interpretations, alleging that they're the same kind of thing that he has in mind. But of course, modal interpretations are in fact very different beasts: there, you suppose that the quantum state really only gives you an overview of possibilities ('modalities'), not the full description of physical reality. The state then has to be augmented by what is actually the case, effectively attaching a certain definite system state to the quantum mechanical state in the manner of a hidden variable.

So your total inventory includes 1) the quantum state, [itex]|\psi\rangle=\sum_i c_i |i\rangle[/itex], and 2) the 'value state', some concrete state [itex]|i\rangle[/itex], which represents the actual 'ontic' content of the theory. What the precise value state is depends on the quantum state, hopefully in such a manner as to not be vulnerable to the argument given above; how this works explicitly differs among modal theories, ranging from 'hand-picking' the value state to giving explicit dynamics for it, similar to the velocity field equation in Bohmian mechanics (which in fact can be regarded as a certain kind of modal interpretation in which it is always the value of the position observable that is definite). Different modal interpretations also have different problems: a few have fallen prey to Kochen-Specker type contradictions, while others have the somewhat disheartening feature that the observable definite for the total system may be completely different from the observable definite for some subsystems (which I think may be regarded as a remnant of the improper-mixture problem).
 
  • #21
It's all good and well that people discuss a paper and all, but I think people give this random paper a lot more attention than it warrants simply because it made a headline.

The author of the article that discusses this paper is notorious for presenting fringe-borderline-pseudoscience as "NEW BREAKTHROUGHS".
Whenever I see her name as the author, I just skip reading.
 
  • #22
S.Daedalus said:
Attaching now an epistemic interpretation to these states, both would believe that their system really is in either the state [itex]|0\rangle[/itex] or [itex]|1\rangle[/itex].

I am not into philosophy at all, but I don't think epistemic means that - it means a view that sees quantum states as states of knowledge, more akin to the probability distributions of statistical mechanics. It doesn't ascribe a system as really being in anything until observed. There are a number of interpretations in that camp, but probably the best known are Copenhagen and the ensemble interpretation. With regard to the ensemble interpretation, the twist decoherence adds to it is that, being in an improper mixed state as a result of decoherence, one can interpret it as being in a proper mixed state - LOCALLY - but only after decoherence has occurred. It's not saying anything about a global state.

Exactly why such a view doesn't lead to any issues can be found in Chapter 24 of Consistent Quantum Theory by Griffiths. Basically you are assuming it possesses those properties SIMULTANEOUSLY globally - which the situation doesn't require. The complete analysis is given in the reference and hinges on the concept of framework used in that interpretation - basically one is free to choose frameworks as long as they are consistent - and a framework exists where it has both those properties locally - but not globally.

Thanks
Bill
 
  • #23
bhobba said:
I am not into philosophy at all but I don't think eipistemic means that - it means a view that sees quantum states as states of knowledge, more akin to the probability distributions of statistical mechanics.
Yes, using that terminology was a bit confusing of me, I'm sorry. You're right that typically, 'epistemic interpretation' means that the quantum state as a whole is taken as a state of knowledge (though about what, one usually remains silent). I merely meant it in the sense of an ignorance interpretation of a mixed state---this is, if such an interpretation is possible, also an epistemic view of the state in the sense that it describes our knowledge about the underlying quantum state, i.e. in the example either [itex]|0\rangle[/itex] or [itex]|1\rangle[/itex]. Describing your state as a mixture thereof, and believing an ignorance interpretation to hold, you indeed believe that the system is in either state, and you just don't know which (which is where the epistemic comes in).

The problem is then that if both Alice and Bob are committed to thinking of their state in this way, they are also committed to think of the global state in this way, as simply the tensor product of their local states---after all, that's what QM says the state is if you prepare a mixture of both states locally at both Alice's and Bob's labs. But this belief leads to a contradiction with experiment.

Exactly why such a view doesn't lead to any issues can be found in Chapter 24 - Consistent Quantum Theory by Griffiths. Basically you are assuming it possesses those properties SIMULTANEOUSLY - which the situation doesn't require.
No, I'm merely arguing, or echoing the argument, that the state possesses one property---that of locally being a mixture to which one can attach an ignorance interpretation---definitely, in order to show what (it seems to me) is wrong about it.

To make this as concrete as possible, consider the example in which you cook up many copies of the state [itex]|\Phi^+\rangle[/itex], on which you perform either the measurements [itex]\mathbb{I}\otimes\sigma_z[/itex], [itex]\sigma_z\otimes\mathbb{I}[/itex], or that of an observable [itex]\mathcal{O}[/itex] such that [itex]\mathcal{O}|\Phi^+\rangle=|\Phi^+\rangle[/itex] (i.e. the projector on [itex]|\Phi^+\rangle[/itex]).

In the case of the first two observables, you will get an outcome of either +1 or -1, with a probability of 50% each. This, then, you could explain via Hobson's argument, i.e. that the local state is a mixture of the form [itex]\rho_A[/itex] or [itex]\rho_B[/itex] above, and that furthermore, you can furnish an ignorance interpretation from this---i.e. (you say) 'entanglement decoheres the particles', meaning that they are really in either of the states [itex]|0\rangle[/itex] or [itex]|1\rangle[/itex], you just don't know which, and the measurement revealed that fact to you. But this entails that the global state is the tensor product of these local states, since that is the state you would get if the particles were actually in either of these states, that is, if you had Alice and Bob locally cook up a mixture of such states, by say making random Stern-Gerlach measurements in opposite directions.

But then, of course, you would expect the outcome of measuring [itex]\mathcal{O}[/itex] to be randomly distributed by that very logic, as the state you assign to the whole system is not an eigenstate of it. In fact, however, you find that you always obtain the outcome +1 instead; so your prior reasoning was faulty, and it's just not permissible after all to attach an ignorance interpretation to the state: it leads to wrong predictions.
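The three measurements can be sketched numerically. A minimal illustration of my own, realizing [itex]\mathcal{O}[/itex] as the projector onto [itex]|\Phi^+\rangle[/itex]:

```python
import numpy as np

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # |Phi+>
rho_bell = np.outer(phi, phi)                    # the actual prepared state
rho_ignorance = np.kron(np.eye(2) / 2, np.eye(2) / 2)  # ignorance reading

sz = np.diag([1.0, -1.0])                        # sigma_z
I2 = np.eye(2)
O_proj = np.outer(phi, phi)                      # projector onto |Phi+>

def ev(op, rho):
    """Expectation value tr(op rho)."""
    return np.trace(op @ rho).real

# Local measurements: identical statistics, the two states are indistinguishable
print(ev(np.kron(sz, I2), rho_bell), ev(np.kron(sz, I2), rho_ignorance))

# The projector onto |Phi+>: eigenvalue +1 with certainty on the Bell state,
# but only with probability 1/4 on the ignorance-interpreted product state
print(ev(O_proj, rho_bell))        # 1.0
print(ev(O_proj, rho_ignorance))   # 0.25
```

The first two observables cannot separate the two descriptions; the third one does, which is the content of the argument above.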

Now as I said, you can augment the scheme so as to avoid this problem---by, say, going the modal route, or by just working within the framework of consistent histories as Griffiths does; but then you are adding an extra interpretation to the quantum formalism and not, as Hobson claims to do, resolving the problem 'from within' (something which, by the way, runs headlong into multiple insolubility theorems of the measurement problem from within quantum mechanics formulated over the years, starting with Fine (1970)). And of course, all of these interpretations do have their own problems (contradictory inferences in consistent histories, Kochen-Specker contradictions/inconsistent value state assignments in modal theories, etc.).
 
  • #24
S.Daedalus said:
Now as I said, you can augment the scheme so as to avoid this problem---by, say, going the modal route, or by just working within the framework of consistent histories as Griffiths does; but then you are adding an extra interpretation to the quantum formalism and not, as Hobson claims to do, resolving the problem 'from within' (something which, by the way, runs headlong into multiple insolubility theorems of the measurement problem from within quantum mechanics formulated over the years, starting with Fine (1970)). And of course, all of these interpretations do have their own problems (contradictory inferences in consistent histories, Kochen-Specker contradictions/inconsistent value state assignments in modal theories, etc.).

Ahhh. Now I get it - all true. I don't believe Consistent Histories has contradictory inferences though, because they deliberately define things so this can't happen - that's the whole idea of a framework. But like all interpretations, including mine, it has issues.

Thanks
Bill
 
  • #25
bhobba said:
Ahhh. Now I get it - all true. I don't believe Consistent Histories has contradictory inferences though, because they deliberately define things so this can't happen - that's the whole idea of a framework. But like all interpretations, including mine, it has issues.

Thanks
Bill
I can't say I've really kept up with the development of the consistent histories framework, but I recall that at least some years back, there was the problem that you can construct mutually contradictory inferences (probability one predictions regarding orthogonal projectors) from the same data, which was presented as a problem for the theory exactly because it's sold on its consistency (quick googling yields a paper by Adrian Kent on the issue, I'm sure you can dig up more if you're interested).

As for your last sentence---that all interpretations have issues---we can certainly agree on that.
 
  • #26
"And the claim that its wrong because you are contradicting an initial assumption its in a pure state is incorrect. The system, environment, and observational apparatus start out in a pure state and by unitary evolution must remain in a pure state but because they have become entangled, by the phenomena of tracing over the environment, the observational apparatus and system are now an improper mixed state. This is the key point - no contradiction."

But the issue is how to interpret the improper mixed state. If, as in AH's approach, you take it as representing only epistemic uncertainty over a definite outcome, corresponding to a particular, collapsed state of the system (such as 'the cat is really alive'), there is indeed a contradiction. The proof shows that for the reduced improper mixed state of one of the component systems to be interpreted as ignorance over an actual (collapsed) outcome of that system, the combined system must be in a mixed state (not just one or other of the entangled systems). This contradicts the initial assumption that the combined system is in a pure state.

So there has been no solution to the measurement problem based on decoherence arguments. There's been the assertion that because we see particular outcomes, we know collapse has (somehow) happened. But we all know that! The problem is showing where this comes from in the theory. TI does this, so it really does provide a solution to the measurement problem. Technically, what TI gives you is a theoretical basis for the transition from a pure state to a proper mixed state. This is von Neumann's 'process 1'. To get from the mixed state to a single outcome (one of the projections only) you must have a process of spontaneous symmetry breaking, analogous to that in the Higgs mechanism. For further details see my book.

(BTW: a recent review of my book by M. Probert says, in part: "the author systematically refutes all the latest and most sophisticated challenges to the TI" -- http://www.tandfonline.com/doi/abs/10.1080/00107514.2013.825322#.UjzO9sq1ss0 )
 
  • #27
rkastner said:
But the issue is how to interpret the improper mixed state. If, as in AH's approach, you take it as representing only epistemic uncertainty over a definite outcome, corresponding to a particular, collapsed state of the system (such as 'the cat is really alive'), there is indeed a contradiction

That's not what I do at all.

Instead of going over it again for I don't know how many times here is a link to a paper that examines the issue fairly:
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

I hold to the ignorance or ensemble interpretation. There is no contradiction - the key issue is whether you think it resolves what you consider the fundamental problem. Do you consider the APPEARANCE of wave-function collapse good enough - that's it - that's all? That's the key issue. No contradiction - simply a matter of opinion. I believe, along with many people, that it is good enough - others don't. We have a difference of opinion - learn to live with it - I have.

Thanks
Bill
 
  • #28
Thanks Bill--

According to the paper you referenced, the 'ensemble' approach you talk about here, i.e., choosing to interpret an improper mixture as a proper mixture (even though it isn't),

"superficially solves the problem of outcomes, but does not explain why this happens, how or when..."

The author says the following before that sentence:

"Recalling the discussion about proper vs. improper density operators in section 1.2.3, the system-apparatus are still entangled with the environment, which means that it is not in a definite state. Thinking back to the example of the double slit experiment, the disappearance of the interference pattern means that the phase relations of the superposition of the particle going through the two slits has disappeared. It does not mean that each particle path is determinate. Key here is equation (2.3). As Bell remarked:

'The idea that elimination of coherence, in one way or another, implies the replacement of 'and' by 'or', is a very common one among solvers of the 'measurement problem'. It has always puzzled me.' (Bell 1990 [15] p. 36)"

The author then goes on to note:

"Decoherence theorists have generally come to accept the criticisms above,
and accept that decoherence alone does not solve the problems of outcomes, and
therefore leaves the most essential question untouched."

That's basically what I'm saying. And Bell was quite right in the quote given here by the author you referenced.

He even goes on to say: "Some authors even think environment-induced decoherence aggravates the measurement problem. Indeed in the case of particle localisation due to environmental scattering, we found that the ensemble width increased faster under the influence of decoherence, so that without a collapse postulate the particle's position seems to have become indeterminate faster under the influence of decoherence."

So what you call a 'fair' examination of the issue is basically saying the same thing I've been saying -- decoherence alone is not really a solution to the measurement problem.

You apparently have decided not to be concerned about the 'most essential question' (in your referenced author's terms), and of course that's your prerogative.
 
  • #29
rkastner said:
That's basically what I'm saying.

If that is what you are saying then I agree. This is indeed the central issue here.

My concern was the claim that my position, and that of the many who hold similar positions, is contradictory. It isn't.

I must also say contradiction is a strong claim and requires a bit more detail than what you posted.

I consider that paper fair in that it lays out the central issue - not that I agree with its conclusions - but the issue is well explained.

Thanks
Bill
 

FAQ: Physicist disentangles 'Schrodinger's cat' debate

1. What is the Schrodinger's cat thought experiment?

Schrodinger's cat is a thought experiment in quantum mechanics, proposed by physicist Erwin Schrodinger in 1935. It imagines a cat in a sealed box with a radioactive substance and a poison that will be released if the substance decays. According to quantum mechanics, the substance exists in a state of superposition, being both decayed and not decayed at the same time. This means the cat is also in a state of superposition, being both alive and dead simultaneously.
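As a toy illustration of the superposition the FAQ describes (my own sketch, not from the article): the "cat" state can be written as an equal-amplitude combination of two basis states, and the Born rule assigns each outcome the squared magnitude of its amplitude.

```python
import numpy as np

# Hypothetical basis states for the two outcomes
alive = np.array([1.0, 0.0])
dead = np.array([0.0, 1.0])

# Equal superposition: |cat> = (|alive> + |dead>) / sqrt(2)
cat = (alive + dead) / np.sqrt(2)

# Born rule: outcome probability = |amplitude|^2
p_alive = abs(cat @ alive) ** 2
p_dead = abs(cat @ dead) ** 2

print(p_alive, p_dead)  # 0.5 0.5 (before any observation)
```

Until a measurement is made, the formalism assigns both outcomes probability 1/2; what (if anything) turns that superposition into one definite outcome is precisely the debate discussed in this thread.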

2. What is the debate surrounding Schrodinger's cat?

The debate surrounding Schrodinger's cat centers around the interpretation of quantum mechanics. Some scientists argue that the cat remains in a state of superposition until the box is opened and observed, while others argue that the act of observation collapses the superposition and the cat is either alive or dead. This debate has been ongoing since the introduction of the thought experiment and has not been definitively resolved.

3. How does a physicist disentangle the Schrodinger's cat debate?

A physicist may disentangle the Schrodinger's cat debate by conducting experiments and analyzing the results. This may involve creating a controlled environment to observe quantum phenomena and testing different interpretations of quantum mechanics. Through rigorous scientific methods, a physicist may provide evidence to support one interpretation over another, helping to clarify the debate.

4. What are the implications of resolving the Schrodinger's cat debate?

If the Schrodinger's cat debate is resolved, it could have significant implications for our understanding of the fundamentals of the universe. It could also impact the development of new technologies, such as quantum computers, which rely on the principles of quantum mechanics. Additionally, resolving the debate could pave the way for further advancements in the field of quantum physics.

5. Why is the Schrodinger's cat thought experiment important?

The Schrodinger's cat thought experiment is important because it highlights the strange and counterintuitive nature of quantum mechanics. It challenges our understanding of reality and raises questions about the role of observation in the physical world. It has also sparked ongoing debates and discussions among scientists, leading to further research and advancements in the field of quantum physics.
