nicf
TL;DR Summary: Does the account of measurement in Neumaier's thermal interpretation papers actually depend on the thermal interpretation?
I'm a mathematician with a longstanding interest in physics, and I've recently been enjoying reading and thinking about Arnold Neumaier's thermal interpretation, including some threads on this forum. There's something that's still confusing me, though, and I'm hoping someone here can clear it up. Most of the questions here come from the third paper in the series.
Consider some experiment, like measuring the spin of a suitably prepared electron, where we can get one of two outcomes. The story usually goes that, before the electron sets off the detector, the state is something like ##\left[\sqrt{\frac12}(|\uparrow_e\rangle+|\downarrow_e\rangle)\right]\otimes|\mbox{ready}\rangle##, where ##|\uparrow_e\rangle## denotes the state of an electron which has spin up around the relevant axis, and afterwards the state is something like ##\sqrt{\frac12}(|\uparrow_e\rangle\otimes |\uparrow_D\rangle+|\downarrow_e\rangle\otimes|\downarrow_D\rangle)##, where ##|\uparrow_D\rangle## denotes a state in which the (macroscopic) detector has reacted the way it would have if the electron had started in the state ##|\uparrow_e\rangle##. It's usually argued that this macroscopic superposition has to arise because the Schrödinger equation is linear. Let's call this the first story.
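The first story can be checked in a toy two-qubit model (this is just a standard sanity check, not anything from Neumaier's papers): a CNOT gate stands in for the measurement coupling, and linearity alone forces the entangled output state.

```python
import numpy as np

# Toy model: electron spin (first qubit) ⊗ detector (second qubit).
up = np.array([1.0, 0.0])    # |↑⟩
down = np.array([0.0, 1.0])  # |↓⟩
ready = up                   # identify |ready⟩ with |↑_D⟩ for the toy detector

# Pre-measurement product state: [ (|↑_e⟩ + |↓_e⟩)/√2 ] ⊗ |ready⟩
psi_in = np.kron((up + down) / np.sqrt(2), ready)

# A CNOT (electron as control) stands in for the measurement interaction;
# any unitary that correlates the detector with the spin would do.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Linearity: the unitary acts on each branch separately, so the output
# is the entangled state (|↑_e ↑_D⟩ + |↓_e ↓_D⟩)/√2.
psi_out = CNOT @ psi_in
print(psi_out)  # ≈ [0.707, 0, 0, 0.707]
```

Because the CNOT is linear, it cannot map the superposed input to just one of the two product states; that is exactly the linearity argument the first story rests on.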
This description has struck many people (including me) as confusing, since it seems to contradict what I actually see when I run the experiment: if I see the "up" result on my detector, then the "down" term above doesn't seem to have anything to do with the world I see in front of me. It's always seemed to me that this apparent contradiction is the core of the "measurement problem" and, to me at least, resolving it is the central reason to care about interpretations of quantum mechanics.
Neumaier seems to say that the first story is simply incorrect. Instead he tells what I'll call the second story: because the detector sits in a hot, noisy, not-at-all-isolated environment, and I only care about a very small number of the relevant degrees of freedom, I should instead represent it by a reduced density matrix. Since I've chosen to ignore most of the physical degrees of freedom in the system, the detector needle's position evolves in some complicated nonlinear way, but with the two possible readings as the only (relevant) stable states of the system. Which result actually occurs depends on details of the state of the detector and the environment that aren't practically knowable, but the whole process is, in principle, deterministic. The macroscopic superposition from the first story never actually obtains, or if it does, it quickly evolves into one of the two stable states.
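The "reduced density matrix" step of the second story is easy to exhibit numerically (again a toy sketch of my own, covering only the decoherence half, not the nonlinear effective dynamics): tracing the electron out of the entangled post-measurement state leaves a detector state with no interference terms between the two readings.

```python
import numpy as np

# Post-measurement entangled state (|↑_e ↑_D⟩ + |↓_e ↓_D⟩)/√2, written as a
# 2x2 coefficient matrix: rows index the electron, columns the detector.
psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

# Reduced density matrix of the detector: trace out the electron index.
# ρ_D[j,k] = Σ_i ψ[i,j] ψ*[i,k]
rho_detector = np.einsum('ij,ik->jk', psi, psi.conj())

print(rho_detector)
# [[0.5 0. ]
#  [0.  0.5]]  -- diagonal: the off-diagonal (interference) terms vanish
```

The diagonal result looks like a classical mixture of the two readings, but by itself it doesn't say which reading occurs; the second story's further claim, as I understand it, is that the effective (non-unitary, nonlinear) dynamics of the coarse-grained variables then selects one of the two stable states.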
So, finally, here's what I'd like to understand better:
(0) Did I describe the second story correctly?
(1) It seems to me that the second story could be told entirely within what Neumaier calls the "formal core" of quantum mechanics, the part that every interpretation agrees on. In his language, after my experiment, the q-probability distribution of the location of the detector needle really is supported only in the "up" region, and this follows from ordinary, uncontroversial quantum mechanics. Is this right? Does anything about the second story actually depend on the thermal interpretation?
(2) A more philosophical question: If macroscopic superpositions never actually appear, why all the fuss about interpretations? (For example, the many worlds interpretation seems to exist entirely to describe what it would mean for the universe to end up in such a macroscopic superposition.) What else even is there to worry about? If this does resolve the measurement problem, why wasn't it pointed out a long time ago?
(3) I've seen many arguments (e.g. https://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro, which cites https://arxiv.org/abs/quant-ph/0112095 and https://arxiv.org/abs/quant-ph/9506020 pp. 14-15) that sound to me like they're saying the second story can't possibly work, usually with language like "decoherence cannot solve the measurement problem". Am I misunderstanding them? If not, would the counterargument just be that they're making the same linearity mistake as the first story?