Effective Dynamics of Open Quantum Systems: Stochastic vs Unitary Models

In summary: Not quite. But an open system necessarily has to be described by a quantum model different from unitary dynamics if the rest of the universe is not explicitly modeled.
  • #141
atyy said:
If the wave function and collapse are not taken to be necessarily real, but the experimental apparatus and results are real, then we have to have a cut somewhere.
It is enough to know that the density matrix of the experimental apparatus makes very accurate predictions of the macroscopic variables. This doesn't require a cut.
 
  • #142
A. Neumaier said:
It is enough to know that the density matrix of the experimental apparatus makes very accurate predictions of the macroscopic variables. This doesn't require a cut.

Or to put it the way Weinberg does - there are two rules for time evolution - deciding which one to use, and when, requires outside input.
 
  • #143
atyy said:
Or to put it the way Weinberg does - there are two rules for time evolution - deciding which one to use, and when, requires outside input.
No. It is decided by the way the system is coupled to the detector - which is a property of the combined quantum system. Before and after the system passes the detector, the system is approximately isolated and hence the unitary dynamics is an appropriate approximation. During the interaction with the detector, the system is obviously coupled and the appropriate approximation is a reduced dynamics in Lindblad form.
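To make this concrete, here is a minimal numerical sketch of the switching (the Hamiltonian, the dephasing operator, and all rates below are made-up illustrations, not a model of any particular detector):

[CODE=python]
import numpy as np

# Illustrative qubit: Rabi-type drive, with dephasing while "in the detector".
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sx          # made-up Hamiltonian
gamma = 2.0           # made-up dephasing rate during the detector stage

def step(rho, dt, coupled):
    """One Euler step of drho/dt = -i[H,rho] (+ Lindblad dephasing if coupled)."""
    drho = -1j * (H @ rho - rho @ H)
    if coupled:
        # For L = sqrt(gamma)*sz:  L rho L+ - {L+L, rho}/2 = gamma*(sz rho sz - rho)
        drho += gamma * (sz @ rho @ sz - rho)
    return rho + dt * drho

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
dt, t = 1e-3, 0.0
for _ in range(30000):
    # approximately isolated before t=10 and after t=20; coupled in between
    rho = step(rho, dt, coupled=(10.0 < t < 20.0))
    t += dt

print("final rho:\n", np.round(rho.real, 3))      # ~ I/2 after the detector
print("|rho_01| =", abs(rho[0, 1]))               # coherence killed by the coupling
[/CODE]

The same linear master equation governs both stages; only the coupling term is switched on while the system interacts with the detector.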

No outside input is needed - except of course for deciding which system to consider. But the latter kind of input is also needed when describing classical subsystems of a classical universe. By the same logic one would have to claim that classical systems also need an observer to make sense, and hence a cut.
 
  • #144
vanhees71 said:
Where is a classical/quantum cut when one applies standard quantum mechanics to predict probabilities and then measures them by looking at large ensembles of preparations?

I would say that "measure them" means to make a persistent, macroscopic record of some property. So the act of measurement necessarily involves a macro/micro distinction. Probabilities don't apply to microscopic properties; if an electron is in the state "spin-up in the z-direction", it doesn't really make sense to say that it has a probability of 50% of being spin-up in the x-direction. It only makes sense to say that a measurement of the spin in the x-direction will result in spin-up with probability 50%. So probabilities only apply to measurements, which are necessarily macroscopic events.

That to me is the measurement problem: How do probabilities arise for a macroscopic event, when the macroscopic event is (presumably) just made up of many, many microscopic events for which probability is not meaningful? You can say that it arises, as it does in classical statistical mechanics, from coarse-graining, but I don't think that's true. Coarse graining introduces probabilities through ignorance of the details of the actual state, but Bell's theorem shows that quantum probabilities cannot be interpreted as being due to ignorance (at least not without nonlocal or retrocausal influences).
 
  • #145
stevendaryl said:
the act of measurement necessarily involves a macro/micro distinction.
But this is not a cut. ''macro = ##10^{10}## or more atoms'' is fully sufficient for typical accuracies.
stevendaryl said:
Bell's theorem shows that quantum probabilities cannot be interpreted as being due to ignorance
But Bell's theorem is about a microscopic system. The detectors are macroscopic and produce a unique result. That it looks random can be understood in the same way as random buckling in a classical symmetry breaking situation. The Bell state effectively acts like an external force breaking the metastability of the detector state.
 
  • #146
A. Neumaier said:
But this is not a cut. ''macro = ##10^{10}## or more atoms'' is fully sufficient for typical accuracies.

But Bell's theorem is about a microscopic system. The detectors are macroscopic and produce a unique result. That it looks random can be understood in the same way as random buckling in a classical symmetry breaking situation. The Bell state effectively acts like an external force breaking the metastability of the detector state.

You've said things along those lines before, but I don't understand how that can be true. Or to put it more strongly: I do not think that it is true. I do not believe that the appearance of definite outcomes is explained by analogy to classical symmetry breaking.

It certainly is the case that measurement requires a metastable system. That's the way that microscopic events get magnified to macroscopic events. But I think you're mixing up two different notions of "definite outcome" when you propose that the "collapse" of a metastable system results in a definite state.

If I have a coin balanced on its edge, that is one kind of "indefinite state" between "heads" and "tails". It's metastable, in that the slightest perturbation will result in either "heads" or "tails". That's a classical type of "collapse". But quantum mechanics introduces another kind of indefinite state: superpositions. If it made sense to talk about the wave function of something as large as a coin, then presumably you could have a state of a coin that is a superposition of the coin being heads and the coin being tails. That is an indefinite state that is completely unrelated to the classical metastable state of a coin balanced on its edge.

To me, it seems that by invoking metastability to explain why there are definite outcomes, you are mixing up two completely different notions of "indefinite state".
 
  • #147
A. Neumaier said:
But Bell's theorem is about a microscopic system.
I disagree. It is about directly measurable phenomena explained in terms of variables which are not directly measurable. The former (directly measurable phenomena) are macroscopic almost by definition, while the latter (variables which are not directly measurable), in principle, may be either microscopic or macroscopic.
 
  • #148
stevendaryl said:
you are mixing up two completely different notions of "indefinite state".
There is only a single notion of state, and it is very definite. Both in classical and in quantum statistical mechanics, it is a density operator, if one uses for classical mechanics the Koopman representation, where the operators are multiplication operators in some Hilbert space of functions on phase space. The only difference between the classical and the quantum case is that in the former case, all operators are diagonal.

Decoherence shows that in a reduced description, the density matrices soon get very close to diagonal, recovering a Koopman picture of classical mechanics after a very short decoherence time. While the Koopman dynamics is strictly linear in terms of the density matrix (comparable to a quantum Lindblad equation), the resulting dynamics is highly nonlinear when rewritten as a classical stochastic process (comparable to a quantum jump or quantum diffusion process). Thus decoherence in principle provides (though only very few people think of it in these terms) a reduction of the quantum mechanics of open systems to a highly nonlinear classical stochastic process.
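As a toy illustration of this approach to diagonality (a qubit dephased by a bath of ##N## spins, using the standard spin-bath dephasing formula; the bath size and couplings are arbitrary choices for the example):

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
N = 50                               # arbitrary bath size
g = rng.uniform(0.5, 1.5, size=N)    # arbitrary qubit-bath couplings

# Qubit starts in (|0>+|1>)/sqrt(2), each bath spin in |+>.  Under
# H_int = sigma_z (x) sum_k g_k sigma_z^(k), the reduced coherence is
#   rho_01(t) = (1/2) * prod_k cos(2 g_k t)   (standard spin-bath result),
# while the diagonal entries stay exactly 1/2.
for t in [0.0, 0.2, 0.5, 1.0, 2.0]:
    coherence = 0.5 * np.prod(np.cos(2 * g * t))
    print(f"t = {t:4.1f}   |rho_01| = {abs(coherence):.3e}")
[/CODE]

Only the off-diagonal (quantum) part decays; what remains is a diagonal density matrix, i.e. a classical probability distribution in the Koopman sense.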

This stochastic process is no different in character from the stochastic process governing the dynamics of a metastable inverted pendulum, and hence has the same kind of multistable effective behavior that leads to definite classical outcomes within the accuracy due to the approximations involved. I recommend reading papers on optical bistability, e.g. this one or this one, where it is shown how semiclassical bistability arises from a quantum model by projecting out irrelevant degrees of freedom.
 
  • #149
Demystifier said:
I disagree. It is about directly measurable phenomena explained in terms of variables which are not directly measurable. The former (directly measurable phenomena) are macroscopic almost by definition, while the latter (variables which are not directly measurable), in principle, may be either microscopic or macroscopic.
Of course the theorem itself is about classical hidden variables, but the experiments violating the resulting inequality are about a microscopic 2-photon system interacting with two macroscopic devices.
 
  • #150
A. Neumaier said:
There is only a single notion of state, and it is very definite. Both in classical and in quantum statistical mechanics, it is a density operator

I think that's very misleading. You are invoking "metastability" as an explanation for why pointers or cats, or whatever, have (apparently) definite, or approximately definite, positions. Saying that the density operator is definite is using a different notion of "definite".

The issue, which neither metastability nor the use of density matrices addresses, is this: Why do classical objects such as cats or pointers have approximately definite positions (and approximately definite momenta, etc.)? Metastability has nothing to do with it.

I think that Many Worlds gives real insight into this question, even if you don't believe in it. If you consider the wave function of the entire universe, then there is no reason to think that macroscopic objects have approximately definite positions. There can perfectly well be a state for the whole universe that is a superposition of a state where I am in Seattle, and another state where I am in New York City. Metastability does not at all imply that such superpositions can't exist, or are unstable. But what we can argue is that there are negligible interference effects between these two elements of the superposition. So these two elements of the superposition will evolve independently as if they were completely separate worlds. The "me" in each branch can consistently believe that his is the ONLY branch, and he will never face a contradiction. If there is no way to observe the other branches, then you can apply Occam's razor and get rid of them from your ontology. But there is no physical event corresponding to "getting rid of the other branches". There is no physical collapse, whether mediated by metastability or not.

Even if you don't buy the Many Worlds interpretation, one aspect of it is true for any interpretation: if macroscopic objects failed to have definite positions, that failure would be unobservable.
 
  • #151
stevendaryl said:
Why do classical objects such as cats or pointers have approximately definite positions (and approximately definite momenta, etc.)?
This is answered by the law of large numbers and statistical mechanics. It is very well known that the standard deviations of all macroscopic variables of interest in physics scale like ##O(N^{-1/2})##, where ##N## is the number of particles involved (the conserved particle number, or the mean number if particle number is not conserved). Metastability answers why, in the case of a binary measurement, one of the two possible outcomes actually comes out.
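The scaling is easy to check numerically (a made-up toy in which the "macroscopic variable" is the mean of ##N## independent microscopic contributions):

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
trials = 300
# A "macroscopic variable" as the mean of N microscopic contributions:
# its standard deviation shrinks like N**(-1/2).
for N in [10**2, 10**4, 10**6]:
    means = np.array([rng.normal(size=N).mean() for _ in range(trials)])
    print(f"N = {N:>7}   observed std = {means.std():.5f}   "
          f"N**-0.5 = {N**-0.5:.5f}")
[/CODE]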

All of this is completely unrelated to MWI.
 
  • #152
A. Neumaier said:
This is answered by the law of large numbers and statistical mechanics. It is very well known that the standard deviations of all macroscopic variables of interest in physics scale like ##O(N^{-1/2})##, where ##N## is the number of particles involved (the conserved particle number, or the mean number if particle number is not conserved). Metastability answers why, in the case of a binary measurement, one of the two possible outcomes actually comes out.

All of this is completely unrelated to MWI.

No, I think you're completely wrong about both paragraphs above. Metastability has nothing to do with it. [edit] I shouldn't say nothing, but it doesn't explain definite outcomes. I think you're completely wrong about this.
 
  • #153
stevendaryl said:
No, I think you're completely wrong about both paragraphs above. Metastability has nothing to do with it. [edit] I shouldn't say nothing, but it doesn't explain definite outcomes. I think you're completely wrong about this.

An example of a metastable system might be a lattice of 1000 magnetic dipoles. They tend to line up; the state with all dipoles pointing in the same direction is lower energy than the state with them pointing in different directions. So if you start with an unmagnetized state (the dipoles pointing in all sorts of different directions), then a small perturbation will likely result in most dipoles pointing in the same direction. But that does not mean that you can't have a superposition of one state with all dipoles pointing up, and another state with all dipoles pointing down. If you started in such a superposition, it would not ever evolve into a state with all pointing one way, or all pointing the other way. If the initial state is symmetric under parity, then the final state will be.

I know what you're going to say: Couple it to an environment--a thermal bath of some sort. But I think that that would not make any difference. The same argument holds: If the thermal bath + lattice is initially symmetric under parity, then it will never evolve into a state that is not symmetric. It will never evolve into a state with a nonzero magnetic moment. Metastability just does not explain definite outcomes.
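This symmetry argument can be checked in a toy model (the two-spin Hamiltonian and couplings below are invented purely for illustration):

[CODE=python]
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Parity-symmetric two-spin model: Ising coupling plus transverse field.
H = np.kron(sz, sz) + 0.7 * (np.kron(sx, I2) + np.kron(I2, sx))
P = np.kron(sx, sx)                    # global spin flip; commutes with H
M = np.kron(sz, I2) + np.kron(I2, sz)  # total magnetization; P M P = -M

psi0 = np.zeros(4, dtype=complex)
psi0[0] = psi0[3] = 1 / np.sqrt(2)     # (|up,up> + |down,down>)/sqrt(2), parity-even

E, V = np.linalg.eigh(H)
for t in [0.0, 1.0, 5.0, 20.0]:
    psi = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    m = float(np.real(psi.conj() @ M @ psi))
    print(f"t = {t:5.1f}   <M> = {m:+.2e}")   # stays zero at all times
[/CODE]

A parity-even initial state stays parity-even under the symmetric unitary dynamics, so the expected magnetization never becomes nonzero.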
 
  • #154
stevendaryl said:
I think you're completely wrong about this.
I cannot argue about your subjective beliefs.

But what I stated is the reason why practitioners of QM don't feel a need to investigate the foundations of quantum mechanics, except insofar as there are challenging experiments to perform. It is very clear to them that statistical mechanics explains the gradual emergence of classicality, with ever increasing accuracy as the object size grows, due to the law of large numbers, and that the quantum dynamics morphs just as gradually into classical dynamics. There are even all sorts of intermediate stages modeled by quantum-classical dynamics, used a lot in situations where the quantum regime is important for some degrees of freedom but not for others. Thus there is a continuum from the fully quantum to the fully classical, and the only role of observers is to select from this spectrum the model that is most tractable computationally given a desired resolution.

A measurement problem arises only if one ignores all this and insists on the rigid, far too idealized framework in which quantum mechanics was introduced historically and is typically introduced in textbooks.
 
  • #155
stevendaryl said:
An example of a metastable system might be a lattice of 1000 magnetic dipoles. They tend to line up; the state with all dipoles pointing in the same direction is lower energy than the state with them pointing in different directions. So if you start with an unmagnetized state (the dipoles pointing in all sorts of different directions), then a small perturbation will likely result in most dipoles pointing in the same direction. But that does not mean that you can't have a superposition of one state with all dipoles pointing up, and another state with all dipoles pointing down. If you started in such a superposition, it would not ever evolve into a state with all pointing one way, or all pointing the other way. If the initial state is symmetric under parity, then the final state will be.

I know what you're going to say: Couple it to an environment--a thermal bath of some sort. But I think that that would not make any difference. The same argument holds: If the thermal bath + lattice is initially symmetric under parity, then it will never evolve into a state that is not symmetric. It will never evolve into a state with a nonzero magnetic moment. Metastability just does not explain definite outcomes.
A small enough collection of dipoles (e.g. a quantum magnetic dot) may be in a superposition, but if the object is large enough then at some point it becomes fixed, irreversibly, in one outcome. What else could possibly happen? Your argument is based on Platonic ideals. "Every quantum state has fluctuations" - Ballentine (he says it twice, actually)
 
  • #156
A. Neumaier said:
I cannot argue about your subjective beliefs.

But what I stated is the reason why practitioners of QM don't feel a need to investigate the foundations of quantum mechanics, except insofar as there are challenging experiments to perform. It is very clear to them that statistical mechanics explains the gradual emergence of classicality, with ever increasing accuracy as the object size grows, due to the law of large numbers, and that the quantum dynamics morphs just as gradually into classical dynamics. There are even all sorts of intermediate stages modeled by quantum-classical dynamics, used a lot in situations where the quantum regime is important for some degrees of freedom but not for others. Thus there is a continuum from the fully quantum to the fully classical, and the only role of observers is to select from this spectrum the model that is most tractable computationally given a desired resolution.

A measurement problem arises only if one ignores all this and insists on the rigid, far too idealized framework in which quantum mechanics was introduced historically and is typically introduced in textbooks.

Landau, Dirac, Bell, Adler, Weinberg, Haroche, Raimond, Laloe, Susskind, Zurek, Zeilinger, Hartle, Gell-Mann - are these not practitioners of quantum mechanics?
 
  • #157
atyy said:
Landau, Dirac, Bell, Adler, Weinberg, Haroche, Raimond, Laloe, Susskind, Zurek, Zeilinger, Hartle, Gell-Mann - are these not practitioners of quantum mechanics?
Which of these thinks that there is an unsolved measurement problem? The unsolved problems Landau, Dirac, and Weinberg are concerned about are the problematic mathematical basis of relativistic quantum field theory, not the measurement problem.
 
  • #158
stevendaryl said:
If the initial state is symmetric under parity, then the final state will be.
Mentz114 said:
A small enough collection of dipoles (e.g. a quantum magnetic dot) may be in a superposition, but if the object is large enough then at some point it becomes fixed, irreversibly, in one outcome. What else could possibly happen? Your argument is based on Platonic ideals. "Every quantum state has fluctuations" - Ballentine (he says it twice, actually)

I think you're completely wrong about that. The evolution of the wave function is linear. So if initial state [itex]I_1[/itex] leads to final state [itex]F_1[/itex], and initial state [itex]I_2[/itex] leads to final state [itex]F_2[/itex], then the superposition of [itex]I_1[/itex] and [itex]I_2[/itex] will lead to a superposition of [itex]F_1[/itex] and [itex]F_2[/itex]. It will not lead to a random pick between [itex]F_1[/itex] and [itex]F_2[/itex]. The same thing is true if you want to do density matrices.
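This linearity is trivial to verify numerically (random Hamiltonian and states, purely illustrative):

[CODE=python]
import numpy as np

rng = np.random.default_rng(2)
d = 8
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Hm = (A + A.conj().T) / 2                        # random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(Hm)
U = V @ np.diag(np.exp(-1j * E)) @ V.conj().T    # U = exp(-i H), one time unit

I1 = rng.normal(size=d) + 0j
I2 = rng.normal(size=d) + 0j
a, b = 0.6, 0.8j
lhs = U @ (a * I1 + b * I2)                      # evolve the superposition
rhs = a * (U @ I1) + b * (U @ I2)                # superpose the evolved states
print("max deviation:", np.abs(lhs - rhs).max()) # ~1e-15: no random pick
[/CODE]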

Metastability cannot explain definite outcomes.

I have no idea what you mean by my argument being based on "Platonic ideals". It's based on quantum mechanics.
 
  • #159
A. Neumaier said:
I cannot argue about your subjective beliefs.

Then let me put it more strongly: You are wrong about this.
 
  • #160
stevendaryl said:
The evolution of the wave function is linear.
The Fokker-Planck equation is also linear. Nevertheless it describes classical nonlinear stochastic processes.
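A toy example of the distinction (an overdamped particle in a double well; the potential and noise strength are arbitrary choices): the density obeys a linear Fokker-Planck equation, yet every single trajectory settles in one definite well.

[CODE=python]
import numpy as np

rng = np.random.default_rng(3)
# Overdamped motion in the double well V(x) = (x^2 - 1)^2 / 4:
#   dx = -V'(x) dt + sigma dW.
# p(x,t) obeys a *linear* Fokker-Planck equation, but each individual
# trajectory is a nonlinear stochastic process that settles in one well.
def drift(x):
    return -x * (x**2 - 1)            # -V'(x)

n, dt, sigma = 2000, 1e-3, 0.3        # arbitrary ensemble size, step, noise
x = np.zeros(n)                        # all start at the unstable point x = 0
for _ in range(20000):                 # integrate to t = 20 (Euler-Maruyama)
    x += drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n)

print("fraction in right well:", np.mean(x > 0))  # ~0.5 by symmetry
print("mean |x|:", np.abs(x).mean())              # ~1: each run is definite
[/CODE]

The ensemble density stays symmetric and bimodal, while each realization "collapses" into one of the two metastable outcomes.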
 
  • #161
stevendaryl said:
I think you're completely wrong about that. The evolution of the wave function is linear. So if initial state [itex]I_1[/itex] leads to final state [itex]F_1[/itex], and initial state [itex]I_2[/itex] leads to final state [itex]F_2[/itex], then the superposition of [itex]I_1[/itex] and [itex]I_2[/itex] will lead to a superposition of [itex]F_1[/itex] and [itex]F_2[/itex]. It will not lead to a random pick between [itex]F_1[/itex] and [itex]F_2[/itex]. The same thing is true if you want to do density matrices.

Metastability cannot explain definite outcomes.

Linear evolution is your Platonic ideal. It can only exist in very small, highly isolated systems. It only takes energy leaking out to make the subsystem non-conservative and lose normalization. This will drive a stochastic process to a definite result.

We must agree to disagree about this.
 
  • #162
stevendaryl said:
Then let me put it more strongly: You are wrong about this.
I cannot argue about your subjective beliefs. Repeating variations on them doesn't improve the situation.
 
  • #163
A. Neumaier said:
Which of these thinks that there is an unsolved measurement problem? The unsolved problems Landau, Dirac, and Weinberg are concerned about are the problematic mathematical basis of relativistic quantum field theory, not the measurement problem.

All believed there was an unsolved measurement problem (e.g. Dirac, Weinberg) or that a classical/quantum cut is needed (e.g. Landau).
 
  • #164
atyy said:
that a classical/quantum cut is needed
The cut is just the decision at which level of description the quantum corrections (which decay like ##O(N^{-1/2})##) can be neglected. It is no bigger a problem than deciding whether to include in the classical description of a pendulum the surrounding air and the way it is suspended, or whether accounting for them with a damping term is enough.
 
  • #165
A. Neumaier said:
The cut is just the decision at which level of description the quantum corrections (which decay like ##O(N^{-1/2})##) can be neglected. It is no bigger a problem than deciding whether to include in the classical description of a pendulum the surrounding air and the way it is suspended, or whether accounting for them with a damping term is enough.

Not in Landau's view.
 
  • #166
Mentz114 said:
Linear evolution is your Platonic ideal. It can only exist in very small, highly isolated systems. It only takes energy leaking out to make the subsystem non-conservative and lose normalization. This will drive a stochastic process to a definite result.

We must agree to disagree about this.
What if you look at the whole universe? Where does energy leak out to?
 
  • #167
atyy said:
Not in Landau's view.
In Volume IX (Statistical Physics, Part 2) of their treatise on theoretical physics, Landau and Lifshitz derive the hydrodynamic equations without needing any cut. The cut is mentioned only in the introduction to quantum mechanics and nowhere used - thus it is recognizable as a purely pedagogical device.
 
  • #168
A. Neumaier said:
The cut is just the decision at which level of description the quantum corrections (which decay like ##O(N^{-1/2})##) can be neglected. It is no bigger a problem than deciding whether to include in the classical description of a pendulum the surrounding air and the way it is suspended, or whether accounting for them with a damping term is enough.
The air surrounding the pendulum works to disrupt the macroscopic behavior I expect to observe, not to actually explain it. So I'm not finding this comparison fair or convincing.
 
  • #169
georgir said:
What if you look at the whole universe? Where does energy leak out to?
The whole universe only has one possible outcome :wink:

I assume you're joking.
 
  • #170
Mentz114 said:
The whole universe only has one possible outcome :wink:

I assume you're joking.
I'm not sure if you are now. The whole point of MWI etc. is many possible outcomes. Or you could call it one, but it could still be a superposition of stevendaryl both in Seattle and in New York.
 
  • #171
A. Neumaier said:
In Volume IX (Statistical Physics, Part 2) of their treatise on theoretical physics, Landau and Lifshitz derive the hydrodynamic equations without needing any cut. The cut is mentioned only in the introduction to quantum mechanics and nowhere used - thus it is recognizable as a purely pedagogical device.

Sorry, I cannot agree. You, vanhees71, Ballentine, and Peres are wrong.
 
  • #172
georgir said:
The air surrounding the pendulum works to disrupt the macroscopic behavior I expect to observe, not to actually explain it. So I'm not finding this comparison fair or convincing.
In both cases, the explanation is in the derivation of the approximations. One needs the surroundings to explain why the pendulum is damped (as observed) rather than ideal (as the ideal Hamiltonian dynamics would suggest). Notice the complete analogy with collapse (as observed in a continuous measurement) rather than unitary evolution (as the ideal Hamiltonian dynamics would suggest).
 
  • #173
atyy said:
Sorry, I cannot agree. You, vanhees71, Ballentine, and Peres are wrong.
I cannot argue about your subjective beliefs. Like stevendaryl, you simply call wrong whatever differs from your preferences.
 
  • #174
A. Neumaier said:
I cannot argue about your subjective beliefs. Like stevendaryl, you simply call wrong whatever differs from your preferences.

You are wrong because your thermal interpretation contradicts Bell's theorem.
 
  • #175
georgir said:
I'm not sure if you are now. The whole point of MWI etc. is many possible outcomes. Or you could call it one, but it could still be a superposition of stevendaryl both in Seattle and in New York.
Why invoke MWI when a much simpler explanation is available?

There is no measurement problem. People make measurements and get results. The only quibble is from those who insist that something weird and inexplicable is happening. Maybe they have a need for spookiness.

(I am not being disparaging. I respect other people's freedom to hold any views they wish.)
 
