An argument against Bohmian mechanics?

In summary: Simple systems can exhibit very different behavior from more complex systems with a large number of degrees of freedom. This is a well-known fact in physics, so the hydrogen atom is not a representative system for a discussion of ergodic behavior. Neumaier argues that Bohmian mechanics is wrong because it fails to predict all observed experimental results. However, this argument ignores the quantum theory of measurement and fails to take into account the effect of the measurement process. Furthermore, the Bohmian theory of quantum measurements is incomplete and cannot fully explain the behavior of the single universe we know of.
  • #281
stevendaryl said:
The use of probabilities involves a cut: You can describe it in several different ways:
  1. The cut between the system, which is the thing being measured, and the observer, which is the thing being measured.
There is a way to make this sound quite ordinary: if something is measured in the lab, of course there is an observer, and of course he decides what the system of interest is and what part of the universe he is going to ignore. This is a common characteristic of all scientific experiments.

So from this point of view, the Heisenberg cut seems like a totally ordinary thing which is part of all of science. I don't claim that there's no problem at all but that if there is one, it is surprisingly difficult to pin down.

Also, the omnipresent entanglement of QM suggests that we cannot just ignore a part of the whole without altering something important. So even without the Born rule, QM tells us that the naive application of the scientific method in the way I described above, which involves separating the observer and the environment from the system, may lead to a different behaviour.

(and I suppose that you wanted to write something along the lines of "the observer, who does the measuring" in the quote above)
 
  • #282
But @rubi, for macroscopic objects we see only one history. How does consistent histories explain that?

Also, the quote below is from the book "Do we really understand quantum mechanics":
Which history will actually occur in a given realization of the physical system is not known in advance: we postulate the existence of some fundamentally random process of Nature that selects one single history among all those of the family.
This seems like collapse, more precisely, an objective collapse. Do you reject this @rubi or you have some explanation?
 
  • #283
To go back to BM, I would like to know if there is an empirical way to distinguish a "local realist theory" from a "nonlocal realist theory" like BM. In a previous post Demystifier claimed that even though BM is nonlocal (meaning that it allows FTL influence), in practice this FTL influence couldn't be observed or measured, so it doesn't allow FTL signaling. If that is the case, I have to wonder what the empirical difference is between being local and nonlocal for a realist theory. And if there is none, the Bell inequalities would model both local and nonlocal realist theories (since they would be empirically indistinguishable), and their experimental violation would serve to reject realist theories in general without any further assumption.
So how are they empirically(i.e. scientifically as opposed to philosophically) distinguished? Anybody knows?
 
  • #284
ShayanJ said:
But @rubi, for macroscopic objects we see only one history. How does consistent histories explain that?
How is that different from ordinary classical Brownian motion? The particle follows exactly one path. We just don't know which one. The Wiener measure specifies the probability density for a certain path. Quantum mechanics is a stochastic theory that specifies probability distributions over spaces of quantum histories, just like Brownian motion or other stochastic processes specify probability distributions over spaces of classical trajectories. We see only one Brownian path and we see only one quantum history.

This seems like collapse, more precisely, an objective collapse. Do you reject this @rubi or you have some explanation?
It has nothing to do with a "collapse". The situation is exactly the same in classical stochastic processes. One Brownian path/quantum history is randomly selected. If you toss a coin, one side is randomly selected. The only difference is the word "quantum".
 
  • #285
kith said:
There is a way to make this sound quite ordinary: if something is measured in the lab, of course there is an observer, and of course he decides what the system of interest is and what part of the universe he is going to ignore. This is a common characteristic of all scientific experiments.

But in QM, the difference is not simply a matter of what to choose to ignore. Different rules apply to measurements than to other types of interactions.

In what I would consider a coherent formalism, you would describe how the world works, independently of observers, and then add physical-phenomenal axioms saying that such-and-such a condition of such-and-such subsystem counts as a measurement of such-and-such a property. There would be no additional physics to the measurement process, since it would just be an ordinary process.

(and I suppose that you wanted to write something along the lines of "the observer, who does the measuring" in the quote above)

Yes.
 
  • #286
vanhees71 said:
Of course, these are part of the postulates of QT, but it doesn't imply that there is a distinct classical world or that the measurement is outside of the laws described by QT. There are no extra rules.

The rule that a measurement results in an eigenvalue with probabilities given by the square of the amplitude IS an extra rule. It only applies to measurement interactions, and not to other types of interactions.

Observables are in the formalism represented by self-adjoint operators on a Hilbert space, and possible values these observables take when measured are the eigenvalues of these operators.

So that's an example of a rule that applies to measurements and not to other interactions. If it applied to other types of interactions, then you wouldn't have to use the phrase "when measured".
 
  • #287
Mentz114 said:
OK, that is pretty clear. So if these electrons interact, we can calculate the amplitudes of various outcomes, but we cannot (or should not) square these amplitudes to get probabilities?

If there are no measurements, then there are no outcomes. There are only states, and those states evolve deterministically, not probabilistically.
 
  • #288
...
stevendaryl said:
If there are no measurements, then there are no outcomes. There are only states, and those states evolve deterministically, not probabilistically.
Ok, I assume you mean no operator was assumed to operate. Given some initial states and some putative final states, is it possible to calculate the probabilities of the final states? I have to ask because I'm not sure if this is always possible.

But I take the point about the extra rule ...
 
  • #289
Mentz114 said:
...

Ok, I assume you mean no operator was assumed to operate. Given some initial states and some putative final states, is it possible to calculate the probabilities of the final states? I have to ask because I'm not sure if this is always possible.

But I take the point about the extra rule ...

Given a final state, we can calculate an amplitude for the system ending up in that final state, and we can square that to get a probability. But the problem is that there are infinitely many possible final states, and the corresponding probabilities don't add up to 1 (they add up to infinity). For a concrete example, if you prepare an electron in the spin state [itex]|u\rangle[/itex] (spin-up in the z-direction), and there are no interactions acting on it at all, then:
  • It has probability 1 of ending up spin-up in the z-direction.
  • It has probability 1/2 of ending up spin-up in the x-direction.
  • It has probability 1/2 of ending up spin-up in the y-direction.
  • It has probability 1/2 of ending up spin-up in the direction [itex]\frac{1}{\sqrt{2} } \hat{x} + \frac{1}{\sqrt{2}} \hat{y}[/itex]
  • It has probability 1/2 of ending up spin-up in the direction [itex]\frac{1}{\sqrt{2}} \hat{x} - \frac{1}{\sqrt{2}} \hat{y}[/itex]
  • etc.
If you don't say what's being measured, then you have no way to chop down the set of possibilities to a set of exclusive alternatives whose probabilities add up to 1.
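This bookkeeping is easy to check numerically. Here is a minimal NumPy sketch (the Bloch-sphere parametrization of the spin-up states is my own illustration, not anything from the post above):

```python
import numpy as np

def spin_up_state(theta, phi):
    """Spin-up eigenstate along the axis with polar angle theta, azimuth phi."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

up_z = spin_up_state(0.0, 0.0)  # prepared state |u>: spin-up along z

# Born-rule probability of finding spin-up along each direction
directions = {
    "z":           (0.0, 0.0),
    "x":           (np.pi / 2, 0.0),
    "y":           (np.pi / 2, np.pi / 2),
    "(x+y)/sqrt2": (np.pi / 2, np.pi / 4),
    "(x-y)/sqrt2": (np.pi / 2, -np.pi / 4),
}
probs = {name: abs(np.vdot(spin_up_state(*ang), up_z)) ** 2
         for name, ang in directions.items()}
for name, p in probs.items():
    print(f"P(spin-up along {name}) = {p:.3f}")

# These alternatives are not mutually exclusive, so the total exceeds 1:
print("sum =", sum(probs.values()))  # 1 + 4 * 0.5 = 3.0, growing without
                                     # bound as more directions are added
```

Each alternative taken alone is a legitimate Born-rule probability; they just don't form an exclusive set, which is why the total exceeds 1.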
 
  • #290
vanhees71 said:
I don't understand your last sentence. Why isn't there any probablities if I analyze everything using microscopic dynamics? The microscopic dynamics, i.e., QT, describes probabilities and only probabilities. What else should the meaning of this dynamics be than the time evolution of probability distributions for observables?

There are no probabilities in QM without a choice of a basis. The microscopic evolution doesn't select a basis.
 
  • #291
stevendaryl said:
Given a final state, we can calculate an amplitude for the system ending up in that final state, and we can square that to get a probability. But the problem is that there are infinitely many possible final states, and the corresponding probabilities don't add up to 1 (they add up to infinity). For a concrete example, if you prepare an electron in the spin state [itex]|u\rangle[/itex] (spin-up in the z-direction), and there are no interactions acting on it at all, then:
  • It has probability 1 of ending up spin-up in the z-direction.
  • It has probability 1/2 of ending up spin-up in the x-direction.
  • It has probability 1/2 of ending up spin-up in the y-direction.
  • It has probability 1/2 of ending up spin-up in the direction [itex]\frac{1}{\sqrt{2} } \hat{x} + \frac{1}{\sqrt{2}} \hat{y}[/itex]
  • It has probability 1/2 of ending up spin-up in the direction [itex]\frac{1}{\sqrt{2}} \hat{x} - \frac{1}{\sqrt{2}} \hat{y}[/itex]
  • etc.
If you don't say what's being measured, then you have no way to chop down the set of possibilities to a set of exclusive alternatives whose probabilities add up to 1.
Thanks, I guess this is not a good time to pursue this, but I'm specifically interested in an interaction because symmetries will come into play which will restrict the outcomes (I think).

Anyway, since this is my last post this year, all the best to you and all PF'ers for 2017.
 
  • #292
Mentz114 said:
Thanks, I guess this is not a good time to pursue this, but I'm specifically interested in an interaction because symmetries will come into play which will restrict the outcomes (I think).

Anyway, since this is my last post this year, all the best to you and all PF'ers for 2017.

Happy New Year!
 
  • #293
stevendaryl said:
But the problem is that there are infinitely many possible final states, and the corresponding probabilities don't add up to 1 (they add up to infinity). For a concrete example, if you prepare an electron in the spin state [itex]|u\rangle[/itex] (spin-up in the z-direction), and there are no interactions acting on it at all, then:
  • It has probability 1 of ending up spin-up in the z-direction.
  • It has probability 1/2 of ending up spin-up in the x-direction.
  • It has probability 1/2 of ending up spin-up in the y-direction.
  • It has probability 1/2 of ending up spin-up in the direction [itex]\frac{1}{\sqrt{2} } \hat{x} + \frac{1}{\sqrt{2}} \hat{y}[/itex]
  • It has probability 1/2 of ending up spin-up in the direction [itex]\frac{1}{\sqrt{2}} \hat{x} - \frac{1}{\sqrt{2}} \hat{y}[/itex]
  • etc.
If you don't say what's being measured, then you have no way to chop down the set of possibilities to a set of exclusive alternatives whose probabilities add up to 1.
This problem is resolved in CH by noting that your alternatives are not mutually exclusive. Probabilities don't need to add up to 1 if the alternatives are not mutually exclusive. You need to choose a set of mutually exclusive alternatives and you will get a proper probability distribution. Several such choices exist and experimental results will always be consistent with any such choice; the physics doesn't depend on it. However, many choices will answer different questions than the questions you're interested in. That's not problematic as long as the physics is consistent with any choice.
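The resolution described here, picking a mutually exclusive set of alternatives, can be sketched in the same spirit (a minimal NumPy illustration; the axis choices are arbitrary examples of mine):

```python
import numpy as np

up_z = np.array([1.0, 0.0])  # prepared state |u>, spin-up along z

def spin_up(theta, phi):
    """Spin-up eigenstate along the axis (theta, phi)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def spin_down(theta, phi):
    """Orthogonal partner: spin-down along the same axis."""
    return np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])

# For any single axis, {up, down} is an exhaustive set of mutually
# exclusive alternatives, so the Born probabilities sum to 1.
sums = []
for theta, phi in [(0.0, 0.0), (np.pi / 2, 0.0), (np.pi / 2, np.pi / 4)]:
    p_up = abs(np.vdot(spin_up(theta, phi), up_z)) ** 2
    p_dn = abs(np.vdot(spin_down(theta, phi), up_z)) ** 2
    sums.append(p_up + p_dn)
    print(f"axis ({theta:.2f}, {phi:.2f}): {p_up:.3f} + {p_dn:.3f} = {p_up + p_dn:.3f}")
```

Whichever axis is chosen, the two probabilities sum to 1; different axes simply answer different questions.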
 
  • #294
rubi said:
From reading vanhees' posts, I think he is secretly a consistent histories advocate without knowing it yet.

CH does not preserve unitary evolution, so he is more likely a secret MWI advocate :biggrin:
 
  • #295
RockyMarciano said:
I would like to know if there is an empirical way to distinguish a "local realist theory" from a "nonlocal realist theory" like BM.

It depends on how you define these terms. Or you could just realize that ordinary language is not well suited to this kind of discussion, and specify theories in terms of their actual math and their actual predictions, which is how we distinguish them in practice. It's easy to test whether a theory's predictions satisfy the Bell inequalities or not. Whether you want to call a theory that violates them "nonlocal" or "non-realist" is a matter of words, not physics.
 
  • #296
kith said:
There is a way to make this sound quite ordinary: if something is measured in the lab, of course there is an observer, and of course he decides what the system of interest is and what part of the universe he is going to ignore. This is a common characteristic of all scientific experiments.

So from this point of view, the Heisenberg cut seems like a totally ordinary thing which is part of all of science. I don't claim that there's no problem at all but that if there is one, it is surprisingly difficult to pin down.

Also, the omnipresent entanglement of QM suggests that we cannot just ignore a part of the whole without altering something important. So even without the Born rule, QM tells us that the naive application of the scientific method in the way I described above, which involves separating the observer and the environment from the system, may lead to a different behaviour.

(and I suppose that you wanted to write something along the lines of "the observer, who does the measuring" in the quote above)

But why can we ignore part of the universe? Is it because of locality? Why is the universe operationally local, even though reality is nonlocal (or retrocausal etc ...)?

Bohmian Mechanics is one example of emergent operational locality. Holography is another.
 
  • #297
atyy said:
In classical mechanics, eg. general relativity, there is no problem with the notion of the state of the universe. In quantum mechanics, what is the meaning of the quantum state of the universe?
It's an empty phrase in all of physics. I'm not even able to define what "the state of the universe" should mean, no matter whether within classical or quantum physics.
 
  • #298
atyy said:
But why can we ignore part of the universe? Is it because of locality? Why is the universe operationally local, even though reality is nonlocal (or retrocausal etc ...)?

Bohmian Mechanics is one example of emergent operational locality. Holography is another.
We can ignore part of the universe (almost all of the universe, in fact) because we can make only pretty local observations, and the locality of relativistic QFT ensures the validity of the linked-cluster principle. So far this model of "reality" is pretty successful, and according to this model interactions are local and microcausal. Retrocausality is just a misnomer for the possibility of choosing subensembles of full ensembles of measurements due to a fixed measurement protocol. You don't "change the past"; you just choose which part of the data you look at (take the Walborn et al. quantum eraser experiment as a very clear typical example of this kind of "retrocausality").
 
  • #299
ShayanJ said:
That's exactly what @stevendaryl means (correct me if I'm wrong!). The dynamics of a quantum system only involves probability amplitudes, and if no one wants to know anything about the system, no probability comes into play. So a unified description of all phenomena using quantum mechanics can't involve axioms about probabilities, because if all things that happen are governed by quantum mechanics, we should be able to treat measurements with the same language that we treat Schrödinger evolution. The fact that we have to introduce Born's rule to deal with measurements is the Heisenberg cut. Of course you may say that it's ridiculous to apply quantum mechanics to macroscopic objects because it would be unnecessarily complicated, and that's why we introduce Born's rule. But that viewpoint can only be true if you can derive Born's rule from the regular dynamics of quantum systems, and that's what all these interpretations are about. Of course decoherence makes things better for this viewpoint, but I'm not sure we can count on it to solve the problem completely. Can we?
Without Born's rule, I've no clue what quantum theory is about. Then it's a funny mathematical game to play without any contact to measurements and observations in the real world. Why this should imply that there are two different dynamics for "classical" and "quantum" systems is still an enigma to me. For me the behavior of macroscopic objects is explainable with standard QT and involves a corresponding coarse-graining procedure. It's of course impractical, in fact practically impossible, to describe the ##10^{24}## degrees of freedom of 1 mole of gas in some container in all microscopic detail. It's sufficient to describe the relevant observables in the sense of statistical quantum physics.
 
  • #300
rubi said:
From reading vanhees' posts, I think he is secretly a consistent histories advocate without knowing it yet. Consistent histories is essentially the minimal interpretation stated with more conceptual clarity. It keeps all the concepts from Copenhagen, but it interprets time evolution as a stochastic process, much like classical Brownian motion. The insertion of projection operators between the time evolution steps doesn't correspond to any physical process. Instead, it just selects a subset of histories from the path space, whose probability of occurring is to be calculated. It's completely analogous to the insertion of characteristic functions in the case of Brownian motion. No explicit references to measurements remain and all quantum paradoxes are resolved.

In Brownian motion, the Wiener measure on the space of Brownian paths is constructed by specifying it on so-called cylinder sets of paths ##x(t)##:
$$O^{t_1 t_2 \ldots}_{B_1 B_2 \ldots}=\{x : x(t_1)\in B_1, x(t_2) \in B_2, \ldots \}$$
For example, the probability for a path (with ##x(t_0)=x_0##) to be in the cylinder set ##O^{t_1 t_2}_{B_1 B_2}## is (up to some normalization factors) given by
$$P(O^{t_1 t_2}_{B_1 B_2})=\int dx_2 dx_1 \chi_{B_2}(x_2) e^{-\frac{(x_2-x_1)^2}{t_2-t_1}} \chi_{B_1}(x_1) e^{-\frac{(x_1-x_0)^2}{t_1-t_0}} \hat = \lVert P_{B_2} U(t_2-t_1) P_{B_1} U(t_1-t_0)\delta_{x_0 t_0}\rVert$$
Here, I have defined the projections ##(P_B f)(x)=\chi_B(x) f(x)## and the time evolution operators ##U(t)=e^{-t\Delta}##, which are just expressed as integrations against the heat kernel in the above integral. Of course, nobody would think of the projectors ##P_B## as a form of time evolution in the totally classical case of Brownian motion. The Brownian particle just follows some random path, and the probability for a given set of paths just happens to involve this projector.

In quantum mechanics, the situation is completely analogous. The projectors of the position operator ##\hat x## are also given by characteristic functions ##\chi_B(x)## and the time evolution of the Schrödinger equation (for example for the free particle) is given by ##U(t)=e^{-it\frac{\Delta}{2m}}##. This suggests in a very compelling way that the projections are not "a different form of time evolution", like the Copenhagen interpretation suggests. Measurements don't play any distinguished role.
That's indeed just minimally interpreted QT. Why one labels it as "consistent histories" is not clear to me yet, but if this is all to it, I've no objections against it.
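For what it's worth, the cylinder-set probability in the Brownian case can be checked by simulation. The sketch below uses standard Brownian motion started at ##x_0=0## with ##B_1=B_2=[0,\infty)##, a special case where the properly normalized version of the integral above has the closed form ##1/4+\arcsin(\sqrt{t_1/t_2})/2\pi## (a standard orthant-probability identity; the particular times are my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard Brownian motion started at x0 = 0; sample x(t1), x(t2)
t1, t2 = 1.0, 2.0
n = 200_000
x1 = x0_increment = rng.normal(0.0, np.sqrt(t1), n)      # x(t1) ~ N(0, t1)
x2 = x1 + rng.normal(0.0, np.sqrt(t2 - t1), n)           # independent increment

# Cylinder set O = {x : x(t1) in B1, x(t2) in B2} with B1 = B2 = [0, inf)
p_mc = np.mean((x1 > 0) & (x2 > 0))

# Exact value: 1/4 + arcsin(sqrt(t1/t2)) / (2*pi) = 3/8 for t1/t2 = 1/2
p_exact = 0.25 + np.arcsin(np.sqrt(t1 / t2)) / (2 * np.pi)
print(p_mc, p_exact)  # both close to 0.375
```

The Monte Carlo estimate agrees with the closed form; nothing here refers to a measurement, just as in the quantum-histories reading.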
 
  • #301
stevendaryl said:
The rule that a measurement results in an eigenvalue with probabilities given by the square of the amplitude IS an extra rule. It only applies to measurement interactions, and not to other types of interactions.
Yes, it's part of the standard rules of QT in the minimal interpretation, but there's nothing special in interactions between the system and the measurement apparatus. The same rules apply, since a measurement apparatus consists of the same microscopic building blocks as any other object.

So that's an example of a rule that applies to measurements and not to other interactions. If it applied to other types of interactions, then you wouldn't have to use the phrase "when measured".
I still don't understand how one can come to such a conclusion. It's just a tautology: if I want to know a (more or less precise) value of an observable, I have to measure it, no matter whether I'm thinking in terms of classical or quantum theory.
 
  • #302
stevendaryl said:
There are no probabilities in QM without a choice of a basis. The microscopic evolution doesn't select a basis.
In the standard formalism there's a large freedom in choosing the time evolution picture. Thus states, represented by the statistical operator ##\hat{\rho}(t)##, and eigenvectors ##|a(t) \rangle## of a complete set of compatible observables, represented by a corresponding set of self-adjoint operators ##\hat{A}(t)##, by themselves have no physical meaning, i.e., they do not refer to directly observable/measurable phenomena. That's the case only for the corresponding probability (distributions), $$P(t,a)=\langle a(t)|\hat{\rho}(t)|a(t) \rangle.$$
Thus there are probabilities in standard QT from the very beginning. It's part of the postulates and the only relation between the mathematical formalism to observable/measurable phenomena in nature.
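As a concrete illustration of ##P(t,a)=\langle a(t)|\hat{\rho}(t)|a(t) \rangle## at a fixed time (the 70/30 mixture below is an arbitrary example of mine, not from the post):

```python
import numpy as np

# A statistical operator rho at a fixed time: a 70/30 mixture of
# spin-up and spin-down along z (weights chosen arbitrarily for illustration)
u = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])
rho = 0.7 * np.outer(u, u.conj()) + 0.3 * np.outer(d, d.conj())

# Eigenvectors |a> of two different complete observables: S_z and S_x
bases = {
    "S_z": [u, d],
    "S_x": [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)],
}

# P(a) = <a| rho |a> is the contact between the formalism and observation
results = {}
for name, basis in bases.items():
    probs = [float(np.real(np.conj(a) @ rho @ a)) for a in basis]
    results[name] = probs
    print(name, [f"{p:.2f}" for p in probs], "sum =", f"{sum(probs):.2f}")
```

For each complete observable the probabilities sum to 1, but the distributions differ from basis to basis, which is the point at issue in this exchange.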
 
  • #303
vanhees71 said:
Without Born's rule, I've no clue what quantum theory is about. Then it's a funny mathematical game to play without any contact to measurements and observations in the real world.
There was a time when the universe was playing that "funny mathematical game"! I'm sure even now there are places in the universe where that game is still being played.
 
  • #305
vanhees71 said:
?
I suppose that's for me!
I meant there was a time when no people were around to observe anything, and yet the universe was behaving quantum mechanically. And even in the present, there are parts of the universe that we can't observe, and still quantum mechanics applies to them. Unless you're willing to assume quantum mechanics applies only when people are around to observe the result!
 
  • #306
PeterDonis said:
It depends on how you define these terms. Or you could just realize that ordinary language is not well suited to this kind of discussion, and specify theories in terms of their actual math and their actual predictions, which is how we distinguish them in practice. It's easy to test whether a theory's predictions satisfy the Bell inequalities or not.
I did define the terms local and realist in previous posts, and both definitions have mathematical content (the content of being realist is summarized in the Bell inequalities), while local/nonlocal is defined in terms of the physical possibility of FTL signals.
Whether you want to call a theory that violates them "nonlocal" or "non-realist" is a matter of words, not physics.
Well, it seems that for some the distinction is about physics. Although in fact it seems more about philosophy the way it is presented here: http://fetzer-franklin-fund.org/media/emqm15-reconcile-non-local-reality-local-non-reality-3/
 
  • #307
vanhees71 said:
Yes, it's part of the standard rules of QT in the minimal interpretation, but there's nothing special in interactions between the system and the measurement apparatus.

Then why does the rule single out measurements?

I still don't understand, how one can come to such a conclusion. It's just a tautology: If I want to know a (more or less precise) value of an observable, I have to measure it. No matter whether I'm thinking in terms of classical or quantum theory.

There is no special interaction associated with measurements in classical theory. Look, try to formulate the Born rule without mentioning "measurement" or a macroscopic/microscopic distinction. It is not possible. In contrast, the rules for classical mechanics can be formulated without mentioning measurement. That doesn't mean that there are no measurements in classical mechanics, but that measurements are a derived concept, not a primitive concept.

We could try to make measurements a derived concept in QM, as well.

What is a measurement? Well, a first cut at this is that a measurement is an interaction between one system (the system to be measured) and a second system (the measuring device) so that the quantity to be measured causes a macroscopic change in the measuring device. Roughly speaking, we have:
  • A system being measured that has some operator [itex]O[/itex] with eigenvalues [itex]o_i[/itex].
  • A measuring device, which has macroscopic states "[itex]S_{ready}[/itex]" (for the state in which it has not yet measured anything) and "[itex]S_i[/itex]" (for the state of having measured value [itex]o_i[/itex]; each macroscopic state will correspond to a huge number of microscopic states.)
  • The device is metastable when in the "ready" state, meaning that a small perturbation away from the ready state will lead to an irreversible, entropy-increasing transition to one of the states [itex]S_i[/itex].
  • The interaction between system and measuring device is such that if the system is in a state having eigenvalue [itex]o_i[/itex], then the device is overwhelmingly more likely to end up in the state [itex]S_i[/itex] than any other macroscopic state [itex]S_j[/itex]. (There might be other macroscopic states representing a failed or ambiguous measurement, but I'll ignore those for simplicity.)
Note: There is a bit of circularity here, in that to really make sense of the Born rule, I have to define what a measuring device is, and to define a measuring device, I have to refer to probability (which transitions are much more likely than which others), which in quantum mechanics has to involve the Born rule. What we can do, though, is treat the fact that the device makes a transition to state [itex]S_i[/itex] when it interacts with a system in a pure state with eigenvalue [itex]o_i[/itex] as initially being a matter of empirical observation, or it can be derivable from classical or semiclassical physics.

Then the Born rule implies that if the system to be measured is initially in a superposition of states of the form: [itex]\sum_i c_i |\psi_i\rangle[/itex] where [itex]|\psi_i\rangle[/itex] is an eigenstate of [itex]O[/itex] with eigenvalue [itex]o_i[/itex], then the measuring device will, upon interacting with the system, make a transition to one of the macroscopic states [itex]S_i[/itex] with a probability given by [itex]|c_i|^2[/itex].

Now, at this point, I think we can see where the talk about "measurement" in quantum mechanics is something of a red herring. Presumably, the fact that a pure eigenstate [itex]|\psi_i\rangle[/itex] of the system triggers the measuring device to go into macroscopic state [itex]S_i[/itex] is in principle derivable from applying Schrodinger's equation to the composite system, if we know the interaction Hamiltonian. But because of the linearity of quantum evolution, one would expect that if the initial state of the system were a superposition [itex]\sum_i c_i |\psi_i\rangle[/itex], then the final state of the measuring device would be a superposition of different macroscopic states, as well. (Okay, decoherence will actually prevent the device from being in a superposition, but the combination system + device + environment would be in a superposition.) So the Born rule for measurements can be re-expressed in what I think is an equivalent form that doesn't involve measurements at all:

If a composite system is in a state of the form [itex]|\Psi\rangle = \sum_i c_i |\Psi_i\rangle[/itex], where for [itex]i \neq j[/itex], [itex]|\Psi_i\rangle[/itex] and [itex]|\Psi_j\rangle[/itex] represent macroscopically distinguishable states, then that means that the system is in exactly one of the states [itex]|\Psi_i\rangle[/itex], with probability [itex]|c_i|^2[/itex]

Note the difference with the usual statement of the Born rule: I'm not saying that the system will be measured to have some eigenvalue [itex]o_i[/itex] with probability [itex]|c_i|^2[/itex], because that would lead to an infinite regress. You would need a microscopic system, a measuring device to measure the state of the microscopic system, a second measuring device to measure the state of the first measuring device, etc. The infinite regress must stop at some point where something simply HAS some value, not "is measured to have some value".

But with this reformulation of Born's rule, which I'm pretty sure is equivalent to the original, you can see that the macroscopic/microscopic distinction has to be there. You can't apply the rule without the restriction that [itex]i \neq j[/itex] implies [itex]|\Psi_i\rangle[/itex] is macroscopically distinguishable from [itex]|\Psi_j\rangle[/itex]. To see this, take the simple case of a single spin-1/2 particle, where we only consider spin degrees of freedom. A general state can be written as [itex]|\psi\rangle = \alpha |u\rangle + \beta |d\rangle[/itex]. Does that mean that the particle is "really" in the state [itex]|u\rangle[/itex] or state [itex]|d\rangle[/itex], we just don't know which? No, it doesn't mean that. The state [itex]|\psi\rangle[/itex] is a different state from either [itex]|u\rangle[/itex] or [itex]|d\rangle[/itex], and it has observably different behavior.

If you eliminate "measurement" as a primitive, then you can't apply the Born rule without making a macroscopic/microscopic distinction.
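The point above, that a superposition is observably different from "really being in [itex]|u\rangle[/itex] or [itex]|d\rangle[/itex], we just don't know which", can be made concrete with density matrices (a minimal NumPy sketch; the equal-amplitude case is my own choice):

```python
import numpy as np

# |psi> = alpha|u> + beta|d>, with alpha = beta = 1/sqrt(2)
alpha = beta = 1 / np.sqrt(2)
psi = np.array([alpha, beta])

rho_super = np.outer(psi, psi.conj())  # the superposition itself
rho_mixture = (abs(alpha) ** 2 * np.diag([1.0, 0.0])
               + abs(beta) ** 2 * np.diag([0.0, 1.0]))  # ignorance: "really u or d"

# Probability of finding spin-up along x distinguishes the two cases:
x_up = np.array([1.0, 1.0]) / np.sqrt(2)
p_super = float(np.real(np.conj(x_up) @ rho_super @ x_up))      # 1.0
p_mixture = float(np.real(np.conj(x_up) @ rho_mixture @ x_up))  # 0.5
print(p_super, p_mixture)
```

The superposition gives spin-up along x with certainty, while the ignorance mixture gives 1/2, so the two states lead to different statistics and cannot be identified.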
 
  • #308
ShayanJ said:
I suppose that's for me!
I meant there was a time when no people were around to observe anything, and yet the universe was behaving quantum mechanically. And even in the present, there are parts of the universe that we can't observe, and still quantum mechanics applies to them. Unless you're willing to assume quantum mechanics applies only when people are around to observe the result!

Yeah, to me, the minimal interpretation is schizophrenic (whoops! I guess that's no longer a socially or medically acceptable term for split personality). On the one hand, it denies that there is anything special about measurement, and on the other hand, it seems to declare that the whole theory is meaningless without measurements.
 
  • #309
stevendaryl said:
Yeah, to me, the minimal interpretation is schizophrenic (whoops! I guess that's no longer a socially or medically acceptable term for split personality). On the one hand, it denies that there is anything special about measurement, and on the other hand, it seems to declare that the whole theory is meaningless without measurements.
To be fair, one must admit that the origin of this situation lies in the mathematical formulation: the Born postulate gives non-classical probabilities as the only way to get predictions of measurements, while the rest of the postulates, based on a Hilbert space, are classical. The only way to avoid direct schizophrenic contradiction is by adding an ad hoc macroclassical/microquantum cut to the theory, which establishes a dependency on the classical theory and, of course, an unsolvable enigma about the difference between measurement interactions and the rest of interactions.

But in the minimal interpretation (and others) this cut is understood as a continuous, smooth approximation from the micro to the macro situation, so that in principle there isn't really a cut between measurements and the rest of the interactions, only the practical difficulty of probing microquantum scales with our big measurement apparatuses, which are completely quantum too. This actually hinges on the correspondence principle of the classical limit of QM (defined as: a more general theory can be formulated in a logically complete manner independently of the less general theory which forms a limiting case of it), based on certain requisites like the Ehrenfest equations, among others. The problem is that these requisites depend on the truth of the classical postulates, so the principle is not met; and if those postulates really contradict the Born postulate, they are of no use as a basis for the correspondence limit and its smooth connection from micro to macro. One is then obliged to acknowledge the introduction of the cut into the theory in order to avoid inconsistency, even if one doesn't think it exists in nature. In the words of Landau: "It is impossible in principle to formulate the basic concepts of QM without using classical mechanics".

Now try to explain this to an experimentally minded person. As long as the Born rule postulate works, they couldn't care less whether it contradicts the rest of the postulates; to them it is none of their business.
 
  • #310
stevendaryl said:
Then why does the rule single out measurements?
There is no special interaction associated with measurements in classical theory. Look, try to formulate the Born rule without mentioning "measurement" or a macroscopic/microscopic distinction. It is not possible. In contrast, the rules for classical mechanics can be formulated without mentioning measurement. That doesn't mean that there are no measurements in classical mechanics, but that measurements are a derived concept, not a primitive concept.

We could try to make measurements a derived concept in QM, as well.

What is a measurement? Well, a first cut at this is that a measurement is an interaction between one system (the system to be measured) and a second system (the measuring device) so that the quantity to be measured causes a macroscopic change in the measuring device. Roughly speaking, we have:
  • A system being measured that has some operator [itex]O[/itex] with eigenvalues [itex]o_i[/itex].
  • A measuring device, which has macroscopic states "[itex]S_{ready}[/itex]" (for the state in which it has not yet measured anything) and "[itex]S_i[/itex]" (for the state of having measured value [itex]o_i[/itex]; each macroscopic state will correspond to a huge number of microscopic states.)
  • The device is metastable when in the "ready" state, meaning that a small perturbation away from the ready state will lead to an irreversible, entropy-increasing transition to one of the states [itex]S_i[/itex].
  • The interaction between system and measuring device is such that if the system is in a state having eigenvalue [itex]o_i[/itex], then the device is overwhelmingly more likely to end up in the state [itex]S_i[/itex] than any other macroscopic state [itex]S_j[/itex]. (There might be other macroscopic states representing a failed or ambiguous measurement, but I'll ignore those for simplicity.)
Note: There is a bit of circularity here, in that to really make sense of the Born rule, I have to define what a measuring device is, and to define a measuring device, I have to refer to probability (which transitions are much more likely than which others), which in quantum mechanics has to involve the Born rule. What we can do, though, is treat the fact that the device makes a transition to state [itex]S_i[/itex] when it interacts with a system in a pure state with eigenvalue [itex]o_i[/itex] as initially being a matter of empirical observation, or as something derivable from classical or semiclassical physics.

Then the Born rule implies that if the system to be measured is initially in a superposition of states of the form: [itex]\sum_i c_i |\psi_i\rangle[/itex] where [itex]|\psi_i\rangle[/itex] is an eigenstate of [itex]O[/itex] with eigenvalue [itex]o_i[/itex], then the measuring device will, upon interacting with the system, make a transition to one of the macroscopic states [itex]S_i[/itex] with a probability given by [itex]|c_i|^2[/itex].

Now, at this point, I think we can see where the talk about "measurement" in quantum mechanics is something of a red herring. Presumably, the fact that a pure eigenstate [itex]|\psi_i\rangle[/itex] of the system triggers the measuring device to go into macroscopic state [itex]S_i[/itex] is in principle derivable from applying Schrodinger's equation to the composite system, if we know the interaction Hamiltonian. But because of the linearity of quantum evolution, one would expect that if the initial state of the system were a superposition [itex]\sum_i c_i |\psi_i\rangle[/itex], then the final state of the measuring device would be a superposition of different macroscopic states, as well. (Okay, decoherence will actually prevent the device from being in a superposition, but the combination system + device + environment would be in a superposition.) So the Born rule for measurements can be re-expressed in what I think is an equivalent form that doesn't involve measurements at all:

If a composite system is in a state of the form [itex]|\Psi\rangle = \sum_i c_i |\Psi_i\rangle[/itex], where for [itex]i \neq j[/itex], [itex]|\Psi_i\rangle[/itex] and [itex]|\Psi_j\rangle[/itex] represent macroscopically distinguishable states, then that means that the system is in exactly one of the states [itex]|\Psi_i\rangle[/itex], with probability [itex]|c_i|^2[/itex]

Note the difference with the usual statement of the Born rule: I'm not saying that the system will be measured to have some eigenvalue [itex]\lambda_i[/itex] with probability [itex]|c_i|^2[/itex], because that would lead to an infinite regress. You would need a microscopic system, a measuring device to measure the state of the microscopic system, a second measuring device to measure the state of the first measuring device, etc. The infinite regress must stop at some point where something simply HAS some value, not "is measured to have some value".

But with this reformulation of Born's rule, which I'm pretty sure is equivalent to the original, you can see that the macroscopic/microscopic distinction has to be there. You can't apply the rule without the restriction that [itex]i \neq j[/itex] implies [itex]|\Psi_i\rangle[/itex] is macroscopically distinguishable from [itex]|\Psi_j\rangle[/itex]. To see this, take the simple case of a single spin-1/2 particle, where we only consider spin degrees of freedom. A general state can be written as [itex]|\psi\rangle = \alpha |u\rangle + \beta |d\rangle[/itex]. Does that mean that the particle is "really" in the state [itex]|u\rangle[/itex] or state [itex]|d\rangle[/itex], we just don't know which? No, it doesn't mean that. The state [itex]|\psi\rangle[/itex] is a different state from either [itex]|u\rangle[/itex] or [itex]|d\rangle[/itex], and it has observably different behavior.

If you eliminate "measurement" as a primitive, then you can't apply the Born rule without making a macroscopic/microscopic distinction.
This is again an example of philosophical misunderstandings. The Born rule doesn't single out interactions between a measurement device and the system as opposed to any other interaction. The same fundamental interactions of the Standard Model are at work always. Of course, quantum theory, like classical theory, is about what we are able to observe and measure. So the probabilities described by quantum theory are probabilities for the outcomes of measurements of a given observable on a system whose state is given by previous observations or a preparation procedure.

What you describe further is the collapse hypothesis, which I think is only a very special case that almost never applies, and when it does, it requires a measurement device carefully constructed to enable a (good approximation of a) von Neumann filter measurement. I thus don't say that the system undergoes a transition to the state ##|\psi_i \rangle \langle \psi_i|## with probability ##|c_i|^2##, but simply that I measure ##o_i## with this probability. It may well be that the system is destroyed by the measurement process (e.g., a photon is absorbed when registered by a photodetector or an em. calorimeter).

That the measurement device must have the classical properties you describe is also pretty clear, since we have to "amplify" the microscopic properties to be able to measure them; but I don't think that there is a distinction between classical and quantum laws on a fundamental level. The classical behavior is derivable from quantum theory by appropriate averaging procedures in the usual sense of quantum statistics. A "macro state" is thus describable as an average over a large number of "micro states". You mention entropy production yourself, but that indeed makes it necessary to neglect information, i.e., to coarse-grain to the relevant macroscopic observables.
 
  • #311
atyy said:
Anyway, the basic idea is that unless there is fine tuning, it is unlikely the universe was created in equilibrium.
I think that this idea misses the point of statistical equilibrium. A system in statistical equilibrium tends to stay close to it because the vast majority of all possible states are close to equilibrium. Statistical equilibrium is nothing but the state of largest entropy. Therefore one does not need fine tuning to have equilibrium. Just the opposite: one needs fine tuning to be in a state far from equilibrium.
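
The "majority of states" point can be illustrated with a toy counting exercise. A minimal sketch, assuming a made-up model of N two-state spins where "equilibrium" means zero net magnetization (the ##k \approx N/2## region, which maximizes the entropy ##\log \binom{N}{k}##):

```python
from math import comb

# Toy model (hypothetical): N independent two-state spins. "Equilibrium"
# = zero net magnetization, i.e. k = N/2 ups, which maximizes log C(N, k).
N = 1000
total = 2 ** N  # all microstates, counted with exact integer arithmetic

# Fraction of microstates within 5% of equilibrium (k between 475 and 525)
near_eq = sum(comb(N, k) for k in range(475, 526))
frac_near = near_eq / total
print(frac_near)  # ~0.9: the large majority sits close to equilibrium

# Fraction farther than 10% from equilibrium (k < 400 or k > 600);
# the two tails are equal by the symmetry C(N, k) = C(N, N - k)
frac_far = 2 * sum(comb(N, k) for k in range(0, 400)) / total
print(frac_far)  # astronomically small (well below 1e-8)
```

Already at N = 1000 a state more than 10% away from equilibrium must be selected from an exponentially small corner of the state space, i.e. fine-tuned; for macroscopic N the effect is incomparably sharper.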
 
  • Like
Likes vanhees71
  • #312
rubi said:
BM is possibly one of the least rational explanations that people have come up with in the history of science.
So what's the most rational interpretation of QM in your opinion? Consistent histories? With non-classical logic (see Griffiths)? Changing the rules of logic is the least rational thing to do for my taste.
 
  • Like
Likes vanhees71
  • #313
Demystifier said:
I think that this idea misses the point of statistical equilibrium. A system in statistical equilibrium tends to stay close to it because the vast majority of all possible states are close to equilibrium. Statistical equilibrium is nothing but the state of largest entropy. Therefore one does not need fine tuning to have equilibrium. Just the opposite: one needs fine tuning to be in a state far from equilibrium.

But this assumes a discrete state space. If the state space is not discrete, then there is no unique notion of majority.

Also, it makes no sense to use "majority" as an argument. It is dynamics that is fundamental, not statistical mechanics.
 
  • #314
atyy said:
But this assumes a discrete state space. If the state space is not discrete, then there is no unique notion of majority.
Sure, you have to fix some measure. But in many cases there is a natural choice of measure. For instance, in classical statistical physics for one particle in 3 dimensions the natural measure is ##d^3x d^3p##, which is related to the fact that the phase volume is conserved owing to the Liouville theorem. A similar measure exists for Bohmian mechanics.
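
The Liouville-theorem point can be made concrete with one step of a linearized flow. A minimal sketch (my own illustration), assuming a 1D harmonic oscillator with ##m = \omega = 1##: a symplectic update preserves the ##dq\,dp## volume exactly (unit Jacobian determinant), whereas a plain explicit Euler update inflates it:

```python
import numpy as np

h = 0.1  # time step

# 1D harmonic oscillator with m = omega = 1:  H = (p**2 + q**2) / 2.
# Symplectic Euler step:  p' = p - h*q,  q' = q + h*p'.
# Written out as a linear map acting on the column vector (q, p):
M_symp = np.array([[1 - h**2, h],
                   [-h,       1.0]])

# Explicit (non-symplectic) Euler step:  q' = q + h*p,  p' = p - h*q.
M_euler = np.array([[1.0, h],
                    [-h,  1.0]])

# det(M) is the factor by which one step scales phase-space volume dq dp
print(np.linalg.det(M_symp))   # ~1.0  -> volume exactly conserved (Liouville)
print(np.linalg.det(M_euler))  # ~1.01 -> volume grows by 1 + h^2 per step
```

Symplectic Euler is not the exact flow, but it shares its volume preservation; the exact harmonic-oscillator flow is a rotation of phase space, with determinant 1 as well. Unit determinant is the discrete statement that the ##d^3x\, d^3p## measure is the natural, dynamics-respecting one.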
 
  • #315
Demystifier said:
Sure, you have to fix some measure. But in many cases there is a natural choice of measure. For instance, in classical statistical physics for one particle in 3 dimensions the natural measure is ##d^3x d^3p##, which is related to the fact that the phase volume is conserved owing to the Liouville theorem. A similar measure exists for Bohmian mechanics.

But the world is manifestly not in thermodynamic equilibrium.
 
