Is the Ensemble Interpretation Inconsistent with the PBR Theorem?

In summary: No, that's not what Ballentine says. In fact, he explicitly defines "objective" in terms of what the PBR theorem says: "(2) The properties of a single object are objective, i.e. do not depend on someone's knowledge about them."
  • #71
atyy said:
[...]Incidentally, the paper also has another wrong criticism of the standard interpretation. The paper claims that position and momentum can be simultaneously measured, but in the counterexample he gives, the position and momentum are not canonically conjugate. [...]

Do you have /know of a proof of that?
 
  • #72
dextercioby said:
Do you have /know of a proof of that?

Using the position at the screen to measure the momentum gives the momentum at the slit. I don't have a detailed calculation at the moment, but the essential idea is that the far field distribution is the Fourier transform of the wave function at the slit, by analogy to the Fraunhofer approximation https://en.m.wikipedia.org/wiki/Fraunhofer_diffraction_equation

A simpler way to see it is that a sharp position measurement will be distributed according to the squared amplitude of the wave function in position coordinates, while a sharp momentum measurement will be distributed according to the squared amplitude of the wave function in momentum coordinates, and these two distributions are not typically equal.
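
To make this concrete, here is a minimal numerical sketch (my own, with an assumed Gaussian slit profile, not a detailed diffraction calculation): the position and momentum spreads come from two genuinely different distributions, with the expected reciprocal widths.

```python
import numpy as np

# Sketch: for a Gaussian "slit" wave function psi(x) ~ exp(-x^2/(4 s^2)),
# compare the position distribution |psi(x)|^2 with the momentum
# distribution |psi~(p)|^2 obtained by Fourier transform (FFT).
hbar = 1.0
s = 0.5                       # assumed slit width parameter (arbitrary units)
N = 2**12
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * s**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize in position space

p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)   # momentum grid
dp = 2 * np.pi * hbar / (N * dx)
psi_p = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi * hbar)   # psi~(p), up to a phase

sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)      # spread of |psi(x)|^2
sigma_p = np.sqrt(np.sum(p**2 * np.abs(psi_p)**2) * dp)    # spread of |psi~(p)|^2
print(sigma_x, sigma_p, sigma_x * sigma_p)       # ~0.5, ~1.0, ~0.5 = hbar/2
```

The two distributions are Gaussians of reciprocal width, so they cannot coincide except by accident.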
 
  • #73
atyy said:
Using the position at the screen to measure the momentum gives the momentum at the slit.
It doesn't determine the momentum in the orthodox sense. It only determines momentum if one assumes a semi-classical picture in which the particle is a point-like object with straight (not Bohmian) trajectory. As Einstein would say, it is theory that determines what is measurable.
 
  • #74
Demystifier said:
It doesn't determine the momentum in the orthodox sense. It only determines momentum if one assumes a semi-classical picture in which the particle is a point-like object with straight (not Bohmian) trajectory. As Einstein would say, it is theory that determines what is measurable.

I think it does determine momentum in the orthodox sense, without assuming a semi-classical picture, if the position measurement is taken at infinity. Roughly, the momentum distribution is given by the Fourier transform of the wave function, and in the far-field limit unitary Schroedinger evolution causes the position-space wave function to become (a rescaled version of) that Fourier transform, so measuring position in the far field amounts to measuring momentum in the orthodox sense. And then in the far-field limit, the "quick-and-dirty" derivation assuming classical paths can be rigorously justified without assuming any particle trajectories. Or at least that's what I remember, but I cannot find a derivation by a quick search at the moment.

This isn't for wave functions, but I think the maths should work out similarly.
https://en.wikipedia.org/wiki/Fraunhofer_diffraction_equation


I found a reference, Atom Optics by Adams, Siegel and Mlynek, which says: "The atomic momentum distribution, or in the Fraunhofer diffraction limit the far-field real space distribution (equation (35) in Section 2.4)".
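
Here is a rough numerical check of the far-field claim (my own sketch, using the standard analytic spreading of a free Gaussian packet; the parameters are illustrative):

```python
import numpy as np

# Sketch: under free Schroedinger evolution, the late-time position
# distribution approaches the initial momentum distribution rescaled by
# p = m*x/t (with Jacobian m/t).  Checked here for a Gaussian packet.
hbar = m = 1.0
s = 0.5                                 # initial position spread (assumed)
t = 200.0                               # "far field": t >> 2*m*s**2/hbar

x = np.linspace(-4000.0, 4000.0, 2**14)
# Exact width of a freely spreading Gaussian packet at time t:
sigma_t = s * np.sqrt(1 + (hbar * t / (2 * m * s**2))**2)
pos_dist = np.exp(-x**2 / (2 * sigma_t**2)) / np.sqrt(2 * np.pi * sigma_t**2)

# Initial momentum distribution (Gaussian, sigma_p = hbar/(2s)), evaluated
# at p = m*x/t and rescaled by the Jacobian m/t:
sigma_p = hbar / (2 * s)
mom_dist = (m / t) * np.exp(-(m * x / t)**2 / (2 * sigma_p**2)) \
           / np.sqrt(2 * np.pi * sigma_p**2)

print(np.max(np.abs(pos_dist - mom_dist)))   # tiny: the two nearly coincide
```

So in this (admittedly special) Gaussian case, measuring position in the far field does reproduce the momentum distribution at the slit.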
 
  • #75
Thinking about this a bit more (and rereading the thread), I am not sure what exactly the problem was! In fact it seems to me that the ensemble interpretation as described by Ballentine is fine.
 
  • #76
atyy said:
I'm not sure off the top of my head, but it corresponds to a classical uncertainty being a unique mix of "pure states" (the complete state that can be assigned to a single system), whereas quantum density matrices don't have a unique decomposition into pure states (i.e. a preferred basis must be picked out by measurement or decoherence or whatever).

Holevo has some discussion at the start of his book (I don't have it with me at the moment).
The point is the uniqueness. For each point inside a simplex, there is a decomposition ##\sum_i p_i n^i##, where the ##n^i## are the vertices of the simplex. In 3D a simplex has at most four vertices. If you start with more vertices in 3D, the convex hull is in general no longer a simplex, and the decomposition is no longer unique.

In classical mechanics, this would be the decomposition of a state into point measures, ##\rho = \int \rho(p,q)\,\delta_{(p,q)}\,dp\,dq##. It is unique.

In quantum theory you can have such decompositions into pure states, but they are not unique; there can be several. Each particular complete measurement of some ##\hat{a}## defines such a decomposition of an arbitrary state, ##\hat{\rho} = \sum_a p_a |\psi_a\rangle\langle\psi_a|##; for that operator it is unique. But measure different incompatible operators and you get different decompositions.
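
A two-line illustration of the non-uniqueness (my own sketch): the maximally mixed qubit state decomposes equally well over the z-basis and over the x-basis.

```python
import numpy as np

# The same density matrix I/2 arises from two different pure-state ensembles:
# an equal mixture of |0>, |1> and an equal mixture of |+>, |->.
up    = np.array([1, 0], dtype=complex)
down  = np.array([0, 1], dtype=complex)
plus  = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

proj = lambda v: np.outer(v, v.conj())           # |v><v|
rho_z = 0.5 * proj(up)   + 0.5 * proj(down)
rho_x = 0.5 * proj(plus) + 0.5 * proj(minus)

print(np.allclose(rho_z, rho_x))                 # True: one rho, two ensembles
```

Nothing analogous happens for a point inside a simplex: its barycentric coordinates are unique.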
 
  • #77
vanhees71 said:
In this way the states refer on the one hand to single objects (a preparation procedure for single objects). On the other hand they don't have much of a meaning for the single object and measurements [...]

It is misleading to speak (and think) of quantum "objects" and their properties. Quantum theory is about the statistical correlations between preparation and measurement events. Never mind the anthropocentric terms "preparation" and "measurement"; the nuclear reactions in the interior of the sun proceed without any observers "preparing" and "measuring" them.
 
  • #78
Well, to talk about preparation and measurement you must give a meaning to preparing, and measuring observables on, the single objects making up the ensemble. Otherwise, of course, I agree. Indeed, the whole meaning of the quantum state is the statistical properties of an ensemble defined by an equivalence class of preparation procedures, and these can only be empirically tested on an ensemble of such equally prepared "quantum objects".
 
  • #79
vanhees71 said:
Well, to talk about preparation and measurement you must give a meaning to preparing, and measuring observables on, the single objects making up the ensemble.

It's a burden that "measurement" still occupies a central position in QT. It involves a classical apparatus and is guaranteed, by postulate, to produce a definite result. But an apparatus is composed of atoms, and there must be a microscopic picture of what happens when, e.g., the polarization of a photon is measured. It would clearly be desirable to keep the microscopic picture in view.

You think of ensembles of objects, and here we disagree. I prefer to think of ensembles of microscopic processes, or events, if you like. Measurement of the polarization of a photon boils down to the absorption of a photon. An absorption coefficient can be expressed in terms of a Fourier integral of the current density fluctuations (Kubo formula). It is a real number that directly represents the expected number of microscopic processes in a given patch of space-time. Polarization of a photon means a statistical correlation of the field components a quarter wavelength apart, and the detection probability is proportional to a similar correlation of the microscopic currents. QT allows us to compute these correlations.

My interpretation is a blend of the statistical and transactional interpretations, but with minimal ontology: neither particles nor waves, only events. The "confirmation waves" of the transactional interpretation are not physical, but merely part of the mathematical (Keldysh-) formalism to predict the probabilities of events.
 
  • #80
What are "events" if not "clicks of a detector"? Measurements always mean the interaction of the measured object with a macroscopic measurement device which enables an irreversibly stored measurement result. That such devices exist is (a) empirically clear, because quantum physicists in all labs successfully measure quantum objects (single photons, elementary particles, etc.) and (b) also follows from quantum statistics, according to which the macroscopic ("coarse grained") observables indeed follow classical laws. It's also not true that we only measure quantum-mechanical expectation values. E.g., in the usual Stern-Gerlach experiment with unpolarized Ag atoms we don't measure 0 spin components but ##\pm \hbar/2## components for each individual silver atom. Of course, on the average over a large example we get 0.

I'm not familiar with the transactional interpretation, so I cannot comment on it.
 
  • #81
vanhees71 said:
What are "events" if not "clicks of a detector"?
...
It's also not true that we only measure quantum-mechanical expectation values.

We are speaking different languages, apparently. :-)

I was speaking of events as points in space-time. Of course, these points combine to form larger patterns, like the click of a Geiger counter. Then there are huge numbers of atoms involved, and the composite event has only approximate coordinates in space and time.

The Stern-Gerlach apparatus separates different angular momentum states, and of course the formalism can predict the probabilities of each one. Evaluate the expectation value of a projection operator, if you wish. I don't perceive a limitation here.
 
  • #82
Formally speaking, an event is a spectral projection of an observable's operator, which can be e.g. logically equivalent to a detector click.
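
For instance, here is a minimal sketch (my own, with a hypothetical state) of that formal statement: the probability of the event is the Born-rule trace of the state with the spectral projection.

```python
import numpy as np

# "Event" = spectral projection P of an observable; its probability in
# state rho is tr(rho P).  Here: spin-z "up" for a state polarized along x.
up   = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

P_up = np.outer(up, up.conj())        # projection onto the S_z = +hbar/2 eigenspace
rho  = np.outer(plus, plus.conj())    # assumed prepared state |+><+|

print(np.trace(rho @ P_up).real)      # -> 0.5
```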
 
  • #83
vanhees71 said:
What are "events" if not "clicks of a detector"? Measurements always mean the interaction of the measured object with a macroscopic measurement device which enables an irreversibly stored measurement result. That such devices exist is (a) empirically clear, because quantum physicists in all labs successfully measure quantum objects (single photons, elementary particles, etc.) and (b) also follows from quantum statistics, according to which the macroscopic ("coarse grained") observables indeed follow classical laws.

As usual, (b) is not correct. As Landau and Lifshitz (Vol III, p3) say of quantum mechanics: it contains classical mechanics as a limiting case, yet at the same time it requires this limiting case for its own formulation.

vanhees71 said:
It's also not true that we only measure quantum-mechanical expectation values. E.g., in the usual Stern-Gerlach experiment with unpolarized Ag atoms we don't measure 0 spin components but ##\pm \hbar/2## components for each individual silver atom. Of course, on average over a large ensemble we get 0.

In typical cases the probability distribution can be recovered from the cumulants, which are expectation values. So formally the probability distribution and the expectation values provide the same information.
 
  • #84
Landau and Lifshitz is some decades old, and there has been much progress since in understanding the classical behavior of macroscopic systems from the point of view of quantum theory. Nevertheless, note that Landau and Lifshitz vol. X is the only general textbook containing a derivation of the classical transport equation from the Kadanoff-Baym equation (though it's not named so, as is understandable, because it's a Russian textbook ;-)).

Sure, if you know all cumulants you know the complete probability distribution, but that's far more than just the expectation values. You need an ensemble to measure all (relevant) cumulants to reconstruct the probability distribution, i.e., you have to measure on single systems of an ensemble with sufficient accuracy. The more cumulants you want to resolve, the better the resolution of the detector must be and the larger (and more accurately prepared) the ensemble.
 
  • #85
vanhees71 said:
Landau and Lifshitz is some decades old, and there has been much progress since in understanding the classical behavior of macroscopic systems from the point of view of quantum theory. Nevertheless, note that Landau and Lifshitz vol. X is the only general textbook containing a derivation of the classical transport equation from the Kadanoff-Baym equation (though it's not named so, as is understandable, because it's a Russian textbook ;-)).

Although there has been progress, it doesn't solve the conceptual problems that Landau and Lifshitz were thinking of. You can still see this even in papers by Peres, where a classical apparatus is needed.
https://arxiv.org/abs/quant-ph/9906023
"Is it possible to maintain a strict quantum formalism and treat the intervening apparatus as a quantum mechanical system, without ever converting it to a classical description? We could then imagine not only sets of apparatuses spread throughout spacetime, but also truly delocalized apparatuses [26], akin to Schrodinger cats [27, 28], so that interventions would not be localized in spacetime as required in the present formalism. However, such a process would only be the creation of a correlation between two nonlocal quantum systems. This would not be a true measurement but rather a “premeasurement” [18]. A valid measuring apparatus must admit a classical description equivalent to its quantum description [22] and in particular it must have a positive Wigner function."

vanhees71 said:
Sure, if you know all cumulants you know the complete probability distribution, but that's far more than just the expectation values. You need an ensemble to measure all (relevant) cumulants to reconstruct the probability distribution, i.e., you have to measure on single systems of an ensemble with sufficient accuracy. The more cumulants you want to resolve, the better the resolution of the detector must be and the larger (and more accurately prepared) the ensemble.

I mean that each cumulant itself can be considered an expectation. So the mean is ##E(x)##, and the variance is ##E((x-E(x))^2)##. So the statement that quantum theory only predicts expectation values is formally the same as the statement that quantum theory only predicts probabilities of measurement outcomes (although in practice no one would measure all the cumulants in order to measure the probability distribution).
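
A minimal worked example (my own, with assumed numbers): for a two-outcome observable with values ##\pm\hbar/2##, the first moment alone already fixes the whole distribution.

```python
import numpy as np

# Cumulants/moments are expectation values, and here they determine the
# distribution: with outcomes +-hbar/2, p(+) = 1/2 + E(x)/hbar.
hbar = 1.0
p_plus = 0.7                                   # hypothetical preparation
values = np.array([+hbar / 2, -hbar / 2])
probs  = np.array([p_plus, 1 - p_plus])

mean = np.sum(values * probs)                  # E(x)
var  = np.sum((values - mean)**2 * probs)      # E((x - E(x))^2)

p_plus_recovered = 0.5 + mean / hbar           # invert the first moment
print(mean, var, p_plus_recovered)             # 0.2, 0.21, 0.7
```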
 
  • #86
Of course a "classical apparatus" is needed, but this doesn't imply that it cannot be described by quantum statistics. Quantum statistics leads to "classical behavior" without ad-hoc assumptions about a quantum-classical cut or a collapse, and it explains why the apparatus shows definite measurement results when an observable is measured on a single quantum system, which in turn allows the predicted probabilities for the outcomes of these measurements to be verified on an ensemble.

Again: you need to measure all cumulants with sufficient accuracy and resolution to reconstruct the probability distribution, not only the expectation value. Of course the cumulants are expectation values too, but you need to measure all of them, or at least some relevant subset, to get a sufficiently accurate reconstruction of the probability distributions.
 
