  • #1
A. Neumaier
Science Advisor
Insights Author
This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum. Since everything follows from the well-established techniques of quantum tomography (the art and science of determining the state of a quantum system from measurements) the new approach may have the potential to lead in time to a consensus on the foundations of quantum mechanics. Full details can be found in my paper

A. Neumaier, Quantum mechanics via quantum tomography, Manuscript (2022). arXiv:2110.05294v5

This paper gives for the first time a formally precise definition of quantum measurement that

is applicable without idealization to complex, realistic experiments;
allows one to derive the standard quantum...

Continue reading...
 
  • Like
Likes bhobba, Jarvis323, mattt and 4 others
  • #2
Great!

That confirms my (still superficial) understanding that now I'm allowed to interpret ##\hat{\rho}## and the trace operation as expectation values in the usual statistical sense, and that makes the new approach much more understandable than what you called before "thermal interpretation". I also think that the entire conception is not much different from the minimal statistical interpretation. The only change to the "traditional" concept seems to be that you use the more general concept of POVMs rather than the von Neumann filter measurements, which are only a special case.

The only objection I have is the statement concerning EPR. It cannot be right, because local realistic theories are not consistent with quantum-theoretical probability theory. This is proven by the violations of Bell's inequalities (and of related properties of quantum-mechanically evaluated correlation functions) predicted by quantum mechanics, and by the experimental confirmation of precisely these violations.

The upshot is: As quantum theory predicts, the outcomes of all possible measurements on a system, prepared in any state ##\hat{\rho}## (I take it that it is allowed also in your new conception to refer to ##\hat{\rho}## as the description of equivalence classes of preparation procedures, i.e., to interpret the word "quantum source" in the standard way) are not due to predetermined values of the measured observables. All the quantum state implies are the probabilities for the outcome of measurements. The values of observables are thus only determined by the preparation procedure if they take a certain value with 100% probability. I think within your conceptual framework, "observable" takes a more general meaning as the outcome of some measurement device ("pointer reading") definable in the most general sense as a POVM.
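To spell this out in formulas (standard textbook notation, not a quote from the paper): for a discrete POVM with effects ##P_k##, the state determines only the outcome statistics,

$$p_k = \operatorname{Tr}(\hat{\rho}\, P_k), \qquad \langle A \rangle = \operatorname{Tr}(\hat{\rho}\, \hat{A}),$$

and an individual outcome is fixed by the preparation only in the special case where some ##p_k = 1##.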
 
  • #3
What's the main new idea here? From this summary, which is written nicely and clearly, I have a feeling that I knew all this before. Am I missing something?
 
  • Like
Likes bhobba and vanhees71
  • #4
Indeed, I think it's just a reformulation of the minimal statistical interpretation, taking into account the more modern approach to represent observables by POVMs rather than the standard formulation with self-adjoint operators (referring to von Neumann filter measurements, which are a special case of POVMs).
 
  • Like
Likes bhobba and Demystifier
  • #5
vanhees71 said:
... the more modern approach to represent observables by POVMs rather than the standard formulation with self-adjoint operators ...

As a layman in QM I looked up POVM and found a function ##\mu : \mathcal{A} \longrightarrow \mathcal{B}(\mathcal{H})## with ##0 \leq \mu(A) \leq \operatorname{id}_{\mathcal{H}}##, with self-adjoint operators as values. It seems to be the same difference as a distribution function is to a probability measure, i.e. more of a different wording than a different approach.

Am I missing something?
 
  • #6
fresh_42 said:
Am I missing something?
Physics? :wink:

More seriously, I don't know what the equation you wrote means, so I cannot say what you are missing.
 
  • #7
Demystifier said:
Physics? :wink:

More seriously, I don't know what the equation you wrote means, so I cannot say what you are missing.
A function from a measure space to the space of bounded operators on a Hilbert space.
 
  • #8
Very nice description of how QM marries up with CM. In particular, its operational approach greatly clarifies the Born rule in terms of empiricism, which is the way I view physics as a physicist. I agree that the standard introduction contains otherwise “mysterious” mathematical abstractions. How does this resolve the mystery of entanglement?
 
  • #9
Demystifier said:
What's the main new idea here?
New compared to what?
Demystifier said:
From this summary, which is written nicely and clearly, I have a feeling that I knew all this before.
For example, from where did you already know what I said in the very first sentence about quantum phase space coordinates?
Insight summary (first sentence) said:
This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum.
Demystifier said:
Am I missing something?
What I consider new for the general reader was specified at the beginning:
Insight summary said:
This Insight article [...] gives for the first time a formally precise definition of quantum measurement that
  • is applicable without idealization to complex, realistic experiments;
  • allows one to derive the standard quantum mechanical machinery from a single, well-motivated postulate;
  • leads to objective (i.e., observer-independent, operational, and reproducible) quantum state assignments to all sufficiently stationary quantum systems.
The paper shows that the amount of objectivity in quantum physics is no less than that in classical physics.
If you know how to do all this consistently you miss nothing. Otherwise you should read the full paper, where everything is argued in full detail, so that it can be easily integrated into a first course on quantum mechanics.
 
  • #10
RUTA said:
Very nice description of how QM marries up with CM. How does this resolve the mystery of entanglement?
The concept is nowhere needed in this approach to quantum mechanics, hence there is no mystery about it at this level.

Entangled states are just very special cases of density operators expressed in a very specific basis. They become a matter of curiosity only if one looks for extremal situations that can be prepared only for systems in which a very small number of degrees of freedom are treated quantum mechanically.
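For concreteness (a standard example, not one specific to the paper): the two-qubit Bell state ##|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)## corresponds to the density operator

$$\rho = |\Phi^+\rangle\langle\Phi^+| = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$$

in the computational basis - just one particular rank-one density operator among all others.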
 
  • #11
fresh_42 said:
As a layman in QM I looked up POVM and found a function ##\mu : \mathcal{A} \longrightarrow \mathcal{B}(\mathcal{H})## with ##0 \leq \mu(A) \leq \operatorname{id}_{\mathcal{H}}##, with self-adjoint operators as values. It seems to be the same difference as a distribution function is to a probability measure, i.e. more of a different wording than a different approach.
In the Insight article and the accompanying paper I only use the notion of a discrete quantum measure, defined as a finite family of Hermitian, positive semidefinite operators that sum to the identity.
This is the quantum version of a discrete probability distribution, a finite family of probabilities summing to one. Thus on the level of foundations there is no need for the POVM concept.

The concept of POVMs is unnecessarily abstract, but there are simple POVMs equivalent to discrete quantum measures; see Section 4.1 of my paper.
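A minimal example (my illustration here, not taken verbatim from the paper): on a qubit, for any ##0 < \lambda \le 1## the two operators

$$P_\pm = \tfrac{1}{2}(1 \pm \lambda\,\sigma_z)$$

are Hermitian, positive semidefinite, and sum to the identity, hence form a discrete quantum measure. ##\lambda = 1## gives the sharp (projective) ##\sigma_z## measurement; ##\lambda < 1## models an unsharp detector.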
 
  • Like
  • Informative
Likes fresh_42, gentzen and vanhees71
  • #13
A. Neumaier said:
The concept is nowhere needed in this approach to quantum mechanics, hence there is no mystery about it at this level.

Entangled states are just very special cases of density operators expressed in a very specific basis. They become a matter of curiosity only if one looks for extremal situations that can be prepared only for systems in which a very small number of states are treated quantum mechanically.
Yes, entangled states produce CM results on average, but that statement simply ignores their violation of the Bell inequality, which can also be couched as a statistical, empirical fact. Indeed, the mystery of entanglement can also be shown empirically in very small (non-statistical) samples of individual measurements. This approach is therefore worthless for resolving that mystery. It does however marry up beautifully with the reconstruction of QM via information-theoretic principles, which does resolve the mystery of the qubit and therefore entanglement.
 
  • #14
vanhees71 said:
But aren't these also special cases of POVMs as described in the Wikipedia

https://en.wikipedia.org/wiki/POVM
Yes, Wikipedia describes them (at the very top of the section headed 'Definition') as the simplest POVMs. But the general concept (as defined in the Definition inside this section of Wikipedia) is an abstract monster far too complicated for most physics students.
 
  • Like
Likes gentzen
  • #15
RUTA said:
Yes, entangled states produce CM results on average, but that statement simply ignores their violation of the Bell inequality, which can also be couched as a statistical, empirical fact. Indeed, the mystery of entanglement can also be shown empirically in very small (non-statistical) samples of individual measurements. This approach is therefore worthless for resolving that mystery.
Most things are worthless if you apply inadequate criteria for measuring their worth. The most expensive car is worthless if you want to climb a tree.

I didn't set out to resolve what you regard here as a mystery. It is not needed for the foundations but is a consequence of the general formalism once it has been derived.
RUTA said:
It does however marry up beautifully with the reconstruction of QM via information-theoretic principles, which does resolve the mystery of the qubit and therefore entanglement.
I don't see the qubit presenting a mystery. Everything about it was known in 1852, long before quantum mechanics got off the ground.
 
  • Like
Likes vanhees71
  • #16
vanhees71 said:
Indeed, I think it's just a reformulation of the minimal statistical interpretation, taking into account the more modern approach to represent observables by POVMs rather than the standard formulation with self-adjoint operators (referring to von Neumann filter measurements, which are a special case of POVMs).
It is a minimal well-motivated new foundation for quantum physics including its statistical interpretation, based on a new postulate from which POVMs and everything else can be derived. And it has consequences far beyond the statistical interpretation, see the key points mentioned in post #9.
fresh_42 said:
It seems to be the same difference as a distribution function is to a probability measure, i.e. more of a different wording than a different approach.

Am I missing something?
The point is that there are two quantum generalizations of probability: the old (von Neumann) one based on PVMs (in the discrete case, orthogonal projectors summing to 1) and the more recent (1970+), far more generally applicable one based on POVMs. See the Wikipedia article mentioned in post #14.
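To illustrate the extra generality (a standard example, not one specific to my paper): on a qubit, the three "trine" effects

$$P_k = \tfrac{2}{3}\,|\psi_k\rangle\langle\psi_k|, \qquad |\psi_k\rangle = \cos\tfrac{2\pi k}{3}|0\rangle + \sin\tfrac{2\pi k}{3}|1\rangle, \quad k = 0,1,2,$$

sum to the identity but are not orthogonal projectors - a three-outcome quantum measure on a two-dimensional Hilbert space, which no PVM can realize.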
 
  • Like
Likes gentzen and vanhees71
  • #17
A. Neumaier said:
Yes, Wikipedia describes them (at the very top of the section headed 'Definition') as the simplest POVMs. But the general concept (as defined in the Definition inside this section of Wikipedia) is an abstract monster far too complicated for most physics students.
The German version is quite short, but it doesn't seem to be too complicated.
 
  • #18
vanhees71 said:
That confirms my (still superficial) understanding that now I'm allowed to interpret ##\hat{\rho}## and the trace operation as expectation values in the usual statistical sense,
There are two senses: One as a formal mathematical construct, giving quantum expectations, and
the other in a theorem stating that when you do actual measurements, the limit of the sample means agrees with these theoretical quantum expectations.
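Schematically (my shorthand here, not the paper's notation): the formal construct is ##\langle A \rangle := \operatorname{Tr}(\hat{\rho} A)##, while the theorem asserts that for actual measurement results ##a_1, \dots, a_N## of ##A##,

$$\frac{1}{N}\sum_{i=1}^{N} a_i \;\longrightarrow\; \operatorname{Tr}(\hat{\rho} A) \qquad (N \to \infty).$$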
vanhees71 said:
and that makes the new approach much more understandable than what you called before "thermal interpretation".
I derive the thermal interpretation from this new approach. See Section 7.3 of my paper, and consider the paper to be a much more understandable pathway to the thermal interpretation, where in my book I still had to postulate many things without being able to derive them.
vanhees71 said:
I also think that the entire conception is not much different from the minimal statistical interpretation. The only change to the "traditional" concept seems to be that you use the more general concept of POVMs rather than the von Neumann filter measurements, which are only a special case.
The beginnings are not much different, but they are already simpler than the minimal statistical interpretation - which needs nontrivial concepts from spectral theory and a very nonintuitive assertion called Born's rule.
vanhees71 said:
The only objection I have is the statement concerning EPR. It cannot be right, because local realistic theories are not consistent with quantum-theoretical probability theory. This is proven by the violations of Bell's inequalities (and of related properties of quantum-mechanically evaluated correlation functions) predicted by quantum mechanics, and by the experimental confirmation of precisely these violations.
Please look at my actual claims in the paper rather than judging from the summary in the Insight article! EPR is discussed in Section 5.4. There I claim elements of reality for quantum expectations of field operators, not for Bell-local realistic theories! Thus Bell inequalities are irrelevant.
vanhees71 said:
I take it that it is allowed also in your new conception to refer to ##\hat{\rho}## as the description of equivalence classes of preparation procedures, i.e., to interpret the word "quantum source" in the standard way
No. A (clearly purely mathematical) construction of equivalence classes is not involved at all!

A quantum source is a piece of equipment emanating a beam - a particular laser, or a fixed piece of radioactive material behind a filter with a hole, etc. Each quantum source has a time-dependent state ##\rho(t)##, which in the stationary case is independent of time ##t##.
vanhees71 said:
All the quantum state implies are the probabilities for the outcome of measurements.
The quantum state implies known values of all quantum expectations (N-point functions). This includes smeared field expectation values that are (for systems in local equilibrium) directly measurable without any statistics involved. It also includes probabilities for statistical measurements.
vanhees71 said:
I think within your conceptual framework, "observable" takes a more general meaning as the outcome of some measurement device ("pointer reading") definable in the most general sense as a POVM.
It takes a meaning independent of POVMs.

  • In classical mechanics, observables are the classical phase space variables ##p,q## and everything computable from them, in particular the kinetic and potential energy, forces, etc.
  • In quantum mechanics, observables are the quantum phase space variables ##\rho## (or its matrix elements) and everything computable from them, in particular the N-point functions of quantum field theory. For example, 2-point functions are often measurable through linear response theory.
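For example (the standard Kubo formula, stated here only as an illustration): to first order in a perturbation ##H(t) = H_0 - f(t)B##, the response of ##\langle A \rangle## is

$$\delta\langle A(t)\rangle = \int_{-\infty}^{t} dt'\, \chi_{AB}(t-t')\, f(t'), \qquad \chi_{AB}(t) = \frac{i}{\hbar}\,\theta(t)\,\langle [A(t),B] \rangle,$$

so a measurable response function is determined by a 2-point function of the unperturbed state.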
 
  • Like
Likes mattt
  • #19
fresh_42 said:
The German version is quite short, but it doesn't seem to be too complicated.
Not for a mathematician, who is familiar with measure theory and has mastered the subtleties of countable additivity...

But to a physics student one needs to explain (and motivate in a physics context) the notion of a measure space, which is a lot of physically irrelevant overhead!
The German version of Wikipedia then simplifies to the case of a discrete quantum measure, which is already everything needed to discuss measurement!
 
  • #20
A. Neumaier said:
No. A (clearly purely mathematical) construction of equivalence classes is not involved at all!

A quantum source is a piece of equipment emanating a beam - a particular laser, or a fixed piece of radioactive material behind a filter with a hole, etc. Each quantum source has a time-dependent state ##\rho(t)##, which in the stationary case is independent of time ##t##.
The point is the interpretation. In the latter formulation, that's precisely what I mean when I say that ##\hat{\rho}## is an "equivalence class of preparation procedures". It's an equivalence class, because very different equipment can result in the same "emanating beam".
A. Neumaier said:
The quantum state implies known values of all quantum expectations (N-point functions). This includes smeared field expectation values that are (for systems in local equilibrium) directly measurable without any statistics involved. It also includes probabilities for statistical measurements.
This I don't understand: A single measurement leads to some random result, but not the expectation value of these random results.
A. Neumaier said:
It takes a meaning independent of POVMs.

  • In classical mechanics, observables are the classical phase space variables ##p,q## and everything computable from them, in particular the kinetic and potential energy, forces, etc.
  • In quantum mechanics, observables are the quantum phase space variables ##\rho## (or its matrix elements) and everything computable from them, in particular the N-point functions of quantum field theory. For example, 2-point functions are often measurable through linear response theory.
Now I'm completely lost again. In the usual formalism the statistical operator refers to the quantum state and not to an observable. To determine a quantum state you need more than one measurement (of a complete set of compatible observables). See Ballentine's chapter (Sect. 8.2) on "state determination".
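In the simplest case of a qubit (a standard illustration, not necessarily Ballentine's own example): writing ##\hat{\rho} = \tfrac{1}{2}(1 + \vec{r}\cdot\vec{\sigma})## with Bloch vector ##\vec{r}##, one needs the three expectation values

$$r_k = \operatorname{Tr}(\hat{\rho}\,\sigma_k), \qquad k = x,y,z,$$

i.e., the statistics of three mutually incompatible spin measurements, to pin down the state.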
 
  • #21
vanhees71 said:
The point is the interpretation. In the latter formulation, that's precisely what I mean when I say that ##\hat{\rho}## is an "equivalence class of preparation procedures". It's an equivalence class, because very different equipment can result in the same "emanating beam".
It results in different emanating beams, though their properties are the same.

It's an equivalence class only in the same irrelevant sense as in the claim that "momentum is an equivalence class of preparations of particles in a classical source". Very different equipment can result in particles with the same momentum.

Using mathematical terminology to make such a simple thing complicated is quite unnecessary.
vanhees71 said:
This I don't understand: A single measurement leads to some random result, but not the expectation value of these random results.
A single measurement of a system in local equilibrium leads to a fairly well-determined value for a current, say, and not to a random result.
vanhees71 said:
Now I'm completely lost again.
Because my new approach goes beyond your minimal interpretation. You should perhaps first read the paper rather than base a discussion on just reading the summary exposition. There is a reason why I spent a lot of time giving detailed, physical arguments in the paper!
 
  • #22
vanhees71 said:
To determine a quantum state you need more than one measurement
Yes, that's what quantum tomography is about.

To accurately determine a momentum vector one also needs more than one measurement.

Thus I don't see why your comment affects any of my claims.
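As a toy numerical illustration of this point (my own sketch, not code from the paper; all names are made up): reconstructing a qubit state from simulated measurement statistics along the three Pauli axes.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

# an "unknown" state rho with Bloch vector of length 0.8
r_true = 0.8 * np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
rho_true = 0.5 * (np.eye(2) + sum(r * s for r, s in zip(r_true, paulis)))

# simulate N projective measurements along each Pauli axis
N = 100_000
r_est = []
for s in paulis:
    p_plus = 0.5 * (1 + np.trace(rho_true @ s).real)   # Born rule
    outcomes = rng.choice([1, -1], size=N, p=[p_plus, 1 - p_plus])
    r_est.append(outcomes.mean())   # sample mean estimates Tr(rho sigma_k)

rho_est = 0.5 * (np.eye(2) + sum(r * s for r, s in zip(r_est, paulis)))
print("true Bloch vector:     ", np.round(r_true, 3))
print("estimated Bloch vector:", np.round(r_est, 3))

Three different measurement settings, each repeated many times, are needed; no single setting determines the state.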
 
  • #23
But I also have a deeper objection: the Everett interpretation takes quantum theory in its present form as the currency, in terms of which everything has to be explained or understood, leaving the act of observation as a mere secondary phenomenon. In my view we need to find a different outlook in which the primary concept is to make meaning out of observation and, from that, derive the formalism of quantum theory.

So you think that the many-universes approach may still be useful?

Yes, I think one has to work both sides of the railroad track.

But in the meantime you're siding with Bohr.

Yes. As regards the really fundamental foundations of knowledge, I cannot believe that nature has 'built in', as if by a corps of Swiss watchmakers, any machinery, equation or mathematical formalism which rigidly relates physical events separated in time. Rather I believe that these events go together in a higgledy-piggledy fashion and that what seem to be precise equations emerge in every case in a statistical way from the physics of large numbers; quantum theory in particular seems to work like that.

But do you think that quantum theory could be just an approximate theory and that there could be a better theory?

First, let me say quantum theory in an every-day context is unshakeable, unchallengeable, undefeatable - it's battle tested. In that sense it's like the second law of thermodynamics which tells us that heat flows from hot to cold. This too is battle tested - unshakeable, unchallengeable, invincible. Yet we know that the second law of thermodynamics does not go back to any equations written down at the beginning of time, not to any 'built in' machinery - not to any corps of Swiss watchmakers - but rather to the combination of a very large number of events. It's in this sense that I feel that quantum theory likewise will some day be shown to depend on the mathematics of very large numbers. Even Einstein, who opposed quantum theory in so many ways, expressed the point of view that quantum theory would turn out to be like thermodynamics.
 
  • #24
gentzen said:
But I also have a deeper objection: the Everett interpretation takes quantum theory in its present form as the currency, in terms of which everything has to be explained or understood
Your deeper objection seems to have no substance that would allow one to make progress.

Whatever is taken as the currency in terms of which everything has to be explained or understood, it might be something effective due to an even deeper currency. We simply must start somewhere, and your deeper objection will always apply.

But according to current knowledge, quantum theory is a sufficient currency. Unlike in earlier ages, quantum theory explains the properties of physical reality (whatever it is, but in physics it certainly includes measurement devices!)

There are no experimental phenomena not accounted for by quantum physics, which can be taken to be the physics of the standard model plus modifications due to neutrino masses and semiclassical gravity, plus some version of Born's rule, plus all approximation schemes used to derive the remainder of physics. Thus everything beyond that is just speculation without experimental support.
 
  • Like
Likes bhobba and vanhees71
  • #25
A. Neumaier said:
There are no experimental phenomena not accounted for by quantum physics,
Except maybe the mind-boggling need to reconcile gravity with quantum mechanics, both of which have solid experimental verification.
 
  • #26
A. Neumaier said:
Your deeper objection seems to have no substance that would allow one to make progress.
Those were neither my deeper objections, nor my words. I should have used "Quote" to make it clear that those are not my own words. (I try to always quote the passage I respond to, or otherwise make it clear how my posts relate to the thread.) It is an excerpt from "The Ghost in the Atom" (1986). The questions are from Paul Davies. Maybe I should have said that, and also quoted the previous question for more context:
But when Everett produced his many-universes interpretation for quantum theory you changed your mind for a while. Why was that?
...
What attracted you to this remarkable idea?
... But I also have a deeper objection: ...
I intentionally didn't reveal who actually said this. This is also why I avoided naming the book, because that already reveals the information that it was one of the following 8 scientists: Alain Aspect, John Bell, John Wheeler, Rudolf Peierls, David Deutsch, John Gerald Taylor, David Bohm, or Basil Hiley. The answers in that book are often "unexpected" from today's perspective. Naming the scientist would make it easy to dismiss his opinion; therefore I avoided it. For example, another one of the interviewees said:
Perhaps they are just vocal.

They are vocal. In fact, I was asked the other day why it is that so few people are willing to stand up and defend Bohr's view, and I didn't have an answer on the spot. But the answer is, of course, that if somebody published a paper arguing that two and two makes five, there wouldn't be many mathematicians writing papers to defend the conventional view!
 
  • #28
gentzen said:
The questions are from Paul Davies.
and the other statements, including the first sentence?

gentzen said:
Those were neither my deeper objections, nor my words.
If you write something without giving credit, everyone assumes it is your statement!
 
  • Like
Likes vanhees71 and Lord Jestocost
  • #30
Ravi Mohan said:
So we give up even trying to know the true meaning of Renormalization?
Renormalization does not go beyond the limits of quantum theory.

From a physics point of view, everything about renormalization is understood. The missing logical coherence (due to the lack of a rigorous nonperturbative version of renormalization) is a matter for the mathematicians to resolve.
 
  • Like
Likes dextercioby and vanhees71
  • #31
Well, be my guest in explaining it to Natty (Nathan Seiberg)!
 
  • #32
Ravi Mohan said:
Well, be my guest in explaining it to Natty (Nathan Seiberg)!
You can send him the link!
 
  • #33
A. Neumaier said:
I don't see the qubit presenting a mystery. Everything about it was known in 1852, long before quantum mechanics got off the ground.
To understand the mystery of the qubit, consider a measurement of some state that results in outcome O1 every time. Then suppose you rotate your measurement of that same state and obtain outcome O2 every time. According to some classical model, we would then expect that a measurement between those two should produce an outcome between O1 and O2. But instead, we get a distribution of O1 and O2 outcomes that averages to whatever we expected from our classical model. Here is how Koberinski & Mueller put it (as quoted in our paper https://www.mdpi.com/1099-4300/24/1/12):

We suggest that (continuous) reversibility may be the postulate which comes closest to being a candidate for a glimpse on the genuinely physical kernel of "quantum reality". Even though Fuchs may want to set a higher threshold for a "glimpse of quantum reality", this postulate is quite surprising from the point of view of classical physics: when we have a discrete system that can be in a finite number of perfectly distinguishable alternatives, then one would classically expect that reversible evolution must be discrete too. For example, a single bit can only ever be flipped, which is a discrete indivisible operation. Not so in quantum theory: the state ##|0\rangle## of a qubit can be continuously-reversibly "moved over" to the state ##|1\rangle##. For people without knowledge of quantum theory (but of classical information theory), this may appear as surprising or "paradoxical" as Einstein's light postulate sounds to people without knowledge of relativity.

So, your approach captures this averaging nicely and therefore will show how quantum results average to classical expectations for whatever experiment. But, it says nothing about why we don’t just get the value between O1 and O2 directly to begin with. That is what’s “surprising or ‘paradoxical’” about the qubit.
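In formulas (a standard spin-1/2 rendering of this point, my own wording): prepare ##|0\rangle## and measure spin along an axis at angle ##\theta## to the z-axis. Each individual run yields ##\pm 1## with

$$p_+ = \cos^2\tfrac{\theta}{2}, \qquad p_- = \sin^2\tfrac{\theta}{2}, \qquad \langle\sigma_\theta\rangle = \cos\theta,$$

so the classically expected intermediate value ##\cos\theta## shows up only as an average, never as an individual outcome.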
 
  • #34
RUTA said:
So, your approach captures this averaging nicely and therefore will show how quantum results average to classical expectations for whatever experiment. But, it says nothing about why we don’t just get the value between O1 and O2 directly to begin with. That is what’s “surprising or ‘paradoxical’” about the qubit.
I find this as little surprising as the case of measuring the state of a die by looking at the number of spots shown on its top face when the die comes to rest. Although the die moves continuously we always get a discrete integer between 1 and 6.

Similarly, the measurement of a qubit is - by definition - binary. Hence it can have only two results, though the control in the experiment changes continuously.
 
  • Like
Likes vanhees71
  • #35
A. Neumaier said:
and the other statements, including the first sentence?
John Wheeler
A. Neumaier said:
If you write something without giving credit, everyone assumes it is your statement!
Indeed. Even more so, since there was no first question and no "Quote". Part of my motivation was that I had also read similar statements (that somehow seem to predict your developments) in Herbert Bernard Callen's book on Thermodynamics. I wanted to be able to quote such statements without explicitly naming their author.
 