
Quantum Physics via Quantum Tomography: A New Approach to Quantum Mechanics


This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum. Since everything follows from the well-established techniques of quantum tomography (the art and science of determining the state of a quantum system from measurements), the new approach may have the potential to lead in time to a consensus on the foundations of quantum mechanics. Full details can be found in my paper

  • A. Neumaier, Quantum mechanics via quantum tomography, Manuscript (2022). arXiv:2110.05294v5

This paper gives for the first time a formally precise definition of quantum measurement that

  • is applicable without idealization to complex, realistic experiments;
  • allows one to derive the standard quantum mechanical machinery from a single, well-motivated postulate;
  • leads to objective (i.e., observer-independent, operational, and reproducible) quantum state assignments to all sufficiently stationary quantum systems.

The new approach shows that the amount of objectivity in quantum physics is no less than that in classical physics.

A modified version of the above manuscript appeared as Part II of my new book.

The following is an extensive overview of the most important developments in this new approach.

$$
\def\<{\langle} % expectation
\def\>{\rangle} % expectation
\def\tr{{\mathop{\rm tr}\,}}
\def\E{{\bf E}}
$$

Quantum states

The (Hermitian and positive semidefinite) density operator ##\rho## is taken to be the formal counterpart of the state of an arbitrary quantum source. This notion generalizes the polarization properties of light: In the case of the polarization of a source of light, the density operator represents a qubit and is given by a ##2\times 2## matrix whose trace is the intensity of the light beam. If expressed as a linear combination of Pauli matrices, the coefficients define the so-called Stokes vector. Its properties (encoded in the mathematical properties of the density operator) were first described by George Stokes (best known for the Navier–Stokes equations of fluid mechanics), who gave in 1852 (well before the birth of Maxwell’s electrodynamics and long before quantum theory) a complete description of the polarization phenomenon, reviewed in my Insight article ‘A Classical View of the Qubit’. For a stationary source, the density operator is independent of time.
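
To make the qubit case concrete, here is a minimal numpy sketch of the correspondence between a polarization density operator and its Stokes vector. The particular state and the ordering convention for the Stokes components are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Pauli matrices; together with the identity they span the 2x2 Hermitian matrices.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_1
         np.array([[0, -1j], [1j, 0]]),                # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma_3

def rho_from_stokes(S0, S):
    """Density operator with intensity S0 and Stokes vector S = (S1, S2, S3)."""
    rho = S0 * np.eye(2, dtype=complex)
    for Sk, sk in zip(S, sigma):
        rho += Sk * sk
    return rho / 2

def stokes_from_rho(rho):
    """Recover the coefficients: S0 = tr(rho), Sk = tr(rho sigma_k)."""
    return np.trace(rho).real, np.array([np.trace(rho @ sk).real for sk in sigma])

rho = rho_from_stokes(1.0, [0.3, 0.2, 0.5])   # a hypothetical polarization state
print(stokes_from_rho(rho))                   # -> (1.0, [0.3, 0.2, 0.5])
print(np.linalg.eigvalsh(rho))                # positive semidefinite since |S| <= S0
```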

The detector response principle

A quantum measurement device is characterized by a collection of finitely many detection elements, labeled by ##k##, that respond statistically to the quantum source according to the following detector response principle (DRP):

  • A detection element ##k## responds to an incident stationary source with density operator ##\rho## with a nonnegative mean rate ##p_k## depending linearly on ##\rho##. The mean rates sum to the intensity of the source. Each ##p_k## is positive for at least one density operator ##\rho##.

If the density operator is normalized to intensity one (which we shall do in this exposition) the response rates form a discrete probability measure, a collection of nonnegative numbers ##p_k## (the response probabilities) that sum to 1.

The DRP, abstracted from the polarization properties of light, relates theory to measurement. By its formulation it allows one to discuss quantum measurements without the need for quantum mechanical models for the measurement process itself. The latter would involve the detailed dynamics of the microscopic degrees of freedom of the measurement device – clearly out of the scope of a conceptual foundation on which to erect the edifice of quantum physics.

The main consequence of the DRP is the detector response theorem. It asserts that for every measurement device, there are unique operators ##P_k## which determine the rates of response to every source with density operator ##\rho## according to the formula
$$
p_k=\langle P_k\rangle:=\tr\rho P_k.
$$
The ##P_k## form a discrete quantum measure; i.e., they are Hermitian, positive semidefinite and sum to the identity operator ##1##. This is the natural quantum generalization of a discrete probability measure. (In more abstract terms, a discrete quantum measure is a simple instance of a so-called POVM, but the latter notion is not needed for understanding the main message of the paper.)
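
A small numerical illustration of what the theorem asserts (not of its proof): any collection of Hermitian, positive semidefinite operators summing to the identity produces, via ##p_k=\tr\rho P_k##, nonnegative response rates summing to 1 for every normalized state. The three-element qubit measure below (a symmetric ‘trine’ measure) and the state are hypothetical choices for the sketch.

```python
import numpy as np

def proj(theta):
    """Projector onto the qubit state with Bloch angle theta."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return np.outer(v, v.conj())

# A hypothetical three-element discrete quantum measure:
P = [2 / 3 * proj(2 * np.pi * k / 3) for k in range(3)]

# Quantum-measure properties: Hermitian, positive semidefinite, summing to 1.
assert np.allclose(sum(P), np.eye(2))
assert all(np.linalg.eigvalsh(Pk).min() > -1e-12 for Pk in P)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a normalized density operator
p = np.array([np.trace(rho @ Pk).real for Pk in P])      # detector response rates
print(p, p.sum())                                        # nonnegative, summing to 1
```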

Statistical expectations and quantum expectations

Thus a quantum measurement device is characterized formally by means of a discrete quantum measure. To go from detection events to measured numbers one needs to provide a scale that assigns to each detection element ##k## from the set ##K## of all detection elements a real or complex number (or vector) ##a_k##. We call the combination of a measurement device with a scale a quantum detector. The statistical responses of a quantum detector define the statistical expectation
$$
\E(f(a_k)):=\sum_{k\in K} p_kf(a_k)
$$
of any function ##f(a_k)## of the scale values. As always in statistics, this statistical expectation is operationally approximated by finite sample means of ##f(a)##, where ##a## ranges over a sequence of actually measured values. However, the exact statistical expectation is an abstraction of this; it works with a nonoperational probabilistic limit of infinitely many measured values so that the replacement of relative sample frequencies by probabilities is justified. If we introduce the quantum expectation
$$
\langle A\rangle:=\tr\rho A
$$
of an operator ##A## and say that the detector measures the quantity
$$
A:=\sum_{k\in K} a_kP_k,
$$
it is easy to deduce from the main result the following version of Born’s rule (BR):

  • The statistical expectation of the measurement results equals the quantum expectation of the measured quantity.
  • The quantum expectations of the quantum measure constitute the probability measure characterizing the response.

This version of Born’s rule applies without idealization to results of arbitrary quantum measurements.
(In general, the density operator is not necessarily normalized to intensity ##1##; without this normalization, we call ##\langle A\rangle## the quantum value of ##A## since it does not satisfy all properties of an expectation.)
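
The following sketch checks this version of Born's rule numerically: the sample mean of the scale values approaches the quantum expectation ##\tr\rho A##. The state, the (deliberately non-projective) two-element measure, and the scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A normalized state and a hypothetical two-element quantum measure whose effects
# are not projectors (think of a detector with imperfect discrimination).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
P0 = np.diag([0.9, 0.2]).astype(complex)
P1 = np.eye(2) - P0
a = np.array([+1.0, -1.0])            # the scale: numbers attached to the elements

p = np.array([np.trace(rho @ Pk).real for Pk in (P0, P1)])
A = a[0] * P0 + a[1] * P1             # the measured quantity A = sum_k a_k P_k

# Born's rule in the derived form: statistical expectation = quantum expectation.
samples = rng.choice(a, size=100_000, p=p)
print(samples.mean())                 # sample mean of the measured values
print(np.trace(rho @ A).real)         # quantum expectation tr(rho A) = 0.38
```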

Projective measurements

The conventional version of Born’s rule – the traditional starting point relating quantum theory to measurement in terms of eigenvalues, found in all textbooks on quantum mechanics – is obtained by specializing the general result to the case of exact projective measurements. The spectral notions do not appear as postulated input as in traditional expositions, but as consequences of the derivation in a special case – the case where ##A## is a self-adjoint operator, hence has a spectral resolution with real eigenvalues ##a_k##, and the ##P_k## are the projection operators onto the eigenspaces of ##A##. In this special case, we recover the traditional setting with all its ramifications together with its domain of validity. This sheds new light on the understanding of Born’s rule and eliminates the most problematic features of its uncritical use.
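
In code, the specialization amounts to taking the ##P_k## from a spectral decomposition. A minimal numpy sketch, with a hypothetical Hermitian quantity ##A## and state ##\rho##:

```python
import numpy as np

# Any Hermitian A has a spectral resolution A = sum_k a_k P_k with eigenprojectors P_k.
A = np.array([[1.0, 1.0], [1.0, -1.0]])         # a Hermitian quantity on a qubit
eigvals, eigvecs = np.linalg.eigh(A)

# Projectors onto the (here nondegenerate) eigenspaces:
P = [np.outer(eigvecs[:, k], eigvecs[:, k].conj()) for k in range(len(eigvals))]
assert np.allclose(sum(ak * Pk for ak, Pk in zip(eigvals, P)), A)
assert np.allclose(sum(P), np.eye(2))           # they form a discrete quantum measure

# The traditional Born rule: outcome a_k occurs with probability tr(rho P_k).
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
probs = [np.trace(rho @ Pk).real for Pk in P]
print(dict(zip(np.round(eigvals, 6), np.round(probs, 6))))
```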

Many examples of realistic measurements are shown to be measurements according to the DRP but have no interpretation in terms of eigenvalues. For example, joint measurements of position and momentum with limited accuracy, essential for recording particle tracks in modern particle colliders, cannot be described in terms of projective measurements; Born’s rule in its pre-1970 forms (i.e., before POVMs were introduced to quantum mechanics) does not even have an idealized terminology for them. Thus the scope of the DRP is far broader than that of the traditional approach based on highly idealized projective measurements. The new setting also accounts for the fact that in many realistic experiments, the final measurement results are computed from raw observations, rather than being directly observed.

Operational definitions of quantum concepts

Based on the detector response theorem, one gets an operational meaning for quantum states, quantum detectors, quantum processes, and quantum instruments, using the corresponding versions of quantum tomography.

In quantum state tomography, one determines the state of a quantum system with a ##d##-dimensional Hilbert space by measuring sufficiently many quantum expectations and solving a subsequent least squares problem (or a more sophisticated optimization problem) for the ##d^2-1## unknowns of the state. Quantum tomography for quantum detectors, quantum processes, and quantum instruments proceeds in a similar way.
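
As a toy illustration for a qubit (##d=2##, hence ##d^2-1=3## unknowns), the sketch below simulates noisy expectation measurements of several spin observables and reconstructs the Stokes vector by least squares. The measurement directions, shot count, and ‘true’ state are invented for the example; real protocols and noise models are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

s_true = np.array([0.3, 0.2, 0.5])     # hypothetical state to be reconstructed

# An overdetermined design: six spin observables n.sigma along unit vectors n.
n = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)
n /= np.linalg.norm(n, axis=1, keepdims=True)

# Each observable has outcomes +-1 with mean n.s; simulate N shots per direction.
N = 10_000
b = np.array([rng.choice([1.0, -1.0], size=N,
                         p=[(1 + ni @ s_true) / 2, (1 - ni @ s_true) / 2]).mean()
              for ni in n])

# Least squares reconstruction of the three unknowns from noisy expectations.
s_est, *_ = np.linalg.lstsq(n, b, rcond=None)
rho_est = (np.eye(2) + np.einsum('k,kij->ij', s_est, sigma)) / 2
print(np.round(s_est, 3))                        # close to s_true for large N
print(np.round(np.linalg.eigvalsh(rho_est), 3))  # noise may call for a constrained fit
```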

These techniques serve as foundations for far-reaching derived principles; for quantum systems with a low-dimensional density matrix, they are also practically relevant for the characterization of sources, detectors, and filters. A quantum process, also called a linear quantum filter, is formally described by a completely positive map. The operator sum expansion of completely positive maps forms the basis for the derivation of the dynamical laws of quantum mechanics – the quantum Liouville equation for density operators, the conservative time-dependent Schrödinger equation for pure states in a nonmixing medium, and the dissipative Lindblad equation for states in mixing media – by a continuum limit of a sequence of quantum filters. This derivation also reveals the conditions under which these laws are valid. An analysis of the oscillations of quantum values of states satisfying the Schrödinger equation produces the Rydberg-Ritz combination principle underlying spectroscopy, which marked the onset of modern quantum mechanics. It is shown that in quantum physics, normalized density operators play the role of phase space variables, in complete analogy to the classical phase space variables position and momentum. Observations with highly localized detectors naturally lead to the notion of quantum fields whose quantum values encode the local properties of the universe.
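
To indicate what the operator sum expansion looks like in practice, here is a minimal sketch of one completely positive trace-preserving map, a weak dephasing filter; iterating many weak filters crudely mimics the continuum limit leading to dissipative (Lindblad-type) dynamics. The channel and its parameter are illustrative assumptions.

```python
import numpy as np

# Operator sum (Kraus) form of a completely positive map: rho -> sum_m K_m rho K_m^+,
# trace-preserving iff sum_m K_m^+ K_m = 1. Here: weak dephasing with strength gamma.
def kraus_dephasing(gamma):
    K0 = np.sqrt(1 - gamma) * np.eye(2, dtype=complex)
    K1 = np.sqrt(gamma) * np.diag([1.0, -1.0]).astype(complex)
    return [K0, K1]

def apply(Ks, rho):
    return sum(K @ rho @ K.conj().T for K in Ks)

Ks = kraus_dephasing(0.01)
assert np.allclose(sum(K.conj().T @ K for K in Ks), np.eye(2))  # trace preservation

# Iterating many weak filters damps the off-diagonal terms exponentially while
# leaving the diagonal untouched, as a Lindblad equation for pure dephasing would.
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
for _ in range(200):
    rho = apply(Ks, rho)
print(np.round(rho, 4))   # off-diagonal entries shrunk by (1 - 2*gamma)**200
```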

Thus the DRP leads naturally to all basic concepts and properties of modern quantum mechanics. It is also shown that quantum physics has a natural phase space structure where normalized density operators play the role of quantum phase space variables. The resulting quantum phase space carries a natural Poisson structure. Like the dynamical equations of conservative classical mechanics, the quantum Liouville equation has the form of Hamiltonian dynamics in a Poisson manifold; only the manifold is different.
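
In standard notation, this can be made concrete as follows (a sketch using one common sign convention, with a time-independent Hamiltonian ##H## assumed):
$$
i\hbar\,\frac{d\rho}{dt}=[H,\rho], \qquad \frac{d}{dt}\langle A\rangle=\{\langle A\rangle,\langle H\rangle\}, \qquad \{\langle A\rangle,\langle B\rangle\}:=\frac{i}{\hbar}\langle[B,A]\rangle,
$$
formally matching the classical evolution ##\dot f=\{f,H\}## of phase space functions; only the underlying manifold of states differs.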

Philosophical consequences

The new approach has significant philosophical consequences. When a source is stationary, response rates, probabilities, and hence quantum values can be measured in principle with arbitrary accuracy, in a reproducible way. Thus they are operationally quantifiable, independent of an observer. This makes them objective properties, in the same sense in which positions and momenta are objective properties in classical mechanics. Thus quantum values are seen to be objective, reproducible elements of reality in the sense of the famous paper

  • A. Einstein, B. Podolsky and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 (1935), 777–780.

The assignment of states to stationary sources is as objective as any assignment of classical properties to macroscopic objects. In particular, probabilities appear – as in classical mechanics – only in the context of statistical measurements. Moreover, all probabilities are objective frequentist probabilities in the sense employed everywhere in experimental physics – classical and quantum. Like all measurements, probability measurements are of limited accuracy only, approximately measurable as observed relative frequencies.

Among all quantum systems, classical systems are characterized as those whose observable features can be correctly described by local equilibrium thermodynamics, as predicted by nonequilibrium statistical mechanics. This leads to a new perspective on the quantum measurement problem and connects to the thermal interpretation of quantum physics, discussed in detail in my 2019 book ‘Coherent Quantum Physics‘ (de Gruyter, Berlin 2019).

Conclusion

To summarize, the new approach gives an elementary and self-contained deductive approach to quantum mechanics. A suggestive notion for what constitutes a quantum detector and for the behavior of its responses leads to a definition of measurement from which the modern apparatus of quantum mechanics can be derived in full generality. The statistical interpretation of quantum mechanics is not assumed, but the version of it that emerges is discussed in detail. The standard dynamical and spectral rules of introductory quantum mechanics are derived with little effort. At the same time, we find the conditions under which these standard rules are valid. A thorough, precise discussion is given of various quantitative aspects of uncertainty in quantum measurements. Normalized density operators play the role of quantum phase space variables, in complete analogy to the classical phase space variables position and momentum.

There are implications of the new approach for the foundations of quantum physics. By shifting the attention from the microscopic structure to the experimentally accessible macroscopic equipment (sources, detectors, filters, and instruments) we get rid of all potentially subjective elements of quantum theory. There are natural links to the thermal interpretation of quantum physics as defined in my book.

The new picture is simpler and more general than the traditional foundations, and closer to actual practice. This makes it suitable for introductory courses on quantum mechanics. Complex matrices are motivated from the start as a simplification of the mathematical description. Both conceptually and in terms of motivation, introducing the statistical interpretation of quantum mechanics through quantum measures is simpler than introducing it in terms of eigenvalues. To derive the most general form of Born’s rule from quantum measures one just needs simple linear algebra, whereas even to write down Born’s rule in the traditional eigenvalue form, unfamiliar stuff about wave functions, probability amplitudes, and spectral representations must be swallowed by the beginner – not to speak of the difficult notion of self-adjointness and associated proper boundary conditions, which is traditionally simply suppressed in introductory treatments.

Thus there is no longer an incentive for basing quantum physics on measurements in terms of eigenvalues – a special, highly idealized case – in place of the real thing.

Postscript

In the meantime I revised the paper. The new version is better structured and contains a new section on high precision quantum measurements, where the 12-digit accuracy determination of the gyromagnetic ratio through the observation and analysis of a single electron in a Penning trap is discussed in some detail. The standard analysis assumes that the single electron is described by a time-dependent density operator following a differential equation. While in the original papers this involved arguments beyond the traditional (ensemble-based and knowledge-based) interpretations of quantum mechanics, the new tomography-based approach applies without difficulties.

  1. A. Neumaier says:
    "
    I don't understand this argument. You just measure repeatedly some observable. The measurements (or rather the reaction of the measured system to the coupling to the measurement device) themselves of course have to be taken into account as part of the "preparation" too.
    "
    It is a preparation, but not one to which Born's rule applies. Born's rule is valid only if the ensemble consists of independent and identically prepared states. You need independence because, e.g., immediately repeated position measurements of a particle do not respect Born's rule, and you need identical preparation because there is only one state in Born's formula.

    In the case under discussion, one may interpret the situation as repeated preparation, as you say. But unless the system is stationary (and hence uninteresting in the context of the experiment under discussion), the state prepared before the ##k##th measurement is different for each ##k##. Moreover, due to the preceding measurement this state is only inaccurately known and correlated with the preceding one. Thus the ensemble prepared consists of nonindependent and nonidentically prepared states, for which Born's rule is silent.

  2. vanhees71 says:
    I don't understand this argument. You just measure repeatedly some observable. The measurements (or rather the reaction of the measured system to the coupling to the measurement device) themselves of course have to be taken into account as part of the "preparation" too.
  3. A. Neumaier says:
    "
    I still do not understand why you say that the content of the review papers by Dehmelt and Brown contains anything denying the validity of Born's rule. For me it's used all the time!
    "
    Because Born's rule assumes identical preparations which is not the case when a nonstationary system is measured repeatedly. I am not denying the validity but the applicability of the rule!

    I need to read the paper before I can go into details.

  4. vanhees71 says:
    I still do not understand why you say that the content of the review papers by Dehmelt and Brown contains anything denying the validity of Born's rule. For me it's used all the time!
  5. A. Neumaier says:
    "
    I was referring to the measurements on single particles in a trap, not to ALICE photon measurements. There are tons of papers about "direct photons":

    https://inspirehep.net/literature?sort=mostrecent&size=25&page=1&q=find title photons and cn alice

    Polarization measurements for dileptons or photons are very rare today. There's a polarization measurement by the NA60 collaboration on di-muons:

    https://arxiv.org/abs/0812.3100
    "
    Thanks for the pointers. Will reply in more detail after having read more. I expect that it will mean that the instances of case (B) are not so different from those of case (A) in my earlier classification of single-particle measurements.

  6. A. Neumaier says:
    "
    That's a nice article. However, I somehow miss an explanation of what is actually meant by "quantum tomography"; one has to resort to the arXiv preprint to get an explanation. Given the title of the Insights article, maybe you could add some words on what is meant by quantum tomography.
    "
    Thanks. I added to the Insight article a link to Wikipedia and an explaining paragraph.
  7. A. Neumaier says:
    "
    Another review paper, which may be more to the point, because it covers both theory and experiment, is
    https://doi.org/10.1103/RevModPhys.58.233
    […] you can look at some papers by the ALICE collaboration as one example for what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons) that's a pretty new topic,
    "
    Are the papers where I can read about ALICE measurements and about polarization measurements contained in the above review?
  8. vanhees71 says:
    I don't think that we reach consensus about this issue. For me Born's rule is one of the fundamental postulates of QT (including QFT). You calculate the correlation functions (Green's functions) in QFT to get statistical information about observables like cross sections. How these correlation functions are related to the statistics of measurement outcomes is derived based on the fundamental postulates of QT, including Born's rule. Of course, that's what Weinberg and any other book on QFT does. A cross-section measurement consists of course always of collecting statistics over very many collision events using not the same particles again and again.

    You use yourself Born's rule all the time since everything is based on taking averages of all kinds defined by ##\langle A \rangle=\mathrm{Tr} \hat{\rho} \hat{A}## (if you use normalized ##\hat{\rho}##'s).

  9. A. Neumaier says:
    "
    All this is based on standard quantum theory and thus after all on Born's rule.
    "
    The 'thus' is not warranted.

    Quantum field theory is completely independent of Born's rule. It is about computing ##N##-point functions of interest.

    Weinberg's QFT book (Vol.1) mentions Born's rule exactly twice – once in its review of quantum mechanics, and once where the probabilistic interpretation of the scattering amplitude is derived. In the latter he assumes an ensemble of identically prepared particles to give a probabilistic meaning in terms of the statistics of collision experiments.

    Nothing at all about single systems!

  10. A. Neumaier says:
    "
    I still don't understand why you think there cannot be statistics collected using a single quantum.
    "
    I don't think that, and I explicitly said this. The point is that this statistics is not statistics about an ensemble of identically prepared systems, hence has nothing to do with what Born's rule is about.
    "
    I can also get statistics of throwing a single coin again and again to check whether it's a fair one or not.
    "
    In this case the system identically prepared is the throw, not the coin. The coin is a system described by a rigid body, with a 12D phase space state ##z(t)##, in contact with an environment that randomizes its motion through its collision with the table. The throw is what you can read off when the coin is finally at rest.

    The state of the coin is complicated and cannot be identically prepared (otherwise it would fall identically and produce identical throws). But the state of the throw is simple – just a binary variable, and the throwing setup prepares its state identically. Each throw is different – only the coin is the same; that's why one gets an ensemble.

    This is quite different from a quantum particle in a trap, unless (as in a throw) you reset before each measurement the state of the particle in the trap. But then the observation becomes uninteresting. The interesting thing is to observe the particle's time dependence. Here the state changes continuously, as with the coin and not as with the throw.

  11. vanhees71 says:
    "
    I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.

    What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.

    This is very far from an accurate state determination….

    Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single one of these systems. A probability distribution almost never determines an individual result.

    I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.

    Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.
    "
    In Dehmelt's paper it is described how various quantities are measured using single electrons/ions in a Penning trap. I still don't understand why you think there cannot be statistics collected using a single quantum. I can also get statistics of throwing a single coin again and again to check whether it's a fair one or not. I just do the "random experiment" again and again using the same quantum and collect statistics and evaluate confidence levels and all that. Another review paper, which may be more to the point, because it covers both theory and experiment, is

    https://doi.org/10.1103/RevModPhys.58.233

    I also think that very rarely one does full state determinations. What's done are preparations and subsequent measurements of observables of interest.

    I'm also not an experimental physicist and far from knowing any details of how the current CERN experiments (ATLAS, CMS, and ALICE) measure electrons and photons. I use their results to compare to theoretical models, which are based on standard many-body QFT and simulations of the fireball created in heavy-ion collisions. All this is based on standard quantum theory and thus after all on Born's rule. Here you can look at some papers by the ALICE collaboration as one example for what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons) that's a pretty new topic, and of course an even greater challenge than the spectra measured for decades now. After all these are "rare probes".

  12. DrDu says:
    That's a nice article. However, I somehow miss an explanation of what is actually meant by "quantum tomography"; one has to resort to the arXiv preprint to get an explanation. Given the title of the Insights article, maybe you could add some words on what is meant by quantum tomography.
  13. A. Neumaier says:
    "
    I wouldn't call results of experiments with single electrons, protons, ions etc. in Penning traps which are among the most precise ever "of limited precision".
    "
    I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.

    What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.

    This is very far from an accurate state determination….
    "
    The theoretical description uses standard quantum theory based on Born's rule (see Dehmelt's above quoted review).
    "
    Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single one of these systems. A probability distribution almost never determines an individual result.

    I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.
    "
    Detectors measure particles and photons of course. Real and virtual photons (dileptons) have been among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.
    "
    Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.

  14. A. Neumaier says:
    "
    With what "is known" I think you effectively refer to human science. But if we even here consider and obsererver: What is the real difference between what an observers knows, and what it THINKS it knows? And does it make difference to the observes betting strategy? (action)
    "
    Science has no single betting strategy. Each scientist makes choices of his or her own preference, but published is only what passed the rules of scientific discourse, which rules out most poor judgment on the individual's side. What science knows is an approximation to what it thinks it knows, and this approximation is quite good, otherwise resulting technology based on it would not work and not sell.
  15. vanhees71 says:
    I wouldn't call results of experiments with single electrons, protons, ions etc. in Penning traps which are among the most precise ever "of limited precision" ;-). The theoretical description uses standard quantum theory based on Born's rule (see Dehmelt's above quoted review).

    Detectors measure particles and photons of course. Real and virtual photons (dileptons) have been among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.

  16. A. Neumaier says:
    "
    One can do statistics using a single particle in, e.g., a Penning trap, as described here:
    https://doi.org/10.1088/0031-8949/1988/T22/016
    but isn't this indeed a paradigmatic example for your formulation?
    "
    One can do statistics with any collection of measurement results.
    But in the case you mention, where the data come from a single particle, the statistics is not governed by Born's rule. Each data point is obtained at a different time, and at each time the particle is in a different state affected in an unspecified way by the previous measurement. So how could you calculate the statistics from Born's rule?

    Instead, the statistics is treated in the way I discussed in case (A).
    "
    Also nondestructive photon measurements are done,
    "
    If the nondestructive single photon measurements result in a time series, the situation for this photon is the same as for the particle in the Penning trap.
    "
    but also the standard photon detection of course measures properties of single photons like energy, momentum, and polarization, or what else do you think the photon measurements in all the accelerators in HEP and heavy-ion physics provide?
    "
    I didn't know that accelerators measure momentum and polarization of individual photons. Could you provide me with a reference where I can read details? Then I'll be able to show you how it matches the description in my paper.

  17. vanhees71 says:
    This we discussed repeatedly. One can do statistics using a single particle in, e.g., a Penning trap, as described here:

    https://doi.org/10.1088/0031-8949/1988/T22/016

    but isn't this indeed a paradigmatic example for your formulation?

    Also nondestructive photon measurements are done, but also the standard photon detection of course measures properties of single photons like energy, momentum, and polarization, or what else do you think the photon measurements in all the accelerators in HEP and heavy-ion physics provide?

  18. A. Neumaier says:
    "
    In this way it makes a lot of sense again, but as I said before, today the experimentalists are able to prepare single-quantum (particles, atoms, molecules, photons) states and observe them. The observations are of course always via macroscopic measurement devices.
    "
    In this case the traditional interpretation in terms of Born's rule is vacuous since probabilities are meaningful only in an ensemble context but individual quantum systems do not form an ensemble.

    Instead, these experiments are traditionally interpreted in the fashion of classical stochastic processes, which have a trajectory interpretation for individual realizations, so that individual systems can be discussed. For example, this explains observed quantum jumps in atoms in an ion trap subject to external fields; see, e.g.,

    • Plenio, M. B., & Knight, P. L. (1998). The quantum-jump approach to dissipative dynamics in quantum optics. Reviews of Modern Physics, 70(1), 101.

    In the context of the present paper, one has to distinguish several cases of tiny quantum systems.

    (A) A tiny quantum system mounted on a macroscopic object.

    1. If mounted for subsequent view in a scanning tunneling microscope, the tiny system acts as a stationary quantum source of the kind discussed in my paper, and its state can be determined by quantum tomography.
    2. If mounted in an ion trap, the tiny quantum system can be manipulated by applying external classical fields. This turns it into an instationary quantum system, for which standard quantum tomography is limited to short times within which the system can be regarded as stationary. This means that its state can be measured only to limited accuracy – only very limited information can be reliably collected. This is discussed in Section 4.5 of my paper.
    3. Nonstationary quantum tomography has hardly been studied, so it is too early to tell in detail which kind of limitations this imposes on what can be achieved experimentally.
    4. For example, in the quantum jump experiments, observations interpretable in terms of the system state are restricted to observing a noisy piecewise constant time series showing that, apart from very short times of transitions (jumps), the system is stationary – being in one of two eigenstates of the appropriate Hamiltonian. Thus, per time step, one only gets one bit of information about the changing density operator.

    (B) A tiny quantum system freely moving in a homogeneous macroscopic medium.

    1. Single massive particles in flight can be observed in a bubble chamber or, in the context of modern particle accelerators, in a time projection chamber. The latter case is discussed in Section 3.4 of my paper; the former can be done in a similar way.
    2. Single photons in flight cannot be observed; at best one can observe their death. It is impossible to do quantum tomography on them. Thus it is experimentally impossible to measure their state. Hence assigning a state to them is very questionable. Instead, a highly nonstationary photon state must be assigned to the cavity producing the photon, and case (A) applies.
  19. vanhees71 says:
    "
    It describes the real use of QT as a physical theory as done by physicists since 1970, when the more comprehensive view of measurement that goes beyond von Neumann's was introduced.

    Yes. You probably need to read the whole to see the differences and understand the new point of view.

    Experiments in physics laboratories use controllable sources, filters and detectors to explore the nature of the microscopic world. All these are macroscopic objects, the only things that can be observed. The microscopic aspects are not observed but inferred; they define inferables, not observables. On p.13 of my paper I quote:

    A quantum source is simply a piece of equipment that has some measurable effect on detectors placed at some distance from them. How this effect comes about is not observed but described by theoretical models whose consequences can be checked against experimental results. The techniques and results described in my paper are agnostic about the models (Section 7.1), they are just about the observable (and hence macroscopic) aspects.

    This is the reason why the state is assigned to the source and not to something postulated as being transmitted. More precisely, the measured property (a quantum expectation) is assigned to the particular location at which the measurement is done, which leads naturally to a quantum field picture (Section 5.4).
    "
    In this way it makes a lot of sense again, but as I said before, today the experimentalists are able to prepare single-quantum (particles, atoms, molecules, photons) states and observe them. The observations are of course always via macroscopic measurement devices.

  20. vanhees71 says:
    "
    Given your ambitions, I guess this makes good sense. It's just that it does not solve the mysteries, at least not for me.

    I still see your perspective as a limiting case of a general (yet unknown) theory.
    "
    It may well be that there is a more comprehensive theory than contemporary quantum theory. Who knows what the solution of the problem of describing the gravitational interaction quantum mechanically will be and what this might imply for the description of spacetime; but there's no hint of such a new theory. Empirically there are no phenomena which are not described by our standard theories, as incomplete as they might be.
    "

    As a macroscopic object is essentially just a part of the classical reality, this to me seems quite close to Bohr's angle on the CI (in contrast to Heisenberg's). You still need a CONTEXT, and this context is classical reality. This is of course a good thing from the perspective of human science… and as the context is the good old classical reality, it becomes more trivial, as except for relativity the observer-observer interaction is more trivial.
    "
    I think the contrary makes more sense. Macroscopic objects as we observe them are only consistently describable with quantum theory, given the atomistic structure. There'd not even be stable atoms as bound states of charged particles let alone molecules and condensed matter within classical physics. The "classical reality" is emergent, understood via the classical physical laws as effective description of the dynamical behavior of macroscopic coarse-grained observables.
    "

    But it's a bad thing if you think there is explanatory power to be found by considering the logic of interacting observers. And I am not sure how it helps with fine tuning problems or unification quests. As I understand it, it isn't the ambition either. Then it's fine. I think part of the confusion is different expectations, which I think is what you wrote yourself in post 15 as well.

    /Fredrik
    "

  21. A. Neumaier says:
    "
    I've started to read the paper, and I first had the impression it's now much closer to the real use of QT as a physical theory as done by physicists since 1926,
    "
    It describes the real use of QT as a physical theory as done by physicists since 1970, when the more comprehensive view of measurement that goes beyond von Neumann's was introduced.
    "
    but now it seems again that I'm completely misunderstanding its intended meaning. Obviously I misunderstood what you mean by "source". For me a "source" is just some device which "prepares quantum systems", and this can be also a single "particle" or even a single "photon" and not only macroscopic systems.
    "
    Yes. You probably need to read the whole to see the differences and understand the new point of view.
    "
    From the perspective of the present considerations, quantum particles appear to be ghosts in the beams. This explains their spooky properties in the quantum physics literature!
    "
    Experiments in physics laboratories use controllable sources, filters and detectors to explore the nature of the microscopic world. All these are macroscopic objects, the only things that can be observed. The microscopic aspects are not observed but inferred; they define inferables, not observables. On p.13 of my paper I quote:
    "
    If you visit a real laboratory, you will never find there Hermitian operators. All you can see are emitters (lasers, ion guns, synchrotrons and the like) and detectors. The experimenter controls the emission process and observes detection events. […] Quantum mechanics tells us that whatever comes from the emitter is represented by a state ρ (a positive operator, usually normalized to 1). […] Traditional concepts such as ”measuring Hermitian operators”, that were borrowed or adapted from classical physics, are not appropriate in the quantum world. In the latter, as explained above, we have emitters and detectors.
    "
    A quantum source is simply a piece of equipment that has some measurable effect on detectors placed at some distance from them. How this effect comes about is not observed but described by theoretical models whose consequences can be checked against experimental results. The techniques and results described in my paper are agnostic about the models (Section 7.1), they are just about the observable (and hence macroscopic) aspects.
    "
    The present approach works independently of the nature or even the presence of a mediating substance: What is measured are properties of the source, and this has a well-defined macroscopic existence. We never needed to make an assumption on the nature of the medium passed from the source to the detector. Thus the present approach is indifferent to the microscopic cause of detection events. It does not matter at all whether one regards such an event as caused by a quantum field or by the arrival of a particle. In particular, a microscopic interpretation of the single detection events as arrival of particles is not needed, not even an ontological statement about the nature of what arrives. Nor would these serve a constructive purpose.
    "
    This is the reason why the state is assigned to the source and not to something postulated as being transmitted. More precisely, the measured property (a quantum expectation) is assigned to the particular location at which the measurement is done, which leads naturally to a quantum field picture (Section 5.4).
    "
    Suppose that we have a detector that is sensitive only to quantum beams entering a tiny region in space, which we call the detector’s tip. We assume that we can move the detector such that its tip is at an arbitrary point x in the medium, and we consider a fixed source, extended by layers of the medium so that x is at the boundary of the extended source. The measurement performed in this constellation is a property of the source. The results clearly depend only on what happens at x, hence they may count as a measurement of a property of whatever occupies the space at x. Thus we are entitled to consider it as a local property of the world at x at the time during which the measurement was performed.
    "
  22. A. Neumaier says:
    "
    I still see your perspective as a limiting case of a general (yet unknown) theory.
    "
    I prefer to frame the known in an optimally rational way, rather than to speculate about the unknown.
    "

    As a macroscopic object is essentially just a part of the classical reality, this to me seems quite close to Bohr's angle on the CI (in contrast to Heisenberg's). You still need a CONTEXT, and this context is classical reality.
    "
    I define classical reality in Section 7.2 of my paper as that part of quantum reality that can be deduced from it in the form of a local equilibrium description. Thus the context is quantum physics itself.
    "

    But it's a bad thing if you think there is explanatory power to be found by considering the logic of interacting observers.
    "
    Everything in my paper is observer-independent. It doesn't matter who observes, except that poor observations lead to poor approximations of the state.

  23. vanhees71 says:
    I've started to read the paper, and I first had the impression it's now much closer to the real use of QT as a physical theory as done by physicists since 1926, than the previous papers on your "thermal interpretation", but now it seems again that I'm completely misunderstanding its intended meaning. Obviously I misunderstood what you mean by "source". For me a "source" is just some device which "prepares quantum systems", and this can be also a single "particle" or even a single "photon" and not only macroscopic systems.
  24. A. Neumaier says:
    "
    As I understand your paper, it seems one major improvement in your description is to make mathematically more explicit the process of inferring the "quantum state" from REAL interactions – rather than considering an imaginary ensemble that is defined outside the formalism?

    I.e., the fictive "equivalence class" is replaced by a real construction – as per quantum tomography – essentially from a sequence or history of interactions. And this works fine as long as the sources are, as you say, "stationary", or do not change until the process of tomography is completed, by some margin?
    "
    The real advance is to make the quantum state a property of the source – i.e., of a macroscopic object.

    This makes all talk about fictitious stuff (like ensembles, equivalence classes, multiple worlds, or presumable changes of states of knowledge) obsolete, without a change in the operational content of quantum mechanics.

  25. A. Neumaier says:
    "
    Ok, well yes, the original SG experiment did not resolve single-atom results
    "
    Can you point to the report of another SG experiment that resolves single silver atoms?

    Even then, one only measures atom position and computes from these measurements fairly crude approximations of ##\hbar##. It simply isn't a projective (textbook) measurement.
    "
    it may well be interesting to describe the original SG experiment in terms of the POVM formalism to better understand this formalism.
    "
    You can understand the formalism by reading Sections 2 and 3 of my paper. I did not treat the Stern-Gerlach experiment in detail since a precise description is quite involved. But in Section 3 I discuss in some detail several other measurement situations, which should be enough to get a clear understanding of what the approach means.
    "
    Unfortunately in the quoted book they do this on p. 165ff but in a pretty abstract way instead of (approximately) solving the Schrödinger equation…
    "
    Many papers and books using POVMs are quite abstract because they employ a measure-theoretic approach rather than simple quantum measures in the sense of my new paper. This is why my paper is a big step forward towards making the approach more understandable to everyone. Still, you need to do some reading to get the correct picture.

  26. vanhees71 says:
    Ok, well yes, the original SG experiment did not resolve single-atom results, and it may well be interesting to describe the original SG experiment in terms of the POVM formalism to better understand this formalism. Unfortunately in the quoted book they do this on p. 165ff but in a pretty abstract way instead of (approximately) solving the Schrödinger equation…
  27. A. Neumaier says:
    "
    Of course, when you measure a spin component with the standard ideal SG setup, you don't measure expectation values on a single silver atom but the spin component, which gives either ##\hbar/2## or ##-\hbar/2## as a result with a probability determined by the spin state ##\hat{\rho}## the silver atom is prepared in. When it comes from an oven as in the original experiment, this state is of course ##\hat{\rho}=\hat{1}/2##. It's a paradigmatic example, for which a von Neumann filter measurement can be realized.
    "
    If your interpretation of the measurement results were correct, Stern and Gerlach could have deduced the value of ##\hbar## to infinite precision.

    Instead I look at Figure 13 with the actual measurement results of Stern and Gerlach, and see (like Busch et al. in the quote and like everyone who can see) a large number of scattered dots, not two exact numbers involving ##\hbar##. Clearly what was measured for each silver atom was position (with a continuous distribution), not spin.

    To turn these position measurements into a projective spin measurement of ##\hbar/2## or ##-\hbar/2## you need to invoke heavy idealization, including additional theory and uncontrolled approximations.

  28. vanhees71 says:
    "
    No. The quote describes the experimental findings of the original paper by Stern and Gerlach. Nobody ever thought this would disagree with QM.

    You probably never saw a discussion of the real experiment, only its heavily idealized caricature described in introductory textbooks on quantum mechanics!
    "
    Which quote are you talking about? On p. 12 of your paper there's the quote by Fröhlich:

    "
    The only form of ”interpretation” of a physical theory that I find legitimate and useful is to delineate approximately the ensemble of natural phenomena the theory is supposed to describe and to construct something resembling a ”structure-preserving map” from a subset of mathematical symbols used in the theory that are supposed to represent physical quantities to concrete physical objects and phenomena (or events) to be described by the theory. Once these items are clarified the theory is supposed to provide its own ”interpretation”.
    Jürg Fröhlich, 2021 [45, p.238]
    "
    with which I fully agree, of course, but that's not referring to the SG experiment.

    Of course, when you measure a spin component with the standard ideal SG setup, you don't measure expectation values on a single silver atom but the spin component, which gives either ##\hbar/2## or ##-\hbar/2## as a result with a probability determined by the spin state ##\hat{\rho}## the silver atom is prepared in. When it comes from an oven as in the original experiment, this state is of course ##\hat{\rho}=\hat{1}/2##. It's a paradigmatic example, for which a von Neumann filter measurement can be realized.

  29. A. Neumaier says:
    "
    4. What failure would you confront when you switch and uplift Poincare invariance with GCT in its full glory?
    "
    Invariance under general coordinate transformations is a consequence of Poincare invariance together with the gauge structure of massless spin 2 particles. This was already shown by Weinberg 1964. Thus no failure is expected, and no need to extend the causal formalism.
    "
    5. And lastly, for the fun of it, enlighten us "Are the particles in your Insight context pointlike or points?"
    "
    They are approximations emerging from the quantum fields under conditions corresponding to the validity of geometric optics; this makes them definitely not points. See the discussion in Section 7.1 of my paper (and far more details in my 2019 book on coherent quantum mechanics).
  30. A. Neumaier says:
    "
    3. Can you or can you not relate your note to CoBordism formulation
    "
    I haven't seen work on such a relation but Tomonaga-Schwinger dynamics based on the perturbatively constructed fields should provide a connection.
  31. A. Neumaier says:
    "
    1. What about theories with no expectations of even having an Action?
    "
    For these, causal perturbation theory is not applicable.
    "

    2. What would you comment on Locality of QFT, within that Insights context?
    "
    This is built into the causal approach.

  32. A. Neumaier says:
    "
    Before I delve into them, tell me, how much are they aligned with Wightman axioms?
    The fidelity is of no importance here, I just want to see if you can digest both in the same context.
    "
    Causal perturbation theory is consistent with the Wightman axioms. It constructs the Wightman N-point functions and field operators perturbatively in a mathematically rigorous way. The only missing thing to constructing Wightman fields is the lack of a rigorous nonperturbative resummation formula.
  33. A. Neumaier says:
    "
    There are two distinct measurement outcomes predicted for a qubit and you are claiming the experimental result is a continuum.
    "
    Stern and Gerlach obtained in their figure a huge number of distinct measurement outcomes, visible for everyone. Only idealization can reinterpret this as binary measurement outcomes 1 and -1.

    By your reasoning, a low energy particle in a double well potential would only take two possible positions!!!

  34. A. Neumaier says:
    "
    Renormalization is not just about taming the family of field theories (or some dynamics in the Moduli Space so to speak), you need to find a right way for Mathematician friends to do their thing.
    "
    For the right – mathematically rigorous – way see this Insight article!
  35. RUTA says:
    "
    I am claiming that the measurement results form a continuum and the binarization is an idealization. This is in agreement with experiment and with quantum mechanics.

    Whether or not you are interested does not matter here.
    "
    There are two distinct measurement outcomes predicted for a qubit and you are claiming the experimental result is a continuum. Therefore, you are claiming the QM prediction is wrong. It's that simple.

  36. A. Neumaier says:
    "
    Again, the mathematical description of the outcome is given by spin 1/2 qubit Hilbert space. If you disagree with that, then you are claiming QM is wrong and I am not interested.
    "
    I am claiming that the measurement results form a continuum and the binarization is an idealization. This is in agreement with experiment and with quantum mechanics.

    Whether or not you are interested does not matter here.

  37. RUTA says:
    "
    Figure 13 in the reference you cited shows the Stern-Gerlach results. The picture agrees with the description in my quote: The split is not into two separate thin lines at 1 and -1 as you claim but into two broad overlapping lips occupying in each cross section a continuous range, which may be connected or seemingly disconnected depending on where you draw the intersecting line.

    Thus the measurement results form a bimodal continuum with an infinite number of possible values.
    "
    Again, the mathematical description of the outcome is given by spin 1/2 qubit Hilbert space. If you disagree with that, then you are claiming QM is wrong and I am not interested.

  38. A. Neumaier says:
    "
    Here is what we cite https://plato.stanford.edu/entries/physics-experiment/app5.html ; it contains reproductions of SG figures and results.
    "
    Figure 13 in the reference you cited shows the Stern-Gerlach results. The picture agrees with the description in my quote: The split is not into two separate thin lines at 1 and -1 as you claim but into two broad overlapping lips occupying in each cross section a continuous range, which may be connected or seemingly disconnected depending on where you draw the intersecting line.
    "
    There is an intensity minimum in the center of the pattern, and the separation of the beam into two components is clearly seen.
    "
    Thus the measurement results form a bimodal continuum with an infinite number of possible values.
  39. RUTA says:
    "
    No. The quote describes the experimental findings of the original paper by Stern and Gerlach. Nobody ever thought this would disagree with QM.

    You probably never saw a discussion of the real experiment, only its heavily idealized caricature described in introductory textbooks on quantum mechanics!
    "
    Here is what we cite https://plato.stanford.edu/entries/physics-experiment/app5.html ; it contains reproductions of SG figures and results. There is nothing that contradicts QM spin 1/2 Hilbert space predictions therein. No experiment that I have seen does so and everything that I've said here (as contained in our published papers https://www.mdpi.com/1099-4300/24/1/12 and https://www.nature.com/articles/s41598-020-72817-7) conforms to that fact. If you disagree with that, then you're claiming QM is wrong.

  40. A. Neumaier says:
    "
    It's exactly true, it's the expectation value for spin 1/2 measurements. I infer from the quote you reference that you therefore disagree with QM.
    "
    No. The quote describes the experimental findings of the original paper by Stern and Gerlach. Nobody ever thought this would disagree with QM.

    You probably never saw a discussion of the real experiment, only its heavily idealized caricature described in introductory textbooks on quantum mechanics!

  41. RUTA says:
    "
    This is far from true. See the quote at the top of p.12 of the paper summarized by the Insight article, and the book from which this quote is taken.
    "
    It's exactly true, it's the expectation value for spin 1/2 measurements. I infer from the quote you reference that you therefore disagree with QM. I'm not doing that.
  42. A. Neumaier says:
    "I would like to know that. One lesson I learnt is that you cannot renormalise the usual canonical gravity (the Hamiltonian formulation of GR) or the entire "certain" community wouldn't exist."
    This was true in the old days before effective field theories were seriously studied.

But in modern terms, nonrenormalizable no longer means ''not renormalizable'' but only ''renormalization defines an infinite-parameter family of theories'', while standard renormalizability means ''renormalization defines a finite-parameter family of theories''. For example, QED is a 2-dimensional family of QFTs parameterized by 2 parameters (electron mass and charge), while canonical quantum gravity defines an infinite-dimensional family of QFTs parameterized by infinitely many parameters (of which the gravitational constant is just the first).
    "A page number would be helpful. I wish I had infinite time!"
    I gave detailed references here: https://www.mat.univie.ac.at/~neum/physfaq/topics/renQG.html

  43. A. Neumaier says:
    "
    e.g., Stern-Gerlach spin measurements. Instead, you still obtain +1 and -1, but distributed so they average to the expected intermediate outcome, e.g., via vector projection for SG measurements.
    "
    This is far from true. See the quote at the top of p.12 of the paper summarized by the Insight article, and the book from which this quote is taken.
  44. RUTA says:
    "
    I find this as little surprising as the case of measuring the state of a die by looking at the number of eyes found at its top when the die comes to rest. Although the die moves continuously we always get a discrete integer between 1 and 6.

    Similarly, the measurement of a qubit is – by definition – binary. Hence it can have only two results, though the control in the experiment changes continuously.
    "
    The die is the counterpart of a "classical bit," we're talking about the qubit, they differ precisely as I (and Koberinski & Mueller) pointed out. That is, it makes no sense to talk about measurements that you would expect to yield 1.5 or 2.3, etc., for a die. But, when measuring a qubit, the measurement configurations of a particular state vary continuously between that yielding +1 and that yielding -1, so one would expect those "in-between" measurements to produce something between +1 and -1, e.g., Stern-Gerlach spin measurements. Instead, you still obtain +1 and -1, but distributed so they average to the expected intermediate outcome, e.g., via vector projection for SG measurements. Your approach simply articulates that fact without offering any reason for why we don't just get the expected outcome to begin with.
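
    For concreteness, here is a minimal simulation of that averaging (a sketch assuming only the textbook Born-rule statistics for a spin 1/2 measurement along an axis tilted by ##\theta## from the preparation axis; all numbers illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sg_outcomes(theta, n):
        """Simulate n Stern-Gerlach outcomes (+1 or -1) for a spin prepared
        along +z and measured along an axis tilted by theta.
        Born rule: P(+1) = cos^2(theta/2)."""
        p_plus = np.cos(theta / 2) ** 2
        return np.where(rng.random(n) < p_plus, 1, -1)

    theta = np.pi / 3                  # measurement axis 60 degrees from preparation
    samples = sg_outcomes(theta, 100_000)
    print(samples.mean())              # ~ cos(theta) = 0.5, yet every outcome is +1 or -1
    ```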

  45. A. Neumaier says:
    "
    I had a rough reading and I don't see any hints of gravity in your analysis.

    Your claim "everything about renormalisation is understood" hasn't been demonstrated one way or the other.
    "
The renormalization problem is independent of gravity, and can be understood independently of it.

The only apparent problem with gravity is its nonrenormalizability, but this is not a real problem, as discussed in the link mentioned in post #27.

  46. A. Neumaier says:
    "
    I wanted to be able to quote such statements, without explicitly naming their author.
    "
This is against the conventions of good scientific conduct. Hiding such information may be good in a game, but not in scientific discourse. If you don't want to name authors, use your own words and speak on your own authority!
  47. A. Neumaier says:
    "So, your approach captures this averaging nicely and therefore will show how quantum results average to classical expectations for whatever experiment. But, it says nothing about why we don’t just get the value between O1 and O2 directly to begin with. That is what’s “surprising or ‘paradoxical’” about the qubit."
    I find this as little surprising as the case of measuring the state of a die by looking at the number of eyes found at its top when the die comes to rest. Although the die moves continuously we always get a discrete integer between 1 and 6.

    Similarly, the measurement of a qubit is – by definition – binary. Hence it can have only two results, though the control in the experiment changes continuously.

  48. RUTA says:
    "
    I don't see the qubit presenting a mystery. Everything about it was known in 1852, long before quantum mechanics got off the ground.
    "
    To understand the mystery of the qubit, consider a measurement of some state that results in outcome O1 every time. Then suppose you rotate your measurement of that same state and obtain outcome O2 every time. We would then expect that a measurement between those two should produce an outcome between O1 and O2, according to some classical model. But instead, we get a distribution of O1 and O2 that average to whatever we expected from our classical model. Here is how Koberinski & Mueller put it (as quoted in our paper https://www.mdpi.com/1099-4300/24/1/12):

    We suggest that (continuous) reversibility may be the postulate which comes closest to being a candidate for a glimpse on the genuinely physical kernel of “quantum reality''. Even though Fuchs may want to set a higher threshold for a “glimpse of quantum reality'', this postulate is quite surprising from the point of view of classical physics: when we have a discrete system that can be in a finite number of perfectly distinguishable alternatives, then one would classically expect that reversible evolution must be discrete too. For example, a single bit can only ever be flipped, which is a discrete indivisible operation. Not so in quantum theory: the state |0> of a qubit can be continuously-reversibly “moved over'' to the state |1>. For people without knowledge of quantum theory (but of classical information theory), this may appear as surprising or “paradoxical'' as Einstein's light postulate sounds to people without knowledge of relativity.

    So, your approach captures this averaging nicely and therefore will show how quantum results average to classical expectations for whatever experiment. But, it says nothing about why we don’t just get the value between O1 and O2 directly to begin with. That is what’s “surprising or ‘paradoxical’” about the qubit.
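
    The quoted postulate can be made concrete with a small sketch (assuming nothing beyond the standard qubit state space): a one-parameter family of unitaries carries |0> continuously and reversibly into |1>, something no sequence of discrete classical bit flips can do:

    ```python
    import numpy as np

    def u(theta):
        # One-parameter unitary family: u(0) is the identity, u(pi) maps |0> to |1>
        return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                         [np.sin(theta / 2),  np.cos(theta / 2)]])

    ket0 = np.array([1.0, 0.0])
    for theta in np.linspace(0, np.pi, 5):
        print(round(theta, 3), np.round(u(theta) @ ket0, 3))  # |0> gradually becomes |1>
    ```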

  49. A. Neumaier says:
    "
    So we give up even trying to know the true meaning of Renormalization?
    "
    Renormalization does not go beyond the limits of quantum theory.

    From a physics point of view, everything about renormalization is understood. The missing logical coherence (due to the lack of a rigorous nonperturbative version of renormalization) is a matter for the mathematicians to resolve.

  50. A. Neumaier says:
    "
    The questions are from Paul Davies.
    "
    and the other statements, including the first sentence?

    "
    Those were neither my deeper objections, nor my words.
    "
If you write something without giving credit, everyone assumes it is your statement!

  51. A. Neumaier says:
    "
    But I also have a deeper objection: the Everett interpretation takes quantum theory in its present form as the currency, in terms of which everything has to be explained or understood
    "
Your deeper objection seems to have no substance that would allow one to make progress.

    Whatever is taken as the currency in terms of which everything has to be explained or understood, it might be something effective due to an even deeper currency. We simply must start somewhere, and your deeper objection will always apply.

But according to current knowledge, quantum theory is a sufficient currency. Unlike in earlier ages, quantum theory explains the properties of physical reality (whatever it is, but in physics it certainly includes measurement devices!).

There are no experimental phenomena not accounted for by quantum physics, which can be taken to be the physics of the standard model plus modifications due to neutrino masses and semiclassical gravity, plus some version of Born's rule, plus all approximation schemes used to derive the remainder of physics. Thus everything beyond that is just speculation without experimental support.

  52. A. Neumaier says:
    "
    To determine a quantum state you need more than one measurement
    "
    Yes, that's what quantum tomography is about.

    To accurately determine a momentum vector one also needs more than one measurement.

    Thus I don't see why your comment affects any of my claims.

  53. A. Neumaier says:
    "
    The point is the interpretation. In the latter formulation, that's precisely what I mean when I say that ##\hat{\rho}## is an "equivalence class of preparation procedures". It's an equivalence class, because very different equipment can result in the same "emanating beam".
    "
    It results in different emanating beams, though their properties are the same.

It's an equivalence class only in the same irrelevant sense as in the claim that ''momentum is an equivalence class of preparations of particles in a classical source''. Very different equipment can result in particles with the same momentum.

    Using mathematical terminology to make such a simple thing complicated is quite unnecessary.
    "
    This I don't understand: A single measurement leads to some random result, but not the expectation value of these random results.
    "
    A single measurement of a system in local equilibrium leads to a fairly well-determined value for a current, say, and not to a random result.
    "
    Now I'm completely lost again.
    "
Because my new approach goes beyond your minimal interpretation. You should perhaps first read the paper rather than base a discussion on just reading the summary exposition. There is a reason why I spent a lot of time giving detailed physical arguments in the paper!

  54. vanhees71 says:
    "

    No. A (clearly purely mathematical) construction of equivalence classes is not involved at all!

A quantum source is a piece of equipment emanating a beam – a particular laser, or a fixed piece of radioactive material behind a filter with a hole, etc. Each quantum source has a time-dependent state ##\rho(t)##, which in the stationary case is independent of time ##t##.
    "
    The point is the interpretation. In the latter formulation, that's precisely what I mean when I say that ##\hat{\rho}## is an "equivalence class of preparation procedures". It's an equivalence class, because very different equipment can result in the same "emanating beam".
    "

    The quantum state implies known values of all quantum expectations (N-point functions). This includes smeared field expectation values that are (for systems in local equilibrium) directly measurable without any statistics involved. It also includes probabilities for statistical measurements.
    "
    This I don't understand: A single measurement leads to some random result, but not the expectation value of these random results.
    "

    It takes a meaning independent of POVMs.

• In classical mechanics, observables are the classical phase space variables ##p,q## and everything computable from them, in particular the kinetic and potential energy, forces, etc.
    • In quantum mechanics, observables are the quantum phase space variable ##\rho## (or its matrix elements) and everything computable from them, in particular the N-point functions of quantum field theory. For example, 2-point functions are often measurable through linear response theory.

    "
    Now I'm completely lost again. In the usual formalism the statistical operator refers to the quantum state and not to an observable. To determine a quantum state you need more than one measurement (of a complete set of compatible observables). See Ballentine's chapter (Sect. 8.2) on "state determination".

  55. A. Neumaier says:
    "
    The German version is quite short, but it doesn't seem to be too complicated.
    "
Not for a mathematician, who is familiar with measure theory and has mastered the subtleties of countable additivity…

But to a physics student you need to explain (and motivate in a physics context) the notion of a measure space, which is a lot of physically irrelevant overhead!
    The German version of Wikipedia then simplifies to the case of a discrete quantum measure, which is already everything needed to discuss measurement!

  56. A. Neumaier says:
    "
    That confirms my (still superficial) understanding that now I'm allowed to interpret ##\hat{\rho}## and the trace operation as expectation values in the usual statistical sense,
    "
There are two senses: one as a formal mathematical construct, giving quantum expectations, and
    the other in a theorem stating that when you do actual measurements, the limit of the sample means agrees with these theoretical quantum expectations.
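
    The second sense can be checked numerically in a minimal sketch (a hypothetical qubit state and observable, chosen only for illustration): the sample mean of Born-rule outcomes approaches the quantum expectation ##\tr\rho A##.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    rho = np.array([[0.7, 0.2], [0.2, 0.3]])   # Hermitian, positive semidefinite, trace 1
    A = np.array([[1.0, 1.0], [1.0, -1.0]])    # Hermitian observable

    q_expect = np.trace(rho @ A)               # theoretical quantum expectation tr(rho A)

    # Sample outcomes: eigenvalues a_i of A, drawn with Born probabilities <a_i|rho|a_i>
    evals, evecs = np.linalg.eigh(A)
    probs = np.einsum('ji,jk,ki->i', evecs.conj(), rho, evecs).real
    samples = rng.choice(evals, size=200_000, p=probs / probs.sum())

    print(q_expect, samples.mean())            # the sample mean converges to tr(rho A)
    ```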
    "
    and that makes the new approach much more understandable than what you called before "thermal interpretation".
    "
    I derive the thermal interpretation from this new approach. See Section 7.3 of my paper, and consider the paper to be a much more understandable pathway to the thermal interpretation, where in my book I still had to postulate many things without being able to derive them.
    "
I also think that the entire conception is not much different from the minimal statistical interpretation. The only change to the "traditional" concept seems to be that you use the more general concept of POVMs rather than the von Neumann filter measurements, which are only a special case.
    "
    The beginnings are not much different, but they are already simpler than the minimal statistical interpretation – which needs nontrivial concepts from spectral theory and a very nonintuitive assertion called Born's rule.
    "
    The only objection I have is the statement concerning EPR. It cannot be right, because local realistic theories are not consistent with the quantum-theoretical probability theory, which is proven by the violation of Bell's inequalities (and related properties of quantum-mechanically evaluated correlation functions, etc) through the quantum mechanical predictions and the confirmation of precisely these violations in experiments.
    "
Please look at my actual claims in the paper rather than judging from the summary in the Insight article! EPR is discussed in Section 5.4. There I claim elements of reality for quantum expectations of field operators, not for Bell-local realistic theories! Thus Bell inequalities are irrelevant.
    "
I take it that it is allowed also in your new conception to refer to ##\hat{\rho}## as the description of equivalence classes of preparation procedures, i.e., to interpret the word "quantum source" in the standard way
    "
    No. A (clearly purely mathematical) construction of equivalence classes is not involved at all!

A quantum source is a piece of equipment emanating a beam – a particular laser, or a fixed piece of radioactive material behind a filter with a hole, etc. Each quantum source has a time-dependent state ##\rho(t)##, which in the stationary case is independent of time ##t##.

    "
    All the quantum state implies are the probabilities for the outcome of measurements.
    "
    The quantum state implies known values of all quantum expectations (N-point functions). This includes smeared field expectation values that are (for systems in local equilibrium) directly measurable without any statistics involved. It also includes probabilities for statistical measurements.
    "
I think within your conceptual framework, "observable" takes a more general meaning as the outcome of some measurement device ("pointer reading") definable in the most general sense as a POVM.
    "
    It takes a meaning independent of POVMs.

• In classical mechanics, observables are the classical phase space variables ##p,q## and everything computable from them, in particular the kinetic and potential energy, forces, etc.
    • In quantum mechanics, observables are the quantum phase space variable ##\rho## (or its matrix elements) and everything computable from them, in particular the N-point functions of quantum field theory. For example, 2-point functions are often measurable through linear response theory.
  57. fresh_42 says:
    "
    Yes, Wikipedia describes them (at the very top of the section headed 'Definition') as the simplest POVMs. But the general concept (as defined in the Definition inside this section of Wikipedia) is an abstract monster far too complicated for most physics students.
    "
    The German version is quite short, but it doesn't seem to be too complicated.
  58. A. Neumaier says:
    "
Indeed, I think it's just a reformulation of the minimal statistical interpretation, taking into account the more modern approach to represent observables by POVMs rather than the standard formulation with self-adjoint operators (referring to von Neumann filter measurements, which are a special case of POVMs).
    "
    It is a minimal well-motivated new foundation for quantum physics including its statistical interpretation, based on a new postulate from which POVMs and everything else can be derived. And it has consequences far beyond the statistical interpretation, see the key points mentioned in post #9.
    "
It seems to be the same difference as between a distribution function and a probability measure, i.e., more of a different wording than a different approach.

    Am I missing something?
    "
    The point is that there are two quantum generalizations of probability, the old (von Neumann) one based on PVMs (in the discrete case orthogonal projectors summing to 1) and the more recent (1970+), far more generally applicable one, based on POVMs. See the Wikipedia article mentioned in post #14.
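
    The difference is easy to exhibit numerically (a sketch; the efficiency parameter is illustrative and not from the paper): a PVM consists of orthogonal projectors summing to 1, while a POVM only needs positive operators summing to 1, e.g., the standard model of an inefficient detector:

    ```python
    import numpy as np

    I2 = np.eye(2)

    # PVM: orthogonal projectors summing to the identity
    P_up = np.array([[1.0, 0.0], [0.0, 0.0]])
    P_dn = I2 - P_up
    assert np.allclose(P_up @ P_up, P_up) and np.allclose(P_up @ P_dn, 0.0)

    # POVM: positive operators summing to the identity, not necessarily projectors
    eta = 0.8                                   # illustrative detector efficiency
    M_up, M_dn, M_none = eta * P_up, eta * P_dn, (1 - eta) * I2
    assert np.allclose(M_up + M_dn + M_none, I2)
    print(np.allclose(M_up @ M_up, M_up))       # False: POVM elements need not be projectors
    ```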

  59. A. Neumaier says:
    "
Yes, entangled states produce CM results on average, but that statement simply ignores their violation of the Bell inequality, which can also be couched as a statistical, empirical fact. Indeed, the mystery of entanglement can also be shown empirically in very small (non-statistical) samples of individual measurements. This approach is therefore worthless for resolving that mystery.
    "
    Most things are worthless if you apply inadequate criteria for measuring their worth. The most expensive car is worthless if you want to climb a tree.

    I didn't set out to resolve what you regard here as a mystery. It is not needed for the foundations but a consequence of the general formalism once it has been derived.
    "
    It does however marry up beautifully with the reconstruction of QM via information-theoretic principles, which does resolve the mystery of the qubit and therefore entanglement.
    "
    I don't see the qubit presenting a mystery. Everything about it was known in 1852, long before quantum mechanics got off the ground.

  60. A. Neumaier says:
    "
    But aren't these also special cases of POVMs as described in the Wikipedia

    https://en.wikipedia.org/wiki/POVM
    "
    Yes, Wikipedia describes them (at the very top of the section headed 'Definition') as the simplest POVMs. But the general concept (as defined in the Definition inside this section of Wikipedia) is an abstract monster far too complicated for most physics students.

  61. RUTA says:
    "
    The concept is nowhere needed in this approach to quantum mechanics, hence there is no mystery about it at this level.

Entangled states are just very special cases of density operators expressed in a very specific basis. They become a matter of curiosity only if one looks for extremal situations that can be prepared only for systems in which a very small number of degrees of freedom are treated quantum mechanically.
    "
Yes, entangled states produce CM results on average, but that statement simply ignores their violation of the Bell inequality, which can also be couched as a statistical, empirical fact. Indeed, the mystery of entanglement can also be shown empirically in very small (non-statistical) samples of individual measurements. This approach is therefore worthless for resolving that mystery. It does however marry up beautifully with the reconstruction of QM via information-theoretic principles, which does resolve the mystery of the qubit and therefore entanglement.

  62. A. Neumaier says:
    "
As a layman in QM I looked up POVM and found a function ##\mu\, : \,\mathcal{A}\longrightarrow \mathcal{B(H)}## with self-adjoint operators ##0\leq \mu(A) \leq \operatorname{id}_{\mathcal{H}}## as values. It seems to be the same difference as between a distribution function and a probability measure, i.e., more of a different wording than a different approach.
    "
In the Insight article and the accompanying paper I only use the notion of a discrete quantum measure, defined as a finite family of Hermitian, positive semidefinite operators that sum to the identity.
    This is the quantum version of a discrete probability distribution, a finite family of probabilities summing to one. Thus on the level of foundations there is no need for the POVM concept.

    The concept of POVMs is unnecessarily abstract, but there are simple POVMs equivalent to discrete quantum measures; see Section 4.1 of my paper.
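
    A minimal sketch of such a discrete quantum measure and the resulting response probabilities ##p_k=\tr\rho M_k## (the operators and the state below are purely illustrative):

    ```python
    import numpy as np

    # Discrete quantum measure: Hermitian, positive semidefinite M_k summing to 1
    M = [0.6 * np.array([[1.0, 0.0], [0.0, 0.0]]),
         0.6 * np.array([[0.0, 0.0], [0.0, 1.0]]),
         0.4 * np.eye(2)]
    assert np.allclose(sum(M), np.eye(2))

    rho = np.array([[0.9, 0.1], [0.1, 0.1]])    # normalized density operator
    p = [np.trace(rho @ Mk) for Mk in M]        # response probabilities p_k = tr(rho M_k)
    print(p, sum(p))                            # nonnegative and summing to 1
    ```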

  63. A. Neumaier says:
    "
    Very nice description of how QM marries up with CM. How does this resolve the mystery of entanglement?
    "
    The concept is nowhere needed in this approach to quantum mechanics, hence there is no mystery about it at this level.

Entangled states are just very special cases of density operators expressed in a very specific basis. They become a matter of curiosity only if one looks for extremal situations that can be prepared only for systems in which a very small number of degrees of freedom are treated quantum mechanically.
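
    For instance (a standard textbook example, not specific to the paper), the Bell state is an ordinary rank-one density operator whose marginals happen to be maximally mixed:

    ```python
    import numpy as np

    # Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density operator
    ket = np.zeros(4)
    ket[0] = ket[3] = 1 / np.sqrt(2)
    rho = np.outer(ket, ket)                    # an ordinary 4x4 density operator, trace 1

    # Reduced state of the first qubit: partial trace over the second
    rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    print(rho_A)                                # maximally mixed marginal I/2
    ```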

  64. A. Neumaier says:
    "
    What's the main new idea here?
    "
    New compared to what?
    "
    From this summary, which is written nicely and clearly, I have a feeling that I knew all this before.
    "
For example, from where did you already know what I said in the very first sentence about quantum phase space coordinates?
    "
    This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum.
    "
    "
Am I missing something?
    "
    What I consider new for the general reader was specified at the beginning:
    "
    This Insight article […] gives for the first time a formally precise definition of quantum measurement that

    • is applicable without idealization to complex, realistic experiments;
    • allows one to derive the standard quantum mechanical machinery from a single, well-motivated postulate;
    • leads to objective (i.e., observer-independent, operational, and reproducible) quantum state assignments to all sufficiently stationary quantum systems.

    The paper shows that the amount of objectivity in quantum physics is no less than that in classical physics.
    "
If you know how to do all this consistently, you miss nothing. Otherwise you should read the full paper, where everything is argued in full detail, so that it can be easily integrated into a first course on quantum mechanics.

  65. RUTA says:
    Very nice description of how QM marries up with CM. In particular, its operational approach greatly clarifies the Born rule in terms of empiricism, which is the way I view physics as a physicist. I agree that the standard introduction contains otherwise “mysterious” mathematical abstractions. How does this resolve the mystery of entanglement?
  66. fresh_42 says:
    "
    Physics? :wink:

More seriously, I don't know what the equation you wrote means, so I cannot say what you miss.
    "
    A function from a measure space to the space of bounded operators on a Hilbert space.

  67. fresh_42 says:
    "
… the more modern approach to represent observables by POVMs rather than the standard formulation with self-adjoint operators …
    "

As a layman in QM I looked up POVM and found a function ##\mu\, : \,\mathcal{A}\longrightarrow \mathcal{B(H)}## with self-adjoint operators ##0\leq \mu(A) \leq \operatorname{id}_{\mathcal{H}}## as values. It seems to be the same difference as between a distribution function and a probability measure, i.e., more of a different wording than a different approach.

    Am I missing something?

  68. vanhees71 says:
Indeed, I think it's just a reformulation of the minimal statistical interpretation, taking into account the more modern approach to represent observables by POVMs rather than the standard formulation with self-adjoint operators (referring to von Neumann filter measurements, which are a special case of POVMs).
  69. Demystifier says:
What's the main new idea here? From this summary, which is written nicely and clearly, I have a feeling that I knew all this before. Am I missing something?
  70. vanhees71 says:
    Great!

That confirms my (still superficial) understanding that now I'm allowed to interpret ##\hat{\rho}## and the trace operation as expectation values in the usual statistical sense, and that makes the new approach much more understandable than what you called before "thermal interpretation". I also think that the entire conception is not much different from the minimal statistical interpretation. The only change to the "traditional" concept seems to be that you use the more general concept of POVMs rather than the von Neumann filter measurements, which are only a special case.

    The only objection I have is the statement concerning EPR. It cannot be right, because local realistic theories are not consistent with the quantum-theoretical probability theory, which is proven by the violation of Bell's inequalities (and related properties of quantum-mechanically evaluated correlation functions, etc) through the quantum mechanical predictions and the confirmation of precisely these violations in experiments.

The upshot is: As quantum theory predicts, the outcomes of all possible measurements on a system, prepared in any state ##\hat{\rho}## (I take it that it is allowed also in your new conception to refer to ##\hat{\rho}## as the description of equivalence classes of preparation procedures, i.e., to interpret the word "quantum source" in the standard way), are not due to predetermined values of the measured observables. All the quantum state implies are the probabilities for the outcome of measurements. The values of observables are thus only determined by the preparation procedure if they take a certain value with 100% probability. I think within your conceptual framework, "observable" takes a more general meaning as the outcome of some measurement device ("pointer reading") definable in the most general sense as a POVM.
