Quantum Physics via Quantum Tomography: A New Approach to Quantum Mechanics
This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum. Since everything follows from the well-established techniques of quantum tomography (the art and science of determining the state of a quantum system from measurements), the new approach may have the potential to lead in time to a consensus on the foundations of quantum mechanics. Full details can be found in my paper
- A. Neumaier, Quantum mechanics via quantum tomography, Manuscript (2022). arXiv:2110.05294v3
This paper gives for the first time a formally precise definition of quantum measurement that
- is applicable without idealization to complex, realistic experiments;
- allows one to derive the standard quantum mechanical machinery from a single, well-motivated postulate;
- leads to objective (i.e., observer-independent, operational, and reproducible) quantum state assignments to all sufficiently stationary quantum systems.
Moreover, the new approach shows that the amount of objectivity in quantum physics is no less than that in classical physics.
The following is an extensive overview of the most important developments in this new approach.
$$
\def\<{\langle} % expectation
\def\>{\rangle} % expectation
\def\tr{{\mathop{\rm tr}\,}}
\def\E{{\bf E}}
$$
Quantum states
The (Hermitian and positive semidefinite) density operator ##\rho## is taken to be the formal counterpart of the state of an arbitrary quantum source. This notion generalizes the polarization properties of light: In the case of the polarization of a source of light, the density operator represents a qubit and is given by a ##2\times 2## matrix whose trace is the intensity of the light beam. If expressed as a linear combination of Pauli matrices, the coefficients define the so-called Stokes vector. Its properties (encoded in the mathematical properties of the density operator) were first described by George Stokes (best known for the Navier-Stokes equations of fluid mechanics), who gave in 1852 (well before the birth of Maxwell's electrodynamics and long before quantum theory) a complete description of the polarization phenomenon, reviewed in my Insight article 'A Classical View of the Qubit'. For a stationary source, the density operator is independent of time.
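The polarization example can be made concrete in a few lines. The following minimal sketch (in Python with NumPy; the helper name density_from_stokes and the chosen Stokes vector are illustrative, not from the paper) builds a qubit density operator from a Stokes vector and checks its defining properties:

```python
import numpy as np

# Pauli matrices, preceded by the 2x2 identity (sigma_0)
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def density_from_stokes(S):
    """Density operator rho = (1/2) sum_j S_j sigma_j for a Stokes vector
    S = (S0, S1, S2, S3); S0 = tr(rho) is the intensity of the beam."""
    return 0.5 * sum(Sj * sj for Sj, sj in zip(S, sigma))

rho = density_from_stokes([1.0, 0.3, 0.2, 0.5])  # a partially polarized beam
print(np.trace(rho).real)        # intensity S0 = 1.0
print(np.linalg.eigvalsh(rho))   # nonnegative: rho is positive semidefinite
```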
The detector response principle
A quantum measurement device is characterized by a collection of finitely many detection elements labeled by ##k## that respond statistically to the quantum source according to the following detector response principle (DRP):
- A detection element ##k## responds to an incident stationary source with density operator ##\rho## with a nonnegative mean rate ##p_k## depending linearly on ##\rho##. The mean rates sum to the intensity of the source. Each ##p_k## is positive for at least one density operator ##\rho##.
If the density operator is normalized to intensity one (which we shall do in this exposition) the response rates form a discrete probability measure, a collection of nonnegative numbers ##p_k## (the response probabilities) that sum to 1.
The DRP, abstracted from the polarization properties of light, relates theory to measurement. By its formulation it allows one to discuss quantum measurements without the need for quantum mechanical models for the measurement process itself. The latter would involve the detailed dynamics of the microscopic degrees of freedom of the measurement device – clearly out of the scope of a conceptual foundation on which to erect the edifice of quantum physics.
The main consequence of the DRP is the detector response theorem. It asserts that for every measurement device, there are unique operators ##P_k## which determine the rates of response to every source with density operator ##\rho## according to the formula
$$
p_k=\langle P_k\rangle:=\tr\rho P_k.
$$
The ##P_k## form a discrete quantum measure; i.e., they are Hermitian, positive semidefinite and sum to the identity operator ##1##. This is the natural quantum generalization of a discrete probability measure. (In more abstract terms, a discrete quantum measure is a simple instance of a so-called POVM, but the latter notion is not needed for understanding the main message of the paper.)
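As a concrete illustration, here is a minimal sketch (Python with NumPy) of a discrete quantum measure and the response formula ##p_k=\tr\rho P_k##; the two-element measure modeling an inefficient polarization filter is an assumption made for illustration only:

```python
import numpy as np

eta = 0.8                                   # detection efficiency (assumed)
P1 = eta * np.array([[1, 0], [0, 0]], dtype=complex)  # "horizontal" element
P2 = np.eye(2) - P1                                   # complementary element

def response_rates(rho, quantum_measure):
    """Response rates p_k = tr(rho P_k) of the detection elements."""
    return [np.trace(rho @ P).real for P in quantum_measure]

rho = 0.5 * np.eye(2)                       # unpolarized source, intensity 1
p = response_rates(rho, [P1, P2])
print(p, sum(p))      # nonnegative rates summing to 1, as the DRP requires
```

Both elements are Hermitian and positive semidefinite and sum to the identity, so they form a discrete quantum measure in the sense just defined.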
Statistical expectations and quantum expectations
Thus a quantum measurement device is characterized formally by means of a discrete quantum measure. To go from detection events to measured numbers one needs to provide a scale that assigns to each detection element ##k## a real or complex number (or vector) ##a_k##. We call the combination of a measurement device with a scale a quantum detector. The statistical responses of a quantum detector define the statistical expectation
$$
\E(f(a_k)):=\sum_{k\in K} p_kf(a_k)
$$
of any function ##f(a_k)## of the scale values. As always in statistics, this statistical expectation is operationally approximated by finite sample means of ##f(a)##, where ##a## ranges over a sequence of actually measured values. However, the exact statistical expectation is an abstraction of this; it works with a nonoperational probabilistic limit of infinitely many measured values so that the replacement of relative sample frequencies by probabilities is justified. If we introduce the quantum expectation
$$
\langle A\rangle:=\tr\rho A
$$
of an operator ##A## and say that the detector measures the quantity
$$
A:=\sum_{k\in K} a_kP_k,
$$
it is easy to deduce from the detector response theorem the following version of Born's rule (BR):
- The statistical expectation of the measurement results equals the quantum expectation of the measured quantity.
- The quantum expectations of the quantum measure constitute the probability measure characterizing the response.
This version of Born’s rule applies without idealization to results of arbitrary quantum measurements.
(In general, the density operator is not necessarily normalized to intensity ##1##; without this normalization, we call ##\langle A\rangle## the quantum value of ##A## since it does not satisfy all properties of an expectation.)
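A minimal numerical check of this version of Born's rule (Python with NumPy; the quantum measure, scale values, and state are illustrative assumptions): sampling detection events with the response probabilities ##p_k## reproduces the quantum expectation ##\langle A\rangle=\tr\rho A## of the measured quantity.

```python
import numpy as np
rng = np.random.default_rng(0)

# An illustrative qubit measure with scale values +1 and -1
P = [np.array([[1, 0], [0, 0]], dtype=complex),
     np.array([[0, 0], [0, 1]], dtype=complex)]
a = np.array([+1.0, -1.0])
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # normalized state

p = np.array([np.trace(rho @ Pk).real for Pk in P])  # response probabilities
A = sum(ak * Pk for ak, Pk in zip(a, P))             # measured quantity

samples = rng.choice(a, size=100_000, p=p)           # simulated detections
print(samples.mean())              # statistical expectation (sample mean)
print(np.trace(rho @ A).real)      # quantum expectation <A>; here 0.4
```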
Projective measurements
The conventional version of Born's rule – the traditional starting point relating quantum theory to measurement in terms of eigenvalues, found in all textbooks on quantum mechanics – is obtained by specializing the general result to the case of exact projective measurements. The spectral notions do not appear as postulated input as in traditional expositions, but as consequences of the derivation in a special case – the case where ##A## is a self-adjoint operator, hence has a spectral resolution with real eigenvalues ##a_k##, and the ##P_k## are the projection operators onto the eigenspaces of ##A##. In this special case, we recover the traditional setting with all its ramifications together with its domain of validity. This sheds new light on the understanding of Born's rule and eliminates the most problematic features of its uncritical use.
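The following sketch (Python with NumPy; the observable and state are illustrative assumptions) shows this projective special case: the spectral resolution of a Hermitian ##A## supplies the eigenvalues ##a_k## and projectors ##P_k##, and the general response formula reduces to the textbook Born rule.

```python
import numpy as np

A = np.array([[2, 1], [1, 2]], dtype=complex)        # a Hermitian observable
rho = np.array([[0.6, 0.1], [0.1, 0.4]], dtype=complex)

# Spectral resolution: eigenvalues a_k and rank-1 projectors P_k.
# (For degenerate eigenvalues, the projectors belonging to the same
# eigenspace would have to be summed.)
a, V = np.linalg.eigh(A)
projectors = [np.outer(V[:, k], V[:, k].conj()) for k in range(len(a))]

p = [np.trace(rho @ Pk).real for Pk in projectors]   # textbook Born rule
print(p, sum(p))                                     # probabilities, sum = 1
print(sum(ak * pk for ak, pk in zip(a, p)))          # statistical expectation
print(np.trace(rho @ A).real)                        # equals tr(rho A) = 2.2
```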
Many examples of realistic measurements are shown to be measurements according to the DRP but have no interpretation in terms of eigenvalues. For example, joint measurements of position and momentum with limited accuracy, essential for recording particle tracks in modern particle colliders, cannot be described in terms of projective measurements; Born’s rule in its pre-1970 forms (i.e., before POVMs were introduced to quantum mechanics) does not even have an idealized terminology for them. Thus the scope of the DRP is far broader than that of the traditional approach based on highly idealized projective measurements. The new setting also accounts for the fact that in many realistic experiments, the final measurement results are computed from raw observations, rather than being directly observed.
Operational definitions of quantum concepts
Based on the detector response theorem, one gets an operational meaning for quantum states, quantum detectors, quantum processes, and quantum instruments, using the corresponding versions of quantum tomography.
In quantum state tomography, one determines the state of a quantum system with a ##d##-dimensional Hilbert space by measuring sufficiently many quantum expectations and solving a subsequent least squares problem (or a more sophisticated optimization problem) for the ##d^2-1## unknowns of the state. Quantum tomography for quantum detectors, quantum processes, and quantum instruments proceeds in a similar way.
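The procedure can be illustrated for a qubit, where ##d=2## and hence ##d^2-1=3## unknowns. The sketch below (Python with NumPy; the measurement directions and noise level are illustrative assumptions) simulates noisy quantum expectations of spin along several directions and recovers the state by least squares:

```python
import numpy as np
rng = np.random.default_rng(1)

paulis = np.array([[[0, 1], [1, 0]],
                   [[0, -1j], [1j, 0]],
                   [[1, 0], [0, -1]]], dtype=complex)

rho_true = np.array([[0.75, 0.2 - 0.1j], [0.2 + 0.1j, 0.25]])
s_true = np.array([np.trace(rho_true @ s).real for s in paulis])

# Quantum expectations of spin along 10 random unit directions n_i,
# i.e. tr(rho n.sigma) = n.s, simulated with small statistical noise.
N = rng.normal(size=(10, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)
m = N @ s_true + rng.normal(scale=0.01, size=10)

s_fit, *_ = np.linalg.lstsq(N, m, rcond=None)        # the least squares step
rho_fit = 0.5 * (np.eye(2) + np.einsum('j,jkl->kl', s_fit, paulis))
print(np.round(rho_fit, 3))                          # close to rho_true
```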
These techniques serve as foundations for far-reaching derived principles; for quantum systems with a low-dimensional density matrix, they are also practically relevant for the characterization of sources, detectors, and filters. A quantum process, also called a linear quantum filter, is formally described by a completely positive map. The operator sum expansion of completely positive maps forms the basis for the derivation of the dynamical laws of quantum mechanics – the quantum Liouville equation for density operators, the conservative time-dependent Schrödinger equation for pure states in a nonmixing medium, and the dissipative Lindblad equation for states in mixing media – by a continuum limit of a sequence of quantum filters. This derivation also reveals the conditions under which these laws are valid. An analysis of the oscillations of quantum values of states satisfying the Schrödinger equation produces the Rydberg-Ritz combination principle underlying spectroscopy, which marked the onset of modern quantum mechanics. It is shown that in quantum physics, normalized density operators play the role of phase space variables, in complete analogy to the classical phase space variables position and momentum. Observations with highly localized detectors naturally lead to the notion of quantum fields whose quantum values encode the local properties of the universe.
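For concreteness, here is a minimal sketch (Python with NumPy) of a quantum process in operator-sum form; the amplitude-damping Kraus operators are an illustrative assumption, and iterating the map mimics the sequence of quantum filters whose continuum limit yields the dynamical laws:

```python
import numpy as np

gamma = 0.1                                  # damping per step (assumed)
K = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
     np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

def apply_process(rho, kraus):
    """Operator-sum expansion of a completely positive, trace-preserving
    map: rho -> sum_j K_j rho K_j^dagger, with sum_j K_j^dagger K_j = 1."""
    return sum(Kj @ rho @ Kj.conj().T for Kj in kraus)

rho = 0.5 * np.ones((2, 2), dtype=complex)   # the pure state |+><+|
for _ in range(20):                          # a sequence of identical filters
    rho = apply_process(rho, K)
print(np.round(rho, 3))        # trace stays 1; the state relaxes to |0><0|
```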
Thus the DRP leads naturally to all basic concepts and properties of modern quantum mechanics. It is also shown that quantum physics has a natural phase space structure where normalized density operators play the role of quantum phase space variables. The resulting quantum phase space carries a natural Poisson structure. Like the dynamical equations of conservative classical mechanics, the quantum Liouville equation has the form of Hamiltonian dynamics in a Poisson manifold; only the manifold is different.
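A minimal sketch of the conservative case (Python with NumPy; the Hamiltonian is an illustrative assumption): the quantum Liouville equation ##\dot\rho=-i[H,\rho]## (with ##\hbar=1##) is solved exactly via the eigendecomposition of ##H##, and the oscillating off-diagonal quantum values are the ones analyzed in the Rydberg-Ritz discussion above.

```python
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # assumed Hamiltonian
rho0 = np.array([[1.0, 0], [0, 0]], dtype=complex)      # initial state

def evolve(rho, H, t):
    """rho(t) = exp(-iHt) rho exp(iHt), computed exactly from the
    eigendecomposition of H (hbar = 1)."""
    a, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * a * t)) @ V.conj().T
    return U @ rho @ U.conj().T

rho_t = evolve(rho0, H, t=1.0)
print(np.trace(rho_t).real)      # the trace is conserved
print(np.round(rho_t, 3))        # off-diagonal quantum values oscillate
```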
Philosophical consequences
The new approach has significant philosophical consequences. When a source is stationary, response rates, probabilities, and hence quantum values, can be measured in principle with arbitrary accuracy, in a reproducible way. Thus they are operationally quantifiable, independent of an observer. This makes them objective properties, in the same sense in which positions and momenta are objective properties in classical mechanics. Thus quantum values are seen to be objective, reproducible elements of reality in the sense of the famous paper
- A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 (1935), 777-781.
The assignment of states to stationary sources is as objective as any assignment of classical properties to macroscopic objects. In particular, probabilities appear – as in classical mechanics – only in the context of statistical measurements. Moreover, all probabilities are objective frequentist probabilities in the sense employed everywhere in experimental physics – classical and quantum. Like all measurements, probability measurements are of limited accuracy only, approximately measurable as observed relative frequencies.
Among all quantum systems, classical systems are characterized as those whose observable features can be correctly described by local equilibrium thermodynamics, as predicted by nonequilibrium statistical mechanics. This leads to a new perspective on the quantum measurement problem and connects to the thermal interpretation of quantum physics, discussed in detail in my 2019 book ‘Coherent Quantum Physics‘ (de Gruyter, Berlin 2019).
Conclusion
To summarize, the new approach gives an elementary and self-contained deductive approach to quantum mechanics. A suggestive notion for what constitutes a quantum detector and for the behavior of its responses leads to a definition of measurement from which the modern apparatus of quantum mechanics can be derived in full generality. The statistical interpretation of quantum mechanics is not assumed, but the version of it that emerges is discussed in detail. The standard dynamical and spectral rules of introductory quantum mechanics are derived with little effort. At the same time, we find the conditions under which these standard rules are valid. A thorough, precise discussion is given of various quantitative aspects of uncertainty in quantum measurements. Normalized density operators play the role of quantum phase space variables, in complete analogy to the classical phase space variables position and momentum.
There are implications of the new approach for the foundations of quantum physics. By shifting the attention from the microscopic structure to the experimentally accessible macroscopic equipment (sources, detectors, filters, and instruments) we get rid of all potentially subjective elements of quantum theory. There are natural links to the thermal interpretation of quantum physics as defined in my book.
The new picture is simpler and more general than the traditional foundations, and closer to actual practice. This makes it suitable for introductory courses on quantum mechanics. Complex matrices are motivated from the start as a simplification of the mathematical description. Both conceptually and in terms of motivation, introducing the statistical interpretation of quantum mechanics through quantum measures is simpler than introducing it in terms of eigenvalues. To derive the most general form of Born’s rule from quantum measures one just needs simple linear algebra, whereas even to write down Born’s rule in the traditional eigenvalue form, unfamiliar stuff about wave functions, probability amplitudes, and spectral representations must be swallowed by the beginner – not to speak of the difficult notion of self-adjointness and associated proper boundary conditions, which is traditionally simply suppressed in introductory treatments.
Thus there is no longer an incentive for basing quantum physics on measurements in terms of eigenvalues – a special, highly idealized case – in place of the real thing.
Postscript
In the meantime I revised the paper. The new version is better structured and contains a new section on high precision quantum measurements, where the 12 digit accuracy determination of the gyromagnetic ratio through the observation and analysis of a single electron in a Penning trap is discussed in some detail. The standard analysis assumes that the single electron is described by a time-dependent density operator following a differential equation. While in the original papers this involved arguments beyond the traditional (ensemble-based and knowledge-based) interpretations of quantum mechanics, the new tomography-based approach applies without difficulties.
I forgot to give the link:
Once more the citation of Peres's book:
A. Peres, Quantum Theory: Concepts and Methods, Kluwer Academic Publishers, New York, Boston, Dordrecht, London, Moscow (2002).
I don't know, whether he uses the phrase "weak measurement", but he discusses POVMs and gives a very concise description of what's predicted by QT. It seems to be very much along the lines you propose in your paper (as far as I think I understand it).
Please give a page number. If I remember correctly, Peres never mentions the notion of weak measurement. A search in scholar.google.com for
gives no hits at all.
I am also talking about the meaning of the formalism, but using more careful language. I do this without invoking Born's rule, which you take to be a blanket phrase covering everything probabilistic, independent of its origin. This blurs the conceptual distinctions and makes it impossible to discuss details with you.
In the mathematical formalism there is also no Born's rule, but only the trace rule defining quantum expectations. Born's rule only relates the trace rule to measurements, and it does so only in special cases – namely when measurements are made on independent and identically prepared ensembles.
As long as there are no measurements – and this includes everything in books on quantum mechanics or quantum field theory when they derive formulas for scattering amplitudes or N-point functions – everything is independent of Born's rule. The formula ##\langle A\rangle:=\mathrm{Tr}\,\rho A## is just a definition of the meaning of the string on the left in terms of that on the right. It has a priori nothing to do with measurement, and hence with Born's rule.
But it seems to me that you simply equate Born's rule with the trace rule, independent of its relation to measurement. Equating the two trivially makes everything dependent on Born's rule. But this makes Born's rule vacuous, and its application to measurements invalid in contexts where no ensemble of independent and identically prepared systems exists.
Could you please point out to which paper (and which page) you refer here? I found no mention of weak measurements or POVMs in the geonium paper by Brown and Gabrielse that you mentioned earlier. The latter is quite interesting but very long, so it takes a lot of time to digest the details. I'll comment on it in due time in a new thread.
I discussed a different single photon scenario, that of ''photons on demand'', in a lecture given some time ago:
Of course in measurements there is no Hilbert space, no operators, no trace rule, no Born's rule. You just measure observables and evaluate the statistics of their outcomes, take into account the specifics of the apparatus etc. There is no generally valid formalism for this but it has to be analyzed for any experimental setup. That's not what I'm discussing and it's not related to the interpretation of QT.
Is it so difficult to understand that
Once you can accept that one can make this difference, you'll be able to understand everything I said. And you'll benefit a lot from this understanding!
No.
Point 3 is a mathematically precise version of your statement that a state is given by an equivalence class of identically prepared systems.
These are your magic wand and your magic spell, with which everything done in quantum mechanics looks as being based on Born's rule.
But your magic ignores the assumptions in Born's rule, hence is like concluding ##1=2## from ##x=2x## by division by ##x## without checking the assumption ##x\ne 0##.
They are understandable with the pragmatic use of the quantum formalism that uses whatever interpretation explains an experiment. They are not understandable in terms of only Born's rule, since in experiments with single quantum systems, the assumption in Born's rule cannot be satisfied.
These results are very accurately described by Q(F)T, which uses only mathematics (and not Born's rule) to predict this value of g-2. QED predicts the correct value of g-2 from the QED action purely by mathematical calculations, without any reference to measurement. Hence one has nowhere an opportunity to use Born's rule, since the latter only says something about quantum observables measured by means of averaging over measurement results obtained from independent and identically prepared systems.
Born's rule would however be needed to interpret probabilities measured from scattering experiments, for which Weinberg correctly invokes Born's rule. This is a typical case where the assumption present in Born's rule is satisfied.
How then can it be that these results are very accurately described by Q(F)T, which uses Born's rule to predict this value of (g-2)?
If Born's rule were not applicable here, the experimental results couldn't be understood with standard QT, but they obviously are!
To get expectation values you need the probabilities/probability distributions, which are given by Born's rule in the formalism. That interpretation of the state, ##\hat{\rho}##, leads immediately to ##\langle A \rangle=\mathrm{Tr}(\hat{\rho} \hat{A})##. For me all that is subsumed under "Born's rule". Instead of saying "Born's rule" I also could say "the probabilistic interpretation of ##\hat{\rho}##", but that's very unusual among physicists.
The measurement of the gyrofactor of the electron using a Penning trap is as precise as it is
because certain experimental situations happen to have very accurate descriptions in terms of a few-parameter quantum stochastic process, and the gyrofactor is one of these parameters.
Though not interpretable in terms of Born's rule or POVMs, such processes are able to describe single time-dependent quantum systems, just as classical stochastic processes are able to describe single time-dependent classical systems.
The facts that there are only very few parameters and that one can measure arbitrarily long time series imply that one can use statistical parameter estimation techniques to find the parameters to arbitrary accuracy. The fact that the models are accurate implies that the parameters found for the gyrofactor accurately represent the gyrofactor.
I am now reading the papers you and Fra cited and will give details once I have digested them.
There I discuss the case of nonstationary quantum systems.
Please do not confuse contradictions and non-applicability! These are two very different things!
Born's rule is not just taking averages of anything!
I use quantum expectations all the time, but Born's rule only when I interpret a quantum expectation in terms of measuring independent and identically prepared systems – which is a necessary requirement for Born's rule to hold.
How do you define the experimental meaning of ##\langle A\rangle## when ##A## is not normal, which is often the case in QFT?
Electrons in accelerators come in large bunches, not as single electrons….
It is only relative to the speed and accuracy with which reliable measurements can be taken. This is independent of any information processing on the side of the agent.
My understanding of the paper is that it is very close to the view as provided, e.g., by Asher Peres in his book
A. Peres, Quantum Theory: Concepts and Methods, Kluwer Academic Publishers, New York, Boston, Dordrecht, London, Moscow (2002).
What's new is the order of presentation, i.e., it starts from the most general case of "weak measurements" (described by POVMs) and then brings in the standard-textbook notion of idealized von Neumann filter measurements as a special case. This makes a lot of sense if you are aiming at a deductive (or even axiomatic) formulation of QT. The only problem seems to be that this view is not what the author wants to express, and I have no idea what the intended understanding is.
Maybe it would help when a concrete measurement is discussed, e.g., the nowadays standard experiment with single ("heralded") photons (e.g., produced with parametric down conversion using a laser and a BBO crystal, using the idler photon as the "herald" and then doing experiments with the signal photon). In my understanding such a "preparation procedure" determines the state, i.e., the statistical operator in the formalism. Then one can do an experiment, e.g., a Mach-Zehnder interferometer with polarizers, phase shifters etc. in the two arms, and then you have photon detectors to do single-photon measurements. It should be possible to describe such a scenario completely with the formalism proposed in the paper and then to point out where, in the view of the author, this contradicts the standard statistical interpretation a la Born.
I didn't claim a contradiction with Born's rule; I claimed its nonapplicability. These are two very different claims.
You seem to follow the magic interpretation of quantum mechanics. Whenever you see statistics on measurements done on a quantum system you cast the magic spell "Born's probability interpretation", and whenever you see a calculation involving quantum expectations you wave a magic wand and say "ah, an application of Born's rule". In this way you pave your way through every paper on quantum physics and say with satisfaction at the end, "This paper proves again what I knew for a long time, that the interpretation of quantum mechanics is solely based on the probabilistic interpretation of the state a la Born".
You simply cannot see the difference between the two statements
The first statement is Born's rule, in the generalized form discussed in my paper.
The second statement (which you repeatedly employed in your argumentation) is an invalid generalization, since the essential hypothesis is missing under which the statement holds. Whenever one invokes Born's rule without having checked that the ensemble involved is actually independent and identically prepared, one commits a serious scientific error.
It is an error of the same kind as to conclude from ##x=2x## through division by ##x## that ##1=2##, because the assumption necessary for the argument was ignored.
This is not a contradiction since both the gyro-factor of electrons and the charge-mass ratio of the antiproton are not observables in the traditional quantum mechanical sense but constants of Nature.
A constant is stationary and can in principle be arbitrarily well measured, while the arbitrarily accurate measurement of the state of a nonstationary system is in principle impossible. This holds already in classical mechanics, and there is no reason why less predictable quantum mechanical systems should behave otherwise.
This is because of your magic practices in conjunction with mixing up "contradiction to" and "not applicable". Both prevent you from seeing what everyone else can see.
Nowhere in your paper can I see that there is anything NOT based on Born's rule. You use the generalization to POVMs, but I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
This statement is indeed true if you restrict standard quantum theory to mean the formal apparatus plus Born's rule in von Neumann's form. Already the Stern-Gerlach experiment discussed above is a counterexample.
This is because standard quantum theory was never restricted to a particular interpretation of the formalism. Physicists advancing the scope of applicability of quantum theory were always pragmatic and used whatever they found suitable to match the mathematical quantum formalism to particular experimental situations. This – and not what the introductory textbooks tell – was and is the only relevant criterion for the interpretation of quantum mechanics. The textbook version is only a simplified a posteriori rationalization.
This pragmatic approach worked long ago for the Stern-Gerlach experiment. The same pragmatic stance has also worked for decades for the quantum jump and quantum diffusion approaches to nonstationary individual quantum systems, to the extent of leading to a Nobel prize. They simply need more flexibility in the interpretation than Born's rule offers. What is needed is discussed in Section 4.5 of my paper.
It is a preparation, but not one to which Born's rule applies. Born's rule is valid only if the ensemble consists of independent and identically prepared states. You need independence because, e.g., immediately repeated position measurements of a particle do not respect Born's rule, and you need identical preparation because there is only one state in Born's formula.
In the case under discussion, one may interpret the situation as repeated preparation, as you say. But unless the system is stationary (and hence uninteresting in the context of the experiment under discussion), the state prepared before the ##k##th measurement is different for each ##k##. Moreover, due to the preceding measurement this state is only inaccurately known and correlated with the preceding one. Thus the ensemble prepared consists of nonindependent and nonidentically prepared states, for which Born's rule is silent.
Because Born's rule assumes identical preparations which is not the case when a nonstationary system is measured repeatedly. I am not denying the validity but the applicability of the rule!
I need to read the paper before I can go into details.
Thanks for the pointers. Will reply in more detail after having read more. I expect that it will mean that the instances of case (B) are not so different from those of case (A) in my earlier classification of single-particle measurements.
Thanks. I added to the Insight article a link to Wikipedia and an explaining paragraph.
https://inspirehep.net/literature?sort=mostrecent&size=25&page=1&q=find title photons and cn alice
Polarization measurements for dileptons or photons are very rare today. There's a polarization measurement by the NA60 collaboration on di-muons:
https://arxiv.org/abs/0812.3100
Are there papers where I can read about ALICE measurements and about polarization measurements in the above review?
You use yourself Born's rule all the time since everything is based on taking averages of all kinds defined by ##\langle A \rangle=\mathrm{Tr} \hat{\rho} \hat{A}## (if you use normalized ##\hat{\rho}##'s).
The 'thus' is not warranted.
Quantum field theory is completely independent of Born's rule. It is about computing ##N##-point functions of interest.
Weinberg's QFT book (Vol.1) mentions Born's rule exactly twice – once in its review of quantum mechanics, and once where the probabilistic interpretation of the scattering amplitude is derived. In the latter he assumes an ensemble of identically prepared particles to give a probabilistic meaning in terms of the statistics of collision experiments.
Nothing at all about single systems!
I don't think that, and I explicitly said this. The point is that this statistics is not statistics about an ensemble of identically prepared systems, hence has nothing to do with what Born's rule is about.
In this case the system identically prepared is the throw, not the coin. The coin is a system described by a rigid body, with a 12D phase space state ##z(t)##, in contact with an environment that randomizes its motion through its collision with the table. The throw is what you can read off when the coin is finally at rest.
The state of the coin is complicated and cannot be identically prepared (otherwise it would fall identically and produce identical throws). But the state of the throw is simple – just a binary variable, and the throwing setup prepares its state identically. Each throw is different – only the coin is the same; that's why one gets an ensemble.
This is quite different from a quantum particle in a trap, unless (as in a throw) you reset the state of the particle in the trap before each measurement. But then the observation becomes uninteresting. The interesting thing is to observe the particle's time dependence. Here the state changes continuously, as with the coin and not as with the throw.
In Dehmelt's paper it is described how various quantities are measured using single electrons/ions in a Penning trap. I still don't understand why you think there cannot be statistics collected using a single quantum. I can also get statistics by throwing a single coin again and again to check whether it's a fair one or not. I just do the "random experiment" again and again using the same quantum and collect statistics and evaluate confidence levels and all that. Another review paper, which may be more to the point, because it covers both theory and experiment, is
https://doi.org/10.1103/RevModPhys.58.233
I also think that very rarely one does full state determinations. What's done are preparations and subsequent measurements of observables of interest.
I'm also not an experimental physicist and far from knowing any details of how the current CERN experiments (ATLAS, CMS, and ALICE) measure electrons and photons. I use their results to compare to theoretical models, which are based on standard many-body QFT and simulations of the fireball created in heavy-ion collisions. All this is based on standard quantum theory and thus after all on Born's rule. Here you can look at some papers by the ALICE collaboration as one example for what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons), that's a pretty new topic, and of course an even greater challenge than the spectra measured for decades now. After all these are "rare probes".
I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.
What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.
This is very far from an accurate state determination….
Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single of these systems. A probability distribution almost never determines an individual result.
I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.
Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.
Science has no single betting strategy. Each scientist makes choices of his or her own preference, but published is only what passed the rules of scientific discourse, which rules out most poor judgment on the individual's side. What science knows is an approximation to what it thinks it knows, and this approximation is quite good, otherwise resulting technology based on it would not work and not sell.
Detectors measure particles and photons, of course. Real and virtual photons (dileptons) are among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.
One can do statistics with any collection of measurement results.
But in the case you mention, where the data come from a single particle, the statistics is not governed by Born's rule. Each data point is obtained at a different time, and at each time the particle is in a different state affected in an unspecified way by the previous measurement. So how could you calculate the statistics from Born's rule?
Instead, the statistics is treated in the way I discussed in case (A).
If the nondestructive single photon measurements result in a time series, the situation for this photon is the same as for the particle in the Penning trap.
I didn't know that accelerators measure momentum and polarization of individual photons. Could you provide me with a reference where I can read details? Then I'll be able to show you how it matches the description in my paper.
https://doi.org/10.1088/0031-8949/1988/T22/016
but isn't this indeed a paradigmatic example for your formulation?
Also nondestructive photon measurements are done, but also the standard photon detection of course measures properties of single photons like energy, momentum, and polarization, or what else do you think the photon measurements in all the accelerators in HEP and heavy-ion physics provide?
In this case the traditional interpretation in terms of Born's rule is vacuous since probabilities are meaningful only in an ensemble context but individual quantum systems do not form an ensemble.
Instead, these experiments are traditionally interpreted in the fashion of classical stochastic processes, which have a trajectory interpretation for individual realizations, so that individual systems can be discussed. For example, this explains observed quantum jumps in atoms in an ion trap subject to external fields; see, e.g.,
In the context of the present paper, one has to distinguish several cases of tiny quantum systems.
(A) A tiny quantum system mounted on a macroscopic object.
(B) A tiny quantum system freely moving in a homogeneous macroscopic medium.
In this way it makes a lot of sense again, but as I said before, today the experimentalists are able to prepare single-quantum (particles, atoms, molecules, photons) states and observe them. The observations are of course always via macroscopic measurement devices.
It may well be that there is a more comprehensive theory than contemporary quantum theory. Who knows what the solution of the problem of describing the gravitational interaction quantum mechanically will be, and what this might imply for the description of spacetime; but there's no hint of such a new theory. Empirically there are no phenomena which are not described by our standard theories, as incomplete as they might be.
I think the contrary makes more sense. Macroscopic objects as we observe them are only consistently describable with quantum theory, given the atomistic structure. There'd not even be stable atoms as bound states of charged particles, let alone molecules and condensed matter, within classical physics. The "classical reality" is emergent, understood via the classical physical laws as an effective description of the dynamical behavior of macroscopic coarse-grained observables.
It describes the real use of QT as a physical theory as done by physicists since 1970, when the more comprehensive view of measurement that goes beyond von Neumann's was introduced.
Yes. You probably need to read the whole to see the differences and understand the new point of view.
Experiments in physics laboratories use controllable sources, filters and detectors to explore the nature of the microscopic world. All these are macroscopic objects, the only things that can be observed. The microscopic aspects are not observed but inferred; they define inferables, not observables. On p.13 of my paper I quote:
A quantum source is simply a piece of equipment that has some measurable effect on detectors placed at some distance from it. How this effect comes about is not observed but described by theoretical models whose consequences can be checked against experimental results. The techniques and results described in my paper are agnostic about the models (Section 7.1); they are just about the observable (and hence macroscopic) aspects.
This is the reason why the state is assigned to the source and not to something postulated as being transmitted. More precisely, the measured property (a quantum expectation) is assigned to the particular location at which the measurement is done, which leads naturally to a quantum field picture (Section 5.4).
I prefer to frame the known in an optimally rational way, rather than to speculate about the unknown.
I define classical reality in Section 7.2 of my paper as that part of quantum reality that can be deduced from it in the form of a local equilibrium description. Thus the context is quantum physics itself.
Everything in my paper is observer-independent. It doesn't matter who observes, except that poor observations lead to poor approximations of the state.
The real advance is to make the quantum state a property of the source – i.e., of a macroscopic object.
This makes all talk about fictitious stuff (like ensembles, equivalence classes, multiple worlds, or presumable changes of states of knowledge) obsolete, without a change in the operational content of quantum mechanics.