Quantum Physics via Quantum Tomography: A New Approach to Quantum Mechanics
This Insight article presents the main features of a conceptual foundation of quantum physics with the same characteristic features as classical physics – except that the density operator takes the place of the classical phase space coordinates position and momentum. Since everything follows from the well-established techniques of quantum tomography (the art and science of determining the state of a quantum system from measurements), the new approach may have the potential to lead in time to a consensus on the foundations of quantum mechanics. Full details can be found in my paper
- A. Neumaier, Quantum mechanics via quantum tomography, Manuscript (2022). arXiv:2110.05294v3
This paper gives for the first time a formally precise definition of quantum measurement that
- is applicable without idealization to complex, realistic experiments;
- allows one to derive the standard quantum mechanical machinery from a single, well-motivated postulate;
- leads to objective (i.e., observer-independent, operational, and reproducible) quantum state assignments to all sufficiently stationary quantum systems.
The new approach shows that the amount of objectivity in quantum physics is no less than that in classical physics.
The following is an extensive overview of the most important developments in this new approach.
$$
\def\<{\langle} % expectation
\def\>{\rangle} % expectation
\def\tr{{\mathop{\rm tr}\,}}
\def\E{{\bf E}}
$$
Quantum states
The (Hermitian and positive semidefinite) density operator ##\rho## is taken to be the formal counterpart of the state of an arbitrary quantum source. This notion generalizes the polarization properties of light: In the case of the polarization of a source of light, the density operator represents a qubit and is given by a ##2\times 2## matrix whose trace is the intensity of the light beam. If expressed as a linear combination of Pauli matrices, the coefficients define the so-called Stokes vector. Its properties (encoded in the mathematical properties of the density operator) were first described by George Stokes (best known for the Navier–Stokes equations of fluid mechanics), who gave in 1852 (well before the birth of Maxwell’s electrodynamics and long before quantum theory) a complete description of the polarization phenomenon, reviewed in my Insight article ‘A Classical View of the Qubit’. For a stationary source, the density operator is independent of time.
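As a minimal numerical sketch of this correspondence (my illustration in Python/NumPy, not code from the paper), one can build the density operator of a polarized beam from its Stokes vector and verify the properties just described:

```python
import numpy as np

# Identity and Pauli matrices; the Stokes vector (S0, S1, S2, S3)
# gives the density operator rho = (S0*I + S1*sx + S2*sy + S3*sz)/2.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def density_from_stokes(S):
    return sum(Si * si for Si, si in zip(S, sigma)) / 2

def stokes_from_density(rho):
    # S_i = tr(rho sigma_i), since tr(sigma_i sigma_j) = 2 delta_ij
    return [np.trace(rho @ s).real for s in sigma]

# A fully polarized beam of intensity 1:
rho = density_from_stokes([1.0, 0.0, 0.0, 1.0])
assert abs(np.trace(rho).real - 1.0) < 1e-12      # trace = intensity
assert np.linalg.eigvalsh(rho).min() > -1e-12     # positive semidefinite
assert np.allclose(stokes_from_density(rho), [1, 0, 0, 1])
```

For a fully polarized beam the Stokes vector satisfies ##S_1^2+S_2^2+S_3^2=S_0^2##; partially polarized light corresponds to strict inequality, i.e., to a mixed state.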
The detector response principle
A quantum measurement device is characterized by a collection of finitely many detection elements labeled by ##k## that respond statistically to the quantum source according to the following detector response principle (DRP):
- A detection element ##k## responds to an incident stationary source with density operator ##\rho## with a nonnegative mean rate ##p_k## depending linearly on ##\rho##. The mean rates sum to the intensity of the source. Each ##p_k## is positive for at least one density operator ##\rho##.
If the density operator is normalized to intensity one (which we shall do in this exposition) the response rates form a discrete probability measure, a collection of nonnegative numbers ##p_k## (the response probabilities) that sum to 1.
The DRP, abstracted from the polarization properties of light, relates theory to measurement. By its formulation it allows one to discuss quantum measurements without the need for quantum mechanical models for the measurement process itself. The latter would involve the detailed dynamics of the microscopic degrees of freedom of the measurement device – clearly out of the scope of a conceptual foundation on which to erect the edifice of quantum physics.
The main consequence of the DRP is the detector response theorem. It asserts that for every measurement device, there are unique operators ##P_k## which determine the rates of response to every source with density operator ##\rho## according to the formula
$$
p_k=\langle P_k\rangle:=\tr\rho P_k.
$$
The ##P_k## form a discrete quantum measure; i.e., they are Hermitian, positive semidefinite and sum to the identity operator ##1##. This is the natural quantum generalization of a discrete probability measure. (In more abstract terms, a discrete quantum measure is a simple instance of a so-called POVM, but the latter notion is not needed for understanding the main message of the paper.)
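The structure of a discrete quantum measure is easy to check numerically. The following sketch (my illustration; the three-outcome ‘trine’ qubit measure is a standard textbook example, not taken from the paper) verifies the defining properties and computes the response rates ##p_k=\tr\rho P_k##:

```python
import numpy as np

def is_quantum_measure(Ps, tol=1e-12):
    """Check: Hermitian, positive semidefinite, summing to the identity."""
    d = Ps[0].shape[0]
    return (all(np.allclose(P, P.conj().T, atol=tol) for P in Ps)
            and all(np.linalg.eigvalsh(P).min() > -tol for P in Ps)
            and np.allclose(sum(Ps), np.eye(d), atol=tol))

def response_rates(rho, Ps):
    """Detector response theorem: p_k = tr(rho P_k)."""
    return np.array([np.trace(rho @ P).real for P in Ps])

# Three-outcome 'trine' measure on a qubit: P_k = (1 + n_k . sigma)/3
# with unit vectors n_k at 120-degree angles in the x-z plane.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ps = [(np.eye(2) + np.sin(2*np.pi*k/3)*sx + np.cos(2*np.pi*k/3)*sz) / 3
      for k in range(3)]

rho = np.eye(2) / 2                      # unpolarized state, intensity 1
p = response_rates(rho, Ps)

assert is_quantum_measure(Ps)
assert np.allclose(p, 1/3)               # each element responds equally
assert abs(p.sum() - 1.0) < 1e-12
```

Note that this measure has three elements on a two-dimensional Hilbert space, so it cannot consist of mutually orthogonal projectors; it is a genuinely non-projective quantum measure.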
Statistical expectations and quantum expectations
Thus a quantum measurement device is characterized formally by means of a discrete quantum measure. To go from detection events to measured numbers one needs to provide a scale that assigns to each detection element ##k## a real or complex number (or vector) ##a_k##. We call the combination of a measurement device with a scale a quantum detector. The statistical responses of a quantum detector define the statistical expectation
$$
\E(f(a_k)):=\sum_{k\in K} p_kf(a_k)
$$
of any function ##f(a_k)## of the scale values. As always in statistics, this statistical expectation is operationally approximated by finite sample means of ##f(a)##, where ##a## ranges over a sequence of actually measured values. However, the exact statistical expectation is an abstraction of this; it works with a nonoperational probabilistic limit of infinitely many measured values so that the replacement of relative sample frequencies by probabilities is justified. If we introduce the quantum expectation
$$
\langle A\rangle:=\tr\rho A
$$
of an operator ##A## and say that the detector measures the quantity
$$
A:=\sum_{k\in K} a_kP_k,
$$
it is easy to deduce from the main result the following version of Born’s rule (BR):
- The statistical expectation of the measurement results equals the quantum expectation of the measured quantity.
- The quantum expectations of the quantum measure constitute the probability measure characterizing the response.
This version of Born’s rule applies without idealization to results of arbitrary quantum measurements.
(In general, the density operator is not necessarily normalized to intensity ##1##; without this normalization, we call ##\langle A\rangle## the quantum value of ##A## since it does not satisfy all properties of an expectation.)
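The equality of statistical and quantum expectations asserted by this version of Born’s rule can be illustrated with a small simulation (my sketch; the two-element measure, scale, and state are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-element quantum measure on a qubit (here projective, for simplicity)
# together with a detector scale a_k.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ps = [(np.eye(2) + sz) / 2, (np.eye(2) - sz) / 2]
a = np.array([+1.0, -1.0])

rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # a normalized state
p = np.array([np.trace(rho @ P).real for P in Ps])        # response probabilities

# Measured quantity A = sum_k a_k P_k and its quantum expectation tr(rho A).
A = sum(ak * P for ak, P in zip(a, Ps))
quantum_expectation = np.trace(rho @ A).real

# Exact statistical expectation, and a finite-sample approximation of it.
statistical_expectation = (p * a).sum()
sample_mean = rng.choice(a, size=100_000, p=p).mean()

assert abs(statistical_expectation - quantum_expectation) < 1e-12
```

Here the exact value is ##0.6##; the sample mean approximates it up to the expected statistical fluctuation, illustrating how the operational sample mean converges to the abstract statistical expectation.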
Projective measurements
The conventional version of Born’s rule – the traditional starting point relating quantum theory to measurement in terms of eigenvalues, found in all textbooks on quantum mechanics – is obtained by specializing the general result to the case of exact projective measurements. The spectral notions do not appear as postulated input as in traditional expositions, but as consequences of the derivation in a special case – the case where ##A## is a self-adjoint operator, hence has a spectral resolution with real eigenvalues ##a_k##, and the ##P_k## are the projection operators onto the eigenspaces of ##A##. In this special case, we recover the traditional setting with all its ramifications together with its domain of validity. This sheds new light on the understanding of Born’s rule and eliminates the most problematic features of its uncritical use.
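As an illustration of this special case (my sketch, not code from the paper), the quantum measure of a projective measurement can be recovered from the spectral resolution of a self-adjoint matrix, with degenerate eigenvalues merged into eigenspace projectors:

```python
import numpy as np

def spectral_measure(A):
    """Spectral resolution of a Hermitian matrix A: scale values a_k and
    projectors P_k onto the eigenspaces (merging degenerate eigenvalues)."""
    w, V = np.linalg.eigh(A)            # eigenvalues ascending, columns = eigenvectors
    measure = {}
    for val, vec in zip(w, V.T):
        key = round(val, 9)             # merge numerically degenerate eigenvalues
        measure[key] = measure.get(key, 0) + np.outer(vec, vec.conj())
    return list(measure), list(measure.values())

A = np.diag([2.0, 2.0, 5.0]).astype(complex)   # degenerate Hermitian example
a, Ps = spectral_measure(A)

assert np.allclose(sum(Ps), np.eye(3))                      # quantum measure
assert np.allclose(sum(ak * P for ak, P in zip(a, Ps)), A)  # A = sum a_k P_k
```

The resulting ##P_k## are orthogonal projectors, which is precisely what distinguishes the projective special case from a general quantum measure.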
Many examples of realistic measurements are shown to be measurements according to the DRP but have no interpretation in terms of eigenvalues. For example, joint measurements of position and momentum with limited accuracy, essential for recording particle tracks in modern particle colliders, cannot be described in terms of projective measurements; Born’s rule in its pre-1970 forms (i.e., before POVMs were introduced to quantum mechanics) does not even have an idealized terminology for them. Thus the scope of the DRP is far broader than that of the traditional approach based on highly idealized projective measurements. The new setting also accounts for the fact that in many realistic experiments, the final measurement results are computed from raw observations, rather than being directly observed.
Operational definitions of quantum concepts
Based on the detector response theorem, one gets an operational meaning for quantum states, quantum detectors, quantum processes, and quantum instruments, using the corresponding versions of quantum tomography.
In quantum state tomography, one determines the state of a quantum system with a ##d##-dimensional Hilbert space by measuring sufficiently many quantum expectations and solving a subsequent least squares problem (or a more sophisticated optimization problem) for the ##d^2-1## unknowns of the state. Quantum tomography for quantum detectors, quantum processes, and quantum instruments proceeds in a similar way.
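A toy version of quantum state tomography for a qubit (##d=2##, hence ##d^2-1=3## real unknowns, the Bloch vector components) might look as follows; the choice of measured directions is my illustrative assumption, with a redundant fourth direction to make the least squares problem overdetermined, as in real tomography:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Each measured observable B = n . sigma satisfies <B> = n . r, where r is
# the Bloch vector of rho = (I + r . sigma)/2.
directions = np.array([[1, 0, 0],
                       [0, 1, 0],
                       [0, 0, 1],
                       [1/np.sqrt(2), 0, 1/np.sqrt(2)]])
observables = [d[0]*sx + d[1]*sy + d[2]*sz for d in directions]

def tomography(expectations):
    """Least-squares estimate of the 3 Bloch components from the data."""
    r, *_ = np.linalg.lstsq(directions, np.asarray(expectations), rcond=None)
    return (np.eye(2) + r[0]*sx + r[1]*sy + r[2]*sz) / 2

rho_true = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
data = [np.trace(rho_true @ B).real for B in observables]   # ideal, noise-free
rho_est = tomography(data)
assert np.allclose(rho_est, rho_true)
```

With noisy data the same `lstsq` call returns the best fit in the least squares sense; more sophisticated estimators would in addition enforce positive semidefiniteness of the reconstructed state.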
These techniques serve as foundations for far-reaching derived principles; for quantum systems with a low-dimensional density matrix, they are also practically relevant for the characterization of sources, detectors, and filters.

A quantum process, also called a linear quantum filter, is formally described by a completely positive map. The operator sum expansion of completely positive maps forms the basis for the derivation of the dynamical laws of quantum mechanics – the quantum Liouville equation for density operators, the conservative time-dependent Schrödinger equation for pure states in a nonmixing medium, and the dissipative Lindblad equation for states in mixing media – by a continuum limit of a sequence of quantum filters. This derivation also reveals the conditions under which these laws are valid. An analysis of the oscillations of quantum values of states satisfying the Schrödinger equation produces the Rydberg–Ritz combination principle underlying spectroscopy, which marked the onset of modern quantum mechanics.

It is shown that in quantum physics, normalized density operators play the role of phase space variables, in complete analogy to the classical phase space variables position and momentum. Observations with highly localized detectors naturally lead to the notion of quantum fields whose quantum values encode the local properties of the universe.
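As a numerical illustration of the dissipative Lindblad dynamics mentioned above (a toy damped qubit of my own choosing, not an example from the paper):

```python
import numpy as np

def lindblad_step(rho, H, Ls, dt):
    """One explicit Euler step of the Lindblad equation
    d rho/dt = -i[H, rho] + sum_j (L_j rho L_j† - {L_j† L_j, rho}/2)."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * drho

# Toy example: qubit with H = sz/2, decaying at rate gamma = 0.1.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.sqrt(0.1) * np.array([[0, 0], [1, 0]], dtype=complex)  # lowering operator
rho = np.array([[1, 0], [0, 0]], dtype=complex)                # excited state

for _ in range(1000):                       # evolve to t = 10
    rho = lindblad_step(rho, sz / 2, [sm], dt=0.01)

assert abs(np.trace(rho).real - 1) < 1e-9   # trace preserved
assert rho[0, 0].real < 0.5                 # excited population has decayed
```

The Lindblad generator is trace-free, so even this crude Euler scheme preserves the normalization exactly while the excited-state population decays exponentially, as expected for a state in a mixing medium.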
Thus the DRP leads naturally to all basic concepts and properties of modern quantum mechanics. It is also shown that quantum physics has a natural phase space structure where normalized density operators play the role of quantum phase space variables. The resulting quantum phase space carries a natural Poisson structure. Like the dynamical equations of conservative classical mechanics, the quantum Liouville equation has the form of Hamiltonian dynamics in a Poisson manifold; only the manifold is different.
Philosophical consequences
The new approach has significant philosophical consequences. When a source is stationary, response rates, probabilities, and hence quantum values can be measured in principle with arbitrary accuracy, in a reproducible way. Thus they are operationally quantifiable, independent of an observer. This makes them objective properties, in the same sense in which positions and momenta are objective properties in classical mechanics. Thus quantum values are seen to be objective, reproducible elements of reality in the sense of the famous paper
- A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 (1935), 777-781.
The assignment of states to stationary sources is as objective as any assignment of classical properties to macroscopic objects. In particular, probabilities appear – as in classical mechanics – only in the context of statistical measurements. Moreover, all probabilities are objective frequentist probabilities in the sense employed everywhere in experimental physics – classical and quantum. Like all measurements, probability measurements are of limited accuracy only, approximately measurable as observed relative frequencies.
Among all quantum systems, classical systems are characterized as those whose observable features can be correctly described by local equilibrium thermodynamics, as predicted by nonequilibrium statistical mechanics. This leads to a new perspective on the quantum measurement problem and connects to the thermal interpretation of quantum physics, discussed in detail in my 2019 book ‘Coherent Quantum Physics‘ (de Gruyter, Berlin 2019).
Conclusion
To summarize, the new approach gives an elementary and self-contained deductive approach to quantum mechanics. A suggestive notion for what constitutes a quantum detector and for the behavior of its responses leads to a definition of measurement from which the modern apparatus of quantum mechanics can be derived in full generality. The statistical interpretation of quantum mechanics is not assumed, but the version of it that emerges is discussed in detail. The standard dynamical and spectral rules of introductory quantum mechanics are derived with little effort. At the same time, we find the conditions under which these standard rules are valid. A thorough, precise discussion is given of various quantitative aspects of uncertainty in quantum measurements. Normalized density operators play the role of quantum phase space variables, in complete analogy to the classical phase space variables position and momentum.
There are implications of the new approach for the foundations of quantum physics. By shifting the attention from the microscopic structure to the experimentally accessible macroscopic equipment (sources, detectors, filters, and instruments) we get rid of all potentially subjective elements of quantum theory. There are natural links to the thermal interpretation of quantum physics as defined in my book.
The new picture is simpler and more general than the traditional foundations, and closer to actual practice. This makes it suitable for introductory courses on quantum mechanics. Complex matrices are motivated from the start as a simplification of the mathematical description. Both conceptually and in terms of motivation, introducing the statistical interpretation of quantum mechanics through quantum measures is simpler than introducing it in terms of eigenvalues. To derive the most general form of Born’s rule from quantum measures one just needs simple linear algebra, whereas even to write down Born’s rule in the traditional eigenvalue form, unfamiliar stuff about wave functions, probability amplitudes, and spectral representations must be swallowed by the beginner – not to speak of the difficult notion of self-adjointness and associated proper boundary conditions, which is traditionally simply suppressed in introductory treatments.
Thus there is no longer an incentive for basing quantum physics on measurements in terms of eigenvalues – a special, highly idealized case – in place of the real thing.
Postscript
In the meantime I revised the paper. The new version is better structured and contains a new section on high precision quantum measurements, where the 12-digit-accuracy determination of the gyromagnetic ratio through the observation and analysis of a single electron in a Penning trap is discussed in some detail. The standard analysis assumes that the single electron is described by a time-dependent density operator following a differential equation. While in the original papers this involved arguments beyond the traditional (ensemble-based and knowledge-based) interpretations of quantum mechanics, the new tomography-based approach applies without difficulties.
Full Professor (Chair for Computational Mathematics) at the University of Vienna, Austria
But they are involved in the dynamics of the particles, not in their measurement. Thus their appearance is independent of the measurement itself (which only involves the screen) and involves no eigenvalues.
Of course the eigenvalues of the spin component (or the magnetic moment, which is proportional to it) in the direction of the magnetic field are involved, leading to the prediction of two strips on the screen. In the standard description for an electron moving through an inhomogeneous magnetic field of the right kind, it's simply that the magnetic field leads to an entanglement between position and this spin component, i.e., an Ag-atom beam splits into two pretty well separated pieces; in one beam the atoms have with almost 100% probability spin up, and in the other spin down. Blocking one beam is an almost perfect preparation for Ag-atoms with determined spin components.
Not really.
What is measured in a Stern-Gerlach experiment is simply the silver intensity on the screen. To interpret the latter as a spin measurement you need to invoke the quantum mechanical model of the whole setting – and this in the shut up and calculate version only. Eigenvalues or Born's rule are not involved here at all! So one doesn't expect a use for POVMs either.
The experiment is described and explained just using shut up and (semiclassical) calculate! Probabilities are not needed, only beam intensities. Using either Born's rule or POVMs would be theoretical overkill.
Stern and Gerlach had neither POVMs nor Born's rule, and their experiment (including variations) can be interpreted without any probability arguments; the only information needed beyond semiclassical models is the fact that with low intensity beams one needs a longer exposure to produce the detailed pattern.
In fact the math is both simpler and more general, and hence to be preferred over Born's rule. No eigenvalue stuff is needed.
One makes the connection with equipment without any reference to either POVM or Born's rule. Instead one refers to known properties of the equipment and the established relations to theory.
The math doesn't seem so much more difficult than the standard QT description. The problem is that it's not clear how to make the connection with equipment in the lab, which is not a problem in the standard description at all.
In the standard description you simply predict the probability for finding the silver atoms on the screen after having gone through the magnet and compare it to what's measured. The agreement is not great but satisfactory.
The high-accuracy version of measuring magnetic moments with a Penning trap, which we also have discussed above, is also described in such a way. But also in this case you didn't derive the POVM from this setup; you just describe something with words. It's not better than the standard descriptions of experiments about quantum objects. And indeed you are right that mostly the measurement device is treated with classical physics, but that's not so surprising, since measurement devices are macroscopic objects which are well described by classical physics. But that still doesn't answer my question: how to get a concrete POVM? Of course the other direction would also be interesting, i.e., how to design an experiment for a given POVM. But this too I've not seen anywhere yet.
The physical idea behind POVMs is just the detector response principle discussed in Section 2.2 of my paper, together with the statement of Theorem 2.1. The proof is relevant only for those who want to understand the mathematical concept behind it.
That's not necessary to apply the POVM principles. They also don't base it on the projective paradigm of quantum measurement theory that you favor.
Instead they rely on the claims of manufacturers or peers how the equipment works. Almost every experimental reasoning is completely classical, together with a little semiclassical quantum mechanics for some crucial points.
This is because your question is not adapted to how POVMs are actually used.
In practice, people either (i) have a given setup and want to calibrate it, so they do quantum tomography to find the POVM; or (ii) they want to realize a given POVM with a suitable experiment. The latter is particularly relevant for potential applications in quantum computing.
In my paper, (i) is described in full detail and full generality in Section 2.2, and (ii) is described for a multiphoton setting in Section 4.1, and in more detail in the papers by Leonhardt cited there.
The exact POVM of concrete experiments is as complex as the experimental setting itself. But when one measures something one is not interested in these details unless one wants to construct a more accurate detector. Thus one usually idealizes the description to the bare minimum.
One can do the same for POVMs. For a joint measurement of position and momentum this is done in Section 4.2. The formula there is physically motivated and as simple as one can want it; the experimental realization is given in the paper cited there. For the partition of unity one can take any collection of hat functions describing the smearing, divided by their sum.
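A hat-function partition of unity of the kind described here is easy to construct explicitly; the grid, centers, and width below are arbitrary illustrative choices of mine. Smearing the projection-valued position measure with such functions ##e_k## yields POVM elements ##P_k=\int e_k(x)\,|x\rangle\langle x|\,dx##:

```python
import numpy as np

# Hat (triangular) functions on a grid, normalized to a partition of unity.
x = np.linspace(-5, 5, 1001)
centers = np.arange(-4, 5)           # bin centers of the detection elements
width = 1.5                          # overlapping hats -> unsharp measurement

hats = np.array([np.clip(1 - np.abs(x - c) / width, 0, None) for c in centers])
partition = hats / hats.sum(axis=0)  # divide by the sum -> sums to 1 pointwise

assert np.all(hats.sum(axis=0) > 0)              # sum positive on the window
assert np.allclose(partition.sum(axis=0), 1.0)   # partition of unity
```

The overlap of neighboring hats is what encodes the limited resolution: a particle at a given ##x## responds to several detection elements with nonzero probability.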
It seems to be very difficult to construct it even for a much simpler setup such as a measurement of "particle tracks" with a cloud chamber…
The words define in the first sentence a unique quantum measure. The relabeling needed to get the POVM is described in the remainder, and can be exactly described by the computer programs used to analyze the video (in your version of the experiment), summing the contributions that lead to the same label. Since you described the analysis of the video in words only, I cannot do better.
Thus the construction of the POVM is as concrete as your gedanken experiment.
The very concrete answer is in the last paragraph of Section 4.4 on p.32. I had several times referred to it. The POVM consists of the quantum measure together with a reindexing of the matrices by the measurement results.
You never gave a concrete answer. That's the problem.
Where is the concrete POVM given in this paper?
This is not too naive but too sloppy to be correct. In fact you define
##|D_H\rangle :=U(|H\rangle|D_0\rangle)## and similarly ##|D_V\rangle##, and your formula follows.
That's not true. You always got the answer that it can be done only if you specify the full measuring process, from the interaction of the measured system to the measurement results. And then you lost patience or interest, and didn't follow up on my answers.
Mott does not perform a single measurement in his analysis. So how can one extract a POVM from his discussion if nothing is measured? The POVM would depend on details about how the cloud chamber track is observed to actually get the results of the measurement.
Now that you defined a recipe for getting a position and momentum that can be carried out experimentally, this is indeed possible. Your recipe leads to a POVM in essentially the same way as my analysis in Section 4.4 of v4 of my quantum tomography paper, except that the grid of wires is replaced by a grid of pixels encoding the video. That this POVM is complicated comes from the fact that extracting a position and a momentum from a video is complicated.
My challenge stands: The simplest example of an unsharp joint measurement of position and momentum of a particle seems to be the one described (imho fully satisfactorily) by Mott in the famous paper about the tracks of a charged particle in a cloud chamber. One can indeed extend this to an observation of approximate positions and momenta by simply taking a movie, measuring the position of the end of the track as a function of time, and then deducing both a position and the momentum of the particle along this track. Shouldn't it be possible to describe this (gedanken) experiment in terms of a POVM?
The typical example is a faint temporary interaction which changes the system state only slightly (not by projection) but leaves an irreversible record, hence counts as measurement. Particle track detectors are based on this.
The distinction is the word 'projective'. Dirac and von Neumann considered only measurements whose repetition yields the same result each time, and were thus led to the class of projective measurements.
But most measurements in practice are not of this kind, as either each realization of the system can typically be measured only once, or each measurement on the system measures a different state of the system but never the projected one.
The POVM description of this is similar to that of the joint position-momentum measurement of particle tracks in my Section 4.4, except that the arrangement of wires is replaced by the pixels of the video taken.
The whole of Section 4 of my paper is devoted to real-world experiments that use POVMs rather than projective measurement, with reference to other people's work.
This can only mean that you never bothered to read the associated literature. For example, the quantum information textbook by Nielsen and Chuang is full of POVMs.
Another (gedanken) experiment is the measurement of a "trajectory" of a single particle in a cloud chamber. You can put some radioactive material in there and make a movie of the tracks forming, i.e., you can measure directly how the track forms, i.e., a joint position-momentum measurement. The standard quantum description à la Mott is very clear and for me describes the appearance of "trajectories" in this setup satisfactorily, but maybe it's interesting to discuss it within the POVM framework too?
What's problematic, especially for the ensemble interpretation (and Rigo et al. explicitly acknowledge that it goes beyond the standard treatment), is that they use a density operator to describe a single electron rather than an ensemble of electrons. This ensemble is purely imagined (as Brown and Gabrielse state explicitly) and has no physical reality. It is needed to derive the formula by which the gyromagnetic ratio is measured.
Their paper says that they measured two particular frequencies (how doesn't really matter for my paper, but you can find more details by reading their paper yourself), whose quotient gives the gyromagnetic ratio to 12 decimal places.
This is because parameter determination such as that of the gyromagnetic ratio, and in fact most of spectroscopy, is not a quantum measurement in the sense of Born's rule, nor is it one in the sense of POVMs. But it uses the objective existence of the density operator and its dissipative dynamics, which are consequences of the detector response principle (DRP) on which the whole paper is based. That's why I added the material to the paper.
Note that my paper is not primarily about POVMs but about how quantum tomography explains quantum mechanics. Deriving the POVM formalism, including Born's rule where it applies is only a small part of the whole story.
I also don't understand, what's problematic with the standard treatment in Brown and Gabrielse's RMP article. It's just 1st-order perturbation theory in RPA approximation (in Sect. V.A).
Yes, in the special case where the system is conservative; otherwise no Hamiltonian exists, only a system of Lindblad generators.
This is called self-calibrating tomography. See p.38 of my paper, where I also give some references where further details can be found.
Probably not. Less formally, a von Neumann observation is represented by disjoint positive-valued operators ##E_i## such that ##\sum_i E_i = 1##. A POVM is simply a generalization that removes the need for the ##E_i## to be disjoint. It turns out Gleason's theorem is much easier to prove for POVMs. In practice, they occur when, for example, you observe a system with a probe and then observe the probe. See for example:
http://www.quantum.umb.edu/Jacobs/QMT/QMT_Chapter1.pdf
Thanks
Bill
I agree. No interpretation is needed for this; it predates Born's rule by at least a year.
But it shows that Born's rule doesn't explain quantum measurements of a spectroscopic nature. The latter includes all high precision determinations of constants of Nature such as gyrofactors, mass ratios, etc.
I agree. But this is independent of the value of the frequencies, and only the latter are measured in the experiment under discussion.
I stop this discussion at this point.
More precisely, on the quantum formalism without Born's rule.
It is of course not related at all to it.
It is, again by the quantum formalism without Born's rule.
Only because you apply again your magic wand that turns every quantum calculation into an instance of Born's rule.
But nothing is burnt in a Penning trap, which is the example under discussion.
But the intensity of the lines gives no information at all about the energy differences.
The frequency of the photons emitted by the trapped electron, and hence the determination of the position of the resonance peaks from which the high precision gyrofactor is computed is independent of Born's rule.
behind all these??
How is Born's rule behind the measurement of the energy difference of two levels of a quantum system?
Sure, but the Born rule is one of the basic postulates behind all these pragmatic approaches. In Newtonian mechanics there's also much more than the "three laws" needed to apply them to the analysis of the phenomena, but they are behind all the corresponding methods.
Yes. The estimation of constants in mathematical models of reality (whether a growth parameter in a biological model or a gyrofactor in a model of a Penning trap) from noisy measurements is always an inverse problem.
So you finally agree that standard quantum theory involves more than Born's rule in order to relate the mathematical formalism to experiment!
Indeed, standard quantum physics has a most pragmatic approach to the interpretation of the formalism: Anything goes that gives agreement with experiment, and Born's rule is just a tool applicable in some situations, whereas other tools (such as resonance observations or POVMs) apply in other situations.
Can we agree on that?
I have my own interpretation of what is going on, and it does not involve Born's rule.
But you claimed that the experiment is (like all experiments with ion traps) explained by Born's rule. For your convenience and those of the other readers I collected the whole train of your arguments.
I am challenging you to provide a proof of your claim in this particular instance. If you can't do it in the simple case of measuring energy differences, your claim is without any substance!
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.38.310
This measurement recipe is not covered by Born's rule since there is no operator on the electron Hilbert space whose eigenvalues are the energy differences.
So how do you think Born's rule applies in this case?
I agree. In this version nothing needs to be explained.
I was thinking of potential applications in quantum information processing, where the situation is different.
Which quantum observables of the electrons are measured by these currents? If Born's rule were involved, you should be able to point to the operators to which Born's rule is applied in this case.
Usually one uses heralded photons to prepare single-photon states, which are in fact not so easy to produce (in contradistinction to "dimmed down coherent states", which however are not equivalent to true single-photon states but consist largely of the vacuum state). One way, nowadays kind of standard, is to shine with a laser on a BBO crystal and use the entangled photon pairs from parametric down conversion. Then you use one photon ("idler") to "herald" the other photon ("signal"), which you then use for experiments. This gives an ensemble of identically prepared single photons.
In the experiments with single atoms in a trap you usually use an external em. field to excite these atoms many times and measure the emitted photons. Another example with a single electron in a Penning trap is to measure the currents of the "mirror charges" in response to the motion of the electron in the trap, which also provides the statistics you need (see Dehmelt's or Brown's review papers quoted above).
There is no standard way beyond pragmatism (anything successful goes) to do the matching of formalism to complex experiments.
I never claimed a contradiction, just a non-applicability. One cannot derive from a postulate that only applies to large ensembles of independent and identically prepared systems any statement about a single system!
If the processes are carried out identically, this indeed gives an ensemble of identically prepared photons. But if one only sends a handful of photons on demand to transmit a message (the primary reason why one would want to produce them on demand), one only gets an ensemble of not-identically prepared photons!
Through repeated measurements, with stochasticity induced by the unmodelled interaction with the environment. Just like in classical stochastic processes!
Yes, and I don't see anything contradicting the standard way to relate the formalism of QED to observations. I also think that the idea that ##|\psi(t,\vec{x})|^2## refers to some kind of intensity in Schrödinger's first interpretation of the wave function was in analogy to the intensity of light, which was known to be measured in terms of the energy density. It was however very soon realized that this is not in accordance with the detection of particles (particularly electrons), which indeed leave a single point on a photo plate and not a smeared distribution, and this brought Born to his probabilistic interpretation (in a footnote of his paper on scattering theory of 1926). Today we can use QED to derive that for the em. field the detection probability is indeed proportional to the expectation value of the energy density: it follows from first-order perturbation theory and the dipole approximation describing the photo effect. The formula to evaluate these expectation values is of course based on Born's rule (or postulate). That's all in the standard textbooks about quantum optics and used also in the papers referring to experiments with single photons and/or entangled photon pairs, including all kinds of Bell tests, entanglement swapping, teleportation, and all that.
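For concreteness, the rule being debated in this exchange is ##p(a)=\tr(\rho P_a)## for a projector ##P_a## onto the eigenspace of the measured observable. A minimal numerical illustration (my own sketch, not part of the original discussion) for a qubit measured in the Pauli-Z basis:

```python
import numpy as np

# Born's rule: p(a) = tr(rho @ P_a), with P_a the projector onto the
# eigenspace of the measured observable with eigenvalue a.
# Observable: Pauli-Z, with eigenprojectors onto |0> and |1>.
P_up = np.array([[1, 0], [0, 0]], dtype=complex)    # projector onto |0>
P_down = np.array([[0, 0], [0, 1]], dtype=complex)  # projector onto |1>

# An example mixed state (density operator): 75% |0><0| + 25% |1><1|
rho = 0.75 * P_up + 0.25 * P_down

p_up = np.trace(rho @ P_up).real
p_down = np.trace(rho @ P_down).real

print(p_up, p_down)  # 0.75 0.25
assert abs(p_up + p_down - 1) < 1e-12  # probabilities sum to 1
```

Note that these probabilities only acquire an operational meaning by comparison with relative frequencies over a large ensemble of identically prepared systems, which is exactly the point of contention in the thread.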
I still also don't see why you think that collecting statistics by coupling a single quantum in a trap to an electrical circuit, or by repeated excitation/de-excitation events via the emitted photons, etc., cannot be understood with standard quantum theory, although that's been done for decades. Indeed, the many excitation-relaxation processes driven by an external laser field define the ensemble in this example. How else should you get statistics with a single quantum?
The realization of "weak measurements" and the description with the more general concept of POVMs is pretty recent, and as far as I can see, it's not something contradicting the fundamental Born postulate, i.e., how QT probabilities and expectation values are related to the formalism (statistical operators to represent the state and self-adjoint (or unitary) operators for observables).
Yes, but he assumes everywhere stationary sources, i.e., ensembles of identically prepared systems. Moreover, he assumes unphysical mathematical constructs called ancillas to reduce POVM measurements on these ensembles to Born's rule.
They use for their analysis a pragmatic approach (i.e., whatever gives agreement with experiments serves as interpretation), not one strictly based on Born's rule. The latter has essential restrictions to apply!
Yes, he discusses POVM in the usual, very abstract terms. But everywhere he assumes stationary sources, i.e., ensembles of identically prepared systems. Under this condition he gets the same as what I propose (with different assumptions, not assuming Born's rule).
Peres never discusses single quantum systems and does not use the term "weak measurement". In the Wikipedia reference I cited, the (standard) derivation of the quantum trajectories describing weak measurements only tells what the state is after a sequence of POVM measurements and what is the probability distribution for getting the whole sequence of results. To give meaning to this probability distribution via Born's rule one needs an ensemble of identically prepared systems giving an ensemble of sequences of measurement results! Otherwise one has only a single sequence of measurement results and the probability of getting this single one is 100%!
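The sequence probability referred to here is, in the standard quantum-trajectory formalism, ##p(a_1,\dots,a_n)=\tr\big(K_{a_n}\cdots K_{a_1}\,\rho\,K_{a_1}^\dagger\cdots K_{a_n}^\dagger\big)## for Kraus operators ##K_a## satisfying ##\sum_a K_a^\dagger K_a=1##. A small numerical sketch (illustrative Kraus pair assumed by me, not taken from the thread):

```python
import numpy as np
from itertools import product

# Kraus pair for a weak qubit measurement (illustrative choice);
# completeness K_0^† K_0 + K_1^† K_1 = I holds by construction.
eps = 0.1
K = [np.diag([np.sqrt(1 - eps), np.sqrt(1 - eps / 2)]),  # outcome 0
     np.diag([np.sqrt(eps), np.sqrt(eps / 2)])]          # outcome 1

def sequence_prob(rho, outcomes):
    # p(a_1,...,a_n) = tr(K_{a_n} ... K_{a_1} rho K_{a_1}^† ... K_{a_n}^†)
    for a in outcomes:
        rho = K[a] @ rho @ K[a].conj().T
    return np.trace(rho).real

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # pure state |+><+|

# By completeness, the probabilities of all length-3 outcome
# sequences sum to 1:
total = sum(sequence_prob(rho, seq) for seq in product([0, 1], repeat=3))
print(round(total, 10))  # 1.0
```

This makes the objection quantitative: the distribution over outcome sequences is only testable against an ensemble of repeated runs of the whole sequence, not against a single realized sequence.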
As we had discussed some years ago, Peres noticed (and does not resolve) this conflict when he enters philosophical discussions in the last chapter of his book (if I recall correctly, don't have the book at hand).