Exploring the Connection Between Quantum Mechanics and Quantum Field Theory

In summary: It applies to the blobs but is not used later as far as I know - at least I haven't seen it. One can almost certainly find a use for it - it's just that at my level of QFT I haven't seen it. Some others who know more may be able to comment. BTW the link I gave which proved Gleason showed it's not really an axiom - but rather a consequence of non-contextuality - but that is also a whole new...
  • #106
The Schroedinger equation in infinitely many configurations is used to construct QFT, so can QFT be reduced to QM?
 
  • #107
fxdung said:
The Schroedinger equation in infinitely many configurations is used to construct QFT, so can QFT be reduced to QM?

As many have said throughout this thread, including bhobba and vanhees71: QM is the general framework.

Relativistic QFT is a specific type of QM in which there is a classical Minkowskian spacetime, and measurement outcomes are classical relativistic events.
 
  • #108
And is there no ''difference'' between quantum measurement theory and taking averages over a statistical ensemble? Is the statement about the ''collapse'' to an eigenstate in a measurement process equivalent to the statement about the statistical ensemble? I think the statement about the statistical ensemble is more general than the statement about the collapse to an eigenstate in a measurement process. The latter is a special case of the former.
 
  • #109
fxdung said:
And is there no ''difference'' between quantum measurement theory and taking averages over a statistical ensemble? Is the statement about the ''collapse'' to an eigenstate in a measurement process equivalent to the statement about the statistical ensemble? I think the statement about the statistical ensemble is more general than the statement about the collapse to an eigenstate in a measurement process. The latter is a special case of the former.

I am not sure exactly what A. Neumaier means; reading through the thread it is clear his view is extremely unconventional, whereas Demystifier, bhobba, Orodruin, vanhees71 have all agreed on the conventional view. QM is the overarching framework. Relativistic QFT is a type of QM. You can also see meopemuk's post #45 - there is a slight difference in terminology (meopemuk's terminology might be better), but his idea is also the conventional view.

To get from relativistic QFT to non-relativistic QM, we note that non-relativistic QM of many-identical particles can be formulated exactly as non-relativistic QFT. This key point is found in condensed matter books about "many-body physics", and the two equivalent forms of many-particle physics are "first quantization" and "second quantization". "Second quantization" is a misleading name - its correct meaning is that it allows you to write the usual non-relativistic QM of many identical particles as a non-relativistic QFT.
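
A minimal sketch of this dictionary, assuming for definiteness a spinless Bose field ##\hat\psi(x)## with ##[\hat\psi(x),\hat\psi^\dagger(y)]=\delta(x-y)## and ##\hbar=1## (notation introduced here only for illustration): an ##N##-particle state ##|\Psi(t)\rangle## of the non-relativistic QFT corresponds to a first-quantized wave function via
$$\Psi(x_1,\dots,x_N,t)=\frac{1}{\sqrt{N!}}\,\langle 0|\,\hat\psi(x_1)\cdots\hat\psi(x_N)\,|\Psi(t)\rangle ,$$
and with the usual second-quantized Hamiltonian this amplitude obeys the ordinary ##N##-body Schroedinger equation.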

So as Demystifier pointed out earlier in post #23, one can do relativistic QFT -> non-relativistic QFT -> non-relativistic QM.
 
  • #110
The ''collapse'' to an eigenstate is a result of making many measurements on the same particle, due to the probabilistic character. Then in QFT, if we measure the same quantum of the field many times, the Born rule (meaning the ''collapse'') will appear. Is that right?
 
  • #111
atyy said:
I am not sure exactly what A. Neumaier means; reading through the thread it is clear his view is extremely unconventional
This is because I interpret what people actually do when doing statistical physics and QFT, rather than what they say in the motivational introduction. It is very easy to verify that my view is the correct one for statistical mechanics and finite time QFT, no matter how unconventional it may sound on first sight.
atyy said:
So as Demystifier pointed out earlier in post #23, one can do relativistic QFT -> non-relativistic QFT -> non-relativistic QM.
But as I had pointed out in post #31, during this apparent ''derivation'' one has to introduce in an ad hoc way
  • (i) particle position and momentum operators by hand - via a nonphysical extension of the Hilbert space, and
  • (ii) an external classical reality that collapses the probabilities to actualities.
This makes the difference between the ontologies.

The predictions of QFT (field values, correlation functions, semiconductor behavior, chemical reaction rates) are valid for each single macroscopic system, without needing any foundational blabla on eigenvalues, probability, or collapse.

While QM, if strictly based on the traditional axioms, is valid only for measuring discrete observables exactly, and predicts for an individual system nothing at all, for almost all observables.

I should add that most practitioners in QM and QFT get useful results since they don't care about the traditional, far too restrictive axioms or postulates of QM. They apply whatever is needed in any way that is convincing enough for their colleagues. The foundations are not true foundations but post hoc attempts to put the mess on a seemingly sounder footing.
 
  • #112
fxdung said:
in QFT, if we measure the same quantum of the field many times, the Born rule (meaning the ''collapse'') will appear. Is that right?
Fields are space-time dependent. If you look at a field at different times or different places you look at different observables. Thus, strictly speaking, it is impossible to measure anything repeatedly. (It can be done only under an additional stationarity assumption.)
 
  • #113
A. Neumaier said:
This is because I interpret what people actually do when doing statistical physics and QFT, rather than what they say in the motivational introduction. It is very easy to verify that my view is the correct one for statistical mechanics and finite time QFT, no matter how unconventional it may sound on first sight.

But as I had pointed out in post #31, during this apparent ''derivation'' one has to introduce in an ad hoc way
  • (i) particle position and momentum operators by hand - via a nonphysical extension of the Hilbert space, and
  • (ii) an external classical reality that collapses the probabilities to actualities.
This makes the difference between the ontologies.

The predictions of QFT (field values, correlation functions, semiconductor behavior, chemical reaction rates) are valid for each single macroscopic system, without needing any foundational blabla on eigenvalues, probability, or collapse.

While QM, if strictly based on the traditional axioms, is valid only for measuring discrete observables exactly, and predicts for an individual system nothing at all, for almost all observables.

I should add that most practitioners in QM and QFT get useful results since they don't care about the traditional, far too restrictive axioms or postulates of QM. They apply whatever is needed in any way that is convincing enough for their colleagues. The foundations are not true foundations but post hoc attempts to put the mess on a seemingly sounder footing.

Yes, there are some mathematical difficulties in introducing position operators, for example, but they are at the level of mathematical physics. At the non-rigorous level of ordinary physics, one can simply start with lattice QED, which is already non-relativistic, and get everything in QM. This is the same as the Wilsonian paradigm, and if one wants to argue that the Wilsonian paradigm is not properly justified in rigorous mathematics, that is fine.

However, it is definitely not true that QFT solves the foundational problems. QFT has all the same postulates as QM (state is vector in Hilbert space, probabilities given by Born rule, collapse of the wave function etc), including the need for the classical apparatus, with all the problems that entails. One way to see this is that a QFT like QED is really just non-relativistic QM, because it can be defined as lattice QED.
 
  • #114
atyy said:
QFT has all the same postulates as QM (state is vector in Hilbert space, probabilities given by Born rule, collapse of the wave function etc), including the need for the classical apparatus, with all the problems that entails.
You didn't understand. Statistical mechanics can start with Hilbert spaces, unitary dynamics for operators, density operators for Heisenberg states, the definition of

(EX)##~~~~~\langle A\rangle:=\mbox{tr}~\rho A##

as mathematical framework, and the following rule for interpretation, call it (SM) for definiteness:
A. Neumaier said:
the practice of statistical mechanics says:
''Upon measuring a Hermitian operator ##A##, the measured result will be approximately ##\bar A=\langle A\rangle##, with an uncertainty at least of the order of ##\sigma_A=\sqrt{\langle (A-\bar A)^2\rangle}##. If the measurement can be sufficiently often repeated (on an object with the same or sufficiently similar state) then ##\sigma_A## will be a lower bound on the standard deviation of the measurement results.''
Everything deduced in statistical mechanics about macroscopic properties follows from this without ever invoking ''probabilities given by Born rule, collapse of the wave function etc), including the need for the classical apparatus, with all the problems that entails''. Look into an arbitrary book on statistical physics and you'll never find such an invocation, except in the beginning, where the formula ##\langle A\rangle:=\mbox{tr}~\rho A## is derived! Thus one can skip this derivation, make this formula an axiom, and one has a completely self-consistent setting in which the classical situation is simply the limit of a huge number of particles.

Note that it is impossible to deduce the Born rule from the rules (EX) and (SM) without introducing the notion of external measurement which is not present in the interpretation of quantum theory based upon (EX) and (SM) alone. This shows that the ontologies are indeed different!
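
As a minimal numerical illustration of how the rule (EX) and the reading (SM) are used in practice (the state ##\rho## and observable ##A## below are arbitrary choices, for illustration only):

Code:
import numpy as np

# Illustration of (EX) and (SM); rho and A are arbitrary example choices.
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # density matrix: Hermitian, positive, trace 1
A   = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # a Hermitian operator

mean_A  = np.trace(rho @ A).real                          # (EX): <A> := tr(rho A)
dev     = A - mean_A * np.eye(2)
sigma_A = np.sqrt(np.trace(rho @ dev @ dev).real)         # sigma_A = sqrt(<(A - <A>)^2>)
print(mean_A, sigma_A)

(SM) then reads the two printed numbers as: a single measurement of ##A## gives approximately ##\bar A##, with an uncertainty at least of the order of ##\sigma_A##.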
 
  • #115
A. Neumaier said:
You didn't understand. Statistical mechanics can start with Hilbert spaces, unitary dynamics for operators, density operators for Heisenberg states, the definition of ##\langle A\rangle:=\mbox{tr}~\rho A## as mathematical framework, and the following rule for interpretation:

Everything deduced in statistical mechanics about macroscopic properties follows from this without ever invoking ''probabilities given by Born rule, collapse of the wave function etc), including the need for the classical apparatus, with all the problems that entails''. Look into an arbitrary book on statistical physics and you'll never find such an invocation, except in the beginning, where the formula ##\langle A\rangle:=\mbox{tr}~\rho A## is derived! Thus one can skip this derivation, make this formula an axiom, and one has a completely self-consistent setting in which the classical situation is simply the limit of a huge number of particles.

What is the difference? ##\langle A\rangle:=\mbox{tr}~\rho A## is the Born rule.

Also, there is quantum mechanics without statistical mechanics (eg. T=0).
 
  • #116
atyy said:
##\langle A\rangle:=\mbox{tr}~\rho A## is the Born rule.
Neither Wikipedia nor Dirac nor Messiah calls this the Born rule.

Note that this formula is shut-up-and-calculate since it is a purely mathematical definition. A definition (the left hand side is defined to be an abbreviation for the right hand side), not a postulate or axiom! Hence it cannot represent Born's rule. The interpretation is not in the formula but in the meaning attached to it. The meaning in statistical mechanics is the one given in (SM) of my updated post #114.

The meaning according to Born's probability definition is unclear as it is ''derived'' using plausibility arguments that lack clear support in the postulates. Born's original paper says only something about the probability of simultaneously measuring all particle positions. One can deduce from this a statistical interpretation of ##\langle A\rangle## only if ##A## is a function of the position operators. But even if one generalizes this to arbitrary Hermitian operators, as it is generally done, the derivation says nothing about the individual case but only asserts that if you measure ##A## sufficiently often you'll get on the average ##\langle A\rangle##. However, Born's rule says that you always get exact values ##0## or ##1## when you measure a projection operator (whatever this is supposed to mean for an arbitrary projection operator - the foundations are silent about when a measurement measures ##A##) - which is a statement different from (SM). Thus the interpretations are not equivalent.
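
To spell out the contrast with a minimal example (a sketch using only (EX) and (SM)): for a projection operator ##P## and a state ##\rho## with ##p=\mbox{tr}~\rho P##,
$$\langle P\rangle=p,\qquad \sigma_P=\sqrt{\langle (P-p)^2\rangle}=\sqrt{p(1-p)},$$
so (SM) only asserts that repeated measurements scatter around ##p## with a spread of at least ##\sqrt{p(1-p)}##; the further assertion that every single outcome is exactly ##0## or ##1## comes from Born's rule, not from (EX) and (SM).
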
atyy said:
Also, there is quantum mechanics without statistical mechanics (eg. T=0).
##T=0## is an unphysical limiting case that can be derived as such a limit from statistical mechanics. The meaning of the rules (EX) and (SM) remains intact in this limit.
 
  • #117
A. Neumaier said:
The meaning according to Born's probability definition is unclear as it is ''derived'' using plausibility arguments that lack clear support in the postulates. Born's original paper says only something about the probability of simultaneously measuring all particle positions. One can deduce from this a statistical interpretation of ##\langle A\rangle## only if ##A## is a function of the position operators. But even if one generalizes this to arbitrary Hermitian operators, as it is generally done, the derivation says nothing about the individual case but only asserts that if you measure ##A## sufficiently often you'll get on the average ##\langle A\rangle##. However, Born's rule says that you always get exact values ##0## or ##1## when you measure a projection operator (whatever this is supposed to mean for an arbitrary projection operator - the foundations are silent about when a measurement measures ##A##) - which is a statement different from (SM). Thus the interpretations are not equivalent.

Hmmm, the Born rule should give the complete probability distribution, from which we know the only values are 0 or 1. The complete probability distribution is given by assuming that the Born rule (meaning ##\langle A\rangle:=\mbox{tr}~\rho A##) gives the expectation values of all observables that commute with A.
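
Spelled out as a sketch for an observable with spectral decomposition ##A=\sum_a a\,\Pi_a##: each spectral projector ##\Pi_a## commutes with ##A##, and
$$p(a)=\langle \Pi_a\rangle=\mbox{tr}~\rho\,\Pi_a ,$$
so the expectation values of these commuting projectors already encode the full distribution of ##A##.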
 
  • #118
atyy said:
Hmmm, the Born rule should give the complete probability distribution, from which we know the only values are 0 or 1. The complete probability distribution is given by assuming that the Born rule gives the expectation values of all observables that commute with A.
You would have to derive this from the Born rule as given in the official sources. The precise form given depends on the source, though, so you'd need to be clear about which form you are using.
 
  • #119
A. Neumaier said:
You would have to derive this from the Born rule as given in the official sources. The precise form given depends on the source, though, so you'd need to be clear about which form you are using.

I think I should be able to get all cumulants from the Born rule, since the cumulants commute with A and are expectation values .. ?
 
  • #120
atyy said:
I think I should be able to get all cumulants from the Born rule, since the cumulants commute with A and are expectation values .. ?
Something in this statement is strange since cumulants are numbers, not operators, so they commute with everything.

I know of different ways to ''get'' the result you want from appropriate versions of the Born rule. But the ''derivations'' in the textbooks or other standard references I know of are all questionable. The challenge is to provide a derivation for which all steps are physically justified.
 
  • #121
A. Neumaier said:
Something in this statement is strange since cumulants are numbers, not operators, so they commute with everything.

I know of different ways to ''get'' the result you want from appropriate versions of the Born rule. But the ''derivations'' in the textbooks or other standard references I know of are all questionable. The challenge is to provide a derivation for which all steps are physically justified.

I was thinking of doing it like you did above, so that the uncertainty is ##\sigma_A=\sqrt{\langle (A-\langle A \rangle)^2\rangle}##.

Actually, there is a different definition of the Born rule eg. http://arxiv.org/abs/1110.6815 given as rule II.4 on p8:

##p_{x} = Tr [P_{x} \rho P_{x}]##

But I have always assumed the two forms are equivalent.
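
For what it's worth, in the special case where ##P_{x}## is an orthogonal projector the two expressions coincide by the cyclic property of the trace and idempotence (a one-line check; the general POVM case, where the ##P_x## need not be projectors, is another matter):
$$Tr [P_{x} \rho P_{x}] = Tr [\rho P_{x} P_{x}] = Tr [\rho P_{x}] = \langle P_{x}\rangle .$$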
 
  • #122
Demystifier said:
I was not sufficiently precise. What I meant is that in some interpretation only one of the pictures may be appropriate. For example, in the many-world interpretation only the Schrodinger picture is appropriate.
As I said, that cannot be. Both pictures are completely equivalent. So the interpretation about the relation of the formalism to observations in physics cannot depend on the picture of time evolution used (modulo mathematical problems a la Haag's theorem concerning the non-existence of the interaction picture of relativistic QFT; here you have to take the common practice of using the perturbative (partially resummed) evaluations of S-matrix elements, being compared to measured cross sections and spectral shapes of unstable resonances with the usual renormalization prescriptions as the theory).

Admittedly, I've never understood the point of the many-worlds interpretation, but if it depends on the choice of the picture, it's not compatible with standard QT.
 
  • #123
A. Neumaier said:
One can generalize everything to weaken arguments aimed at the ungeneralized version. The conventional axioms of QM say how the state of a system changes through a perfect measurement. [See, e.g., Messiah I, end of Section 8.1, or Landau & Lifschitz, Vol. III, Chapter I, Par. 7.] This is a context that makes sense only in the ordinary Schroedinger picture.
In Landau and Lifshitz (and most probably also in Messiah, which I can't check at the moment) everything is discussed in terms of wave functions, which is a picture-independent quantity, i.e., of the form ##\psi(t,\alpha)=\langle \alpha,t|\psi,t \rangle##, where the state ket ##|\psi,t \rangle## and the eigenvectors of operators ##|\alpha,t \rangle## develop in time with two arbitrary self-adjoint operators ##\hat{X}(t)## and ##\hat{Y}(t)## with ##\hat{X}(t)+\hat{Y}(t)=\hat{H}##, where ##\hat{H}## is the Hamiltonian of the system. These operators define two unitary time-evolution operators through the equations of motion
$$\dot{\hat{A}}(t)=\mathrm{i} \hat{X}(t) \hat{A}(t), \quad \hat{A}(t=0)=1,$$
$$\dot{\hat{C}}(t)=-\mathrm{i} \hat{Y}(t) \hat{C}(t), \quad \hat{C}(t=0)=1.$$
Then
$$|\alpha,t \rangle=\hat{A}(t) |\alpha,t=0 \rangle, \quad |\psi,t \rangle=\hat{C}(t) |\psi,t=0 \rangle,$$
and from that you get for the wave function
$$\psi(t,\alpha)=\langle \alpha,t|\psi,t \rangle = \langle \alpha,t=0 |\hat{A}^{\dagger}(t) \hat{C}(t)|\psi,t=0 \rangle,$$
and thus the equation of motion of the wave function is picture independently given by the usual Schrödinger equation
$$\mathrm{i} \partial_t \psi(t,\alpha)=\hat{H} \psi(t,\alpha),$$
where ##\hat{H}## here stands for the representation of the Hamilton operator in the ##\alpha## basis.
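
A minimal numerical check of the two standard special cases of this construction (Schroedinger picture: ##\hat{X}=0##, ##\hat{Y}=\hat{H}##; Heisenberg picture: ##\hat{X}=\hat{H}##, ##\hat{Y}=0##), with an arbitrary illustrative Hamiltonian and vectors:

Code:
import numpy as np
from scipy.linalg import expm

# H, psi0 and alpha0 are arbitrary illustrative choices.
H      = np.array([[1.0, 0.3], [0.3, -0.5]], dtype=complex)
psi0   = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
alpha0 = np.array([1.0, 0.0], dtype=complex)
t = 0.7

# Schroedinger picture: basis kets fixed, the state evolves with C(t) = exp(-iHt)
amp_S = np.vdot(alpha0, expm(-1j * H * t) @ psi0)

# Heisenberg picture: state fixed, basis kets evolve with A(t) = exp(+iHt)
amp_H = np.vdot(expm(1j * H * t) @ alpha0, psi0)

print(np.allclose(amp_S, amp_H))   # True: psi(t, alpha) = <alpha,t|psi,t> agrees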
 
  • #124
vanhees71 said:
As I said, that cannot be. Both pictures are completely equivalent. So the interpretation about the relation of the formalism to observations in physics cannot depend on the picture of time evolution used (modulo mathematical problems a la Haag's theorem concerning the non-existence of the interaction picture of relativistic QFT; here you have to take the common practice of using the perturbative (partially resummed) evaluations of S-matrix elements, being compared to measured cross sections and spectral shapes of unstable resonances with the usual renormalization prescriptions as the theory).

Admittedly, I've never understood the point of the many-worlds interpretation, but if it depends on the choice of the picture, it's not compatible with standard QT.
The two pictures are not equivalent. They only have the same measurable predictions, just as all interpretations have the same measurable predictions. Of course, you may say that this means that all interpretations are also equivalent, but that would miss the very point of interpretations. The point of interpretations is not merely to make predictions. The point of interpretations is to give an intuitive idea of what is really going on. If some interpretation (such as MWI) says that ##\psi(t)## is a really existing physical quantity (not merely a calculation tool) that really depends on time ##t##, then it makes sense only in the Schrodinger picture. From the MWI point of view, the true physics happens only in the Schrodinger picture, while Heisenberg picture is only a convenient calculation tool.
 
  • #125
vanhees71 said:
In Landau and Lifshitz (and most probably also in Messiah, which I can't check at the moment) everything is discussed in terms of wave functions, which is a picture-independent quantity, i.e., of the form ##\psi(t,\alpha)=\langle \alpha,t|\psi,t \rangle##, where the state ket ##|\psi,t \rangle## and the eigenvectors of operators ##|\alpha,t \rangle## develop in time with two arbitrary self-adjoint operators ##\hat{X}(t)## and ##\hat{Y}(t)## with ##\hat{X}(t)+\hat{Y}(t)=\hat{H}##, where ##\hat{H}## is the Hamiltonian of the system. These operators define two unitary time-evolution operators through the equations of motion
$$\dot{\hat{A}}(t)=\mathrm{i} \hat{X}(t) \hat{A}(t), \quad \hat{A}(t=0)=1,$$
$$\dot{\hat{C}}(t)=-\mathrm{i} \hat{Y}(t) \hat{C}(t), \quad \hat{C}(t=0)=1.$$
Then
$$|\alpha,t \rangle=\hat{A}(t) |\alpha,t=0 \rangle, \quad |\psi,t \rangle=\hat{C}(t) |\psi,t=0 \rangle,$$
and from that you get for the wave function
$$\psi(t,\alpha)=\langle \alpha,t|\psi,t \rangle = \langle \alpha,t=0 |\hat{A}^{\dagger}(t) \hat{C}(t)|\psi,t=0 \rangle,$$
and thus the equation of motion of the wave function is picture independently given by the usual Schrödinger equation
$$\mathrm{i} \partial_t \psi(t,\alpha)=\hat{H} \psi(t,\alpha),$$
where ##\hat{H}## here stands for the representation of the Hamilton operator in the ##\alpha## basis.
Sure - there is no difference in the treatment of the unitary case. The differences in derivation, claims, and interpretation appear only when discussing measurement, which is interaction with an - unmodelled - detector. Then there is a considerable difference in how different authors proceed, unless one copied from the other. My statement was made in the context of a perfect (von Neumann) measurement.
 
  • #126
Demystifier said:
The two pictures are not equivalent. They only have the same measurable predictions, just as all interpretations have the same measurable predictions. Of course, you may say that this means that all interpretations are also equivalent, but that would miss the very point of interpretations. The point of interpretations is not merely to make predictions. The point of interpretations is to give an intuitive idea of what is really going on. If some interpretation (such as MWI) says that ##\psi(t)## is a really existing physical quantity (not merely a calculation tool) that really depends on time ##t##, then it makes sense only in the Schrodinger picture. From the MWI point of view, the true physics happens only in the Schrodinger picture, while Heisenberg picture is only a convenient calculation tool.
I don't know what you mean by ##\psi(t)##. Is it a Hilbert-space vector representing a pure state? If so, then it's picture dependent. Is it a wave function ##\psi(t,\vec{x})## for a single particle with respect to the position representation? Then it's picture independent and its physical meaning is that ##|\psi(t,\vec{x})|^2## is the probability distribution to find the particle at position ##\vec{x}##. That's observable by making a measurement on an ensemble of equally and stochastically independently (uncorrelated) prepared particles. I think this minimal interpretation of QT, referring to the observable facts (and that's what physics is about and not to "explain the world"), is common to all interpretations of QT. If some interpretation differs from this, it's a new theory, contradicting QT in at least one observable fact, and then this is testable empirically. Any interpretation that claims that you have observable differences depending on the picture of time evolution chosen, claims that QT is incorrect and must be substituted by another theory that prefers one picture over any other. As far as I know, there's no hint that such a modification of QT is necessary.
 
  • #127
A. Neumaier said:
Sure - there is no difference in the treatment of the unitary case. The differences in derivation, claims, and interpretation appear only when discussing measurement, which is interaction with an - unmodelled - detector. Then there is a considerable difference in how different authors proceed, unless one copied from the other. My statement was made in the context of a perfect (von Neumann) measurement.
The description of a filter preparation procedure (often inaccurately called a "measurement") is also independent of the choice of the picture of time evolution. It is also not defined in terms of abstract mathematical entities of the formalism but by a concrete experimental setup. Any description of a Stern-Gerlach experiment for the "advanced lab" ("Fortgeschrittenenpraktikum") is a paradigmatic example.
 
  • #128
atyy said:
I was thinking of doing it like you did above, so that the uncertainty is ##\sigma_A=\sqrt{\langle (A-\langle A \rangle)^2\rangle}##.

Actually, there is a different definition of the Born rule eg. http://arxiv.org/abs/1110.6815 given as rule II.4 on p8:

##p_{x} = Tr [P_{x} \rho P_{x}]##

But I have always assumed the two forms are equivalent.
Everyone seems to make the assumption that the various forms are equivalent, but few seem prepared to prove it...

The paper by Paris that you cite states on p.2.,
Paris said:
by system we refer to a single given degree of freedom (spin, position, angular momentum,...) of a physical entity. Strictly speaking we are going to deal with systems described by finite-dimensional Hilbert spaces and with observable quantities having a discrete spectrum.
This is an extremely special case of QM, far too special for anything that could claim to be a foundation for all of quantum mechanics. It can serve as a motivation and introduction, but not as a foundation. (And the author doesn't claim to give one.)

If ##X## is a Hermitian operator with a discrete spectrum (which Paris assumes on p.2) then the calculation in Postulate 2 on p.3 is valid and gives a valid derivation of the meaning of the expectation two lines after (1) from the Born rule one line before (1). If the spectrum contains a continuous part, Born's rule as stated in the line before (1) is invalid, as the probability of measuring ##x## inside the continuous spectrum is exactly zero, although a measurement result is always obtained. Instead,
the squared absolute amplitude should give the probability density at ##x##. Wikipedia's Born rule has a technical annex for the case of a general spectrum that is formally correct but sounds a bit strange for fundamental postulates (that should be reasonably intuitive). But it is not formulated generally enough since the deduction from it,
wikipedia said:
If we are given a wave function ##\psi(\mathbf{x},t)## for a single structureless particle in position space, this reduces to saying that the probability density function ##p(\mathbf{x},t)## for a measurement of the position at time ##t_0## will be given by ##p(\mathbf{x},t_0)=|\psi(\mathbf{x},t_0)|^2##.
(which is essentially Born's original interpretation from 1926 - he didn't consider observables other than position coordinates)
doesn't follow but needs even more machinery from functional analysis about the existence of the joint spectrum for a set of commuting self-adjoint operators. It is very strange that the foundations of quantum mechanics should depend on deep results in functional analysis...

Paris goes on to say on p.3,
Paris said:
As it is apparent from their formulation, the postulates of quantum mechanics, as reported above, are about a closed isolated system.
This is incorrect since according to every interpretation of quantum mechanics, the dynamics of an isolated system is always unitary and it cannot be observed, since observation is possible only when the system interacts with a detector.

The correct formulation (in the finite-dimensional case discussed by Paris) should be:

Postulate 2a. As long as a system is isolated the dynamics of its state is given by the Schroedinger equation. During the interaction with an instrument the state changes in such a way that (in the interaction picture) the state of a system in a pure state ##\psi## before entering the instrument changes upon leaving the instrument with probability ##|P_x\psi|^2## to a pure state proportional to ##P_x\psi##, where ##\sum_x P_x^*P_x=1## (i.e., the ##P_x^*P_x## form a POVM). The ##P_x## are characteristic for the instrument, and can (in principle) be predicted from a quantum treatment of the instrument.

This is the observer-free formulation. It can be complemented by the following assertion involving observation:

Postulate 2b. If the final state is proportional to ##P_x\psi##, one can (in principle) deduce the value of ##x## from observations of the instrument and its surrounding. But the change of state happens whether or not the instrument is observed.

There is a corresponding version for mixed states that involves density matrices (which also needs a replacement of Postulate 1 of Paris). The resulting set of postulates is a much better set of postulates for (finite-dimensional) quantum mechanics. In particular, after an appropriate extension to POVMs with infinitely many components, they (unlike the Born rule) fairly faithfully reflect most of what is done in modern QM.
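
To make Postulate 2a concrete, here is a minimal qubit sketch with two hypothetical Kraus operators ##P_0, P_1## (an unsharp measurement along ##z##), chosen only for illustration:

Code:
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
P0 = np.sqrt(0.9) * np.outer(ket0, ket0.conj()) + np.sqrt(0.1) * np.outer(ket1, ket1.conj())
P1 = np.sqrt(0.1) * np.outer(ket0, ket0.conj()) + np.sqrt(0.9) * np.outer(ket1, ket1.conj())

# Completeness: sum_x P_x^* P_x = 1, i.e. the P_x^* P_x form a POVM
assert np.allclose(P0.conj().T @ P0 + P1.conj().T @ P1, np.eye(2))

psi = (ket0 + ket1) / np.sqrt(2)                 # pure state entering the instrument
for x, P in enumerate([P0, P1]):
    prob = np.linalg.norm(P @ psi) ** 2          # probability |P_x psi|^2 of outcome x
    post = (P @ psi) / np.linalg.norm(P @ psi)   # outgoing state proportional to P_x psi
    print(x, prob, post)
# The probabilities sum to 1, and the state change is defined whether or not
# anyone reads off x from the instrument (Postulate 2b).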

A mutilated, unnecessarily rigid and subjective form of the postulates in the density matrix version was stated by Paris on p.9. Note that in this process he completely changed the postulates! Postulate 1 (pure states) was silently dropped on p.4 where he remarks that ''different ensembles leading to the same density operator are actually the same state, i.e. the density operator provides the natural and most fundamental quantum description of physical systems''. (How can something be more fundamental than the very foundations one starts with? How can obviously different ensembles, if they mean anything physically, ''actually be the same state''? Only by changing the notion of a state.) Postulate 3 (unitarity) is dropped on the same page by observing that ''the action of measuring nothing should be described by the identity operator'', while according to Postulate 3 it should be described by the Hamiltonian dynamics. (He is assuming an interaction picture, without mentioning it anywhere!) Finally, Postulate 2 (the definition of an observable and the Born amplitude squaring rule) is replaced by a new definition of observables in II.1 and a generalized Born rule II.3 that was invented only much later (probably around the time Born died). In II.5 he adds a rule that is in direct conflict with II.3 since a measurement in which we find a particular outcome cannot lead to two different states depending on whether or not we record the result.

Thus Paris documents in some detail that modern quantum mechanics is, fundamentally, neither based on state vectors nor on observables being Hermitian operators nor on instantaneous collapse nor on Born's rule for the probability of finding results. Instead, it is based on states described by density matrices, observables described by POVMs, interactions in finite time described by multiplication with a POVM component, and a generalized Born rule for the selection of this component. This generalized setting is necessary and sufficient to describe modern quantum optics experiments at a level where efficiency issues and measuring imperfections can be taken into account.

Apart from the Hilbert space, nothing is kept from the textbook foundations, except that the latter serve as a simplified (but partially misleading) introduction to the whole subject.
 
  • #129
vanhees71 said:
observable facts (and that's what physics is about and not to "explain the world")
For me, physics is about both. But of course, anybody has freedom to use physics for whatever one wants.
 
  • #130
A. Neumaier said:
One can generalize everything to weaken arguments aimed at the ungeneralized version. The conventional axioms of QM say how the state of a system changes through a perfect measurement. [See, e.g., Messiah I, end of Section 8.1, or Landau & Lifschitz, Vol. III, Chapter I, Par. 7.] This is a context that makes sense only in the ordinary Schroedinger picture.
That's just an example of the general principle: The axiomatization of the theory, so natural in mathematical physics, is often not a good idea in theoretical physics. Theoretical physics should be open to frequent modifications and reformulations.
 
  • #131
A. Neumaier said:
Thus Paris documents in some detail that modern quantum mechanics is, fundamentally, neither based on state vectors nor on observables being Hermitian operators nor on instantaneous collapse nor on Born's rule for the probability of finding results. Instead, it is based on states described by density matrices, observables described by POVMs, interactions in finite time described by multiplication with a POVM component, and a generalized Born rule for the selection of this component. This generalized setting is necessary and sufficient to describe modern quantum optics experiments at a level where efficiency issues and measuring imperfections can be taken into account.

I agree with that, but to me, it seems that the switch from idealized measurements whose outcomes are eigenvalues with probabilities given by the Born rule to the density matrix interpretation is not such a big deal. It's important for practical reasons, but I don't see how it does anything to clarify the foundational questions about quantum mechanics. Other than, perhaps, making it harder to ask those questions...
 
  • #132
Demystifier said:
That's just an example of the general principle: The axiomatization of the theory, so natural in mathematical physics, is often not a good idea in theoretical physics. Theoretical physics should be open to frequent modifications and reformulations.
Yes. My point is that a foundation that has to be modified when the building is mostly erected, wasn't a good foundation and doesn't really deserve that name. As understanding in physics grows, the foundations should be adapted as well.
 
  • #133
stevendaryl said:
I agree with that, but to me, it seems that the switch from idealized measurements whose outcomes are eigenvalues with probabilities given by the Born rule to the density matrix interpretation is not such a big deal. It's important for practical reasons, but I don't see how it does anything to clarify the foundational questions about quantum mechanics. Other than, perhaps, making it harder to ask those questions...
Actually it makes it simpler to ask appropriate questions and closes the door to others. For example, in the version I gave (which is what is used in quantum optics and quantum information theory), it says what happens independent of the measurement process, and in particular independent of any human observation of results. This already rules out consciousness as an agent, while the latter is implicitly present as a possibility in the traditional foundations.

One can still specialize to the case where the ##P_x## are rank one projectors, and get the pure von Neumann case as a (very special) situation, sufficient to analyze nonlocality issues. But one then knows that one is in a very special situation.
This puts Bell-experiments into perspective as being a very special, hard to prepare situation. Normally, one doesn't have this kind of nonlocality; otherwise doing physics would be impossible. Seeking out these extremes is like doing the same in the classical domain:

A. Neumaier said:
People very experienced in a particular area of real life can easily trick those who don't understand the corresponding matter well enough into believing that seemingly impossible things can happen. This is true in the classical domain, amply documented by magic tricks where really weird things happen, such as rabbits being pulled out of empty hats, etc..

The art of a magician consists in studying particular potentially weird aspects of Nature and presenting them in a context that emphasizes the weirdness. Part of the art consists of remaining silent about the true reasons why things work rationally, since then the weirdness is gone, and with it the entertainment value.

The same is true in the quantum domain. Apart from being technically very versed experimental physicists, people like Anton Zeilinger are quantum magicians entertaining the world with well-prepared quantum weirdness. And the general public loves it! Judging by its social impact, quantum weirdness will therefore never go away as long as highly reputed scientists are willing to play this role.
 
  • #134
A. Neumaier said:
This puts Bell-experiments into perspective as being a very special, hard to prepare situation. Normally, one doesn't have this kind of nonlocality; otherwise doing physics would be impossible. Seeking out these extremes is like doing the same in the classical domain:

It depends on what you're after. If you only want to say that, in practice, it's possible to ignore nonlocality and other quantum weirdness, I agree. That's why "shut up and calculate" works fine as an interpretation.
 
  • #135
stevendaryl said:
it seems that the switch from idealized measurements whose outcomes are eigenvalues with probabilities given by the Born rule to the density matrix interpretation is not such a big deal. It's important for practical reasons, but I don't see how it does anything to clarify the foundational questions about quantum mechanics. Other than, perhaps, making it harder to ask those questions...
Even the concepts are simpler since instead of requiring knowledge about eigenvalues and eigenvectors one only needs to assume that the reader can correctly interpret the relation ##\sum_x P_x^*P_x=1##, which is sufficient to get the POVM property, so it can be substituted for it.
stevendaryl said:
It depends on what you're after. If you only want to say that, in practice, it's possible to ignore nonlocality and other quantum weirdness, I agree. That's why "shut up and calculate" works fine as an interpretation.
If you only want to say that QM is nonlocal it is sufficient to point out that the Born rule specifies for a particle prepared at time ##t## in the local lab in a coherent state ##\psi(t)## a positive probability ##p_\Omega=\int_\Omega dx |\psi(x)|^2## that it is found instead at time ##t+\epsilon## in a given region ##\Omega## anywhere ##10^{100}## lightyears away in the universe. The probability is very small, admitted. But isn't it very weird and very nonlocal that it is positive and hence possible?
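
A rough numerical illustration (with arbitrary illustrative distances): for a normalized Gaussian packet of width ##\sigma##, the probability of finding the particle farther than a distance ##d## to the right is ##\tfrac12\,\mathrm{erfc}(d/\sqrt{2}\sigma)##, tiny for large ##d## but strictly positive.

Code:
import numpy as np
from scipy.special import erfc

sigma = 1.0
for d in [5.0, 10.0, 20.0]:
    p = 0.5 * erfc(d / (np.sqrt(2.0) * sigma))   # P(x > d) for a Gaussian of width sigma
    print(d, p)
# d = 20 sigma already gives p of order 1e-89: astronomically small, yet nonzero.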

The fact that quantum mechanics works based on these nonlocal assumptions was known already in 1926. Understanding didn't increase by experiments that demonstrated the violation of Bell inequalities. Only some classical reasons how this could possibly be understood in simpler terms were eliminated.
 
  • #136
It seems to me that saying that QFT tells "what happens independent of the measurement process" is misleading, if not false. Yes, you can interpret QFT as giving statistical information about fields, and that doesn't seem to involve measurement. But to me that's no different than just picking position as a preferred basis in QM, and saying that QM gives us statistics about position. If you want to say that position has a privileged status in QM, you can do that--that's the Bohmian interpretation, basically, which is explicitly nonlocal. If you don't give position a privileged status, then it seems to me that you have a measurement problem: quantum probabilities only make sense once a basis is chosen.

I really don't think you're right that QFT solves any of these problems.
 
  • #137
A. Neumaier said:
The fact that quantum mechanics works based on these nonlocal assumptions was known already in 1926. Understanding didn't increase by experiments that demonstrated the violation of Bell inequalities. Only some classical reasons how this could possibly be understood in simpler terms were eliminated.

Yes, I think that that's a very important, often overlooked point. Many people act as if Bell's inequality tells us something new about QM. It really doesn't. Bell's inequality (and its violation) rules out a particular class of theories--the locally realistic theories. But we already knew that QM was not that type of theory. So the only impact of Bell's inequality was to dash the hopes of people like Einstein who thought that QM might someday be replaced by such a theory.
 
  • #138
A. Neumaier said:
The same is true in the quantum domain. Apart from being technically very versed experimental physicists, people like Anton Zeilinger are quantum magicians entertaining the world with well-prepared quantum weirdness. And the general public loves it! Judging by its social impact, quantum weirdness will therefore never go away as long as highly reputed scientists are willing to play this role.

To me, the comparison with magicians seems more like this:

We see a magician saw a lady in half and then put her back together, unharmed. There are three different reactions possible:
  1. Some people say: Wow, that guy really has magical powers.
  2. Some people (such as "The Amazing Randi") say: There is some trick involved---I want to figure out what it is.
  3. Other people say: Why are we focusing on such an extreme, unnatural case? In the vast majority of actual cases, when someone is sawed in half, they don't recover. Let's just worry about these typical cases.
 
  • #139
stevendaryl said:
It seems to me that saying that QFT tells "what happens independent of the measurement process" is misleading, if not false. Yes, you can interpret QFT as giving statistical information about fields, and that doesn't seem to involve measurement.
You took my statement out of context. Here I was arguing not about QFT but about the modern foundation of quantum mechanics described in post #128. It is a much more powerful formulation of the Copenhagen interpretation than the usual ones. (Though to save time I didn't make the density matrix version explicit, and that I assumed, like Paris, a finite-dimensional Hilbert space. For a completely specified set of postulates appropriate for modern quantum mechanics (fully compatible but in detail differing from post #128) see my Postulates for the formal core of quantum mechanics from my theoretical physics FAQ. If you want to discuss these, please do so in a separate thread.)

All this is purely about QM in its standard form - just making explicit what people in the literature actually do rather than basing it on published - out-of-date or poorly designed - postulates. My postulates say what happens whether or not something is measured, and tell you how you can verify it statistically by experiment if you are inclined to do so, have the means to prepare the corresponding experiments, and figured out how to extract the ##x## from the detector or its environment. The latter only requires standard qualitative reasoning that experimenters are familiar with.

Everything is in principle verifiable, without ever having to pick a preferred basis. In place of the preferred basis one has the ##P_x##, which are determined by the instrument in a completely rational fashion. Books and lecture notes on quantum information theory teach you how to determine experimentally the ##P_x## for some instrument if you don't know them, and how to find out the density matrix in which a sufficiently stationary source is prepared. All probabilities can be checked by calculating the frequencies of having measured various ##x## and dividing by the frequencies obtained when in place of the instruments one has only a detector that counts the number of systems arriving.

But this is not QFT. In QFT at finite times there is no particle picture, and only field expectations and correlation functions make operational sense. This leads to important differences; see posts #31 and #101.
 
  • #140
I just had the pleasure of listening to a brilliant colloquium talk by Zeilinger. He's far from behaving like a magician, but very careful on the "no-nonsense side" when it comes to interpretation. His experiments over the years do not show any hint of "weirdness" but just verify the predictions of standard quantum theory with high precision, including very successful Bell tests, double-slit/grating experiments with buckyballs demonstrating decoherence etc. etc.
 
