The thermal interpretation of quantum physics

In summary: I like your summary, but I disagree with the philosophical position you take. I think Dr. Neumaier has a good point - QFT may indeed be a better place for interpretations - but I do not know enough of his thermal interpretation to comment on its specifics.
  • #107
A. Neumaier said:
The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way.$$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
  • all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is exactly (according to most formulations) or approximately (according to your cautious formulation) measured with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (probability form), while
  • the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=Tr~\rho A## of ##A##, which is approximately measured.
  • In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
  • Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
  • But they make very different assumptions concerning the nature of what is to be regarded as idealized measurement result.
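A minimal numerical sketch of the two recipes for a single qubit (illustrative only; the Pauli operators and the chosen state are assumptions, not taken from the papers):

```python
import numpy as np

# Pauli operators and a state rho (here the pure state |0><0|, an eigenstate of sz)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho = np.array([[1, 0], [0, 0]], dtype=complex)

def q_expectation(rho, A):
    """Q-expectation <A> = Tr(rho A)."""
    return np.trace(rho @ A).real

def uncertainty(rho, A):
    """sigma_A = sqrt(<A^2> - <A>^2)."""
    return np.sqrt(q_expectation(rho, A @ A) - q_expectation(rho, A) ** 2)

# Traditional recipe: the possible idealized values are the eigenvalues of A
print(np.linalg.eigvalsh(sx))                        # [-1.  1.]

# Thermal-interpretation recipe: a single idealized value, the q-expectation,
# with the uncertainty sigma_A attached to it
print(q_expectation(rho, sz), uncertainty(rho, sz))  # 1.0 0.0  (sz is sharp in this state)
print(q_expectation(rho, sx), uncertainty(rho, sx))  # 0.0 1.0  (sx is maximally uncertain)
```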
Why do you say bullet 2 is different from bullet 1? I use the same trace formula, of course. What else? It's the basic definition of an expectation value in QT, and it's the most general representation-free formulation of Born's rule. Also with the other three points: you say on the one hand the thermal interpretation uses the same mathematical formalism, but it's all differently interpreted. You even use specific probabilistic/statistical notions like "uncertainty" and define it in the usual statistical terms as the standard deviation/2nd cumulant. Why is it then not right to have the same heuristics about it in your thermal interpretation (TI) as in the minimal interpretation (MI)?
A. Neumaier said:
That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is about arguing why this is indeed consistent with the assumptions of the thermal interpretation.
But this overlooks that QT assumes that not all observables can have determinate values at once. At best, i.e., if technically feasible for simple systems, you can only prepare a state such that a complete compatible set of observables takes determinate values. All observables incompatible with this set (almost always) have indeterminate values, and this is not due to imperfect measurement devices but is an inherent feature of the system.
A. Neumaier said:
Now it is experimentally undecidable what an ''idealized measurement result'' should be, since only actual results are measured, not idealized ones.

What to consider as idealized version is a matter of interpretation. What one chooses determines what one ends up with!

As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.

The thermal interpretation has two advantages:
  • It assumes less technical mathematics at the level of the postulates (no spectral theorem, no notion of eigenvalue, no probability theory).
  • It allows one to make definite statements about each single quantum system, no matter how large or small it is.
Well, using the standard interpretation it's pretty simple to state what an idealized measurement is: given the possible values of the observable (e.g., some angular momentum squared ##\vec{J}^2## and one component, usually ##J_z##), you perform an idealized measurement if the resolution of the measurement device is good enough to resolve the (necessarily discrete!) spectral values of the associated self-adjoint operators of this measured quantity. Of course, in the continuous spectrum you don't have ideal measurements in the real world, but any quantum state also predicts an inherent uncertainty given by the formula above. To verify this prediction you need an apparatus which resolves the measured quantity much better than this quantum-mechanical uncertainty.

You can of course argue against this very theoretical definition of "ideal measurements", because sometimes there are even more severe constraints, but these are also fundamental and not simply due to our inability to construct "ideal apparatuses". E.g., in relativistic QT there's an in-principle uncertainty in the localization of (massive) particles due to the uncertainty relation and the finiteness of the limit speed of light (Bohr and Rosenfeld, Landau). But here it is also physically clear why it doesn't make sense to resolve the position better than this fundamental limit: rather than localizing (i.e. preparing) the particle better, you produce more particles, and the same holds for measurement, i.e., the attempt to measure the position much more precisely involves interactions with other particles that again lead to the creation of more particles rather than to a better localization measurement.

Of course, it is very difficult to consider in general terms all these subtle special cases.

But let's see whether I now understand the logic behind your TI better. From #84 I understand that you start by defining the formal core just as the usual Hilbert-space formulation, with a statistical operator representing the state and self-adjoint (or maybe even more general) operators representing the observables, but without the underlying physical probabilistic meaning of the MI. The measurable values of the observables are not the spectral values of the self-adjoint operators representing observables but the q-expectations abstractly defined by the generalized Born rule (the trace formula quoted above). At first glance this makes a lot of sense. You are free to define a theory mathematically without a physical intuition behind it. The heuristics must of course come in when teaching the subject, but it is not inherent to the "finalized" theory.

However, I think this interpretation of what's observable is flawed, because it intermingles the uncertainties inherent in the preparation of the system in a given quantum state with the precision of the measurement device. This is a common misconception leading to much confusion. The Heisenberg uncertainty relation (HUR) is not a description of the unavoidable perturbation of a quantum system by the measurement, which can in principle be made negligible only for sufficiently large/macroscopic systems, but a description of the impossibility of preparing incompatible observables with a better common uncertainty than the HUR allows. E.g., having prepared a particle with a quite accurate momentum, its position has a quite large uncertainty, but nothing prevents you from measuring the position of the particle much more accurately (neglecting the above-mentioned exception for relativistic measurements, i.e., arguing under non-relativistic conditions). Of course, there is indeed an influence of the measurement procedure on the measured system, but that's not what the HUR describes. There's plenty of recent work on this issue (if I remember right, one of the key authors on these aspects is Busch).
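A small numerical check of this preparation reading of the HUR (a sketch with assumed numbers, ##\hbar=1##): a Gaussian state prepared with a small momentum spread necessarily has a large position spread, with ##\sigma_x\sigma_p=1/2##, independently of how precisely a later position measurement resolves it.

```python
import numpy as np

# Assumed toy numbers, hbar = 1: a minimum-uncertainty Gaussian packet prepared
# with sharp momentum; its *prepared* position spread is then necessarily large.
hbar = 1.0
sigma_p = 0.05                          # sharply prepared momentum
sigma_x = hbar / (2.0 * sigma_p)        # corresponding position width, here 10

x = np.linspace(-200.0, 200.0, 40001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4.0 * sigma_x**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize on the grid

prob = np.abs(psi)**2
mean_x = np.sum(x * prob) * dx
std_x = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)

print(std_x)              # ~10: large position uncertainty of the prepared state
print(std_x * sigma_p)    # ~0.5 = hbar/2: a statement about the preparation,
                          # not about the resolution of a subsequent position measurement
```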
 
  • #108
A. Neumaier said:
Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''. A subsystem is selected by picking a vector space of quantities (linear operators) relevant to the subsystem. Regarding a tensor product of two systems as two separate subsystems (as traditionally done) is therefore allowed only when all quantities that correlate the two systems are deemed irrelevant. Thinking in terms of the subsystems only hence produces the weird features visible in the traditional way of speaking.
Einstein coined the very precise word "inseparability" for this particular feature of quantum entanglement. As Einstein clarified in a German paper of 1948, he was quite unhappy about the fact that the famous EPR paper doesn't really represent his particular criticism of QT, because it's not so much the "action at a distance" aspect (which only comes into the game by adding collapse postulates to the formalism, as in the Heisenberg and von Neumann versions of the Copenhagen interpretation, and which imho is completely unnecessary to begin with) but the inseparability. Einstein thus preferred a stronger version of the "linked-cluster principle" than QT predicts in the presence of entanglement, i.e., he didn't even like the correlations due to entanglement (which, however, can never be used to contradict the relativistic signal-propagation speed limit); he insisted on the separability of the objectively observable physical world.

That is of course the very point of Bell's contribution to the issue. It's of course impossible to guess what Einstein would have argued about the fact that the modern Bell measurements show that you either have to give up locality or determinism. I think at the present state of affairs, where the predictions of QT are confirmed with amazing significance in all these measurements, including all the successful attempts to close several loopholes, we have to live with the inseparability according to QT and relativistic local microcausal QFT. As long as there's nothing better, it's only speculative to argue about possible more comprehensive theories, let alone to be sure that such a theory exists at all!
 
  • #109
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:

1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. [...]

2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this): a 2-photon system is not simply "two photons", since it has non-local correlator properties that neither of them possesses alone.

3. From point 2 we may infer that quantum systems are in many cases highly extended objects. What is normally considered to be two spacelike separated photons is in fact a highly extended object.

Viewed in this way, one sees the core similarities between Mermin's (embryonic) interpretation and Arnold's (independent, more developed) interpretation.

Mermin's "Ithaca interpretation" boils down to: "Correlations have physical reality; that which they correlate does not." A couple of the quotes at the start of his paper are also enlightening:

[W]e cannot think of any object apart from the possibility of its connection with other things.

Wittgenstein, Tractatus, 2.0121

If everything that we call “being” and “non-being” consists in the existence and non-existence of connections between elements, it makes no sense to speak of an element’s being (non-being)... Wittgenstein, Philosophical Investigations, 50.

Mermin also relates this to earlier points of view expressed by Lee Smolin and with Carlo Rovelli's "Relational QM". (References are in Mermin's paper above).

4. Stochastic features of QM are generated by the system interacting with the environment. [...]
... i.e., the decoherence from non-diagonal to diagonal state operator (hence, quantum statistics -> classical statistics), driven by interaction with the environment (e.g., random gravitational interaction with everything else). Even extremely weak such interactions can diagonalize a state operator extraordinarily quickly.
 
  • #110
Okay, a possibly more accurate rendering of point 4.

In the Thermal Interpretation, as discussed in point 1, we have quantities ##\langle A \rangle## which take on a specific value in a specific state ##\rho##. Although ##\langle A \rangle## uses the mathematics of (generalized) statistics this is of no more significance than using a vector space in applications where the vectors are not to be understood as displacements, i.e. the physical meaning of a mathematical operation is not tied to the original context of its discovery. ##\langle A \rangle## is simply a quantity.

However, it is an uncertain value. Quantum mechanical systems are intrinsically delocalised/blurred in their quantities, in the same kind of fundamental sense in which "Where is a city, where is a wave?" is a fuzzy question. I say blurred because delocalised seems appropriate to position alone. This is neither to say that it has a precise position of which we are uncertain (as in a statistical treatment of Newtonian mechanics) nor that it has a fundamentally random position (as in some views of QM). For example, particles actually possess world tubes rather than world lines.

However, standard scientific practice is to treat such "blurred" uncertainties statistically, with the same mathematics one uses to treat precise quantities of which one is merely ignorant. This similarity in the mathematics, however, is what has led to viewing quantum quantities as intrinsically random.

For microscopic quantities the blurring is so extreme that a single observation cannot be regarded as an accurate measurement of the quantity. For example, in the case of a particle, when we measure position the measuring device simply becomes correlated with a point within the tube, giving a single discrete reading, but this does not give one an accurate picture of the tube.

Thus we must use the statistics of multiple measurements to construct a proper measurement of the particle's blurred position/world tube.
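A toy sketch of this reconstruction step (the numbers are assumptions, not from the papers): single readings scatter around the blurred value, and only their statistics recover ##\langle A \rangle## and its width.

```python
import numpy as np

# Assumed model: single readings scatter around the blurred value with its intrinsic width.
rng = np.random.default_rng(0)

blurred_value, width = 0.3, 1.0                            # the "true" q-expectation and sigma
readings = rng.normal(blurred_value, width, size=10_000)   # many single, discrete-looking readings

print(readings[0])                      # one reading by itself says almost nothing
print(readings.mean(), readings.std())  # ~0.3 and ~1.0: the blurred quantity, reconstructed
```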

We are then left with the final question of:
"Why if the particle's quantities are truly continuous, for example having world tube, do our instruments record discrete outcomes?"

Section 5.1 of Paper III discusses this. In essence the environment drives the "slow large scale modes" of the measuring device into one of a discrete set of states. These modes correspond to macroscopically observable properties of the device, e.g. pointer reading. Since one cannot track the environment and information is lost into it via dissipation, this takes the form of the macroscopic slow modes stochastically evolving into a discrete set of states.

Thus we have our measuring devices develop an environmentally driven effectively "random" discrete reading of the truly continuous/blurred quantities of the microscopic system. We then apply standard statistical techniques to multiple such discrete measurements to reconstruct the actual continuous quantities.
 
  • #111
DarMM said:
Okay, a possibly more accurate rendering of point 4. [...] Thus we have our measuring devices develop an environmentally driven effectively "random" discrete reading of the truly continuous/blurred quantities of the microscopic system. We then apply standard statistical techniques to multiple such discrete measurements to reconstruct the actual continuous quantities.
What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
 
  • #112
Correct, I forgot to include that. If I get the chance I'll write it up later today.
 
  • #113
A. Neumaier said:
What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
By this do you simply mean the fact that states over all modes (the full manifold) are metastable and, under environmental action, decay to states on the slow mode manifold, which is disconnected? The disconnectedness of the slow manifold then provides the discreteness.
 
  • #114
So in the end are these the same as consistent histories? (And Copenhagen QM can then be derived from them?)
 
  • #115
AlexCaledin said:
So in the end are these the same as consistent histories? (And Copenhagen QM can then be derived from them?)
The whole approach is quite different from consistent histories. The measuring device's large-scale features are driven into a disconnected manifold, states on each component of which represent a "pointer" outcome. This evolution is deterministic, but stochastic under lack of knowledge of the environment: you just aren't sure of the environmental state.

This is very different from consistent histories where everything is fundamentally probabilistic, but one shows the emergence of classical probability for certain macro-observables.
 
  • #116
After all, it seems as if it's also probabilistic or statistical, but it's statistical in the same sense as classical statistics; it's the lack of detailed knowledge about macroscopic systems which brings in the probabilistic element.

This contradicts, however, the very facts addressed by EPR or better by Einstein in his 1948 essay concerning the inseparability. The paradigmatic example is the simple polarization-entangled two-photon state. Two photons can be very accurately prepared in such states via, e.g., parametric downconversion. Then you have two photons with quite well-defined momenta and strictly entangled polarization states. Concentrating just on the polarization states, and measuring the two photons by detectors in appropriate (far distant) regions of space, we can describe the polarization state as the pure state ##\hat{\rho}=|\Psi \rangle \langle \Psi|## with
$$|\Psi \rangle=\frac{1}{\sqrt{2}} (|H,V \rangle - |V,H \rangle).$$
The single-photon states are given by the usual partial traces, leading to
$$\hat{\rho}_1=\hat{\rho}_2=\frac{1}{2} \hat{1}.$$
Here ##1## and ##2## label the places of the polarization measurement devices, i.e., a polarization filter plus a photodetector behind it.
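A short numerical check of these two formulas (a sketch; the basis labels and tensor ordering are assumed conventions): building ##\hat{\rho}=|\Psi \rangle \langle \Psi|## for this state and tracing out either photon indeed gives the maximally mixed single-photon states.

```python
import numpy as np

# Basis |H> = [1,0], |V> = [0,1]; tensor index order: photon 1 (x) photon 2 (assumed convention)
H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2.0)  # |Psi> = (|H,V> - |V,H>)/sqrt(2)
rho = np.outer(psi, psi.conj())                       # pure two-photon polarization state

rho4 = rho.reshape(2, 2, 2, 2)                        # indices (i1, i2, j1, j2)
rho1 = np.einsum('ikjk->ij', rho4)                    # partial trace over photon 2
rho2 = np.einsum('kikj->ij', rho4)                    # partial trace over photon 1

print(rho1)   # [[0.5 0. ] [0.  0.5]]: maximally mixed
print(rho2)   # the same for the other photon
```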

Here no description of the measurement devices is included, but we have irreducibly probabilistic states for the single-photon polarizations, i.e., the polarization of each single photon is maximally indeterminate (in the sense of information theory with the usual Shannon-Jaynes-von Neumann entropy as information measure). According to the minimal interpretation the polarization state is not indeterminate due to lack of knowledge about the environment or the state of the measurement device; the indeterminacy is inherent in the state of the complete closed quantum system, i.e., the two-photon system. It's in a pure state and thus we have as complete knowledge about it as we can have, but due to the entanglement and the consequent inseparability of the two photons, the single-photon polarization states are irreducibly and maximally indeterminate, i.e., we have complete knowledge about the system but still there are indeterminate observables.

In this case you cannot simply redefine some expectation values as being the true observables and then just argue that the Ehrenfest equations for these expectation values describe a deterministic process, thereby glossing over the probabilistic nature of the real-world observations (i.e., that one just measures unpolarized photons when looking at one photon of a polarization-entangled two-photon state).
 
  • #117
So, according to the thermal QM, every event (including all this great discussion) was pre-programmed by the Big Bang's primordial fluctuations?
 
  • #118
DarMM said:
By this do you simply mean the fact that states over all modes (the full manifold) are metastable and, under environmental action, decay to states on the slow mode manifold, which is disconnected? The disconnectedness of the slow manifold then provides the discreteness.
Yes, since you wanted to answer the question
DarMM said:
"Why if the particle's quantities are truly continuous, for example having world tube, do our instruments record discrete outcomes?"
but the argument you then gave didn't answer it.

vanhees71 said:
it's statistical in the same sense as classical statistics; it's the lack of detailed knowledge about macroscopic systems which brings in the probabilistic element.
Yes. The dynamics of the q-expectations of the universe is deterministic, and stochastic features appear in exactly the same way as in Laplace's deterministic classical mechanics of the universe.

vanhees71 said:
This contradicts, however, the very facts addressed by EPR or better by Einstein in his 1948 essay concerning the inseparability.
No, because my notion of indecomposability (post #101) together with extended causality (Section 4.4 of Part II) allows more than Einstein's requirement of separability. But it still ensures locality in the quantum field sense and Poincaré invariance, and hence correctly addresses relativity issues.

vanhees71 said:
The paradigmatic example is the simple polarization-entangled two-photon state.
Two-photon states are discussed in some detail in Section 4.5 of Part II.

vanhees71 said:
Why the heck hasn't he written this down in a 20-30 page physics paper rather than with so much text obviously addressed to philosophers? (It's not meant in as bad a way as it may sound ;-))).
Maybe you can understand now why. There is a lot to be said to make sure that one doesn't fall back into the traditional interpretations and can see how the old issues are settled in a new way. In the past, different people asked different questions and had different problems with the thermal interpretation, as I had discussed it earlier in a more informal way. Originally I wanted to write a 20-page paper, but it grew and grew into the present 4-part series.

vanhees71 said:
the polarization of each single photon is maximally indeterminate (in the sense of information theory with the usual Shannon-Jaynes-von Neumann entropy as information measure).
This is again reasoning from the statistical interpretation, which doesn't apply to the thermal interpretation.

In the thermal interpretation, the single photons in your 2-photon state are unpolarized. This is simply a different (but completely determined) state than one of the polarized states. There is no experimental way to get more information about an unpolarized photon. Thus it is a state of maximal information; the Shannon entropy (a purely classical concept) is here irrelevant.

The fact that unpolarized light can be turned into polarized light by a filter is not against this since the light coming out of the filter is different from that going into the filter. One can also turn a neutron into a proton plus an electron just by isolating it and waiting long enough, but this doesn't prove that the neutron is composed of a proton and an electron!

vanhees71 said:
According to the minimal interpretation [...] we have complete knowledge about the system but still there are indeterminate observables.
Whereas according to the thermal interpretation, if we have complete knowledge about the system we know all its idealized measurement results, i.e., all q-expectation values, which is equivalent to knowing its density operator.

If you want to assess the thermal interpretation you need to discuss it in terms of its own interpretation and not in terms of the statistical interpretation!
 
  • #119
AlexCaledin said:
So, according to the thermal QM, every event (including all this great discussion) was pre-programmed by the Big Bang's primordial fluctuations?
Did Laplace face the same complaint about his clockwork universe?

Yeah, God must be an excellent programmer who knows how to create an interesting universe! But maybe consciousness is not preprogrammed and allows for some user decisions (cf. my fantasy ''How to Create a Universe'' - written 20 years ago when the thermal interpretation was still an actively pursued dream rather than a reality)? We don't know yet...

In any case, the deterministic universe we live in is much more interesting than Conway's Game of Life, which already creates quite interesting toy universes in a deterministic way by specifying initial conditions and deterministic rules for the dynamics.
 
  • #120
A. Neumaier said:
Yes, since you wanted to answer the question

but the argument you then gave didn't answer it.
I wasn't sure if it also required an argument that the slow mode manifold is in fact disconnected, and I couldn't think of one. Metastability of states on the full manifold decaying into those on the slow manifold is enough, provided the slow mode manifold is disconnected. Is there a reason to expect this in general?

A. Neumaier said:
Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''.
This somewhat confuses me. I would have thought the notion of reducibility just means it can be completely decomposed, i.e. the total system is simply a composition of the two subsystems and nothing more. What subtlety am I missing?
 
  • #121
DarMM said:
I wasn't sure if it also required an argument that the slow mode manifold is in fact disconnected, and I couldn't think of one. Metastability of states on the full manifold decaying into those on the slow manifold is enough, provided the slow mode manifold is disconnected. Is there a reason to expect this in general?
As soon as there are two local minima in the compactified universe (i.e., including minima at infinity), the answer is yes. For geometric reasons, each local minimizer has its own catchment region, and these are disjoint. This accounts for the case where the slow modes are fixed points. But similar things hold more generally. It is the generic situation, while the situation of a connected slow manifold is quite special (though of course quite possible).

DarMM said:
This somewhat confuses me. I would have thought the notion of reducibility just means it can be completely decomposed, i.e. the total system is simply a composition of the two subsystems and nothing more. What subtlety am I missing?
To 'reduce' is a vague notion that can mean many things. For example, reductionism in science means the possibility of reducing all phenomena to physics.

In the three papers I used ''reduced description'' for any description of a single system obtained by coarse-graining, whereas decomposing a system into multiple subsystems and a description in these terms is a far more specific concept.
 
  • #122
A. Neumaier said:
What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
For those reading, what is lacking here is that I simply said:
DarMM said:
In essence the environment drives the "slow large scale modes" of the measuring device into one of a discrete set of states
The "drives" here is vague and doesn't explain the mechanism.

What happens is that, if the slow manifold is disconnected, then states on the manifold of all modes of the device are metastable and, under disturbance from the environment, decay into a state on one of the components of the slow manifold.

In other words, the macroscopically observable features of the device need only a small amount of environmental noise to fall into one of a discrete set of minima, corresponding to the discrete outcome readings, since other, more general states of the device are only metastable.
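A toy sketch of this mechanism (the potential and noise level are assumptions, not from the papers): an overdamped pointer coordinate in a double-well potential, started between the wells, is driven by weak environmental noise into one of the two discrete minima.

```python
import numpy as np

# Overdamped pointer coordinate x in V(x) = (x^2 - 1)^2, minima at x = -1 and x = +1.
# Started at the metastable point x = 0, weak noise makes it settle into one minimum.
rng = np.random.default_rng(1)

def dVdx(x):
    return 4.0 * x * (x**2 - 1.0)

def settle(noise=0.05, dt=1e-3, steps=20_000):
    x = 0.0                                          # metastable initial condition
    for _ in range(steps):
        x += -dVdx(x) * dt + np.sqrt(2.0 * noise * dt) * rng.normal()
    return x

outcomes = np.array([settle() for _ in range(20)])
print(np.round(outcomes, 2))                         # each run ends near +1 or -1: a discrete reading
print((outcomes > 0).sum(), (outcomes < 0).sum())    # split between the wells (~50/50 over many runs)
```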
 
  • #123
Thanks @A. Neumaier , I think I've an okay (I hope!) grasp of this view now.
 
  • #124
DarMM said:
I think I've an okay (I hope!) grasp of this view now.
Why don't you state a revised 4-point summary of your view of my view? Then I'll give you (again) my view of your view of my view!

Note that I edited my post #121 to account for the case where the slow manifold isn't just centered around a discrete number of fixed points.
 
  • #125
vanhees71 said:
You even use specific probabilistic/statistical notions like "uncertainty" and define it in the usual statistical terms as the standard deviation/2nd cumulant.
I spent several pages on uncertainty (Subsection 2.3 of Part II) to show that uncertainty is much more fundamental than statistical uncertainty. For example, consider the uncertainty of the diameter of the city of Vienna. It is not associated with any statistics but with the uncertainty of the concept itself.

The thermal interpretation treats all uncertainty as being of this kind, and says that the quantum formalism always predicts this conceptual, nonstatistical uncertainty. In addition, in the special case where we have a large sample of similarly prepared systems, it also predicts the statistical uncertainty.
 
  • #126
A. Neumaier said:
Whereas according to the thermal interpretation, if we have complete knowledge about the system we know all its idealized measurement results, i.e., all q-expectation values, which is equivalent to knowing its density operator.

If you want to assess the thermal interpretation you need to discuss it in terms of its own interpretation and not in terms of the statistical interpretation!
Ok, this makes sense again. So for you determinism doesn't refer to the observables but to the statistical operators (or states). That's of course true in the minimal statistical interpretation too. So the thermal interpretation is again equivalent to the standard interpretation; you only relabel the language associated with the math, not talking about probability and statistics but only about q-expectation values. I can live with that easily :-).
 
  • #127
A. Neumaier said:
our deterministic universe

So do you think alpha decay or spontaneous emission is also deterministic?
 
  • #128
ftr said:
So do you think alpha decay or spontaneous emission is also deterministic ?
On the fundamental level, yes, since each observed alpha decay is something described by some of the observables in the universe. But, as with casting a die, it is practically indeterministic.
 
  • #129
vanhees71 said:
So for you determinism doesn't refer to the observables but to the statistical operators (or states).
No; it is not an alternative but both! It refers to the (partially observable) beables, which are the q-expectations. This determinism is equivalent to the determinism of the density operator.
vanhees71 said:
That's of course true in the minimal statistical interpretation too. So the thermal interpretation is again equivalent to the standard interpretation,
No, because the meaning assigned to ''observable'' and ''state'' is completely different.

For you, observed are only eigenvalues; for the thermal interpretation, eigenvalues are almost never observed. As in classical physics!

For you, the state of the universe makes no sense at all; for the thermal interpretation, the state of the universe is all there is (on the conceptual level), and every other system considered by physicists is a subsystem of it, with a state completely determined by the state of the universe. As in classical physics!

For you, quantum probability is something irreducible and unavoidable in the foundations; for the thermal interpretation, probability is not part of the foundations but an emergent phenomenon. As in classical physics!

How can you think that both interpretations are equivalent?

Only the things they try to connect - the formal theory and the experimental record - are the same, but how they mediate between them is completely different (see post #99).
vanhees71 said:
you say on the one hand the thermal interpretation uses the same mathematical formalism, but it's all differently interpreted.
vanhees71 said:
you only relabel the language associated with the math, not talking about probability and statistics but only about q-expectation values. I can live with that easily :-).
The language associated with the math - that's the interpretation!

One can associate with it Copenhagen language or minimal statistical language - which is what tradition did, resulting in nearly a century of perceived weirdness of quantum mechanics by almost everyone - especially
  • by all newcomers without exception and
  • by some of the greatest physicists (see the quotes at the beginning of Section 5 of Part III).
Or one can associate thermal, nonstatistical language with it, restoring continuity and common sense.

Everyone is free to pick their preferred interpretation. It is time to change preferences!
 
  • #130
vanhees71 said:
Einstein [...] insisted on the separability of the objectively observable physical world. [...] It's of course impossible to guess what Einstein would have argued about the fact that the modern Bell measurements show that you either have to give up locality or determinism.
Well, I am not Einstein, and therefore have more freedom.

Also, the nonlocality of Bell has nothing to do with locality in the sense of relativity theory. I explained this in Subsection 2.4 of Part II, where I showed that Bell nonlocality is fully compatible with special relativity when the extendedness of physical objects is properly taken into account.

The thermal interpretation has both determinism and properly understood locality, namely as independent preparability in causally disjoint regions of spacetime, consistent with the causal commutation rules of relativistic quantum field theory. It also has Bell nonlocality in the form of extended causality; see Subsection 2.4 of Part II and the discussion in this PF thread.

Maybe Einstein would have been satisfied.
 
  • #131
A. Neumaier said:
Maybe Einstein would have been satisfied.
Maybe not; he would have found a hole in your theory right away :smile:. Seriously, what about tunneling?
 
  • #132
ftr said:
Maybe not; he would have found a hole in your theory right away :smile:.
Maybe. There are many others who might want to try and find such a hole in my interpretation! Not my theory - the theory is standard quantum physics!
ftr said:
what about tunneling?
This is just a particular way a state changes with time.

Consider a bistable symmetric quartic potential in 1D with a tiny bit of dissipation inherited from the environment. Initially, the density is concentrated inside the left well, say; at large enough time it is concentrated essentially equally in both wells. The position has little uncertainty initially and a lot of uncertainty once the tunneling process is completed.
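A small numerical illustration of the uncertainty statement (well separation and width are assumed numbers): the position spread of a density concentrated in the left well versus one spread equally over both wells.

```python
import numpy as np

# Assumed geometry: wells at x = -a and x = +a, density of width w in each well.
a, w = 2.0, 0.3
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def well_density(x0):
    g = np.exp(-(x - x0)**2 / (2.0 * w**2))
    return g / (np.sum(g) * dx)                         # normalized density

def sigma_x(p):
    m = np.sum(x * p) * dx
    return np.sqrt(np.sum((x - m)**2 * p) * dx)

before = well_density(-a)                               # concentrated in the left well
after = 0.5 * well_density(-a) + 0.5 * well_density(a)  # equal weight in both wells

print(sigma_x(before))   # ~0.3 : little position uncertainty before tunneling
print(sigma_x(after))    # ~2.0 : large uncertainty, of the order of the well separation
```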
 
  • #133
Let's see if I got the terminology straight.

In standard QM, observables are self-adjoint operators. The thermal interpretation refers to these as q-observables instead (paper I, p.3). Historically, before their mathematical nature was completely understood, Dirac referred to them as q-numbers.

The standard QM usage of the term "observable" is a bit strange because self-adjoint operators are not observable in the everyday sense of the word. The thermal interpretation tries to move closer to the everyday usage of the word and defines observables as "numbers obtainable from observations" (paper I, p.3). This is similar to what Dirac called c-numbers (although I think he included complex numbers in the concept and the thermal interpretation probably doesn't).

In standard QM, expectation values and probabilities are inherently probabilistic properties of self-adjoint operators. Since they are "numbers obtainable from observations", they are observables in the thermal interpretation and calling them q-expectations and q-probabilities is done in reference to the usage in standard QM but doesn't reflect anything probabilistic in their mathematical definition. I'm not sure whether eigenvalues should also be called observables. Is the definition of observable tied to whether an experiment can actually be performed in a sufficiently idealized form?

Is this correct so far? I think it is a bit unfortunate to use the term "observable" in the thermal interpretation at all, because the term is so deeply ingrained in standard QM, which makes it prone to misunderstandings.
 
  • #134
kith said:
Let's see if I got the terminology straight. [...] I'm not sure whether eigenvalues should also be called observables. Is the definition of observable tied to whether an experiment can actually be performed in a sufficiently idealized form? Is this correct so far?
Yes. Eigenvalues of a q-observable are state-independent, hence are not even beables.
In the thermal interpretation, q-expectations are defined for many non-Hermitian operators (for example annihilation operators!) and then may be complex-valued.
Note that an observation in the thermal interpretation can be anything that can be reproducibly computed from experimental raw data - reproducible under repetition of the experiment, not of the computations. This can be real or complex numbers, vectors, matrices, statements, etc.
kith said:
I think it is a bit unfortunate to use the term "observable" in the thermal interpretation at all, because the term is so deeply ingrained in standard QM, which makes it prone to misunderstandings.
Actually, in the papers I avoid the notion of an observable because of the possible confusion. I use beable for what exists (all functions of q-expectations) and say that some beables are observable (not observables!). But I sometimes call the traditional self-adjoint operators q-observables (the prefix q- labels all traditional notions that in the thermal interpretation would result in a misleading connotation) and sometimes informally call the observable beables ''observables'' (which matches the classical notion of an observable). However, if you see this done in the papers, please inform me (not here but preferably by email) so that I can eliminate it in the next version.
 
  • #135
vanhees71 said:
whether one can use these concepts to teach QM 1 from scratch, i.e., can you start from some heuristic, intuitive physical arguments to generalize the Lie-algebra approach of classical mechanics in terms of the usual Poisson brackets? Maybe that would be an alternative approach to QM which avoids all the quibbles of starting with pure states and only finally arriving at the general case of statistical operators as the description of quantum states?
I would introduce quantum mechanics with the qubit, which is just 19th century optics. This produces the density operator, the Hilbert space, the special case of pure states, Born's rule (aka Malus' law), the Schrödinger equation, and the thermal interpretation - all in a very natural way.
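A small sketch of that correspondence (the polarization vectors and projector are the assumed textbook conventions): Born's rule ##\mathrm{Tr}\,\rho P_\theta## for the polarization qubit reproduces Malus' ##\cos^2##-law, and gives 1/2 for every angle in the unpolarized state.

```python
import numpy as np

def pol(theta):
    """Real Jones vector of linear polarization at angle theta."""
    return np.array([np.cos(theta), np.sin(theta)])

rho = np.outer(pol(0.0), pol(0.0))                 # light fully polarized along theta = 0

for theta in np.linspace(0.0, np.pi / 2, 5):
    P = np.outer(pol(theta), pol(theta))           # projector onto polarization at theta
    transmitted = np.trace(rho @ P).real           # Born's rule Tr(rho P)
    print(round(theta, 3), round(transmitted, 3), round(np.cos(theta)**2, 3))  # Malus: cos^2(theta)

unpolarized = np.eye(2) / 2.0                      # unpolarized light
print(np.trace(unpolarized @ np.outer(pol(0.3), pol(0.3))).real)   # 0.5, for every angle
```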

To deepen the understanding, one can discuss classical mechanics in terms of the Lie algebra of phase space functions given by the negative Poisson bracket, and then restrict to a rigid rotor, described by an so(3) given by the generators of angular momentum. This example is the one given in the last two paragraphs of post #63, and also provides the Lie algebra for the qubit.

Next one shows that this Lie algebra is given by a scaled commutator. This generalizes and defines the Lie algebras that describe quantum mechanics. Working out the dynamics in terms of the q-expectations leads to the Ehrenfest equations. Then one can introduce the Heisenberg, Schrödinger, and interaction picture and their dynamics.

Then one has everything, without any difficult concepts beyond the Hilbert space and the trace, which appeared naturally. There is no need yet to mention eigenvalues and eigenvectors (these come when discussing stationary states), the subtle problems with self-adjointness (needed when discussing boundary conditions), or the spectral theorem (needed when defining the exponentials ##U(t)=e^{\pm itH}##). The latter two issues are completely absent as long as one works within finite-dimensional Hilbert spaces; so perhaps initially doing some quantum information theory makes sense.
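A compact sketch of these steps for the qubit (##\hbar=1## and the Hamiltonian ##H=\tfrac{\omega}{2}\sigma_z## are assumptions, with a crude Euler integration): the Lie product as a scaled commutator, the von Neumann equation for ##\rho##, and the resulting precession of the q-expectations.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

omega = 1.0
H = 0.5 * omega * sz                   # assumed Hamiltonian, hbar = 1

def lie(A, B):
    """Quantum Lie product taken here as the scaled commutator i[A, B] (hbar = 1)."""
    return 1j * (A @ B - B @ A)

rho = 0.5 * (I2 + sx)                  # Bloch vector initially along +x
dt, steps = 0.001, 6283                # integrate one precession period 2*pi/omega (Euler, crude)

for _ in range(steps):
    rho = rho + dt * (-lie(H, rho))    # von Neumann equation: d rho/dt = -i[H, rho]

expect = lambda A: np.trace(rho @ A).real
print(expect(sx), expect(sy), expect(sz))   # ~(1, 0, 0): the q-expectations precess about z,
                                            # as the Ehrenfest equations d<A>/dt = <i[H, A]> demand
```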
vanhees71 said:
First one has to understand the most simple cases to understand the meaning of an interpretation.
The Stern-Gerlach experiment is a very good example for that. [...] how would the analogous calculation work with the thermal representation
The calculations are of course identical, since calculations are not part of the interpretation.

But the interpretation of the calculation is different: In the thermal interpretation, the Ag field is concentrated along the beam emanating from the source, with a directional mass current. The beam is split by the magnetic field into two beams, and the amount of silver on the screen at the end measures the integrated beam intensity, the total transported mass. This is in complete analogy to the qubit treated in the above link. Particles need not be invoked.
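A minimal sketch of that reading (the beam's spin state is an assumption): the two spots measure the relative integrated intensities ##\mathrm{Tr}\,\rho P_\pm##, whose difference is the q-expectation ##\langle\sigma_z\rangle## of the beam.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
P_up = np.array([[1, 0], [0, 0]], dtype=complex)    # projector onto sz = +1
P_down = np.array([[0, 0], [0, 1]], dtype=complex)  # projector onto sz = -1

rho = 0.5 * (np.eye(2) + sx)           # assumed beam state: spin polarized along +x

up = np.trace(rho @ P_up).real
down = np.trace(rho @ P_down).real
print(up, down)                        # 0.5 0.5: fraction of the transported mass in each spot
print(np.trace(rho @ sz).real)         # 0.0 = up - down: the q-expectation <sz> of the beam
```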
 
  • #136
vanhees71 said:
for me your thermal interpretation is not different from the standard interpretation as expressed by van Kampen in the following informal paper: https://doi.org/10.1016/0378-4371(88)90105-7
It is very different.

van Kampen uses the standard assumptions of the Copenhagen interpretation, with pure states associated to single systems (p.99, after theorem III) and with collapse (which he claims to deduce on p.106, but his argument is sketchy exactly here: he deduces the collapse of the measured system from the silently assumed collapse of system+detector). His alleged ''proof'' is discussed in detail in Bell's paper (http://www.johnboccio.com/research/quantum/notes/bell.pdf) on pp. 14-17.
 
  • #137
A. Neumaier said:
It is very different.

van Kampen uses the standard assumptions of the Copenhagen interpretation, with pure states associated to single systems (p.99, after theorem III) and with collapse (which he claims to deduce on p.106, but his argument is sketchy exactly here: he deduces the collapse of the measured system from the silently assumed collapse of system+detector). His alleged ''proof'' is discussed in detail in Bell's paper (http://www.johnboccio.com/research/quantum/notes/bell.pdf) on pp. 14-17.

Thanks for posting a link to that essay. I think Bell summarizes pretty well what I find unsatisfactory about most textbook descriptions of quantum mechanics.
 
  • #138
stevendaryl said:
Thanks for posting a link to that essay. I think Bell summarizes pretty well what I find unsatisfactory about most textbook descriptions of quantum mechanics.
Then you should like the thermal interpretation, which suffers from none of what Bell complains about! It just takes a little getting used to...
 
  • #139
A. Neumaier said:
No; it is not an alternative but both! It refers to the (partially observable) beables, which are the q-expectations. This determinism is equivalent to the determinism of the density operator. [...] How can you think that both interpretations are equivalent?
Well, we obviously have very different views on the fundamental meaning of QT, and that leads to mutual misunderstandings.

If really only the very coarse-grained, FAPP-deterministic macroscopic values were observables (or "beables", to use this confusing funny language), QT would never have been discovered. In fact we can observe more detailed things for small systems, and these details are even very important for making the observed atomistic structure of matter consistent with classical physics, particularly with the everyday experience of the stability of matter.

You are of course right that in standard QT neither the state nor the observable operators are by themselves deterministic; only the expectation values (of which the probabilities or probability distributions are special cases) are "beables", i.e., the picture- and representation-independent observable predictions of the theory.

It is also of course true that for macroscopic systems the possible resolution of real-world measurement devices is far too coarse to measure the "eigenvalues", i.e., the microscopic details of ##\mathcal{O}(10^{23})## microscopic degrees of freedom.

What I think is still not clarified is the operational meaning of what you call q-expectations. For me they have the same meaning in the standard minimal interpretation and in your thermal interpretation, because they are the same in the formal math (##\mathrm{Tr}\,\hat{A} \hat{\rho}##, obeying the same equations of motion) and also the same operationally, namely just what they are called, i.e., expectation values.

Of course, "the state of the entire universe" is a fiction in any physics. It's principally unobservable and thus subject of metaphysical speculation in QT as well as classical physics.

A. Neumaier said:
Only the things they try to connect - the formal theory and the experimental record - are the same, but how they mediate between them is completely different (see post #99).
So in fact they ARE the same.
A. Neumaier said:
The language associated with the math - that's the interpretation! [...] Or one can associate thermal, nonstatistical language with it, restoring continuity and common sense. Everyone is free to pick their preferred interpretation. It is time to change preferences!
For me it's very hard to follow any interpretation which forbids me to understand "thermal language" that is "not statistical". I already have a very hard time with traditionally axiomatized "phenomenological thermodynamics", where, e.g., the central notion of entropy is defined by introducing temperature as an integrating factor of an abstract Pfaffian form. The great achievement of the Bernoullis, Maxwell, and above all Boltzmann was to connect these notions with the underlying fundamental dynamical laws of their time in terms of statistical physics, and that very general foundation has so far withstood all the "revolutions" of 20th-century physics, i.e., relativity (which anyway is just a refined classical theory of the description of space and time and thus not as revolutionary as it appeared at the turn of the 20th century) and QT (which indeed in some sense can be considered really revolutionary in breaking with the deterministic world view).

The "apparent weirdness" of QT is for me completely resolved by the minimal statistical interpretation. It's not QT is weird but our prejudice that our "common sense", trained by everyday experience with rough macroscopic observables (or preceptions if you wish), tells us the full structure of matter.

In my opinion we should stop talking about QT as "something weird nobody understands" and rather state that it's the most detailed theory we have so far. The real problems are not these metaphysical quibbles of the last millennium but the open, unsolved questions of contemporary physics, which are:

- a consistent quantum description of the gravitational interaction: does it imply a "quantization of spacetime", as suggested by the close connection between gravity and the geometry of the space-time manifold (a pseudo-Riemannian/Lorentzian manifold as in GR, or rather the more natural extension to an Einstein-Cartan manifold, gauging the Lorentz group), or is something completely new ("revolutionary") needed? I think the answers to these questions are completely open at the moment, and despite many mathematically fascinating ideas (string and M-theory, loop quantum gravity, ...) I fear we'll have a very hard time without any empirical glimpse of what the observational features of whatever "quantum effect on gravitation and/or the space-time structure" might be.

- the nature of what's dubbed "Dark Energy" and "Dark Matter", which may be related to the question of quantum gravity too. Here also, I think it's hard to expect any progress without some empirical guidance as to what the physics beyond the standard model may be.
 
  • #140
vanhees71 said:
The "apparent weirdness" of QT is for me completely resolved by the minimal statistical interpretation. It's not QT is weird but our prejudice that our "common sense", trained by everyday experience with rough macroscopic observables (or preceptions if you wish), tells us the full structure of matter.

I don't think that's true. It's neither true that the weirdness is resolved by the minimal interpretation, nor is it true that it has anything to do with prejudice by "common sense". The minimalist interpretation is pretty much what Bell was criticizing in his essay. To quote from it:
Here are some words which, however legitimate and necessary in application, have no place in a formulation with any pretension to physical precision: system, apparatus, environment, microscopic, macroscopic, reversible, irreversible, observable, information, measurement.
 
