The thermal interpretation of quantum physics

In summary: I like your summary, but I disagree with the philosophical position you take. In summary, I think Dr Neumaier has a good point - QFT may indeed be a better place for interpretations. I do not know enough of his thermal interpretation to comment on its specifics.
  • #71
DarMM said:
Could you explain this a bit more? Surely a finite subregion of spacetime contains a maximum energy level, and the compactness criterion is known to be valid for free fields (as is the nuclearity condition). Generally in AQFT it is considered that the Hilbert space of states in a finite subregion is finite dimensional, as this condition implies a sensible thermodynamics and an asymptotic particle interpretation.

I appreciate how dissipation allows a realist account of the stochastic nature of QM in your interpretation (based on the lucid account in section 5.2 of Paper III), so no argument there. I'm simply wondering about the need for infinite-dimensional Hilbert spaces in finite spacetime volumes.
Unbounded space and unbounded energy are needed to make dissipation possible!

Classically it ensures, for example, that Poincaré's recurrence theorem cannot be applied. I don't know what the right quantum analogue should be.

I don't know yet the precise mechanism that could rigorously lead to dissipation. The common wisdom is to employ the thermodynamic limit and an associated phase transition, but this limit is an idealization that is unlikely to be the full truth.

Thus there are many interesting open questions with significant mathematical challenges. In my opinion, these are much more important than proving or analyzing no-go theorems that assume that the Born rule is an exact law of Nature.
 
  • Like
Likes dextercioby and DarMM
  • #72
What exactly does "exact" mean, when applied to a probabilistic rule?
 
  • #73
AlexCaledin said:
What exactly does "exact" mean, when applied to a probabilistic rule?
Exact refers to the facts that
  1. the possible measurement values are the exact eigenvalues (according to most interpretations),
  2. theoretical conclusions are drawn at the level of probability theory (which is exact, except for its application to reality), and
  3. the probabilities follow exactly the law of large numbers (when compared with experiment); a small numerical sketch of points 1 and 3 follows below.
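A minimal numerical sketch of points 1 and 3 (my own toy illustration, with an arbitrarily chosen observable and state; not taken from the papers):
```python
# Toy sketch: the possible outcomes are the exact eigenvalues of A, while the
# empirical frequencies only approach the exact Born probabilities as N grows
# (law of large numbers), with fluctuations of order 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0], [1.0, 0.0]])        # observable (sigma_x), chosen arbitrarily
psi = np.array([np.cos(0.3), np.sin(0.3)])    # arbitrary normalized pure state

eigvals, eigvecs = np.linalg.eigh(A)          # exact eigenvalues: -1, +1
born_p = np.abs(eigvecs.conj().T @ psi) ** 2  # exact Born probabilities

for N in (10, 1_000, 100_000):
    outcomes = rng.choice(eigvals, size=N, p=born_p)
    freqs = [(outcomes == v).mean() for v in eigvals]
    print(N, "exact:", born_p.round(4), "empirical:", np.round(freqs, 4))
```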
 
  • Like
Likes AlexCaledin
  • #74
Thank you... So exact rules can be in mathematical models.
 
  • #75
A. Neumaier said:
Your lecture notes on statistical mechanics, revised yesterday (p. 20 in the version of 5 March 2019), are a little more cautious in formulating the traditional Born rule:

With this formulation, my argument only shows that there are no ''precise measurements'' of energy.

But then with your foundations, the whole of statistical mechanics hangs in the air because these foundations are too imprecise!

You seem to interpret the total energy in statistical thermodynamics as a mean of somehow measured energies of the zillions of atoms in the macroscopic body.

But your postulates in the lecture notes apply (as stated) only to measurements, not to unmeasured averages over unobserved fluctuations. Thus it seems that you assume that a body in equilibrium silently and miraculously performs ##10^{23}## measurements and averages these. But how are these measured? How often? How long does it take? Where are the recorded measurement results? What is the underlying notion of measurement? And how do these surely very inaccurate and tiny measurements result in a highly accurate q-expectation value? Where is an associated error analysis guaranteeing the observed accuracy of the total energy measured by the thermal engineer?

You cannot seriously assume these zillions of measurements. But then you cannot conclude anything from your postulates, which are explicitly about measured stuff.

Or are they about unmeasured stuff? But then it is not a bridge to the observed world, and the word 'measurement' is just a pretense that it is so.

The thermal interpretation has no such problems! It only claims that the q-expectation is approximately measured when it is known to be measured and a measurement result is obtained by the standard measurement protocols.
The meaning of your interpretation gets more and more enigmatic to me.

In the standard interpretation the possible values of observables are given by the spectral values of self-adjoint operators. To find these values you'd have to measure energy precisely. This is a fiction of course. It's even a fiction in classical physics, because real-world measurements are always uncertain, and that's why we need statistics from day one in the introductory physics lab to evaluate our experiments. Quantum theory has nothing to do with these uncertainties of real-world measurements.

At the same time, you say the very same thing about measurements within your thermal interpretation that I express within the standard interpretation. As long as the meaning of q-averages is not clarified, I cannot even understand the difference between the statements. That's the problem.

In another posting you claim I'd not have read Section 3.3. I have read it, but obviously it did not convey to me what you really wanted to say, because already at the very beginning I cannot make any sense of the words without the standard probabilistic interpretation of the meaning of the trace formula. That's the meaning the Ehrenfest theorem has in QT. I've no clue what you mean by the "Ehrenfest picture". I know the Schrödinger, the Heisenberg, and the general Dirac picture, but that's something completely different. Misunderstanding a text is not always, and never solely, the fault of the reader...

As I'd already suggested in a private e-mail conversation, for me your thermal interpretation is not different from the standard interpretation as expressed by van Kampen in the following informal paper:

https://doi.org/10.1016/0378-4371(88)90105-7

There's also no problem with single measurements in the standard representation. The definite reading of a measurement apparatus's pointer is due to the coarse graining of the reading: the macroscopic pointer position is an average over many fluctuations over times that are macroscopically small but microscopically huge, the fluctuations being invisible to us within the resolution of the reading.

A classical analogue is the definite reading of a galvanometer measuring a rectified AC current. The inertia of the pointer leads to an effective time-averaging over the fluctuating current, yielding the "effective current" (via appropriate calibration of the scale). For the unrectified AC current the same setup gives a zero reading of the galvanometer through the same "averaging process".
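A minimal numerical sketch of this averaging (my own illustration; the waveform and duration are arbitrary):
```python
# Toy sketch: a slow pointer effectively time-averages the fluctuating current.
# A full-wave rectified sine averages to 2/pi of the amplitude (the "effective
# current"); the unrectified sine averages to zero.
import numpy as np

t = np.linspace(0.0, 100.0, 200001)   # 100 periods of a unit-frequency sine
i_ac = np.sin(2 * np.pi * t)          # unrectified alternating current
i_rect = np.abs(i_ac)                 # full-wave rectified current

print("time-averaged unrectified current:", round(i_ac.mean(), 4))    # ~ 0.0
print("time-averaged rectified current:  ", round(i_rect.mean(), 4))  # ~ 0.6366 = 2/pi
```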

Averaging in the standard representation of QT is not necessarily the repetition of a measurement in the sense of a Gibbs ensemble!
 
  • #76
A. Neumaier said:
The thermal interpretation says that particles are fiction

So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
 
  • #77
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
This isn't really something confined to @A. Neumaier 's thermal interpretation. In interacting QFTs particles only exist asymptotically in scattering processes. In the Standard Model, hydrogen is a state which (under scattering processes) can evolve into a state with large overlap with a proton and electron product state.

In QFT the only sense you can give to one particle "being made of" a collection of others is that at asymptotic times it has large overlap with the multiparticle state of such a collection. However for many particles it doesn't overlap asymptotically with a single unique multiparticle state, so you have freedom in what you choose to say something is made of.
 
  • Like
Likes vanhees71 and dextercioby
  • #78
DarMM said:
This isn't really something confined to @A. Neumaier 's thermal interpretation. In interacting QFTs particles only exist asymptotically in scattering processes. In the Standard Model, hydrogen is a state which (under scattering processes) can evolve into a state with large overlap with a proton and electron product state.

In QFT the only sense you can give to one particle "being made of" a collection of others is that at asymptotic times it has large overlap with the multiparticle state of such a collection. However for many particles it doesn't overlap asymptotically with a single unique multiparticle state, so you have freedom in what you choose to say something is made of.

But in non-relativistic QM we do have the concept of a single electron. In the thermal interpretation (for NRQM) the claim is that there are no particles; that is puzzling.
 
  • #79
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
A manifestation of the electron field with a computable charge distribution, covering more or less the classical size of the atom.
ftr said:
But in non-relativistic QM we do have the concept of a single electron. In the thermal interpretation (for NRQM) the claim is that there are no particles; that is puzzling.
The concept of a single electron is a convenient approximation of the more fundamental concept of the electron field from QED.

The nonexistence of single electrons inside a nonrelativistic multi-electron system can also be seen from the fact that on the Hilbert space of a multi-electron system (the space of antisymmetrized wave functions) there are no position operators for single electrons, while there are distribution-valued operators for the charge density at any space-time point.

Only in certain approximations, one can talk in some sense about single electrons. For example, in the Hartree-Fock approximation of an atom, one can talk about the outermost electron, namely the one whose energy is largest. This is possible because in this approximation, the total energy of an ##N##-electron system can be naturally decomposed into a sum of ##N## energies for single electrons.
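For concreteness, one way to make this decomposition explicit (a sketch in standard Hartree-Fock notation, with one-electron integrals ##h_{ii}## and Coulomb/exchange integrals ##J_{ij}##, ##K_{ij}##):
$$E_{HF}=\sum_{i=1}^N h_{ii}+\tfrac12\sum_{i,j=1}^N (J_{ij}-K_{ij})=\sum_{i=1}^N \tilde\epsilon_i,\qquad \tilde\epsilon_i:=h_{ii}+\tfrac12\sum_{j=1}^N (J_{ij}-K_{ij}),$$
so the total energy splits exactly into ##N## single-electron shares ##\tilde\epsilon_i##, one per occupied orbital.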

In general, secondary concepts in physics are emergent, approximate concepts arising from an appropriate approximate version of a more fundamental concept. Just as an atom has no temperature, but a macroscopic body has one.
 
  • Like
Likes dextercioby
  • #80
A. Neumaier said:
A manifestation of the electron field with a computable charge distribution, covering more or less the classical size of the atom.

In effect you are saying that the electron has a size. What is inside it? What is the charge distribution?
 
  • #81
ftr said:
In effect you are saying that the electron has a size. What is inside it? What is the charge distribution?
No. The electron field has a charge density concentrated in a small region of atomic size if bound, of beam shape if fast moving.
 
  • Like
Likes vanhees71 and dextercioby
  • #82
A. Neumaier said:
No. The electron field has a charge density concentrated in a small region of atomic size if bound, of beam shape if fast moving.

I am sorry, I did not get what you meant. I ask again: what is "charge density", and what gives rise to it? Moreover, the electron "cloud" surrounds the proton, so the electron "field" does not seem to be contiguous. Is it like a glass of water with the proton as an ice cube?
 
  • #83
How does the Ehrenfest-Tolman effect affect this?
 
  • #84
A. Neumaier said:
It means that there are additional correlation degrees of freedom. Take your observables to be fields and you get pair correlations of the fluctuations. Locally, via a Wigner transformation, this gives kinetic contributions, but if A and B refer to causally disjoint regions, say, you get nonlocal correlations, the beables needed to violate the assumptions of Bell's theorem.
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, since ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is considered two spacelike separated photons normally is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is called "collapse" normally. Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium, and under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter (a minimal toy sketch of this mechanism is given below).
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. less environmentally sensitive, more reductive and less property rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach Classical Physics.
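Here is the minimal toy sketch referred to in point 4 (my own illustration, not from the papers): a single qubit under a pure-dephasing Lindblad equation, where the coupling to the environment suppresses the off-diagonal elements of the reduced density matrix, leaving what looks like a classical probability distribution over "collapsed" pointer states.
```python
# Toy sketch: pure dephasing of a qubit under the Lindblad equation
#   d rho / dt = -i [H, rho] + gamma (L rho L^dag - (1/2){L^dag L, rho}),  L = sigma_z.
# The populations stay fixed while the coherence decays, so the subsystem ends up
# looking like a statistical mixture of the pointer states |0> and |1>.
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
H = 0.5 * sz          # arbitrary system Hamiltonian
L = sz                # environment couples to sigma_z
gamma = 0.2           # arbitrary decoherence rate

psi = np.array([np.cos(0.4), np.sin(0.4)], dtype=complex)
rho = np.outer(psi, psi.conj())     # initial pure state

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

dt, steps = 0.01, 3000              # simple Euler integration up to t = 30
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

print("populations: ", np.real(np.diag(rho)))   # unchanged by pure dephasing
print("|coherence|: ", abs(rho[0, 1]))          # ~ 0: the state has become a mixture
```
This is of course only the decoherence part of point 4; by itself it gives an apparent mixture rather than a single definite outcome.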
 
  • #85
*now* said:
How does the Ehrenfest-Tolman effect affect this?
Please give a reference for discussion.
 
  • #86
ftr said:
I am sorry, I did not get what you meant. I ask again: what is "charge density", and what gives rise to it? Moreover, the electron "cloud" surrounds the proton, so the electron "field" does not seem to be contiguous. Is it like a glass of water with the proton as an ice cube?
What is informally viewed as an electron cloud or drawn as orbitals are aspects of the electron field extending over some region around the nuclei. Similarly, the nuclei, often modeled as points or in more detail as fluids, are aspects of the nucleon field, or on a more detailed level of the quark field.
 
  • Like
Likes dextercioby
  • #87
DarMM said:
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
 
  • #88
ftr said:
So what is an electron in a hydrogen atom? Or electrons in a silver atom for that matter?
Well, even in classical relativistic physics "point particles are strangers", as Sommerfeld put it. The troubles with the point-particle concept became apparent already at the very beginning of Lorentz's "Theory of Electrons". I'm not sure whether it's the first source, but already in 1916 the troubles with divergent self-energy became apparent in the context of the attempt to find closed equations for the motion of charged point particles ("electrons") and the electromagnetic fields. The problem has been attacked by some of the greatest physicists, like Dirac and Schwinger, with no real success. Today, as far as we know, the best one can do is to approximate the famous Abraham-Lorentz-Dirac equation even further, boiling it down to the Landau-Lifshitz equation, as can be found in the famous textbook series (vol. 2, among the best textbooks on classical relativistic field theory ever written).

Even in the classical regime the most natural way to describe the mechanics of charged particles is a continuum description like hydrodynamics or relativistic kinetic theory (aka the Boltzmann equation). One very practical application is the construction of modern particle accelerators like the FAIR accelerator here in Darmstadt, Germany, where the high-intensity particle bunches need a description taking into account not only the interaction between the particles ("space-charge effects") but also radiation losses. There a hydro simulation (i.e., a continuum description of the particles) leads to the conclusion that, for the discrete-particle picture, the Landau-Lifshitz approximation to the Abraham-Lorentz-Dirac equation is adequate for describing the (accelerated) motion of charged particles, including the radiation-reaction forces.

The most fundamental theory we have today about "elementary particles" is the Standard Model of elementary-particle physics, which is based on relativistic, local (microcausal) quantum field theory (QFT). Here the trouble persists but is quite a lot milder. The early failed attempts to formulate a relativistic quantum mechanics clearly show that relativity needs a many-body description even if you start with only a few particles, as in the usual scattering experiments, where you consider reactions of two particles in the initial state. The reason is that at relativistic collision energies (i.e., where these energies come into the same order of magnitude as the masses (##\times c^2##, but I set ##c=\hbar=1##) of the lightest particles allowed to be created in the reaction, where "allowed" means not violating any of the empirically known conservation laws like energy, momentum, angular momentum, and several conserved charges) there's always some probability to create new particles and/or destroy the initial colliding particles.

In QFT the fundamental concepts are fields, as the name suggests. Field quantization was there from the very beginning of the development of modern quantum theory. Immediately after Heisenberg's ingenious insight during his hay-fever enforced stay on Helgoland in the summer of 1925, his vague ideas were amazingly quickly worked out by Born and Jordan, and also by Heisenberg himself, into a formalism today known as "matrix mechanics", and already in one of these very early papers (the famous "Dreimännerarbeit" of Born, Jordan, and Heisenberg) everything was "quantized", i.e., not only the particles (electrons) but also the electromagnetic field. Ironically, at the time many physicists thought that also quantizing the em. field was "too much of a revolution", and it was considered unnecessary for a short while. The reason is simple: it is not so easy to see the necessity for field quantization at the lower energies available in atomic physics at that time. Although it was well known that for some phenomena a "particle picture for radiation", as proposed in Einstein's famous paper of 1905 on what we nowadays call "light quanta", can more easily explain several phenomena (like the photoelectric effect and Compton scattering) than the classical-field picture, for almost everything in atomic physics a treatment sufficed in which only the electrons were quantized, the interaction was described by electrostatics, and the radiation by classical electromagnetic fields. What, however, was known at the time was the necessity for "spontaneous emission": even if there's no radiation field present which could lead to induced emission, there must be some probability for an excited atomic state (i.e., an energy eigenstate of the electrons around a nucleus) to emit a photon. This was the only phenomenon at the time which could not be described by the semiclassical theory, where only the electrons are quantized but not the electromagnetic field. Everything else, including the photoelectric effect and Compton scattering as well as the first applications to condensed-matter phenomena like the theory of dispersion of em. waves in matter, can be successfully described in the semiclassical approximation.

The idea of field quantization was rediscovered by Dirac in 1927 when he formulated the theory of emission and absorption of electromagnetic radiation in terms of annihilation and creation operators for photons, leading to the correct postdiction of spontaneous emission, which was needed to explain Planck's black-body radiation formula, which had started the entire quantum business in 1900. It was well known from Einstein's (also very famous) paper of 1917 on the quantum-kinetic derivation of the Planck spectrum within "old quantum mechanics" that spontaneous emission had to be postulated in addition to induced emission and absorption to get the correct Planck formula from kinetic considerations, but before Dirac there was no clear formalism for it.

Shortly thereafter, among others, Heisenberg and Pauli formulated quantum electrodynamics, and the use of perturbation theory led to quite some success as long as one used only the lowest-order approximations (what we nowadays call the tree-level approximations, in the pictorial notation in terms of Feynman diagrams). But going to higher orders was plagued by the old demon of divergences known from the classical theory of radiation reactions, i.e., the interaction of charged particles with their own radiation fields, leading to the same self-energy divergences already known from classical theory; the divergences were, however, less severe than in the classical theory, and the solution of the problem within perturbation theory was found in 1948, when Tomonaga, Schwinger, and Feynman developed their renormalization theory, also largely triggered by the fact that the "radiative corrections", i.e., the higher-order corrections leading to divergences in naive perturbation theory, became measurable (particularly Lamb's discovery of a little shift in the fine structure of the hydrogen-atom spectrum, now named after him the "Lamb shift"). The final step within perturbative QFT, proving crucial for the Standard Model, came in 1971, when 't Hooft and Veltman proved the perturbative renormalizability of Abelian as well as non-Abelian gauge theories to any order of perturbation theory.

The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted. One cannot make true sense of the particle picture except in the sense of asymptotically free states, i.e., only when the quantum fields can be seen as essentially non-interacting does a particle interpretation of quantum fields in terms of Fock states (eigenstates of the particle-number operators) become sensible.

Particularly for photons a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta", carefully titled as "a heuristic approach", is highly misleading. There's not even a formal way to define a position operator for massless quanta (as I prefer to say instead of "particles") in the narrow sense. All we can calculate is a probability for a photon to hit a detector at the place where this detector is located.
 
  • Like
Likes dextercioby, *now*, ftr and 1 other person
  • #89
AlexCaledin said:
- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
I wouldn't say so. The thermal reservoir, the environment, is responsible for the stochastic nature of subsystems when you don't track the environment. However, it doesn't guide them like the Bohmian potential; it's not an external object of a different class/type from the particles, it's just another system. Also it's not universal, i.e. the environment is just whatever external source of noise is relevant for the current system, e.g. air in the lab, thermal fluctuations of the atomic structure of the measuring device.
 
  • #90
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, since ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is considered two spacelike separated photons normally is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is called "collapse" normally. Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium, and under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter.
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. less environmentally sensitive, more reductive and less property rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach Classical Physics.
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.

It's indeed clear that in the above-considered description of the two-photon Bell experiments, the photons are not localizable in a classical sense; rather, the localization comes from the localization of the detectors' "click events", which are clear and well-defined macroscopic manifestations (plus the fundamental assumption of locality, microcausality, leading to the validity of the linked-cluster theorem for the QFT S-matrix).

Of course the q-expectation values also have to be somehow heuristically introduced to make sense to a physicist, and I still don't see how this heuristic can be given without recourse to the standard probabilistic interpretation of the "state" (i.e., the statistical operator of the orthodox minimal interpretation), but as an axiomatized final formalism it makes perfect sense.
 
  • #91
DarMM said:
I wouldn't say so. The thermal reservoir, the environment, is responsible for the stochastic nature of subsystems when you don't track the environment. However, it doesn't guide them like the Bohmian potential; it's not an external object of a different class/type from the particles, it's just another system. Also it's not universal, i.e. the environment is just whatever external source of noise is relevant for the current system, e.g. air in the lab, thermal fluctuations of the atomic structure of the measuring device.
In addition, the most important point in contradistinction to the Bohmian theory (which I think still convincingly works only in the non-relativistic approximation) is that in the thermal interpretation (if I finally understand it right as meant by @A. Neumaier), as summarized in #84, there's no need for a Bohmian non-local description; one can use the standard description in terms of local relativistic QFTs without the need to develop a pilot-wave theory (which would be needed for fields rather than particles, I'd guess).
 
  • Like
Likes DarMM
  • #92
vanhees71 said:
Particularly for photons a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta", carefully titled as "a heuristic approach", is highly misleading. There's not even a formal way to define a position operator for massless quanta (as I prefer to say instead of "particles") in the narrow sense. All we can calculate is a probability for a photon to hit a detector at the place where this detector is located.
It's interesting that in Haag's book "Local Quantum Physics" and Steinmann's "Perturbative Quantum Electrodynamics and Axiomatic Field Theory" the notion of a detector operator or probe is introduced to give formal meaning to the particle concept, with an n-particle state being a state that can activate at most n such probes.
 
  • Like
Likes Peter Morgan, dextercioby and vanhees71
  • #93
AlexCaledin said:
- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
vanhees71 said:
In addition, the most important point in contradistinction to the Bohmian theory (which I think still convincingly works only in the non-relativistic approximation) is that in the thermal interpretation as summarized in #84, there's no need for a Bohmian non-local description; one can use the standard description in terms of local relativistic QFTs without the need to develop a pilot-wave theory.
There are similarities and differences:

The thermal interpretation is deterministic, and the nonlocal multipoint q-expectations ignored in approximate calculations are hidden variables accounting for the stochastic effects observed in the coarse-grained descriptions of the preparation and detection processes.

But there are no additional particle coordinates as in Bohmian mechanics that would need to be guided; instead, the particle concept is declared to be an approximation only.
 
  • Like
Likes AlexCaledin
  • #94
vanhees71 said:
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.
Yes, assuming I'm right of course! :nb)

I would say the major difference is that the q-expectations ##\langle A\rangle## are seen as actual quantities, not averages of a quantity over an ensemble of results. So for instance ##\langle A(t)B(s) \rangle## isn't some kind of correlation between ##A(t)## and ##B(s)## but a genuinely new property. Also, these properties are fundamentally deterministic; there is no fundamental randomness, just lack of control of the environment.
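A toy computation to make that concrete (my own sketch, with an arbitrary qubit Hamiltonian and state): the two-time q-correlation ##\langle A(t)B(s)\rangle = \mathrm{Tr}\,\rho\,A(t)B(s)##, with Heisenberg-picture operators, is in general a complex number, so it clearly cannot be an ensemble average of products of measured real values; in the reading above it is simply one more property of the system.
```python
# Toy sketch: two-time q-correlation <A(t)B(s)> = Tr(rho A(t) B(s)) for a qubit,
# with Heisenberg-picture operators A(t) = exp(iHt) A exp(-iHt).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

H = 0.7 * sz                                              # arbitrary Hamiltonian
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)   # arbitrary mixed state (Tr = 1)

def heisenberg(A, t):
    U = expm(1j * H * t)
    return U @ A @ U.conj().T

t, s = 1.0, 0.3
corr = np.trace(rho @ heisenberg(sx, t) @ heisenberg(sx, s))
print(corr)   # complex in general, here approximately 0.56 + 0.17j
```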
 
  • #95
I guess @A. Neumaier will tell us. Why the heck hasn't he written this down in a 20-30 page physics paper rather than with so much text obviously addressed to philosophers? (It's not meant in as bad a way as it may sound ;-))).
 
  • #96
vanhees71 said:
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.
It is intended to be precisely the latter, without the partially misleading probabilistic underpinning in the foundations that gave rise to nearly a century of uneasiness and dispute.
Part III said:
The thermal interpretation is inspired by what physicists actually do rather than what they say. It is therefore the interpretation that people actually work with in the applications (as contrasted to work on the foundations themselves), rather than only paying lipservice to it.
DarMM said:
Yes, assuming I'm right of course!
vanhees71 said:
I guess @A. Neumaier will tell us. Why the heck hasn't he written this down in a 20-30 page physics paper
It is partially right, but a number of details need correction. I have little time today and tomorrow, will reply on Sunday afternoon.
 
  • Like
Likes dextercioby, vanhees71 and DarMM
  • #97
No rush! At least I'm right to first order; I await the nonperturbative corrections!
 
  • Like
Likes vanhees71
  • #98
vanhees71 said:
The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted.
Thank you for the long post. I am aware of what you wrote, but your summary is very good.
 
  • #99
vanhees71 said:
The meaning of your interpretation gets more and more enigmatic to me.

In the standard interpretation the possible values of observables are given by the spectral values of self-adjoint operators. To find these values you'd have to measure energy precisely. This is a fiction of course. It's even a fiction in classical physics, because real-world measurements are always uncertain, and that's why we need statistics from day one in the introductory physics lab to evaluate our experiments. Quantum theory has nothing to do with these uncertainties of real-world measurements.

At the same time, you say the very same thing about measurements within your thermal interpretation that I express within the standard interpretation. As long as the meaning of q-averages is not clarified, I cannot even understand the difference between the statements. That's the problem.
The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way.$$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
  • all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is exactly (according to most formulations) or approximately (according to your cautious formulation) measured with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (probability form), while
  • the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=Tr~\rho A## of ##A##, which is approximately measured.
  • In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
  • Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
  • But they make very different assumptions concerning the nature of what is to be regarded as idealized measurement result.
That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is about arguing why this is indeed consistent with the assumptions of the thermal interpretation.
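A minimal numerical sketch of the two recipes side by side (my own illustration, for a single spin-1/2 with an arbitrarily chosen state):
```python
# Toy sketch: the two recipes applied to A = sigma_z and a qubit density matrix rho.
#   traditional: possible values = eigenvalues of A, with Born probabilities
#   thermal interpretation: single value <A> = Tr(rho A), with uncertainty sigma_A
import numpy as np

A = np.diag([1.0, -1.0])                    # sigma_z
rho = np.array([[0.7, 0.3], [0.3, 0.3]])    # arbitrary density matrix (Tr = 1, positive)

# traditional recipe
eigvals, eigvecs = np.linalg.eigh(A)
born_p = [np.real(eigvecs[:, k].conj() @ rho @ eigvecs[:, k]) for k in range(len(eigvals))]
print("eigenvalues:", eigvals, " Born probabilities:", np.round(born_p, 4))

# thermal-interpretation recipe
expA = np.trace(rho @ A).real
sigma_A = np.sqrt(np.trace(rho @ A @ A).real - expA ** 2)
print("q-expectation <A> =", expA, " uncertainty sigma_A =", round(sigma_A, 4))
```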

Now it is experimentally undecidable what an ''idealized measurement result'' should be, since only actual results are measured, not idealized ones.

What to consider as idealized version is a matter of interpretation. What one chooses determines what one ends up with!

As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.

The thermal interpretation has two advantages:
  • It assumes less technical mathematics at the level of the postulates (no spectral theorem, no notion of eigenvalue, no probability theory).
  • It allows one to make definite statements about each single quantum system, no matter how large or small it is.
 
  • #100
So what is a wavefunction?
 
  • #101
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, since ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is considered two spacelike separated photons normally is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is called "collapse" normally. Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium, and under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter.
Thus we may characterize quantum mechanics as the physics of property-rich non-reductive highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. less environmentally sensitive, more reductive and less property rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach Classical Physics.
Point 1, is fine.

Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''. A subsystem is selected by picking a vector space of quantities (linear operators) relevant to the subsystem. Regarding a tensor product of two systems as two separate subsystems (as traditionally done) is therefore allowed only when all quantities that correlate the two systems are deemed irrelevant. Thinking in terms of the subsystems only hence produces the weird features visible in the traditional way of speaking.

Point 3 then follows.

Point 4 is valid only in a very vague sense, and I cannot repair it quickly; so please wait, or rethink it until I answer.
 
  • Like
Likes vanhees71 and DarMM
  • #102
ftr said:
So what is a wavefunction?
A vector in the image of the operator ##\rho##.
 
  • #103
Representing what?
 
  • #104
ftr said:
Representing what?
In general nothing. For a system in a pure state it represents the state, as in this case ##\rho## can be reconstructed as the multiple ##\psi\psi^*## with trace 1.
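Spelled out in the notation used above (just the standard relation, for a normalized ##\psi##):
$$\rho=\psi\psi^*,\qquad \mathrm{Tr}\,\rho=\psi^*\psi=1,\qquad \langle A\rangle=\mathrm{Tr}\,\rho A=\psi^*A\psi,$$
and any vector in the (one-dimensional) image of ##\rho## is a multiple of ##\psi##, so ##\rho## and ##\psi## carry the same information up to a phase and normalization.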
 
  • #105
A. Neumaier said:
Point 4 is valid only in a very vague sense, and I cannot repair it quickly; so please wait, or rethink it until I answer.
Thank you for the response. I'll try to rethink it, I got the B&P book mentioned in Paper III.
 
