The thermal interpretation of quantum physics

  • #421
atyy said:
I'm not sure historically whether that is more what Bohr thought
My reading of Bohr is that he thought further progress was blocked because of the need for classical concepts in the description of the experiment and the complementarity principle. Could be wrong though, I find him a bit hard to read.
 
  • #422
A bit? Bohr was almost enigmatic. I never understood the enthusiasm about his non-physical writings by his contemporary colleagues.
 
  • Like
Likes DarMM
  • #423
kith said:
You have the positions "I don't care" and "I'm open to the measurement problem being solved by a more fundamental theory". What about the position "I don't think that the measurement problem can be solved by a more fundamental theory."? I think that one characterizes Bohr's position better because he emphasized the indispensability of classical concepts. Your Ia sounds more like shut-up-and-calculate.

This comment and similar ones belong in @Demystifier's branch discussion, because it's not really about the thermal interpretation.
 
  • Like
Likes DarMM
  • #424
atyy said:
Good point. I'm not sure historically whether that is more what Bohr thought. Personally, I put it more as a necessary clause of the "I am open" class, since it may be how nature and mathematics really work. It doesn't seem rational to believe in it without further evidence [...]
"The measurement problem can't be solved" is not how Bohr would have phrased it because he didn't think of it as a problem but as an integral part of physics at the scale of actions of [itex]h[/itex] and smaller. An argument he gave for the necessity of the concept of measurement in the foundations was that at this scale, we cannot view the measurement as reading off a property of the system but need to view it as an interaction which produces the property in the first place (see his Como lecture). It just doesn't seem right to me to summarize this position as either "I don't care" or "I am open".

I would classify the positions something like this:
In orthodox QM, the concept of measurement is built into the foundations.
(I) I don't care about this. (shut-up-and-calculate)
(II) This is not a problem but an indispensable feature of fundamental physics at a certain scale and beyond. (Bohr, Heisenberg)
(III) This is a problem which needs to / may be solved. (Dirac, Bell, Weinberg, most interpretations)

The sensible terminology would be to call position (II) Copenhagen. A quite common usage, however, is to already call orthodox QM Copenhagen.

The actual work of most people in camp (III) consists of trying to derive orthodox QM from something which doesn't contain the concept of measurement. The interesting thing about the thermal interpretation is that Arnold questions whether orthodox QM accurately reflects the actual practice of QM and takes this as his starting point.
 
Last edited:
  • Like
Likes atyy, vanhees71 and DarMM
  • #425
stevendaryl said:
This comment and similar ones belong in @Demystifier's branch discussion, because it's not really about the thermal interpretation.
Good point. The first post in this direction seems to be this post. I have suggested to the mods that these posts be moved.
 
  • Like
Likes DarMM
  • #426
ftr said:
@A. Neumaier With regards to your interpretation, suppose you have an ideal particle (say an electron) in a box (1D ..etc), what can you measure and what is the relation between repeated measurements?
In principle one can approximately measure q-expectations of operators, to a (sometimes very) limited accuracy bounded below by their theoretical uncertainty. ##N##-fold repetition on independent systems and averaging improves the accuracy by a factor of ##1/\sqrt{N}##.

None of this is special to the system being a particle in a box.
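The ##1/\sqrt{N}## claim can be checked with a quick Monte Carlo sketch (all numerical values below are assumed for illustration, not taken from the thread):

```python
import numpy as np

# Hypothetical sketch: simulate N independent measurements whose results
# scatter around the q-expectation q_exp with intrinsic uncertainty sigma.
rng = np.random.default_rng(0)

q_exp = 1.0      # q-expectation of the measured quantity (assumed value)
sigma = 0.5      # intrinsic uncertainty sigma_A (assumed value)

def averaged_uncertainty(N, trials=20000):
    """Standard deviation of the sample mean over N independent systems."""
    samples = rng.normal(q_exp, sigma, size=(trials, N))
    return samples.mean(axis=1).std()

# The uncertainty of the average shrinks like 1/sqrt(N):
for N in (1, 4, 100):
    print(N, averaged_uncertainty(N), sigma / np.sqrt(N))
```

The printed empirical uncertainty tracks the ##\sigma/\sqrt{N}## prediction for each ##N##.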
 
Last edited:
  • Like
Likes dextercioby
  • #427
stevendaryl said:
The EPR experiment seems to violate this notion of locality. Alice measuring the z-component of spin of her particle in region 1 tells her what Bob will get if he measures the z-component of spin of his particle in region 2. But Bell's inequality implies that there are no facts about what happened in Region 5 that would allow the prediction of Bob's result.
Well, Alice's prediction of what Bob will get is conditioned on knowing that Bob and his device are properly functioning at the time Bob measures. Whether this is the case depends, however, on information in region 2, which in a deterministic model stems from information in region 4 or 5. Thus local relativistic logic only requires that what Bob will get depends on information in regions 4 or 5.

The thermal interpretation accounts for this through the discussion of conditional information in Subsection 4.5 of Part II.
 
  • #428
vanhees71 said:
The equation for the measurement device's macroscopic pointer readings alone does not follow the linear quantum time evolution, as is the case for any open system. A measurement device necessarily has some dissipation to lead to an irreversible storage of the measured result.
A. Neumaier said:
Ah, so you change the fundamental law of quantum mechanics and say that it applies never. For the only truly closed system we have access to is the whole universe, and you mentioned repeatedly that to apply quantum mechanics to it is nonsense.

So where does the dissipative description of the measurement device that you invoke come from, from a fundamental perspective?
vanhees71 said:
I don't change any fundamental law. The fundamental laws are, if interpreted such that a physicist can make sense of it, what you've given in your papers we are discussing.

One last time: The macroscopic observables are coarse-grained, i.e., averages over many microscopic degrees of freedom. Another name is "collective modes" or the like. You can systematically derive semiclassical transport, classical transport, viscous and ideal hydrodynamics (including dissipation), Fokker-Planck/Langevin equations (including dissipation and fluctuation and their relation) etc. etc. from these principles. All this many-body physics applies to measurement devices as to any other macroscopic system, and there's no new fundamental rule
vanhees71 said:
As a macroscopic system a measurement device cannot be described in all microscopic details that indeed follows unitary time evolution but it is described by statistical quantum theory mostly in terms of macroscopic, i.e., over many microscopic degrees of freedom averaged observables, leading to classical behavior of these macroscopic observables. E.g., there's no use to describe a galvanometer measuring a macroscopic electric current in all microscopic details using QED. However, you can use many-body quantum statistics to derive its macroscopic behavior.
According to the fundamental laws of textbook quantum mechanics, any quantum system is describable by a pure state changing according to the Schrödinger equation, though we almost never know which one (unless we restrict attention to one or two degrees of freedom). This includes all macroscopic systems. On this level of description, Stevendaryl's setting leads to a superposition of the state of the detector, and his criticism of the statistical interpretation applies.

Note that the traditional interpretation of density operators is as a classical mixture of pure states (proper mixtures), where the deviation from pureness is only due to our ignorance and not due to reasons of principle, so one is allowed to replace the analysis in terms of density operators by an in-principle analysis by pure states, as Stevendaryl wanted.

You seem to say that a macroscopic system is in principle not describable by a pure state changing according to the Schrödinger equation. Thus you restrict the validity of the reversible, conservative fundamental laws to systems with one or two degrees (or how many?) of freedom, and say that for larger systems one must only use irreversible, dissipative foundations appropriate to open quantum systems, derived from improper density operators (system intrinsic, not due to ignorance of a true, pure state) and the closed time path (CTP) formalism.

According to you, where do these improper density operators come from, if not from the following:
A. Neumaier said:
the Born-Markov approximation turns all open systems into systems no longer described by a wave function but by a density operator of rank ##>1##. In the Born-Markov approximation, this density operator is an improper mixture in the standard terminology, hence cannot be thought of as being ''in reality'' a classical mixture of pure states!
These improper mixtures usually arise from taking a partial trace over the unmodeled environment,
where system+detector+environment are assumed to be described by a pure state changing according to the Schrödinger equation. But this is even a bigger system, and you claim the latter description is disallowed for large systems.

Thus, in your setting, there is no starting point for justifying the use of the standard quantum mechanics for open quantum systems. (Note that we are discussing foundations, not the practice of quantum mechanics, where one usually glosses over foundational issues.)

The thermal interpretation, on the other hand, has no size restrictions on the applicability of its foundation. It is used for quantum systems of any size in precisely the same way.
 
  • Like
Likes dextercioby, Mentz114 and mattt
  • #429
A. Neumaier said:
In principle one can approximately measure q-expectations of operators, to a (sometimes very) limited accuracy bounded below by their theoretical uncertainty. ##N##-fold repetition on independent systems and averaging improves the accuracy by a factor of ##1/\sqrt{N}##.

None of this is special to the system being a particle in a box.
Again, I'm puzzled. The "q-expectations of operators" are not what's measured according to the experimentally well-established standard interpretation of quantum mechanics. This is evident from the fact that they depend on the state of the quantum system.

The state of the quantum system, however, operationally does not refer to the measurement of an observable but to an equivalence class of preparation procedures of the quantum system to be measured. The state is described by the statistical operator ##\hat{\rho}##, a self-adjoint positive semidefinite operator with trace 1, and the expectation value of an observable ##A##, described by a self-adjoint operator ##\hat{A}##, is given by what you call the "q-expectation value",
$$\langle A \rangle_{\hat{\rho}}=\mathrm{Tr}(\hat{\rho} \hat{A}).$$
What's measured (in each single measurement) depends on the measurement apparatus used and is independent of the quantum state of the measured system, i.e., of the preparation procedure done on the system before it is measured.
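For concreteness, the trace formula above can be evaluated numerically for a spin-1/2 system (a minimal sketch; the states and the observable are assumed example values, not from the post):

```python
import numpy as np

# Expectation value <A>_rho = Tr(rho A) for a two-level (spin-1/2) system.

# Statistical operator: the maximally mixed state rho = 1/2 * identity.
rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])

# Observable: S_z in units of hbar, i.e. diag(+1/2, -1/2).
Sz = np.array([[0.5, 0.0],
               [0.0, -0.5]])

print(np.trace(rho @ Sz).real)       # 0.0 for the maximally mixed state

# For a pure state rho = |up><up| the same formula gives +1/2.
up = np.array([1.0, 0.0])
rho_pure = np.outer(up, up.conj())
print(np.trace(rho_pure @ Sz).real)  # 0.5
```

The same one-line formula covers pure and mixed states alike, which is the point of using the statistical operator.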

It is a common misconception to think that what's measured are the expectation values of the formalism, and this is a misconception at least as old as the discovery of the uncertainty relation by Heisenberg. Heisenberg found the uncertainty relation by thinking about his (in)famous "Heisenberg microscope" gedanken experiment about position measurements on electrons. He also thought that the uncertainty relation refers to the accuracy of measurements and disturbance of the measured system by the corresponding observation due to the interaction of the measured system with the measurement apparatus. He was immediately corrected by Bohr about this misconception, but as usual misconceptions are remembered better than the corrections (Murphy's Law of physics didactics).

Taking the position-momentum uncertainty relation the Heisenberg paper refers to, ##\Delta x \Delta p_x \geq \hbar/2##, as an example, the misconception is very clear: You can prepare an electron with quite some accuracy of its momentum. For simplicity, assume that it's then prepared in some pure quantum state, represented by a normalized vector ##|\psi \rangle##. Then the momentum representation gives the probability distribution for momentum
$$P_{\psi}(p)=|\tilde{\psi}(p)|^2 \quad \text{with} \quad \tilde{\psi}(p)=\langle p|\psi \rangle.$$
This is, by assumption, a quite narrowly peaked distribution around some value ##p_0## of momentum. Since the position probability distribution is given by
$$P_{\psi}(x)=|\psi(x)|^2 \quad \text{with} \quad \psi(x)=\langle x|\psi \rangle = \int_{\mathbb{R}} \mathrm{d} p \frac{1}{\sqrt{2 \pi \hbar}} \exp(\mathrm{i} p x) \tilde{\psi}(p),$$
the position probability is a rather broad distribution as a result of being the Fourier transform of a rather narrow distribution.

Now this doesn't hinder the physicist from measuring the position as accurately as he likes. He can choose to resolve the position of the particle much better than given by the probability distribution according to the preparation in the state ##\hat{\rho}_{\psi} =|\psi \rangle \langle \psi|##. Only if he measures with such a rather accurate device will he be able to resolve the true standard deviation ##\Delta x## due to the preparation in the quantum state ##\hat{\rho}_{\psi}##. In the other case, if his position measurement is less accurate, he'll get a much wider distribution than predicted by the preparation in this quantum state.
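The Fourier-transform reciprocity described above can be verified numerically; the following is a sketch with ##\hbar = 1## and an assumed Gaussian wave packet (for which ##\Delta x\,\Delta p_x = \hbar/2## holds with equality):

```python
import numpy as np

# A broad position distribution <=> a narrow momentum distribution.
hbar = 1.0
N, L = 8192, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sx = 5.0  # assumed position width; momentum width comes out as hbar/(2*sx)
psi = np.exp(-x**2 / (4 * sx**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# sigma_x from the position probability density |psi(x)|^2
Px = np.abs(psi)**2
sigma_x = np.sqrt(np.sum(x**2 * Px) * dx)

# psi~(p) via FFT; with this normalization Parseval holds on the k-grid
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / (N * dx)
psi_p = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
sigma_p = hbar * np.sqrt(np.sum(k**2 * np.abs(psi_p)**2) * dk)

print(sigma_x, sigma_p, sigma_x * sigma_p)  # product ~ hbar/2 = 0.5
```

Making `sx` larger narrows the momentum peak and broadens the position distribution, exactly as the post describes; the product stays at ##\hbar/2## for a Gaussian.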
 
  • #430
A. Neumaier said:
In principle one can approximately measure q-expectations of operators, to a (sometimes very) limited accuracy bounded below by their theoretical uncertainty. ##N##-fold repetition on independent systems and averaging improves the accuracy by a factor of ##1/\sqrt{N}##.

None of this is special to the system being a particle in a box.
Thanks. Just to understand: if we consider a particle in a box, I presume that if we compute the "q-expectation" for the position, it will be in the middle. So does your interpretation say that there is a particle in the middle, or that the particle is most likely to be found in the middle, or that the particle is spread out in the box and the q-expectation is indicative, or what else?
 
  • #431
A. Neumaier said:
According to the fundamental laws of textbook quantum mechanics, any quantum system is describable by a pure state changing according to the Schrödinger equation, though we almost never know which one (unless we restrict attention to one or two degrees of freedom). This includes all macroscopic systems. On this level of description, Stevendaryl's setting leads to a superposition of the state of the detector, and his criticism of the statistical interpretation applies.
This is not what I said, and I wonder which textbook you are quoting. A quantum system is not necessarily in a pure state. It may well be in a mixed state. E.g., the polarization of each of the single photons in a pure polarization-entangled biphoton state frequently used in Bell-test experiments (because it's easily available through parametric downconversion) is described by the statistical operator ##\hat{\rho}=1/2 \mathbb{1}##, i.e., it's not in a pure state.

A macroscopic system may well be in a pure state (or very close to one). Examples are Bose-Einstein condensates created in labs. The very fact that this has been achieved only as late as 1995 shows how difficult this is in practice. The "normal state" of a macroscopic body around us in everyday life is rather something close to thermal equilibrium, i.e., very far from a pure state.

Of course, it is true that the dynamics of a closed macroscopic system is also in principle described by unitary time evolution. It's however impossible to really calculate this, since then you'd need a complete microscopic description of the pure or mixed state of the system. This is neither possible nor useful. That's why we describe very coarse-grained macroscopic observables, which are averages over very many microscopic degrees of freedom. This also implies that we describe only a part of the system, and thus the macroscopic observables usually behave pretty much classically and also show dissipation. This almost always happens if you look at the effective dynamics of open systems and coarse-grained macroscopic observables.

Note that the traditional interpretation of density operators is as a classical mixture of pure states (proper mixtures), where the deviation from pureness is only due to our ignorance and not due to reasons of principle, so one is allowed to replace the analysis in terms of density operators by an in-principle analysis by pure states, as Stevendaryl wanted.

You seem to say that a macroscopic system is in principle not describable by a pure state changing according to the Schrödinger equation. Thus you restrict the validity of the reversible, conservative fundamental laws to systems with one or two degrees (or how many?) of freedom, and say that for larger systems one must only use irreversible, dissipative foundations appropriate to open quantum systems, derived from improper density operators (system intrinsic, not due to ignorance of a true, pure state) and the closed time path (CTP) formalism.
That's not what I wanted to say. What I said is that a measurement device is usually a macroscopic device which cannot be described in every microscopic detail, and to be a measurement device it needs to use macroscopic observables of the above-mentioned type, i.e., you need dissipation to get an irreversible measurement result. This doesn't imply that it is not possible in principle to prepare macroscopic systems in a pure quantum state. It's only rather difficult. Only the progress in ultra-low-temperature physics (starting from Kamerlingh Onnes in the 1910s) has enabled physicists to achieve this (liquid He, BECs etc.).
According to you, where do these improper density operators come from, if not from the following:

These improper mixtures usually arise from taking a partial trace over the unmodeled environment,
where system+detector+environment are assumed to be described by a pure state changing according to the Schrödinger equation. But this is even a bigger system, and you claim the latter description is disallowed for large systems.
On the contrary, that's precisely what I always said! I claim that's the solution of the so-called measurement problem: the description of open systems, applied to measurement devices and, more generally, to the problem of explaining the classical behavior of macroscopic systems by quantum many-body theory.
Thus, in your setting, there is no starting point for justifying the use of the standard quantum mechanics for open quantum systems. (Note that we are discussing foundations, not the practice of quantum mechanics, where one usually glosses over foundational issues.)

The thermal interpretation, on the other hand, has no size restrictions on the applicability of its foundation. It is used for quantum systems of any size in precisely the same way.
If only you would explain to me in clear words what the INTERPRETATION of what you call the "statistical interpretation" is! To claim that the observables of the formalism are what you call "q-expectation" values is highly misleading and an early misconception, which was explained as such equally early by the founding fathers of QT (see my posting #457).
 
  • #432
ftr said:
Thanks. Just to understand: if we consider a particle in a box, I presume that if we compute the "q-expectation" for the position, it will be in the middle. So does your interpretation say that there is a particle in the middle, or that the particle is most likely to be found in the middle, or that the particle is spread out in the box and the q-expectation is indicative, or what else?
It depends on the state. If the state is symmetric w.r.t. the center of the box, the q-expectation will be the center (otherwise generally not). But your measurement will be inaccurate and usually not precisely the center. Each measurement will have an uncertainty of at least ##\sigma_q##, a value also depending on the state and never zero. In many cases, it will be of the order of the size of the box, so that one can say the particle is in the box but nothing significantly better.
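As a concrete instance of this answer, one can compute ##\langle x\rangle## and ##\sigma_q## for the ground state of an infinite square well (an assumed idealization: box ##[0,a]## with ##a=1##, not a specific example from the thread):

```python
import numpy as np

# Ground state of the infinite square well on [0, a]:
# psi_1(x) = sqrt(2/a) sin(pi x / a).
a = 1.0
x = np.linspace(0.0, a, 100001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / a) * np.sin(np.pi * x / a)

P = psi**2
mean_x = np.sum(x * P) * dx                         # q-expectation <x>
sigma_q = np.sqrt(np.sum((x - mean_x)**2 * P) * dx)  # its q-uncertainty

print(mean_x)    # 0.5: the center of the box (symmetric state)
print(sigma_q)   # ~0.18 a: a sizable fraction of the box size
```

The uncertainty comes out near ##0.18\,a##, which illustrates the remark that ##\sigma_q## can be of the order of the size of the box.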
 
  • Like
Likes dextercioby
  • #433
A. Neumaier said:
It depends on the state. If the state is symmetric w.r.t. the center of the box, the q-expectation will be the center (otherwise generally not). But your measurement will be inaccurate and usually not precisely the center. Each measurement will have an uncertainty of at least ##\sigma_q##, a value also depending on the state and never zero. In many cases, it will be of the order of the size of the box, so that one can say the particle is in the box but nothing significantly better.
Thanks, got that. Now suppose you measure the energy, and assume the particle is in a superposition. What exactly is measured that corresponds to the q-expectation: the mean of the energy levels, or one energy level?
 
  • #434
vanhees71 said:
I wonder, which textbook you are quoting. A quantum system's state is not necessarily in a pure state. It may well be in a mixed state.

Which textbook are you referring to for your interpretation? I'd like to see a single complete set of postulates of what - in discussions about foundations - you actually assume (about small and large, closed and open systems, and how big these systems are allowed to be), so that I can phrase the inconsistency of your assumptions (if there remains one) in terms of this set of postulates. As it is now, it is difficult to argue with you as you don't have a clear and fixed stance.

You seem to contradict yourself in what you allow in different places; hence it is difficult to argue with you. I was repeatedly quoting your German lecture notes, where you state in postulate 1 (p.14) that states of a closed system are described by rays in a Hilbert space, hence by pure states. The CTP approach assumes that the full system (described by quantum fields before coarse-graining) is closed (since the dynamics is still unitary), but nevertheless it uses a density operator for the description, which clashes with your postulate.

You never gave a formal definition of postulates for an open system, so I was inquiring about the roots of your assumptions about open systems. Since it now seems that you work fundamentally with density operators, I want to find out what the mixed equivalent of an eigenstate for pure states is in your interpretation. Suppose ##\rho## is the mixed state of a system and you measure, say, the spin ##S_z## of a single silver atom at its surface. What are the precise conditions on ##\rho## under which you always get the result ##1/2##? (In a tensor product of spin space and everything else.)
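One sufficient condition can be illustrated numerically (this is an illustrative construction, not an answer given in the thread): if ##\rho## factorizes with the spin factor in the ##S_z = 1/2## eigenstate, the q-expectation of ##S_z\otimes 1## is exactly ##1/2## with vanishing q-uncertainty.

```python
import numpy as np

Sz = np.diag([0.5, -0.5])           # spin-z, hbar = 1
up = np.array([1.0, 0.0])
rho_spin = np.outer(up, up)         # pure "up" spin state

# Some generic mixed state for "everything else" (2 extra levels, assumed).
rho_rest = np.diag([0.7, 0.3])

rho = np.kron(rho_spin, rho_rest)   # full density operator
A = np.kron(Sz, np.eye(2))          # S_z acting only on the spin factor

mean = np.trace(rho @ A).real
var = np.trace(rho @ A @ A).real - mean**2
print(mean, var)                    # sharp value 1/2, zero variance
```

Whether this sufficient condition is also necessary (e.g. up to mixtures over the "everything else" factor) is exactly the kind of question the post is asking.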
 
  • Like
Likes dextercioby
  • #435
ftr said:
Thanks, got that. Now suppose you measure the energy, and assume the particle is in a superposition. What exactly is measured that corresponds to the q-expectation: the mean of the energy levels, or one energy level?
Exact measurements are usually meaningless in the thermal interpretation. Almost all measurements have a positive uncertainty ##\sigma_A##, and measurements more precise than this are meaningless, just as it is meaningless to ask for the position of a car to mm accuracy.
 
  • Like
Likes dextercioby
  • #436
A. Neumaier said:
Exact measurements are usually meaningless in the thermal interpretation. Almost all measurements have a positive uncertainty ##\sigma_A##, and measurements more precise than this are meaningless, just as it is meaningless to ask for the position of a car to mm accuracy.
If we drop 'exactly' from @ftr's question - does it have an answer ?
 
  • #437
A. Neumaier said:
Exact measurements are usually meaningless in the thermal interpretation. Almost all measurements have a positive uncertainty ##\sigma_A##, and measurements more precise than this are meaningless, just as it is meaningless to ask for the position of a car to mm accuracy.

Sorry for the misunderstanding you can ignore the word exactly.
 
  • #438
On a personal note, what I like about the thermal interpretation is the presence of clear open problems. Even if you don't "believe" it, trying to see whether discrete outcomes of experiments can be derived via open quantum systems and the structure of the device's set of slow modes is a very interesting problem.
 
  • Like
Likes dextercioby and A. Neumaier
  • #439
ftr said:
Thanks, got that. Now suppose you measure the energy, and assume the particle is in a superposition. What exactly is measured that corresponds to the q-expectation: the mean of the energy levels, or one energy level?
ftr said:
Sorry for the misunderstanding you can ignore the word exactly.
Suppose the energy levels are ##E_1## and ##E_2##, and the system is in state ##a_1|E_1\rangle+a_2|E_2\rangle##, where ##|a_1|^2=p,|a_2|^2=1-p##. Then the approximately measured q-expectation can be exactly calculated to be ##\bar E=pE_1+(1-p)E_2##, and its uncertainty can be exactly calculated to be ##\sigma_E=\sqrt{p(1-p)}|E_1-E_2|##. Thus whatever ##E## is measured (which depends on the method used) satisfies ##|E-\bar E|## of order at least ##\sigma_E##.
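The numbers in this post can be reproduced directly; the particular values of ##E_1##, ##E_2##, ##p## below are assumed for illustration:

```python
import numpy as np

# Two-level system: energies E1, E2 with weights p, 1-p.
E1, E2 = 1.0, 3.0
p = 0.25                                  # |a1|^2

Ebar = p * E1 + (1 - p) * E2              # q-expectation of the energy
sigma_E = np.sqrt(p * (1 - p)) * abs(E1 - E2)

# Same numbers from the operator formalism: H = diag(E1, E2),
# state vector (a1, a2) with the given squared moduli (phases irrelevant here).
a = np.array([np.sqrt(p), np.sqrt(1 - p)])
H = np.diag([E1, E2])
mean = a @ H @ a
std = np.sqrt(a @ (H @ H) @ a - mean**2)

print(Ebar, sigma_E)   # 2.5  0.866...
print(mean, std)       # identical values from <H> and <H^2>
```

Any single measured ##E## is then claimed to deviate from ##\bar E## by an amount of order at least ##\sigma_E##.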
 
  • Like
Likes dextercioby
  • #440
A. Neumaier said:
Suppose the energy levels are ##E_1## and ##E_2##, and the system is in state ##a_1|E_1\rangle+a_2|E_2\rangle##, where ##|a_1|^2=p,|a_2|^2=1-p##. Then the approximately measured q-expectation can be exactly calculated to be ##\bar E=pE_1+(1-p)E_2##, and its uncertainty can be exactly calculated to be ##\sigma_E=\sqrt{p(1-p)}|E_1-E_2|##. Thus whatever ##E## is measured (which depends on the method used) satisfies ##|E-\bar E|## of order at least ##\sigma_E##.

That is a quite radical departure from the conventional interpretation, isn't it?
 
  • #441
ftr said:
That is a quite radical departure from the conventional interpretation, isn't it?
Yes; it is completely different.

The thermal interpretation rejects Born's rule as basic, universally valid connection between states and measurements, and replaces it by
The measurement principle said:
A macroscopic quantum device qualifies as an instrument for approximately, with uncertainty ##\Delta a##, measuring a Hermitian quantity ##A## of a system with density operator ##\rho##, if it satisfies the following two conditions:

(i) (uncertainty) All measured results ##a## deviate from
$$\bar A:=\langle A\rangle=Tr~ \rho A$$
by approximately ##\Delta a##. The measurement uncertainty is bounded below by
$$\Delta a\ge \sigma_A:=\sqrt{\langle A^2\rangle-\langle A\rangle^2}.$$

(ii) (reproducibility) If the measurement can be repeated sufficiently often on systems with the same or a sufficiently similar state, then the sample mean of ##(a-\bar A)^2## approaches ##\Delta a^2##.
This is enough to explain everything physicists do theoretically and experimentally, including explaining the Born rule for binary measurements (with measurement results zero and one only, approximating a q-expectation in ##[0,1]##).
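Condition (ii) for the special case of a binary measurement (outcomes 0 and 1 only) can be illustrated by a small simulation; the outcome statistics and the value of the q-expectation are assumed:

```python
import numpy as np

# Binary quantity with outcomes 0/1 and q-expectation q in [0, 1].
rng = np.random.default_rng(1)

q = 0.3                                # q-expectation <A> (assumed)
sigma_A = np.sqrt(q * (1 - q))         # intrinsic uncertainty sigma_A

a = rng.binomial(1, q, size=200000)    # repeated binary measurement results
sample = np.mean((a - q) ** 2)         # sample mean of (a - Abar)^2

print(sigma_A**2, sample)  # both ~ 0.21: Delta a^2 approaches sigma_A^2
```

Here the measurement uncertainty saturates the lower bound ##\Delta a = \sigma_A##, which is the binary case where the Born-rule statistics are recovered.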
 
  • Like
Likes Auto-Didact and dextercioby
  • #442
Nice summary. The ##p## and ##1-p##: do you call them probabilities or something else?
 
  • #443
ftr said:
Nice summary. The ##p## and ##1-p##: do you call them probabilities or something else?
I call them q-probabilities in general (i.e., independent of an interpretation in terms of frequencies or beliefs). They are approximated by relative frequencies when measured.

It is not needed to invoke anywhere beliefs or knowledge - except in defining a system, its preparation, and how it is measured.
 
  • #444
See section 3.4 of this paper for some context on this question: https://arxiv.org/abs/1409.1570

I was just thinking about this a bit more and have a question about quantum computation. In the thermal interpretation, in the idealised case of a pure state, the entire pure state is ontic. This means that the number of degrees of freedom of a quantum system of ##N## particles scales exponentially with ##N##. This would suggest you could find a polynomial-time solution to the Travelling Salesman Problem by having each component of the state attempt a different path.

However we know quantum computers are not that efficient. It seems to me that in the thermal interpretation one could imagine that the solution to the Travelling Salesman Problem is "computed" via the evolution of some q-observable ##\langle A \rangle##, but the issue of metastability of the measuring device washes out the solution.

Would this be the right way to think about it? Nature can solve NP-complete problems in polynomial time, but you can't extract the answer reliably.
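The exponential scaling invoked above is just the ##2^N## amplitudes of an ##N##-qubit pure state; a back-of-envelope sketch (values purely illustrative):

```python
# A pure state of N qubits is a vector of 2**N complex amplitudes, so
# treating the whole pure state as ontic means an exponential number of
# degrees of freedom. At 16 bytes per complex amplitude:
for N in (10, 20, 30, 40):
    amplitudes = 2**N
    print(N, amplitudes, 16 * amplitudes / 1e9, "GB")
```

Already at ##N=40## the state vector needs terabytes of classical storage, which is what makes the "nature computes but you can't read the answer out" picture tempting.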
 
  • #445
A. Neumaier, you keep mentioning beams; is it possible to have a beam of a single electron?
 
  • #446
@ftr, I have edited your post #445 to remove the bold large font. That is the equivalent of shouting here. There is no need to shout.
 
  • #447
PeterDonis said:
@ftr, I have edited your post #445 to remove the bold large font. That is the equivalent of shouting here. There is no need to shout.
I did not mean to, just my mobile difficulty. Thanks.
 
  • #448
DarMM said:
My reading of Bohr is that he thought further progress was blocked because of the need for classical concepts in the description of the experiment and the complementarity principle. Could be wrong though, I find him a bit hard to read.

vanhees71 said:
A bit? Bohr was almost enigmatic. I never understood the enthusiasm about his non-physical writings by his contemporary colleagues.

kith said:
"The measurement problem can't be solved" is not how Bohr would have phrased it because he didn't think of it as a problem but as an integral part of physics at the scale of actions of [itex]h[/itex] and smaller. An argument he gave for the necessity of the concept of measurement in the foundations was that at this scale, we cannot view the measurement as reading off a property of the system but need to view it as an interaction which produces the property in the first place (see his Como lecture). It just doesn't seem right to me to summarize this position as either "I don't care" or "I am open".

I would classify the positions something like this:
In orthodox QM, the concept of measurement is built into the foundations.
(I) I don't care about this. (shut-up-and-calculate)
(II) This is not a problem but an indispensable feature of fundamental physics at a certain scale and beyond. (Bohr, Heisenberg)
(III) This is a problem which needs to / may be solved. (Dirac, Bell, Weinberg, most interpretations)

The sensible terminology would be to call position (II) Copenhagen. A quite common usage, however, is to already call orthodox QM Copenhagen.

The actual work of most people in camp (III) consists of trying to derive orthodox QM from something which doesn't contain the concept of measurement. The interesting thing about the thermal interpretation is that Arnold questions whether orthodox QM accurately reflects the actual practice of QM and takes this as his starting point.

Yeah, that is interesting, although like @vanhees71 and @DarMM, I confess most of Bohr is very mystifying. So I only took the tiny little bit of Bohr that resonates with me.

While we are talking about interpretation of interpretation, am I allowed to quote Bohr's "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we say about Nature." as showing that it is maybe ok to classify him under "I don't care"?
 
  • Like
Likes kith
  • #449
A. Neumaier said:
Which textbook are you referring to for your interpretation? I'd like to see a single complete set of postulates of what - in discussions about foundations - you actually assume (about small and large, closed and open systems, and how big these systems are allowed to be), so that I can phrase the inconsistency of your assumptions (if there remains one) in terms of this set of postulates. As it is now, it is difficult to argue with you as you don't have a clear and fixed stance.

You seem to contradict yourself in what you allow in different places; hence it is difficult to argue with you. I was repeatedly quoting your German lecture notes, where you state in postulate 1 (p.14) that states of a closed system are described by rays in a Hilbert space, hence by pure states. The CTP approach assumes that the full system (described by quantum fields before coarse-graining) is closed (since the dynamics is still unitary), but nevertheless it uses a density operator for the description, which clashes with your postulate.

You never gave a formal definition of postulates for an open system, so I was inquiring about the roots of your assumptions about open systems. Since it now seems that you work fundamentally with density operators, I want to find out what the mixed equivalent of an eigenstate for pure states is in your interpretation. Let ##\rho## be the mixed state of a system and you would measure, say, the spin ##S_z## of a single silver atom at its surface. What are the precise conditions on ##\rho## under which you get always the result ##1/2##? (In a tensor product of spin space and everything else.)
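To make the question at the end concrete, here is a minimal numerical sketch (my own illustration, not from either poster's formalism): under the trace form of the Born rule, a mixed state ##\rho## yields the result ##+1/2## with certainty iff ##\mathrm{Tr}(\rho\,\Pi_+)=1##, where ##\Pi_+## projects onto the spin-up subspace; for a valid density matrix this is equivalent to ##\Pi_+\rho\,\Pi_+=\rho##.

```python
import numpy as np

# Projector onto the S_z = +1/2 subspace of a single spin-1/2
pi_up = np.array([[1, 0], [0, 0]], dtype=complex)

def always_spin_up(rho, tol=1e-12):
    """Born-rule criterion: the outcome +1/2 is certain iff Tr(rho Pi_up) = 1.

    (Hypothetical helper for illustration only.)
    """
    return abs(np.trace(rho @ pi_up).real - 1.0) < tol

rho_up = np.array([[1, 0], [0, 0]], dtype=complex)   # pure spin-up state
rho_mixed = 0.5 * np.eye(2, dtype=complex)           # maximally mixed state

print(always_spin_up(rho_up))     # True: outcome +1/2 is certain
print(always_spin_up(rho_mixed))  # False: outcome is 50/50
```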
Any complete QT textbook I know discusses mixed states later on. It's clear that you start with pure states to establish the formalism. QT textbooks are written for physicists, who need an interpretation since they are doing physics, not pure mathematics. It's obviously very difficult, if not impossible, for mathematicians and physicists to communicate about what an interpretation is.

I don't know what the closed-time-path approach has to do with our discussion, but it's a very good alternative mathematical formulation of general quantum dynamics for discussing what many still seem to consider a "measurement problem" in QT, since it's the ideal starting point for discussing macroscopic systems and their effective description in terms of macroscopic observables.

It starts from the description of closed systems, which is governed by unitary time evolution. That's independent of the state, i.e. it's valid for both pure and mixed states. Both cases can be described with statistical operators (pure states are special cases when the statistical operator is a projection operator).
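The parenthetical claim can be checked directly: a pure state corresponds to the statistical operator ##\rho=|\psi\rangle\langle\psi|##, which is a rank-1 projector with ##\mathrm{Tr}\,\rho^2=1##, and unitary evolution preserves this purity. A small sketch (hypothetical two-level example, my own illustration):

```python
import numpy as np

# A pure state as a special statistical operator: rho = |psi><psi| is a
# rank-1 projector, so rho @ rho == rho and Tr(rho^2) == 1.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

print(np.allclose(rho @ rho, rho))                # True: rho is a projector
print(np.isclose(np.trace(rho @ rho).real, 1.0))  # True: purity Tr(rho^2) = 1

# Unitary time evolution rho -> U rho U^dagger preserves purity
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
rho_t = U @ rho @ U.conj().T
print(np.isclose(np.trace(rho_t @ rho_t).real, 1.0))  # True: still pure
```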

Now consider a many-body system: a measurement apparatus is a quantum system with a macroscopic (!) observable providing a "pointer reading" that measures an observable of the system to be measured. The macroscopic pointer observable is described by coarse-graining, i.e., averaging over many microscopic degrees of freedom, e.g., via a gradient expansion of the Kadanoff-Baym equations. These still describe the full microscopic dynamics, i.e., an infinite tower of correlation functions, which however cannot be solved for practically relevant systems (there are numerical studies of simple ##\phi^4## theory from the late 1990s and early 2000s). That's why one coarse-grains the KB equations to transport equations, i.e., effective descriptions of one-body distribution functions (of particles or quasiparticles). In this first step you get non-Markovian dynamics, including dissipation. In a next step you boil it down to non-Markovian Boltzmann-like equations for "off-shell transport". If you are lucky, the quasi-particle description is also sufficient, and then you arrive at the well-established classical Boltzmann-Uehling-Uhlenbeck equation, describing collective macroscopic modes in terms of a gas of classical (quasi-)particles, including the quantum corrections from Bose or Fermi statistics for the final states in the scattering processes described by the collision term ("Bose enhancement" or "Pauli blocking", respectively).

This is a description at the level of semi-classical phase-space distribution functions. If you are sufficiently close to thermal equilibrium on the macroscopic time scales (i.e., if the thermalization time scales are small compared to the typical macroscopic time scales) you can simplify the description further to viscous or even ideal hydrodynamics.

It is clear that the applicability of this sequence of coarser and coarser descriptions, and how far you can go down this "ladder of descriptions" (interpreted as approximations to the solution of the quantum many-body dynamics), depends on the system you want to describe. That's why you cannot put it into a mathematical axiomatic description, and, most importantly, the real justification of all these approximations is ultimately their experimental validation in the lab. Physics, after all, is an empirical science!
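At the bottom of this ladder, near equilibrium, the dynamics collapses to a few macroscopic time scales. A toy relaxation-time (BGK-type) caricature illustrates this (this is far cruder than Kadanoff-Baym or BUU, and the numbers are hypothetical): a distribution function obeying ##\dot f = -(f - f_{\rm eq})/\tau## relaxes to equilibrium on the scale ##\tau##.

```python
# Toy relaxation-time (BGK-type) caricature of coarse-grained transport:
# a single occupancy relaxing to equilibrium, df/dt = -(f - f_eq)/tau.
tau = 2.0    # relaxation time (hypothetical units)
f_eq = 1.0   # equilibrium occupancy
f = 5.0      # initial non-equilibrium value
dt = 0.01    # Euler time step

# Evolve for t = 20 >> tau, so the memory of the initial state is lost
for _ in range(int(20.0 / dt)):
    f += dt * (-(f - f_eq) / tau)

print(abs(f - f_eq) < 1e-3)  # True: relaxed to thermal equilibrium
```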
 
  • #450
A. Neumaier said:
Yes; it is completely different.

The thermal interpretation rejects Born's rule as basic, universally valid connection between states and measurements, and replaces it by

This is enough to explain everything physicists do theoretically and experimentally, including explaining the Born rule for binary measurements (with measurement results zero and one only, approximating a q-expectation in ##[0,1]##).
You cannot replace something with itself. The trace formula is nothing but the Born rule, generalized to arbitrary mixed states!

It is by far not sufficient for understanding how the formalism relates to real-world observations. As I've stressed frequently in this discussion, the idea that expectation values are what describes observables is an old misconception of some of the founding fathers of QT (most prominently Heisenberg). It was, however, corrected almost immediately by the very same founding fathers (most prominently Bohr).
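For a binary measurement the two readings being contrasted here coincide numerically: the q-expectation of a projector ##P## lies in ##[0,1]##, and via the trace formula it equals the Born probability of the outcome 1, which a sample mean of 0/1 results approximates. A small sketch (hypothetical mixed state, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A binary observable is a projector P (eigenvalues 0 and 1). Its
# q-expectation Tr(rho P) lies in [0,1] and, by the trace form of the
# Born rule, equals the probability of the outcome 1.
P = np.array([[1, 0], [0, 0]], dtype=complex)
rho = np.array([[0.3, 0], [0, 0.7]], dtype=complex)  # hypothetical mixed state

p = np.trace(rho @ P).real
samples = rng.random(100_000) < p   # simulate 0/1 measurement outcomes

print(0.0 <= p <= 1.0)              # True: q-expectation lies in [0,1]
print(samples.mean())               # sample mean approximates Tr(rho P) = 0.3
```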
 
  • #451
atyy said:
Yeah, that is interesting, although like @vanhees71 and @DarMM, I confess most of Bohr is very mystifying. So I only took the tiny little bit of Bohr that resonates with me.

While we are talking about interpretation of interpretation, am I allowed to quote Bohr's "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we say about Nature." as showing that it is maybe ok to classify him under "I don't care"?
Well, what resonates with me are the physical (rather than philosophical) writings by Bohr, most notably his ingenious insight into the importance of collective dynamics of atomic nuclei ;-)).

One exception is the last quoted paragraph, i.e., that any mathematical description of any physics is of course epistemic. In nature there are no Hilbert spaces or pseudo-Riemannian spacetime manifolds. These are our descriptions of what we objectively and quantitatively (are able to) observe about nature.
 
  • #452
DarMM said:
It seems to me that in the Thermal interpretation one could imagine that the solution to the Traveling salesman problem is "computed" via the evolution of some q-observable ##\langle A \rangle##, but the issue of metastability of the measuring device washes out the solution.
Surely metastability is the limiting factor in quantum computing, since the results are extremely sensitive both to modeling errors (compared to the real devices) and to decoherence.

Whether one can say that Nature computes a solution just because it can be computed by some simple recipe from its state is questionable since Nature just is, and doesn't prepare or analyze anything.
 
  • #453
ftr said:
A. Neumaier, you keep mentioning beams; is it possible to have a beam of a single electron?
See this thread and the posts referred to there.
 
  • #454
vanhees71 said:
Any complete QT textbook I know discusses mixed states later on.
I discussed in detail in Section 3.5 of Part I what is wrong with the traditional textbook introduction of mixed states by Landau and Lifshitz. But they don't discuss the minimal interpretation, as they have explicit collapse.

So please refer to a specific source that you regard as authoritative for your interpretation.
vanhees71 said:
It's obviously very difficult if not impossible to communicate between mathematicians and physicists what an interpretation is.
It's the difference between mathematical physicists (who want precise starting points and then refer in their further discussion only to these starting points and logic) and theoretical physicists (who want results and don't care about logical clarity as long as everything is intuitively plausible). DarMM and I belong to the former; you belong to the latter.
vanhees71 said:
The "normal state" of a macroscopic body around us in everyday life is rather something close to thermal equilibrium, i.e., very far from a pure state.
Thus according to you, the state of a normal macroscopic body is definitely not a pure state, but an improper mixture? This would be in opposition to the (very widely used) setting given by Landau and Lifshitz, who claim that the true state of a normal macroscopic body is a pure state, but our lack of detailed knowledge of it requires that we treat it as a classical mixture of pure states (a proper mixture).

This is the reason why I want you to point out a definite authoritative source specifying the full set of assumptions you make on the fundamental level, which I can use to make myself understood when discussing with you. At present it often feels like pushing a cloud, since whatever I criticize, you say it's different, without clarifying your foundations.
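The formal point behind the proper/improper distinction is that both prescriptions yield the same density matrix, so no local measurement can tell them apart. A sketch using the standard Bell-state example (my own illustration, not from the thread):

```python
import numpy as np

# Tracing out half of an entangled pure state yields an "improper" mixture
# whose density matrix is identical to the "proper" mixture obtained by
# classically mixing |0> and |1> with weight 1/2 each.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)          # (|00> + |11>) / sqrt(2)
rho_full = np.outer(bell, bell.conj())      # pure state of the full system

# Partial trace over the second qubit: reshape to (a, b, a', b'), sum b = b'
rho_A = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

rho_proper = 0.5 * np.eye(2)                # classical 50/50 mixture
print(np.allclose(rho_A, rho_proper))       # True: formally indistinguishable
```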
vanhees71 said:
To claim that the observables of the formalism are what you call "q-expectation" values is highly misleading and an early misconception which has been as early also explained as such by the founding fathers of QT (see my posting #457).
The post numbers have changed since some subthreads were moved.
 
Last edited:
  • #455
A. Neumaier said:
Surely metastability is the limiting factor in quantum computing, since the results are extremely sensitive to both modeling errors (compared to the real devices) and to decoherence.
Even without decoherence or errors, a quantum computer is not expected to solve NP-hard problems efficiently, so there's more to these limits than that.
 
