The thermal interpretation of quantum physics

In summary: I like your summary, but I disagree with the philosophical position you take. I think Dr Neumaier has a good point - QFT may indeed be a better place for interpretations. I do not know enough of his thermal interpretation to comment on its specifics.
  • #596
vanhees71 said:
not giving an explanation of what this means in the lab if it is not an expectation value in the sense of probability theory.
I granted your interpretation in the important case where you actually make a large number of experiments, since in this case the statistical interpretation follows from the thermal interpretation, as I explained in detail in Section 3 of Part II. However, there are many cases where one never actually measures more than once, and these have a different interpretation since statistics is mute about such instances.
 
  • #597
A. Neumaier said:
For a density operator ##\rho## with positive spectrum, ##S:=-k_B\log\rho## is a well-defined operator, and I can give it any name I like. I call it the entropy operator, since its q-expectation is your entropy. In this way, the entropy operator is well-defined, and its q-expectation agrees in equilibrium with the observable thermodynamic entropy, just as the q-expectation of the Hamiltonian agrees in equilibrium with the observable thermodynamic internal energy. Thus everything is fully consistent.

Nothing information theoretic is involved, unless you read it into the formulas.
This is not an operator representing an observable, because its time evolution in a general picture is not given by the time-evolution operator for an observable but by that for a state.
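(A minimal numerical illustration of the quoted definition, not taken from the thread; it assumes ##k_B=1## and uses a small randomly generated density matrix. The q-expectation of ##S:=-k_B\log\rho## reproduces the von Neumann entropy.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random density operator rho with strictly positive spectrum.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Entropy operator S := -k_B log(rho) (with k_B = 1), defined via the eigendecomposition of rho.
p, U = np.linalg.eigh(rho)
S = -U @ np.diag(np.log(p)) @ U.conj().T

# Its q-expectation Tr(rho S) equals the von Neumann entropy -sum_i p_i log p_i.
print(np.trace(rho @ S).real, -np.sum(p * np.log(p)))   # the two numbers agree
```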
 
  • Like
Likes Demystifier
  • #598
A. Neumaier said:
I granted your interpretation in the important case where you actually make a large number of experiments, since in this case the statistical interpretation follows from the thermal interpretation, as I explained in detail in Section 3 of Part II. However, there are many cases where one never actually measures more than once, and these have a different interpretation since statistics is mute about such instances.
We argue in circles :-(. If you never actually measure more than once, the expectation value is provided by the measurement device. That's for sure the case for any measurement concerning a system where classical (i.e., non-quantum) physics is a good approximation. E.g., measuring the length of the edge of my desk with an ordinary meter stick provides such a coarse-grained observable, namely the "length of my table".

I also don't understand why you refuse to define your interpretation but insist on providing a new interpretation. It's a contradictio in adjecto!
 
  • #599
vanhees71 said:
This is not an operator representing an observable, because its time evolution in a general picture is not given by the time-evolution operator for an observable but by that for a state.
But this is the operator ##S## I was talking about in post #591, and hence gives sense to my comments about the state of the universe. Its transformation behavior is the same as that of the density operator, which is adequate for this purpose.
vanhees71 said:
If you never actually measure more than once, the expectation value is provided by the measurement device.
Well, this is why the q-expectation is measurable. But it is not an expectation value in the sense of Born's rule, which is about actual measurements and not about imagined ones.
vanhees71 said:
measuring the length of the edge of my desk with an ordinary meter stick provides such a coarse-grained observable, namely the "length of my table"
And why is this an expectation value?? Of which operator?
vanhees71 said:
Interpretation is about the connection of the formal entities of the theory (for QT the Hilbert space, the statistical operators, and the operators representing observables) with physics.
vanhees71 said:
I also don't understand why you refuse to define your interpretation
Whereas I don't understand why you don't accept my definition as a definition; it satisfies your quoted requirement, and others had no problem with it. You didn't say why my post #479 is not a sufficient interpretation - it refers to plenty of connections between the formal entities of quantum theory and experiment (which surely is physics).

It only interprets it differently from how you want to have it interpreted. This is why it is a different interpretation.
 
  • #600
vanhees71 said:
This is not an operator representing an observable, because its time evolution in a general picture is not given by the time-evolution operator for an observable but by that for a state.
Yes, physicists (including myself) often forget that the von Neumann equation
$$\dot{\rho}=-i[H,\rho]$$
and the Heisenberg equation
$$\dot{O}=i[H,O]$$
have opposite signs and are not simultaneously valid: one holds in the Schrödinger picture and the other in the Heisenberg picture.
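(A quick numerical check, not from the thread; ##\hbar=1## and the matrices are arbitrary illustrative choices. Evolving the state with the von Neumann equation and the observable with the Heisenberg equation, via the exact propagators, gives the same expectation value even though the two equations cannot be used at once.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(3)                               # illustrative Hamiltonian
O = random_hermitian(3)                               # illustrative observable
rho = np.diag([0.5, 0.3, 0.2]).astype(complex)        # illustrative mixed state

t = 0.7
U = expm(-1j * H * t)
rho_t = U @ rho @ U.conj().T        # Schroedinger picture: solves d(rho)/dt = -i[H, rho]
O_t = U.conj().T @ O @ U            # Heisenberg picture:  solves d(O)/dt   = +i[H, O]

print(np.trace(rho_t @ O).real, np.trace(rho @ O_t).real)   # identical expectation values
```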
 
  • Like
Likes julcab12
  • #601
That's true in the Heisenberg picture only. It becomes much clearer if you choose a general picture, decomposing
$$\hat{H}=\hat{H}_1+\hat{H}_2$$
and then letting the observable operators time-evolve with ##\hat{H}_1## and the statistical operator with ##\hat{H}_2##. Then the time dependence is given by
$$\dot{\hat{\rho}}(t)=-\mathrm{i} [\hat{H}_2(t),\hat{\rho}(t)] + \partial_t \hat{\rho}(t),$$
where the partial derivative refers to explicit time dependence, while the observable operators evolve according to
$$\dot{\hat{O}}(t)=\mathrm{i} [\hat{H}_1(t),\hat{O}(t)]+\partial_t \hat{O}.$$
Observable quantities like probabilities for measuring some value of an observable, expectation values of observables, etc. have to be independent of the picture, and this is indeed the case, since the probability to find ##o## when (precisely) measuring ##O##, the system being prepared in the state described by ##\hat{\rho}##, is
$$P(o|\hat{\rho})=\sum_{\alpha} \langle o,\alpha,t|\hat{\rho}(t)|o,\alpha,t \rangle,$$
where ##|o,\alpha,t \rangle## are the (time-dependent!) eigenvectors of ##\hat{O}(t)## to eigenvalue ##o##, evolving with ##\hat{H}_1##:
$$\mathrm{i} \mathrm{d}_t |o,\alpha,t \rangle = -\hat{H}_1 |o,\alpha,t \rangle.$$
The time dependences of these eigenstates and of the statistical operator by construction "conspire" such that the probabilities do not depend on the choice of the picture of time evolution, defined by the (arbitrary!) split of the Hamiltonian into ##\hat{H}_1## and ##\hat{H}_2##.
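(A numerical sketch of this picture independence, not from the thread; it assumes ##\hbar=1##, a time-independent full Hamiltonian, a nondegenerate observable, and arbitrary illustrative matrices. For each split the eigenvectors of the observable are evolved with ##\hat H_1## and the statistical operator with the corresponding picture propagator; the probabilities come out the same for every split.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
def herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = herm(3)                                       # full Hamiltonian H = H1 + H2
O = herm(3)                                       # observable to be measured
rho0 = np.diag([0.6, 0.3, 0.1]).astype(complex)   # initial statistical operator
o_vals, o_vecs = np.linalg.eigh(O)                # eigenvalues o and eigenvectors |o>
t = 1.3

def prob(H1, k):
    """P(o_k | rho) at time t in the picture defined by the split H = H1 + (H - H1)."""
    # Eigenvectors of the observable evolve with H1 alone; the statistical operator is
    # carried by the picture propagator V(t) = exp(iH1 t) exp(-iHt).
    V = expm(1j * H1 * t) @ expm(-1j * H * t)
    rho_t = V @ rho0 @ V.conj().T
    ket = expm(1j * H1 * t) @ o_vecs[:, k]        # |o_k, t>
    return (ket.conj() @ rho_t @ ket).real

for H1 in [np.zeros((3, 3)), H, 0.4 * herm(3)]:        # Schroedinger, Heisenberg, generic split
    print([round(prob(H1, k), 6) for k in range(3)])   # same three probabilities each time
```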
 
  • #602
vanhees71 said:
That's true in the Heisenberg picture only. It becomes much clearer if you choose a general picture
A general picture is not clearer than the Heisenberg picture just because it is more general. The Heisenberg picture is much less confusing, and hence clearer. One can look at the other pictures when one needs them, e.g., to do calculations in perturbation theory. But this is off-topic here.

My entropy operator transforms like the density operator, in any picture. In particular it is fixed in the Heisenberg picture and satisfies von Neumann's equation in the Schrödinger picture.
 
  • #603
A. Neumaier said:
Possibly cooling to absolute zero, but this is impossible.

Sorry, I haven't studied this proposal in detail yet, but I am still stuck on your quantum mechanics of a bipartite system. In the other thread you claim the state of the universe is pure. So what prevents me from simply purifying region A (a system + measuring device) with region B (the rest of the universe)? That I can do this operation follows from the properties of Hermiticity and positive semidefiniteness of the density matrix alone.

Then I can further restrict to a subregion C within B, defined as containing all d.o.f. entangled with A (in the support), and throw out the rest. Why can't I do that?
 
  • #604
Haelfix said:
Sorry, I haven't studied this proposal in detail yet, but I am still stuck on your quantum mechanics of a bipartite system. In the other thread you claim the state of the universe is pure. So what prevents me from simply purifying region A (a system + measuring device) with region B (the rest of the universe)? That I can do this operation follows from the properties of Hermiticity and positive semidefiniteness of the density matrix alone.

Then I can further restrict to a subregion C within B, defined as containing all d.o.f. entangled with A (in the support), and throw out the rest. Why can't I do that?
I never claimed that the state of the universe is pure. On the contrary, since there is a notion of temperature of the universe, it must be in a mixed state.

I also don't know what it means to purify a region A with another region B. The state of the universe determines the state of A at all times as a reduced density operator, and unless A or the dynamics is very trivial, the state of A is always mixed.

Note that whoever prepares the system is also a subsystem C, and its actions are also encoded in the state of the universe. We can prepare certain systems in certain states only because the conditions of the universe encoded in ##\rho## allow us to do this.
 
  • #605
A. Neumaier said:
I never claimed that the state of the universe is pure. On the contrary, since there is a notion of temperature of the universe, it must be in a mixed state.

Ahh, sorry my apologies, I must have misread.

A. Neumaier said:
I also don't know what it means to purify a region A with another region B. The state of the universe determines the state of A at all times as a reduced density operator, and unless A or the dynamics is very trivial, the state of A is always mixed.

Sure, the state of A is mixed, but you are presumably allowed to enlarge the system, and it seems like you can't in your setup, and this I don't understand. To see this, suppose you have some state of the world with a density matrix ##\rho_{A}## in a region A; we have:

$$\mathrm{Tr}_{\mathcal{H}_{A}}\rho_{A}=1$$

then there exists a region B such that this density matrix ##\rho_{A}## arises as the reduced density matrix of a pure state of the larger bipartite system AB. To see this, observe that any Hermitian matrix such as ##\rho_{A}## is diagonalizable, so we have in some basis
$$\rho_{A}= \sum_{i}p_{i}|\psi_{A}^{i}\rangle\langle\psi_{A}^{i}|$$

Note that because of the positive-semidefiniteness condition all the ##p_{i}## are positive. We now introduce a region B with Hilbert space ##\mathcal{H}_{B}## and a complete set of orthonormal states ##\psi_{B}^{i}##, and we see that ##\rho_{A}## is the reduced density matrix of the following pure state:

$$\psi _{AB}=\sum_{i} \sqrt{p_{i}}\psi_{A}^{i}\otimes \psi_{B}^{i}\in\mathcal{H}_{A}\otimes \mathcal{H}_{B}$$

The existence of this 'purification' is one of those decidedly quantum facts that have no classical counterpart. What I am not understanding about your setup is what the obstruction to the above is, since it seemingly follows from matrix mathematics alone and should not depend on the 'interpretation'.
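(A numerical sketch of this purification construction, not from the thread; a random ##3\times 3## density matrix and the standard basis of ##\mathcal{H}_{B}## are used purely for illustration. Tracing the constructed pure state over B returns ##\rho_A##.)

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho_A = M @ M.conj().T
rho_A /= np.trace(rho_A).real                 # a generic mixed state on H_A

p, psi_A = np.linalg.eigh(rho_A)              # spectral decomposition, p_i > 0
dA = dB = 3
psi_AB = np.zeros(dA * dB, dtype=complex)
for i in range(dA):
    psi_B_i = np.eye(dB)[:, i]                # orthonormal states of the ancilla region B
    psi_AB += np.sqrt(p[i]) * np.kron(psi_A[:, i], psi_B_i)

# Partial trace over B recovers rho_A, so psi_AB is indeed a purification.
rho_AB = np.outer(psi_AB, psi_AB.conj()).reshape(dA, dB, dA, dB)
print(np.allclose(np.einsum('ibjb->ij', rho_AB), rho_A))   # True
```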
 
  • #606
Haelfix said:
Sure, the state of A is mixed, but you are presumably allowed to enlarge the system, and it seems like you can't in your setup, and this I don't understand. [...]
The existence of this 'purification' is one of those decidedly quantum facts that have no classical counterpart. What I am not understanding about your setup is what the obstruction to the above is, since it seemingly follows from matrix mathematics alone and should not depend on the 'interpretation'.
The obstruction is that there is exactly one Hilbert space for the whole universe, and everything physically real happens there.

The enlarged ancilla Hilbert space you constructed is not naturally embedded in this universal Hilbert space, hence is devoid of any physical meaning - i.e., its states do not have beable status but are purely theoretical constructs.
 
  • #607
A. Neumaier said:
The obstruction is that there is exactly one Hilbert space for the whole universe, and everything physically real happens there.

Yes, but region A does not have to be the whole universe. It can be a region with an experiment and a measuring device. Your interpretation requires that this be a mixed state, but then I am allowed to consider a joint system AB (where B is often called the environment region, but there are certainly additional purifications, so B is certainly not unique) such that this quantity is the reduced density matrix of a pure state, which of course seems to be in tension with your interpretation.

Note that I have made no assumptions about whether these states possess a temperature, what sort of volumes are involved, or the entanglement structure. This follows solely from the algebraic features of quantum mechanics.
 
  • #608
Haelfix said:
I am allowed to consider a joint system AB (...) such that this quantity is the reduced density matrix of a pure state, which of course seems to be in tension with your interpretation.
I don't understand.
Haelfix said:
there exists a region B such that this density matrix ##\rho_{A}## arises as the reduced density matrix of a pure state of the larger bipartite system AB. To see this, observe that any Hermitian matrix such as ##\rho_{A}## is diagonalizable, so we have in some basis
$$\rho_{A}= \sum_{i}p_{i}|\psi_{A}^{i}\rangle\langle\psi_{A}^{i}|$$

Note that because of the positive-semidefiniteness condition all the ##p_{i}## are positive. We now introduce a region B with Hilbert space ##\mathcal{H}_{B}## and a complete set of orthonormal states ##\psi_{B}^{i}##, and we see that ##\rho_{A}## is the reduced density matrix of the following pure state:

$$\psi _{AB}=\sum_{i} \sqrt{p_{i}}\psi_{A}^{i}\otimes \psi_{B}^{i}\in\mathcal{H}_{A}\otimes \mathcal{H}_{B}$$
Nothing in the construction you give guarantees that this pure state coincides with the actual state of the region AB, i.e., the reduced density matrix obtained by taking the partial trace of the state of the universe over the complement of AB. But this reduced density matrix is the only state on AB that counts in the thermal interpretation.

In the Schrödinger picture, the universe has at each time ##t## a density operator ##\rho(t)##, and each subsystem A has a corresponding reduced density operator ##\rho_A(t)##, obtained by taking the partial trace of ##\rho(t)## over the complement of A. These are the only states the thermal interpretation is concerned with at all - because these are the states containing precisely the information about the q-expectations of the operators of the universe attached to the subsystem A.

For suitably prepared systems with very few degrees of freedom, and only for these, physicists can make ##\rho_A(t)## pure by utilizing the laws of Nature and the control facilities these impart on humans or machines. How to do this is part of the art of experimental preparation.

But your pure state ##\rho_{AB}=\psi_{AB}\psi_{AB}^*## is not of this form; it is a purely formal construct, so it has no physical meaning in the thermal interpretation.
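(A toy numerical illustration of this point, not from the thread: for a three-qubit 'universe' ABC in a random pure state, the canonical purification of ##\rho_A## over B reduces correctly to ##\rho_A##, yet it differs from the actual reduced state of AB obtained by tracing out C.)

```python
import numpy as np

rng = np.random.default_rng(4)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)                                   # pure state of the 'universe' ABC
rho_U = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # indices (a,b,c,a',b',c')

rho_AB = np.einsum('abcxyc->abxy', rho_U).reshape(4, 4)      # actual (mixed) state of AB
rho_A = np.einsum('abcxbc->ax', rho_U)                       # actual state of A

# Canonical purification of rho_A, with a single ancilla qubit playing the role of B.
p, U = np.linalg.eigh(rho_A)
p = np.clip(p, 0.0, None)                                    # guard against round-off
psi_AB = sum(np.sqrt(p[i]) * np.kron(U[:, i], np.eye(2)[:, i]) for i in range(2))
purification = np.outer(psi_AB, psi_AB.conj())

# Both states reduce to rho_A on subsystem A ...
print(np.allclose(np.einsum('abxb->ax', rho_AB.reshape(2, 2, 2, 2)), rho_A))        # True
print(np.allclose(np.einsum('abxb->ax', purification.reshape(2, 2, 2, 2)), rho_A))  # True
# ... but the purification is not the state of AB determined by the state of the universe.
print(np.allclose(purification, rho_AB))                                            # False
```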
 
  • #609
A. Neumaier said:
Suppose the energy levels are ##E_1## and ##E_2##, and the system is in state ##a_1|E_1\rangle+a_2|E_2\rangle##, where ##|a_1|^2=p,|a_2|^2=1-p##. Then the approximately measured q-expectation can be exactly calculated to be ##\bar E=pE_1+(1-p)E_2##, and its uncertainty can be exactly calculated to be ##\sigma_E=\sqrt{p(1-p)}|E_1-E_2|##. Thus whatever ##E## is measured (which depends on the method used) satisfies ##|E-\bar E|## of order at least ##\sigma_E##.
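(For concreteness, a worked instance of the quoted formulas with illustrative numbers not taken from the thread: with ##p=\tfrac14##, ##E_1=1\,\mathrm{eV}## and ##E_2=3\,\mathrm{eV}##,
$$\bar E = \tfrac14\cdot 1\,\mathrm{eV} + \tfrac34\cdot 3\,\mathrm{eV} = 2.5\,\mathrm{eV},\qquad \sigma_E=\sqrt{\tfrac14\cdot\tfrac34}\;|1-3|\,\mathrm{eV}\approx 0.87\,\mathrm{eV},$$
so whatever single ##E## is registered is claimed to deviate from ##\bar E## by an amount of order at least ##0.87\,\mathrm{eV}##.)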

Sorry for going to an old post, but I just want to make sure I understand.

Now, the energy "levels" E1, E2: it seems to me that you are considering them as virtual, since they cannot exist simultaneously; in other words, what are they if not a superposition?
 
  • #610
ftr said:
Sorry for going to an old post, but I just want to make sure I understand.

Now, the energy "levels" E1, E2: it seems to me that you are considering them as virtual, since they cannot exist simultaneously; in other words, what are they if not a superposition?
I assumed for simplicity a pure state, which is realistic at least for certain 2-state systems. This is the standard notation; nothing is virtual here.

The existing things (the beables) are the properties of the 2-state system, given by the q-expectations computed from this state, and everything computable from these, hence the numbers ##E_1, E_2##, and ##p##. They characterize the state of the system, hence exist simultaneously as properties.

On the other hand, the eigenstates ##|E_k\rangle## are formal objects only, not existing in a physical sense.
 
  • #611
A. Neumaier said:
I assumed for simplicity a pure state, which is realistic at least for certain 2-state systems. This is the standard notation; nothing is virtual here.

The existing things (the beables) are the properties of the 2-state system, given by the q-expectations computed from this state, and everything computable from these, hence the numbers ##E_1, E_2##, and ##p##. They characterize the state of the system, hence exist simultaneously as properties.

On the other hand, the eigenstates ##|E_k\rangle## are formal objects only, not existing in a physical sense.
If you have a two-state system with energy eigenvectors ##|E_1 \rangle##, ##|E_2 \rangle##, the corresponding energy eigenstates are true pure states, i.e., states in which the system can be prepared, represented by the statistical operators ##|E_k \rangle \langle E_k|##. Being prepared in such a state, the energy has the determined value ##E_k##.

Now consider the state ##|\psi \rangle \langle \psi|## with ##|\psi \rangle=a_1 |E_1 \rangle + a_2 |E_2 \rangle## and ##|a_1|^2+|a_2|^2=1##. The meaning in the standard statistical interpretation, and that's what's confirmed by real-world experiments, is that if you measure the energy precisely, then you find with probability ##p=|a_1|^2## the energy value ##E_1## and with probability ##1-p=|a_2|^2## the energy value ##E_2##. I.e., if you measure the energy precisely you don't get the expectation value of the energy, ##\langle E \rangle=p E_1 +(1-p) E_2##, but you always get either ##E_1## or ##E_2##. In the real world, of course, you don't have arbitrarily precise measurement devices, and you'll have some additional (statistical and systematic) error due to your device. So what's described above is an idealized theoretical situation of absolutely precise measurements. However, these technical problems in precisely measuring the energy are not a matter of the interpretation of QT but a challenge of engineering. As you well know, there are energy measurements with fantastic precision, e.g., for the Lamb shift of the hydrogen spectrum (this is achieved mostly because this ultraprecise measurement is of some interest for fundamental physics, and that's why one has taken great effort to make the measurement that precise).

Maybe you can construct a measurement device which somehow averages over the energy values if the system is prepared in said state. This is just one more example of the fact that you have to properly distinguish between the preparation of the system (represented by the statistical operator) and the observables (represented by self-adjoint operators); and, last but not least, what's really measured of course depends on the measurement apparatus. If you don't measure the energy in your example precisely (enough) you don't get one of the eigenvalues but some other value, maybe the expectation value given by the quantum-mechanical formalism.

It's still not clear how your "thermal interpretation" interprets the formalism. Simply saying that what's measured (or, in the still not well-defined expression, "what's a beable") is the expectation value according to the prepared quantum state is definitely proven wrong by zillions of quantum measurements performed in labs all over the world!
 
  • #612
vanhees71 said:
So what's described above is an idealized theoretical situation of absolutely precise measurements.
vanhees71 said:
If you don't measure the energy in your example precisely (enough) you don't get one of the eigenvalues but some other value
Thus an absolutely precise measurement is a theoretical construct, hence needs a convention to be defined.

The traditional convention is that an absolutely precise measurement would give one of the eigenvalues.

The convention of the thermal interpretation is that an absolutely precise measurement would give the q-expectation.

These are two different theoretical conventions catering for the same experimental situation. My convention has the advantage that it is applicable to single systems rather than only to large ensembles of systems and gets rid of probabilities at the foundational level, thus simplifying the foundations.
vanhees71 said:
It's still not clear how your "thermal interpretation" interprets the formalism.
Your beables are eigenvalues, my beables are q-expectations. These differing conventions make different interpretations. Depending on which numbers are declared to be beables, the measurement errors (deviation from the true values defined by the beables) are different.
vanhees71 said:
Simply saying that what's measured (or, in the still not well-defined expression, "what's a beable") is the expectation value according to the prepared quantum state is definitely proven wrong by zillions of quantum measurements performed in labs all over the world!
No.

The thermal interpretation says that the beable ##\langle E\rangle## is measured with an error of the order of at least ##\sigma_E## (depending on the measurement apparatus), while the statistical interpretation says that the beable ##E_1## or ##E_2## is measured with another error depending on the measurement apparatus.

Both statements are definitely proven correct by zillions of quantum measurements performed in labs all over the world!
 
  • Like
Likes dextercioby
  • #613
A. Neumaier said:
Thus an absolutely precise measurement is a theoretical construct, hence needs a convention to be defined.

The traditional convention is that an absolutely precise measurement would give one of the eigenvalues.

The convention of the thermal interpretation is that an absolutely precise measurement would give the q-expectation.
But that's not what is observed in the real world; rather, the standard interpretation is that what's found if an observable is precisely measured are the eigenvalues of the observable operators and not expectation values of observables given by the prepared state of the system.

Take an electron in the double-slit experiment: A single electron's measured position on the screen is usually not found to be at the place given by the position expectation value. It is of course quite probable that it lands in the main maximum of the distribution, which in this case is the expectation value, but it is also often found somewhere else. This case, which is the paradigmatic example for the probabilistic interpretation of QT, resolving the infamous "wave-particle dualism self-contradiction" of the "old quantum theory", shows: There's nothing that precisely determines at which position the electron will hit the screen. All that can be known are the probabilities of where it hits the screen, and what's found is that it can hit the screen anywhere, but in a single measurement it's not found to have hit the screen at the average position given by the quantum state!
A. Neumaier said:
These are two different theoretical conventions catering for the same experimental situation. My convention has the advantage that it is applicable to single systems rather than only to large ensembles of systems and gets rid of probabilities at the foundational level, thus simplifying the foundations.
It's not a convention. Your "thermal interpretation" makes the obviously wrong prediction that what's found when measuring an observable is the expectation value of the observable given by the state the system is prepared in. Thus your "thermal interpretation" is not getting rid of the probabilities of usual quantum mechanics but just substitutes them with something that's obviously wrong!
A. Neumaier said:
Your beables are eigenvalues, my beables are q-expectations. These differing conventions make different interpretations. Depending on which numbers are declared to be beables, the measurement errors (deviation from the true values defined by the beables) are different.
I don't use the word "beable", because I have no clue what Bell meant when he coined this word. I'm a great fan of Bell's achievement of freeing the unfortunate EPR argument from being just philosophical gibberish and turning it into a scientifically, i.e., empirically, decidable prediction of a model (named "local realistic hidden-variable model") that contradicts QT. As is well known today, the decision is with around ##100 \sigma## significance against local hidden-variable models and in favor of QT.

The eigenvalues of the observable operators are the possible outcomes of precise measurements of the corresponding observable, independent of the state the system is prepared in.
A. Neumaier said:
No.

The thermal interpretation says that the beable ##\langle E\rangle## is measured with an error of the order of at least ##\sigma_E## (depending on the measurement apparatus), while the statistical interpretation says that the beable ##E_1## or ##E_2## is measured with another error depending on the measurement apparatus.

Both statements are definitely proven correct by zillions of quantum measurements performed in labs all over the world!
Again, the thermal interpretation is then wrong! How precisely you can measure the observable depends on the measurement apparatus, not on the quantum state. E.g., the error of an energy measurement is given by the energy resolution of the apparatus, which is independent of the preparation of the measured system. If you want to empirically check your quantum description, which gives of course an energy expectation value and a standard deviation (it gives even more, namely the entire probability distribution!), you need to use an apparatus with much better precision than the quantum-mechanical standard deviation of the measured quantity. E.g., if you want to measure the mass and width (or mean lifetime) of the ##Z## boson you need a mass resolution much better than this width (of about 2.5 GeV).

Indeed, the standard interpretation says that a precise measurement of the energy in this example gives either ##E_1## or ##E_2##. Any real apparatus has of course a finite precision, but that is indeed unrelated to the preparation of the measured system; it depends on the construction of the apparatus. If you want to be able to precisely resolve the two energy values, the precision of the apparatus must be much better than ##E_2-E_1##.
 
  • #614
vanhees71 said:
But that's not what is observed in the real world; rather, the standard interpretation is
Your statement is not what is observed in the real world, but only the standard interpretation of what is observed in the real world.

What is observed in the real world is a meter reading. How to interpret this reading in terms of the system measured is a matter of convention, given by the interpretation together with knowledge about the calibration of the measurement device.

vanhees71 said:
the standard interpretation is that what's found if an observable is precisely measured are the eigenvalues of the observable operators and not expectation values of observables given by the prepared state of the system.
In contrast, the thermal interpretation of what is observed is that what's found when a quantity represented by a Hermitian operator is measured is an inaccurate approximation of the q-expectation of the operator given by the prepared state of the system, and not an eigenvalue of the operator. This gives two very different interpretations of exactly the same experimental results, and both are correct since no known measurement experiment on a 2-level system disagrees with either interpretation.

vanhees71 said:
It's not a convention. Your "thermal interpretation" makes the obviously wrong prediction that what's found when measuring an observable is the expectation value of the observable given by the state the system is prepared in.
This is a misunderstanding. The thermal interpretation only claims that what's found when measuring an observable is an approximation to the q-expectation ##\bar E##, accurate to an error of the order of the predicted uncertainty ##\sigma_E##. This is obviously a correct prediction.

vanhees71 said:
How precisely you can measure the observable depends on the measurement apparatus, not on the quantum state.
In the thermal interpretation it depends in general on both; in special cases only on one of the two.

Again you only gave the view of the standard interpretation, which is different since it is a different interpretation!

vanhees71 said:
Take an electron in the double-slit experiment: A single electron's measured position on the screen is usually not found to be at the place given by the position expectation value. It is of course quite probable that it lands in the main maximum of the distribution, which in this case is the expectation value, but it is also often found somewhere else. This case, which is the paradigmatic example for the probabilistic interpretation of QT, resolving the infamous "wave-particle dualism self-contradiction" of the "old quantum theory", shows: There's nothing that precisely determines at which position the electron will hit the screen. All that can be known are the probabilities of where it hits the screen, and what's found is that it can hit the screen anywhere, but in a single measurement it's not found to have hit the screen at the average position given by the quantum state!
I prefer to discuss the double-slit experiment with light in place of electrons since it makes the underlying principle clearer. Consider the quantum system consisting of the screen and an external (classical) electromagnetic field. This is a very good approximation to many experiments, in particular to those where the light is coherent. The standard analysis of the response of the electrons in the screen to the field (see, e.g., Chapter 9 in the quantum optics book by Mandel and Wolf) gives
- according to the standard interpretation - a Poisson process for the electron emission, at a rate proportional to the intensity of the incident field. This is consistent with what is observed when doing the experiment with coherent light. A local measurement of the parameters of the Poisson process therefore provides a measurement of the intensity of the field.

There is nothing probabilistic or discrete about the field; it is just a term in the Hamiltonian of the system. Thus, according to the standard interpretation, the probabilistic response is in this case solely due to the measurement apparatus - the screen, the only quantum system figuring in the analysis. At very low intensity, the electron emission pattern appears event by event, and the interference pattern emerges only gradually. Effectively, the screen begins to stutter like a motor when fed with gas at an insufficient rate. But nobody ever suggested that the stuttering of a motor is due to discrete eigenvalues of the gas. Therefore there is no reason to assume that the stuttering of the screen is due to discrete eigenvalues of the intensity - which in the analysis given is not even an operator but just a coefficient in the Hamiltonian!

In the thermal interpretation, one assumes a similar stuttering effect at low intensity of a quantum field (whether the photon field or the electron field or a silver field or a water field), illustrated by the quantum bucket introduced in post #272 and post #6 of a companion thread.
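(A small simulation of this stuttering picture, not from the thread; the fringe profile, screen grid and event numbers are purely illustrative. Detection counts are drawn as independent Poisson variables with rates proportional to a classical double-slit intensity, and the pattern only takes shape as events accumulate.)

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-3, 3, 61)                               # screen positions (arbitrary units)
intensity = np.cos(4 * x) ** 2 * np.exp(-x ** 2 / 2)     # illustrative double-slit fringes
rate = intensity / intensity.sum()                       # emission rate per screen cell

for n_events in (10, 100, 10000):
    counts = rng.poisson(lam=n_events * rate)            # independent Poisson counts per cell
    observed = counts / max(counts.sum(), 1)
    # Correlation with the underlying intensity grows as the 'stuttering' fills in.
    print(n_events, round(np.corrcoef(observed, rate)[0, 1], 3))
```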
 
  • #615
A. Neumaier said:
These are two different theoretical conventions catering for the same experimental situation. My convention has the advantage that it is applicable to single systems rather than only to large ensembles of systems and gets rid of probabilities at the foundational level, thus simplifying the foundations.
vanhees71 said:
the standard interpretation is that what's found if an observable is precisely measured are the eigenvalues of the observable operators and not expectation values of observables given by the prepared state of the system.
I contrasted the standard interpretation and the thermal interpretation in an example in a new thread.
 
  • #616
A. Neumaier said:
No.

The thermal interpretation says that the beable ##\langle E\rangle## is measured with an error of the order of at least ##\sigma_E## (depending on the measurement apparatus), while the statistical interpretation says that the beable ##E_1## or ##E_2## is measured with another error depending on the measurement apparatus.

Both statements are definitely proven correct by zillions of quantum measurements performed in labs all over the world!

I still don't understand where the E1 and E2 in your system come from; is the system oscillating between them or something else? In the standard interpretation E1, E2 are very clear in what they mean. Moreover, how can your sigma depend on the measurement apparatus when its calculation has nothing to do with it?

Finally, vanhees71 says this:
vanhees71 said:
If you want to be able to precisely resolve the two energy values, the precision of the apparatus must be much better than ##E_2-E_1##.
Isn't that possible? That should solve the dispute, right? I mean, the difference between the ground energy of H and the higher level is 13.6 - 3.4 eV; that is a lot, isn't it?
 
  • #617
ftr said:
I still don't understand where the E1 and E2 in your system come from; is the system oscillating between them or something else?
They are eigenvalues of the Hamiltonian H, and differences of two eigenvalues (divided by ##\hbar##) represent excitable oscillation frequencies of the system (Rydberg–Ritz combination principle). They are also labels of a basis of eigenstates of H. All this is standard quantum mechanics, nothing special to the thermal interpretation.
ftr said:
In the standard interpretation E1, E2 are very clear in what they mean.
What else do they mean there?
ftr said:
Moreover, how can your sigma depend on the measurement apparatus when its calculation has nothing to do with it?
The computed sigma is a lower bound on the actual measurement uncertainty, just as in the Heisenberg uncertainty relation: No matter how contrived your measurement device, the results are never reproducible with an uncertainty of less than the computed sigma. More uncertainty is the normal situation.

ftr said:
the difference between the ground energy of H and the higher level is 13.6 - 3.4 eV; that is a lot, isn't it?
The size of the difference doesn't matter.

The standard convention is comparable to saying that the location of Paris is either the position of Notre Dame or the position of the Eiffel Tower (or maybe other notable points, corresponding to higher levels in the quantum case), with random results depending on whom you ask. Neither of these gives the location of Paris more precisely than with an uncertainty of a few kilometers, though both Notre Dame and the Eiffel Tower can be located far more precisely.
 
  • #618
A. Neumaier said:
They are eigenvalues of the Hamiltonian H, and differences of two eigenvalues (divided by ##\hbar##) represent excitable oscillation frequencies of the system (Rydberg–Ritz combination principle). They are also labels of a basis of eigenstates of H. All this is standard quantum mechanics, nothing special to the thermal interpretation.
A. Neumaier said:
What else do they mean there?

Yes, of course I know that; it's just that the standard interpretation gives precise meaning to these eigenvalues and so bestows on them something physical, but you make their average physical (a beable), so it is hard for me to understand how YOU also look at it as "excitable oscillation" and then take their average. I guess I am missing something.
 
  • #619
A. Neumaier said:
Your statement is not what is observed in the real world, but only the standard interpretation of what is observed in the real world.
Indeed, what's observed in the real world is a meter reading. That's what a measurement device is constructed for, but in general it's not some more or less accurate measurement of the quantum-expectation value. What's measured (or approximately measured) in terms of the formalism depends on the construction of the measurement apparatus, not on the state of the system. A measurement device with good enough accuracy resolves the possible values as predicted by the formalism, namely the eigenvalues of the operators representing observables. If that were not the case, nobody would have cared about quantum mechanics from the very beginning.

Of course, often you indeed measure (more or less accurately) q-expectation values, because, e.g., by construction your measurement device measures coarse-grained macroscopic "classical" values (the example I brought forward for that is a galvanometer measuring a macroscopic electric current running through it, averaging over sufficiently long time intervals due to the inertia and friction of the device; in such cases that's wanted and by construction of the device, and it's not due to the preparation of the measured object).
 
  • #620
ftr said:
Yes, of course I know that; it's just that the standard interpretation gives precise meaning to these eigenvalues and so bestows on them something physical, but you make their average physical (a beable), so it is hard for me to understand how YOU also look at it as "excitable oscillation" and then take their average. I guess I am missing something.
Well, if you expand in first-order perturbation theory a q-expectation ##\langle A(t)\rangle## in the Heisenberg picture into a Fourier integral, you find that these oscillations are excitable.
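(A short numerical check of this statement for a two-level system, not from the thread; ##\hbar=1## and the numbers are illustrative. The Heisenberg-picture q-expectation of an off-diagonal observable oscillates at the Bohr frequency ##E_2-E_1##.)

```python
import numpy as np

E1, E2 = 1.0, 3.0                                    # the two energy levels
A = np.array([[0, 1], [1, 0]], dtype=complex)        # off-diagonal observable
a1, a2 = np.sqrt(0.25), np.sqrt(0.75)                # state a1|E1> + a2|E2>
psi = np.array([a1, a2], dtype=complex)

for t in np.linspace(0.0, 3.0, 7):
    U = np.diag(np.exp(-1j * np.array([E1, E2]) * t))
    A_t = U.conj().T @ A @ U                         # Heisenberg-picture A(t)
    numeric = (psi.conj() @ A_t @ psi).real          # q-expectation <A(t)>
    analytic = 2 * a1 * a2 * np.cos((E2 - E1) * t)   # oscillation at frequency E2 - E1
    print(round(float(numeric), 6), round(float(analytic), 6))   # the two columns agree
```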
 
  • #621
ftr said:
I still don't understand where the E1 and E2 in your system come from; is the system oscillating between them or something else? In the standard interpretation E1, E2 are very clear in what they mean. Moreover, how can your sigma depend on the measurement apparatus when its calculation has nothing to do with it?

Finally, vanhees71 says this:

Isn't that possible? That should solve the dispute, right? I mean, the difference between the ground energy of H and the higher level is 13.6 - 3.4 eV; that is a lot, isn't it?
We discussed a two-level system (which is the simplest case you can think of).

Of course, the measurement device is independent of the measured system. What it measures is determined by its construction, not by the preparation of the measured system. To discuss energy measurements is, however, tricky, because energy is somewhat special (for the same reason that the time-energy uncertainty relation is special: time is not an observable but a parameter in QT, and that's another subtlety we can discuss later, after the obvious disagreement between @A. Neumaier 's claims and mine is hopefully resolved).

So it's better to discuss the measurement of the spin-z component of a spin-1/2 particle. That's also a two-level system, and it's very clear how to measure the spin-z component of an uncharged spin-1/2 particle, namely with a Stern-Gerlach apparatus (SGA), which can be constructed so as to measure FAPP precisely whether the particle has a spin-z component with the value ##+1/2## or ##-1/2##.

Now suppose we have prepared the particle in the state ##|\psi \rangle \langle \psi|## with ##|\psi \rangle=a_1 |+1/2 \rangle + a_2 |-1/2 \rangle##. Then using this precise SGA for each individual particle measured we'll get either ##\sigma_z=+1/2## or ##\sigma_z=-1/2##. Which value we get for any individual particle is undetermined. That's an empirical fact, independent of the interpretation of QT.

The interpretation is in the meaning of the quantum state. In the minimal statistical interpretation we have the following: Only when we use very many particles will we find that the probabilities to find ##\sigma_z=1/2## or ##\sigma_z=-1/2## are ##|a_1|^2## and ##|a_2|^2##, respectively, when the spin-z component is measured accurately.

@A. Neumaier claims that's not the case but that what's always measured is an approximation to the q-average value,
$$\langle \sigma_z \rangle = |a_1|^2 (1/2) + |a_2|^2 (-1/2).$$
This may well be the case for some apparatus measuring this expectation value. One could think of a device that doesn't resolve the individual spin-z values but just measures the intensities on a photo plate where many particles running through the SGA are registered, but that's not what's meant when the foundations of the theory are discussed.

There you discuss first what happens when a single particle's spin-z component is measured precisely (in the sense given above); the corresponding findings of all experiments where such a precise measurement of a microscopic quantity has been achieved confirm the predictions of QT: what's found as the possible values of the measured observable are the eigenvalues of the corresponding self-adjoint operator and not q-expectation values. Indeed the outcome of any individual measurement is not predictable (except when the system is prepared in a state where the observable has a determined value, which in our case is only possible if either ##|a_1|^2=1## and ##a_2=0## or vice versa), but if the experiment is repeated very often on identically prepared particles you get the probabilities ##|a_1|^2## and ##|a_2|^2=1-|a_1|^2## for finding ##\sigma_z=1/2## or ##\sigma_z=-1/2##, as predicted by Born's rule.

If this were not the case, QT would long since have been modified into something better. Nowadays we have such precision in preparing and measuring two-level systems, e.g., single-photon polarization states (and also other more complicated quantum systems), with all results agreeing with the standard interpretation of QT, that I can safely say that the predictions of the "Thermal Interpretation" are not in accord with the observations.
 
  • #622
vanhees71 said:
in general it's not some more or less accurate measurement of the quantum-expectation value.
No, it always is some more or less accurate measurement of the q-expectation. If the level gap is large, a single measurement is just very inaccurate. But by averaging, the accuracy becomes better, as always when measurements are inaccurate. See my post in the new thread.
 
  • Like
Likes dextercioby
  • #623
A. Neumaier said:
Well, if you expand in first-order perturbation theory a q-expectation ##\langle A(t)\rangle## in the Heisenberg picture into a Fourier integral, you find that these oscillations are excitable.
Aha, I was going to write that two days ago but I was waiting for you to write. Actually I was sitting here for one hour thinking about it and some related issues.
 
  • #624
What has this to do with what's measured, when measuring the observable ##A## described by the operator ##\hat{A}(t)## in the Heisenberg picture? BTW, the measurable outcomes of QT are independent of the choice of the picture of time evolution.

What @A. Neumaier seems to have in mind is how the spectral lines of, e.g., atoms are related to the energy levels of these atoms, and this is indeed derived in good approximation in first-order time-dependent perturbation theory (of course in the Dirac picture, not the Heisenberg picture, but that's a detail), as explained in my Insights article on the photoelectric effect (in semiclassical approximation), which is a standard textbook result of course:

https://www.physicsforums.com/insights/sins-physics-didactics/
 
  • #625
vanhees71 said:
using this precise SGA for each individual particle measured we'll get either ##\sigma_z=+1/2## or ##\sigma_z=-1/2##. Which value we get for any individual particle is undetermined. That's an empirical fact, independent of the interpretation of QT.

The interpretation is in the meaning of the quantum state.
So far I fully agree.
vanhees71 said:
In the minimal statistical interpretation we have the following: Only when we use very many particles will we find that the probabilities to find ##\sigma_z=1/2## or ##\sigma_z=-1/2## are ##|a_1|^2## and ##|a_2|^2##, respectively, when the spin-z component is measured accurately.

A. Neumaier claims [...] that what's always measured is an approximation to the q-average value,
$$\langle \sigma_z \rangle = |a_1|^2 (1/2) + |a_2|^2 (-1/2).$$
There is no conflict here: Both your and my interpretation apply and are consistent with the experimental record. But we use different conventions resulting in different intuitions. The same measured values are approximations to many things. You consider them as approximations to an eigenvalue and then get in this case zero approximation error. I consider them as approximations of the q-expectation (for simplicity, let us say it is zero) and then get an approximation error of 1/2.
Note that I didn't claim better accuracy! But if I repeat the measurement N times and average the results, the accuracy improves to ##\frac12N^{-1/2}##. Just as in all cases where a measurement result is noisy.
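(A simulation of this claim, not from the thread, with the illustrative value ##|a_1|^2=0.25##: the relative frequency of the outcome ##+1/2## approaches ##|a_1|^2## (the statistical reading), while the running average approaches the q-expectation ##\langle\sigma_z\rangle=-0.25## with an error shrinking like ##N^{-1/2}##, as described above.)

```python
import numpy as np

rng = np.random.default_rng(6)
p1 = 0.25                                        # |a1|^2 ; |a2|^2 = 0.75
q_exp = p1 * 0.5 + (1 - p1) * (-0.5)             # <sigma_z> = -0.25

for N in (100, 10_000, 1_000_000):
    outcomes = rng.choice([0.5, -0.5], size=N, p=[p1, 1 - p1])   # ideal SGA results
    freq_up = np.mean(outcomes == 0.5)           # approaches |a1|^2 = 0.25
    mean = outcomes.mean()                       # approaches <sigma_z> = -0.25
    print(N, round(freq_up, 4), round(mean, 4),
          round(abs(mean - q_exp) * np.sqrt(N), 2))   # last column stays of order 1
```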
 
  • #626
vanhees71 said:
What has this to do with what's measured
I was just answering ftr's question. A bit off-topic but short, so I found it worth explaining.
vanhees71 said:
the measurable outcomes of QT are independent of the choice of the picture of time evolution.
Yes, but my formula for it was not, because there the state was fixed.
 
  • #627
vanhees71 said:
What has this to do with what's measured,
Not really sure. But a couple of days ago I used the word "virtual", which has been very controversial and is usually associated with expansion terms in perturbation theory. I am still thinking :frown:
 
  • #628
A. Neumaier said:
No, it always is some more or less accurate measurement of the q-expectation. If the level gap is large, a single measurement is just very inaccurate. But by averaging, the accuracy becomes better, as always when measurements are inaccurate. See my post in the new thread.
Ok, if you don't want to accept experimental facts, it's not possible to discuss in a scientific way. I give up!
 
  • #629
A. Neumaier said:
I was just answering ftr's question. A bit off-topic but short, so I found it worth explaining.

Yes, but my formula for it was not, because there the state was fixed.
If a result is dependent on the picture of the time evolution, it doesn't describe anything physical. It's not something related to what can be observed (by definition of standard QT).
 
  • #630
ftr said:
Not really sure. But a couple of days ago I used the word "virtual", which has been very controversial and is usually associated with expansion terms in perturbation theory. I am still thinking :frown:
Well, thinking is always good, but what's sometimes controversial (however never among practicing theoretical physicists) is the meaning of "virtual particle", about which @A. Neumaier has written excellent Insights articles, fortunately using the "standard interpretation", not his very enigmatic (in my opinion almost certainly incorrect) "thermal interpretation".
 
