Are there signs that any Quantum Interpretation can be proved or disproved?

In summary, according to the experts, decoherence has not made much progress in improving our understanding of the measurement problem.
  • #246
vanhees71 said:
I give up. I don't understand the logic behind the idea of the "thermal interpretation". I guess you can live with it.
Sad to hear. Your current discussions with A. Neumaier helped me to see more clearly how the thermal interpretation resolves paradoxes I had thought about long before I heard about the thermal interpretation.

My impression with respect to this "please give me an operational definition" request and A. Neumaier's reply "what sort of operational definition would you find satisfactory" is that it is similar to requests for operational definitions of the center of gravity of the Sun, the Moon, or the Earth in Newton's theory. If you apply the theory to point particles, then those centers of gravity are the things that the theory talks about, and to which its predictions apply. But you cannot directly observe the center of gravity of the Earth. You could instead observe the Moon and how it orbits the Earth, and thereby observe it indirectly. But even then you don't directly observe its center of gravity (which would give you the most accurate information), only an approximation to it that carries a certain fundamental uncertainty. But A. Neumaier still has to insist that "the state of the system" determines those centers of gravity, because that is what the theory talks about.
 
  • #247
The problem is also that he is contradicting himself all the time. Just in his last posting all of a sudden he admitted that he needs probabilities to define his q-expectations. All the time he denied that this standard definition is abandoned in this thermal interpretation. The arguments have been going in circles for years, and it seems to be impossible to communicate even the problem with abandoning probabilities from the interpretation.
 
  • Like
Likes physicsworks
  • #248
vanhees71 said:
all of a sudden he admitted that he needs probabilities to define his q-expectations.
No. Probabilities are not needed to define quantum expectations; only the trace formula figures in the (purely mathematical) definition. But in case 2., where many measurement results on identically prepared systems are available, the probabilities used in classical statistics to define expectation values reproduce the quantum expectation values defined by the trace formula.
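To spell out the purely mathematical side (a standard identity, nothing specific to the thermal interpretation): for an observable ##\hat{A}## with spectral decomposition ##\hat{A}=\sum_k a_k \hat{P}_k##, the trace formula gives
$$\langle \hat A\rangle := \mathrm{Tr}(\hat\rho \hat A) = \sum_k a_k\, \mathrm{Tr}(\hat\rho \hat P_k).$$
Only in case 2., when the numbers ##p_k=\mathrm{Tr}(\hat\rho \hat P_k)## are actually realized as relative frequencies of the outcomes ##a_k## in a long series of measurements, does the right-hand side acquire the meaning of a statistical expectation value.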

vanhees71 said:
The arguments have been going in circles for years, and it seems to be impossible to communicate even the problem with abandoning probabilities from the interpretation.
The argument has been going in circles for years because you don't pay attention to details in my statements that make a lot of difference to the meaning.

I don't abandon probabilities from the interpretation but only from the foundations (i.e., from what I assume without discussion), and introduce them later as derived entities that can be used when the circumstances admit a statistical interpretation, namely when one has many actual observations so that a frequentist probability makes sense.
 
  • #249
You don't even understand my question. The trace formula is indeed purely mathematical. I ask for the physics! How are your q-expectation values measured? In standard QT, with its probabilistic meaning of states and the trace formula, that's clear.

Also, what's measured is by no means always an expectation value (no matter how you operationally define it), as the example with the two spots (rather than one big spot) in the simplest case of a spin-component measurement à la Stern and Gerlach demonstrates.
 
  • #250
vanhees71 said:
You don't even understand my question. The trace formula is indeed purely mathematical. I ask for the physics! How are your q-expectation values measured? In standard QT, with its probabilistic meaning of states and the trace formula, that's clear.
You don't understand my answers.

In general, q-expectation values cannot be measured; the thermal interpretation only asserts that they objectively exist (as beables in Bell's sense). But in many cases they can be measured. In post #240 I gave an operational answer for two classes of instances where they have an operational meaning, which you didn't even try to understand. Nor did you tell me which physics is missing!

There is something to be understood in my postings, not only to be argued against!

vanhees71 said:
Also, what's measured is by no means always an expectation value (no matter how you operationally define it), as the example with the two spots (rather than one big spot) in the simplest case of a spin-component measurement à la Stern and Gerlach demonstrates.
In the thermal interpretation, what's measured has a different meaning than in the statistical interpretation, since the thermal interpretation refuses to call observations measurements when they are not reproducible. Only reproducible results have scientific meaning, hence deserve to be called measurement results.

What's measured in a Stern-Gerlach experiment are two silver spots composed of many small random events. From these one forms reproducible measurement results: the mean (between the spots), the standard deviation (large), and the response rates to the impinging field intensity. These are the numbers that can be compared with the theoretical q-expectations, with very good agreement. Thus the q-expectations have in this case an operational meaning. Nothing is unphysical in this account of the experiment; on the contrary, everything is intuitive.
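As a concrete illustration (an idealized textbook version of the numbers, not a description of the actual silver data): for a beam prepared with spin along ##+x## and the ##z##-component measured, the theoretical q-expectations are
$$\langle S_z\rangle = 0, \qquad \sigma(S_z)=\sqrt{\langle S_z^2\rangle-\langle S_z\rangle^2}=\frac{\hbar}{2},$$
and the two spots correspond to the values ##\pm\hbar/2##, so the sample mean lies midway between the spots and the sample standard deviation equals half the spot separation, exactly the kind of reproducible numbers referred to above.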
 
  • Informative
Likes kith
  • #251
As I said, I give up. I don't understand your definition of what's observable and how it's measured within your reinterpretation of quantum theory. I cannot understand the expression "standard deviation" if I'm not allowed to understand it in its usual statistical meaning. It's also clear that all physics is about reproducible observations and measurements. This is a prerequisite particularly in the standard statistical interpretation.
 
  • #252
vanhees71 said:
I cannot understand the expression "standard deviation", if I'm not allowed to understand it in the usual statistical meaning.
It has the usual statistical meaning, since, like all statistics, it is applied to measurement results.

I don't discard statistics; I just remove it from the foundations. Just as there are many classical random phenomena but statistics has no place in the foundations of classical mechanics, so there are many quantum random phenomena but statistics should have no place in the foundations of quantum mechanics.
 
  • #253
But then there is no physical meaning in the formalism anymore, and I at least fail to understand where you introduce this physical meaning again.

There is no statistics in the foundations of classical physics, because classical physics is formulated as a deterministic theory. In the standard interpretation, the resolution of the discrepancies between the classical description and observations of "quantum phenomena" is that one postulates generic randomness in the description of Nature. All attempts for more than 100 years to get back to a deterministic description have so far failed, and for sure you cannot achieve this by simply declaring that there is no statistical foundation, without any operational substitute for it. As you've just admitted, when it comes to phenomenology and thus the operational definition of observables in relation to your formalism, you have to introduce the standard statistical meaning again. So I don't understand why one should not clearly state the statistical meaning of the formalism from the very beginning.
 
  • Like
Likes physicsworks
  • #254
vanhees71 said:
There is no statistics in the foundations of classical physics, because classical physics is formulated as a deterministic theory.
There is no statistics in the foundations of quantum physics, because quantum physics is formulated as a deterministic theory, according to the thermal interpretation.
vanhees71 said:
All attempts for more than 100 years to get back to a deterministic description have so far failed
But you discount my successful attempt without trying to understand it from that perspective. You only try to understand it from your statistical perspective.

vanhees71 said:
As you've just admitted, when it comes to phenomenology and thus the operational definition of observables in relation to your formalism, you have to introduce the standard statistical meaning again.
I introduced it only in those cases (case 2.) where there are enough and random enough observations to apply standard statistical methods. In such cases you also need it in classical mechanics, and as in the classical case it is introduced a posteriori and not in the foundations.

Macroscopic quantities are the only things that are directly measured, hence the only things that need an operational meaning. For macroscopic quantities you don't need statistics to give the quantum expectations an operational meaning. Microscopic quantities are measured only indirectly (i.e., whatever we know about them is inferred from a large number of macroscopic observations), hence can be assigned an operational meaning in a second step, after having introduced statistics in the same way as in classical physics.

vanhees71 said:
So I don't understand why one should not clearly state the statistical meaning of the formalism from the very beginning.
Since you understand why one should not clearly state the statistical meaning of classical mechanics from the very beginning, you can proceed by analogy. The thermal interpretation works in exactly the same way.
 
  • #255
atyy said:
As Bohr said: "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we say about Nature."
Yeah. So physics must be seen as a part of linguistics. More precisely, of socio-linguistics.
 
  • #256
WernerQH said:
Yeah. So physics must be seen as a part of linguistics. More precisely, of socio-linguistics.
If it proves that the world isn't fundamentally deterministic and causal, physics could even be seen as part of psychiatry or psychology.
 
  • Like
Likes AlexCaledin
  • #257
These arguments will go in circles forever, obviously because something is missing from the picture. If I showed a cell phone to an aborigine, he would say that the voice of someone on another continent emanating from the phone is emergent, because he'd not be able to identify all the parts and processes going on in the phone. Much like we can't interpret reality based on the limited knowledge that is currently available. It could well be that there are processes, fields, etc. that we haven't detected yet that would let us interpret comprehensively how reality works.
Based on what we currently know, most of what we see around us would be "emergent" (matter, space, time, causality, determinism, even consciousness). But in the end, we could be the aborigine standing next to a cell phone, unaware of the existence of EM waves.
 
  • Like
Likes er404
  • #258
EPR said:
But in the end, we could be the aborigine standing next to a cell phone, unaware of the existence of EM waves.
... unaware of the existence of advanced EM waves.
 
  • #259
vanhees71 said:
The problem is also that he is contradicting himself all the time. Just in his last posting all of a sudden he admitted that he needs probabilities to define his q-expectations. All the time he denied that this standard definition is abandoned in this thermal interpretation.
Well, I should say something to this, but what should I say? And I should not wait forever, otherwise it gets even more awkward.

A. Neumaier is not contradicting himself, but there is a strange paradox: Because the thermal interpretation avoids adding unnecessary cruft and clarifies which concepts are possible encodings (wavefunctions, density matrices, ...) and which concepts are more fundamental than mere encodings (q-expectations and q-correlations, their space-time dependence, ...), it is inherently suited to enabling an instrumentalist like me to make simpler computations and better-justified approximations. So it would be ideally suited for the “let me calculate and explain” approach I liked so much in the writings of Roland Omnès and Robert B. Griffiths about consistent histories. But instead, A. Neumaier sometimes skips even simple calculations (like describing the quantum state in the Stern-Gerlach experiment immediately before the detection interaction) that would seem helpful (from my perspective) for making his explanations easier to grasp. I admit that he gives helpful references to existing papers doing such calculations, but of course those papers don't use the thermal interpretation to simplify their calculations (or justify their approximations).

And in cases where he doesn't try to properly understand what you wrote and instead gives a response he has given in similar form a thousand times before, he indeed risks accidentally contradicting himself, and contributing to the impression of going round in circles.

vanhees71 said:
The arguments have been going in circles for years, and it seems to be impossible to communicate even the problem with abandoning probabilities from the interpretation.
Well, if your goal is to disprove the thermal interpretation, then I don't get what you expect A. Neumaier to do. I wouldn't even say that he tries to abandon probabilities in his interpretation. He just aims for an ignorance interpretation of probability (like in de Broglie-Bohm), and tries to avoid one circularity that could arise in a (virtual) frequentist interpretation of probability (with respect to uncertainties). Let me quote from section "4. A view from the past" from an article about earthquake prediction which I have reread (yesterday and) today:
We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom,’ and we will call it A for short. ...
Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± ϵ for large enough n (the largeness depending on ϵ)’. It can only say ‘the proportion will lie between p ± ϵ with at least such and such probability (depending on ϵ and n0) whenever n > n0’. The vicious circle is apparent.
 
  • Like
Likes timmdeeg
  • #260
So far, for me the new interpretation lacks precisely that: an interpretation of the math. There seems to be no physical, operational interpretation of the expectation values and correlations anymore, because the standard probability interpretation is explicitly negated and not substituted by anything new.

I don't want to disprove the thermal interpretation. I just want to understand it. For me it is not a physical theory as long as it's not clearly stated which meaning the formalism has in connection with observations and measurements in the lab.

What is for sure wrong is the assumption that what a measurement device measures is always a q-expectation value.
 
  • Like
Likes gentzen
  • #261
vanhees71 said:
What is for sure wrong is the assumption that what a measurement device measures is always a q-expectation value.
The theory talks about q-expectation values, so what needs to be compared with available (or future) observations are "things" that can be derived from those (space and time dependent) q-expectation values. Those "things" are functions of the q-expectation values, where "function" can include averaging over space and time, to take the limited resolution of measurement devices into account. It could also include nonlinear functions of many different (averaged) q-expectation values. So far, so good.
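A minimal numerical sketch of what "averaging over space to take the limited resolution of a measurement device into account" could look like (the field profile, the Gaussian response, and the grid are invented for illustration; nothing here is taken from the thermal interpretation papers):
```python
import numpy as np

# Hypothetical q-expectation <A(x)> on a 1D grid (illustrative profile only)
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
q_expectation = np.exp(-(x - 1.0)**2) + 0.5 * np.exp(-(x + 2.0)**2)

# Assumed instrument response: Gaussian of width sigma, normalized to unit area
sigma = 0.8
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum() * dx

# The device "reads" a smeared functional of the q-expectation,
# i.e. a convolution of <A(x)> with the response function
reading = np.convolve(q_expectation, kernel, mode="same") * dx

print("peak of <A(x)>        :", q_expectation.max())
print("peak of smeared signal:", reading.max())  # lower and broader, as expected
```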

Where it might become objectionable is when A. Neumaier wants to compute statistics of the available observations before the comparison, in case those observations are mostly random like individual silver spots on the screen for a Stern-Gerlach experiment. His argument is that the individual spot is not reproducible, only the statistics of those spots is. And at this point you object and say that you no longer see a difference to the minimal statistical interpretation.

The argument that he would accept something reproducible like a temperature, an electric current, or other macroscopic variables without requiring statistics seems unconvincing, because after all a silver spot is also a macroscopic observable, and repeated measurements of properties of an individual silver spot would probably be reproducible. But that doesn't count, because ...

I won't try to convince you. You already stated that you find that whole business confusing and unsatisfactory. Also, I should not try to talk for A. Neumaier, because that would only propagate my own misunderstandings. And if I talk for myself, detailed properties of the q-expectations interest me more than whether measuring silver spots is reproducible or not. What interests me for example is how much gauge-freedom is still left in the q-expectations, whether taking functions of the q-expectations is sufficient for removing all remaining gauge-freedom, how specific q-correlations can be observed, whether certain q-correlations are similar to evanescent modes in being not really directly observable, and stuff like that. And I am interested in interpretations of probabilities, and resolution of the corresponding paradoxes and circularity issues. And I am interested in randomness, because there is no such thing as perfect randomness (or objective randomness), at least that is my guess.
(Sorry for the long reply, and thanks for answering me. I should not overstretch your friendliness too much by going on and on and on in circles.)
 
  • #262
gentzen said:
Where it might become objectionable is when A. Neumaier wants to compute statistics of the available observations before the comparison, in case those observations are mostly random like individual silver spots on the screen for a Stern-Gerlach experiment. His argument is that the individual spot is not reproducible, only the statistics of those spots is. And at this point you object and say that you no longer see a difference to the minimal statistical interpretation.
If I were allowed to interpret the expectation values in the usual probabilistic way, there'd be no problem with that, because you can calculate all moments, and this uniquely reproduces the probability distribution for finding a specific value when measuring the observable, which is all I can know about this measurement before doing it, given the state the system is prepared in.
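In the simplest case this is explicit (a textbook fact, restated here only as an illustration): for an observable ##\hat A## with the two eigenvalues ##\pm 1## (e.g. a Pauli matrix) one has ##\hat A^2=\hat 1##, so all moments reduce to ##\langle \hat A^{2k}\rangle = 1## and ##\langle \hat A^{2k+1}\rangle=\langle\hat A\rangle##, and the probabilities follow as
$$p_\pm = \tfrac12\left(1 \pm \langle \hat A\rangle\right),$$
i.e. the single q-expectation already fixes the whole distribution.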
gentzen said:
The argument that he would accept something reproducible like a temperature, an electric current, or other macroscopic variables without requiring statistics seems unconvincing, because after all a silver spot is also a macroscopic observable, and repeated measurements of properties of an individual silver spot would probably be reproducible. But that doesn't count, because ...
But macroscopic observables also show quantum fluctuations in principle, which are however almost always far too small to be significant or even detectable at the accuracy with which such quantities are measured. There are of course also exceptions. E.g., besides their great success in measuring gravitational waves from astronomical sources, gravitational-wave detectors have reached a precision where quantum fluctuations of macroscopic objects (the quite heavy mirrors of the Michelson interferometer) can be observed.
gentzen said:
I won't try to convince you. You already stated that you find that whole business confusing and unsatisfactory. Also, I should not try to talk for A. Neumaier, because that would only propagate my own misunderstandings. And if I talk for myself, detailed properties of the q-expectations interest me more than whether measuring silver spots is reproducible or not. What interests me for example is how much gauge-freedom is still left in the q-expectations, whether taking functions of the q-expectations is sufficient for removing all remaining gauge-freedom, how specific q-correlations can be observed, whether certain q-correlations are similar to evanescent modes in being not really directly observable, and stuff like that. And I am interested in interpretations of probabilities, and resolution of the corresponding paradoxes and circularity issues. And I am interested in randomness, because there is no such thing as perfect randomness (or objective randomness), at least that is my guess.
(Sorry for the long reply, and thanks for answering me. I should not overstretch your friendliness too much by going on and on and on in circles.)
I don't know what you mean by "gauge freedom". I also don't see circularity issues with probabilities in the standard minimal interpretation of QT. It's just the basic assumption that the meaning of the quantum state, described by the statistical operator, is probabilistic and only probabilistic, as described by the postulates of QT and particularly Born's rule (or the corresponding extension in the POVM formalism; POVMs seem to be very important in the thermal interpretation as defining irreducible postulates, but as long as I'm allowed to use the standard probabilistic interpretation of the state that's not a problem, just an extension to the description of non-ideal von Neumann measurements).

Whether or not there is objective randomness is of course very challenging, even if you are allowed to use the clear standard interpretation of QT. According to what we know today, I'd say it's pretty certain that there is objective randomness in Nature, because, given the violation of Bell's inequality and the confirmation of standard local QED in all these Bell tests with photons, the assumption of deterministic hidden variables responsible for the randomness of the observables is ruled out within our contemporary experience with all the successful relativistic descriptions of Nature, which are all local; and there is no satisfactory non-local reinterpretation analogous to the Bohmian mechanics of nonrelativistic quantum mechanics. Of course we don't have any hard "proof" (proof in the sense of natural science, of course, not in the sense of mathematics) for that assumption, because maybe one day some clever physicist finds such a non-local deterministic description compatible with the causality structure of relativistic spacetime. From what we know today, however, there is not the slightest necessity for such a theory, because there is not a single observation hinting at something like this.
 
  • #263
vanhees71 said:
I don't know what you mean by "gauge freedom".
Well, I mean things like the global phase of a wavefunction, or the reference zero energy for a potential. (However, the real gauge freedom is actually the local phase of the wavefunction, so I am not sure what getting rid of the global phase will change.) Or the freedom of a vector potential compared to the electromagnetic fields themselves. If q-expectations are used instead of wavefunctions, then the global phase is no longer there. But maybe other similar degrees of freedom are still there.
vanhees71 said:
I also don't see circularity issues with probabilities in the standard minimal interpretation of QT.
The circularity is not related to QT or the standard minimal interpretation. It just means that if you try to define probability via frequencies, then your definition for what it means in practice for finitely many "measurements" might implicitly already use probabilities.

vanhees71 said:
but as long as I'm allowed to use the standard probabilistic interpretation of the state
The interpretation of the state is indeed different in the thermal interpretation, so it would no longer be the thermal interpretation if you use the standard probabilistic interpretation of the state.
It would be easiest for me to explain this by contrasting it to the corresponding interpretation of the state in QBism, and by explaining why I prefer the thermal interpretation of the state.

vanhees71 said:
I'd say it's pretty certain that there is objective randomness in Nature, because
I agree that there is randomness in nature. But it doesn't need to be perfect, it just needs to be good enough to prevent exploiting the non-local randomness observed in Bell-type experiments for faster-than-light signaling / communication. So by objective randomness, I mean a mathematically perfect randomness, and when I say that I believe that there is no such thing as perfect randomness (or objective randomness), I mean that it is not necessary to postulate its existence for making sense of QT and Bell-type experiments.
 
  • #264
gentzen said:
Well, I mean things like the global phase of a wavefunction, or the reference zero energy for a potential. (However, the real gauge freedom is actually the local phase of the wavefunction, so I am not sure what getting rid of the global phase will change.) Or the freedom of a vector potential compared to the electromagnetic fields themselves. If q-expectations are used instead of wavefunctions, then the global phase is no longer there. But maybe other similar degrees of freedom are still there.
These quibbles are resolved by defining statistical operators as the representatives of states. Then the "phase gauge freedom" goes away in the sense that you use "gauge invariant" descriptions. This has nothing to do with interpretation but is a well-understood part of the formalism, and it's very important to remember that it is not "wave functions" that represent (pure) states but the corresponding statistical operators or, equivalently, unit rays in Hilbert space. Without this you couldn't do non-relativistic QM, by the way, because only central extensions of the Galilei group lead to physically meaningful dynamics (Wigner, Inönü).
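Explicitly (a one-line check, just to make the first point concrete): replacing ##|\psi\rangle## by ##|\psi'\rangle=e^{\mathrm{i}\alpha}|\psi\rangle## gives
$$\hat\rho' = |\psi'\rangle\langle\psi'| = e^{\mathrm{i}\alpha}|\psi\rangle\langle\psi|e^{-\mathrm{i}\alpha} = \hat\rho,$$
so every q-expectation ##\mathrm{Tr}(\hat\rho\hat A)## is untouched by the global phase.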
gentzen said:
The circularity is not related to QT or the standard minimal interpretation. It just means that if you try to define probability via frequencies, then your definition for what it means in practice for finitely many "measurements" might implicitly already use probabilities.
Exactly. That's why I don't understand why in the thermal interpretation I have to abandon the probabilistic interpretation without any substitute for it to connect the formalism to the "lab".
gentzen said:
The interpretation of the state is indeed different in the thermal interpretation, so it would no longer be the thermal interpretation if you use the standard probabilistic interpretation of the state.
It would be easiest for me to explain this by contrasting it to the corresponding interpretation of the state in QBism, and by explaining why I prefer the thermal interpretation of the state.
The problem is that nobody has explained to me how to relate the q-expectation values to experiment. I don't care whether you interpret probabilities in the frequentist or the Bayesian way. The QBists couldn't explain to me how their interpretation relates to real-world observations either.
gentzen said:
I agree that there is randomness in nature. But it doesn't need to be perfect, it just needs to be good enough to prevent exploiting the non-local randomness observed in Bell-type experiments for faster-than-light signaling / communication. So by objective randomness, I mean a mathematically perfect randomness, and when I say that I believe that there is no such thing as perfect randomness (or objective randomness), I mean that it is not necessary to postulate its existence for making sense of QT and Bell-type experiments.
I think the only way to realize what you call "perfect" or "objective" randomness is provided by QT measurements, since to the best of our knowledge these are objectively random events. The perfect unpolarized single-photon source is obtained by producing the Bell singlet state by parametric downconversion. Then the single-photon polarizations are with certainty maximally uncertain and the single photons are perfectly unpolarized.
 
  • #265
vanhees71 said:
These quibbles are resolved by defining statistical operators as the representatives of states. Then the "phase gauge freedom" goes away in the sense that you use "gauge invariant" descriptions. This has nothing to do with interpretation but is a well-understood part of the formalism,
My "quibbles" started with the "question" whether using statistical operators instead of wave functions will make all "gauge freedom" go away. Your "answer" that the "phase gauge freedom" goes away is misleading, because for example the reference zero energy for a potential doesn't go away and remains important.
It may be "a well-understood part of the formalism" for you, and I agree that it obviously should not hold any deep secrets. But I still don't fully understand it, not even in the simpler optics context. Countless times I computed time averaged Poynting vectors, when there were differing opinions on whether some normalization or some computation result or effect were "correct" or a "bug". The actual normalizations or effects were often much simpler than those Poynting vector computations, but I don't know whether (or how) I could have avoided them.
Something responsible for this type of confusion and hard to resolve debates does have connections to interpretation in my book.

vanhees71 said:
and it's very important to remember that it is not "wave functions" that represent (pure) states but the corresponding statistical operators or, equivalently, unit rays in Hilbert space. Without this you couldn't do non-relativistic QM, by the way, because only central extensions of the Galilei group lead to physically meaningful dynamics (Wigner, Inönü).
I have now read On the Contraction of Groups and Their Representations by E. Inonu and E. P. Wigner (1953). It reminded me of something I had read previously, namely Missed opportunities by Freeman J. Dyson (1972). So if I understood it correctly, some representations of the Galilei group arise as contractions of representations of the Lorentz group, and only those representations lead to physically meaningful dynamics. And the structure of the contracted part of the representation is that of a central extension.
 
  • Like
Likes vanhees71 and dextercioby
  • #266
gentzen said:
if I understood it correctly, some representations of the Galilei group arise as contractions of representations of the Lorentz group, and only those representations lead to physically meaningful dynamics. And the structure of the contracted part of the representation is that of a central extension.
... of projective representations of the inhomogeneous Lorentz group = Poincaré group
 
  • Like
Likes vanhees71
  • #267
gentzen said:
My "quibbles" started with the "question" whether using statistical operators instead of wave functions will make all "gauge freedom" go away. Your "answer" that the "phase gauge freedom" goes away is misleading, because for example the reference zero energy for a potential doesn't go away and remains important.
This I don't understand. Where is the absolute reference of the energy observable, in your opinion? The physics doesn't change by using a Hamiltonian
$$\hat{H}'=\hat{H}+E_0 \hat{1}$$
instead of ##\hat{H}##. In the Schrödinger picture the time-evolution operators are (setting ##\hbar=1##)
$$\hat{U}=\exp(-\mathrm{i} \hat{H} t)$$
and
$$\hat{U}'=\exp(-\mathrm{i} \hat{H}' t) = \exp(-\mathrm{i} E_0 t) \exp(-\mathrm{i} \hat{H} t).$$
The time evolution of the state is
$$\hat{\rho}(t)=\hat{U}(t) \hat{\rho}(0) \hat{U}^{\dagger}(t)=\hat{U}'(t) \hat{\rho}(0) \hat{U}^{\prime \dagger}(t).$$
So there's no change in the dynamics of the state (the state ket of a pure state picks up the phase factor ##\exp(-\mathrm{i} E_0 t)##, but the state proper is ##|\psi(t) \rangle \langle \psi(t)|##, which is again independent of ##E_0##).
 
  • #268
vanhees71 said:
This I don't understand. Where is the absolute reference of the energy observable, in your opinion? The physics doesn't change by using a Hamiltonian ...
Of course the physics doesn't change; that is exactly why it is called "gauge freedom". But if I simulate electron-matter interaction in the context of scanning electron microscopy, then we talk about the energy of the electrons. And the zero-energy reference changes: it is different in vacuum than inside a material. Even worse, for interactions with inner-shell electrons it is different again during the interaction. The trouble is that the details of which reference energy is "correct" can be tricky. On the one hand, it is just a convention. On the other hand, it is often important to determine the correct kinetic energy of the electrons for the concrete interactions. And then you get into "dangerous" discussions where you both risk being the one who is wrong, but also risk being the one who would have been correct, but failed to convince the others.

But when I say that I don't fully understand it, I mean something simpler. Concrete computations are done with a concrete zero-reference energy and other concrete gauge fixings. And of course the potential is reported directly, and it doesn't seem to cause any problems. The context is enough to clarify its meaning, even its absolute value. But will this also be the case for other gauge-dependent quantities, or is the potential an exception?
 
  • #269
Do you have a concrete example where the choice of the absolute reference point of energy leads to problems? I still don't understand what you mean.

Of course, what's observable are only gauge-invariant properties, and you have to carefully define them and ensure that you don't violate gauge invariance (in the stricter sense of choosing a gauge for gauge fields such as the electromagnetic field in atomic physics).

The em. potentials are not observables. They already lack the very basic microcausality property. What's observable are gauge-independent quantities like the energy-momentum tensor of the em. field, which fulfills the microcausality principle.
 
  • #270
vanhees71 said:
Do you have a concrete example where the choice of the absolute reference point of energy leads to problems? I still don't understand what you mean.
Let me be clear that the sense in which "the absolute reference point of energy leads to problems" is of the type "And then you get into "dangerous" discussions where you both risk being the one who is wrong, but also risk being the one who would have been correct, but failed to convince the others."

I am not sure how helpful my concrete examples will be for helping you understand what I mean. Some concrete examples were "ionization energies for inner shells", "reference energy during interactions with inner-shell electrons", and "quantum surface transmission". But if I tried to explain them, first of all this would take quite some background, and after that I might risk just having those same "dangerous" discussions again, this time with you.

Anyway, let me try to explain the issue with the "ionization energies for inner shells". The ionization energies for the outer shells of the material model are measured or calibrated, and that also includes the work function. But the ionization energies (and ionization cross sections) for inner shells are taken from precomputed databases for free atoms. If you assumed that their zero reference energy was the vacuum level, then the work function would become surprisingly important. However, surface contamination can easily change the work function completely. Additionally, the potential from the free atoms "in the long range" is shielded inside a material by the other electrons, which further calls into question whether taking the vacuum level as reference is a good idea.

vanhees71 said:
Of course, what's observable are only gauge-invariant properties
Maybe, but I don't see why the q-observables by themselves will necessarily be gauge-invariant (or that using statistical operators will help me in this respect). I mean, even the Hamiltonian you wrote down to demonstrate to me that the physics doesn't change is a q-observable. It is the energy, but of course the actual value of the energy depends on the zero reference.
 
  • #271
I'm not familiar with measurements of the ionization energies. Of course, you have to define the choice of the "zero point" of the energies you measure, since what you measure are always energy differences.

What is a "q-observable"?

It's of great importance to understand that the Hamiltonian in general is not gauge invariant and not an observable. That's so already in classical physics when using the Hamilton formulation of the action principle for a particle in an external electromagnetic field. The Hamiltonian contains the electromagnetic potentials and thus is not gauge invariant. For a nice explanation of the issue in the quantum context, see

Donald H. Kobe and Arthur L. Smirl, Gauge invariant formulation of the interaction of electromagnetic radiation and matter, Am. Jour. Phys. 46, 624 (1978)
https://doi.org/10.1119/1.11264
 
  • Like
Likes gentzen
  • #272
vanhees71 said:
It's of great importance to understand that the Hamiltonian in general is not gauge invariant
Very good, so my impression is that you understood my problem, and I understood your position. Whether or not I used words in a way that seems inappropriate to you is not important ("the Hamiltonian in general is ... not an observable"), because my focus is often less on individual words and more on the concrete stuff.

vanhees71 said:
What is a "q-observable"?
This is defined at the end of subsection 2.2 Properties in Foundations of quantum physics II. The thermal interpretation as
A subsystem of a system is specified by a choice declaring some of the quantities (q-observables) of the system to be the distinguished quantities of the subsystem. This includes a choice for the Hamiltonian of the subsystem.
If you are not familiar with that paper, looking at equations (6), (7), and (8) in section "2.1 The Ehrenfest picture of quantum mechanics" could be helpful for understanding in which sense I feel that q-expectations share many of the good properties of statistical operators. I once wrote:
The formulation of QM in section "2.1 The Ehrenfest picture of quantum mechanics" via (6), (7), and (8) shows another interesting advantage of using the collection of q-expectations as the state instead of the density operator. That presentation unifies the Schrödinger, Heisenberg, and Dirac pictures, but the density operator itself is different in each picture. That presentation even unifies classical and quantum mechanics.
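(For readers without the paper at hand: the type of equation meant is, in my own paraphrase rather than a quotation of (6)-(8), the Ehrenfest-style evolution law
$$\frac{d}{dt}\langle A\rangle_t = \Big\langle \frac{\mathrm{i}}{\hbar}[\hat H,\hat A]\Big\rangle_t ,$$
with the classical case obtained by replacing the commutator term by the Poisson bracket ##\{A,H\}##; since only q-expectations appear, the same law can be read in the Schrödinger, Heisenberg, or Dirac picture.)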

However, that unification may be treacherous. It doesn't just include classical mechanics, but also classical mechanics with epistemic uncertainty about the state of the system. But in classical mechanics, there is a clear way to distinguish a state with epistemic uncertainty from a state without. In quantum mechanics, people tried resorting to pure states to achieve this distinction. But the thermal interpretation explicitly denies pure states this privilege, and explains well why it is important to deny pure states any special status.

vanhees71 said:
For a nice explanation of the issue in the quantum context, see

Donald H. Kobe and Arthur L. Smirl, Gauge invariant formulation ...
Thanks for the reference. I will have a look. Maybe it will indeed improve my understanding of "gauge freedom" and its impact on results from concrete computations.
 
  • #273
I don't understand A. Neumaier's interpretation, because it takes away the only interpretation which makes contact with real-world experiments (the probabilistic interpretation of the state, described by the statistical operator) and doesn't provide a new reinterpretation of the Born rule; it just calls the usual expectation value, ##\langle O \rangle =\mathrm{Tr}(\hat{\rho} \hat{O})## (expectation value in the usual meaning defined by probability theory), a "q-expectation value". If there is no probabilistic interpretation allowed, it's not clear how to relate this mathematical formal object to real-world objects dealt with in experiments.

All this has nothing to do with the concrete question of gauge invariance. I think the cited paper by D. H. Kobe (and the many great references therein, particularly the ones by Yang) is bang on your problem.

It's a great exercise to think about the motion of a charged particle in a homogeneous magnetic field, which leads to the famous Landau levels, and to formulate it in a gauge-invariant way (the energy eigenvalue problem can be completely solved for the gauge-invariant observable probabilities).
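For reference, the gauge-invariant end result of that exercise is the textbook spectrum (standard result, quoted from memory, SI units): for a particle of charge ##q## and mass ##m## in a homogeneous field ##B## along ##z##,
$$E_{n,p_z} = \hbar\omega_c\left(n+\tfrac12\right) + \frac{p_z^2}{2m}, \qquad \omega_c = \frac{|q|B}{m}, \quad n=0,1,2,\dots,$$
the same in the Landau gauge ##\vec A=(0,Bx,0)## as in the symmetric gauge ##\vec A=\tfrac12 \vec B\times\vec r##, which is what makes it such an instructive check of gauge invariance.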

Another very good source is the textbook by Cohen-Tannoudji, Quantum Mechanics, Vol. I, Complement H III.
 
  • #274
vanhees71 said:
So what's literally measured is the position of the Ag atoms when hitting this screen.
vanhees71 said:
If there is no probabilistic interpretation allowed, it's not clear how to relate this mathematical formal object to real-world objects dealt with in experiments.
The thermal interpretation gives a deterministic interpretation for q-expectations of macroscopic quantities, which takes account of all measured currents, pointer readings, dots on a screen, counters, etc. This covers everything that is literally measured, in the sense of the first quote. This relates the quantum formalism to a large class of real-world objects dealt with in experiments.

In addition, there is a derivation of the probabilistic interpretation for q-expectations of microscopic quantities (namely as statistical expectation values), so this does not have to be postulated (but neither is it forbidden).

Thus everything of experimental interest is covered. I don't understand why you object.
 
  • #275
How then do you explain the observed fact that the observed position of the Ag atom hitting the screen is random (provided the initial Ag atoms are not prepared in eigenstates of the spin component under investigation)? The single atom doesn't land around the expectation value (in the usual probabilistic sense, because I don't yet understand the instrumental meaning of your q-expectation values) with some uncertainty (again in the usual probabilistic sense of a standard deviation) but around two spots. The demonstration of this "direction quantization" was the great achievement of the experiment!

I don't object, I only need an instrumental understanding. If you now say you can derive the usual probabilistic interpretation and accept it as the instrumental understanding of the formalism, I don't know why you all the time negate the validity of the standard probabilistic view. Understood in this way, your thermal interpretation is just using another set of postulates to get back the same quantum theory we have had since 1926. If this is now an understanding acceptable for you, the only point I have to understand then is why you insist on the collapse of the state as something outside the formalism but necessary for its interpretation.
 
  • #276
vanhees71 said:
How then do you explain the observed fact that the observed position of the Ag atom hitting the screen is random (provided the initial Ag atoms are not prepared in eigenstates of the spin component under investigation)? The single atom doesn't land around the expectation value (in the usual probabilistic sense
This quantum fact is explained in the same way as the observed classical fact that the values observed when casting a die are integers although the expectation values are not. But the expectation values of the powers (the moments) allow one to reconstruct the complete probability distribution, and this reveals that the individual values cast are just 1,...,6.
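As a toy illustration of the moment argument (the loaded-die probabilities below are invented for the example): given the support {1,...,6} and the first six moments, the probabilities follow from a linear solve.
```python
import numpy as np

# Support of the die and some (invented) loaded-die probabilities
faces = np.arange(1, 7)
p_true = np.array([0.10, 0.15, 0.20, 0.25, 0.15, 0.15])

# Moments m_n = E[X^n] for n = 0..5, computed from the distribution
moments = np.array([np.sum(faces**n * p_true) for n in range(6)])

# Reconstruction: the moments are V p with V[n, k] = faces[k]**n,
# so knowing the support and the moments determines p uniquely
V = np.vander(faces, N=6, increasing=True).T  # rows are powers 0..5
p_reconstructed = np.linalg.solve(V, moments)

print(np.round(p_reconstructed, 3))  # recovers [0.1, 0.15, 0.2, 0.25, 0.15, 0.15]
```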

vanhees71 said:
If you now say you can derive the usual probabilistic interpretation and accept it as the instrumental understanding of the formalism, I don't know why you all the time negate the validity of the standard probabilistic view.

I was never negating the validity of the standard probabilistic view. I just removed it from the foundations! I accept the usual probabilistic interpretation not as the instrumental understanding of the formalism in general but only as the instrumental understanding in cases where one actually measures a whole probability distribution!
vanhees71 said:
If this is now an understanding acceptable for you,
Not yet, because you want to have it as a foundation, whereas I want to have it as a consequence of more natural (and more complete) foundations.
vanhees71 said:
the only point I have to understand then is why you insist on the collapse of the state as something outside the formalism but necessary for its interpretation.
The collapse is not an assumption in the thermal interpretation; it is just a frequently (but, as you correctly remark, not always) observed fact. I insist on its presence only because the collapse cannot be derived from your minimal statistical interpretation but is needed in practice, and hence shows that the minimal statistical interpretation is incomplete, hence too minimal.
 
  • #277
I don't care what you take as the postulates, as long as you end up with a connection between the formalism and the observations that successfully describes the observations. Of course, if you have all moments of a distribution, you have the distribution. The important point of the instrumental meaning is just that it's a probability distribution. So it seems to be settled that I can read your q-expectation values simply in terms of the standard interpretation of probabilities, defined in the usual way as QT has done since 1926.

Of course the collapse cannot be derived, because it's by assumption outside the formal description. It has no foundation whatsoever. In the case of relativistic QFT it even contradicts its very foundations, which rest on locality/microcausality.

I don't know why you insist on its necessity, because I don't see where you need it. I also don't see why the minimal statistical interpretation should be incomplete and your interpretation complete. I thought in the end it's simply equivalent (as soon as I'm allowed to give your mathematical operations, particularly the Born rule for calculating your q-expectation values, the standard probabilistic meaning as expectation values).
 
  • #278
vanhees71 said:
The important point of the instrumental meaning is just that it's a probability distribution.
Only when you have many copies of the same microscopic system.

But for a single ion in an ion trap, probabilities are instrumentally meaningless since to measure probabilities you need many copies!
 
  • #279
vanhees71 said:
I also don't see why the minimal statistical interpretation should be incomplete and your interpretation complete.
vanhees71 said:
Of course the collapse cannot be derived, because it's by assumption outside the formal description.
Well, if the minimal statistical interpretation were complete, the formalism together with this interpretation should allow the derivation of the collapse, or, in greater generality, should allow one to predict from the microscopic description of a filter how individual input states are transformed into individual output states. This is the measurement problem.
 
  • #280
Again: The achievement of the 2012 Nobelists is that they can use one atom/photon. That doesn't mean that the usual meaning of probability concerning the observables on this one atom/photon no longer applies. They just repeat the same measurement with the one individual atom/photon. I can use one and the same die and throw it repeatedly to get the probabilities for the outcomes. I don't need to use different dice (indeed, for macroscopic objects these are strictly speaking two different random experiments, because the dice are never exactly identical, while sufficiently simple "quantum objects" are).

Since the predictions of QT are probabilistic, you have to do that to be able to gain "enough statistics" to compare your probabilistic predictions with the statistics of the measurement outcomes.
 
