Are there signs that any Quantum Interpretation can be proved or disproved?

In summary, according to the experts, the study of decoherence has not made much progress in improving our understanding of the measurement problem.
  • #71
vanhees71 said:
My problem with the approach you call "thermal interpretation" still is that it is not clear what the operational meaning of your expectation values is, because you stress several times that they are not to be given the usual probabilistic meaning; but what, then, is their operational meaning?

In orthodox quantum mechanics, most Hermitian operators for a multiparticle system cannot be prepared (only finitely many can, but there are uncountably many), but you still accept their probabilistic interpretation without an operational recipe.

Why, then, do you require more from the thermal interpretation? Some of the beables have an operational meaning, e.g., smeared q-expectation values of the electromagnetic field, or 2-point functions through linear response theory.
vanhees71 said:
it's never worked out how to construct the POVM for a given real-world (say quantum optical) apparatus like a beam splitter, mirrors, lenses, a photodetector, etc.
This is not true. There is a large literature on how to calibrate POVMs to correspond to actual equipment using quantum tomography.
 
  • #72
A. Neumaier said:
Why, then, do you require more from the thermal interpretation? Some of the beables have an operational meaning, e.g., smeared q-expectation values of the electromagnetic field, or 2-point functions through linear response theory.
My impression is that the kind of correlations you have in mind for your beables are similar to the mutual intensity function used in computational optics to handle partially coherent (quasi-monochromatic) light. Those do have an operational meaning, but it is non-trivial, and it needs to be explained how they can be measured under suitable circumstances. (The CoherentVsIncoherent.pdf attached to my previous comment about incoherent light and instrumentalism also contains definitions of the mutual coherence and mutual intensity functions.)

The reference to linear response theory is "challenging" for me personally. I would now have to read the two referenced papers, one of 13 pages and some relevant sections of another of 102 pages. But from browsing those papers, my suspicion is that their answer will not be good enough to help me really understand what you have in mind.

My impression is that others had similar problems, and tried to clearly explain why they don't get it:
Demystifier said:
I have already discussed the ontology problem of thermal interpretation (TI) of quantum mechanics (QM) several times in the main thread on TI.

For correlation-beables similar to the mutual intensity function, I additionally fear that only those correlations between observables whose (anti-)commutator nearly vanishes will have a comparable operational meaning. My guess is that the plan for the remaining beables is to fall back to:
Callen’s criterion: Operationally, a system is in a given state if its properties are consistently described by the theory for this state.

I am not sure whether this is good enough. I personally would probably prefer to throw out those beables whose only operational meaning is provided by Callen's criterion.
 
  • Like
Likes timmdeeg and Demystifier
  • #73
vanhees71 said:
If you are an instrumentalist you cannot be a Bohmian at the same time, or how are you measuring Bohm's trajectories for a single particle (sic!) in the real-world lab?
I don't measure Bohm's trajectories, just like an ordinary instrumentalist does not measure the wave function. The trajectories, like the wave function, are a tool (an "instrument"). But the trajectories are not merely a computational tool. They are much more a thinking tool, a tool that helps me think about quantum theory intuitively.
 
  • #74
A. Neumaier said:
In orthodox quantum mechanics, most Hermitian operators for a multiparticle system cannot be prepared (only finitely many can, but there are uncountably many), but you still accept their probabilistic interpretation without an operational recipe.

Why, then, do you require more from the thermal interpretation? Some of the beables have an operational meaning, e.g., smeared q-expectation values of the electromagnetic field, or 2-point functions through linear response theory.

This is not true. There is a large literature on how to calibrate POVMs to correspond to actual equipment using quantum tomography.
What do you mean by "the operators can't be prepared"? Of course not, because there are no operators in the lab, but there are observables, and I can measure them in the lab with real-world devices with more or less accuracy. I can also prepare systems in states such that some observable(s) take more or less determined values (what I can't do, of course, is a preparation that would violate the uncertainty relations of the involved observables).

I'm not criticizing the vast literature about POVMs; I surely don't know enough about it. I'm criticizing your approach as a foundational description of what quantum mechanics is. For a physicist it must be founded in phenomenology, i.e., you never say what your expectation values are if they are not defined in the standard way. If you discard the standard definition, which is usually understood and founded in an operational way in phenomena, you have to give an alternative operational definition, which you however don't do. I'm pretty sure that the edifice is mathematically sound and solid, but it doesn't make sense to me as an introduction to quantum theory as a physical theory. This may well be due to my ignorance, though.
 
  • #75
vanhees71 said:
What do you mean by "the operators can't be prepared"?
I was referring to the common practice of calling self-adjoint operators observables.
vanhees71 said:
Of course not, because there are no operators in the lab, but there are observables, and I can measure them in the lab with real-world devices with more or less accuracy.
Some observables can be measured in the lab, and they sometimes correspond to operators, more often only to POVMs.

However, most observables corresponding to operators according to Born's rule cannot be observed in the lab!

vanhees71 said:
I'm criticizing your approach as a foundational description of what quantum mechanics is. For a physicist it must be founded in phenomenology,
Why? It must only reproduce phenomenology, not be founded in it.

When the atomic hypothesis was proposed (or rather revitalized), atoms were conceptual tools, not observable items. They simplified and organized the understanding of chemistry, hence their introduction was good science - though not founded in phenomenology beyond the requirement of reproducing the known phenomenology.

Similarly, energy is basic in the foundations of physics but has no direct phenomenological description. You already need a theory founded on concepts prior to phenomenology to be able to tell how to measure energy differences.

Of course these prior concepts are motivated by phenomenology, but they are not founded in it. Instead they determine how phenomenology is interpreted.

vanhees71 said:
i.e., you never say what your expectation values are if they are not defined in the standard way.
They are numbers associated with operators. This is enough to work with them and to obtain all quantum phenomenology.

Tradition instead never says what the probabilities figuring in Born's rule (for arbitrary self-adjoint operators) are in terms of phenomenology since for most operators these probabilities cannot be measured. Thus there is the same gap that you demand to be absent, only at another place.

vanhees71 said:
If you discard the standard definition, which is usually understood and founded in an operational way in phenomena, you have to give an alternative operational definition, which you however don't do. I'm pretty sure that the edifice is mathematically sound and solid, but it doesn't make sense to me as an introduction to quantum theory as a physical theory.
As @gentzen mentioned in post #72, Callen's criterion provides the necessary and sufficient connection to phenomenology. Once Callen's criterion is satisfied you can do all of physics - which proves that nothing more is needed.

If you require more, you need to justify why this more should be essential for physics to be predictive and explanatory.
 
Last edited:
  • Like
Likes mattt and dextercioby
  • #76
gentzen said:
Callen’s criterion: Operationally, a system is in a given state if its properties are consistently described by the theory for this state.

I am not sure whether this is good enough. I personally would probably prefer to throw out those beables whose only operational meaning is provided by Callen's criterion.
The problem with this is that the foundations must be independent of the state of the art in experimental practice. But Callen's criterion gets stronger with improvements in experiments. Hence what counts as a beable would change with time, which is not good for a foundation.

The criterion for beables in the thermal interpretation is simple and clear - both properties making it eminently suitable for foundations.
 
  • #77
A. Neumaier said:
I was referring to the common practice of calling self-adjoint operators observables.

Some observables can be measured in the lab, and they sometimes correspond to operators, more often only to POVMs.

However, most observables corresponding to operators according to Born's rule cannot be observed in the lab!

Why? It must only reproduce phenomenology, not be founded in it.

When the atomic hypothesis was proposed (or rather revitalized), atoms were conceptual tools, not observable items. They simplified and organized the understanding of chemistry, hence their introduction was good science - though not founded in phenomenology beyond the requirement of reproducing the known phenomenology.
Of course they were founded in phenomenology, and only accepted as a hypothesis by chemists but not by many physicists. I think most physicists were only convinced by Einstein's work on thermodynamical fluctuations, like his famous Brownian-motion paper or the critical-opalescence paper, etc.
A. Neumaier said:
Similarly, energy is basic in the foundations of physics but has no direct phenomenological description. You already need a theory founded on concepts prior to phenomenology to be able to tell how to measure energy differences.

Of course these prior concepts are motivated by phenomenology, but they are not founded in it. Instead they determine how phenomenology is interpreted.

They are numbers associated with operators. This is enough to work with them and to obtain all quantum phenomenology.

Tradition instead never says what the probabilities figuring in Born's rule (for arbitrary self-adjoint operators) are in terms of phenomenology since for most operators these probabilities cannot be measured. Thus there is the same gap that you demand to be absent, only at another place.
The traditional statistical interpretation of the expectation value is very simple. You learn it on day one in the introductory physics lab. You measure a quantity several times on the same system under the same conditions (an "ensemble") and take the average.
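For concreteness, a minimal numerical sketch of that day-one averaging (a made-up two-outcome spin measurement; the probabilities and sample size are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-outcome spin measurement (S_z in units of hbar/2) on a state prepared
# with P(+1) = 0.8 and P(-1) = 0.2; the exact expectation value is 0.6.
p_plus = 0.8
outcomes = rng.choice([+1, -1], size=10_000, p=[p_plus, 1 - p_plus])

# The "ensemble" estimate of the expectation value is simply the sample mean.
print(f"sample mean = {outcomes.mean():.3f}, exact value = {2 * p_plus - 1:.3f}")
```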
A. Neumaier said:
As @gentzen mentioned in post #72, Callen's criterion provides the necessary and sufficient connection to phenomenology. Once Callen's criterion is satisfied you can do all of physics - which proves that nothing more is needed.

If you require more, you need to justify why this more should be essential for physics to be predictive and explanatory.
 
  • #78
vanhees71 said:
Of course they were founded in phenomenology, and only accepted as a hypothesis by chemists but not by many physicists.
Motivated by, and perhaps suggested by, phenomenology, but not founded in it, since atoms were not observable; only their consequences matched experiment. You needed to assume unobservable - nonphenomenological - atoms, and deduce from the theory about them predictions (most often actually retrodictions) that were in agreement with experiments.

Thus the relation between concepts and phenomenology is through Callen's principle, as in my thermal approach to quantum mechanics. Both are equally founded in phenomenology, and through the same argument: Theory predicts correct results, hence is appropriate.

vanhees71 said:
The traditional statistical interpretation of the expectation value is very simple. You learn it on day one in the introductory physics lab. You measure a quantity several times on the same system under the same conditions (an "ensemble") and take the average.
This gives expectation values of a few observables only.

But Born's rule makes claims about observables corresponding to arbitrary self-adjoint operators - most of which are inaccessible to experiment. Thus their phenomenological meaning is completely absent.

Moreover, in QFT one uses (and even you use!) expectation terminology for N-point functions which correspond to non-Hermitian operators, for which Born's rule and the statistical interpretation are inapplicable! The thermal interpretation, in contrast, has no problem with this.
 
  • Like
Likes mattt and dextercioby
  • #79
I don't argue about the mathematics of your interpretation. I only say I don't understand its foundation in the phenomenology. The paper you cited doesn't give a concrete treatment of how to get the POVMs for the examples you quote, nor do you give an operational meaning of the POVMs in general. The orthodox treatment does this by clearly saying what the probabilities for a measurement result with an idealized detector are. Of course, for real-world detectors you need to understand their limitations and describe them accordingly, but that's not part of the general formulation of a theory.

In QFT the N-point functions are not observables but functions you calculate (in some approximation like perturbation theory or resummed perturbation theory etc.) to get the observable quantities like the S-matrix, providing decay rates and cross sections which can be measured.

Also in classical electrodynamics we quite often calculate quantities that do not describe observables, like the scalar and vector potentials of the em field, because that is simpler than calculating the observable fields directly.
 
  • Like
Likes physicsworks, gentzen and WernerQH
  • #80
vanhees71 said:
In QFT the N-point functions are not observables but functions you calculate (in some approximation like perturbation theory or resummed perturbation theory etc.) to get the observable quantities like the S-matrix, providing decay rates and cross sections which can be measured.
One can say the same about all q-expectations in the thermal interpretation. They don't need any further justification - their name is as coincidental as using the expectation terminology for the QFT n-point functions.

That q-expectations are considered to be beables only means that some of them can be accurately observed (namely if they are macroscopic and accessible to humans). This is enough to obtain observable quantities.
vanhees71 said:
The paper you cited doesn't give a concrete treatment of how to get the POVMs for the examples you quote, nor do you give an operational meaning of the POVMs in general.

Although I may have failed to emphasize it in the paper (it is stated only in a footnote - Footnote 8), this is not true:

The proof of Theorem 1.1 on p.8 of my paper Born’s rule and measurement shows how to get a POVM for an arbitrary experimentally realized response system. Prepare enough states with known density matrices (giving the ##\rho_{ij}##) and collect for them enough statistics to determine the probabilities (giving the ##p_k##), then solve the linear system (5) to get the POVM. This is the standard quantum tomography principle.

The operational meaning is given by the theorem itself, which says how to get predictions for the probabilities from an arbitrary POVM, which can be compared with experiment.

No idealization is involved, except for the limited accuracy necessarily inherent in statistical estimation, due to the limitations imposed by the law of large numbers.
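For concreteness, here is a minimal single-qubit sketch of that tomography step (the probe states, the assumed "true" POVM, and the least-squares solve are my own illustrative choices, not taken from the paper):

```python
import numpy as np

# Pauli matrices and an informationally complete set of probe states (density matrices)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(n):  # pure state with Bloch vector n
    return 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)

probes = [rho(n) for n in [(0, 0, 1), (0, 0, -1), (1, 0, 0), (0, 1, 0)]]

# A "true" two-outcome POVM (a noisy z measurement) that we pretend is unknown
eta = 0.9                                   # illustrative detector efficiency
E0_true = eta * np.diag([1.0, 0.0]) + (1 - eta) / 2 * I2
E1_true = I2 - E0_true

# Probabilities p_{ik} = Tr(rho_i E_k); in the lab these would be measured frequencies
P = np.array([[np.trace(r @ E).real for E in (E0_true, E1_true)] for r in probes])

# Solve the linear system Tr(rho_i E_k) = p_{ik} for the matrix elements of each E_k:
# row i of A is vec(rho_i^T), so that A @ vec(E_k) = p_{:,k}.
A = np.array([r.T.reshape(-1) for r in probes])
E_rec = [np.linalg.lstsq(A, P[:, k].astype(complex), rcond=None)[0].reshape(2, 2)
         for k in range(2)]

print(np.allclose(E_rec[0], E0_true), np.allclose(E_rec[1], E1_true))  # True True
```

With real data the ##p_k## are measured relative frequencies rather than exact traces, so the reconstructed POVM is only as accurate as the statistics - exactly the limitation from the law of large numbers mentioned above.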
 
Last edited:
  • #81
Well, if your q-expectations are only calculational tools like the n-point functions in QFT, then there's no physics in this interpretation at all, because then it is nowhere said what is to be compared to real-world observables and phenomenology. That's my problem with this new attempt at an interpretation. In other words, there are no "beables" left in your interpretation (though I hate this philosophical lingo without a clear physical meaning, unfortunately introduced by Bell).

On the other hand, now all of a sudden you admit exactly what I have been saying the whole time: there's a probabilistic meaning in the formalism (of the q-expectations, I think, but that's what you otherwise always deny), and it's tested by preparing systems in a given state, measuring observables in real-world experiments, and analyzing the outcome in a statistical way. Then there's nothing new in your interpretation beyond the usual practice of the scientific community, but then it makes sense as a physical theory.

Do I now understand it right that all you want to do is use POVMs as the probabilistic foundation of QT instead of idealized projective measurements? If this is the case, then you have to sharpen this foundation so that physicists understand what it has to do with their real-world measurements in the lab.

It's also of course not very convincing if you provide an interpretation depending on the technical state of the art of measurement devices. Physical theories are independent of this. Technological progress may make it possible to disprove a theory, and a good theory makes predictions that allow for this possibility.
 
  • #82
A. Neumaier said:
gentzen said:
I personally would probably prefer to throw out those beables whose only operational meaning is provided by Callen's criterion.
The problem with this is that the foundations must be independent of the state of the art in experimental practice. But Callen's criterion gets stronger with improvements in experiments. Hence what counts as a beable would change with time, which is not good for a foundation.
OK, independent of whether others had similar problems, I should try to speak only for myself. My mental images are heavily influenced by computational (often statistical) optics.

A corresponding question in the optics context would be the status of evanescent waves as beables. Already the question of whether a wave is evanescent has no sharp answer: a wave that is evanescent in vacuum can be optical inside a material. (So a photoresist layer in close proximity to a contact mask could still couple in some of the evanescent waves present in the vacuum between the mask and the photoresist.)

But ... the refractive index of a photoresist will typically be around 1.7, sometimes maybe 1.9. OK, there are piezoelectric materials like PZT, whose relative permittivity can range from 300 to 20,000 (depending upon orientation and doping), so the refractive index can range from about 17 to 140. However, this does not help, because strongly evanescent waves would be unable to couple in directly from vacuum. (Your remark about experimental practice could now mean the invention of some wonder material that improves the in-coupling of evanescent modes so that they are at least no longer exponentially suppressed.)

My conclusion from these concrete practical considerations is that evanescent waves don't abruptly lose their status as beables. However, strongly evanescent waves do get exponentially suppressed, there is no realistic way around that, and it might make sense to suppress their status as beables similarly. One further argument for suppressing their status as beables is that the evanescent waves are less a property of the incoming partially coherent light and more a property of the experimental setup, i.e., the contact mask in my example above.

That last point is somewhat related to why "I additionally fear that only those correlations between observables whose (anti-)commutator nearly vanishes will have a comparable operational meaning". In those cases, the q-correlations are much more related to the Hamiltonian than to the state of the system.
 
  • #83
It's a bit off-topic, but I wonder how you can produce an evanescent wave in vacuo. There are of course evanescent waves in waveguides, but that's not vacuum.
 
  • Like
Likes gentzen
  • #84
vanhees71 said:
It's a bit off-topic, but I wonder how you can produce an evanescent wave in vacuo.
Well, it might actually be a valuable clarification. My first thought was to use a diffraction grating whose pitch is so small that the first diffraction orders are already evanescent. If one now places a photoresist directly behind the mask, one might observe the intensity distribution resulting from the interference between the zeroth and first orders in the photoresist. One problem might be that the zeroth order is so dominant that not much can be seen.

A better idea might be to use a glass substrate for the mask, such that the evanescent wave you want to produce is still an optical wave inside the glass. Now use a diffraction grating (on a flat surface of your glass substrate) such that one of the first diffraction orders is an evanescent mode with exactly the opposite wave number of the incident wave. Now you should be able to observe much more pronounced interference patterns in the photoresist.

This is related to the most trivial way to generate an evanescent wave in vacuum: Use a prism and tilt the incident wave sufficiently such that the refracted wave is already evanescent in vacuum.
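A quick back-of-the-envelope check of both ideas (wavelength, pitch, and refractive index are purely illustrative numbers):

```python
import numpy as np

wavelength = 365e-9   # illustrative vacuum wavelength [m]
pitch = 250e-9        # grating pitch [m]
n_glass = 1.5         # refractive index of the substrate

# Grating at normal incidence: the m-th order has transverse wave number m*2*pi/pitch.
# It is evanescent in vacuum when this exceeds 2*pi/wavelength, i.e. m*wavelength/pitch > 1.
m = 1
print("first order evanescent in vacuum:", m * wavelength / pitch > 1)

# Prism: beyond the critical angle the wave transmitted into vacuum is evanescent
# (total internal reflection).
theta_c = np.degrees(np.arcsin(1.0 / n_glass))
print(f"critical angle of the glass: {theta_c:.1f} degrees")
```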
 
  • #85
vanhees71 said:
Well, if your q-expectations are only calculational tools like the n-point functions in QFT, then there's no physics in this interpretation at all, because then it is nowhere said what is to be compared to real-world observables and phenomenology. That's my problem with this new attempt at an interpretation. In other words, there are no "beables" left in your interpretation (though I hate this philosophical lingo without a clear physical meaning, unfortunately introduced by Bell).
Well, beables are calculational tools for making predictions in quantum physics, just as atoms were for chemists in the early days of modern chemistry.
vanhees71 said:
On the other hand, now all of a sudden you admit exactly what I have been saying the whole time: there's a probabilistic meaning in the formalism (of the q-expectations, I think, but that's what you otherwise always deny), and it's tested by preparing systems in a given state, measuring observables in real-world experiments, and analyzing the outcome in a statistical way.
There is a probabilistic meaning in POVMs - they describe the probabilistic part of quantum mechanics (i.e., the part describable by the minimal statistical interpretation). I never had any other place for them.

But the thermal interpretation goes far beyond POVMs, because measurement devices are not made out of POVMs but out of quantum matter. So there must be a microscopic explanation for the ''problem of definite outcomes'' - why detectors produce objectively identifiable (though random) signals in each particular measurement. The minimal statistical interpretation (with a subjective view of the state as representing knowledge) has no explanation for this - it is an irreducible additional input to its view of quantum physics. But the thermal interpretation answers this.
vanhees71 said:
Do I now understand it right that all you want to do is use POVMs as the probabilistic foundation of QT instead of idealized projective measurements?
This is all I want to do with POVMs - because it gets rid of the idealization and at the same time simplifies the exposition of the foundations.

But with the thermal interpretation I want to do more. I want to give a good explanation for the empirical fact that, to obtain reliably the properties of a single iron cube, one does not need to measure an ensemble of many identical iron cubes (as Born's rule would require) and always find essentially the same result. As every engineer knows, a single measurement is reliable enough. The statistical interpretation is silent about the single case, even when it is macroscopic.
vanhees71 said:
If this is the case, then you have to sharpen this foundation so that physicists understand what it has to do with their real-world measurements in the lab.
I described the connection with real-world measurements in the lab through quantum tomography, and explained it again in the present thread. What is not sharp enough in these foundations?
vanhees71 said:
It's also of course not very convincing if you provide an interpretation depending on the technical state of the art of measurement devices. Physical theories are independent of this.
My interpretation is also independent of this, as the paper shows. Only which POVMs are realizable, and with which accuracy, depends on the technical state of the art of measurement devices.
 
  • Like
Likes gentzen and mattt
  • #86
A. Neumaier said:
But the thermal interpretation goes far beyond POVMs, because measurement devices are not made out of POVMs but out of quantum matter. So there must be a microscopic explanation for the ''problem of definite outcomes'' - why detectors produce objectively identifiable (though random) signals in each particular measurement. The minimal statistical interpretation (with a subjective view of the state as representing knowledge) has no explanation for this - it is an irreducible additional input to its view of quantum physics. But the thermal interpretation answers this.
Could you elaborate a bit on "So there must be a microscopic explanation for the ''problem of definite outcomes'' - why detectors produce objectively identifiable (though random) signals in each particular measurement" in plain language?

Is there some kind of interaction between the wavefunction and the detector on the microscopic level such that the detector "feels" the probability of a given outcome and creates it? (On this level atoms and molecules are vibrating and "feel" electromagnetic radiation, so that this question seems to make no sense unless new physics is involved).

In the double-slit experiment the position of a dot on the screen corresponds to its probability. Does, according to the thermal interpretation, the whole screen act as a detector in this case?
 
  • #87
timmdeeg said:
Could you elaborate a bit on "So there must be a microscopic explanation for the ''problem of definite outcomes'' - why detectors produce objectively identifiable (though random) signals in each particular measurement" in plain language?
There must be a microscopic explanation for the ''problem of definite outcomes'' because measurement devices are made out of quantum matter, so they are described by a quantum state. The observed pointer position is a property of the measurement device. According to the statistical interpretation all we can know about the quantum system constituted by the measurement device is encoded in its quantum state. The ''problem of definite outcomes'' is to show how this quantum state encodes the definite observed pointer position, and how the unitary dynamics postulated by quantum physics leads to such a definite observed pointer position.

The statistical interpretation has no answer for this but simply assumes it as an irreducible fact - in addition to the quantum dynamics and the state interpretation.
timmdeeg said:
Is there some kind of interaction between the wavefunction and the detector on the microscopic level such that the detector "feels" the probability of a given outcome and creates it? (On this level atoms and molecules are vibrating and "feel" electromagnetic radiation, so that this question seems to make no sense unless new physics is involved).
Quantum theory of course tells what this interaction is. But it does not tell how this interaction actually achieves the observed definite pointer position.
timmdeeg said:
In the double-slit experiment the position of a dot on the screen corresponds to its probability.
No. The dot corresponds to a particular position measured. The probability comes from counting the frequency and distribution of the dots.
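A toy simulation makes this concrete (an idealized two-slit intensity pattern; the geometry and the number of particles are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized far-field two-slit intensity: cos^2 fringes under a single-slit envelope.
x = np.linspace(-5, 5, 2001)                    # detector coordinate (arbitrary units)
intensity = np.cos(np.pi * x) ** 2 * np.sinc(x / 2) ** 2
prob = intensity / intensity.sum()              # Born-rule probabilities per pixel

# Each particle leaves one definite dot; its position is drawn from this distribution.
dots = rng.choice(x, size=20_000, p=prob)

# The probability distribution emerges only from the frequencies of many dots.
counts, edges = np.histogram(dots, bins=50)
print(counts)   # the fringe structure becomes visible as the counts accumulate
```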
timmdeeg said:
Does, according to the thermal interpretation, the whole screen act as a detector in this case?
It does according to every interpretation.
 
Last edited:
  • Like
Likes timmdeeg and mattt
  • #88
gentzen said:
Well, it might actually be a valuable clarification. My first thought was to use a diffraction grating whose pitch is so small that the first diffraction orders are already evanescent. If one now places a photoresist directly behind the mask, one might observe the intensity distribution resulting from the interference between the zeroth and first orders in the photoresist. One problem might be that the zeroth order is so dominant that not much can be seen.

A better idea might be to use a glass substrate for the mask, such that the evanescent wave you want to produce is still an optical wave inside the glass. Now use a diffraction grating (on a flat surface of your glass substrate) such that one of the first diffraction orders is an evanescent mode with exactly the opposite wave number of the incident wave. Now you should be able to observe much more pronounced interference patterns in the photoresist.

This is related to the most trivial way to generate an evanescent wave in vacuum: Use a prism and tilt the incident wave sufficiently such that the refracted wave is already evanescent in vacuum.
I still don't understand how there can be evanescent em waves in vacuum. For me an evanescent wave is a non-propagating field like in a waveguide (a mode with a frequency below the cut-off frequency), but in vacuum there is no such thing. The dispersion relation is always ##\omega=ck##, i.e., there are no evanescent modes in vacuum.
 
  • #89
A. Neumaier said:
There must be a microscopic explanation for the ''problem of definite outcomes'' because measurement devices are made out of quantum matter, so they are described by a quantum state. The observed pointer position is a property of the measurement device. According to the statistical interpretation all we can know about the quantum system constituted by the measurement device is encoded in its quantum state. The ''problem of definite outcomes'' is to show how this quantum state encodes the definite observed pointer position, and how the unitary dynamics postulated by quantum physics leads to such a definite observed pointer position.

The statistical interpretation has no answer for this but simply assumes it as an irreducible fact - in addition to the quantum dynamics and the state interpretation.

Quantum theory of course tells what this interaction is. But it does not tell how this interaction actually achieves the observed definite pointer position.

No. The dot corresponds to a particular position measured. The probability comes from counting the frequency and distribution of the dots.

It does according to every interpretation.
In the orthodox minimal interpretation the problem of a definite outcome is that a macrostate (like a pointer position) is a very coarse-grained observable and the fluctuations are small compared to the resolution with which this macroscopic observable is determined. I always thought that's also the explanation of your "thermal interpretation", until you told me that your expectation values must not be interpreted in the usual statistical sense but as something abstractly defined in the mathematical formalism, without relation to an operational realization by a measurement device.

The dot on a CCD screen or photo plate or the "trajectory of a particle" in a cloud chamber are good examples. These are highly coarse-grained macroscopic observables with a resolution well coarser than the quantum limits given by the uncertainty relation.
 
  • #90
vanhees71 said:
The dot on a CCD screen or photo plate or the "trajectory of a particle" in a cloud chamber are good examples. These are highly coarse-grained macroscopic observables with a resolution well coarser than the quantum limits given by the uncertainty relation.
I agree, but this alone does not solve the problem!

What remains unanswered by the statistical interpretation is why, in the measurement of a single particle by the screen, the screen is in a macroscopically well-defined state rather than in a superposition of states where the different pixels are activated with the probabilities determined by Born's rule for the particle. For the latter is the result of applying the Schrödinger equation to the combined system (particle + screen)!

vanhees71 said:
I always thought that's also the explanation of your "thermal interpretation", until you told me that your expectation values must not be interpreted in the usual statistical sense but as something abstractly defined in the mathematical formalism, without relation to an operational realization by a measurement device.
The statistical interpretation can never turn a superposition of widely spread possible outcomes (any pixel on the screen) into a state where the outcome is definite. Nothing is ever definite in the statistical interpretation; the definiteness is assumed in addition to the quantum formalism.

The thermal interpretation does not yet claim to have fully solved this problem but paves the way to its solution, since it says that certain q-expectations (rather than certain eigenvalues) are the observed things. Hence the macroscopic interpretation is immediate, since the highly coarse-grained macroscopic observables are such q-expectations.

The missing step is to prove from the microscopic dynamics of the joint system (particle + screen) that these macroscopic observables form a stochastic process with the correct probabilities. Here the thermal interpretation currently offers only suggestive hints, mainly through references to work by others.
 
Last edited:
  • Like
Likes dextercioby, gentzen, PeterDonis and 1 other person
  • #91
vanhees71 said:
I always thought that's also the explanation of your "thermal interpretation", until you told me that your expectation values must not be interpreted in the usual statistical sense but as something abstractly defined in the mathematical formalism, without relation to an operational realization by a measurement device.
Based on the current discussion, it occurred to me that the non-ensemble interpretation of q-expectations in the thermal interpretation could be combined with Callen's criterion to arrive at an "operational falsification" interpretation of expectations (probability). That interpretation would be closely related to the frequentist interpretation, but would fix its problem with the assumption/requirement of "virtual" ensembles that allow identical experiments to be repeated arbitrarily often (which makes the frequentist interpretation non-operational and inapplicable to many practically relevant scenarios).

In order not to hijack this thread, I will open a separate thread with more explanations when I find the time.
 
  • #92
  • #93
vanhees71 said:
I always thought that's also the explanation of your "thermal interpretation", until you told me that your expectation values must not be interpreted in the usual statistical sense but as something abstractly defined in the mathematical formalism, without relation to an operational realization by a measurement device.
In special cases, namely for the measurement of macroscopic properties, the q-expectations are directly related to an operational realization by a measurement device - they give the measured value of extensive quantities without any statistics. No expectations are involved in this case; a single measurement gives the value predicted by the theory.

It is only in the general case that one cannot give a relation to an operational realization by a measurement device except statistically. But this is not a drawback. Already in classical physics, one can relate certain classical observable functions of the state to experiment - namely those that do not depend very sensitively on the state. Those with sensitive dependence can only be related statistically.
 
  • #94
A. Neumaier said:
There must be a microscopic explanation for the ''problem of definite outcomes'' because measurement devices are made out of quantum matter, so they are described by a quantum state. The observed pointer position is a property of the measurement device. According to the statistical interpretation all we can know about the quantum system constituted by the measurement device is encoded in its quantum state. The ''problem of definite outcomes'' is to show how this quantum state encodes the definite observed pointer position, and how the unitary dynamics postulated by quantum physics leads to such a definite observed pointer position.
The solution is quite simple and straightforward. It is sufficient to look at a measurement from two points of view, with different cuts between the classical and quantum parts. Then we see that the intermediate part is described, in one cut, as a quantum object with a wave function, and in the other cut with a classical trajectory.

All one has to do is to accept this as the general picture - there is also a trajectory in the quantum part. The mathematics of how to make both compatible is easy and well known - the Bohmian velocity defines the deterministic (in dBB) resp. average (in other realistic interpretations) velocity of that trajectory.
 
  • #95
Sunil said:
The solution is quite simple and straightforward. It is sufficient to look at a measurement from two points of view, with different cuts between the classical and quantum parts. Then we see that the intermediate part is described, in one cut, as a quantum object with a wave function, and in the other cut with a classical trajectory.
But Nature has no cut. Thus you only replaced the problem by the equivalent problem of explaining that we may replace the quantum description on one side of the cut by a classical description. Nobody has ever derived this from the pure quantum dynamics.
 
  • Like
Likes Lord Jestocost and vanhees71
  • #96
vanhees71 said:
I still don't understand how there can be evanescent em waves in vacuum. For me an evanescent wave is a non-propagating field like in a waveguide (a mode with a frequency below the cut-off frequency), but in vacuum there is no such thing. The dispersion relation is always ##\omega=ck##, i.e., there are no evanescent modes in vacuum.
If we write the dispersion relation as ##\omega^2/c^2=k_x^2+k_y^2+k_z^2## and assume that ##k_x## and ##k_y## are real, then we see that ##k_z^2## will get negative if ##\omega^2/c^2<k_x^2+k_y^2##. If ##k_z^2## is negative then ##k_z## is imaginary, which corresponds to an evanescent wave.

At a horizontal planar interface (perpendicular to the z-axis) between two homogeneous regions, ##k_x## and ##k_y## cannot change, because they describe the modulation of the electromagnetic field along the interface. So you can have an optical wave in a glass substrate with well-defined ##k_x## and ##k_y## based on the direction of the wave. If the direction of the wave is sufficiently grazing with respect to a horizontal planar interface to vacuum, then it will become evanescent in the vacuum below the interface.
(The wave will quickly (exponentially) vanish with increasing distance from the interface. Additionally, the time average of the z-component of the Poynting vector is zero, i.e., there is no energy transported in the z-direction on average by the evanescent wave in vacuum.)
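A small numerical illustration (the wavelength and the transverse wave number are made-up values, as for a wave arriving from inside glass beyond the critical angle):

```python
import numpy as np

c = 299_792_458.0            # speed of light [m/s]
wavelength = 500e-9          # illustrative vacuum wavelength [m]
omega = 2 * np.pi * c / wavelength
k0 = omega / c               # vacuum wave number

# Transverse wave numbers fixed by the modulation along the interface; chosen larger
# than k0, as for a grazing wave coming from inside a glass substrate.
kx, ky = 1.3 * k0, 0.0

kz_sq = (omega / c) ** 2 - kx**2 - ky**2
if kz_sq >= 0:
    print("propagating wave, kz =", np.sqrt(kz_sq))
else:
    kappa = np.sqrt(-kz_sq)   # imaginary k_z: the field decays as exp(-kappa * z)
    print(f"evanescent wave, 1/e decay length = {1 / kappa * 1e9:.1f} nm")
```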
 
  • #97
A. Neumaier said:
But Nature has no cut. Thus you only replaced the problem by the equivalent problem of explaining that we may replace the quantum description on one side of the cut by a classical description. Nobody ever has derived this from the pure quantum dynamics.
Of course. You start with a "pure quantum" description of the world, which in fact does not exist. The minimal interpretation is, essentially, only a reduced Copenhagen interpretation, so it prefers not to talk about that cut, classical part, and all that, but it has the results of the experiments formulated in the language of experiments in classical physics, with resulting classical probabilities (instead of many worlds or so). And you add the explicit hypothesis that there are no "hidden variables", in particular that there is no trajectory, even if we see one when we use the classical description between the two cuts, because this would not be "pure quantum". And you wonder why you are unable to recreate those trajectories out of nothing after forbidding their existence?

The straightforward solution is, of course, that Nature has no cut; thus, once we see trajectories, it follows that there are trajectories even in the regions where we are unable to see them. This is not only possible but straightforward, with the simple mathematics of dBB theory which defines the (average in statistical interpretations) velocity out of the phase of the wave function in configuration space, and which comes essentially without mathematical competitors.

Given that such a straightforward solution with trajectories exists, it would IMHO be reasonable to send all those who propose "pure quantum theory" home until they have done their homework of deriving, from their "pure quantum theory", the trajectories we see around us but which they like to forbid on the fundamental level.
 
  • Skeptical
Likes PeroK
  • #98
A. Neumaier said:
I agree, but this alone does not solve the problem!

What remains unanswered by the statistical interpretation is why, in the measurement of a single particle by the screen, the screen is in a macroscopically well-defined state rather than in a superposition of states where the different pixels are activated with the probabilities determined by Born's rule for the particle. For the latter is the result of applying the Schrödinger equation to the combined system (particle + screen)!

The statistical interpretation can never turn a superposition of widely spread possible outcomes (any pixel on the screen) into a state where the outcome is definite. Nothing is ever definite in the statistical interpretation; the definiteness is assumed in addition to the quantum formalism.

The thermal interpretation does not yet claim to have fully solved this problem but paves the way to its solution, since it says that certain q-expectations (rather than certain eigenvalues) are the observed things. Hence the macroscopic interpretation is immediate, since the highly coarse-grained macroscopic observables are such q-expectations.

The missing step is to prove from the microscopic dynamics of the joint system (particle + screen) that these macroscopic observables form a stochastic process with the correct probabilities. Here the thermal interpretation currently offers only suggestive hints, mainly through references to work by others.
Indeed, "nothing is definite in the statistical interpretation", but that's no bug but a feature as the many highly accurate confirmations of the violation of Bell's inequalities show.

Also the famous double-slit experiment with single particles or photons confirms the predicted probability distributions for the detection of these particles or photons. That a single point on the screen is blackened for each registered particle is first of all an empirical fact. It is also well understood quantum mechanically, as shown as early as 1929 in Mott's famous paper about ##\alpha##-particle tracks in a cloud chamber.

I believe that your thermal interpretation is the answer as soon as you allow your q-expectation values to be interpreted in the standard probabilistic way, and of course you cannot describe the macroscopic observables by microscopic dynamics, because it is their very nature to be only a coarse-grained description of the relevant macroscopic degrees of freedom, and that's also the reason for their classical behavior and for the irreversibility of the measurement outcome.

If you see it as a problem to understand this irreversibility from a detailed microscopic dynamical description, then the same problem has to be considered unsolved within classical physics as well; but I don't know any physicist who does not accept the standard answer given by statistical physics (aka the H-theorem).
 
  • #99
A. Neumaier said:
In special cases, namely for the measurement of macroscopic properties, the q-expectations are directly related to an operational realization by a measurement device - they give the measured value of extensive quantities without any statistics. No expectations are involved in this case; a single measurement gives the value predicted by the theory.

It is only in the general case that one cannot give a relation to an operational realization by a measurement device except statistically. But this is not a drawback. Already in classical physics, one can relate certain classical observable functions of the state to experiment - namely those that do not depend very sensitively on the state. Those with sensitive dependence can only be related statistically.
But macroscopic properties are statistical averages over many microscopic degrees of freedom. It is not clear how to explain the measurement of such an observable without averages and the corresponding (quantum) statistics.

A single measurement, no matter whether you measure "macroscopic" or "microscopic" properties, never establishes a value, let alone tests any theoretical prediction, as one learns in the first session of the introductory beginner's lab!
 
  • #100
Sunil said:
You start with a "pure quantum" description of the world, which in fact does not exist.
This is not a fact but your assumption. No known fact contradicts the possibility of a "pure quantum" description of the world; in contrast, there is no sign at all that a classical description must be used in addition. Once the latter is assumed, one must show how to define the classical in terms of the more comprehensive quantum. This is the measurement problem. You simply talk it away by making this assumption.
Sunil said:
The minimal interpretation is, essentially, only a reduced Copenhagen interpretation, so it prefers not to talk about that cut, classical part, and all that, but
... it postulates a classical world in addition to the quantum world. How the two can coexist is unexplained.
Sunil said:
with the simple mathematics of dBB theory which defines the (average in statistical interpretations) velocity [...]
Given that such a straightforward solution with trajectories exists
It does not exist for quantum field theory, which is needed for explaining much of our world!
 
  • Like
Likes PeroK, gentzen and vanhees71
  • #101
gentzen said:
If we write the dispersion relation as ##\omega^2/c^2=k_x^2+k_y^2+k_z^2## and assume that ##k_x## and ##k_y## are real, then we see that ##k_z^2## will get negative if ##\omega^2/c^2<k_x^2+k_y^2##. If ##k_z^2## is negative then ##k_z## is imaginary, which corresponds to an evanescent wave.

At a horizontal planar interface (perpendicular to the z-axis) between two homogeneous regions, ##k_x## and ##k_y## cannot change, because they describe the modulation of the electromagnetic field along the interface. So you can have an optical wave in a glass substrate with well-defined ##k_x## and ##k_y## based on the direction of the wave. If the direction of the wave is sufficiently grazing with respect to a horizontal planar interface to vacuum, then it will become evanescent in the vacuum below the interface.
(The wave will quickly (exponentially) vanish with increasing distance from the interface. Additionally, the time average of the z-component of the Poynting vector is zero, i.e., there is no energy transported in the z-direction on average by the evanescent wave in vacuum.)
In vacuum ##\vec{k}## is a real vector. That's precisely what I don't understand!

If there is a planar interface between two homogeneous regions, there's of course no pure vacuum, and then there are evanescent waves (as in total internal reflection).
 
  • #102
vanhees71 said:
"nothing is definite in the statistical interpretation", but that's no bug but a feature
... that leaves unexplained why we see definite things in our world.
vanhees71 said:
That a single point on the screen is blackened for each registered particle is first of all an empirical fact. It is also well understood quantum mechanically, as shown as early as 1929 in Mott's famous paper about α-particle tracks in a cloud chamber.
He didn't show this or claim to have shown it. Mott leaves unexplained why there is a first definite ionization at all. He explains only that subsequent ionizations lie approximately along a straight line. Thus he explains the tracks assuming the first definite ionization happened somehow.
vanhees71 said:
I believe that your thermal interpretation is the answer as soon as you allow your q-expectation values to be interpreted in the standard probabilistic way
Many q-expectations can be interpreted in the standard probabilistic way, namely in all cases where an ensemble of many essentially equally prepared systems is measured. The latter is the assumption on which the statistical interpretation rests. Thus whenever the statistical interpretation applies, it is fully compatible with the thermal interpretation.

But there are many instances (in particular most macroscopic measurements) where the statistical interpretation cannot apply since only a single measurement is taken. In these cases the statistical interpretation has no explanatory power at all, while the thermal interpretation still applies.
vanhees71 said:
of course you cannot describe the macroscopic observables by microscopic dynamics, because it is their very nature to be only a coarse-grained description of the relevant macroscopic degrees of freedom
Your ''of course you cannot'' is a fallacy. Nothing forbids a coarse-grained description from being fully determined by the underlying microscopic description.

Indeed, all our physical knowledge suggests that it is. In many cases we have two description levels amenable to complete mathematical analysis, of which one is a coarse-grained version of the other. In all these cases, the coarse-grained description turned out to be a well-determined approximation of the finer one, with rigorously established conditions for the validity of the approximation.

I expect that, in the context of the thermal interpretation, the analysis of the quantum measurement process along the lines of Breuer & Petruccione and Allahverdyan, Balian & Nieuwenhuizen will, as outlined in my book, sooner or later reach the same status.

vanhees71 said:
But macroscopic properties are statistical averages over many microscopic degrees of freedom.
Only for an ideal gas. For real matter they are integrals of complicated expressions without any statistics in them. You cannot get the measurable free energy of a substance by averaging microscopic free energies.
vanhees71 said:
A single measurement, no matter whether you measure "macroscopic" or "microscopic" properties, never establishes a value, let alone tests any theoretical prediction, as one learns in the first session of the introductory beginner's lab!
Ask an engineer or a medical doctor, and they will tell you the contrary. Only highly volatile quantities (such as pointer readings of a macroscopically oscillating pointer or measurements of a single spin) need multiple measurements.
 
  • #103
We see definite things in our world because we simply don't look in too much detail. Our senses take averages over very many microscopic "events" all the time. The same "mechanisms" apply to all kinds of other "measurement devices". You don't need to prepare many Gibbs ensembles of a gas in a container to observe thermal equilibrium; Gibbs ensembles are often just a useful "Gedankenexperiment" to derive statistical predictions. When putting a thermometer into the gas to measure its temperature, the "averaging" is done dynamically by many hits of the gas molecules on the thermometer; after some time thermal equilibrium is established with overwhelming probability, and thus we can "read off a temperature" on a calibrated scale. Looking in more detail we can of course observe (thermal) fluctuations, also in thermal equilibrium, which was the reason why, at the beginning of the 20th century, the statistical-physics approach and the atomistic nature of matter finally became accepted by the physics community.

Of course, sometimes we indeed have to work with "Gibbs ensembles", as, e.g., in particle (and heavy-ion) physics. It's not too surprising that statistical physics works very well there too, sometimes even equilibrium statistical physics. Of course, this is partially just due to looking at sufficiently coarse-grained observables: e.g., the particle abundances in heavy-ion collisions, averaged over very many events, are described amazingly accurately by (grand) canonical ensembles (ranging from the most abundant species like pions to very rare ones like light nuclei and antinuclei).

Also, I didn't claim that the macroscopic behavior is not described by the underlying microscopic dynamics. On the contrary, it is derivable from it by quantum statistics. The only point is that it describes the macroscopic coarse-grained observables, relevant for the description at the level of accuracy of the observed phenomena, and not the irrelevant microscopic details. The latter can become relevant at higher resolution of the observables, and then you have to change the level of description to cover these more detailed, then relevant, observables. That's all understandable within the paradigm of the statistical approach. It's only blurred by your thermal interpretation, if you forbid interpreting the expectation values in the usual statistical/probabilistic way.
 
  • Like
Likes physicsworks and WernerQH
  • #104
I think "thermal interpretation" is a misnomer. How can you call it an interpretation if it carefully avoids the question what quantum theory is about. It is more like an empty shell, big enough to contain theories as diverse as quantum mechanics and thermodynamics.
 
  • Skeptical
Likes PeroK
  • #105
vanhees71 said:
Our senses take averages over very many microscopic "events" all the time.
This is a hypothesis without sufficient basis.

Our senses are physical objects. Thus they don't do mathematical operations of averaging. Instead they behave according to physical laws governed by quantum theory. So whatever they do must be deducible from quantum mechanics. But in quantum mechanics there is no notion of microscopic "events" happening in time - unless you define these in quantum-mechanical terms, which you never did.

vanhees71 said:
Gibbs ensembles are often just a useful "Gedankenexperiment" to derive statistical predictions.
But Nature doesn't care about our gedanken; it operates on the basis of dynamical laws. Without an actual ensemble there is no actual averaging, hence no actual statistics, hence Born's rule does not apply.

vanhees71 said:
the "averaging" is done dynamically by many hits of the gas molecules
This assumes that the gas molecules are classical objects that can hit the thermometer. But in standard quantum mechanics all you have is potentialities, nothing actually happening, except in a measurement.


vanhees71 said:
Also, I didn't claim that the macroscopic behavior is not described by the underlying microscopic dynamics. On the contrary, it is derivable from it by quantum statistics.
Then you need to derive from quantum statistics that, upon interacting with a particle to be detected, the detector is not - as the Schrödinger equation predicts - in a superposition of macroscopic states with pointer positions distributed according to Born's rule, but that it is in a macroscopic state where one of the pointer positions is actually realized, so that we can see it with our senses.

Since you claim that this is derivable, please show me a derivation! If valid, it would solve the measurement problem!
 
Last edited:
  • Like
Likes gentzen
