A. Neumaier said:Please give a reference for discussion.
https://journals.aps.org/pr/abstract/10.1103/PhysRev.35.904
https://journals.aps.org/pr/abstract/10.1103/PhysRev.36.1791
Why do you say bullet 2 is different from bullet 1? I use the same trace formula, of course. What else? It's the basic definition of an expectation value in QT, and it's the most general representation-free formulation of Born's rule. As for the other three points: you say on the one hand that the thermal interpretation uses the same mathematical formalism, but that it is all interpreted differently. You even use specific probabilistic/statistical notions like "uncertainty" and define it in the usual statistical terms as the standard deviation/2nd cumulant. Why is it then not right to have the same heuristics about it in your thermal interpretation (TI) as in the minimal interpretation (MI)?
A. Neumaier said:The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way. $$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
- all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is exactly (according to most formulations) or approximately (according to your cautious formulation) measured with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (probability form), while
- the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=Tr~\rho A## of ##A##, which is approximately measured.
- In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
- Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
- But they make very different assumptions concerning the nature of what is to be regarded as idealized measurement result.
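To make the contrast between the two recipes concrete, here is a minimal numerical sketch (my own illustration, not part of the quoted posts), assuming an arbitrarily chosen qubit state ##\rho## and taking ##A=\sigma_z##: the thermal-interpretation recipe yields the single value ##Tr~\rho A## with uncertainty ##\sigma_A##, while the traditional recipe yields the eigenvalues of ##A## with Born probabilities.

```python
import numpy as np

# Arbitrary illustrative choices: A = Pauli sigma_z, rho a mixed qubit state
A = np.array([[1, 0], [0, -1]], dtype=complex)            # eigenvalues +1 and -1
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # Hermitian, positive, trace 1

# Thermal-interpretation recipe: one value <A> = Tr(rho A) with uncertainty sigma_A
q_expectation = np.trace(rho @ A).real
sigma_A = np.sqrt(np.trace(rho @ A @ A).real - q_expectation**2)

# Traditional recipe: eigenvalues of A as possible results, Born probabilities Tr(rho P_k)
eigvals, eigvecs = np.linalg.eigh(A)
born_probs = [np.real(np.conj(v) @ rho @ v) for v in eigvecs.T]

print(q_expectation, sigma_A)    # single value 0.4, uncertainty ~0.917
print(eigvals, born_probs)       # values [-1, +1] with probabilities [0.3, 0.7]
```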
But this overlooks that QT assumes that not all observables can have determined values at once. At best, i.e., if technically feasible for simple systems, you can only prepare a state such that a complete compatible set of observables takes determined values. All observables incompatible with this set (almost always) have indetermined values, and this is not due to non-ideal measurement devices but is an inherent feature of the system.

That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is about arguing why this is indeed consistent with the assumptions of the thermal interpretation.
Well, using the standard interpretation it's pretty simple to state what an idealized measurement is: given the possible values of the observable (e.g., some angular momentum squared ##\vec{J}^2## and one component, usually ##J_z##), you perform an idealized measurement if the resolution of the measurement device is good enough to resolve the (necessarily discrete!) spectral values of the associated self-adjoint operators of this measured quantity. Of course, in the continuous spectrum you don't have ideal measurements in the real world, but any quantum state also predicts an inherent uncertainty given by the formula above. To verify this prediction you need an apparatus which resolves the measured quantity much better than this quantum-mechanical uncertainty.

Now it is experimentally undecidable what an ''idealized measurement result'' should be, since only actual results are measured, not idealized ones.
What to consider as idealized version is a matter of interpretation. What one chooses determines what one ends up with!
As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.
The thermal interpretation has two advantages:
- It assumes less technical mathematics at the level of the postulates (no spectral theorem, no notion of eigenvalue, no probability theory).
- It allows one to make definite statements about each single quantum system, no matter how large or small it is.
Einstein coined the very precise word "inseparability" for this particular feature of quantum entanglement. As Einstein clarified in a German paper of 1948, he was quite unhappy about the fact that the famous EPR paper doesn't really represent his particular criticism of QT, because it's not so much the "action at a distance" aspect (which indeed only comes into the game from adding collapse postulates to the formalism, as in the Heisenberg and von Neumann versions of the Copenhagen interpretation, which imho is completely unnecessary to begin with) but the inseparability. Einstein thus preferred a stronger version of the "linked-cluster principle" than predicted by QT due to entanglement, i.e., he didn't even like the correlations due to entanglement, which however can never be used to contradict the "relativistic signal-propagation speed limit", but he insisted on the separability of the objectively observable physical world.
A. Neumaier said:Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''. A subsystem is selected by picking a vector space of quantities (linear operators) relevant to the subsystem. Regarding a tensor product of two systems as two separate subsystems (as traditionally done) is therefore allowed only when all quantities that correlate the two systems are deemed irrelevant. Thinking in terms of the subsystems only hence produces the weird features visible in the traditional way of speaking.
DarMM said:Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).
So in the Thermal Interpretation we have the following core features:
1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. [...]
2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this), a 2-photon system is not simply "two photons" since it has non-local correlator properties neither of them possesses alone.
3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is considered two spacelike separated photons normally is in fact a highly extended object.
... i.e., the decoherence from a non-diagonal to a diagonal state operator (hence, quantum statistics -> classical statistics), driven by interaction with the environment (e.g., random gravitational interaction with everything else). Even extremely weak such interactions can diagonalize a state operator extraordinarily quickly.

4. Stochastic features of QM are generated by the system interacting with the environment. [...]
What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
DarMM said:Okay, a possibly more accurate rendering of point 4.
In the Thermal Interpretation, as discussed in point 1, we have quantities ##\langle A \rangle## which take on a specific value in a specific state ##\rho##. Although ##\langle A \rangle## uses the mathematics of (generalized) statistics this is of no more significance than using a vector space in applications where the vectors are not to be understood as displacements, i.e. the physical meaning of a mathematical operation is not tied to the original context of its discovery. ##\langle A \rangle## is simply a quantity.
However, it is an uncertain value. Quantum mechanical systems are intrinsically delocalised/blurred in their quantities, in the same kind of fundamental sense that "Where is a city, where is a wave?" is a fuzzy concept. I say blurred because delocalised seems appropriate to position alone. This is neither to say it has a precise position that we are uncertain of (as in a statistical treatment of Newtonian mechanics) nor a fundamentally random position (as in some views of QM). For example, particles actually possess world tubes rather than world lines.
However, standard scientific practice is to treat such "blurred" uncertainties statistically, with the same mathematics one uses to treat precise quantities of which one is ignorant. This similarity in the mathematics used, however, is what has led to viewing quantum quantities as being intrinsically random.
For microscopic quantities the blurring is so extreme that a single observation cannot be regarded as an accurate measurement of a microscopic quantity. For example, in the case of a particle, when we measure position the measuring device simply becomes correlated with a point within the tube, giving a single discrete reading, but this does not give one an accurate picture of the tube.
Thus we must use the statistics of multiple measurements to construct a proper measurement of the particle's blurred position/world tube.
We are then left with the final question of:
"Why if the particle's quantities are truly continuous, for example having world tube, do our instruments record discrete outcomes?"
Section 5.1 of Paper III discusses this. In essence the environment drives the "slow large scale modes" of the measuring device into one of a discrete set of states. These modes correspond to macroscopically observable properties of the device, e.g. pointer reading. Since one cannot track the environment and information is lost into it via dissipation, this takes the form of the macroscopic slow modes stochastically evolving into a discrete set of states.
Thus we have our measuring devices develop an environmentally driven effectively "random" discrete reading of the truly continuous/blurred quantities of the microscopic system. We then apply standard statistical techniques to multiple such discrete measurements to reconstruct the actual continuous quantities.
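A toy numerical sketch of this last step (my own illustration; the value 0.4 for the qubit q-expectation ##\langle\sigma_z\rangle## is an arbitrary assumption): each individual run yields only a discrete ±1 pointer reading, and the continuous q-expectation is recovered only from the statistics of many runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed continuous beable: a qubit q-expectation <sigma_z> = 0.4
q_expectation = 0.4

# In each run the device settles into one of two discrete pointer states (+1 or -1),
# modelled here as environmentally driven "random" readings with the Born frequencies.
p_plus = (1 + q_expectation) / 2
readings = rng.choice([+1, -1], size=10_000, p=[p_plus, 1 - p_plus])

# A single +1 or -1 reading is a poor measurement of the continuous value 0.4,
# but the sample mean of many readings reconstructs it to within ~1/sqrt(N).
print(readings[:10])     # discrete single-run outcomes
print(readings.mean())   # approaches 0.4 as the number of readings grows
```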
By this do you simply mean the fact that states over all modes (the full manifold) are metastable and decay, under environmental action, to states on the slow mode manifold, which is disconnected? The disconnectedness of the slow manifold then provides the discreteness.
A. Neumaier said:What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
The whole approach is quite different from consistent histories. The measuring device's large scale features are driven into a disconnected manifold, states on each component of which represent a "pointer" outcome. This evolution is deterministic, but stochastic under lack of knowledge of the environment. You just aren't sure of the environmental state.
AlexCaledin said:So finally there are the same consistent histories? (And the Copenhagen QM can then be derived from them)
Yes, since you wanted to answer the question
DarMM said:By this do you simply mean the fact that states over all modes (the full manifold) are metastable and decay to states on the slow mode manifold, which is disconnected, under environmental action. The disconnectedness of the slow manifold providing discreteness.
but the argument you then gave didn't answer it.
DarMM said:"Why, if the particle's quantities are truly continuous, for example having a world tube, do our instruments record discrete outcomes?"
Yes. The dynamics of the q-expectations of the universe is deterministic, and stochastic features appear in exactly the same way as in Laplace's deterministic classical mechanics of the universe.
vanhees71 said:It's in the same sense statistical as is classical statistics; it's the lack of detailed knowledge about macroscopic systems which brings in the probabilistic element.
No, because my notion of indecomposability (post #101) together with extended causality (Section 4.4 of Part II) allows more than Einstein's requirement of separability. But it still ensures locality in the quantum field sense and Poincaré invariance and hence correctly addresses relativity issues.
vanhees71 said:This contradicts, however, the very facts addressed by EPR or better by Einstein in his 1948 essay concerning the inseparability.
Two-photon states are discussed in some detail in Section 4.5 of Part II.
vanhees71 said:The paradigmatic example is the simple polarization entangled two-photon states.
Maybe you can understand now why. There is a lot to be said to make sure that one doesn't fall back into the traditional interpretations and can see how the old issues are settled in a new way. In the past, different people asked different questions and had different problems with the thermal interpretation as I had discussed them earlier in a more informal way. Originally I wanted to write a 20-page paper, but it grew and grew into the present 4-part series.
vanhees71 said:Why the heck hasn't he written this down in a 20-30p physics paper rather than with so much text obviously addressed to philosophers? (It's not meant in as bad a way as it may sound ;-))).
This is again reasoning from the statistical interpretation, which doesn't apply to the thermal interpretation.
vanhees71 said:the polarization of each single photon is maximally indetermined (in sense of information theory with the usual Shannon-Jaynes-von Neumann entropy as information measure).
Whereas according to the thermal interpretation, if we have complete knowledge about the system we know all its idealized measurement results, i.e., all q-expectation values, which is equivalent to knowing its density operator.
vanhees71 said:According to the minimal interpretation [...] we have complete knowledge about the system but still there are observables indetermined.
Did Laplace have the same complaint about his clockwork universe?
AlexCaledin said:So, according to the thermal QM, every event (including all this great discussion) was pre-programmed by the Big Bang's primordial fluctuations?
I wasn't sure if it also required an argument that the slow mode manifold was in fact disconnected, and I couldn't think of one. Metastability of states on the full manifold decaying into those on the slow manifold is enough, provided the slow mode manifold is disconnected. Is there a reason to expect this in general?
A. Neumaier said:Yes, since you wanted to answer the question
but the argument you then gave didn't answer it.
This somewhat confuses me. I would have thought the notion of reducibility just means it can be completely decomposed, i.e. the total system is simply a composition of the two subsystems and nothing more. What subtlety am I missing?
A. Neumaier said:Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''.
As soon as there are two local minima in the compactified universe (i.e., including minima at infinity), the answer is yes. For geometric reasons, each local minimizer has its own catchment region, and these are disjoint. This accounts for the case where the slow modes are fixed points. But similar things hold more generally. It is the generic situation, while the situation of a connected slow manifold is quite special (though of course quite possible).
DarMM said:I wasn't sure if it also required an argument that slow mode manifold was in fact disconnected and I couldn't think of one. Metastability of states on the full manifold decaying into those on the slow manifold is enough provided the slow mode manifold is disconnected. Is there a reason to expect this in general?
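A minimal sketch of the catchment-region argument for the simplest case just mentioned, where the slow modes are fixed points (my own illustration, assuming an overdamped gradient dynamics in a double-well potential): the deterministic relaxation carries every initial condition into exactly one of two disjoint basins, so the long-time "pointer" value is discrete even though the dynamics itself is continuous.

```python
import numpy as np

# Double-well potential V(x) = (x^2 - 1)^2 with local minima at x = -1 and x = +1
def grad_V(x):
    return 4 * x * (x**2 - 1)

def settle(x0, dt=1e-3, steps=20_000):
    """Overdamped deterministic relaxation x' = -V'(x); returns the minimum reached."""
    x = x0
    for _ in range(steps):
        x -= dt * grad_V(x)
    return x

# Every initial condition (except the unstable point x = 0) ends up in one of the
# two disjoint catchment regions, giving a discrete "pointer" outcome.
for x0 in (-1.7, -0.3, 0.2, 1.5):
    print(x0, "->", round(settle(x0), 3))   # -1.0 or +1.0, depending on the basin
```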
To 'reduce' is a vague notion that can mean many things. For example, reductionism in science means the possibility of reducing all phenomena to physics.
DarMM said:This somewhat confuses me. I would have thought the notion of reducibility just means it can be completely decomposed, i.e. the total system is simply a composition of the two subsystems and nothing more. What subtlety am I missing?
For those reading, what is lacking here is that I simply said:
A. Neumaier said:What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
The "drives" here is vague and doesn't explain the mechanism.DarMM said:In essence the environment drives the "slow large scale modes" of the measuring device into one of a discrete set of states
Why don't you state a revised 4-point summary of your view of my view? Then I'll give you (again) my view of your view of my view!
DarMM said:I think I've an okay (I hope!) grasp of this view now.
I spent several pages on uncertainty (Subsection 2.3 of Part II) to show that uncertainty is much more fundamental than statistical uncertainty. For example, consider the uncertainty of the diameter of the city of Vienna. It is not associated with any statistics but with the uncertainty of the concept itself.
vanhees71 said:You even use specific probabilistic/statistical notions like "uncertainty" and define it in the usual statistical terms as the standard deviation/2nd cumulant.
Ok, this makes sense again. So for you determinism doesn't refer to the observables but to the statistical operators (or states). That's of course true in the minimal statistical interpretation as well. So the thermal interpretation is again equivalent to the standard interpretation; you only relabel the language associated with the math, not talking about probability and statistics but only about q-expectation values. I can live with that easily :-).
A. Neumaier said:
Whereas according to the thermal interpretation, if we have complete knowledge about the system we know all its idealized measurement results, i.e., all q-expectation values, which is equivalent to knowing its density operator.
If you want to assess the thermal interpretation you need to discuss it in terms of its own interpretation and not in terms of the statistical interpretation!
A. Neumaier said:our deterministic universe
On the fundamental level, yes, since each observed alpha decay is something described by some of the observables in the universe. But like casting a die, it is practically indeterministic.
ftr said:So do you think alpha decay or spontaneous emission is also deterministic?
No; it is not an alternative but both! It refers to the (partially observable) beables, which are the q-expectations. This determinism is equivalent to the determinism of the density operator.
vanhees71 said:So for you determinism doesn't refer to the observables but to the statistical operators (or states).
No, because the meaning assigned to ''observable'' and ''state'' is completely different.
vanhees71 said:That's of course true in the minimal statistical interpretation as well. So the thermal interpretation is again equivalent to the standard interpretation,
vanhees71 said:you say on the one hand the thermal interpretation uses the same mathematical formalism, but it's all differently interpreted.
The language associated with the math - that's the interpretation!
vanhees71 said:you only relabel the language associated with the math, not talking about probability and statistics but only about q-expectation values. I can live with that easily :-).
Well, I am not Einstein, and therefore have more freedom.
vanhees71 said:Einstein [...] insisted on the separability of the objectively observable physical world. [...] It's of course impossible to guess what Einstein would have argued about the fact that the modern Bell measurements show that you either have to give up locality or determinism.
Maybe not, he would have found a hole in your theory right away. Seriously, what about tunneling?
A. Neumaier said:Maybe Einstein would have been satisfied.
Maybe. There are many others who might want to try and find such a hole in my interpretation! Not my theory - the theory is standard quantum physics!
ftr said:Maybe not, he would have found a hole in your theory right away.
This is just a particular way a state changes with time.
ftr said:what about tunneling?
Yes. Eigenvalues of a q-observable are state-independent, hence are not even beables.
kith said:Let's see if I got the terminology straight.
In standard QM, observables are self-adjoint operators. The thermal interpretation refers to these as q-observables instead (paper I, p.3). Historically, before their mathematical nature was completely understood, Dirac referred to them as q-numbers.
The standard QM usage of the term "observable" is a bit strange because self-adjoint operators are not observable in the everyday sense of the word. The thermal interpretation tries to move closer to the everyday usage of the word and defines observables as "numbers obtainable from observations" (paper I, p.3). This is similar to what Dirac called c-numbers (although I think he included complex numbers in the concept and the thermal interpretation probably doesn't).
In standard QM, expectation values and probabilities are inherently probabilistic properties of self-adjoint operators. Since they are "numbers obtainable from observations", they are observables in the thermal interpretation and calling them q-expectations and q-probabilities is done in reference to the usage in standard QM but doesn't reflect anything probabilistic in their mathematical definition. I'm not sure whether eigenvalues should also be called observables. Is the definition of observable tied to whether an experiment can actually be performed in a sufficiently idealized form?
Is this correct so far?
Actually, in the papers I avoid the notion of an observable because of the possible confusion. I use beable for what exists (all functions of q-expectations) and say that some beables are observable (not observables!). But I sometimes call the traditional selfadjoint operators q-observables (the prefix q- labels all traditional notions that in the thermal interpretation would result in a misleading connotation) and sometimes informally call the observable beables ''observables'' (which matches the classical notion of an observable). However, if you see this done in the papers, please inform me (not here but preferably by email) so that I can eliminate it in the next version.
kith said:I think it is a bit unfortunate to use the term "observable" in the thermal interpretation at all because the term is so deeply ingrained in standard QM, which makes it prone to misunderstandings.
I would introduce quantum mechanics with the qubit, which is just 19th century optics. This produces the density operator, the Hilbert space, the special case of pure states, Born's rule (aka Malus' law), the Schrödinger equation, and the thermal interpretation - all in a very natural way.
vanhees71 said:whether one can use these concepts to teach QM 1 from scratch, i.e., can you start by some heuristic intuitive physical arguments to generalize the Lie-algebra approach of classical mechanics in terms of the usual Poisson brackets of classical mechanics? Maybe that would be an alternative approach to QM which avoids all the quibbles with starting with pure states and then only finally arrive at the general case of statistical operators as description of quantum states?
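As a numerical aside on the "Born's rule (aka Malus' law)" remark above (my own sketch, not taken from the papers): for the polarization qubit, the trace formula ##Tr~\rho P_\theta## with a linear-polarizer projector ##P_\theta## reproduces Malus' law ##\cos^2\theta##.

```python
import numpy as np

def polarizer(theta):
    """Rank-1 projector onto linear polarization at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Light linearly polarized along the x-axis: pure qubit state rho = |x><x|
rho = polarizer(0.0)

for theta in (0.0, np.pi / 6, np.pi / 4, np.pi / 2):
    transmitted = np.trace(rho @ polarizer(theta)).real
    # Tr(rho P_theta) = cos^2(theta): Malus' law, i.e. Born's rule for the qubit
    print(round(np.degrees(theta)), round(transmitted, 3), round(np.cos(theta) ** 2, 3))
```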
The calculations are of course identical, since calculations are not part of the interpretation.
vanhees71 said:First one has to understand the most simple cases to understand the meaning of an interpretation.
The Stern-Gerlach experiment is a very good example for that. [...] how would the analogous calculation work with the thermal representation
It is very different.
vanhees71 said:for me your thermal interpretation is not different from the standard interpretation as expressed by van Kampen in the following informal paper: https://doi.org/10.1016/0378-4371(88)90105-7
A. Neumaier said:It is very different.
van Kampen uses the standard assumptions of the Copenhagen interpretation, with pure states associated to single systems (p.99, after theorem III) and with collapse (which he claims to deduce on p.106, but his argument is sketchy exactly here: he deduces the collapse of the measured system from the silently assumed collapse of system+detector). His alleged ''proof'' is discussed in detail in Bell's paper ''http://www.johnboccio.com/research/quantum/notes/bell.pdf'' on pp.14-17.
Then you should like the thermal interpretation, which suffers from none of what Bell complains about! It just takes a little getting used to...
stevendaryl said:Thanks for posting a link to that essay. I think Bell summarizes pretty well what I find unsatisfactory about most textbook descriptions of quantum mechanics.
Well, we obviously have very different views on the fundamental meaning of QT, and that leads to mutual misunderstandings.
A. Neumaier said:No; it is not an alternative but both! It refers to the (partially observable) beables, which are the q-expectations. This determinism is equivalent to the determinism of the density operator.
No, because the meaning assigned to ''observable'' and ''state'' is completely different.
For you, observed are only eigenvalues; for the thermal interpretation, eigenvalues are almost never observed. As in classical physics!
For you, the state of the universe makes no sense at all; for the thermal interpretation, the state of the universe is all there is (on the conceptual level), and every other system considered by physicists is a subsystem of it, with a state completely determined by the state of the universe. As in classical physics!
For you, quantum probability is something irreducible and unavoidable in the foundations; for the thermal interpretation, probability is not part of the foundations but an emergent phenomenon. As in classical physics!

How can you think that both interpretations are equivalent?
So in fact they ARE the same.
Only the things they try to connect - the formal theory and the experimental record - are the same, but how they mediate between them is completely different (see post #99).
For me it's very hard to follow any interpretation which forbids me to understand "thermal language" that is "not statistical". I already have a very hard time with traditional axiomatized "phenomenological thermodynamics", where, e.g., the central notion of entropy is defined by introducing temperature as an integrating factor of an abstract Pfaffian form. The great achievement of the Bernoullis, Maxwell, and mostly Boltzmann was to connect these notions with the underlying fundamental dynamical laws of their time in terms of statistical physics, and that very general foundation has so far withstood all the "revolutions" of 20th-century physics, i.e., relativity (which anyway is just a refined classical theory for the description of space and time and thus not as revolutionary as it appeared at the turn to the 20th century) and QT (which indeed in some sense can be considered as really revolutionary in breaking with the deterministic world view).
The language associated with the math - that's the interpretation!
One can associate with it Copenhagen language or minimal statistical language - which is what tradition did, resulting in nearly a century of perceived weirdness of quantum mechanics by almost everyone - especially
- by all newcomers without exception and
- by some of the greatest physicists (see the quotes at the beginning of Section 5 of Part III).
Or one can associate thermal, nonstatistical language with it, restoring continuity and common sense.
Everyone is free to pick their preferred interpretation. It is time to change preferences!
vanhees71 said:The "apparent weirdness" of QT is for me completely resolved by the minimal statistical interpretation. It's not QT is weird but our prejudice that our "common sense", trained by everyday experience with rough macroscopic observables (or preceptions if you wish), tells us the full structure of matter.
Here are some words which, however legitimate and necessary in application, have no place in a formulation with any pretension to physical precision: system, apparatus, environment, microscopic, macroscopic, reversible, irreversible, observable, information, measurement.