The thermal interpretation of quantum physics

In summary: I like your summary, but I disagree with the philosophical position you take. I think Dr. Neumaier has a good point: QFT may indeed be a better place for interpretations. I do not know enough of his thermal interpretation to comment on its specifics.
  • #561
Demystifier said:
Where can I see a simple general quantitative explanation of why exactly that happens?
By simple I mean not longer than a couple of pages, by general I mean referring to a wide class of cases, by quantitative I mean containing equations (not merely verbal hand waving). Mathematical rigor is not required.
Demystifier said:
I guess the answer to my question above is the following. The dynamics of the open subsystem is described not only by a single Hamiltonian, but also by a series of Lindblad operators. The consequence, I guess, is that there are many (rather than one) stable states to which the system can finally decay. To which one it will decay depends on fine details of the initial state that in practice cannot be known exactly, so they play a role of "hidden variables".
The detailed dynamics of an open system is governed by a (piecewise deterministic or quantum diffusion) stochastic process, of which a Lindblad equation is only a summary form. See, e.g., the book by B&P (Breuer & Petruccione) cited in Part III, which has lots of (not fully rigorous) formulas and lots of examples.

For measurement settings, if one assumes the noise to be small (weak coupling to the environment), there is a deterministic dissipative part which makes the system generically move into a fixed point, with each fixed point corresponding to a measurement outcome. If the detector (e.g., a bubble chamber) is initially in a metastable state, there is an ambiguity into which fixed point it is mapped, and this ambiguity is resolved randomly by the noise in the stochastic process. This is analogous to a ball balanced on the top of a hill, which moves under noise into a random direction, eventually ending up at the bottom of one of the valleys.
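To make the hill analogy concrete, here is a minimal numerical sketch (an illustration of the analogy only, with made-up parameters; not code from the paper or from B&P): an overdamped particle in the double well ##V(x)=x^4/4-x^2/2##, started at the unstable point ##x=0## under weak noise.

```python
# Toy sketch of noise-resolved instability (illustrative only):
# Euler-Maruyama for dx = -(x^3 - x) dt + sigma dW with x(0) = 0.
# x = 0 is the unstable "hilltop"; x = +1 and x = -1 are the stable
# fixed points playing the role of the two measurement outcomes.
import numpy as np

rng = np.random.default_rng(0)

def run_trajectory(sigma=0.05, dt=1e-3, steps=20_000):
    x = 0.0
    for _ in range(steps):
        x += -(x**3 - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

endpoints = np.array([run_trajectory() for _ in range(100)])
print("fraction ending in the right valley:", np.mean(endpoints > 0))
# Every run ends in exactly one valley; with symmetric noise the two
# valleys are each reached in about half of the runs.
```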
 
  • #562
cube137 said:
But why does only one spot appear and not two simultaneously?
Conservation of energy, together with the instability of macroscopic superpositions and randomly broken symmetry, forces this, just as a classical bar under vertical pressure will bend in only one direction. See Subsection 5.1 of Part III and the similar discussion around this post.
cube137 said:
Won't it be possible to put the above under experimental test or scrutiny, as it is the most graphic ramification yet of the thermal interpretation?
This is just a more complicated form of the usual test at two locations, and only complicates the setting without adding substance. Every particle collider makes similar experiments, not with photons but with massive particles, and one always observes that symmetry is broken. Whether this symmetry is discrete or continuous is in the present context of secondary importance.
 
  • #563
stevendaryl said:
By linearity, the wave function cannot evolve from a superposition of two possibilities to a choice of one possibility or the other.
In the thermal interpretation, beables (q-expectations) are quadratic in the wave function, hence cannot be superposed. They are always definite numbers that are approximately measurable.

The two possibilities come from the fact that the reduced dynamics of system plus detector has two fixed points, and one of them is reached as explained in the previous few posts.
 
  • #564
A. Neumaier said:
The detailed dynamics of an open system is governed by a (piecewise deterministic or quantum diffusion) stochastic process, of which a Lindblad equation is only a summary form. See, e.g., the book by B&P (Breuer & Petruccione) cited in Part III, which has lots of (not fully rigorous) formulas and lots of examples.

For measurement settings, if one assumes the noise to be small (weak coupling to the environment), there is a deterministic dissipative part which makes the system generically move into a fixed point, with each fixed point corresponding to a measurement outcome. If the detector (e.g., a bubble chamber) is initially in a metastable state, there is an ambiguity into which fixed point it is mapped, and this ambiguity is resolved randomly by the noise in the stochastic process. This is analogous to a ball balanced on the top of a hill, which moves under noise into a random direction, eventually ending up at the bottom of one of the valleys.
My problem is that I don't see how to reconcile it with the non-stochastic description of decoherence which, in a measurement setting, typically predicts evolution not to a single fixed point (one measurement outcome) but to an incoherent superposition of all fixed points (all possible measurement outcomes, as in the many-world interpretation).

I agree that influence of unknown degrees of freedom can effectively be described by an appropriate stochastic model, but it doesn't mean that any stochastic model is appropriate. In particular, a stochastic model that predicts evolution to a single fixed point is probably not appropriate for the purpose of solving the measurement problem. Perhaps it can be appropriate as a description of the phenomenological fact that we do observe single measurement outcomes, but such a model doesn't really solve the measurement problem. Instead, it merely assumes that somehow it is already solved by more fundamental non-stochastic means, so that one is allowed to use stochastic models for practical purposes.

Related to this, at page 25 of part III of your paper you say:
"The thermal interpretation claims that this influences the results enough to cause all randomness in quantum physics, so that there is no need for intrinsic probability as in traditional interpretations of quantum mechanics."
This might be the central claim of thermal interpretation, but I am not at all convinced that this claim is true.
 
  • #565
Demystifier said:
My problem is that I don't see how to reconcile it with the non-stochastic description of decoherence which, in a measurement setting, typically predicts evolution not to a single fixed point (one measurement outcome) but to an incoherent superposition of all fixed points (all possible measurement outcomes, as in the many-world interpretation).
This is because the analysis of decoherence works on the coarser level of Lindblad equations rather than on the finer level of the underlying stochastic piecewise deterministic process (PDP) of B&P.
Demystifier said:
I agree that influence of unknown degrees of freedom can effectively be described by an appropriate stochastic model, but it doesn't mean that any stochastic model is appropriate. In particular, a stochastic model that predicts evolution to a single fixed point is probably not appropriate for the purpose of solving the measurement problem.
There is no choice. The deterministic dynamics of the universe and the properties of system and detector force everything, including the form of the reduced stochastic process. If a quantity with discrete spectrum is measured, one ends up with a PDP; if it is continuous, with a quantum diffusion process.
Demystifier said:
such a model doesn't really solve the measurement problem. Instead, it merely assumes that somehow it is already solved.
Only if you create the model by guessing. If you create it by coarse-graining the true dynamics, you get a solution to the measurement problem.
Demystifier said:
Related to this, at page 25 of part III of your paper you say:
"The thermal interpretation claims that this influences the results enough to cause all randomness in quantum physics, so that there is no need for intrinsic probability as in traditional interpretations of quantum mechanics."
This might be the central claim of thermal interpretation, but I am not at all convinced that this claim is true.
Read B&P to get the necessary mathematical background!
 
  • #566
A. Neumaier said:
This is because the analysis of decoherence works on the coarser level of Lindblad equations rather than on the finer level of the underlying stochastic piecewise deterministic process (PDP) of B&P.
But decoherence can be described at an even more fundamental level, without the Lindblad equation and without stochastic processes. See e.g. Schlosshauer's book, Chapters 2 and 3. This more fundamental level typically predicts an incoherent superposition of all possible measurement outcomes.
 
  • #567
Arnold, I just saw this; does it have any relation to your interpretation, since it also talks about a thermal interpretation?
 
  • #568
Demystifier said:
If I am right, then this stochastic approach doesn't really solve the measurement problem but rather assumes that somehow it is already solved.

I haven't studied the thermal interpretation, so am not commenting on it directly. However, it is reasonable to conceive of solving the measurement problem by postulating the stochastic equation directly; e.g., under some circumstances CSL (non-Copenhagen) and the continuous measurement formalism (Copenhagen) produce the same equation.
https://en.wikipedia.org/wiki/Belavkin_equation
 
  • #569
atyy said:
However, it is reasonable to conceive of solving the measurement problem by postulating the stochastic equation directly
I agree with that, for instance the GRW theory is of that kind. But the thermal interpretation is not of that kind.
 
  • #570
Demystifier said:
But decoherence can be described at an even more fundamental level, without the Lindblad equation and without stochastic processes. See e.g. Schlosshauer's book, Chapters 2 and 3. This more fundamental level typically predicts an incoherent superposition of all possible measurement outcomes.
I meant by ''on the coarser level of Lindblad equations'' anything based on a deterministic approximate dynamics for the reduced density matrix. I don't have Schlosshauer's book at hand but believe he works only on this level. On the other hand, B&P produce a stochastic dynamics for the reduced density matrix, which has a greater resolution power.
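To see the difference in resolution between the two levels, here is a minimal sketch (illustration only: a standard diffusive unraveling of a continuously measured qubit with made-up parameters, not B&P's specific PDP). Single runs collapse to one ##\sigma_z## outcome each; only the ensemble average reproduces the decohered mixture that a Lindblad-level analysis yields.

```python
# Sketch (not B&P's specific process): a diffusive unraveling of
# continuous sigma_z measurement of a qubit. Each trajectory follows
#   dpsi = -k/2 (A - <A>)^2 psi dt + sqrt(k) (A - <A>) psi dW,  A = sigma_z,
# which drives <sigma_z> to +1 or -1; the ensemble average only shows the
# decohered mixture that a Lindblad-level analysis predicts.
import numpy as np

rng = np.random.default_rng(1)
sz = np.diag([1.0, -1.0])

def final_sz(psi0, kappa=1.0, dt=1e-3, steps=10_000):
    psi = psi0.astype(complex)
    for _ in range(steps):
        a = np.real(psi.conj() @ sz @ psi)                  # current <sigma_z>
        D = sz - a * np.eye(2)
        dW = np.sqrt(dt) * rng.standard_normal()
        psi = psi - 0.5 * kappa * (D @ D @ psi) * dt + np.sqrt(kappa) * (D @ psi) * dW
        psi /= np.linalg.norm(psi)                          # keep state normalized
    return np.real(psi.conj() @ sz @ psi)

psi0 = np.array([np.sqrt(0.3), np.sqrt(0.7)])               # P(up) = 0.3
finals = np.array([final_sz(psi0) for _ in range(200)])
print("single runs:", np.round(finals[:5], 3))              # each near +1 or -1
print("fraction at +1:", np.mean(finals > 0))               # ~0.3 (Born weights)
print("ensemble mean:", round(float(finals.mean()), 2))     # ~ -0.4, the mixture value
```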
atyy said:
I haven't studied the thermal interpretation, so am not commenting on it directly. However, it is reasonable to conceive solving the measurement problem by postulating the stochastic equation directly, eg. under some circumstances CSL (non-Copenhagen) and the continuous measurement formalism (Copenhagen) produce the same equation.
Demystifier said:
I agree with that, for instance the GRW theory is of that kind. But the thermal interpretation is not of that kind.
B&P produce such an equation by coarse-graining from a unitary dynamics rather than postulating it directly. Some of my claims about the thermal interpretation rely on this.
 
  • #571
ftr said:
I just saw this; does it have any relation to your interpretation, since it also talks about a thermal interpretation?
I don't have access to this book, but from the tiny Google snippets in your link, there seems to be no close connection.
 
  • #572
A. Neumaier said:
I don't have Schlosshauer's book at hand but believe he works only on this level.
I disagree, but I would invite you to check it by yourself.

And by the way, do B&P claim anywhere that they solve the measurement problem? I don't think so. In fact, I think they don't even address the measurement problem.
 
  • #573
Demystifier said:
do B&P claim anywhere that they solve the measurement problem? I don't think so. In fact, I think they don't even address the measurement problem.
Correct; they cannot, because they rely on a traditional interpretation.

But I can solve it, with their help: Their calculations are independent of any interpretation and hence apply also in the context of the thermal interpretation.
 
  • #574
A. Neumaier said:
Correct; they cannot, because they rely on a traditional interpretation.

But I do, with their help; their calculations are independent of any interpretation and hence apply also in the context of the thermal interpretation.
Fine. But if some approach could explain evolution towards a single fixed point of the density matrix, that would be a solution of the measurement problem. Hence their approach cannot explain evolution towards a single fixed point of the density matrix. So how exactly can the thermal interpretation do that?
 
  • #575
Demystifier said:
Fine. But if some approach could explain evolution towards a single fixed point of the density matrix, that would be a solution of the measurement problem. Hence their approach cannot explain evolution towards a single fixed point of the density matrix. So how exactly can the thermal interpretation do that?
Without telling what the beables are, there cannot be a solution of the measurement problem. That's the difference. The thermal interpretation has from the start unique outcomes, and must only explain which ones occur. Note that this does not involve convergence of the density matrix; only the pointer reading, i.e., in the thermal interpretation a q-expectation (not an eigenvalue), matters!
 
  • #576
I would like to see a worked-out "toy" example of how metastability leads to the selection of an eigenstate of the observable being measured. To me, it's very counter-intuitive. I actually feel that there should be a proof that it is impossible without assuming something beyond the minimal interpretation of quantum mechanics (which Bohmian mechanics does, as do the "objective" collapse models).
 
  • #577
A. Neumaier said:
Without telling what the beables are, there cannot be a solution of the measurement problem. That's the difference. The thermal interpretation has from the start unique outcomes, and must only explain which ones occur. Note that this does not involve convergence of the density matrix; only the pointer reading, i.e., in the thermal interpretation a q-expectation (not an eigenvalue), matters!
But if the density matrix does not converge, then how can the expected value, uniquely determined by the density matrix, converge? If the density matrix is
$$\rho=\frac{1}{2}\rho_1 + \frac{1}{2}\rho_2$$
then the expected value of the observable ##O## is
$$\langle O\rangle ={\rm Tr}O\rho=\frac{ \langle O\rangle_1 + \langle O\rangle_2 }{2}$$
where ##\langle O\rangle_k={\rm Tr}O\rho_k##. I don't see how can ##\langle O\rangle## converge to ##\langle O\rangle_1## or ##\langle O\rangle_2##.
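A numerical restatement of this argument, with a made-up ##2\times 2## example:

```python
# Numerical restatement of the argument (made-up 2x2 example):
# the expectation is linear in rho, so for the averaged state it can
# only sit midway between the two outcome values.
import numpy as np

rho1 = np.diag([1.0, 0.0])        # pointer state for outcome 1
rho2 = np.diag([0.0, 1.0])        # pointer state for outcome 2
rho  = 0.5 * rho1 + 0.5 * rho2    # the averaged (mixed) state
O    = np.diag([+1.0, -1.0])      # pointer observable

print(np.trace(O @ rho1).real)    # <O>_1 = +1
print(np.trace(O @ rho2).real)    # <O>_2 = -1
print(np.trace(O @ rho).real)     #  <O>  =  0, the midpoint
```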
 
  • #578
Demystifier said:
But if the density matrix does not converge, then how can the expected value, uniquely determined by the density matrix, converge?
Some function of a matrix can converge even if the matrix itself does not converge. Just like ##x_k=(k^{-1}-1)^k## does not converge but its squares converge.
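A quick numerical check of this example:

```python
# x_k = (1/k - 1)^k alternates in sign, but x_k^2 converges to e^-2.
import numpy as np
for k in (10, 11, 1000, 1001):
    x = (1.0 / k - 1.0) ** k
    print(k, round(x, 4), round(x * x, 4))
print("e^-2 =", round(np.exp(-2.0), 4))
```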
Demystifier said:
If the density matrix is
$$\rho=\frac{1}{2}\rho_1 + \frac{1}{2}\rho_2$$
then the expected value of the observable ##O## is
$$\langle O\rangle ={\rm Tr}O\rho=\frac{ \langle O\rangle_1 + \langle O\rangle_2 }{2}$$
where ##\langle O\rangle_k={\rm Tr}O\rho_k##. I don't see how can ##\langle O\rangle## converge to ##\langle O\rangle_1## or ##\langle O\rangle_2##.
This state is only an average state. The true reduced state satisfies a nonlinear stochastic dynamics under which it is unstable and decays after tiny random displacements. Averaging never preserves a nonlinear dynamics.
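The same toy double well as in the earlier sketch illustrates this point (again with made-up parameters): each run ends at ##+1## or ##-1##, while the ensemble mean stays near ##0##, the analogue of the averaged state above.

```python
# Averaging a nonlinear stochastic dynamics (illustrative toy model):
# each run of dx = -(x^3 - x) dt + sigma dW from x(0) = 0 settles at +1
# or -1, but the ensemble mean stays near 0, like the mixture above.
import numpy as np

rng = np.random.default_rng(2)
dt, sigma, steps, runs = 1e-3, 0.05, 20_000, 500
x = np.zeros(runs)                               # 500 independent trajectories
for _ in range(steps):
    x += -(x**3 - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(runs)
print("sample endpoints:", np.round(x[:5], 2))   # each close to +1 or -1
print("ensemble mean:", round(float(x.mean()), 3))  # close to 0
```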
 
  • #579
stevendaryl said:
I would like to see a worked-out "toy" example of how metastability leads to the selection of an eigenstate of the observable being measured. To me, it's very counter-intuitive.
I'd like to see such an example, too. But this stuff is quite technical, and not easy to simplify. In the thermal interpretation, no eigenstate is selected, only one of two values for the q-expectation of the pointer variable. Such a 2-valuedness is what generically happens when perturbing a metastable state in a double-well potential. Instability in more complicated systems is similar, though more complicated in detail. But detectors are quite special systems, created to produce outcomes of a certain kind.

stevendaryl said:
I actually feel that there should be a proof that it is impossible without assuming something beyond the minimal interpretation of quantum mechanics (which Bohmian mechanics does, as do the "objective" collapse models).
I also think that one needs to assume beables of some sort to get definite results. Both Bohmian mechanics and the thermal interpretation introduce such beables, but in quite different ways.
 
  • #580
A. Neumaier said:
The true reduced state satisfies a nonlinear stochastic dynamics under which it is unstable and decays after tiny random displacements.
If that's true, then why can't it solve the measurement problem by itself?
 
  • #581
A. Neumaier said:
Some function of a matrix can converge even if the matrix itself does not converge. Just like ##x_k=(k^{-1}-1)^k## does not converge but its squares converge.
But expected values (that is, beables in the thermal interpretation) are linear in the density matrix.
 
  • #582
Demystifier said:
If that's true, then why can't it solve the measurement problem by itself?
Because without beables there is no solution of the measurement problem. The thermal interpretation provides intuitive beables.
 
  • #583
Demystifier said:
But expected values (that is, beables in the thermal interpretation) are linear in the density matrix.
Yes, but the reduced dynamics is nonlinear in the density operator. Thus there is no reason to consider your particular mixture; it is an artifact of ignoring the stochasticity in ##\rho##.
 
  • #584
A. Neumaier said:
Yes, but the reduced dynamics is nonlinear in the density operator. Thus there is no reason to consider your particular mixture; it is an artifact of ignoring the stochasticity in ##\rho##.
I have a proof that you are wrong, which I will present in a separate thread.
 
  • #585
Arnold, in III.4.2 you say:

"These other variables therefore become hidden variables that would determine the stochastic elements in the reduced stochastic description, or the prediction errors in the reduced deterministic description. The hidden variables describe the unmodeled environment associated with the reduced description.6 Note that the same situation in the reduced description corresponds to a multitude of situations of the detailed description, hence each of its realizations belongs to different values of the hidden variables (the q-expectations in the environment), slightly causing the realizations to differ. Thus any coarse-graining results in small prediction errors, which usually consist of neglecting experimentally inaccessible high frequency effects. These uncontrollable errors are induced by the variables hidden in the environment and introduce a stochastic element in the relation to experiment even when the coarse-grained description is deterministic. The thermal interpretation claims that this influences the results enough to cause all randomness in quantum physics, so that there is no need for intrinsic probability as in traditional interpretations of quantum mechanics."

Bell's theorem is understood as constraining these deterministic hidden variables to exhibit either parameter dependence or source dependence. The former type are the non-local HVs in the Bohmian fashion, whereas the latter are local but superdeterministic or "conspiratorial" HVs. Which type of hidden variables are you contemplating here? Or do you propose a way out of this choice?
 
  • #586
charters said:
Which type of hidden variables are you contemplating here?
I specified precisely what my hidden variables are. I haven't tried to classify them in terms of the notions you mention. Probably any deterministic interpretation with a holistic dynamics for the universe looks conspiratorial, but maybe the technical meaning of this term is different.
 
  • #587
A. Neumaier said:
I specified precisely what my hidden variables are. I haven't tried to classify them in terms of the notions you mention. Probably any deterministic interpretation with a holistic dynamics for the universe looks conspiratorial, but maybe the technical meaning of this term is different.

I think you do need to think about this issue more closely.

If you take the route of parameter dependence you will need to address 1) the preferred foliation problem that is familiar to the Bohmians, but arises for any interpretation with this HV approach, and 2) how you supply the necessary non-local corrections to local subsystems, which the Bohmians do through their ontic pilot wave, but which I don't see how you achieve.

If you take the source-dependent, superdeterministic/conspiratorial route, you need to address the fine-tuning concerns. In these interpretations, the validity of standard quantum theory is an accidental coincidence of having exactly the right initial conditions for the HVs, so that the diachronic probabilities of normal quantum theory are produced due to this luck. I can see you had a long thread here about superdeterminism and fine tuning a couple of years ago, and though it doesn't look like it was entirely well focused, I understand if you don't want to rehash that.

Regardless, I do think you would benefit from speaking more directly on where you stand on this in the papers, as it is one of the basic frameworks by which folks mentally categorize interpretations, and so will make your ideas easier for readers to understand and place in the constellation of pre-existing approaches.
 
  • #588
charters said:
about superdeterminism
If superdeterminism means that everything is determined by the state of the universe in the Heisenberg picture, then the TI is superdeterministic. I don't see fine-tuning as a problem - the universe is what it is; we need to describe it, not explain why it is the way it is. Moreover, most of what happens in the solar system is fairly independent of the details of the state of the universe; fine-tuning matters only for the analysis of systems fine-tuned by human preparation, such as long distance entanglement experiments.

I leave it to others to classify the TI. @DarMM recently gave a classification into 5 categories, and he placed the TI in the first one, together with Bohmian mechanics.
 
  • #589
charters said:
I do think you would benefit from speaking more directly on where you stand on this in the papers, as it is one of the basic frameworks by which folks mentally categorize interpretations
A. Neumaier said:
I leave it to others to classify the TI. DarMM recently gave a classification into 5 categories, and he placed the TI in the first one, together with Bohmian mechanics.
Actually into 6, here.
DarMM said:
Category 1. Though I should rephrase it possibly.
How to classify is clearly researcher-dependent...
 
  • #590
vanhees71 said:
it's still not clarified what the interpretation of the "thermal interpretation" really is (you only told us what it is not ;-)).
You don't hear any of the positive statements about the TI.

If you compare cross sections to theory you compare q-expectations of the S-matrix, not single statistical events. If you compare spectra with experiments, you compare q-expectations of the spectral density functions, not single statistical events. If you compare quantum thermodynamic predictions with experiments, you compare q-expectations of internal energy, mass, etc., not single statistical events. The thermal interpretation talks about what is actually compared, and thus gives primary (beable) status to these q-expectations, since these are directly related to reproducible (and publishable) experimental results.

You (consistent with the tradition) only talk differently about this, giving primary status instead to the eigenvalues (which have only a statistical meaning), thus creating the illusion of the need for a statistical interpretation.

What do you expect of different interpretations if not that they talk differently about the same theory and the same experiments? If the talk is the same, the interpretation is the same. If the interpretation is different, the talk is different.

There is no interpretation of the TI in terms of your statistical interpretation, and you seem to be blind to any alternative interpretation.
 
  • #591
vanhees71 said:
How can ##\rho## (assuming it's what's called the statistical operator in the standard interpretation) be a "beable", if it depends on the picture of time evolution chosen? The same holds for operators representing observables.
I meant to say that, in any fixed picture, ##\rho## is a beable. In the thread where you posted this, we were silently using the Schrödinger picture. Picture changes are like coordinate changes.

The situation is the same as when considering a position vector as a beable of a classical system, by fixing the coordinate system.

vanhees71 said:
What's a physical quantity (...) are
$$P(t,a|\rho)=\sum_{\beta} \langle t, a,\beta|\rho(t)|t,a,\beta \rangle,$$
where ##|t,a,\beta\rangle## are the eigenvectors of ##\hat{A}## and ##\rho(t)## is the statistical operator, evolving in time according to the chosen picture of time evolution. In the standard minimal interpretation ##P(t,a|\rho)## is the probability for obtaining the value ##a## when measuring the observable ##A## precisely at time ##t##.
Yes, this physical quantity is the q-expectation ##P(t,a|\rho)=\langle B\rangle## of the q-observable
$$B=\sum_{\beta} |t,a,\beta \rangle\langle t, a,\beta|$$
Thus you agree that at least certain q-expectations are physical quantities, and we are getting closer.
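A small numerical sketch of this identity, with a made-up state and basis:

```python
# Check that sum_beta <a,beta|rho|a,beta> equals Tr(B rho) for the
# projector B onto the (here two-fold degenerate) eigenspace.
# rho and the basis vectors are made-up placeholders.
import numpy as np

rng = np.random.default_rng(3)
dim = 4

M = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
rho = M @ M.conj().T                             # positive matrix ...
rho /= np.trace(rho).real                        # ... normalized to a density matrix

e0, e1 = np.eye(dim)[0], np.eye(dim)[1]          # |a,beta> for beta = 0, 1
B = np.outer(e0, e0) + np.outer(e1, e1)          # B = sum_beta |a,b><a,b|

P_sum  = sum((v.conj() @ rho @ v).real for v in (e0, e1))
P_qexp = np.trace(B @ rho).real                  # <B> = Tr(B rho)
print(P_sum, P_qexp)                             # the two agree
```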

vanhees71 said:
Now, before one discuss or even prove anything concerning an interpretation, one must define, what's the meaning of this expression in the interpretation. I still didn't get, as what this quantity is interpreted in the thermal interpretation, because you forbid it to be interpreted as probabilities.
I do not forbid it; I only remove it from the foundations, and allow q-expectations (rather than eigenvalues) to be interpreted as the true properties (beables). See the previous post #590.

In cases where someone actually performs many microscopic measurements of ##A## in the textbook sense, this q-expectation indeed has the statistical meaning you describe.

But there are other q-observables associated to macroscopic objects (all properties considered in thermodynamics; e.g., the mass of a particular brick of iron) which can be measured by actually performing only a single measurement, and measurement statistics over single events (or over unperformed measurements) is meaningless. The thermal interpretation still applies since it is independent of statistics.

vanhees71 said:
On the other hand, I think it's pretty safe to say the universe, on a large space-time scale, is close to local thermal equilibrium, as defined in standard coordinates of the FLRW metric, where the CMBR is in local thermal equilibrium up to tiny fluctuations of the relative order of ##10^{-5}##.
I agree. This implies that the exact state of the universe is ##\rho=e^{-S/k_B}##, where the entropy operator ##S## of the universe is approximately given by an integral over the energy density operator and particle density operators, with suitable weights (intensive fields). The coarse-graining inherent in the neglect of field products in an expansion of ##S## into fields makes ##S## exactly equal to such an expression and defines exact local equilibrium as an approximate state of the universe.

Thus the state of the universe is fairly well, but not in all details, specified by our current knowledge.
 
  • #592
Again, my first objection is that an element of the formalism that is description-dependent (like the statistical operator and the observable operators in QT, which depend on the arbitrary choice of the picture of time evolution, or the electromagnetic potentials, which depend on the choice of gauge, or coordinates in classical mechanics, etc.) cannot describe something in the real world.

My second question is also not answered, because you simply again say that q-expectations are "physical quantities"; however, without an interpretation, i.e., a way to measure them, that's an empty phrase. Born's description is clear: he says it's the probability to find the possible value of the observable ##A## when measuring it precisely, and that implies that you can give a measurement procedure to measure the observable and that you can repeat the experiment as often as you like to test the prediction for the probabilities. This leads inevitably to the ensemble interpretation. Of course, it's formally the expectation value of the observable described by the projector onto the eigenspace of eigenvalue ##a## of the operator ##\hat{A}##. Yet you forbid me to use this usual probabilistic meaning of "expectation value" and rename it to "q-expectation value", without explaining what this means in the lab if not an expectation value in the sense of probability theory.

Last but not least, entropy is not an observable, and there's no operator for it. It's just defined (based on information theory, which you are not allowing in your thermal interpretation either, because this information-theoretical definition of entropy is also based on the probabilistic meaning of the quantum state) as ##S=-k_{\text{B}} \mathrm{Tr} \hat{\rho} \ln \hat{\rho}##.

On the other hand, you argue within this information-theoretical paradigm. No matter how else you redefine the meaning of entropy when denying the probability-theoretical foundations, this clearly shows that you never ever refer to the "state of the entire universe" but at most to the "state of the observable part of the universe", plus the assumption of the cosmological principle, i.e., the assumption that our neighborhood is not special in any sense and thus reflects what the part of the universe which is causally connected with our observable neighborhood should look like under this assumption. Well, and now we are completely lost in metaphysics and philosophy ;-))).
 
  • #593
vanhees71 said:
Last but not least, entropy is not an observable, and there's no operator for it. It's just defined (based on information theory, which you are not allowing in your thermal interpretation either, because this information-theoretical definition of entropy is also based on the probabilistic meaning of the quantum state) as ##S=-k_{\text{B}} \mathrm{Tr} \hat{\rho} \ln \hat{\rho}##.
For a density operator ##\rho## with positive spectrum, ##S:=-k_B\log\rho## is a well-defined operator, and I can give it any name I like. I call it the entropy operator, since its q-expectation is your entropy. In this way, the entropy operator is well-defined, and its q-expectation agrees in equilibrium with the observable thermodynamic entropy, just as the q-expectation of the Hamiltonian agrees in equilibrium with the observable thermodynamic internal energy. Thus everything is fully consistent.

Nothing information-theoretic is involved, unless you read it into the formulas.
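A minimal numerical sketch of this definition (with ##k_B=1## and a made-up ##\rho##):

```python
# The entropy operator S := -k_B log(rho) is well defined whenever rho
# has positive spectrum; its q-expectation Tr(rho S) reproduces the
# familiar value -k_B Tr(rho ln rho). (k_B = 1; rho is a made-up example.)
import numpy as np
from scipy.linalg import logm

rho = np.diag([0.5, 0.3, 0.2])            # positive spectrum, trace 1
S = -logm(rho)                            # the entropy operator
print(np.trace(rho @ S).real)             # q-expectation of S ...
print(-np.trace(rho @ logm(rho)).real)    # ... equals -Tr(rho ln rho) ~ 1.0297
```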
 
  • #594
ftr said:
Arnold, I just saw this; does it have any relation to your interpretation, since it also talks about a thermal interpretation?
I checked it; there is no relation at all. Your reference is instead about the interpretation of certain states as thermal states.
 
  • #595
vanhees71 said:
On the other hand, you argue within this information-theoretical paradigm.
No. The relevant state is an objective state of the full universe, independent of anyone's knowledge or even its knowability. Only its approximation by something explicit is subjective. But the same holds for the state of a Laplacian universe. Any bounded subsystem of it can know only a very limited part of this state.
 
