Statistical ensemble interpretation done right

  • #176
A. Neumaier said:
Finding the right framework in which to solve tough mathematical problems that have been unsolved for years in spite of many attempts usually involves much philosophical pondering about the "interpretation" of the problem!

The philosophical part goes away only after the problems have been solved.
vanhees71 said:
I think this vast work gives enough glimpses of more comprehensive descriptions to exorcize any philosophical speculations ;-)).
I think that the "philosophical pondering" and the "philosophical part" here should not be confused with "philosophical speculations". More likely, the "philosophical pondering about the interpretation of the problem" will turn out to be mostly metamathematics, with a small amount of linguistics and semantics. We know that you are not a huge fan of semantics either, but words do have meaning, and mathematical formalisms can have meaning too.

Note also that the word "interpretation" can have two slightly different meanings. One of the meanings is to give a mathematical model of a theory. The other meaning is to explain how a mathematical theory is used in its applications. Dismissing anything which requires careful use of words and their meaning as philosophy ensures that "problems ... unsolved for years" will continue to remain unsolved.
gentzen said:
My impression is that linguistics and metamathematics are a huge part of analytical philosophy, and perhaps most of the stuff called "philosophy" in this forum would better just be called metamathematics.
gentzen said:
And if analytic philosophy had never happened, this would be totally unproblematic. They tried to "save" philosophy from metaphysics and postmodern nonsense. But because of them, substantial parts of most structural sciences and linguistics are now part of philosophy.
 
  • Like
Likes Lynch101, lodbrok and vanhees71
  • #177
A. Neumaier said:
I know all this. But unless one adopts the thermal interpretation, it doesn't answer questions about observations on single systems. For example, Lindblad equations and all decoherence arguments always average over a whole ensemble of identically prepared systems.
Of course, they do that formally, but as discussed many times, you can also interpret it as averaging over parts of a system over microscopically large, macroscopically small, space-time volumes. Of course, this assumes a separation of scales in this sense, i.e., that "quantum fluctuations" live on small space-time scales, while the "relevant" macroscopic observables, referring to local but microscopically large numbers of "microscopic degrees of freedom", vary on large macroscopic space-time scales. This is the idea behind the gradient expansion used to go from the full microscopic Kadanoff-Baym equations (or many-body Dyson-Schwinger equations) to a semiclassical Boltzmann-like transport equation in the Wigner representation.

In this sense you get an effective macroscopic description of single macroscopic systems from the underlying (probabilistic) quantum dynamics of its microscopic constituents.
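Schematically, the step described here reads as follows (for a scalar two-point function; a sketch with standard conventions, which vary between references): the Wigner transform
$$G(X,p)=\int d^4s\,e^{ip\cdot s}\,G\!\left(X+\tfrac{s}{2},\,X-\tfrac{s}{2}\right)$$
turns the convolutions of the Kadanoff-Baym equations into Moyal products, which a first-order gradient expansion reduces to Poisson brackets,
$$(A\star B)(X,p)\approx AB+\tfrac{i}{2}\left(\partial_X A\cdot\partial_p B-\partial_p A\cdot\partial_X B\right),$$
an approximation controlled precisely by the assumed separation of scales: slow macroscopic variation in $X$ against fast microscopic variation in $p$.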
 
  • #178
vanhees71 said:
Of course, they do that formally, but as discussed many times, you can also interpret it as averaging over parts of a system over microscopically large, macroscopically small, space-time volumes. Of course, this assumes a separation of scales
But you cannot do this to analyze a single measurement of a single particle, say. At least the analysis is highly nontrivial, and nobody has succeeded in giving a precise analysis of this without smuggling in Born's rule, which requires many measurements to be meaningful. This is precisely the step that is missing in the solution of the measurement problem through the thermal interpretation.
 
  • Like
Likes physika
  • #179
Macroscopic observables do not refer to single particles but are rather collective variables. E.g., for a solid body approximated as a classical point particle, you consider the center-of-mass position vector. For a gas close to equilibrium you use hydrodynamics, which describes the flow of "fluid cells" which are macroscopically small but microscopically large, i.e., they still contain many particles. The quantum fluctuations are overwhelmed by the thermal fluctuations, which in turn are also pretty small on the macroscopic scale. That's why you get effectively classical behavior of macroscopic systems. What do you think is still missing in the understanding of classical behavior from the underlying microscopic (quantum) dynamics? I thought this is indeed much in the spirit of your "thermal interpretation", although you deny the standard use of probabilities as defined by the general Born rule, for a reason I still do not understand.
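The scale separation invoked here is the usual $1/\sqrt{N}$ suppression. As a minimal sketch: for a collective variable $A=\sum_{i=1}^N a_i$ built from $N$ roughly independent microscopic contributions,
$$\frac{\Delta A}{\langle A\rangle}\sim\frac{1}{\sqrt{N}},$$
so a fluid cell with $N\sim 10^{20}$ particles has relative fluctuations of order $10^{-10}$, negligible on the macroscopic scale.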
 
  • #180
vanhees71 said:
What do you think is still missing in the understanding of classical behavior from the underlying microscopic (quantum) dynamics? I thought this is indeed much in the spirit of your "thermal interpretation", although you deny the standard use of probabilities as defined by the general Born rule, for a reason I still do not understand.
Have you ever heard of Pasch's axiom? It was THE axiom missing from Euclid's axioms. You may think: why is it missing, isn't it SO OBVIOUS that we don't even need to write down that axiom? Well, if you just look at the theory defined by Euclid's axioms, then that theory would also allow other models for which many constructions from Euclid would not work, and many theorems from Euclid would not be true. Of course, we all know that those models were not intended by Euclid, and that is precisely why we can say that Pasch's axiom was missing.
gentzen said:
However, the state might not be the only reason, why there is a measurement problem. For example, A. Neumaier's thermal interpretation uses q-expectations and q-corrections instead of the state. But even here, you don't "automatically" solve the problem of unique results. The thermal interpretation needs an additional assumption for that (the assumption is stated as: there is only a single world).
You may wonder what the alternative is to there being only a single world. Well, there could be two worlds, or three worlds, or 42 worlds, or infinitely many worlds. And if you have for example three worlds, then there are several ways in which those three worlds could be related to our experiences.
But as long as you cannot accept that Pasch's axiom was really missing from Euclid's axioms, you will have a very tough time trying to make sense of that.
 
  • #181
vanhees71 said:
Macroscopic observables do not refer to single particles but are rather collective variables.
But in a measurement they refer to a property of the single system measured.
vanhees71 said:
What do you think is still missing in the understanding of classical behavior from the underlying microscopic (quantum) dynamics? I thought this is indeed much in the spirit of your "thermal interpretation",
It is. The unsolved question is how precisely a single interaction with a single particle is reflected mathematically in the corresponding collective variable read from the detector.
vanhees71 said:
although you deny the standard use of probabilities as defined by the general Born rule, for a reason I still do not understand.
I never denied this; it is included as a special case. But the thermal interpretation goes beyond it in claiming approximate but objective properties for single systems, where Born's rule (which is about properties of identically prepared ensembles) is silent. This is needed to give the term measurement a formal mathematical meaning.
 
  • #182
A. Neumaier said:
But in a measurement they refer to a property of the single system measured.

It is. The unsolved question is how precisely a single interaction with a single particle is reflected mathematically in the corresponding collective variable read from the detector.

I never denied this; it is included as a special case. But the thermal interpretation goes beyond it in claiming approximate but objective properties for single systems, where Born's rule (which is about properties of identically prepared ensembles) is silent. This is needed to give the term measurement a formal mathematical meaning.
That's what I never understood. For me you have "objective properties for single systems" in the case of macroscopic systems, where the macroscopic coarse-grained description is sufficient for the description of these properties, because the fluctuations (standard deviations) of the "relevant macroscopic observables" are small compared to the relevant scale of these observables' values. Then it's "almost certain" to find a specific value given by the macroscopic properties of the system (the extreme case is thermal equilibrium, where temperature and chemical potential(s) determine these values).

What is the concrete generalization of this statistical standard argument of (quantum) statistical physics and why do you need it?
 
  • #183
vanhees71 said:
That's what I never understood. For me you have "objective properties for single systems" in the case of macroscopic systems, where the macroscopic coarse-grained description is sufficient for the description of these properties, because the fluctuations (standard deviations) of the "relevant macroscopic observables" are small compared to the relevant scale of these observables' values. Then it's "almost certain" to find a specific value given by the macroscopic properties of the system (the extreme case is thermal equilibrium, where temperature and chemical potential(s) determine these values).
This is only an informal argument that must be made mathematically cogent. Herein lies the problem.
vanhees71 said:
What is the concrete generalization of this statistical standard argument of (quantum) statistical physics and why do you need it?
What needs to be proved is that the unitary dynamics for a macroscopic system, coupled to a single particle in such a way that the former acts as a detector, almost always produces, to high accuracy, a measurement outcome (coarse-grained expectation value) that equals one of the eigenvalues of the quantum observable measured.

If one defines the macroscopic system by a mixed state corresponding to a grand canonical ensemble with time-dependent intensive variables (which would be the naive attempt implied by your description) then the unitary dynamics produces instead a superposition of macroscopic systems, each one corresponding to one of the possible eigenvalues.

Thus the naive approach does not give the physically observed answer, and one needs something more sophisticated, something unknown so far. My informal analysis of what is needed points to chaotic motion that (due to the environment) settles quickly to an equilibrium state. But the standard decoherence arguments always take an average somewhere, hence produce only an average answer, but not the required answer in almost every single case. Thus one needs better mathematical tools that apply to the single case. I am working on these, but progress is slow.
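In the textbook von Neumann pre-measurement scheme (a toy model, not the grand canonical setup just described), the superposition in question reads
$$\big(\alpha\,|{\uparrow}\rangle+\beta\,|{\downarrow}\rangle\big)\otimes|D_0\rangle\;\xrightarrow{U}\;\alpha\,|{\uparrow}\rangle\otimes|D_+\rangle+\beta\,|{\downarrow}\rangle\otimes|D_-\rangle,$$
with $|D_\pm\rangle$ macroscopically distinct detector states. Unitarity alone delivers only the right-hand side; what would have to be shown is that the coupling to a chaotic macroscopic environment drives almost every single run into one of the two branches, with the Born weights $|\alpha|^2$ and $|\beta|^2$.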
 
Last edited:
  • Like
Likes Fra, mattt, physika and 3 others
  • #184
A. Neumaier said:
But the standard decoherence arguments always take an average somewhere, hence produce only an average answer, but not the required answer in almost every single case. Thus one needs better mathematical tools that apply to the single case. I am working on these, but progress is slow.
Mathematically it is the difference between convergence in the mean and almost everywhere convergence. The latter is much harder to achieve than the former. All arguments I have seen in the statistical mechanics of nonequilibrium processes (the measurement process clearly is such a process) are about convergence in the mean only.
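For reference, the two notions in standard notation:
$$X_n\to X\ \text{in the mean:}\ \ \mathbb{E}\,|X_n-X|\to 0,\qquad X_n\to X\ \text{almost everywhere:}\ \ \Pr\Big(\lim_{n\to\infty}X_n=X\Big)=1.$$
Neither implies the other in general; only the second controls what happens in each single realization, which is what a statement about a single measurement requires.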
 
  • Like
Likes mattt and vanhees71
  • #185
Ok, but why do you think that's not sufficient? If the "fluctuations" (standard deviations) are small on the scale of the relevant observables, the result is with high probability the mean (expectation value).
 
  • #186
A. Neumaier said:
Thus the naive approach does not give the physically observed answer, and one needs something more sophisticated, something unknown so far. My informal analysis of what is needed points to chaotic motion that (due to the environment) settles quickly to an equilibrium state. But the standard decoherence arguments always take an average somewhere, hence produce only an average answer, but not the required answer in almost every single case. Thus one needs better mathematical tools that apply to the single case. I am working on these, but progress is slow.
But in such a case there is no predetermined single outcome, because your measured system is not well-described by a coarse-grained state. If you do a Stern-Gerlach experiment with silver atoms from an oven, you don't expect to find each silver atom at one spot corresponding to the average value 0 of the magnetic moment but randomly (with equal probability) at the one spot for a magnetic moment of +1 magneton or the one for -1 magneton. Indeed, the single Ag atom in this state has no predetermined direction of its magnetization but a random one, and that's what you want to get with your calculation.
 
  • #187
vanhees71 said:
Ok, but why do you think that's not sufficient? If the "fluctuations" (standard deviations) are small on the scale of the relevant observables, the result is with high probability the mean (expectation value).
No. If the Born probability is 50% for two possible results then the result is with high probability one of two values, while the naively predicted expectation is the mean of the two values.
vanhees71 said:
If you do a Stern-Gerlach experiment with silver atoms from an oven, you don't expect to find each silver atom at one spot corresponding to the average value 0 of the magnetic moment but randomly (with equal probability) at the one spot for a magnetic moment of +1 magneton or the one for -1 magneton.
Precisely. This means that in each single case you must get a macroscopic state describing exactly one of the two spots, but what one gets in each single case from a naive argument is instead a superposition of two macroscopic states, one for each spot!
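A minimal numerical check of both points (a hypothetical qubit-plus-pointer toy, not the actual detector dynamics discussed above):

```python
import numpy as np

# Observable with eigenvalues +1, -1 and Born probabilities 1/2 each.
p = np.array([0.5, 0.5])
vals = np.array([+1.0, -1.0])

mean = p @ vals                        # naive "expected" reading: 0.0
std = np.sqrt(p @ vals**2 - mean**2)   # standard deviation: 1.0
print(mean, std)  # the fluctuation is as large as the scale of the values

# Von Neumann pre-measurement: (|0> + |1>)/sqrt(2) tensored with |ready>
# evolves unitarily into an entangled state; tracing out the qubit leaves
# the pointer in an even mixture, not in one of the two readings.
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices (q, d, q', d')
rho_pointer = np.trace(rho, axis1=0, axis2=2)  # partial trace over the qubit
print(rho_pointer)  # diag(0.5, 0.5)
```

The pointer ends in the even mixture diag(0.5, 0.5): the correct ensemble average, but silent about which spot is seen in any given run.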
 
  • Like
Likes mattt, physika and PeterDonis
  • #188

Still, I think this is a goal that never can be achieved, because what QT tells you is that indeed the Ag atom hasn't a determined value of the measured component of the magnetic moment before it went through the magnet. It's completely random with probability 1/2 for either of the possible outcomes. That's why for a single experiment there's no way to know beforehand, what will come out, and all that's provided by QT are these probabilities. A good measurement device delivers these correct probabilities for the outcomes in the limit of large statistical samples. It cannot deliver more, because the measured observable's values are true random variables, i.e., this randomness is not due to our ignorance about the state of the Ag atom but because it's truely random. Isn't this "non-realism" the almost inevitable conclusion of all the Bell tests?
 
  • Like
Likes Lord Jestocost
  • #189
vanhees71 said:
I'd say you rather get a mixed state due to decoherence, but that's of course irrelevant for the argument.
Well, actually one gets a mixed state that is something like a superposition of the grand canonical states corresponding to the two measurement results; talking about superpositions when starting with a mixed state is loose talk only.
vanhees71 said:
Still, I think this is a goal that never can be achieved, because what QT tells you
... what the minimal statistical interpretation of QT tells you ...

But the thermal interpretation is more than the minimal statistical interpretation;
the latter appears only as a special case.
vanhees71 said:
is that indeed the Ag atom doesn't have a determined value of the measured component of the magnetic moment before it went through the magnet.
Like the detector, the atom, properly prepared, has a definite pure or mixed state, hence has according to the thermal interpretation definite properties. Thus there is a well-defined mathematical problem to be solved, and whether it is solvable is an open question.
vanhees71 said:
It's completely random with probability 1/2 for either of the possible outcomes. That's why for a single experiment there's no way to know beforehand what will come out, and all that's provided by QT are these probabilities.
This is what happens in practice, i.e., when there is much uncertainty about most details.

But according to the thermal interpretation, a complete knowledge of the joint state of the detector, the atom, and the environment at the start of the experiment determines the joint state at all later times, deterministically and unitarily. Thus one can in principle find out the properties of the detector at the end of the measurement. That one cannot do it in practice doesn't matter; one cannot calculate in practice the Newtonian dynamics of an N-particle system; nevertheless one can answer many qualitative questions.
vanhees71 said:
this randomness is not due to our ignorance about the state of the Ag atom but because it's truly random. Isn't this "non-realism" the almost inevitable conclusion of all the Bell tests?
No. The violation of Bell's inequalities in Bell tests says nothing at all about true randomness, since Bell's inequalities are based on a classical model of physics, hence are completely silent about quantum mechanics. (Except that they prove that Bell's assumptions are not valid in quantum mechanics.)
 
  • Like
Likes Lynch101
  • #190
vanhees71 said:
A good measurement device delivers these correct probabilities for the outcomes in the limit of large statistical samples. It cannot deliver more, because the measured observable's values are true random variables, i.e., this randomness is not due to our ignorance about the state of the Ag atom but because it's truly random. Isn't this "non-realism" the almost inevitable conclusion of all the Bell tests?
[Bold by LJ]

The question regarding this should be:

Are there any experimentally verifiable hints which might question this inevitable conclusion?
 
  • #191
No! If there were, we'd need a new theory, different from QT.
 
  • Like
Likes Lord Jestocost
  • #192
vanhees71 said:
If you do a Stern-Gerlach experiment with silver atoms from an oven, you don't expect to find each silver atom at one spot corresponding to the average value 0 of the magnetic moment but randomly (with equal probability) at the one spot for a magnetic moment of +1 magneton or the one for -1 magneton. Indeed, the single Ag atom in this state has no predetermined direction of its magnetization but a random one, and that's what you want to get with your calculation.
A. Neumaier said:
Precisely. This means that in each single case you must get a macroscopic state describing exactly one of the two spots, but what one gets in each single case from a naive argument is instead a superposition of two macroscopic states, one for each spot!
The naivety is in the construction of the sample space of experimental outcomes. If we construct a sample space that includes the outcomes +1 and -1, then we would not expect, for any single run, a superposition of the two.

In QM, unlike classical mechanics, there is no unique, maximally fine-grained sample space of outcomes, and so the experimenter must always select one appropriate for the experiment they are interested in. It's something I tried to explore in this thread
 
Last edited:
  • Like
Likes vanhees71
  • #193
Morbert said:
The naivety is in the construction of the sample space of experimental outcomes. If we construct a sample space that includes the outcomes +1 and -1, then we would not expect, for any single run, a superposition of the two.

In QM, unlike classical mechanics, there is no unique, maximally fine-grained sample space of outcomes, and so the experimenter must always select one appropriate for the experiment they are interested in. It's something I tried to explore in this thread
But since the experimenter is part of the environment, its activities (''must always select'') should be explainable in terms of the physical laws - at least if the experimenter is just a machine doing the recordings. The unique outcome must come from somewhere...

The natural - and the only natural - source for the unique outcome is symmetry breaking due to chaoticity. It is of the same kind as the choice made by a straight Newtonian rod subject to an increasing longitudinal force that at some point makes the rod bend in a random direction. (In 2D physics, this would result in a binary choice.)

The unsolved problem is how to make this principle work mathematically in the quantum case in such a way that, in sufficient generality, the correct Born probabilities appear.
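A classical toy version of the rod choice (a sketch under the assumption of overdamped pitchfork dynamics dx = (x - x^3) dt + sigma dW; nothing quantum is modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)

def bend(n_steps=20_000, dt=1e-3, noise=1e-6):
    """One run of dx = (x - x^3) dt + noise * dW, started exactly on the
    symmetric unstable point x = 0 (the straight rod)."""
    x = 0.0
    for _ in range(n_steps):
        x += (x - x**3) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return np.sign(x)  # the run ends in one of the two bent states

runs = np.array([bend() for _ in range(200)])
print(runs[:10])           # each single run has a unique outcome, +1 or -1
print((runs == 1).mean())  # ~0.5: the symmetry survives only statistically
```

Each run yields a definite bend although the noiseless deterministic dynamics would keep x = 0 forever; the unsolved problem named above is the quantum analogue of this selection step, with Born weights.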
 
  • #194
A. Neumaier said:
The natural - and the only natural - source for the unique outcome is symmetry breaking due to chaoticity.
No. Roland Omnès, a proponent of the consistent histories interpretation, was clear that one cannot disprove MWI. One cannot prove unique outcomes, at least not without going beyond non-relativistic QM. In non-relativistic QM (i.e. where Bohmian Mechanics works), one cannot even disprove that there are three, or 42, outcomes. So for somebody like me, who is not very good at QFT, the only reasonable way forward is to assume unique outcomes as an additional axiom, and only try to show that the resulting theory is still consistent.

A. Neumaier said:
The unsolved problem is how to make this principle work mathematically in the quantum case in such a way that, in sufficient generality, the correct Born probabilities appear.
And if you don't want to use an additional axiom, then you risk needing a huge amount of QFT knowledge. The drawback of this is that the number of people able to follow your mathematical proof (if you should be able to find one) will be very small.
 
  • #195
gentzen said:
No. Roland Omnès, a proponent of the consistent histories interpretation, was clear that one cannot disprove MWI.
I am not trying to disprove MWI. I think MWI is completely nonpredictive since everything happens that can possibly happen. Thus it has no scientific content at all.
gentzen said:
One cannot prove unique outcomes, at least not without going beyond non-relativistic QM.
Not in the minimal interpretation, by its assumptions. But the thermal interpretation is not minimal but maximal, hence has a broader basis from which to proceed.
gentzen said:
And if you don't want to use an additional axiom, then you risk needing a huge amount of QFT knowledge. The drawback of this is that the number of people able to follow your mathematical proof (if you should be able to find one) will be very small.
Whatever will be needed will be used. Who can follow the arguments is a matter of time. In the beginnings of relativity theory there were only a handful of experts who understood it, but now even lay people believe they understand the essentials. The same happens whenever some new approach settles something that had been a long-term puzzle.
 
  • #196
A. Neumaier said:
But since the experimenter is part of the environment, its activities (''must always select'') should be explainable in terms of the physical laws - at least if the experimenter is just a machine doing the recordings. The unique outcome must come from somewhere...
Not according to standard QT. We indeed also don't need an "experimenter" (not to run into the even stranger idea that the "final collapse" would need a "conscious observer" à la von Neumann/Wigner ;-)), just a measurement device, which stores the result somehow (that's how modern experiments in particle physics work: you have detectors which store the results of measurements electronically, and these data can then be read out and evaluated later).

Taking QT in its minimal statistical interpretation seriously, and for me that's the most straightforward conclusion of all the experiments testing QT (particularly "Bell tests"), there is no cause for the outcome of the measurement on a single system. The measured observable does not have a determined value before the measurement, and that's why the outcome is unpredictable, and only with a sufficiently "large statistical sample" of equally performed experiments (equally prepared systems) can you test the predicted probabilities of QT. There's no way to know the unique outcome of a measurement, given the preparation of the system, because the measured observable takes random values with probabilities predicted by QT.

The Bell tests, demonstrating the violation of Bell's inequalities, at least tell us that if you assume "locality" in the usual sense of relativistic theories, including standard relativistic, microcausal QFT, you must accept that "realism" has to be given up, where "realism" means that there is some hidden cause behind the outcome of a measurement on an individual system, i.e., the randomness of the measurement outcomes is "only due to our ignorance of this cause" (usually described as the existence of some additional hidden variables, which we can't observe or simply don't know for whatever reasons).
A. Neumaier said:
The natural - and the only natural - source for the unique outcome is symmetry breaking due to chaoticity. It is of the same kind as the choice made by a straight Newtonian rod subject to an increasing longitudinal force that at some point makes the rod bend in a random direction. (in 2D physics, this would result in a binary choice.)
I don't understand what this has to do with the unique-outcome quibble. In this example you can always argue within classical physics, and the direction the rod bends is simply due to some asymmetry of the imposed force, which we are not able to determine because of limitations of our control over the direction of this force.
A. Neumaier said:
The unsolved problem is how to make this principle work mathematically in the quantum case in such a way that, in sufficient generality, the correct Born probabilities appear.
I don't understand where the motivation for this task comes from, given that all tests confirm QT, which tells us that there are in fact no causes that determine the individual measurement outcome.

I think to solve this task you necessarily must find a theory different from standard QT (e.g., something like GRW, where they assume some additional stochastic dynamics which causes the collapse of the quantum state as a real dynamical process). It may well be that you can construct such a theory, but there's no hint yet that this really is necessary to describe what we observe in Nature, i.e., "irreducibly random" outcomes of measurements on single quantum systems.
 
  • #197
vanhees71 said:
Not according to standard QT.
Not??? Only according to the minimal interpretation! Standard QT is silent about this. The unique outcome is a very reliably observed fact that in my opinion needs to be explained!
vanhees71 said:
Taking QT in its minimal statistical interpretation seriously,
I don't take it seriously since it is too minimal. The thermal interpretation is a more comprehensive maximal interpretation of QT.
vanhees71 said:
The Bell tests, demonstrating the violation of Bell's inequalities, at least tell us that if you assume "locality" in the usual sense of relativistic theories, including standard relativistic, microcausal QFT, you must accept that "realism" has to be given up, where "realism" means that there is some hidden cause behind the outcome of a measurement on an individual system,
No. Instead of arguing against interpretation issues you should read the literature analysing the interpretations - so that you know what can be asserted.

One can conclude from the Bell tests only that there is no classical local hidden variable interpretation of quantum mechanics.
vanhees71 said:
In this example you can always argue within classical physics, and the direction the rod bends is simply due to some asymmetry of the imposed force, which we are not able to determine because of limitations of our control over the direction of this force.
This example is indeed classical physics, since it was intended to serve as an analogy for what needs to be shown in the quantum case.

In my example, the force is exactly longitudinal, so the situation is exactly symmetric, and deterministic elasticity theory predicts no bend. The observed bend is due to random fluctuations (or imperfections) in the dynamics.

The same is likely to hold in quantum dynamics. I expect that noise and imperfections in preparation and experimental setup disturb the theoretical quantum dynamics and (together with dissipation in the environment) produce by symmetry breaking a unique outcome rather than the symmetric superposition.

No additional stochastic dynamics as in GRW should be needed, since coarse-graining produces enough chaoticity.
vanhees71 said:
given that all tests confirm QT, which tells us that there are in fact no causes that determine the individual measurement outcome.
The tests confirm QT. But they do not tell me that there are in fact no causes that determine the individual measurement outcome.
 
Last edited:
  • Like
Likes Lynch101, weirdoguy, Fra and 3 others
  • #198
A. Neumaier said:
Not??? Only according to the minimal interpretation! Standard QT is silent about this. The unique outcome is a very reliably observed fact that in my opinion needs to be explained!

I don't take it seriously since it is too minimal. The thermal interpretation is a more comprehensive maximal interpretation of QT.
But obviously it also can't explain the unique measurement outcome in the sense you describe it!
A. Neumaier said:
No. Instead of arguing against interpretation issues you should read the literature analysing the interpretations - so that you know what can be asserted.

One can conclude from the Bell tests only that there is no classical local hidden variable interpretation of quantum mechanics.
But isn't this precisely what you want? I.e., you want a theory where there's a cause for the single-measurement outcome, i.e., that there is some "hidden cause" behind this outcome, or that the world is, in some "hidden way", deterministic.
A. Neumaier said:
This example is indeed classical physics, since it was intended to serve as an analogy for what needs to be shown in the quantum case.

In my example, the force is exactly longitudinal, so the situation is exactly symmetric, and deterministic elasticity theory predicts no bend. The observed bend is due to random fluctuations (or imperfections) in the dynamics.
Exactly, and the randomness of these fluctuations is only due to our ignorance. In classical physics, as a deterministic theory, "in reality" the imperfections are there and fully determined.
A. Neumaier said:
The same is likely to hold in quantum dynamics. I expect that noise and imperfections in preparation and experimental setup disturb the theoretical quantum dynamics and (together with dissipation in the environment) produce by symmetry breaking a unique outcome rather than the symmetric superposition.
But the only result of the quantum dynamics of an ideal closed quantum system is always only probabilities, i.e., QT is "only" probabilistic.
A. Neumaier said:
No additional stochastic dynamics as in GRW should be needed, since coarse-graining produces enough chaoticity.
But this argument you did not accept so far. For me that's indeed all that's needed (at least FAPP), i.e., the classical behavior of macroscopic systems (including measurement devices) is sufficiently explained by "coarse-graining" to the "relevant collective macroscopic observables", e.g., a grain of silver salt in a photo plate blackened due to the interaction with a single photon, although you cannot know, given the single-photon state, which point on the plate will be blackened.
A. Neumaier said:
The tests confirm QT. But they do not tell me that there are in fact no causes that determine the individual measurement outcome.
Of course not, because QT claims there are no causes. You'd need a new deterministic theory to get this. It will be very difficult to find one in accordance with relativistic causality, because it should be a non-local theory, and the only relativistic deterministic theories are local (!!!) classical field theories, which obviously cannot explain the results of the corresponding (local) QFTs. So you'd need some non-local classical field theory, in accordance with relativistic causality. Obviously that's a very difficult task!
 
  • #199
vanhees71 said:
It cannot deliver more, because the measured observable's values are true random variables, i.e., this randomness is not due to our ignorance about the state of the Ag atom but because it's truly random.

vanhees71 said:
Taking QT in its minimal statistical interpretation seriously, and for me that's the most straightforward conclusion of all the experiments testing QT (particularly "Bell tests"), there is no cause for the outcome of the measurement on a single system. The measured observable does not have a determined value before the measurement, and that's why the outcome is unpredictable, and only with a sufficiently "large statistical sample" of equally performed experiments (equally prepared systems) can you test the predicted probabilities of QT. There's no way to know the unique outcome of a measurement, given the preparation of the system, because the measured observable takes random values with probabilities predicted by QT.
You complain a lot about philosophy but you practice the worst kind yourself very often here.
[Bold emphasis is mine]
 
  • Like
Likes weirdoguy and gentzen
  • #200
vanhees71 said:
But obviously it also can't explain the unique measurement outcome in the sense you describe it!
Not yet. But this will change in due time. Difficult problems are not solved overnight.
vanhees71 said:
you want a theory where there's a cause for the single-measurement outcome,
Yes, and the thermal interpretation provides a framework for doing that.
vanhees71 said:
i.e., that there is some "hidden cause" behind this outcome, or that the world is, in some "hidden way", deterministic.
It is neither local nor hidden, hence Bell's assumptions don't apply.

It is not hidden since my observables are the N-point correlation functions of QFT, as they were always used, but without the ensemble interpretation - which does not make sense for quantum fields in spacetime, since a spacetime field cannot be prepared repeatedly.

And it is not local in Bell's sense since correlation functions are not local but multilocal observables. Nevertheless, causality is guaranteed by the standard QFT approach.
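For concreteness, the objects meant here are presumably of the form (standard notation, with $\rho$ the state of the field system)
$$\langle\phi(x_1)\cdots\phi(x_N)\rangle_\rho=\operatorname{Tr}\big(\rho\,\phi(x_1)\cdots\phi(x_N)\big),$$
which depends on $N$ spacetime points at once and is therefore multilocal rather than local; microcausality, $[\phi(x),\phi(y)]=0$ for spacelike separated $x$ and $y$, is what secures causality in the standard QFT sense.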
vanhees71 said:
Exactly, and the randomness of these fluctuations is only due to our ignorance. In classical physics, as a deterministic theory, "in reality" the imperfections are there and fully determined.
And one may presume that the same holds in quantum physics, without having to assume irreducible randomness.
vanhees71 said:
But the only result of the quantum dynamics of an ideal closed quantum system is always only probabilities, i.e., QT is "only" probabilistic.
Only when you take the minimal interpretation stance. But this restricts attention to only a small part of the possibilities that are open in the thermal interpretation.
vanhees71 said:
Of course not, because QT claims there are no causes.
No. You claim that, without giving proof.

QT does not talk about causes. It has no concept that specifies what a cause should mean.
vanhees71 said:
You'd need a new deterministic theory to get this.
No. The old deterministic unitary dynamics represented by the Wightman axioms for N-point functions suffices.

vanhees71 said:
It will be very difficult to find one in accordance with relativistic causality,
No. It is manifest in relativistic QFT.
vanhees71 said:
because it should be a non-local theory,
It is Bell-nonlocal but causal, hence local in your terminology.
 
  • Like
Likes gentzen
  • #201
vanhees71 said:
the only result of the quantum dynamics of an ideal closed quantum system is always only probabilities
No. An ideal closed quantum system cannot be observed from the outside, hence probabilities do not apply.

And for observations from the inside of a quantum system, current interpretations - with the sole exception of the thermal interpretation - have nothing to say. It would mean for them to give a mathematical definition of what it means for one part of a quantum system to observe another part!
 
Last edited:
  • #202
A. Neumaier said:
It is not hidden since my observables are the N-point correlation functions of QFT, as they were always used, but without the ensemble interpretation - which does not make sense for quantum fields in spacetime, since a spacetime field cannot be prepared repeatedly.
Can you explain why "a spacetime field cannot be prepared repeatedly"? I don't see why the ensemble interpretation should not work for QFT. If I were better at QFT, I probably would simply object to that statement.

A. Neumaier said:
Whatever will be needed will be used. Who can follow the arguments is a matter of time. In the beginnings of relativity theory there were only a handful of experts who understood it, but now even lay people believe they understand the essentials. The same happens whenever some new approach settles something that had been a long-term puzzle.
And Einstein made many mistakes on the way while developing general relativity. Some of them he found himself later, but many of them were pointed out to him by other people (mostly mathematicians) who were familiar with Riemannian geometry and tensor calculus.
 
  • #203
gentzen said:
Can you explain why "a spacetime field cannot be prepared repeatedly"?
I think he means that you can't prepare multiple spacetimes with the same process and then run the same experiment on each one in order to collect statistics. We only have one spacetime to work with, the spacetime of our universe.
 
  • Like
Likes vanhees71
  • #204
gentzen said:
Can you explain why "a spacetime field cannot be prepared repeatedly"?
PeterDonis said:
I think he means that you can't prepare multiple spacetimes with the same process and then run the same experiment on each one in order to collect statistics. We only have one spacetime to work with, the spacetime of our universe.
But he is not talking about a spacetime, but only about a spacetime field.

And his claim seems to be that the ensemble interpretation does not work for QFT, i.e. no need to invoke a quantum theory of spacetime, just normal QFT.
 
  • #205
gentzen said:
he is not talking about a spacetime, but only about a spacetime field.
A spacetime field is a field on spacetime. You can't go back to the same region of spacetime and re-prepare a field in it a second time. You only get to do it once for a given region of spacetime.
 
  • Like
Likes vanhees71 and mattt
  • #206
PeterDonis said:
A spacetime field is a field on spacetime. You can't go back to the same region of spacetime and re-prepare a field in it a second time. You only get to do it once for a given region of spacetime.
But if we focus on the idealization of flat spacetime for the moment, then we will have no problem finding regions of spacetime equivalent to the given region allowing us to re-prepare a field many times.
 
  • Like
Likes vanhees71
  • #207
gentzen said:
if we focus on the idealization of flat spacetime for the moment, then we will have no problem finding regions of spacetime equivalent to the given region
Of course in that idealized case this will be true, since you've idealized away the possibility of any region of spacetime being different from any other.
 
  • Like
Likes vanhees71 and gentzen
  • #208
A. Neumaier said:
And for observations from the inside of a quantum system, current interpretations - with the sole exception of the thermal interpretation - have nothing to say. It would mean for them to give a mathematical definition of what it means for one part of a quantum system to observe another part!
Decoherent histories has been around for a good few decades at this stage, with one motivation for its development being the description of closed systems, and measurements as processes therein.
https://www.webofstories.com/play/murray.gell-mann/163
https://iopscience.iop.org/article/10.1088/1742-6596/2533/1/012011/pdf
https://arxiv.org/abs/1704.08725

It gives a clear account of what it means for a measurement to occur in a closed system. It might even recover the ensemble interpretation insofar as we could conceptualize an infinite ensemble of histories of the universe and associate measurement with correlations over the ensemble.
 
  • #209
gentzen said:
But he is not talking about a spacetime, but only about a spacetime field.

And his claim seems to be that the ensemble interpretation does not work for QFT, i.e. no need to invoke a quantum theory of spacetime, just normal QFT.
Of course, it works for QFT. It's how QFT is used in the lab: in scattering experiments you describe asymptotically free states in the initial state, i (usually 2 particles with pretty well-determined momenta, sometimes also some polarization), and in the final state, f. The corresponding S-matrix elements (transition probability amplitudes) squared for the process i -> f give the cross sections, which can be measured by repeating the collision experiment many times. That's statistics and the use of the statistical interpretation in pretty pure form, I'd say.
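Schematically (standard scattering notation, glossing over flux and phase-space factors):
$$P(i\to f)=|\langle f|S|i\rangle|^2,\qquad \sigma=\frac{N_{\text{events}}}{\mathcal{L}_{\text{int}}},$$
with $\mathcal{L}_{\text{int}}$ the integrated luminosity: the measured cross section is a relative frequency over many repetitions of the same preparation, i.e., ensemble statistics in its most direct form.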
 
  • #210
vanhees71 said:
Of course, it works for QFT. It's how QFT is used in the lab: in scattering experiments you describe asymptotically free states in the initial state, i (usually 2 particles with pretty well-determined momenta, sometimes also some polarization), and in the final state, f. The corresponding S-matrix elements (transition probability amplitudes) squared for the process i -> f give the cross sections, which can be measured by repeating the collision experiment many times. That's statistics and the use of the statistical interpretation in pretty pure form, I'd say.
That is statistics, but you can use any interpretation. It seems that you don't always distinguish statistics from the statistical interpretation.
 
  • Like
Likes Lynch101
