Are there signs that any Quantum Interpretation can be proved or disproved?

In summary, according to the experts, decoherence has not made much progress in improving our understanding of the measurement problem.
  • #106
WernerQH said:
I think "thermal interpretation" is a misnomer. How can you call it an interpretation if it carefully avoids the question what quantum theory is about.
If you look at my book (or the earlier papers), you'll see that it does not avoid at all the question what quantum theory is about, but makes many statements about it that are different from the standard interpretations.
WernerQH said:
It is more like an empty shell, big enough to contain theories as diverse as quantum mechanics and thermodynamics.
Thus it is not an empty shell but a full shell - able to interpret both.
 
  • Like
Likes PeroK and gentzen
  • #107
A. Neumaier said:
This is not a fact but your assumption. No known fact contradicts the possibility of a "pure quantum" description of the world; in contrast, there is no sign at all that a classical description must be used in addition.
Except the failure to deliver such a description. (I don't accept MWI as being a consistent interpretation. At least I have never seen a consistent version.) Nonexistent theories cannot be discussed. And therefore they cannot count.
A. Neumaier said:
... it postulates a classical world in addition to the quantum world. How the two can coexist is unexplained.
Yes. That's the problem. Which does not exist in realistic interpretations.
A. Neumaier said:
It does not exist for quantum field theory, which is needed for explaining much of our world!
It exists for QFT. All you need for this is to use the fields to define the configuration space.

Bohm, D., Hiley, B.J., Kaloyerou, P.N. (1987). An ontological basis for the quantum theory. Phys. Reports 144(6), 321-375.
 
  • #108
Sunil said:
Except the failure to deliver such a description. (I don't accept MWI as being a consistent interpretation. At least I have never seen a consistent version.) Nonexistent theories cannot be discussed. And therefore they cannot count.
MWI is not the only "pure quantum" description; the thermal interpretation, too, has no additional non-unitary dynamics. But since you are obviously unaware of it, it would probably not help if I tried to discuss it with you. So let me try something else instead.

The Bose-Einstein statistics was derived and published in 1924 before the invention of the Born rule (and other elements of the Copenhagen interpretation). So I guess that its derivation does not depend on any specific interpretation of quantum mechanics. And I guess the same is true for derivations of other thermal states. For a quantum experiment in a laboratory, instead of trying to prepare a special pure state, it can also make sense to just prepare a "nice" thermal state. Now state preparation and measurement are closely related. So for the measurement too, it can make sense to just measure some nice smooth thermodynamical property, instead of going for some discrete property exhibiting quantum randomness. And again, it is quite possible that the derivation of such smooth thermodynamical properties does not depend on any specific interpretation of quantum mechanics like MWI, Copenhagen, or de Broglie-Bohm.
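To make the interpretation-independence a bit more concrete, here is a rough sketch of the 1924-style counting argument (my own condensed version, not Bose's exact derivation): distributing ##n_s## indistinguishable quanta over ##g_s## cells at energy ##\varepsilon_s## gives
$$W = \prod_s \frac{(n_s + g_s - 1)!}{n_s!\,(g_s - 1)!},$$
and maximizing ##\ln W## at fixed total energy and particle number yields the Bose-Einstein occupation of level ##s##,
$$\bar n_s = \frac{g_s}{e^{\beta(\varepsilon_s - \mu)} - 1}.$$
Nothing in this counting invokes the projection postulate or any particular story about measurement.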
 
  • #109
gentzen said:
MWI is not the only "pure quantum" description; the thermal interpretation, too, has no additional non-unitary dynamics. But since you are obviously unaware of it, it would probably not help if I tried to discuss it with you. So let me try something else instead.

The Bose-Einstein statistics was derived and published in 1924 before the invention of the Born rule (and other elements of the Copenhagen interpretation). So I guess that its derivation does not depend on any specific interpretation of quantum mechanics. And I guess the same is true for derivations of other thermal states. For a quantum experiment in a laboratory, instead of trying to prepare a special pure state, it can also make sense to just prepare a "nice" thermal state. Now state preparation and measurement are closely related. So for the measurement too, it can make sense to just measure some nice smooth thermodynamical property, instead of going for some discrete property exhibiting quantum randomness. And again, it is quite possible that the derivation of such smooth thermodynamical properties does not depend on any specific interpretation of quantum mechanics like MWI, Copenhagen, or de Broglie-Bohm.
There are many approaches that aim to present a "pure quantum" description, but IMHO they all fail to deliver. (I specifically mentioned MWI only because I have a quite special opinion about MWI, namely that it is not even a well-defined interpretation; the phrase "credo quia absurdum" seems to be about MWI believers.) Roughly, these are all interpretations which attempt to solve the measurement problem without really solving it. (In realistic interpretations it does not exist, given that they have a trajectory already on the fundamental level.)

I don't get the point of your consideration. Of course, one can use thermodynamics in state preparation as well as in measurements.
 
  • Haha
  • Skeptical
Likes PeroK and atyy
  • #110
A. Neumaier said:
No known fact contradicts the possibility of a "pure quantum" description of the world; in contrast, there is no sign at all that a classical description must be used in addition. Once the latter is assumed, one must show how to define the classical in terms of the more comprehensive quantum. This is the measurement problem. You simply talk it away by making this assumption.
The "sign" I see is that degree to which the the quantum framework is well defined, is directly dependent on the degree which the reference (classical background) is not limiting in terms of information capacity and processing power. The laws of the quantum dynamics are inferred depdend on an inference machinery living in the classical domain (which in the ideal picture is unlimited).

Once we relax the firmness of this reference and inference machinery, we likely need to acknowledge that some of the deductive power of the QM formalism becomes invalid, and we need to reconstruct our "measurement theory" based on a non-classical reference.

So if by "pure quantum" we mean the quantum formalism, including Hamiltonians and laws as inferred relative to a classical background, but with the "classical baggage" then discarded, that really makes no sense to me. To me it is a mathematical extrapolation that is unlikely to be the right way to remove the classical background. If it works, I agree it would be easier, so it makes sense to entertain the idea, but conceptually I think it's flawed.

/Fredrik
 
  • #111
Sunil said:
There are many approaches that aim to present a "pure quantum" description
Do you mean neo-Copenhagen non-representationalist approaches like QBism or RQM? Or approaches like consistent histories that remain agnostic on realism (or representation) and have no problems with being incomplete (at least as far as Roland Omnès or Robert Griffiths are concerned)? Or the minimal statistical interpretation?
MWI and the thermal interpretation are different from those in being realist (i.e. representationalist) approaches that aim for completeness. I am not aware of other candidates in this group that maintain a "pure quantum" description without additional non-unitary dynamics.

Sunil said:
I don't get the point of your consideration. Of course, one can use thermodynamics in state preparation as well as in measurements.
The point is that you don't need the projection postulate (and probably not even the Born rule in any form whatsoever) to derive the statistics of the thermal state or prepare a system in it. To prepare a system in it, you probably just have to control the thermodynamical degrees of freedom and keep them constant long enough for the system to settle/converge sufficiently to the thermal state. So you can get away with the much weaker assumption that the thermodynamical degrees of freedom can be controlled and measured. At least I guess that this assumption is weaker than assuming the existence of a classical description plus some version of the Born rule.
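For reference, and in standard notation (nothing specific to any interpretation is assumed here): for a system with Hamiltonian ##H## held at inverse temperature ##\beta = 1/k_B T##, the thermal state is
$$\rho = \frac{e^{-\beta H}}{\mathrm{Tr}\, e^{-\beta H}},$$
and a smooth thermodynamical quantity like the mean energy is just
$$\langle H \rangle = \mathrm{Tr}(\rho H) = -\frac{\partial}{\partial \beta} \ln \mathrm{Tr}\, e^{-\beta H}.$$
These expressions use only the Hamiltonian and the trace; no projection onto eigenstates after a measurement is needed to write them down.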
 
  • Like
Likes A. Neumaier
  • #112
gentzen said:
Do you mean neo-Copenhagen non-representationalist approaches like QBism or RQM? Or approaches like consistent histories that remain agnostic on realism (or representation) and have no problems with being incomplete (at least as far as Roland Omnès or Robert Griffiths are concerned)? Or the minimal statistical interpretation?
Yes. One should exclude "interpretations" which are in fact different theories (GRW and so on) which explicitly introduce collapse dynamics. And all the realistic interpretations which explicitly introduce a configuration space trajectory. Then there is Copenhagen and similar interpretations which do not get rid of the classical part. Everything else can be roughly classified as attempts to construct a pure quantum interpretation.

Interpretations accepting being incomplete are an interesting question. On the one hand, this removes the most serious fault of Copenhagen. Then, one can ask if there is a measurement problem at all if one accepts that the theory is incomplete. That would mean that the Schrödinger evolution would be approximate. If so, what would be the problem with a collapse? It would be simply the place where the subquantum effects reach the quantum level.

gentzen said:
The point is that you don't need the projection postulate (and probably not even the Born rule in any form whatsoever) to derive the statistics of the thermal state or prepare a system in it. To prepare a system in it, you probably just have to control the thermodynamical degrees of freedom and keep them constant long enough for the system to settle/converge sufficiently to the thermal state.
Hm, does this method cover all preparation procedures? Plausibly yes, if one applies the same argument as in dBB theory for the non-configuration observables. Whatever we can measure, the result can be identified from the classical measurement device by looking only at its configuration. Looking only at the thermodynamical degrees of freedom of the measurement device would be sufficient too.

But what about preparation with devices which shoot single particles from time to time? Say, a piece of radioactive material? To prepare something in a stable state is one thing - the source itself can be described as being in a stable state. But preparing states of fast-moving particles by "keeping them long enough" does not sound very plausible.

The other point is that thermodynamics is also an incomplete description of something else. (Especially in the Bayesian approach.) And this something else can become visible. Say, the early states of the universe may have been quite close to equilibrium, but with changing temperature this became unstable, and the region where we live today appeared out of some fluctuation. We could not have seen that place as different from others by looking at the thermodynamic variables of the early universe.
 
  • #113
Sunil said:
But what about preparation with devices which shoot single particles from time to time? Say, a piece of radioactive material? To prepare something in a stable state is one thing - the source itself can be described as being in a stable state. But preparing states of fast-moving particles by "keeping them long enough" does not sound very plausible.
I basically agree that there are many experiments that are not covered by preparation in the thermal state or by measurement of thermal degrees of freedom. I try to elaborate a bit below why I brought up the thermal state nevertheless.

Sunil said:
Hm, does this method cover all preparation procedures?
Probably not, but it covers more preparation procedures than the MWI mindset typically acknowledges. Even if an experiment is prepared in a thermal state, typical quantum interference effects still occur at interfaces or surfaces, because locally the state can look much more pure than it looks from a less local perspective.
It doesn't cover all preparation procedures from my instrumentalistic point of view. And I guess also not from the point of view of the thermal interpretation, but for different reasons.

If I have to simulate the interaction of an electron beam with a sample, then for me (as instrumentalist) what counts as prepared is the sample (in a thermal state) and the beam (in a rather pure state) incident on the sample. Independent of whether the electron source was in a thermal state, there was some filtering going on (with associated randomness - quantum or not) in the preparation of the beam. For me, this filtering is part of the preparation procedure, and it probably depends on interpretative assumptions. And in any case, the combined system of sample and beam does not seem to be in a thermal state.

Even more extreme, error correction schemes in (superconducting) quantum computing require measuring some qubits and then modifying the unitary dynamics based on results of that measurement. I can't believe that measurements of thermodynamical degrees of freedom are good enough for that. After all, the quantum randomness is crucial here for that correction scheme to work.

(That quantum computing example is part of the reason why I bring up the thermal state. I have the impression that when Stephen Wolfram and Jonathan Gorard conclude (1:25:08.0 SW: in the transcript) "..., it’s still a little bit messy, but it seems that you can never win with quantum computing, ...", then they are slightly too much in the MWI mindset, which cares more about the supposedly pure state of the universe as a whole than about the mostly thermal states locally here on earth.)
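To make the error-correction remark above concrete, here is a minimal textbook example (the three-qubit bit-flip code, chosen only for illustration; real superconducting devices use surface codes): a logical qubit is stored as ##\alpha|000\rangle + \beta|111\rangle##. If a bit flip hits the second qubit, the state becomes ##\alpha|010\rangle + \beta|101\rangle##. Measuring the two syndrome operators ##Z_1 Z_2## and ##Z_2 Z_3## gives the outcomes ##(-1,-1)## without revealing ##\alpha## or ##\beta##, and conditioned on that outcome one applies ##X_2## to restore the state. The correction step is explicitly conditioned on an individual measurement outcome, which is why coarse thermodynamical quantities cannot replace it.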
 
  • #114
vanhees71 said:
In the vacuum ##\vec{k}## is a real vector. That's precisely what I don't understand!

If there is a planar interface between two homogeneous regions, there's no vacuum of course, and then there are evanescent waves (aka total reflection).
The tricky part is probably that when I try to measure the evanescent wave, I will bring some kind of detector (for example the photoresist) close to the planar interface. And then all that is left of the vacuum is a very thin layer, thin enough that the evanescent wave has not yet decayed away at the detector surface. A (huge) part of the evanescent wave will be reflected by that surface, and the time average of the z-component of the Poynting vector of the superposition of the reflected part with the "initial" evanescent wave is no longer zero. So you no longer have perfect total reflection if you (successfully) measure the evanescent wave.

Maybe it helps you to imagine the very thin layer of vacuum between the planar interface and the detector surface as some sort of waveguide.
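A rough sketch of the waveguide/tunneling picture, with standard expressions (illustrative only, not a full calculation): beyond the critical angle the field in the vacuum gap behaves as
$$E(z) \propto e^{i(k_x x - \omega t)}\, e^{-\kappa z}, \qquad \kappa = k_0 \sqrt{n^2 \sin^2\theta - 1},$$
which by itself carries no time-averaged ##S_z##. With a detector (or a second medium) at distance ##d##, a transmitted wave appears whose intensity falls off roughly like ##e^{-2\kappa d}## (frustrated total internal reflection), and the superposition of the incident and reflected evanescent fields in the gap then has a nonzero time-averaged ##S_z##, so the reflection is no longer total.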
 
  • Like
Likes A. Neumaier
  • #115
It's not vacuum if there is a planar interface! There are no evanescent waves in the vacuum.
 
  • #116
vanhees71 said:
It's not vacuum if there is a planar interface! There are no evanescent waves in the vacuum.
Yes, far away from interfaces, there are no evanescent waves in vacuum. Just to clarify: your remark about the planar interface only applies to evanescent waves. It is still valid to talk about the optical waves in vacuum, even though there is a planar interface. The reason is that they will still be there far away from the interface, where it is fine to say that there is vacuum.
 
  • Like
Likes vanhees71
  • #117
With the advent of QFT, which successfully unites the quantum and classical scales, these so-called interpretations are no longer needed. They are a relic of the 40's, 50's, and 60's, when QM was the sole actor on the stage.

The world as described by the successor of QM (QFT) is composed entirely of fields, and everyday stuff like chairs and tables are secondary manifestations (particular states of the fields). This is the comprehensive picture that unites the scales. It might be at odds with your preconceived notions, but Nature doesn't care. The world is what it is.
These so-called interpretations also make a lot of sacrifices and introduce various kinds of weirdness, so it's not totally unexpected that some prejudices need to go. The world of the senses is real but is not fundamental. I believe this is what stunned Niels Bohr when he said "If you are not shocked, you have not understood it yet".
 
  • Like
Likes vanhees71
  • #118
EPR said:
With the advent of QFT, which successfully unites the quantum and classical scales, these so-called interpretations are no longer needed. They are a relic of the 40's, 50's, and 60's, when QM was the sole actor on the stage.
Would this mean that the so-called measurement problem is mere imagination? If not, does QFT resolve it?
 
  • #119
timmdeeg said:
Would this mean that the so-called measurement problem is mere imagination? If not, does QFT resolve it?
No. It's less obvious but is still there. In QFT 'particles' are created and annihilated at a particular location.
 
  • #120
gentzen said:
The Bose-Einstein statistics was derived and published in 1924 before the invention of the Born rule (and other elements of the Copenhagen interpretation). So I guess that its derivation does not depend on any specific interpretation of quantum mechanics.
It’s not clear to me that Bose-Einstein statistics are particularly “quantum”. Their derivation is the same as Maxwell-Boltzmann statistics except the assumption of indistinguishability (in counting the number of states).
 
  • #121
EPR said:
In QFT 'particles' are created and annihilated at a particular location.
Exactly. Only the locations are real. What "travels" between them (particles or waves) are figments of our classical imagination. The field quanta are identical, which means the locations can be connected in different, but indistinguishable ways. All Feynman diagrams contribute and have to be summed over.
 
  • Skeptical
Likes weirdoguy
  • #122
stevendaryl said:
It’s not clear to me that Bose-Einstein statistics are particularly “quantum”.
That is a bit unfair; I already went from Planck's less "quantum" derivation in 1900 to Bose's in 1924, for this tiny bit of additional legitimacy. And what about the thermal state? In a sense, the possibility to define what is meant by "constant" (in time) gives significance to the energy eigenstates here. (In general relativity, it gets hard to define what is meant by "constant".) So you get an operational meaning from a symmetry. I find this very quantum.
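To spell out the symmetry argument a little (a standard textbook step, added just for clarity): a state that is constant in time under the unitary evolution ##\rho(t) = e^{-iHt/\hbar}\,\rho\,e^{iHt/\hbar}## must satisfy
$$[\rho, H] = 0,$$
so it is diagonal in an energy eigenbasis; and among such stationary states, maximizing the entropy ##-\mathrm{Tr}(\rho \ln \rho)## at fixed mean energy singles out ##\rho \propto e^{-\beta H}##. That is the sense in which "constant in time" gives the energy eigenstates their operational significance here.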

Many energy eigenstate computations that students will do in their first course on quantum mechanics won't need more than this to be interpreted as providing predictions about the real world. (In a subsequent answer, I already stated that this does not cover all practically relevant preparation and measurement procedures.)
 
  • #123
stevendaryl said:
It’s not clear to me that Bose-Einstein statistics are particularly “quantum”. Their derivation is the same as Maxwell-Boltzmann statistics except the assumption of indistinguishability (in counting the number of states).

I probably should amend that. The most convenient way to "count states" is to start with discrete energy levels, and count the number of ways those states can fill up with particles. So I guess quantum mechanics is implied by starting with energy levels.
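A toy example of how the counting differs (my own illustration): two particles distributed over two energy levels. Treating the particles as distinguishable (Maxwell-Boltzmann) gives ##2^2 = 4## microstates, while treating them as indistinguishable bosons (Bose-Einstein) gives only
$$\binom{n + g - 1}{n} = \binom{2 + 2 - 1}{2} = 3$$
microstates, because the two "one particle in each level" arrangements are identified as a single state. This relative re-weighting of multiply occupied levels is what changes the statistics.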
 
  • Like
Likes gentzen
  • #124
The way to "count states" is what's particularly quantum for both fermions and bosons. One of the great puzzles solved by QT is the resolution of Gibbs's paradox. Before the discovery of quantum statistics by Bose and Einstein or Pauli, Jordan, and Dirac it was solved by Boltzmann with one of the many ad-hoc adjustments of the classical theory.
 
  • Informative
Likes gentzen
  • #125
@Demystifier - is Bohmian Mechanics consistent with the standard lore of quantum statistical mechanics and identical particles? In classical mechanics, there are no identical particles since particles have distinct trajectories at all times, so the resolution of the Gibbs paradox is a fudge. It is often said that quantum mechanics gives the true resolution of the Gibbs paradox, since quantum particles don't have trajectories and can truly be swapped. But in BM, particles have trajectories, so does it mean that the proper derivation of the Gibbs factor is also a fudge in QM?
 
  • Like
Likes gentzen, Demystifier and vanhees71
  • #126
atyy said:
@Demystifier - is Bohmian Mechanics consistent with the standard lore of quantum statistical mechanics and identical particles? In classical mechanics, there are no identical particles since particles have distinct trajectories at all times, so the resolution of the Gibbs paradox is a fudge. It is often said that quantum mechanics gives the true resolution of the Gibbs paradox, since quantum particles don't have trajectories and can truly be swapped. But in BM, particles have trajectories, so does it mean that the proper derivation of the Gibbs factor is also a fudge in QM?
The answer to the last question is - no. To understand it, one must first fix the language. In the Bohmian language, the particle is, by definition, an object with well defined position ##x##. So a wave function ##\psi(x)## is not a particle; it is a wave function that guides the particle. So when in quantum statistical mechanics we say that "particles cannot be distinguished", what it really means is that the wave function has a certain symmetry (or anti-symmetry). In other words, quantum statistical mechanics is not a statistical mechanics of particles; it is a statistical mechanics of wave functions. This means that all standard quantum statistical mechanics is correct in Bohmian mechanics too, with the only caveat that one has to be careful with the language: standard quantum statistical mechanics is not a statistics of particles, but a statistics of wave functions. In particular, the Gibbs factor in standard quantum statistical mechanics is perfectly correct and well justified, provided that you have in mind that this factor counts wave functions, not particles.
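A quick way to see the counting point (a sketch in standard notation, not specific to Bohmian mechanics): for ##N## identical particles the (anti)symmetrized basis vectors are labelled by occupation numbers,
$$|n_1, n_2, \dots\rangle, \qquad \sum_k n_k = N,$$
so each occupation pattern is one wave function, counted once. In the dilute limit, where all occupied single-particle states are distinct, the corresponding classical phase-space count would contain ##N!## permuted copies, and dividing by ##N!## is exactly the Gibbs factor. In Bohmian terms this is a statement about the counting of (symmetric) wave functions, while the particle positions remain ordinary trajectories guided by that wave function.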
 
  • Informative
Likes gentzen and atyy
  • #127
vanhees71 said:
Also I didn't claim that the macroscopic behavior is not described by the underlying microscopic dynamics. To the contrary it is derivable from it by quantum statistics.
A. Neumaier said:
Then you need to derive from quantum statistics that, upon interacting with a particle to be detected, the detector is not - as the Schrödinger equation predicts - in a superposition of macroscopic states with pointer positions distributed according to Born's rule, but that it is in a macroscopic state where one of the pointer positions is actually realized, so that we can see it with our senses.

Since you claim that this is derivable, please show me a derivation! If valid, it would solve the measurement problem!
This last quote misses the point. Because the detector entangled with the microscopic system in question is a macroscopic object consisting of typically more than ##N=10^{20}## atoms and because our observations are based on its collective coordinates, i.e. quantities averaged over macroscopic numbers of order ##10^{20}##, it turns out that quantum interference effects between two (approximately) classical states of these collective coordinates are suppressed by the double exponential factor of ##e^{-10^{20}}##, and are, even in principle, unobservable. This double exponent is so astonishingly small, that its inverse, viewed as a time interval, doesn't even need a unit of time to be attached to it, because it is basically the same HUMONGOUS number, no matter if one measures it in Planck units or in ages of the Universe. In addition, since macroscopic detectors are almost impossible to completely isolate from the environment, especially on such gigantic time scales, the usual environmental decoherence makes the above overlap between states even tinier.

Thus, in dealing with such collective coordinates, we may employ the usual rules of classical probability, if we are willing to make an error of order less than ##e^{-10^{20}}##. In particular, we may employ the rules of conditional probability and, after monitoring individual histories of collective coordinates (corresponding to specific positions of a needle on the dial of our macroscopic detector), apply Bayes’ rule to throw out the window the portion of the probability distribution that was not consistent with our observation of the collective coordinate, and to rescale the rest of the distribution. This is essentially what "collapse" is all about.

All of the above is nicely and pedagogically explained in the new QM book by Tom Banks (a great addendum to Ballentine's book, in my opinion).
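As a rough back-of-the-envelope illustration of where a number like ##e^{-10^{20}}## comes from (my own sketch, not taken from Banks): if the two macroscopically distinct detector states assign slightly different states ##\phi_i## and ##\chi_i## to each of the ##N \sim 10^{20}## microscopic constituents, then the relevant overlap is
$$|\langle \Psi_1 | \Psi_2 \rangle| \sim \prod_{i=1}^{N} |\langle \phi_i | \chi_i \rangle| \sim \varepsilon^{N} = e^{-N \ln(1/\varepsilon)},$$
which is of order ##e^{-10^{20}}## already for a per-particle overlap ##\varepsilon## only modestly below 1. The interference terms between the two branches are proportional to this overlap, which is why no change of units can rescue their observability.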
 
  • Like
Likes PeroK and vanhees71
  • #128
In terms of interpretation being a misnomer for interpretations which make new predictions, is there a formal definition of what a prediction in this sense is? Do they need to be analytical predictions? Or could they be predictions which are only derivable through simulation, and how about recursively enumerable but not co-recursively enumerable predictions (sets of predictions which cannot be analytically determined to be predictions, and are only determinable to be predictions if they are predictions)?

In other words, in some cases, maybe it is not knowable whether an interpretation is an interpretation, and whether it is testable, until/unless it is successfully tested. If that were the case, then we could be in store for a never-ending quest to determine if an interpretation adds predictions on top of QM, or if it is correct, and in the meantime, nobody could ever rule the question at hand to be a matter of philosophy rather than science.
 
  • #129
physicsworks said:
in dealing with such collective coordinates, we may employ the usual rules of classical probability
This is common practice, but not a derivation from first principles. Nobody doubts that quantum mechanics works in practice; the question is whether this working can be derived from a purely microscopic description of the detectors (plus measured system and environment) and unitary quantum mechanics.

I'd be very surprised if such a derivation is in the book by Banks. Can you point to the relevant pages?
 
  • #130
A. Neumaier said:
This is common practice, but not a derivation from first principles.
It may be helpful to step back and have a closer look at what those "first" principles are. Otherwise you may be setting yourself an unattainable goal.
 
  • #131
WernerQH said:
you may be setting yourself an unattainable goal.
It is a cheap way out to declare a goal that is not yet achieved to be unattainable.

Progress is made by making the unreachable reachable.
 
  • Like
Likes gentzen
  • #132
physicsworks said:
This last quote misses the point. Because the detector entangled with the microscopic system in question is a macroscopic object consisting of typically more than ##N=10^{20}## atoms and because our observations are based on its collective coordinates, i.e. quantities averaged over macroscopic numbers of order ##10^{20}##, it turns out that quantum interference effects between two (approximately) classical states of these collective coordinates are suppressed by the double exponential factor of ##e^{-10^{20}}##, and are, even in principle, unobservable.
By a similar argument, one could argue that the detailed Hamiltonian for such a system + macroscopic detector is in principle not inferable by an observer?

If it is not inferable, does it have to exist? We have high standards for the observable status of certain contextual things, but not for the Hamiltonian. Why? Isn't this deeply disturbing?

/Fredrik
 
  • #133
A. Neumaier said:
This is common practice, but not a derivation from first principles.
A derivation of what? The above estimation by Banks is just a way to show that the quoted statement
A. Neumaier said:
you need to derive from quantum statistics that, upon interacting with a particle to be detected, the detector is not - as the Schrödinger equation predicts - in a superposition of macroscopic states with pointer positions distributed according to Born's rule, but that it is in a macroscopic state where one of the pointer positions is actually realized, so that we can see it with our senses.
is a common mistaken belief and, more importantly, to demonstrate in what sense classical mechanics emerges as an approximation of QM. It uses simple counting of states, is very general (i.e. not relying on a particular model of the macroscopic detector), and is based on first principles, such as the principle of superposition, locality of measurements, etc. It is not a derivation per se, because we are dealing with the dynamics of a macroscopic number of particles constituting the detector, for which humans have not yet developed exact solutions of EOMs and probably never will. However, this is OK, and we don't need to wait for them to make such an amazing accomplishment, because we know from the above estimation (rather, underestimation) that the discussed interference effects are not observable even in principle. Not for all practical purposes, but in principle, since any experiment set up to distinguish between classical and quantum-mechanical predictions of these effects would have to ensure the system is isolated over times that are unimaginably longer than the age of the Universe.
A. Neumaier said:
Nobody doubts that quantum mechanics works in practice, the question is whether this working can be derived from a purely microscopic description of the detectors (plus measured system and environment) and unitary quantum mechanics.
I am sorry, this sounds like circular logic to me. Are you really talking about deriving quantum mechanics from quantum mechanics here? The above estimation is quantum mechanical. We are trying to interpret classical mechanics, not quantum mechanics. Otherwise we would look like Dr. Diehard from the celebrated lecture by Sidney Coleman titled "Quantum mechanics in your face", who thought that deep down, it's classical.
A. Neumaier said:
Progress is made by making the unreachable reachable.
In this regard, and in the context of the above discussion, I can quote Banks:
the phrase “With enough effort, one can in principle measure the quantum correlations in a superposition of macroscopically different states”, has the same status as the phrase “If wishes were horses then beggars would ride”.
Fra said:
By a similar argument one could argue that the detailed hamiltonian for such system + macroscopic detector is in principle not inferrable by an observer?
Why? An observer records particular values of collective coordinates associated with the macroscopic detector. As long as this detector, or any other macroscopic object like a piece of paper on which we wrote these values, continues to exist in the sense that it doesn't explode into elementary particles, we can, with fantastic accuracy, use Bayes' rule of conditioning (on those particular values recorded) to predict probabilities of future observations. If those macroscopic objects which recorded our observations by means of collective coordinates cease to exist in the above-mentioned sense, then we must go back and use the previous probability distribution before such conditioning was done.
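Schematically, the conditioning step reads (just to spell out what is meant):
$$P(b \mid a) = \frac{P(a \wedge b)}{P(a)},$$
where ##a## is the recorded value of the collective coordinate and ##b## a future observation. Because interference between records with different values of ##a## is negligible in the sense discussed above, applying this classical rule to the decohered distribution over ##a## reproduces, to fantastic accuracy, what the textbook "collapse" prescription would give.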
 
  • Like
Likes vanhees71
  • #134
physicsworks said:
a common mistaken belief
Your beliefs and standards are so different from mine that a meaningful discussion is impossible.
 
  • #135
physicsworks said:
However, this is OK, and we don't need to wait for them to make such an amazing accomplishment, because we know from the above estimation (rather, underestimation) that the discussed interference effects are not observable even in principle.

This line of argumentation is not at all convincing to me. We agree on this fact:
  • If probabilities are classical (that is, they represent lack of information; a coin is either heads or tails, but we just don't know which, and we are using probabilities to reason about the uncertainty), then there are no interference effects.
But you seem to be arguing the converse, that if there are no interference effects, then the probabilities must be classical. That's just invalid reasoning, it seems to me.
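For concreteness, the standard two-slit bookkeeping (just restating the agreed fact in formulas):
$$P_{12}(x) = |\psi_1(x) + \psi_2(x)|^2 = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\big[\psi_1^*(x)\,\psi_2(x)\big].$$
Ignorance-type (classical) probabilities give only the first two terms. The objection is that observing the cross term to be negligible only tells you the distribution is numerically indistinguishable from the classical sum; it does not, by itself, license the conclusion that the probabilities are of the ignorance type.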
 
  • #136
@stevendaryl, the argument is quite different from that. Quantum systems do not obey classical rules of probability, but a special class of compatible observables associated with macroscopic objects, the so-called collective coordinates by means of which we record our observations of the microscopic system in question, do approximately obey these rules with an unprecedented accuracy. Classical probability theory with its sum over histories rule and with probabilities representing the lack of information about initial conditions or our ignorance about it, is only an approximation of the probability theory in QM. As with any approximation, it eventually fails; in this case, when we talk about unavoidable quantum uncertainties in the initial position and velocity of a collective coordinate like the center of mass of a detector, the approximation fails if you wait long enough.
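To put a number on "if you wait long enough" (an illustrative estimate with assumed values, not from any particular reference): a free wave packet for the center of mass spreads as
$$\Delta x(t) = \Delta x(0)\sqrt{1 + \left(\frac{\hbar t}{2 m\, \Delta x(0)^2}\right)^2},$$
so the spreading only becomes appreciable on the timescale ##\tau \sim 2 m\, \Delta x(0)^2/\hbar##. For a 1 g pointer localized to ##\Delta x(0) \sim 1\ \text{nm}## this is ##\tau \sim 2 \times 10^{13}\ \text{s}##, i.e. hundreds of thousands of years, which is the sense in which the classical-probability approximation for collective coordinates fails only after absurdly long times.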
 
  • Like
Likes vanhees71
  • #137
physicsworks said:
@stevendaryl, the argument is quite different from that. Quantum systems do not obey classical rules of probability, but a special class of compatible observables associated with macroscopic objects, the so-called collective coordinates by means of which we record our observations of the microscopic system in question, do approximately obey these rules with an unprecedented accuracy. Classical probability theory with its sum over histories rule and with probabilities representing the lack of information about initial conditions or our ignorance about it, is only an approximation of the probability theory in QM.
Even if you want to say it is only an approximation, it seems invalid to me. In the quantum case, we know that the probabilities are NOT due to ignorance.
 
  • #138
Quantum systems cannot obey classical probabilities, much less approximate classical probabilities for some obscure reason.
Assuming that they do is circular reasoning. Nothing in the theory says that quantum systems tend to or must approximate anything classical. Including classical probabilities.
 
  • #139
stevendaryl said:
Even if you want to say it is only an approximation, it’s bogus. In the quantum case, we know that the probabilities are NOT due to ignorance.
What is bogus, exactly? And of course in QM they are not; the question is to explain in what sense the classical world is an approximation to the quantum one, not the other way around.
EPR said:
Quantum systems cannot obey classical probabilities, much less approximate classical probabilities for some obscure reason.
I am not sure if this is addressed to me, but I will reply with almost an exact quote from the previous message which you probably missed or misunderstood:
physicsworks said:
Quantum systems do not obey classical rules of probability, but a special class of compatible observables associated with macroscopic objects ---- do
approximately, of course.
EPR said:
Assuming that they do is circular reasoning.
No one is assuming that.

I suggest reading Banks, he explains this in much more detail than I do (and much better).
 
  • Like
Likes PeroK
  • #140
physicsworks said:
What is bogus, exactly? And of course in QM they are not; the question is to explain in what sense the classical world is an approximation to the quantum one, not the other way around.
The question is what, if anything, the lack of macroscopic interference terms tells us. I thought you were suggesting that if there aren’t any interference effects, then we might as well assume the ignorance interpretation of probabilities. If you weren’t suggesting that, then what is the relevance of the lack of interference effects?
 
