Quantum mechanics is not weird, unless presented as such

In summary, quantum mechanics may seem weird because of the way it is often presented to the general public. There is a long history of this approach, since it sells better, but it can be an obstacle for those trying to truly understand the subject. The paper referenced in the conversation shows that quantum mechanics can actually be derived from reasonable assumptions, making it less weird than some may think. However, this derivation is only one author's view and may not be the complete truth. There are also other interpretations of quantum mechanics, such as the ensemble interpretation, which may not be fully satisfactory. Overall, a proper derivation of quantum mechanics must account for all aspects, including the treatment of measurement devices and of the past before measurements.
  • #71
strangerep said:
E.g., Galilean symmetry is (a subgroup of) what you get from considering the motion of a free particle.

Hmmmm. Good point. But treatments I have seen (e.g., Landau) take the symmetry as given and develop the dynamics.

Thanks
Bill
 
  • #72
A. Neumaier said:
Later, quantum mechanics is completely self-contained.
Do you mean von Neumann's measurement scheme?
 
  • #73
bhobba said:
Indeed. But the dynamics is determined by symmetry.

Thanks
Bill

The dynamics, i.e., the force field, is determined by local symmetry. Global symmetries such as translation and rotation have no connection with any particular force law. They do, however, constrain the form of the allowed dynamical laws to a considerable extent, but by no means determine them. This line of thought leads one to ask whether it might be possible to impose further, stronger symmetry constraints so that the forms of the laws are determined. This is, indeed, what Yang and Mills did. Of course, the force (gauge) fields and their interactions must exist in order for certain local symmetries to hold. :smile:
 
  • #74
Shyan said:
Do you mean von Neumann's measurement scheme?
No; this is only a caricature of most actual measurements; it does not even cover photodetection - upon detection of a photon, the photon doesn't go into an eigenstate of the nonexistent position operator, but itself becomes nonexistent.

The right key words are POVMs and Lindblad equations on the level of applications, and the projection operator formalism on the level of statistical mechanics.
 
  • #75
stevendaryl said:
It seems that the notion of a "measurement outcome" depends on a classical notion of a measuring device.
Yes, this was the aspect of quantum mechanics that Landau was discussing.
 
  • #76
A. Neumaier said:
No. A measurement device is simply a large quantum object, and the measurement process can - like anything involving macroscopic quantum objects - be described by quantum statistical mechanics. It is only when starting quantum mechanics that one needs classical props to get an initial understanding. Later, quantum mechanics is completely self-contained.

I think that there is still a problem. The physical content of QM (and this includes QFT, as well) is that you calculate amplitudes, and these amplitudes give probabilities for observables having particular values. You can abstract away from the measuring devices, and just talk about observables. But I don't see how it changes anything to do that. The problem is that it is inconsistent to assume that all possible observables have values at all times. So for QM to be consistent, there has to be a way to make some observables more equal than others. Bohmian mechanics just picks position as the privileged observable, but other interpretations of quantum mechanics that have definite outcomes allow measurement to single out a preferred observable.

I know that some people believe that decoherence can replace measurement as the basis for choosing a preferred basis. But I don't see how it completely solves the problem.
 
  • #77
stevendaryl said:
Well, I don't know what Landau meant, but I think you might be talking about something slightly different. You can come up with the Schrodinger equation or (Klein Gordon, or Dirac) based on symmetry, but that's only half of quantum mechanics. The other half is the interpretation of quantum amplitudes as giving (when squared) the probabilities for measurement outcomes. It seems that the notion of a "measurement outcome" depends on a classical notion of a measuring device.
Exactly, but what makes it seem weird is just not accepting that this is it. We are so trained in thinking in terms of classical (deterministic) physics that it is hard to accept that nature is inherently probabilistic, so we try to find some "metapicture" of the world answering the (in my opinion unscientific) question of what's behind this inherent randomness.

Natural science tells us how nature behaves (or what we can objectively know about its behavior) and need not agree with the prejudices we have about it.
 
  • #78
vanhees71 said:
We are so trained in thinking in terms of classical (deterministic) physics that it is hard to accept that nature is inherently probabilistic, so we try to find some "metapicture" of the world answering the (in my opinion unscientific) question of what's behind this inherent randomness.
That's not true. It's not hard to accept randomness; randomness is present everywhere around us in the classical world.
What is hard to accept is that certainty can emerge from randomness without some deterministic physical phenomenon behind it.
 
  • #79
stevendaryl said:
The problem is that it is inconsistent to assume that all possible observables have values at all times.
Well, it is inconsistent to assert that all possible variables have infinitely precise values at all times. But this is an unnecessary, unduly strong assertion!

It is already violated in many situations of daily life, hence constitutes no real problem:
  • The position of a soccer player on a football pitch is not defined to a precision better than perhaps 10cm.
  • The area of a city is not better defined than to a few significant digits.
  • Neither is the position of a piece of scientific equipment.
  • Even integers such as the number of people in a room are not always determined to infinite precision (e.g., when a person is standing in the door).
  • Neither is the number of clicks of a Geiger counter, during the short times when this number changes.
Thus infinitely precise values at all times for measurable quantities are convenient abstractions of classical physics that have no place in real life.

More importantly, outside classical physics, this assertion is nowhere used in theory or practice! Hence there is no need to assume it, and all problems that are artificially created by ghost stories about Schroedinger cats or Wigner's friend are gone.

One only needs to assume that quantum mechanics predicts expectation values, according to the standard rules. This assumption implies that it also predicts standard deviations, since these can be computed from expectations. A definite prediction is one in which the standard deviation is negligibly small. Just as in any classical stochastic model. Probabilities (of being in a region of space, say) can be defined as expectation values of characteristic functions.
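To make the claim above concrete, here is a minimal sketch of my own (not from the thread, and the Gaussian state is just an illustrative choice): the probability of being in a region is computed as the expectation value of that region's characteristic function, and the standard deviation follows from expectation values alone.

```python
# Sketch: probabilities and standard deviations from expectation values only.
# The state (a Gaussian wave packet) and the grid are illustrative assumptions.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)           # discretized position grid
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4.0)                    # Gaussian wave packet, |psi|^2 has unit variance
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

def expectation(f):
    """<f(x)> in the state psi, approximated on the grid."""
    return np.sum(np.abs(psi)**2 * f(x)) * dx

# Characteristic function of the region (-1, 1):
chi = lambda t: ((t > -1.0) & (t < 1.0)).astype(float)
prob = expectation(chi)                      # probability of being in (-1, 1)

# Standard deviation computed purely from expectations:
mean_x = expectation(lambda t: t)
std_x = np.sqrt(expectation(lambda t: t**2) - mean_x**2)

print(prob)   # ≈ 0.683, the familiar "1 sigma" probability for a unit-variance Gaussian
print(std_x)  # ≈ 1.0
```

The same recipe works for any observable: once expectations are given by the standard rules, probabilities and uncertainties need no separate postulate.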

Everything is fully consistent without any reference to classical objects.

To see in more detail that this works perfectly without assuming any quantum-classical correspondence, look at Chapter 10 of my book.
 
  • #80
vanhees71 said:
Exactly, but what makes it seem weird is just not accepting that this is it. We are so trained in thinking in terms of classical (deterministic) physics that it is hard to accept that nature is inherently probabilistic, so we try to find some "metapicture" of the world answering the (in my opinion unscientific) question of what's behind this inherent randomness.
Natural science tells us how nature behaves (or what we can objectively know about its behavior) and need not agree with the prejudices we have about it.

I don't actually think that it's the randomness that causes so much conceptual difficulties. I think people can get an intuitive grasp on a certain kind of classical randomness by thinking in terms of coin flips: You're driving down a road, and you reach an intersection where you can either turn left or right. So you flip a coin to decide. Even though Newtonian physics would lead us to believe that result of a coin flip is predictable, I don't think that it's too big of a stretch for most people to accept that there can be genuine randomness, and that some processes such as radioactive decay are completely unpredictable.

The part that's mysterious is that QM seems to have a kind of nonlocal randomness. In an EPR-type experiment, it's as if Alice and Bob each flip different coins, and the results are random, but they always get the opposite result. It's the combination of randomness and certainty that is hard to grasp.
 
  • #81
According to classical physics everything is deterministic, and randomness is only due to our inability to precisely know the initial conditions and to write down the exact equations of motion. There is no randomness in principle, while quantum randomness is inherent in nature as a fundamental principle. A particle has neither a precisely determined position nor a precisely determined momentum (Heisenberg uncertainty relation), and this is not because we are unable to determine its location in phase space accurately enough; it just isn't possible according to the fundamental postulates of quantum theory. So if QT is a precise description of nature (and all our observations of the real world agree with this view) then nature is just not deterministic.
 
  • #82
A. Neumaier said:
Well, it is inconsistent to assert that all possible variables have infinitely precise values at all times. But this is an unnecessary, unduly strong assertion!

It is already violated in many situations of daily life, hence constitutes no real problem:
  • The position of a soccer player on a football pitch is not defined to a precision better than perhaps 10cm.
  • The area of a city is not better defined than to a few significant digits.
  • Neither is the position of a piece of scientific equipment.
  • Even integers such as the number of people in a room are not always determined to infinite precision (e.g., when a person is standing in the door).
  • Neither is the number of clicks of a Geiger counter, during the short times when this number changes.

I don't think that those examples are at all analogous to the incompatibility of observables in QM. In an EPR-type experiment, the startling fact isn't that Alice and Bob's measurements of spin are fuzzy--it's that they are very precise. If Alice measures spin-up along an axis, then Bob will definitely measure spin-down along that axis (in the spin-1/2 case). So appealing to fuzziness or infinite precision doesn't seem to help.
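The perfect anti-correlation mentioned here can be checked directly from the quantum formalism. This is my own illustrative sketch (not from the thread): the singlet-state correlation for measurement axes at angles a and b is E(a,b) = -cos(a - b), so equal axes give exactly -1 even though each local outcome is 50/50 random.

```python
# Sketch: singlet-state spin correlations computed from the standard formalism.
import numpy as np

# Pauli matrices for the x and z components
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(theta):
    """Spin component along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(a, b):
    """E(a,b) = <singlet| sigma_a (x) sigma_b |singlet>; equals -cos(a - b)."""
    op = np.kron(spin_op(a), spin_op(b))
    return np.real(singlet.conj() @ op @ singlet)

print(correlation(0.0, 0.0))        # -1: perfect anti-correlation along the same axis
print(correlation(0.0, np.pi / 2))  #  0: orthogonal axes, uncorrelated
```

The sharpness is thus in the joint statistics, not in either local outcome, which is exactly the point under discussion.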
 
  • #83
stevendaryl said:
The part that's mysterious is that QM seems to have a kind of nonlocal randomness. In an EPR-type experiment, it's as if Alice and Bob each flip different coins, and the results are random, but they always get the opposite result. It's the combination of randomness and certainty that is hard to grasp.
The nonlocal EPR-type correlations are only weird if you do not accept that they are inherent as the result of the preparation of the system and its (unitary) dynamical evolution afterwards. If you accept this, there's nothing weird about it, although the measured local observables at the far distant places of A and B are not determined by this preparation. It's just something very far from our classical notion of the world and not describable by classical deterministic models.
 
  • #84
stevendaryl said:
I don't think that those examples are at all analogous to the incompatibility of observables in QM. In an EPR-type experiment, the startling fact isn't that Alice and Bob's measurements of spin are fuzzy--it's that they are very precise. If Alice measures spin-up along an axis, then Bob will definitely measure spin-down along that axis (in the spin-1/2 case). So appealing to fuzziness or infinite precision doesn't seem to help.

There certainly can be fuzziness in a spin measurement--if you use a Stern Gerlach device and see if the electron goes left or right, there will be cases where it's not clear which way the electron is deflected. Or there will be times when you just fail to detect the electron. Or there will be times when you detect a stray electron that isn't actually from the source you thought it was from. So there is fuzziness. But that fuzziness doesn't seem to have any role in the violation of Bell's inequality.
 
  • #85
vanhees71 said:
The nonlocal EPR-type correlations are only weird, if you do not accept that they are inherent as the result of the preparation of the system and its (unitary) dynamical evolution afterwards. If you accept this, there's nothing weird about it...

You seem to be saying that it's not weird, because it's a prediction of QM. That seems to be just defining away the weirdness. (Which is what the "shut up and calculate" interpretation does).

I find it weird for QM to split things into the three parts: (1) Preparation procedures, (2) Unitary evolution, (3) Measurements. At some level, (1) and (3) are just complicated physical processes, so that should be included in (2).
 
  • #86
vanhees71 said:
So if QT is a precise description of nature (and all our observations of the real world agree with this view) then nature is just not deterministic.
''So'' doesn't follow, since there might be an underlying deterministic theory from which quantum mechanics is derived.

Even though I don't believe that Bohmian mechanics is the right mechanism, I do believe that God doesn't play dice.

The main reason is that the notion of inherent randomness is conceptually problematic, and I believe even ill-defined, especially since quantum mechanics obviously applies to unique objects such as the Earth or the Sun.

Whereas the appearance of randomness through chaos and limited knowledge is well-founded and mathematically well-understood, without any of the philosophical problems associated with classical probability.

For a thorough discussion of these problems, see the very informative books by
T.L. Fine, Theory of Probability: An Examination of Foundations. Academic Press, New York, 1973,
and
L. Sklar, Physics and Chance. Cambridge University Press, Cambridge, 1993.
 
  • #87
I defy anyone to present an explanation of quantum entanglement which is not "weird".

I would actually go further than weird. Quantum entanglement correlations require nothing short of supernatural behaviour, since by Bell's theorem no natural model can explain them. Even non-locality provides no escape if relativity is included.

Supernatural is probably a better word than weird in general. When you have events which arise from no cause, or objects which have no reality until they are measured, and so on, are you not better off in the long term admitting that such things defy natural explanation, instead of endlessly trying to reinterpret or reframe things? It's an ugly word of course, but the facts around entanglement are harsh.
 
  • #88
Yeah, the shut-up-and-calculate interpretation is the best working one for the introductory QT lecture. Of course, it's worthwhile to think a bit deeper about the fundamental issues of interpretation, but after a lot of thinking I came back to the shut-up-and-calculate interpretation, now knowing that you can call it a bit more nicely the "minimal statistical interpretation".
 
  • #89
stevendaryl said:
So appealing to fuzziness or infinite precision doesn't seem to help.
You are changing the context. I was only discussing your statement
stevendaryl said:
The problem is that it is inconsistent to assume that all possible observables have values at all times.
that you made to justify your conclusion
stevendaryl said:
So for QM to be consistent, there has to be a way to make some observables more equal than others.
I was simply pointing out that you assumed an inconsistency that one does not need to assume in order to give meaning to observables. (And, by silent implication, that therefore your conclusion is not justified.)
 
  • #90
stevendaryl said:
I find it weird for QM to split things into the three parts: (1) Preparation procedures, (2) Unitary evolution, (3) Measurements. At some level, (1) and (3) are just complicated physical processes, so that should be included in (2).

When people say that the problem in understanding QM is because it is too far removed from human experience and human intuition, I don't agree. To me, what's weird is the parts (1) and (3) above, and what's weird about them is that they seem much too tightly tied to human actions (or to humanly comprehensible actions). Nature does not have preparation procedures and measurements, so it's weird for those to appear in a fundamental theory.
 
  • Like
Likes ddd123
  • #91
A. Neumaier said:
You are changing the context. I was only discussing your statement
the problem is that it is inconsistent to assume that all possible observables have values at all times.

But when I made that statement, what I had in mind was the sort of choice of observables as spin direction measurements in EPR. In that case, fuzziness or infinite precision doesn't seem relevant.
 
  • #92
stevendaryl said:
When people say that the problem in understanding QM is because it is too far removed from human experience and human intuition, I don't agree. To me, what's weird is the parts (1) and (3) above, and what's weird about them is that they seem much too tightly tied to human actions (or to humanly comprehensible actions). Nature does not have preparation procedures and measurements, so it's weird for those to appear in a fundamental theory.

I don't believe that decoherence completely solves the problem. What decoherence basically tells us is that certain observables are in practice impossible to measure, because of entanglement. We can observe a dead cat, and we can observe a live cat, but there is no way we can observe a cat to be in the state:

[itex]\frac{1}{\sqrt{2}} |dead\rangle + \frac{1}{\sqrt{2}} |alive\rangle[/itex]

But I find that a less than complete resolution. It still seems to be putting measurement into the fundamental physics.
 
  • #93
stevendaryl said:
what I had in mind was the sort of choice of observables as spin direction measurements in EPR.
Well, I can't read your mind, but only respond to what you write down. But...

stevendaryl said:
the startling fact isn't that Alice and Bob's measurements of spin are fuzzy--it's that they are very precise.
Yes. But what is the problem here? It is in this respect no different from the very ordinary fact that casting a die always gives very precise numbers. A classical probabilistic model for predicting the die does not give precise predictions for this number, but only for the mean value after a long sequence of casts. Similarly, the quantum probabilistic model does not give precise predictions for Alice's measurements, but only for the mean value after a long sequence of casts.

The only seemingly startling fact in entanglement experiments is that the quantum probabilistic model predicts 100% correlations between the unpredictable results of Alice and Bob. But conceptually, this is no more startling than what happens if one records, together with the value of each die cast (Alice), also the value of the invisible face (Bob) of the same die [aka entangled photon pair]. Comparing the predictions of the classical stochastic model of the die with the observations of Alice and Bob gives a perfect prediction of 100% correlations: the values of Alice and Bob add up to 7 in the classical analogue.
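The die analogy can be sketched in a few lines (my own illustration, not from the thread): each marginal looks uniformly random, yet the pair of recorded faces is 100% correlated, because opposite faces of a standard die sum to 7.

```python
# Sketch: classical "perfect correlation from a single object" - Alice records
# the top face of a die, Bob the hidden bottom face.
import random

random.seed(0)  # fixed seed, purely for reproducibility of the illustration

def cast_die():
    top = random.randint(1, 6)
    bottom = 7 - top          # opposite faces of a standard die sum to 7
    return top, bottom

results = [cast_die() for _ in range(10_000)]
assert all(a + b == 7 for a, b in results)       # perfect correlation, every cast
mean_alice = sum(a for a, _ in results) / len(results)
print(mean_alice)  # ≈ 3.5: Alice's record alone looks uniformly random
```

The point of the analogy is only that exact correlations between individually random records are classically unremarkable when both records stem from one object.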

So once it is accepted that the entangled photon pair is a conceptual unity of the same kind as a die (and indeed careful preparation avoiding decoherence is needed to ensure the former!), the analogy is complete. Thus there is nothing startling at all in predicting precise correlations in an otherwise random experiment.

The only startling fact remaining is that the two faces of the die are close and rigidly connected, while Alice and Bob in the quantum experiment may be very far away. But this has nothing to do with measurement or probabilities. Hence it has nothing to do with the conceptual clarity of quantum mechanics independent of any assumed quantum-classical correspondence.

Therefore, quantum mechanics is a complete and consistent theory independent of the need for any classical concepts related to measurement.
 
  • #94
A. Neumaier said:
''So'' doesn't follow, since there might be an underlying deterministic theory from which quantum mechanics is derived.

Even though I don't believe that Bohmian mechanics is the right mechanism, I do believe that God doesn't play dice.

The main reason is that the notion of inherent randomness is conceptually problematic, and I believe even ill-defined, especially since quantum mechanics obviously applies to unique objects such as the Earth or the Sun.

Well, but which observation tells you that they do not behave probabilistically? Some coarse-grained observables behave classically to the accuracy relevant for any practical purpose, but if you believe that QT applies to macroscopic objects like the Sun and the other bodies, then this implies that they are not deterministic (not even determined precisely at any instant of time).

A. Neumaier said:
Whereas the appearance of randomness through chaos and limited knowledge is well-founded and mathematically well-understood, without any of the philosophical problems associated with classical probability.

For a thorough discussion of these problems, see the very informative books by
T.L. Fine, Theory of Probability: An Examination of Foundations. Academic Press, New York, 1973,
and
L. Sklar, Physics and Chance. Cambridge University Press, Cambridge, 1993.

Thanks for the references. I'll have a look at them.
 
  • #95
stevendaryl said:
At some level, (1) and (3) are just complicated physical processes, so that should be included in (2).
And they are, when - rather than starting with postulates assuming an external classical world - one analyzes the measurement process in terms of statistical mechanics. See, e.g.,
A.E. Allahverdyan et al., Understanding quantum measurement from the solution of dynamical models. Physics Reports, 525 (2013), 1-166. http://arxiv.org/abs/1107.2138
 
  • #96
A. Neumaier said:
The only seemingly startling fact in entanglement experiments is that the quantum probabilistic model predicts 100% correlations between the unpredictable results of Alice and Bob. But conceptually, this is no more startling than what happens if one records, together with the value of each die cast (Alice), also the value of the invisible face (Bob) of the same die [aka entangled photon pair]. Comparing the predictions of the classical stochastic model of the die with the observations of Alice and Bob gives a perfect prediction of 100% correlations: the values of Alice and Bob add up to 7 in the classical analogue.

So once it is accepted that the entangled photon pair is a conceptual unity of the same kind as a die (and indeed careful preparation avoiding decoherence is needed to ensure the former!), the analogy is complete. Thus there is nothing startling at all in predicting precise correlations in an otherwise random experiment.

Well no, come on, the weirdness in entanglement experiments shows up when considering non-commuting observables. As Scott Aaronson puts it:

Scott Aaronson said:
Perhaps the best way to explain local realism is that it’s the thing you believe in, if you believe all the physicists babbling about “quantum entanglement” just missed something completely obvious. Clearly, at the moment two “entangled” particles are created, but before they separate, one of them flips a tiny coin and then says to the other, “listen, if anyone asks, I’ll be spinning up and you’ll be spinning down.” Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly “knows” to be spinning up—oooh, spooky! mysterious! Anyway, if that’s how you think it has to work, then you believe in local realism, and you must predict that Alice and Bob can win the CHSH game with probability at most 3/4.

Even having to give up counterfactual definiteness to avoid the problem is weird. It's weird however you put it.
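The two bounds mentioned in the quote can be verified with a short script (my own sketch, not from the thread; the measurement angles are the standard textbook choice, an assumption of mine): enumerating all deterministic local strategies gives a CHSH winning probability of at most 3/4, while the singlet correlations E(a,b) = -cos(a - b) reach cos^2(pi/8).

```python
# Sketch: classical vs quantum winning probability in the CHSH game.
from itertools import product
from math import cos, pi

# Classical local realism: Alice's output depends only on her bit x, Bob's only
# on his bit y; they win iff a XOR b == x AND y. Enumerate all 16 strategies.
best_classical = 0.0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
               for x, y in product([0, 1], repeat=2))
    best_classical = max(best_classical, wins / 4)
print(best_classical)  # 0.75

# Quantum: singlet correlations E(a,b) = -cos(a - b) at the standard angles.
E = lambda a, b: -cos(a - b)
a0, a1, b0, b1 = 0.0, pi / 2, pi / 4, -pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)   # CHSH combination, |S| = 2*sqrt(2)
p_quantum = 0.5 + abs(S) / 8
print(p_quantum)  # ≈ 0.8536 = cos^2(pi/8)
```

The gap between 0.75 and roughly 0.854 is exactly what Bell-test experiments probe.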
 
  • #97
A. Neumaier said:
Yes. But what is the problem here? It is in this respect no different from the very ordinary fact that casting a die always gives very precise numbers.

Yes, but for two very distant throws of the dice to always give the SAME numbers is pretty weird.

A. Neumaier said:
The only seemingly startling fact in entanglement experiments is that the quantum probabilistic model predicts 100% correlations between the unpredictable results of Alice and Bob. But conceptually, this is no more startling than what happens if one records, together with the value of each die cast (Alice), also the value of the invisible face (Bob) of the same die [aka entangled photon pair]. Comparing the predictions of the classical stochastic model of the die with the observations of Alice and Bob gives a perfect prediction of 100% correlations: the values of Alice and Bob add up to 7 in the classical analogue.

Yes, that makes it more understandable, but has the undesirable quality that it's based on a falsehood. That's explaining the correlation in terms of hidden variables, which are inconsistent with quantum predictions.

A. Neumaier said:
So once it is accepted that the entangled photon pair is a conceptual unity of the same kind as a die (and indeed careful preparation avoiding decoherence is needed to ensure the former!), the analogy is complete.

I don't agree. Your analogy would make sense if you can imagine that for every possible choice of measurement angle, there is a corresponding "Alice end" and "Bob end" of the dice. But that's the sort of predetermined result that Bell's inequality proves is impossible. So relying on this analogy seems to me to be relying on something that's provably false.
 
  • #98
kmm said:
I remember how weird it was to me when I learned that, in a vacuum, a feather and hammer would fall at the same rate

Experiments like that are very straightforward and, to me, "not weird": they follow basic logic. But at the micro level, things behave differently than an intelligent person would logically expect. Logic kind of flies out the window, and we are left with mathematical equations and very complex, diverse ideas that make sense on paper, but mathematics is a language that some of us, unfortunately, don't speak very well. And these ideas have no commonsensical equivalent at the macro level. That mathematical language is very hard to translate, and I feel we lack the English vocabulary to explain it. Perhaps one day a bright physics professor will write a book with a complementary dictionary explaining, in simple English, the language used to describe these experiments in detail.
 
  • #99
vanhees71 said:
Well, but which observation tells you that they do not behave probabilistically?
They may behave randomly when viewed as one item in the ensemble of all planets and stars, respectively. But we observe a lot of nonrandom, fairly accurate facts about materials and processes of the unique Earth and Sun, and although they are properties of unique quantum objects (namely our Earth and our Sun), many of them are predictable with quantum mechanics to the accuracy we can measure them! I'd call this a startling fact!

Of course there are also a lot of detailed, fairly accurate facts about Earth and Sun that are not predictable by quantum mechanics. But in any classical model of a mechanical system there is also a lot unpredictable, since most of the details depend on their initial conditions (the classical state), which is not fixed by theory and must be learned by observation. Thus this is nothing specific to quantum mechanics. Both theories only predict correlations between past and present observations. And this is what works in the numerical quantum models for the interior of the Sun as well as it works for numerical classical models for water running in a pipe.

So I see nothing intrinsically strange in the foundations of quantum mechanics - one doesn't need to shut up and calculate, but one can calculate and at the same time have a consistent intuition about how to interpret everything!
 
  • #100
A. Neumaier said:
And they are, when - rather than starting with postulates assuming an external classical world - one analyzes the measurement process in terms of statistical mechanics. See, e.g.,
A.E. Allahverdyan et al., Understanding quantum measurement from the solution of dynamical models. Physics Reports, 525 (2013), 1-166. http://arxiv.org/abs/1107.2138

Thanks for the pointer. It's a long paper, with lots of mathematics, but from skimming, what I believe that they are discussing is the way that a system can be a measuring device, and that such devices can be described using ordinary physics (statistical mechanics, since they necessarily involve many, many particles). I can understand that. The measuring device is a complex system in a metastable "neutral state", which then makes a transition into a stable pointer state through interaction with the microscopic quantity that is being measured. That's understandable. It's exactly what happens in classical mechanics, and is the reason that we can get discrete outcomes ("heads" or "tails") from continuous Newtonian dynamics.

But it's the pairing of distant measurement results in a correlated pair such as EPR that is mysterious. Alice's device is in a metastable state, and when it interacts with a spin-1/2 particle, it falls into a stable pointer state. Similarly for Bob's device. But to describe the transition using statistical mechanics seems to make the fact that Alice's and Bob's results are perfectly anti-correlated even more mysterious. If the measurement process is inherently statistical, then how does perfect anti-correlation come about?

The way that people argue that there is nothing mysterious about QM is by showing that the various features (perfect anti-correlation, discrete outcomes to measurements, etc.) have unmysterious analogies in pre-quantum physics. But the different analogies, taken together, are mutually inconsistent. If you understand perfect anti-correlation in terms of "Alice and Bob are seeing opposite sides of the same die", that picture is inconsistent with Bell's theorem. If you understand the measurement process in terms of the decay of a metastable state, that picture is inconsistent with the perfect anti-correlations. Or so it seems to me.

It seems to me that the various ways of explaining away the mystery of QM are akin to trying to prove to somebody that a Möbius strip is actually a cylinder. You point to one section of the strip and say: "There's no twist in this section." You point to another section and say: "There's no twist in this section, either." Then after going over every section, you conclude: "Clearly, there are no twists anywhere. It's a cylinder." But the fact that it's twisted is a nonlocal property; you can always remove the twist from any one section.
 
  • #101
A. Neumaier said:
Of course there are also a lot of detailed, fairly accurate facts about Earth and Sun that are not predictable by quantum mechanics. But in any classical model of a mechanical system there is also a lot unpredictable, since most of the details depend on their initial conditions (the classical state), which is not fixed by theory and must be learned by observation.

But that's exactly the type of nondeterminism that Bell shows cannot serve as an explanation for quantum statistics.
 
  • #102
stevendaryl said:
it's based on a falsehood. That's explaining the correlation in terms of hidden variables
I stated true facts about dice and true facts about entanglement experiments, and made the analogy to show that predicting exact correlations in a quantum setting is in itself not more mysterious than predicting exact correlations in a classical setting.

There is no falsehood anywhere. My analogy shows that your pointing to the exact correlations, in order to debunk my statistical arguments that quantum mechanics is independent of a classical context, is itself an unfounded argument.

stevendaryl said:
for two very distant throws of the dice to always give the SAME numbers is pretty weird.
This does not happen in my analogy; you argue against a straw man.

If you reread what I wrote, you can see that I acknowledged that there is something startling only in the distance of Alice and Bob in the quantum experiment. But as I had explained, this has nothing to do with the foundations! It is a phenomenon of the same kind as the startling fact that a sufficiently accelerated observer reads completely different clock times than a resting one does. But I haven't seen anyone claiming that this means that the basic concepts of general relativity are not sound.

My way of making this intuitively understandable is the realization that a coherent 2-photon state is a single (in these experiments very extended) quantum object and not two separate things, in a similar way as the small, rigid die is a single classical object. The only stretch of imagination needed is then to accept that invisible objects can be as strongly united as small rigid objects of our everyday experience. This is a comparatively minor step of about the same difficulty as accepting length contraction and other well-known classical relativistic effects that are outside our everyday experience. And it is supported by the experimental fact that very extended entangled states are quite fragile objects, easily broken into pieces: The more distant Alice and Bob are, the more difficult it is to ensure that the 2-photon states remain coherent, since decoherence strongly works against it. Once coherence is lost, the statistics of the two photons are completely independent.

Once the possibility of strong unity (this is what the word ''coherence'' conveys) across large distances (and how easy it is to break it) is developed as part of one's intuition, one can get a good intuitive understanding of entanglement phenomena. This is my answer to the weirdness part of your setting.

But I emphasize again that this has nothing to do with problems in the foundations that we had originally discussed in the context of your defense of Landau's statement
TonyS said:
that it is impossible to formulate the basic concepts of quantum mechanics without using classical mechanics
My main arguments were targeted at showing that, today, this statement (which is completely independent from any specific experimental conclusions predicted by the resulting theory) is no longer tenable.
 
Last edited:
  • Like
Likes Student100 and vanhees71
  • #103
stevendaryl said:
If the measurement process is inherently statistical, then how does perfect anti-correlation come about?
The two are not logically contradictory, not even classically, as I showed with the examples of the perfectly correlated die readings, in a much simpler and fully understandable context. That it is possible in principle can be seen from the classical example, the details of how it comes about are of course specific to the quantum experiment. But they are given by the math, which has to (and does) follow the quantum rules - where a single particle is in principle infinitely extended since it is just a semiclassical conceptual simplification of an infinitely extended wave (with a wave function with unbounded support).

For me this is enough to reconcile intuition with the formalism, to the same extent as I can reconcile intuition about relativistic classical effects outside of my experience with what I obtain from calculations. I think it is unreasonable to expect more. A level of intuitive understanding that cannot even be achieved in the classical domain should not be made a requirement for understanding in the quantum domain.
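The claim that intrinsic randomness and perfect anti-correlation are logically compatible can be checked numerically. The sketch below is a hypothetical illustration, not from the thread: it samples singlet-state outcome pairs using the standard quantum joint probability P(opposite outcomes) = cos²((a − b)/2). Each local outcome is 50/50 random, yet at equal settings the two results are always opposite, and the correlation follows E(a, b) = −cos(a − b).

```python
import math
import random

def singlet_pair(a, b, rng):
    """Sample one (Alice, Bob) outcome pair for spin-1/2 singlet
    measurements along angles a and b (radians), using the quantum
    joint probabilities: P(opposite outcomes) = cos^2((a - b) / 2)."""
    alice = rng.choice([+1, -1])                 # each marginal is 50/50 random
    p_opposite = math.cos((a - b) / 2) ** 2
    bob = -alice if rng.random() < p_opposite else alice
    return alice, bob

rng = random.Random(0)

# Equal settings: p_opposite = 1, so the individually random outcomes
# are nevertheless always perfectly anti-correlated.
pairs = [singlet_pair(0.3, 0.3, rng) for _ in range(10_000)]
assert all(a == -b for a, b in pairs)

# Unequal settings: the correlation approaches E(a, b) = -cos(a - b).
theta = math.pi / 3
pairs = [singlet_pair(0.0, theta, rng) for _ in range(100_000)]
E = sum(a * b for a, b in pairs) / len(pairs)
print(E, -math.cos(theta))   # ≈ -0.5 in both cases
```

Note that the sampling recipe itself is nonlocal (Bob's outcome is generated from Alice's), which is exactly the point under debate: Bell's theorem says no local assignment of pre-existing outcomes reproduces these statistics.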
 
  • #104
A. Neumaier said:
The two are not logically contradictory, not even classically, as I showed with the examples of the perfectly correlated die readings, in a much simpler and fully understandable context.

Yes, but in light of Bell's theorem, that sort of "Bertlmann's socks" explanation is known not to work.

Clearing a subject of mystery is worth doing, but not if it requires grasping analogies that are known to be wrong.
 
  • #105
A. Neumaier said:
I stated true facts about dice and true facts about entanglement experiments, and made the analogy to show that predicting exact correlations in a quantum setting is in itself not more mysterious than predicting exact correlations in a classical setting.

I don't think the analogy works, because of Bell's theorem.

I understand that if you roll a classical die, and Alice sees one side, and Bob sees the other, then even though both get a random result, their results are perfectly anti-correlated. But if you try to extend that to the quantum case of two spin-1/2 particles, it doesn't work. Rather than a single die, it's as if you have a different die for every possible detector orientation. But that's the kind of deterministic function of a random "hidden variable" that Bell proves is impossible. So it doesn't seem clarifying to bring up the classical analogy; it just seems like a distraction.
 
Last edited:
