Loophole-free test of local realism via Hardy's violation

  • #1
DrChinese
Science Advisor
Gold Member
Loophole-free test of local realism via Hardy's violation (2024)
Si-Ran Zhao, Shuai Zhao, Hai-Hao Dong, Wen-Zhao Liu, Jing-Ling Chen, Kai Chen, Qiang Zhang, Jian-Wei Pan

"Bell's theorem states that quantum mechanical description on physical quantity cannot be fully explained by local realistic theories, and lays solid basis for various quantum information applications. Hardy's paradox is celebrated to be the simplest form of Bell's theorem concerning its "All versus Nothing" way to test local realism. However, due to experimental imperfections, existing tests of Hardy's paradox require additional assumptions of experimental systems, which constitute potential loopholes for faithfully testing local realistic theories. Here, we experimentally demonstrate Hardy's nonlocality through a photonic entanglement source."

The cited experiment, released a few days ago by a top team, closes both the locality loophole and the detection loophole in a test of Hardy's paradox. The observed value was close to the quantum prediction of 0.0004646 (small, yes, but above zero). The local realistic prediction is strictly <=0, and the highest value consistent with local realism (given the actual results) was 10^-16348, i.e. about 16344 orders of magnitude below the observed value.

This discussion is more about foundations than interpretations, but really touches on both. I have included some references on Hardy's Paradox for those who might be interested in learning more about this particular no-go theorem. Like GHZ, Hardy is an all-or-nothing no-go, and it implies that Quantum Mechanics is both nonlocal AND contextual. In other words: we should reject both locality* and realism.

Wiki: Hardy's Paradox
Detail version: Generalized Hardy's Paradox
Lay version of the above: Generalized Hardy's paradox shows an even stronger conflict between quantum and classical physics
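
For those who want to see Hardy's logic concretely, here is a small NumPy sketch of my own (an illustration, not code from the paper above). Given four measurement directions, Hardy's three "zero" conditions generically pin down the two-qubit state, and the surviving joint probability - the one local realism says must be exactly zero - can then be scanned numerically. The angle parameterization, function names, and the symmetric scan are my own choices.

```python
import numpy as np

def ket(theta):
    """Real qubit state cos(theta)|0> + sin(theta)|1> (the '1' outcome direction)."""
    return np.array([np.cos(theta), np.sin(theta)])

def perp(v):
    """The orthogonal qubit state (the '0' outcome direction)."""
    return np.array([-v[1], v[0]])

def hardy_probability(a1, a2, b1, b2):
    """Build the state obeying Hardy's three zero conditions:
       P(A2=1, B2=1) = 0,  P(A1=1, B2=0) = 0,  P(A2=0, B1=1) = 0,
    then return the 'paradoxical' probability P(A1=1, B1=1)."""
    A1, A2, B1, B2 = ket(a1), ket(a2), ket(b1), ket(b2)
    constraints = np.array([
        np.kron(A2, B2),        # <a2 b2|psi> = 0
        np.kron(A1, perp(B2)),  # <a1 b2perp|psi> = 0
        np.kron(perp(A2), B1),  # <a2perp b1|psi> = 0
    ])
    # |psi> spans the null space of the constraint matrix (found via SVD)
    _, _, vh = np.linalg.svd(constraints)
    psi = vh[-1]
    return (np.kron(A1, B1) @ psi) ** 2

# Crude scan over symmetric settings; local realism demands 0 everywhere.
best = max(hardy_probability(t1, t2, t1, t2)
           for t1 in np.linspace(0, np.pi / 2, 200)
           for t2 in np.linspace(0, np.pi / 2, 200))
print(best)  # approaches Hardy's maximum (5*sqrt(5) - 11)/2 ≈ 0.0902
```

Any local hidden-variable model forces that probability to zero; the experiment above measures it to be positive. Their specific state and settings target the much smaller value 0.0004646 rather than the theoretical maximum, presumably because loophole-free operation constrains the choice of state and settings.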



My take: When cataloging the results of various modern tests of quantum theory and No-Go theorems of locality and/or realism, it is becoming clearer and clearer that a) purely local theories will not match experiment; and b) purely non-contextual (realistic/HV) theories will not match experiment.

Gleason 1957: reject non-contextuality
Bell 1964: reject local realism (some - such as Norsen - call it rejection of locality)
Bell-Kochen-Specker 1966-1967: reject non-contextuality/hidden variables
GHZ 1989: reject local realism (and both locality and realism, depending on your viewpoint)
Hardy 1993: reject local realism (and both locality and realism, depending on your viewpoint)
Leggett 2003: reject realism
PBR 2011: reject epistemic interpretations

Are there any interpretations still standing? ** *** :smile:

Also: In each of the above No-Go works, the related experimental outcomes match traditional Quantum Mechanics - without any regard to Special Relativity at all. My conclusion is that adding relativistic considerations to QM does not in any way change the underlying foundations of essential theory. Otherwise, you would need to account for it in experiments. Clearly, relativistic considerations are unnecessary in all of the various no-go's and related experiments using entangled systems to test the concepts of Bell locality and realism/non-contextuality/hidden variables.

Putting together all of the above:
  • a) "Relativity" is and must be respected in quantum physics; whereas "Locality" is not a feature of QM.
  • b) Quantum Mechanics cannot be completed by hidden variables existing independently of the act of observation (i.e. choice of basis measurement).
  • c) The wave function is "real" in the sense of PBR's "ontic".
There's a lot out there to consider, but it seems like there is some degree of convergence occurring in recent years. Thoughts, comments?

*Of course I mean locality in the Bell sense (separable/factorizable), not in the sense of signal locality. From Nonlocality without inequalities for almost all entangled states of any quantum system (Ghirardi, Marinatto, 2005) ****:

"In the case in which the measurement processes take place at spacelike separated locations, the following condition demanding that all conceivable probability distributions of measurement processes satisfy the factorization property

Pλ (Ai = a, Bj = b, . . . , Zk = z) = Pλ(Ai = a)Pλ(Bj = b). . . Pλ(Zk = z) ∀λ ∈ Λ (2)

is a physically natural one which every hidden variable model is requested to satisfy. This factorizability request is commonly known as Bell’s locality condition [7]. We remark that all “nonlocality without inequalities” proofs aim at exhibiting a conflict between the quantum predictions for a specific entangled state and any local completion of quantum mechanics which goes beyond quantum mechanics itself. In fact, in the particular case in which the most complete specification of the state of a physical system is represented by the knowledge of the state vector |ψi alone, i.e., within ordinary quantum mechanics, the failure of the locality condition of Eq. (2) can be established directly by plugging into it appropriate quantum mechanical observables. Indeed, it is a well-known fact that for any entangled state there exist joint probabilities which do not factorize and, consequently, that ordinary quantum mechanics is inherently a nonlocal theory."
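
To illustrate that last sentence concretely, here is a minimal NumPy check of my own: plug the singlet state and same-axis spin measurements into the factorization condition of Eq. (2), with ##\lambda## taken to be just ##|\psi\rangle##, and the joint probability fails to equal the product of the marginals.

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

up, down = np.eye(2)  # spin-z "up" and "down" outcome states

def joint_p(a, b):
    """P(outcome a on side A, outcome b on side B), via the Born rule."""
    return (np.kron(a, b) @ singlet) ** 2

p_joint = joint_p(up, up)                   # 0.0
p_A = joint_p(up, up) + joint_p(up, down)   # marginal P(A = up) = 0.5
p_B = joint_p(up, up) + joint_p(down, up)   # marginal P(B = up) = 0.5
print(p_joint, p_A * p_B)  # 0.0 vs 0.25: Eq. (2) fails
```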

** Simultaneous observation of quantum contextuality and quantum nonlocality
This simply discusses many of the No-Go's mentioned above.

*** Interestingly (at least to me): Bohmian Mechanics is usually considered a nonlocal hidden variable (deterministic) theory. @Demystifier (one of our resident experts in Bohmian theory) told me once that he considers BM to be contextual. So deterministic *and* contextual! Apparently you can have it both ways! Of course, one of the complaints against the Bohmian side is precisely that no relativistic version has been successfully developed, while ordinary QM does have a relativistic version (QFT).

**** Note that this title is the same as Hardy's original 1993 paper (on the same subject), as well as the same title as Sheldon Goldstein's 1993 paper (also on the same subject). I tried but was unable to locate PDFs of either of these.
 
  • Love
  • Skeptical
  • Informative
Likes bob012345, Lord Jestocost and vanhees71
  • #2
Which interpretation is still standing? The minimal statistical interpretation!
 
  • Like
Likes DrChinese
  • #3
vanhees71 said:
Which interpretation is still standing? The minimal statistical interpretation!
The right question is: which interpretation is ruled out? I am not aware of any interpretation that is ruled out.
 
  • Like
Likes physika, mattt and PeterDonis
  • #4
DrChinese said:
Leggett 2003: reject realism
Can you elaborate a bit, or give a reference? I don't think that realism can be ruled out without further specifications/assumptions.
 
  • Like
Likes pines-demon, physika and mattt
  • #5
Demystifier said:
The right question is: which interpretation is ruled out? I am not aware of any interpretation that is ruled out.
You cannot rule out interpretations, since they are metaphysical, philosophical additions to the minimal interpretation. It's like religion. You cannot rule out any of them based on the scientific criterion of whether a theory or model successfully describes the phenomena or not. For the pure scientific purpose you only need a minimal interpretation, which describes how the mathematical formalism is applied to observed phenomena in Nature.
 
  • Like
Likes Lord Jestocost and martinbn
  • #6
Interpretations can't be ruled out by experiment, but there are "performance metrics" that can be argued over: how consistent an interpretation is, how ambiguous it is, its generalizability to all quantum theories, its simplicity, its contribution to adjacent research projects, its metaphysical commitments/baggage, etc. Some are more subjective than others, but all have been avenues of critique.
 
  • Like
Likes martinbn and vanhees71
  • #7
Yes, but all this is a matter of personal opinion, and sometimes it seems to get very emotional when one discusses it, also in this forum, where I regularly clash with @PeterDonis about what one is allowed to say about interpretations and what not. In my opinion, it's simply a personal decision which interpretation you follow.

If you ask me, the only interpretation of any physical theory must be a minimal one, i.e., one must aim at an interpretation of the mathematical formalism of the theory with as few metaphysical or philosophical additions as possible.

For quantum mechanics, in my opinion, that's the minimal statistical interpretation, defined via the standard (rigged-)Hilbert space formulation:

(1) A quantum system is described by a (separable) Hilbert space.

(2) A quantum state, representing a preparation procedure of a quantum system, is uniquely represented by a statistical operator ##\hat{\rho}##, which is a self-adjoint positive semidefinite operator with trace 1.

(3) An observable is represented by a self-adjoint operator. The possible measurement results (values) of the observable are the eigenvalues of this self-adjoint operator.

(4) A complete set of independent observables is represented by a set of mutually commuting self-adjoint operators ##\hat{A}_1,\ldots,\hat{A}_N## such that the simultaneous eigenspaces ##\text{Eig}(a_1,\ldots,a_N)## are one-dimensional and none of the operators can be expressed as a function of the others.

In the following, ##|a_1,\ldots,a_N \rangle## denotes a complete orthonormal set of simultaneous eigenvectors of such a complete set of observables.

(5) There exists an operator ##\hat{H}##, the Hamilton operator of the system, such that, if ##\hat{A}## represents an observable, then ##\mathring{A}=1/(\mathrm{i} \hbar) [\hat{A},\hat{H}]## represents the time derivative of this observable.

(6) If a system is prepared in the state ##\hat{\rho}## and one measures an observable ##A_1## in a complete set of compatible independent observables (as defined above), then the probability for finding the value ##a_1## when measuring this observable is
$$P(a_1)=\sum_{a_2,\ldots,a_N} \langle a_1,\ldots,a_N |\hat{\rho}|a_1,\ldots,a_N \rangle.$$
That's it concerning the minimal interpretation. I think that's all one needs to apply the formalism to real-world experiments/observations.
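
Two quick concreteness checks of my own (toy examples, not part of the axioms). First, rule (5) in action for a free particle with ##\hat{H} = \hat{p}^2/(2m)##: using ##[\hat{x},\hat{p}] = \mathrm{i}\hbar##,
$$\mathring{x} = \frac{1}{\mathrm{i}\hbar}\left[\hat{x},\frac{\hat{p}^2}{2m}\right] = \frac{1}{\mathrm{i}\hbar}\,\frac{2\mathrm{i}\hbar\,\hat{p}}{2m} = \frac{\hat{p}}{m},$$
as expected. Second, rule (6) evaluated numerically for an arbitrarily chosen example: two qubits prepared in the Bell state ##|\Phi^+\rangle##, complete set ##\{\hat{\sigma}_z\otimes\hat{1}, \hat{1}\otimes\hat{\sigma}_z\}##, and ##P(a_1)## obtained by summing over ##a_2##:

```python
import numpy as np

# rho = |Phi+><Phi+| with |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)

basis = np.eye(2)  # sigma_z eigenvectors |0>, |1>

def P(a1):
    """Rule (6): P(a1) = sum over a2 of <a1 a2| rho |a1 a2>."""
    return sum(np.kron(basis[a1], basis[a2]) @ rho @ np.kron(basis[a1], basis[a2])
               for a2 in range(2))

print(P(0), P(1))  # 0.5 0.5
```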

Most of the additions of other interpretations are due to the uneasiness of many people with the idea that Nature is inherently random, i.e., that the probabilities as defined in the above formalism are fundamental. The main reason for the quibble is that in a sense this "interpretation" of the probabilities violates the principle of sufficient cause, i.e., the idea that there is a cause for the specific result when measuring an observable. In the above formalism there is indeed nothing that in any way lets us know of any cause why, in a measurement, the observed value comes out. All we can know, according to this formalism, are the probabilities for the outcomes, and in this sense the formalism doesn't tell you a property of the single system measured but about an ensemble of equally prepared such systems, because only by repeating the measurement on many such prepared systems can you test, with statistical analyses, whether the probabilities provided by the formalism are correct.

Another quibble for many people is that the above formalism implies that there is no state of a system in which all possible observables take determined values (at least not for the realistic QTs based on observable algebras determined via the representation theory of the spacetime symmetries, which define the algebra of the operators of the usual observables like position, momentum, angular momentum, ..., and allow a construction of the entire formalism for specific realistic systems like elementary particles, electromagnetic waves, atomic nuclei, atoms, molecules, condensed matter, i.e., everything we can observe in Nature).

At the present state of our physics knowledge we don't know whether there's a cause for the outcome of measurements, or whether there's a cause for a radioactive nucleus to decay right at the time it does decay, or whether there is a description of Nature in accordance with all known empirical facts where all observables always take determined values, i.e., whether the probabilistic description of Nature as given by QT must "be considered complete" or whether there's a more comprehensive "realistic description".
 
  • #8
vanhees71 said:
In my opinion, it's simply a personal decision which interpretation you follow.
I agree with this. You are mistaken if you think my reasons for objecting to some of your posts in this subforum are that they are opinions. The subforum guidelines that I referred to state that opinions are to be expected in this subforum since that's what all interpretation discussions come down to.

Those guidelines also state, however, that, precisely because all interpretation discussions are a matter of opinion, one should not make claims that one's preferred interpretation is correct and others are wrong, or other statements along those lines. Which means that statements like this...

vanhees71 said:
If you ask me, the only interpretation of any physical theory must be a minimal one
...are out of bounds here, because you are saying that your preferred approach is the right one and others are wrong. Most of the rest of your post is just defining what you mean by "the minimal statistical interpretation", and that part is fine. But saying that that interpretation "must be" "the only interpretation" is not fine. And those kinds of statements are what I have objected to.
 
  • Like
Likes DrChinese
  • #9
Demystifier said:
Can you elaborate a bit, or give a reference? I don't think that realism can be ruled out without further specifications/assumptions.

Naturally, every No-Go has some assumptions attached. But they are not equally reasonable to all people. More specifically, those assumptions seem to melt under the watchful eyes of supporters of a particular theory or interpretation ruled out by that No-Go. Here, the Leggett Inequality violation rules out nonlocal HV theories such as Bohmian Mechanics. I would expect you to reject their argument, and in fact I'd be disappointed if you didn't. :smile:

1. Experimental Falsification of Leggett's Non-Local Variable Model (2007)
Cyril Branciard, Alexander Ling, Nicolas Gisin, Christian Kurtsiefer, Antia Lamas-Linares, Valerio Scarani

The fact that quantum correlations can be attributed neither to LV nor to communication below the speed of light is referred to as quantum non-locality. While non-locality is a striking manifestation of quantum entanglement, it is not yet clear how fundamental this notion really is: the essence of quantum physics may be somewhere else [2]. For instance, non-determinism is another important feature of quantum physics, with no a priori link with non-locality. ... Bell’s theorem having ruled out all possible LV models, we have to move on to models based on non-local variables (NLV). ... A different such model was proposed more recently by Leggett [6]. This model supposes that the source emits product quantum states ##|\alpha\rangle \otimes |\beta\rangle## with probability density ##\rho(\alpha, \beta)##, and enforces that the marginal probabilities must be compatible with such states ["reasonable" assumptions (2) and (3) presented here] ... What Leggett showed is that the simple requirement of consistency (i.e., no negative probabilities should appear at any stage) constrains the possible correlations, even non-local ones, to satisfy inequalities that are slightly but clearly violated by quantum physics.
[Experiment itself...]
Obviously, there are NLV models that do reproduce exactly the quantum predictions. Explicit examples are Bohmian Mechanics [14] and, for the case of two qubits, the Toner-Bacon model [15]. Both are deterministic. Now, in Bohmian mechanics, if the first particle to be measured is A, then assumption (2) can be satisfied, but assumption (3) is not. This remark sheds a clearer light on the Leggett model, where both assumptions are enforced: the particle that receives the communication is allowed to take this information into account to produce non-local correlations, but it is also required to produce outcomes that respect the marginals expected for the local parameters alone.


I.e. The Bohmian model cannot satisfy both assumptions simultaneously.

2. Other teams have explored this area as well, coming to similar conclusions:

Violation of Leggett-type inequalities in the spin-orbit degrees of freedom of a single photon (2013)

Conclusion: Using Leggett-type inequalities, we have experimentally tested the possible validity of hidden-variable models for the measurement correlations between different degrees of freedom, namely spin and orbital angular momentum, in the case of a photon prepared in a single particle entangled state of these two observables. The measured correlations agree with quantum predictions and hence violate the inequalities in a range of experimental parameters, thus showing with high confidence that for this physical system a wide class of deterministic models that preserve realism, even admitting a possible contextuality of the two observables, disagree with experimental results.
 
  • #10
PeterDonis said:
...are out of bounds here, because you are saying that your preferred approach is the right one and others are wrong. Most of the rest of your post is just defining what you mean by "the minimal statistical interpretation", and that part is fine. But saying that that interpretation "must be" "the only interpretation" is not fine. And those kinds of statements are what I have objected to.
This I don't understand. It's clearly labelled as a personal opinion, and you said that's fine in this subforum. I think your refereeing in this subforum is too strict. It hinders the free discussion of opinions necessary to discuss interpretations at all!
 
  • #11
vanhees71 said:
It's clearly labelled as a personal opinion, and you said that's fine in this subforum.
Please read what I said in context. And please read the subforum guidelines again, since all this is discussed there. You can't just pick out the parts you like and ignore the parts you don't like; the guidelines are to be read as a whole.

vanhees71 said:
I think your refereeing in this subforum is too strict. It hinders the free discussion of opinions necessary to discuss interpretations at all!
Nonsense. Plenty of others in this thread, and in every other thread in this subforum, have no problem at all discussing interpretations without violating the subforum guidelines or getting confused about the difference between describing what an interpretation says and claiming that your preferred interpretation is right and others are wrong.
 
  • Like
Likes Motore
  • #12
Maybe I misunderstand the guidelines. Sorry for that. Please let's stop these fruitless discussions. I want to discuss physics and not some "legal interpretations" of your guidelines. I'm not a lawyer!
 
  • Haha
  • Skeptical
Likes Motore and Demystifier
  • #13
vanhees71 said:
Yes, but all this is a matter of personal opinion
The word "all" is the issue here. Criticisms of established interpretations can be objective and substantive. E.g. Papers like these [1] [2] [3] level substantive charges re/interpretation that cannot be reduced to personal opinion.
one must aim at an interpretation of the mathematical formalism of the theory with as few metaphysical or philosophical additions as possible.

For quantum mechanics, in my opinion, that's the minimal statistical interpretation
Quantum theories will assign probabilities satisfying Kolmogorov's axioms to measurement outcomes, but will also assign probabilities satisfying Kolmogorov's axioms to events that are not measurement outcomes. You could argue that a minimalist interpretation requires an additional rule/commitment not enforced by the formalism: limit probability measures to measurement outcomes. Is it really minimalist then? E.g. consider Hardy's thought experiment [3], which involves a quantum system with the time evolution$$\begin{eqnarray*}|s^+s^-\rangle&\rightarrow&\frac{1}{2}(i|u^+\rangle+|v^+\rangle)(i|u^-\rangle + |v^-\rangle)\\&\rightarrow&\frac{1}{2}(-|\gamma\rangle +|\psi\rangle)\\&\rightarrow&\frac{1}{4}(-2|\gamma\rangle-3|c^+\rangle|c^-\rangle+i|c^+\rangle|d^-\rangle + i|d^+\rangle|c^-\rangle-|d^+\rangle|d^-\rangle)\end{eqnarray*}$$where ##|\psi\rangle = i|u^+\rangle|v^-\rangle + i|v^+\rangle|u^-\rangle + |v^+\rangle|v^-\rangle## is a superposition. We can assign probabilities to the joint measurement outcomes as per usual, but we can also construct a set of histories with the support$$\left[s^+s^-\right]\otimes\begin{cases}\left[\psi\right]&\otimes&\begin{cases}\left[c^+c^-\right]\\\\\left[c^+d^-\right]\\\\\left[d^+c^-\right]\\\\\left[d^+d^-\right]\end{cases}\\\\\ \left[\gamma\right]&\otimes&\left[\gamma\right]\end{cases}$$which asserts properties at an intermediate time between preparation and measurement. QM will return probabilities for these just as readily as it will for measurement outcomes. I find it hard to justify discarding these based on the formalism alone.
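
For concreteness, the branch weights implied by the final state above follow from squaring the stated amplitudes (a quick arithmetic check of mine, assuming the expansion as written):

```python
# Amplitudes read off the final state (coefficients of the 1/4 expansion)
amps = {
    "gamma": -2 / 4,
    "c+ c-": -3 / 4,
    "c+ d-": 1j / 4,
    "d+ c-": 1j / 4,
    "d+ d-": -1 / 4,
}
probs = {k: abs(a) ** 2 for k, a in amps.items()}
print(probs)                # gamma: 0.25, c+c-: 0.5625, the rest: 0.0625 each
print(sum(probs.values()))  # 1.0, so the final state is normalized
```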
 
  • Like
Likes physika
  • #14
I'm not sure I understand the scenario you suggest. If you make measurements in between, these also deliver outcomes whose probabilities are predicted by QT. Then it may be possible to make further measurements on the system, if the first measurements were not destructive. To be able to predict the probabilities, you have to take the influence on the state of the system into account to predict the outcomes of the second ones, and so on. Such a sequence of measurements is in no way excluded by the formalism, and I didn't intend to exclude such scenarios in any sense.
 
  • #15
DrChinese said:
Here, the Leggett Inequality violation rules out nonlocal HV theories such as Bohmian Mechanics. I would expect you to reject their argument, and in fact I'd be disappointed if you didn't. :smile:

1. Experimental Falsification of Leggett's Non-Local Variable Model (2007)
I agree with their argument, but not with your interpretation of it. In the paper they say: "Obviously, there are NLV models that do reproduce exactly the quantum predictions. Explicit examples are Bohmian Mechanics ..."
In the next sentence they also explain that Bohmian mechanics violates assumption (3), which is why Bohmian mechanics is not ruled out by the experiment.

DrChinese said:
I.e. The Bohmian model cannot satisfy both assumptions simultaneously.
Right, it cannot and it does not. That is why the Bohmian model isn't ruled out. The experiment rules out theories in which both assumptions are satisfied simultaneously, but Bohmian theory is not of that kind.

In Bohmian mechanics, as in standard QM, the wave function of the whole system changes upon measurement, so the probabilities change, which is why the assumption (3) is violated.
 
  • Like
Likes DrChinese, vanhees71 and mattt
  • #16
vanhees71 said:
I'm not sure I understand the scenario you suggest. If you make measurements in between, these also deliver outcomes whose probabilities are predicted by QT. Then it may be possible to make further measurements on the system, if the first measurements were not destructive. To be able to predict the probabilities, you have to take the influence on the state of the system into account to predict the outcomes of the second ones, and so on. Such a sequence of measurements is in no way excluded by the formalism, and I didn't intend to exclude such scenarios in any sense.
Each measurement in a sequence of measurements will induce decoherence and hence a consistent assignment of probabilities. But the interesting feature I was trying to highlight is that quantum mechanics lets us assign consistent probabilities to certain intermediate events even if no intermediate measurement is made. The support I showed earlier is consistent even if no intermediate measurement is made.

This amounts to an interesting relaxation of the constraints of the minimalist interpretation. If we, for example, do not see any detectors click in the above experiment, we can infer an annihilation event at an earlier time, whereas a strict minimalist would insist we limit our inferences to macroscopic data.
 
  • Like
Likes vanhees71
  • #17
Of course, you can assign any probabilities to events not being measured, but it's irrelevant for the result of measurements which are really done. Usually you get into contradictions when speculating about the outcome of measurements based on measurements which could have been done "in between" but haven't been done.

What do you mean by "the support I showed earlier is consistent even if no intermediate measurement is made"?

The inference that, supposing you have ideal detectors, if no detectors click then the annihilation of the electron-positron pair occurred, is in no contradiction with minimally interpreted QT, as far as I can see.
 
  • #18
vanhees71 said:
Of course, you can assign any probabilities to events not being measured, but it's irrelevant for the result of measurements which are really done. Usually you get into contradictions when speculating about the outcome of measurements based on measurements which could have been done "in between" but haven't been done.

What do you mean by "the support I showed earlier is consistent even if no intermediate measurement is made"?

The inference that, supposing you have ideal detectors, if no detectors click then the annihilation of the electron-positron pair occurred, is in no contradiction with minimally interpreted QT, as far as I can see.
The contradiction I see is that a claim about the annihilation of the electron-positron pair requires a departure from a minimalist interpretation. A minimalist interpretation makes claims about a microscopic system in terms of macroscopic preparations and tests, not microscopic events in the interim.
 
  • #19
Of course the minimalist interpretation makes predictions for the annihilation of an electron-positron pair. Why shouldn't it?
 
  • #20
vanhees71 said:
Of course the minimalist interpretation makes predictions for the annihilation of an electron-positron pair. Why shouldn't it?
Because the annihilation of an electron-positron pair isn't macroscopic. A minimalist interpretation instead makes predictions about data produced by constant-fraction differential discriminators, time-to-amplitude converters, fast coincidence units, and other macroscopic instruments when performing tests on electron-positron systems.
 
  • #21
The minimal interpretation makes predictions for the probabilities of the outcomes of measurements. Annihilation of a particle and its antiparticle is an observable/measurable process, and minimally interpreted relativistic QFT makes predictions about the probabilities for such processes, e.g., ##\text{e}^+ + \text{e}^- \rightarrow \gamma + \gamma##. It's one of the first exercises one calculates in the QFT 1 lecture.
 
  • #22
The calculation can of course be done by anyone subscribing to any established interpretation. But there is a distinction between a measurable process and a measurement outcome. The minimalist interpretation ascribes meaning only to the latter. Hardy's paradox is resolved by a total rejection of any statements about the pair travelling through the apparatus as real.
 
  • Like
Likes vanhees71
  • #23
DrChinese said:
Also: In each of the above No-Go works, the related experimental outcomes match traditional Quantum Mechanics - without any regard to Special Relativity at all. My conclusion is that adding relativistic considerations to QM does not in any way change the underlying foundations of essential theory. Otherwise, you would need to account for it in experiments. Clearly, relativistic considerations are unnecessary in all of the various no-go's and related experiments using entangled systems to test the concepts of Bell locality and realism/non-contextuality/hidden variables.
There is a logical fallacy here. Like, the trajectory of a falling stone can be explained using traditional Newton's gravity without any regard to GR at all. However, one theory has spooky action at a distance and the other does not.
Same here: NRQM is manifestly non-local while QFT may be treated as local (depends on interpretation), and yet they give the same predictions.
 
  • #24
Morbert said:
The calculation can of course be done by anyone subscribing to any established interpretation. But there is a distinction between a measurable process and a measurement outcome. The minimalist interpretation ascribes meaning only to the latter. Hardy's paradox is resolved by a total rejection of any statements about the pair travelling through the apparatus as real.
That in fact IS the minimal interpretation!
 
  • #25
vanhees71 said:
You cannot rule out interpretations, since they are metaphysical, philosophical additions to the minimal interpretation. It's like religion. You cannot rule out any of them based on the scientific criterion of whether a theory or model successfully describes the phenomena or not. For the pure scientific purpose you only need a minimal interpretation, which describes how the mathematical formalism is applied to observed phenomena in Nature.
You are aware that this is a very personal definition of "the pure scientific purpose"?

And if it were true, I wonder how paradigm shifts happen, like General Relativity. The main reason for Einstein to develop his theory of General Relativity was not some anomaly with observation like Mercury's motion. His main motivation was the unexplained "coincidence" of inertial and gravitational mass being the same according to experiment. So I guess Einstein's introduction of the equivalence principle then wasn't a "pure scientific purpose".

The same goes with the geocentric worldview. The main reason to abandon it was not because the geocentric model couldn't explain observations. Actually, the opposite was true: people couldn't explain how we wouldn't notice anything about the motion of the earth in a heliocentric model according to the physical worldview of that time.

Your notion of "pure scientific purpose" seems naive to me. To say it quantum mechanically: scientific advancement cannot be regarded in isolation as just "mathematical formalism applied to observed phenomena": scientific progress is most of the time contextual.

But I guess we've had this discussion many times before and I don't want to ruin the topic.
 
  • #26
Paradigm shifts happen when phenomena become known that cannot be described with the then-paradigmatic theories. GR was discovered because of the discovery of SR, which became necessary to describe electromagnetic phenomena in a consistent theory. It turned out that this needed a paradigm shift in the sense that the hitherto valid spacetime model had to be modified. This implied that all other phenomena had to be described in accordance with relativity, and when trying to find a description of the gravitational interaction, Einstein figured out that one needs another adaptation of the spacetime model to get a consistent theory implementing the equivalence principle.

With quantum theory it was much the same, i.e., it became necessary to describe phenomena which could not be described with classical physics, starting with the black-body spectrum (Planck 1900), the stability of matter given the existence of atoms and their structure as observed by Rutherford (1911), and the spectral lines emitted by atoms beyond hydrogen (1925, "modern QT" by Born, Jordan, Heisenberg, Schrödinger, and Dirac).

Paradigm shifts have never come from philosophical speculations. They were always triggered by the discovery of new phenomena.
 
  • Like
Likes DrChinese
  • #27
haushofer said:
I wonder how paradigm shifts happen, like General Relativity. The main reason for Einstein to develop his theory of General Relativity was not some anomaly with observation like Mercury's motion. His main motivation was the unexplained "coincidence" of inertial and gravitational mass being the same according to experiment.
It wasn't that simple. While Einstein's "happiest thought of my life" that freely falling objects are weightless was certainly a key insight that started him on the road to GR, he also knew, first, that Newtonian gravity was incompatible with special relativity, and second, that there were anomalies like Mercury's perihelion precession that the Newtonian model could not explain. And all during his development of GR, one of the first things he did with every new version of the field equation he came up with was to check its prediction for Mercury's perihelion precession, to make sure he was on the right track.

Also, as you yourself say, the "coincidence" of inertial and gravitational mass was itself an experimental observation. GR gives a simpler theoretical explanation of this observation than Newtonian gravity does, but it's still an experimental observation.

haushofer said:
The same goes with the geocentric worldview. The main reason to abandon it was not because the geocentric model couldn't explain observations.
Yes, it was. The heliocentric model was not accepted when it was first proposed (by Copernicus), because it did not match observations as well as the geocentric model. It was not until Kepler developed a better heliocentric model using elliptical orbits and based on Tycho Brahe's data that the heliocentric model made more accurate predictions--and then it became accepted. (To be fair, there was also religious dogma involved, which made it harder for the heliocentric model to become accepted even after it made more accurate predictions. But if it hadn't made more accurate predictions to begin with, nobody would have even tried to get it accepted in the face of religious dogma.)
 
  • Like
Likes DrChinese and vanhees71
  • #28
Delta Kilo said:
There is a logical fallacy here. Like, the trajectory of a falling stone can be explained using traditional Newton's gravity without any regard to GR at all. However, one theory has spooky action at a distance and the other does not.
Same here: NRQM is manifestly non-local while QFT may be treated as local (depends on interpretation), and yet they give the same predictions.
The fallacy you point to is applied backwards. If theory X explains all experiments within its scope as well as theory X+Y does, then for that scope theory Y is completely ad hoc and of no use. Newton's theory works pretty well within some specific scope. And, true enough to your example, it does NOT match relativity for many situations where more extreme gravity is involved.

However, I am not familiar with a single entanglement experiment or No-Go theorem in which relativity has any relevance at all. On the other hand, I freely acknowledge that QFT and in general relativistic versions of QM are clearly superior in many types of experiments and applications. Just not in the scope I mentioned. If you know differently, please share. :smile: (Again, I am referring to Bell tests, GHZ, entanglement and the like.)



Note: I am a fan of Korzybski's quote: "The map is not the territory". Theories are either more useful or less useful. But no theory can describe all behavior equally well, because the number of input variables changes from one theory to the next - thereby changing its utility. There are no input variables from relativity that are needed to predict the outcome of an entanglement experiment on - let's say - photon polarization. QM alone provides the best useful predictions. Saying otherwise has no experimental support, and you can see that in virtually any paper on the subject. There is no mention of relativity or QFT at all in the seminal papers on these No-Go's (unless they are demonstrating that QM violates relativistic locality constraints).
 
  • Like
Likes gentzen
  • #29
vanhees71 said:
That in fact IS the minimal interpretation!
I'm ignorant on the following question in this interpretation (sorry if it's off-topic for this thread): Let's say I run a Stern-Gerlach experiment and get the result of spin up. Does your interpretation allow observables that commute with spin to take on values even though they aren't directly observed?

Theoretically, we can know them, but since the minimal interpretation seems more concerned with direct measurement, I'm not sure if it's allowed.
 
  • Like
Likes vanhees71
  • #30
Of course, why not? Nevertheless, you have to prepare them accordingly. E.g., in the non-relativistic theory spin is compatible with, say, momentum, i.e., there's a common basis of (generalized) eigenstates ##|\vec{p},\sigma \rangle## of the spin-##z## component of a silver atom and its momentum, where ##\sigma \in \{-1/2,1/2\}##, with eigenvalues ##\hbar \sigma## of ##\hat{s}_z## and eigenvalues ##\vec{p} \in \mathbb{R}^3## of the momentum operator.

In a Stern-Gerlach experiment you have entanglement between spin and momentum after the silver atom has moved through the magnet, i.e., if you let a silver atom with a pretty well-defined momentum go through the magnet of the SGE, after the magnet it's in a state like ##|\vec{p}_1,1/2 \rangle+|\vec{p}_2,-1/2 \rangle##. If you filter out the Ag atoms with momentum (distribution peaked around) ##\vec{p}_2##, you are left with Ag atoms with momentum (distribution peaked around) ##\vec{p}_1## and ##\sigma_z=+1/2##, i.e., you have prepared both the momentum and the ##s_z## component.
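
A tiny numerical toy of this, of my own making (truncating "momentum" to the two wave packets ##\vec{p}_1, \vec{p}_2##, so the post-magnet state is Bell-like in momentum ⊗ spin):

```python
import numpy as np

p1, p2 = np.eye(2)    # two momentum "packets" as basis states
up, down = np.eye(2)  # spin-z basis

# State after the magnet: (|p1, +1/2> + |p2, -1/2>)/sqrt(2)
state = (np.kron(p1, up) + np.kron(p2, down)) / np.sqrt(2)

# Filter out the p2 beam: project onto |p1><p1| (x) 1 and renormalize
P_p1 = np.kron(np.outer(p1, p1), np.eye(2))
filtered = P_p1 @ state
filtered /= np.linalg.norm(filtered)

print(filtered)  # [1, 0, 0, 0] = |p1> (x) |up>: momentum and s_z both prepared
```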
 
  • Like
Likes romsofia
  • #31
romsofia said:
I'm ignorant on the following question in this interpretation (sorry if it's off-topic for this thread): Let's say I run a Stern-Gerlach experiment and get the result of spin up. Does your interpretation allow observables that commute with spin to take on values even though they aren't directly observed?
Just to emphasize the subtlety in minimalist interpretations: observables represent macroscopic tests, and a quantum system is "a useful abstraction that does not exist in nature, and defined by its preparation" (A. Peres). What a pair of commuting observables implies is that a quantum system can be prepared such that the outcomes of both tests represented by the pair of observables can be known with effective certainty. So when a minimalist defines a system by, say, ##|\vec{p}_1,1/2\rangle##, they are not asserting that the quantum system now has values for spin and momentum. They are asserting that the outcome of a joint spin-momentum test (or either individual test) can be predicted for this quantum system.
 
  • Like
  • Informative
Likes mattt, romsofia and vanhees71
  • #32
If you could prepare a silver atom in the state ##|\vec{p}_1,1/2 \rangle##, then the momentum of this atom has the determined value ##\vec{p}_1## and the spin-##z## component has the determined value ##\hbar/2##. Of course, in reality you cannot prepare such a state, because it's not normalizable to 1, i.e., the momentum must have some uncertainty, which can be made as small as you like but never 0. That's because the momentum operator has a continuous spectrum. Despite this qualification, if a system is prepared in an eigenstate of an operator that represents an observable, then this observable takes the corresponding determined value, i.e., the eigenvalue of this operator.
 
  • #33
vanhees71 said:
If you could prepare a silver atom in the state ##|\vec{p}_1,1/2 \rangle##, then the momentum of this atom has the determined value ##\vec{p}_1## and the spin-##z## component has the determined value ##\hbar/2##. Of course, in reality you cannot prepare such a state, because it's not normalizable to 1, i.e., the momentum must have some uncertainty, which can be made as small as you like but never 0. That's because the momentum operator has a continuous spectrum. Despite this qualification, if a system is prepared in an eigenstate of an operator that represents an observable, then this observable takes the corresponding determined value, i.e., the eigenvalue of this operator.
Yes, the usual caveats about momentum apply. I have a bit more to say, but since we are already quite far from the thread topic I will start a new one.
 
  • Like
Likes vanhees71
  • #34
PeterDonis said:
- It wasn't that simple. - Yes, it was. The heliocentric model was not accepted when it was first proposed (by Copernicus), because it did not match observations as well as the geocentric model. It was not until Kepler developed a better heliocentric model using elliptical orbits and based on Tycho Brahe's data that the heliocentric model made more accurate predictions--and then it became accepted. (To be fair, there was also religious dogma involved, which made it harder for the heliocentric model to become accepted even after it made more accurate predictions. But if it hadn't made more accurate predictions to begin with, nobody would have even tried to get it accepted in the face of religious dogma.)
- I'm not claiming "it was simple"; I'm claiming that observations weren't the only input, and afaik it wasn't Einstein's main reason to find an alternative for Newton's theory of gravity.

- I'm referring to the fact that people could add arbitrary amounts of epicycles to the geocentric model to "explain observations".

But never mind, this is off-topic here.
 
  • Like
Likes bob012345
  • #35
Of course, the development of theories is not simply taking some observations and putting them into a mathematical scheme. It's a complicated interrelation between theory and experiment. E.g., the motivation for Einstein to develop GR was of course that after the electromagnetic interactions (Einstein 1905) and classical mechanics (Planck 1906) had been successfully "translated" to the relativistic spacetime model, the gravitational interaction also had to be somehow incorporated into relativity. Characteristically for Einstein in his younger years, he immediately found out that the one specific empirical fact about the gravitational interaction was the equivalence principle, i.e., that in small enough regions of space and for sufficiently small time intervals the gravitational interaction is equivalent to choosing a (local) non-inertial frame of reference. Working out what this really means mathematically in the context of relativistic spacetime then took him 10 years and finally led to GR.
 
