First loophole-free Bell test?

In summary, the first loophole-free Bell test was reported in 2015 by Hensen et al. at Delft University of Technology, using entangled electron spins in diamond nitrogen-vacancy (NV) centers 1.3 km apart, linked by entanglement swapping. This test provided the strongest evidence yet for the validity of quantum entanglement and the violation of local realism. The results of this experiment have important implications for our understanding of the fundamental nature of reality.
  • #36
Superdeterminism and retrocausality first strike me as ridiculous interpretations. But I'm not 100% ready to say that they are nonsense. The reason we view these as ridiculous is because of our intuitions about the asymmetry between past and future. But physics doesn't really have a good explanation for that asymmetry that isn't ad hoc.
 
  • #37
stevendaryl said:
Also, just in case you care about insulting people to their face, Nikolic is a regular, and well-respected, participant in this forum (where he uses a pseudonym).
I do care about that. But I am not trying to insult any people. I'm just telling you my personal reaction to certain ideas. Maybe this just shows that I didn't work hard enough yet to understand those ideas. (And BTW I am just a mathematician / statistician, not a physicist, so perhaps not qualified to say much here at all).
 
  • #38
atyy said:
You were good to give such a serious criticism. But did you read Scott Aaronson's hilarious commentary on your criticism?

"Now, as Gill shows, Joy actually makes an algebra mistake while computing his nonsensical “correlation function.” The answer should be -a.b-a×b, not -a.b. But that’s truthfully beside the point. It’s as if someone announced his revolutionary discovery that P=NP implies N=1, and then critics soberly replied that, no, the equation P=NP can also be solved by P=0." http://www.scottaaronson.com/blog/?p=1028

Yes. Well that's the interesting thing about Christian's "theory". He brings up some really interesting topics, such as quaternions and how they relate to the 3-sphere, and so forth. It's easy to get lost in those topics (I personally spent a lot of time getting up to speed on what Joy Christian was talking about.) But the bottom line is that no matter how interesting his model is, what he's claiming to do is ridiculous, and that can be shown in one line: Bell proved that there can be no functions [itex]A(\lambda, \alpha), B(\lambda, \beta)[/itex] that take values in [itex]\{ +1, -1 \}[/itex] satisfying blah, blah, blah, and Christian is constructing quaternion-valued functions. No matter how interesting his functions, they can't possibly refute Bell.
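To make the one-line argument concrete, here is a minimal sketch (purely illustrative, not taken from Bell's papers or from Christian's): draw arbitrary ±1-valued local functions [itex]A(\lambda, \alpha)[/itex], [itex]B(\lambda, \beta)[/itex] over a finite hidden-variable space and check that the CHSH combination never exceeds 2, whereas quantum mechanics reaches [itex]2\sqrt{2}[/itex].

[code]
# Illustrative sketch only: for ANY +/-1-valued local functions A(lambda, a), B(lambda, b),
# the CHSH quantity S = E(a1,b1) + E(a1,b2) + E(a2,b1) - E(a2,b2) satisfies |S| <= 2.
# Quantum mechanics predicts up to 2*sqrt(2) ~ 2.83 for suitable settings.
import random

random.seed(0)

def random_local_model(n_lambda=16):
    """Arbitrary +/-1-valued outcome functions over a finite hidden-variable space."""
    A = {(lam, a): random.choice([-1, 1]) for lam in range(n_lambda) for a in ("a1", "a2")}
    B = {(lam, b): random.choice([-1, 1]) for lam in range(n_lambda) for b in ("b1", "b2")}
    return A, B, n_lambda

def chsh(A, B, n_lambda):
    """Exact CHSH value, with the hidden variable uniformly distributed."""
    E = lambda a, b: sum(A[(lam, a)] * B[(lam, b)] for lam in range(n_lambda)) / n_lambda
    return E("a1", "b1") + E("a1", "b2") + E("a2", "b1") - E("a2", "b2")

worst = max(abs(chsh(*random_local_model())) for _ in range(10000))
print(f"largest |S| over 10000 random local models: {worst:.3f} (local bound: 2)")
[/code]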
 
  • Like
Likes Ilja and gill1109
  • #39
gill1109 said:
(And BTW I am just a mathematician / statistician, not a physicist, so perhaps not qualified to say much here at all).

Bell wasn't qualified to do statistics either :)

BTW, do statisticians consider Pearl's work statistics, or something else? (Sorry, I know many others worked on it, but for biologists, that's maybe the most famous name.)
 
  • #40
atyy said:
Bell wasn't qualified to do statistics either :)

BTW, do statisticians consider Pearl's work statistics, or something else? (Sorry, I know many others worked on it, but for biologists, that's maybe the most famous name.)
Well, Pearl comes from computer science, but his work has had a big impact on statistics.

I think Bell's *statistical* insights and understanding were really good. Way above those of most of his colleagues. There were so many misunderstandings of what he'd done in the early years, due to a lack of statistical sophistication on the part of most of the physicists discussing his results.
 
  • Like
Likes atyy
  • #41
gill1109 said:
- solipsistic hidden variables - objective reality exists and is local, but objective reality describes only the observers, not the observed objects (H. Nikolic, http://xxx.lanl.gov/abs/1112.2034 )

Sounds like a word game to me.

FYI: This paper was written by Demystifier. Despite the interpretation, he is actually a Bohmian. But one of the few that actually considers other interpretations. So you can talk to him. :smile:
 
  • Like
Likes Demystifier
  • #42
It's impossible to discuss the significance of this experiment, and especially what questions are still open, without reference to various interpretations.

However, it would be a virtuous and good thing (yes, I know, my daughters have repeatedly explained to me that "virtuous and good" is parent-speak for "boring") if we could keep this thread from turning into a debate on the merits of the various interpretations. That's not a question that can be settled here.
 
  • #43
stevendaryl said:
Superdeterminism and retrocausality first strike me as ridiculous interpretations. But I'm not 100% ready to say that they are nonsense. The reason we view these as ridiculous is because of our intuitions about the asymmetry between past and future. But physics doesn't really have a good explanation for that asymmetry that isn't ad hoc.

Superdeterminism and Retrocausality really should not be grouped together. Out of respect for Nugatory's comment about discussing interpretations in this thread, I will leave it at that.
 
  • #44
DrChinese said:
Superdeterminism and Retrocausality really should not be grouped together. Out of respect for Nugatory's comment about discussing interpretations in this thread, I will leave it at that.

They seem very similar to me. It seems to me that a retrocausal theory, with back-and-forth influences traveling through time, can be reinterpreted as a superdeterministic theory, where the initial conditions are fine-tuned to get certain future results. In the end, you have a fine-tuned correlation between initial conditions and future events, and retrocausality would be a mechanism for achieving that fine-tuning.
 
  • #45
DrChinese said:
FYI: This paper was written by Demystifier. Despite the interpretation, he is actually a Bohmian. But one of the few that actually considers other interpretations. So you can talk to him. :smile:

Some of my best friends are Bohmians!
 
  • Like
Likes Ilja
  • #46
As I see it, there are two kinds of loopholes:

The proper loopholes, which specify some physical mechanism, like the detection loophole, the coincidence loophole, or the memory loophole. As far as I can see, they will all be closed now (I hope Hensen et al. are still running the experiment in order to increase the sample size and reduce the p-value). A toy illustration of how the p-value shrinks with the number of trials is sketched below.

Then you have the metaphysical loopholes, which cannot even in principle be falsified by experiments. I have to side with Popper on this one: it's not science.
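To make the sample-size point concrete, here is a toy calculation (purely illustrative; it is not the statistical analysis of the Delft paper, which handles memory effects more carefully). Treat each event-ready trial as one round of the CHSH game, which any local hidden variable model wins with probability at most 0.75 while quantum mechanics predicts roughly 0.85. The assumed 80% observed win rate below is made up; the point is simply that the p-value shrinks rapidly as the number of trials grows.

[code]
# Toy illustration only (not the analysis of the Delft paper): how a one-sided
# binomial p-value for a CHSH-game test shrinks as the number of event-ready
# trials grows. Local hidden variables win each round with probability <= 0.75;
# the assumed observed win rate of 0.80 is made up purely for illustration.
from math import comb

def p_value(n_trials, n_wins, p_local=0.75):
    """P(at least n_wins wins in n_trials | win probability <= p_local)."""
    return sum(comb(n_trials, k) * p_local**k * (1 - p_local)**(n_trials - k)
               for k in range(n_wins, n_trials + 1))

for n in (50, 245, 1000):
    wins = round(0.80 * n)   # hypothetical 80% win rate
    print(f"trials = {n:4d}, wins = {wins:4d}, p-value <= {p_value(n, wins):.1e}")
[/code]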
 
Last edited:
  • Like
Likes zonde
  • #47
stevendaryl said:
Superdeterminism and retrocausality first strike me as ridiculous interpretations. But I'm not 100% ready to say that they are nonsense. The reason we view these as ridiculous is because of our intuitions about the asymmetry between past and future. But physics doesn't really have a good explanation for that asymmetry that isn't ad hoc.

DrChinese said:
Superdeterminism and Retrocausality really should not be grouped together. Out of respect for Nugatory's comment about discussing interpretations in this thread, I will leave it at that.

Hello DrChinese, if you could please elaborate more on this in the thread below I would appreciate it.
https://www.physicsforums.com/threads/is-retrocausality-inherently-deterministic.829758/

Heinera said:
As I see it, there are two kinds of loopholes:

The proper loopholes, which specify some physical mechanism, like the detection loophole, the coincidence loophole, or the memory loophole. As far as I can see, they will all be closed now (I hope Hensen et al. are still running the experiment in order to increase the sample size and reduce the p-value).

Then you have the metaphysical loopholes, which cannot even in principle be falsified by experiments. I have to side with Popper on this one: it's not science.

I couldn't agree more with this statement, as I feel that contemplating any sort of local hidden variable "superdeterministic" conspiracy theory is a complete waste of time and energy.
 
  • #48
stevendaryl said:
They seem very similar to me. It seems to me that a retrocausal theory, with back-and-forth influences traveling through time, can be reinterpreted as a superdeterministic theory, where the initial conditions are fine-tuned to get certain future results. In the end, you have a fine-tuned correlation between initial conditions and future events, and retrocausality would be a mechanism for achieving that fine-tuning.

There is no need for fine-tuning in time-symmetric/retrocausal interpretations, any more than there is fine-tuning in Bohmian or MW interpretations. All predict that the full universe of events follows the statistics of QM. Superdeterminism posits that there is a subset of events which match the stats of QM but the full universe does not.
 
  • #49
stevendaryl said:
Are you saying that every electron produced is eventually detected? (Or a sizable enough fraction of them?)

Yes, there is no loss on that side to speak of. As I see it, essentially the same pair of distant electrons (I guess really from the same pair of atoms) are being entangled over and over again, 245 times in this experiment. The entanglement itself occurs after* the random selection of the measurement basis for the Bell test is made, and too late to affect the outcome of the measurements (by propagation at c or less).

*Using a delayed-choice entanglement swapping mechanism; see some of the PF threads on that for more information. Or read http://arxiv.org/abs/quant-ph/0201134 and check out figure 1 on page 8. Photons 0 and 3 are replaced in our loophole-free test by electrons. Other than that, it is quite similar in space-time layout. Of course, in the loophole-free version there are some additional pretty cool things going on (literally).
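For anyone who wants to see the bare linear algebra behind entanglement swapping, here is a minimal four-qubit toy sketch (illustrative only; in the actual Delft experiment two of the four systems are the distant electron spins, and the Bell-state measurement at station C is done optically on the photons): projecting the two "inner" particles onto a Bell state leaves the two "outer" particles, which never interacted directly, in a Bell state.

[code]
# Toy entanglement swapping with four qubits (0,1,2,3): 0-1 entangled, 2-3 entangled.
# A Bell-state measurement on qubits 1 and 2 leaves qubits 0 and 3 maximally entangled.
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)         # |Phi+> = (|00> + |11>)/sqrt(2)

psi = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)  # indices (q0, q1, q2, q3)

# Project qubits 1 and 2 onto |Phi+> (one of the four possible Bell outcomes at station C)
bell_12 = phi_plus.reshape(2, 2)
post = np.einsum('abcd,bc->ad', psi, bell_12.conj())   # unnormalized state of q0, q3

prob = np.sum(np.abs(post) ** 2)                       # probability of this outcome
post = post.reshape(4) / np.sqrt(prob)                 # normalized state of q0, q3

print("P(Phi+ outcome at C)        :", round(float(prob), 3))                  # 0.25
print("overlap of q0,q3 with |Phi+>:", round(float(abs(post @ phi_plus)), 3))  # 1.0
[/code]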
 
  • #50
Let me see if I understand this right. Alice and Bob pick their settings, perform their measurements. During the process a photon is emitted from their respective electrons. Both photons are sent to station C. At station C, "entanglement swapping" (aka post-processing) is performed to decide if "state-preparation" was successful. They successfully "prepare the state" with a success probability of 6.4e-9! Only 245 successful "preparations" out of many millions of trials.

Maybe it's the wine I drank before reading the paper, but it looks to me like a detection loophole experiment done in reverse, then misinterpreted. I'll have to read it again in the morning. Has this thing even been peer-reviewed? Have any of you read it carefully?
 
  • #51
billschnieder said:
Let me see if I understand this right. Alice and Bob pick their settings, perform their measurements. During the process a photon is emitted from their respective electrons. Both photons are sent to station C. At station C, "entanglement swapping" (aka post-processing) is performed to decide if "state-preparation" was successful.
You have misunderstood the process. Look at figure 2a in the paper. First a photon is emitted by the NV center and sent to station C, and only a moment later is the basis selected.
 
  • #52
billschnieder said:
Let me see if I understand this right. Alice and Bob pick their settings, perform their measurements. During the process a photon is emitted from their respective electrons. Both photons are sent to station C. At station C, "entanglement swapping" (aka post-processing) is performed to decide if "state-preparation" was successful. They successfully "prepare the state" with a success probability of 6.4e-9! Only 245 successful "preparations" out of many millions of trials.

Maybe it's the wine I drank before reading the paper, but it looks to me like a detection loophole experiment done in reverse, then misinterpreted. I'll have to read it again in the morning. Has this thing even been peer-reviewed? Have any of you read it carefully?
I have read it very carefully. The experiment has been under preparation for two years, and a stream of peer-reviewed publications has established all the components of the experiment one by one: http://hansonlab.tudelft.nl/publications/ . The design of the present experiment was announced half a year ago. Two years ago, I believe, they already did this with 1.5 metre separation.

Please take a look at Bell's (1981) "Bertlmann's socks", discussion of an experiment around figure 7. With the three locations A, B, C. This is exactly the experiment which they did in Delft.

The idea of having so-called "event-ready detectors" through entanglement swapping has been known since 1993 http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.71.4287

‘‘Event-ready-detectors’’ Bell experiment via entanglement swapping
M. Żukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert
Phys. Rev. Lett. 71, 4287 – Published 27 December 1993

It's true that Alice and Bob are doing those measurements again and again and all but a tiny proportion of their attempts are wasted. They don't know in advance which measurements are the good ones, which ones aren't (because by the time a message arrived from the central location saying that this time it's the real thing, they would already be half finished with the measurement they are doing at that moment).

So there is a "post-selection" of measurement results. But the space-time arrangement is such that it cannot influence the settings being used for the measurements. The computer does the selection retrospectively, but effectively it was done in advance.
 
Last edited:
  • #53
gill1109 said:
Some of my best friends are Bohmians!
They believe in you even when they don't see you. :biggrin:
 
  • Like
Likes jerromyjon, Nugatory and atyy
  • #54
zonde said:
You have misunderstood the process. Look at figure 2a in the paper. First photon is emitted by NV center and sent to station C and only a moment later basis is selected.
Then why are they randomly switching between two different microwave pulses in order to generate the photons entangled with the electrons? Why not just a single pulse? It seems to me the settings are the microwave pulses, and those are set before the photons are emitted. The readout happens later, but the emitted photons already know the settings. How do they avoid setting-dependent post-selection at C?
 
  • #55
gill1109 said:
Please take a look at Bell's (1981) "Bertlmann's socks", discussion of an experiment around figure 7. With the three locations A, B, C. This is exactly the experiment which they did in Delft.
That's a stretch. Bell's event-ready setup involves a third signal to Alice and Bob that an entangled pair was emitted. In this case, Alice and Bob's particles are not entangled to begin with. But photons from A and B are used at station C to select the sub-ensemble of results that correspond to entanglement. No "event-ready" signal is ever sent to A and B.
So there is a "post-selection" of measurement results. But the space-time arrangement is such that it cannot influence the settings being used for the measurements. The computer does the selection retrospectively, but effectively it was done in advance.
Maybe, that's what is not so clear to me. Are the "settings" the microwave pulses P0 and P1, driven by RNGs?
 
  • #56
gill1109 said:
Please take a look at Bell's (1981) "Bertlmann's socks", discussion of an experiment around figure 7. With the three locations A, B, C. This is exactly the experiment which they did in Delft.

You had mentioned "Bertlmann's socks" before. I'm familiar with that essay by Bell, but I always took his device, with the boxes and settings, to be an intuitive abstraction of the EPR experiment. I never thought of it as a serious proposal for an actual experiment.
 
  • #57
Their p-value is 0.04. Why is that loophole-free?
 
  • #58
Heinera said:
Then you have the metaphysical loopholes, that cannot even in principle be falsified by experiments. I have to side with Popper on this one: It's not science.
I disagree. A loophole is a loophole. One has to close it.

In the case of a metaphysical loophole, it is closed by accepting, as a sort of axiom or fundamental principle, some postulate which prevents it. This postulate, taken alone, cannot be tested by observation.

But this does not make such a postulate unphysical, not even from Popper's point of view. Popper recognized from the start that not every particular statement of a physical theory can be tested, and that one needs the whole theory to get predictions about real experiments. Then, answering Quine's holism, which claims that a single theory is not enough and that the whole of physics is necessary to make experimental predictions, he recognized even more: often even a whole theory taken alone is not sufficient to derive any nontrivial, falsifiable prediction. Each real experiment depends on a lot of different theories - in particular, theories about the accuracy of all the measurement instruments involved.

Just as an example that even famous theories taken alone do not give anything, take GR. Whatever the observed distribution of matter, and whatever the gravitational field, by defining dark matter as [itex]T_{mn}^{dark} = G_{mn}-T_{mn}^{obs}[/itex] the Einstein equations of GR can be forced to hold exactly. One needs additional assumptions about the properties of dark matter to derive anything from the Einstein equations. Otherwise, all that is predicted by GR is nothing more than what is predicted by all metric theories of gravity - namely, that what clocks measure may be described by a metric.

The point that makes it unnecessary to accept Quine's holism is that one can test the several theories involved in each actual experiment in other, independent experiments. This is, in particular, how one solves the problem of theories about measurement devices. You can test the measurement devices in completely different experiments, and this is what is done with real experimental devices. Say, their accuracy can be tested by comparison with other devices, or (for the most accurate ones) by comparing several devices of the same type against each other.

But even if we can reject Quine's holism, the other extreme, that single principles taken alone should be falsifiable, is nonsensical too.

But if we cannot test them taken alone, why should we accept them? There are some good reasons for accepting them.

For example, compatibility: even if we have no TOE, a principle may be compatible with all the best available theories. Another point is what the consequence of rejection would be: it could be that, once a principle is rejected, one would have to give up doing science, because if the rejection were taken seriously, no experiment could tell us anything nontrivial. Superdeterminism would be of this type. Similarly for a rejection of Reichenbach's principle of common cause: once it is rejected, there would no longer be any justification to ask for a realistic explanation of observed correlations. The tobacco lobby would be happy (no need to explain correlations between smoking and cancer), and astrologers too, because the discussion about astrology would be reduced to statistical facts about correlations - are correlations between the positions of planets and various things in our lives significant or not - and the major point, that there is no causal explanation for such influences, would disappear.

So, there are possibilities for strong arguments in favour of physical principles, even if they, taken alone, cannot be tested.
 
  • #59
billschnieder said:
That's a stretch. Bell's event-ready setup involves a third signal to Alice and Bob that an entangled pair was emitted. In this case, Alice and Bob's particles are not entangled to begin with. But photons from A and B are used at station C to select the sub-ensemble of results that correspond to entanglement. No "event-ready" signal is ever sent to A and B.
Maybe, that's what is not so clear to me. Are the "settings" the microwave pulses P0 and P1, driven by RNGs?
Sure, Bell was thinking of signals going from C to A and B. Now we have the opposite. But the end result is the same. There is a signal at C which says that at a certain time later it is worth doing a measurement at A and at B. We use the "go" signals at C to select which of the A and B measurements go into the statistics. The end result is effectively the same.
 
  • #60
stevendaryl said:
You had mentioned "Bertlmann's socks" before. I'm familiar with that essay by Bell, but I always took his device, with the boxes and settings, to be an intuitive abstraction of the EPR experiment. I never thought of it as a serious proposal for an actual experiment.
If you look at several other papers by Bell around the same time you will see that he was very very serious about finding a three-particle atomic decay so that one of the particles could be used to signal that the other two were on their way. Remember that Pearle's detection loophole paper was 10 years earlier. Bell well understood the problem with the experiments (like Aspect's) which were starting to be done at that time, where there was no control at all of when particles got emitted / measured.
 
  • #61
Ilja said:
Superdeterminism would be of this type. Similarly for a rejection of Reichenbach's principle of common cause: once it is rejected, there would no longer be any justification to ask for a realistic explanation of observed correlations. The tobacco lobby would be happy (no need to explain correlations between smoking and cancer), and astrologers too, because the discussion about astrology would be reduced to statistical facts about correlations - are correlations between the positions of planets and various things in our lives significant or not - and the major point, that there is no causal explanation for such influences, would disappear.

Why would superdeterminism lead to giving up science? Couldn't one be a Bohmian and a superdeterminist? The Bohmian theory would be an effective theory, and the superdeterminist theory would be the true unknowable theory.

Also, couldn't one still make operational predictions if one gives up Reichenbach's principle? In quantum mechanics, we would still be able to say that a preparation and a measurement yield certain correlations. So we would still be able to say that a preparation involving smoking and a measurement involving cancer give the correlations.
 
  • #62
atyy said:
Why would superdeterminism lead to giving up science? Couldn't one be a Bohmian and a superdeterminist? The Bohmian theory would be an effective theory, and the superdeterminist theory would be the true unknowable theory.

Also, couldn't one still make operational predictions if one gives up Reichenbach's principle? In quantum mechanics, we would still be able to say that a preparation and a measurement yield certain correlations. So we would still be able to say that a preparation involving smoking and a measurement involving cancer give the correlations.
The problem with superdeterminism is that it isn't a theory. It doesn't make predictions. It doesn't explain how those correlations come about. It says they come about because the stuff in Alice's lab knows what is going on in Bob's lab, since everything was predetermined at the time of the big bang. You might like to call this theology. I don't call it physics.

Gerard 't Hooft believes that nature is completely deterministic at the Planck scale. He points out that we do not do experiments at the Planck scale. But he does not explain how determinism at this scale allows coordination between Bob's photo-detector and Alice's random number generator ... a subtle coordination which does not allow Alice to communicate with Bob instantaneously over vast distances but does keep their measurement outcomes and measurement settings mysteriously and delicately constrained, without their having any idea that this is going on.
 
  • #63
gill1109 said:
The problem with superdeterminism is that it isn't a theory. It doesn't make predictions. It doesn't explain how those correlations come about. It says they come about because the stuff in Alice's lab knows what is going on in Bob's lab, since everything was predetermined at the time of the big bang. You might like to call this theology. I don't call it physics.

Yes, but is there any problem with believing in it and doing physics?
 
  • #64
atyy said:
Yes, but is there any problem with believing in it and doing physics?
I have no problem with what anyone else wants to believe. As long as weird beliefs don't get in the way of doing physics.

Did any good physics come out of superdeterminism?
 
  • #65
atyy said:
Why would superdeterminism lead to giving up science?

I wouldn't go so far as to say that it's impossible to do science in a superdeterministic universe, but it's a lot harder. We learn about the laws of physics by tweaking conditions and seeing how our observations are changed. To reason about such tweaking, we typically assume that our tweaks are independent variables. To give an example, if you're trying to figure out whether Skittles cause cancer in rats, you give Skittles to some rats and not to others, and compare their cancer rates. If (through some unknown mechanism) you're more likely to give Skittles to cancer-prone rats than to non-cancer-prone rats, then such a test wouldn't say anything about whether Skittles cause cancer. (A toy simulation of this is sketched at the end of this post.)

A superdeterministic explanation of EPR results might go like this: The electron/positron pair have predetermined, fixed spins. Depending on those spins, Alice and Bob are more likely to choose one setting over another. Superdeterminism casts into doubt our notions of what is the "independent variable" in an experiment.

As I said, I don't think superdeterminism necessarily makes science impossible, but it makes it much more difficult to understand what's going on in an experiment.
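To put a number on the Skittles example, here is a toy confounding simulation (my own illustration, unrelated to any of the papers discussed here): Skittles have no effect at all, but if the "choice" of which rats get them is secretly correlated with a hidden cancer-proneness, the naive comparison of the two groups reports an effect anyway. That is precisely the sense in which superdeterminism undermines the idea of an independent variable.

[code]
# Toy confounding simulation for the Skittles-and-rats example. Skittles have no
# effect at all; only a hidden "cancer-proneness" matters. If treatment assignment
# is correlated with that hidden variable, the naive group comparison is fooled.
import random

random.seed(1)
N = 100_000

def observed_effect(p_give_if_prone, p_give_if_not):
    """Naive estimate: cancer rate in the Skittles group minus rate in the control group."""
    counts = {True: [0, 0], False: [0, 0]}            # got Skittles? -> [cases, group size]
    for _ in range(N):
        prone = random.random() < 0.2                 # hidden cause
        skittles = random.random() < (p_give_if_prone if prone else p_give_if_not)
        cancer = random.random() < (0.5 if prone else 0.05)   # independent of Skittles
        counts[skittles][0] += cancer
        counts[skittles][1] += 1
    rate = lambda g: counts[g][0] / counts[g][1]
    return rate(True) - rate(False)

print("independent assignment     :", round(observed_effect(0.5, 0.5), 3))   # ~ 0.00
print("'conspiratorial' assignment:", round(observed_effect(0.8, 0.3), 3))   # ~ +0.15 (spurious)
[/code]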
 
  • #66
gill1109 said:
I have no problem with what anyone else wants to believe. As long as weird beliefs don't get in the way of doing physics.

Did any good physics come out of superdeterminism?

I was only arguing that there is no problem with believing in superdeterminism and being simultaneously a Bohmian and a Copenhagenist.

One needs some philosophy to do physics, e.g. I am not a brain in a vat. Otherwise, quantum mechanics predicts that it is impossible for the Bell inequalities to be violated at spacelike separation.
 
  • #67
stevendaryl said:
I wouldn't go so far as to say that it's impossible to do science in a superdeterministic universe, but it's a lot harder. We learn about the laws of physics by tweaking conditions and seeing how our observations are changed. To reason about such tweaking, we typically assume that our tweaks are independent variables. To give an example, if you're trying to figure out whether Skittles cause cancer in rats, you give Skittles to some rats and not to others, and compare their cancer rates. If (through some unknown mechanism) you're more likely to give Skittles to cancer-prone rats than to non-cancer-prone rats, then such a test wouldn't say anything about whether Skittles cause cancer.

A superdeterministic explanation of EPR results might go like this: The electron/positron pair have predetermined, fixed spins. Depending on those spins, Alice and Bob are more likely to choose one setting over another. Superdeterminism casts into doubt our notions of what is the "independent variable" in an experiment.

As I said, I don't think superdeterminism necessarily makes science impossible, but it makes it much more difficult to understand what's going on in an experiment.

I really don't understand why it would be any harder. Let's suppose our universe is superdeterministic. Experimental evidence shows that we already do science, e.g. we developed and tested the Copenhagen interpretation of quantum mechanics. So if the universe is superdeterministic, experimental evidence shows that we have already overcome the hurdles that superdeterminism poses.
 
  • #68
gill1109 said:
The problem with superdeterminism is that it isn't a theory. It doesn't make predictions. It doesn't explain how those correlations come about. It says they come about because the stuff in Alice's lab knows what is going on in Bob's lab, since everything was predetermined at the time of the big bang. You might like to call this theology. I don't call it physics.

I don't think that's a fair characterization. One could just as well criticize Newton's laws of motion on those grounds: It doesn't make any predictions to say that acceleration is proportional to the force, if you don't know what forces are at work. That's true. Newton's laws are not a predictive theory in themselves, but become predictive when supplemented by a theory of forces (gravitational, electromagnetic, etc.)

The same thing could be true for a superdeterministic theory. Saying that superdeterminism holds doesn't make any predictions, but if you have a specific theory that allows you to derive the superdeterministic correlations, then that would be predictive.
 
  • #69
atyy said:
I really don't understand why it would be any harder. Let's suppose our universe is superdeterministic. Experimental evidence shows that we already do science, e.g. we developed and tested the Copenhagen interpretation of quantum mechanics. So if the universe is superdeterministic, experimental evidence shows that we have already overcome the hurdles that superdeterminism poses.

I would say that the success of science so far (with the way that we currently do experiments) shows that the laws of physics can be usefully approximated by theories that are not superdeterministic.
 
  • #70
stevendaryl said:
I would say that the success of science so far (with the way that we currently do experiments) shows that the laws of physics can be usefully approximated by theories that are not superdeterministic.

And if there is another universe in which superdeterminism prevents science, then, well, we don't live in it. It's a bit like the anthropic principle.
 
