Did they do a Loopholes free Bell test?

In summary, the conversation discusses the progress and challenges in conducting a loopholes-free Bell test. While some experiments have been able to eliminate two out of three loopholes, there are still plans to conduct more comprehensive experiments in the near future. The conversation also touches on the topic of closing the "free will" loophole and its implications in scientific models. Additionally, the conversation mentions the importance of loophole-free Bell violations in the field of quantum cryptography. Finally, there is a discussion on a recent experiment that claims to have achieved a loophole-free Bell violation, although there are some concerns and unanswered questions about the results.
  • #1
Nick666
Did they manage to do a loopholes-free Bell test? The best I got from Google was an article from February that says no; they only did one where 2 out of 3 loopholes were eliminated in a single test.
 
  • #2
I don't think it has been done yet, but I know a number of groups are planning to do such an experiment in the near future (some might already have started). In at least one case I am aware of, the modification to their setup should be more or less trivial (e.g. moving part of their setup to the opposite side of campus), so it shouldn't really be that hard (famous last words...)

Hence, I wouldn't at all be surprised if something is published in the next few months.
 
  • #3
I know about two loopholes. These are closed in two separate experiments:
Violation of Bell's inequality under strict Einstein locality conditions
Bell violation with entangled photons, free of the fair-sampling assumption

The third might be the "free will" loophole, but I'm not sure it can be exploited in any scientific model. Maybe the idea of closing the "free will" loophole is to eliminate the possibility of a poor random number generator.

Closing the communication and fair-sampling loopholes in one experiment might take some time, as they have conflicting requirements. Closing the fair-sampling loophole requires that photons not be lost, so you want the detectors close to the source; but closing the communication loophole requires considerable distance between source and detectors, and that of course increases photon losses unless you can perform the experiment in vacuum.
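To get a feel for why those requirements conflict, here is a back-of-envelope sketch of photon survival versus fibre distance, assuming roughly 0.2 dB/km attenuation (typical for telecom fibre at 1550 nm; losses at other wavelengths are usually worse, so treat the numbers as an optimistic illustration):

```python
# Photon transmission through optical fibre vs. distance.
# Assumes ~0.2 dB/km attenuation (typical telecom fibre at 1550 nm).
def transmission(distance_km, loss_db_per_km=0.2):
    return 10 ** (-loss_db_per_km * distance_km / 10)

for d in (1, 10, 50, 100):
    print(f"{d:>3} km: {transmission(d):.1%} of photons survive")
```

At 100 km only about 1% of photons survive, which is why putting the detectors far apart (for the communication loophole) directly fights the high detection efficiency needed for the fair-sampling loophole.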
 
  • #4
This says that in the same experiment they closed the locality and freedom-of-choice loopholes.
 
  • #5
I thought there was going to be a flood of answers. Isn't this topic the hottest and most important thing in quantum physics right now?
 
  • #6
Nick666 said:
I thought there was going to be a flood of answers. Isn't this topic the hottest and most important thing in quantum physics right now?
Not really. I mean, Bell inequalities are being violated routinely these days. (Almost) Nobody believes that if all loopholes are closed the experimental outcome will be different, everybody still expects to see the same Bell violation. If we did close all the loopholes, and we saw to our surprise that now we don't get any Bell violation, that would mean that quantum theory is wrong since the latter predicts a violation. But nobody believes that the theory is wrong, at least not at the low energies at which Bell experiments are conducted. Therefore, nobody cares!

But some people do care about closing all the loopholes... but for other reasons...
In the field of quantum cryptography, a new sub-field has emerged in the past ten years: so-called Device-Independent Quantum Key Distribution. There it has been shown that a loophole-free violation of a Bell inequality is important so that a secure key can be established between two parties even if the parties' measurement devices are not themselves trusted (e.g. they may have been hacked). Therefore, loophole-free violations do offer great technological advantages.
 
  • #7
JK423 said:
Not really. I mean, Bell inequalities are being violated routinely these days. (Almost) Nobody believes that if all loopholes are closed the experimental outcome will be different, everybody still expects to see the same Bell violation.

So why don't we move on to explanations for the violations: In posts 214 and 219 here:
https://www.physicsforums.com/threads/von-neumann-qm-rules-equivalent-to-bohm.816876/page-11
@vanhess71 shows an explanation (non-local correlations) for the 100% perfect correlations when detector settings are aligned.
From here is there an explanation for some of the Bell inequality violations when detector settings at A and B are not aligned ?
 
Last edited:
  • #8
morrobay said:
So why don't we move on to explanations for the violations: In posts 214 and 219 here:
https://www.physicsforums.com/threads/von-neumann-qm-rules-equivalent-to-bohm.816876/page-11 @ vanhess71 shows an explanation for the 100% perfect correlations when detector settings are aligned. So from here can there be a progression to understanding the violations when detector settings at A and B are not aligned ?
Bell inequalities specify the limit on the correlations that vanhess71-type models can reach when the detector settings at A and B are not aligned. To state it more directly: vanhess71-type models cannot violate Bell inequalities.
 
  • #10
gill1109 said:
http://www.nature.com/news/quantum-spookiness-passes-toughest-test-yet-1.18255 (paper posted on arXiv, not passed peer review yet)
I have not studied the paper in detail, but would like to make some comments based on what the authors write in their article.

First, it was noted in another thread that the probability p = 0.019/0.039 is not very impressive.

Second, authors write: "Our observation of a loophole-free Bell inequality violation thus rules out all local theories that accept ... that the outputs are final once recorded in the electronics." On the other hand, as I wrote here a few times, unitary evolution of quantum mechanics is, strictly speaking, incompatible with final outcomes of measurement, as far as I understand (for example, due to Poincare recurrence). Therefore, the authors' experimental results can only rule out local realistic theories that predict deviations from unitary evolution. For example, the local realistic theories of my article http://link.springer.com/content/pdf/10.1140/epjc/s10052-013-2371-4.pdf (Eur. Phys. J. C (2013) 73:2371) have the same evolution as unitary evolution of some quantum field theories.
 
  • #11
I hope the referees push them to be more clear in their descriptions. There is a lot hidden between the lines, some of which I've already identified in the other closed thread. For example, are the "settings" different from the randomly chosen microwave pulses which generate the entangled photons?

Another description of the experiment, see diagram at https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/). In usual CHSH setups, Alice and Bob each have 2 settings [1,2] which they randomly switch between. In this experiment, that appears to be the microwave pulses. These pulses excite the crystals to produce photons which are entangled with the electrons. Both photons are then sent to station C, where they are post-selected in order to find an ensemble for which the electrons at A and B could be considered as entangled (This is what entanglement swapping is all about, the two electrons being previously unentangled). Some time after the photons have left to be "filtered" at C, but before any signal can travel from C back to A and B, the state of the electrons at A and B are "read-out". Only those results at A and B which correspond to successful filtration at C are kept. Everything else is discarded. This corresponds to a success rate of 6.4e-9.

My suspicion is that the post-processing or "entanglement swapping" will be the key to unlock this experiment.
 
  • #12
billschnieder said:
I hope the referees push them to be more clear in their descriptions. There is a lot hidden between the lines. I've already identified in the other closed thread. For example, are the "settings" different from the randomly chosen microwave pulses which generate the entangled photons?

Another description of the experiment, see diagram at https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/). In usual CHSH setups, Alice and Bob each have 2 settings [1,2] which they randomly switch between. In this experiment, that appears to be the microwave pulses. These pulses excite the crystals to produce photons which are entangled with the electrons. Both photons are then sent to station C, where they are post-selected in order to find an ensemble for which the electrons at A and B could be considered as entangled (This is what entanglement swapping is all about, the two electrons being previously unentangled). Some time after the photons have left to be "filtered" at C, but before any signal can travel from C back to A and B, the state of the electrons at A and B are "read-out". Only those results at A and B which correspond to successful filtration at C are kept. Everything else is discarded. This corresponds to a success rate of 6.4e-9.

My suspicion is that the post-processing or "entanglement swapping" will be the key to unlock this experiment.
The settings are chosen by a quantum RNG (http://arxiv.org/pdf/1506.02712v1.pdf), so it's an "independent" piece of quantum optics / electronics. Personally I would have preferred a state-of-the-art pseudo RNG; I don't know if they can be fast enough. It would be fine by me even if the pseudo-random settings were generated in advance and read from a file as needed. The point is to make it ludicrous (contrary to Occam's razor) that by some kind of conspiratorial and unknown physics, Alice's spin measurement somehow already "knows" Bob's setting.

Entanglement swapping is not "post-processing". The central location (C) preserves a record of which pairs of measurements (at A and B) are interesting to look at. Sure, you only then go and fish out those pairs after the experiment was done. At some point you have to look at the correlations between the experimental records generated at A, B and C. The timing is very delicate and has to be very careful. That's what the referees have to look at closely. But the "marks" saying which ones count were made *before* the corresponding measurements were done.
 
Last edited:
  • #13
gill1109 said:
Entanglement swapping is not "post-processing".
Apologies, I see this is somewhat off tangent from the thread topic itself but I just have to object here. Entanglement swapping is completely post-processing. The correlations between A and B are noticeable and interesting only if you post-select samples where C decided to do a measurement, and then even take into account his measurement result to see if it means correlation or anti-correlation.
 
  • #14
georgir said:
Apologies, I see this is somewhat off tangent from the thread topic itself but I just have to object here. Entanglement swapping is completely post-processing. The correlations between A and B are noticeable and interesting only if you post-select samples where C decided to do a measurement, and then even take into account his measurement result to see if it means correlation or anti-correlation.
I agree that the calculation of correlations is done *post experiment*, but the calculated correlations already exist from the moment that the outcomes at A, B and C exist. And indeed the timing is very important, so you have to rule out not only that A's settings could have influenced B's outcomes, but also that C's "seal of approval" could have influenced A and B's settings. Which the authors do, in their paper.

And C's measurement result, on the basis of which A and B data gets selected, is quite simply: a photon is detected, one in each channel.
 
  • #15
billschnieder said:
My suspicion is that the post-processing or "entanglement swapping" will be the key to unlock this experiment.

Although only repeating what Gill said above: entanglement swapping is "processing", but you shouldn't confuse it with "post-selection". Regardless of how frequently the right circumstances occur, each event-ready occurrence both causes and heralds a Bell pair being created in another spacetime region.

And it would take chutzpah to claim this setup doesn't disprove local realism, when the entanglement is performed non-local to the A and B measurements (and the selection of measurement settings). The local realist, after all, would say that what happens at C can have no bearing on any measurement outcome at A or B. I.e. there is no such physical state as entanglement! So why would one group of A's and B's exhibit measurement correlation differently than another, based on what is done at C? A sample of entangled pairs of electrons gives a different correlation rate (perfect correlations in the ideal case) than an un-entangled sample.
 
  • #16
DrChinese said:
Although only repeating what Gill said above: entanglement swapping is "processing" but you shouldn't confuse it with "post-selection".
You will find that it is indeed post-processing by selection of sub-ensembles. You take four ensembles A,B,C,D of particles, with every particle in A entangled with a sibling in B, and every particle in C entangled with a sibling in D. No entanglement between AB and CD pairs. You measure B and C together and based on the joint result of B and C, you select a subset of A and D that would now be entangled with each other. It is obviously post-selection.

when the entanglement is performed non-local to the A and B measurements (and the selection of measurement settings).
The swapping is done non-local to A and B, but it uses information from A and B to do the post-selection. A key question is whether in this experiment, the microwave pulses are "settings" or not?
 
  • #17
billschnieder said:
The swapping is done non-local to A and B, but it uses information from A and B to do the post-selection. A key question is whether in this experiment, the microwave pulses are "settings" or not?
Yes, microwave pulses are "settings".
But the information from A and B used for post-selection is only about the initial state of A and B, not the later generated measurement settings.
They say: "As can be seen in the spacetime diagram in Fig. 2a, we ensure that this event-ready signal is space-like separated from the random input bit generation at locations A and B."
 
  • #18
gill1109 said:
The settings are chosen by a quantum RNG (http://arxiv.org/pdf/1506.02712v1.pdf), so it's an "independent" piece of quantum optics / electronics. Personally I would have preferred a state-of-the-art pseudo RNG; I don't know if they can be fast enough. It would be fine by me even if the pseudo-random settings were generated in advance and read from a file as needed. The point is to make it ludicrous (contrary to Occam's razor) that by some kind of conspiratorial and unknown physics, Alice's spin measurement somehow already "knows" Bob's setting.
I agree with this. Using a quantum RNG is sort of a begging-the-question fallacy (we assume the quantum RNG behaves as QT says in order to test what QT says).
 
  • #19
Let me describe the traditional Bell-CHSH experiment and the entanglement-swapping version through computer analogies (known as the "Bell game").

Standard Bell game
==================

You control three computers A, B, C
You may write computer programmes for them.
Many times, the following happens:

Computer C sends messages to A and B
No more communication allowed between A, B and C
I toss two fair coins and deliver outcomes (H/T) to A and B
A and B output binary outcomes +/-1

Your aim: when A and B both receive "H" the outcomes should be the same (both +1 or both -1)
When either or both of A and B receive "T" the outcomes should be different (+1 and -1)

We call each of these exchanges a "trial". Each trial, you either win or lose
(you either achieve your per-trial aim or you don't).

The whole game: your overall aim is to "win" statistically significantly more than 75% of the trials (say: 80%, with N pretty large). Bell says it can't be done. (Well - just once in a while it could happen purely by chance, obviously, but you can't write those computer programs so that you systematically win).
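The 75% bound for the standard game can be checked by brute force. Since the computers' programs are fixed before the coins are tossed, C's messages can be absorbed into A's and B's programs, and a deterministic local strategy reduces to two pre-committed outputs per player (one per coin face). A minimal sketch, my own illustration (randomized strategies are probabilistic mixtures of these 16, so they obey the same bound):

```python
from itertools import product

# Brute-force all deterministic local strategies for the standard Bell game.
# A's output can only depend on A's coin (H/T), likewise for B: C's messages
# are determined by the fixed programs, so they add nothing.
# Win condition: outputs are equal iff both coins came up H.
best = 0.0
for a_H, a_T, b_H, b_T in product((+1, -1), repeat=4):
    wins = sum(
        (a == b) == (ca == cb == "H")   # won this coin combination?
        for ca, a in (("H", a_H), ("T", a_T))
        for cb, b in (("H", b_H), ("T", b_T))
    )
    best = max(best, wins / 4)          # 4 equally likely coin combinations

print(best)  # 0.75 -- no local strategy wins more than 3 trials in 4
```

Trying to satisfy all four coin combinations at once leads to a contradiction (you need a_T different from itself), which is exactly Bell's point in miniature.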

Modified Bell game
=================

You control three computers A, B, C
You may write computer programmes for them.
Many times, the following happens:

Computers A and B send messages to C
No more communication allowed between A, B and C
I toss two fair coins and deliver outcomes (H/T) to A and B
A and B output binary outcomes +/-1
C delivers a binary outcome "go"/"no-go"

Your aim: when C outputs "go" *and* A and B both receive "H" the outcomes should be the same (both +1 or both -1)
When C outputs "go" *and* either or both of A and B receive "T" the outcomes should be different (+1 and -1)

We call each of these exchanges a "trial". Each trial in which C says "go", you either win or lose
(you either achieve the aim or you don't).

The whole game: your aim is to "win" statistically significantly more than 75% of the trials for which C said "go" (say: 80%, with N pretty large). Bell says it can't be done. (Well - just once in a while it could happen purely by chance, obviously, but you can't write those computer programs so that you systematically win).
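The same brute force extends to the modified game. The key observation: for a deterministic local strategy, the messages from A and B, and hence C's go/no-go verdict, are fixed before the coins are tossed, so conditioning on "go" only keeps trials that each still obey the 3/4 bound. A sketch under that assumption (my own illustration):

```python
from itertools import product

# Modified game: A and B message C *before* the coins are tossed, and C
# outputs go/no-go. For deterministic programs the messages, and hence C's
# verdict, are fixed before the coins, so "go" trials are just a pre-coin
# subset, each still subject to the standard 3/4 bound.
best = 0.0
for go, a_H, a_T, b_H, b_T in product((True, False), *[(+1, -1)] * 4):
    if not go:
        continue                        # "no-go" trials are not scored
    wins = sum(
        (a == b) == (ca == cb == "H")
        for ca, a in (("H", a_H), ("T", a_T))
        for cb, b in (("H", b_H), ("T", b_T))
    )
    best = max(best, wins / 4)

print(best)  # still 0.75: a pre-coin "go" verdict cannot help
```

This is the local-realist reading of the event-ready scheme: since the "go" signal is space-like separated from the setting choices, it cannot depend on the coins, and the bound survives the selection.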
 
  • #20
Gill, are you trying to formulate a "Dr. Chinese challenge" for the swapping experiment?
I may be misunderstanding either you or the entanglement-swapping experiment itself, but I don't think your "games" are a good description of what happens. Remember, even though A and B do not have anything shared between them, they do each have an entangled pair shared with C.
 
  • #21
With any random pair of binary data there is either correlation or anti-correlation. C is able to compare both and say what it is going to be. C does not literally create the entanglement, and his actions do not have some magical effect that changes anything between A and B.

Edit: Still, I am not saying this can be explained classically or with hidden variables. It may still very well violate some inequality or something, just because the entanglement that exists in the A-C and B-C pairs violates it. But not because C does anything to A and B.
 
  • #22
georgir said:
Gill, are you trying to formulate a "Dr. Chinese challenge" for the swapping experiment?
I may be misunderstanding either you or the entanglement swapping experiment itself, but I don't think your "games" are good description of what happens. Remember, even though A and B do not have anything shared between them, they do each have an entangled pair shared with C.
I think these two games are a good description of what "local realism" allows to be happening, both in a traditional Bell-CHSH experiment (particles leave the source C and go to two locations A and B) and in the new generation of experiments using entanglement swapping (particles leave locations A and B and meet at C, where they are measured and where a selection occurs of "favourable" situations). The whole point is that under local realism we can't expect a better than 75% success rate, in either game. However, replace my computers A, B and C with quantum devices and quantum communication, and in theory we could have an 85% success rate. (The Delft experiment had an 80% success rate.)

Obviously, once we have selected pairs of particles at A and B on the basis of a particular outcome of some measurement of particles at C which came from A and B, we can create statistical dependence between subsequent measurement outcomes at A and B. The exercise to the student is to understand why, for both games, 75% is the best success rate you can hope for. Do it for the more simple (conventional) game first. Then figure out how to adapt your solution to the newer game.

Of course this doesn't help you understand how quantum mechanics can be harnessed to achieve an 85% success rate. The whole point of Bell's theorem is that there is no way we can understand this in traditional terms, i.e. with local hidden variables, and free of conspiracy or superdeterminism.

Incidentally, I posed a computer challenge more than 10 years ago first to Bell-denier Accardi, later to Hess and Philipp, later to others. And I figured out how to set up the challenge so I would only have a tiny chance of losing a 5000 Euro bet, even if my opponent used memory and time variation. And this bit of probability theory is what the Delft experimenters are actually using in their "paranoid" analysis of the experiment. http://arxiv.org/abs/quant-ph/0110137, http://arxiv.org/abs/quant-ph/0301059, http://arxiv.org/abs/1207.5103

By the way nobody is saying that what happens at C alters what is going on at A and B. We don't believe in action at a distance. Quantum mechanical entanglement can't be used to send signals faster than the speed of light or even to help increase a classical communication rate ("information causality": Pawlowski et al)
 
Last edited:
  • #23
I am not quite sure how to interpret your game. What do the messages and coins represent?
It is my impression that in order to get a Bell test you need to consider at least 3 different measurement bases.
In most understandable explanations I've seen, you use e.g. 0 deg and +/- 30 deg and have to have 25% difference between 0 and +/-30 but 75% between +/-30, etc.
 
  • #24
georgir said:
I am not quite sure how to interpret your game. What do the messages and coins represent?
It is my impression that in order to get a Bell test you need to consider at least 3 different measurement bases.
In most understandable explanations I've seen, you use i.e. 0 deg and +/- 30 deg and have to have 25% correlation between 0 and +/-30 but 75% between +/-30, etc.
Please learn about the CHSH inequality and read Bell (1981) "Bertlmann's socks". Alice and Bob each choose between two measurement directions. Alice chooses between 0 and 90 degrees; Bob between 45 and 135. IMHO it's just as easy to understand as stories where Alice and Bob each have three measurement directions.

Remember the original Bell (1964) inequality also built in an assumption of perfect anti-correlation when Alice and Bob used the same measurement settings. If you would test that assumption experimentally, you would find that it's not exactly true. So the original Bell inequality was rapidly discarded in favour of CHSH, as far as real experiments are concerned. For tutorial introductions, there are advantages and disadvantages to both versions...

The idea of a Bell game goes back a long time. Already in the 80's people understood that Bell's inequality is driven by elementary logic and probability, not by calculus. (NB Bell's original inequality is a special case of CHSH, which results by setting one of the four correlations equal to +/-1).
 
  • #25
gill1109 said:
Please learn about the CHSH inequality and read Bell (1981) "Bertlmann's socks". Alice and Bob each choose between two measurement directions. Alice chooses between 0 and 90 degrees; Bob between 45 and 135. IMHO it's just as easy to understand as stories where Alice and Bob each have three measurement directions.

Remember the original Bell (1964) inequality also built in an assumption of perfect anti-correlation when Alice and Bob used the same measurement settings. If you would test that assumption experimentally, you would find that it's not exactly true. So the original Bell inequality was rapidly discarded in favour of CHSH, as far as real experiments are concerned. For tutorial introductions, there are advantages and disadvantages to both versions...

The idea of a Bell game goes back a long time. Already in the 80's people understood that Bell's inequality is driven by elementary logic and probability, not by calculus. (NB Bell's original inequality is a special case of CHSH, which results by setting one of the four correlations equal to +/-1).
georgir said:
I am not quite sure how to interpret your game. What do the messages and coins represent?
It is my impression that in order to get a Bell test you need to consider at least 3 different measurement bases.
In most understandable explanations I've seen, you use i.e. 0 deg and +/- 30 deg and have to have 25% difference between 0 and +/-30 but 75% between +/-30, etc.
The messages represent particles, or if you prefer, physical transmission of information. The coins represent random choices of measurement settings. Each computer represents a source of particles or a measurement device (destination of particles). The computer programs running on the computers represent pieces of a local hidden variables theory. So the messages might just contain the values of all the hidden variables in the theory we are simulating.
 
  • #26
Ok, I hate the feeling that I'm missing something obvious. But apparently I am. I can't see how using only 0, 45 and 90 can work to formulate a Bell test. 0 and 90 are always perfectly anti-correlated, 45 or 135 is always 1/2 correlated to either of them... any choice between those pairs of settings seems pointless to me as it is not affecting the 1/2 correlation. Anyway, I'll try to find more time and read up on "CHSH" in the future.
 
  • #27
georgir said:
Ok, I hate the feeling that I'm missing something obvious. But apparently I am. I can't see how using only 0, 45 and 90 can work to formulate a Bell test. 0 and 90 are always perfectly anti-correlated, 45 or 135 is always 1/2 correlated to either of them... any choice between those pairs of settings seems pointless to me as it is not affecting the 1/2 correlation. Anyway, I'll try to find more time and read up on "CHSH" in the future.

Maybe we have to halve these angles. Are we talking about spin or about polarization? These are the settings which result in correlations equal to +/- 1 / sqrt 2. When we add one positive correlation and subtract three negative ones we get 4 / sqrt 2 = 2 sqrt 2 = 2.828... (the Tsirelson bound, the best that QM can do).

See page 14 of https://cds.cern.ch/record/142461/files/198009299.pdf (CERN preprint of Bell's "Bertlmann's socks")
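The arithmetic can be checked directly using the standard singlet correlation E(a, b) = -cos(a - b) for spin measurements along directions a and b, with the settings given above (a sketch, my own illustration):

```python
import math

# CHSH value for the spin singlet: E(a, b) = -cos(a - b),
# with Alice choosing 0 or 90 degrees and Bob 45 or 135.
def E(a_deg, b_deg):
    return -math.cos(math.radians(a_deg - b_deg))

# Three correlations of -1/sqrt(2) added, one of +1/sqrt(2) subtracted:
S = E(0, 45) + E(90, 45) + E(90, 135) - E(0, 135)

print(abs(S))                   # 2.828... = 2*sqrt(2), the Tsirelson bound
print((2 + math.sqrt(2)) / 4)   # 0.853... = cos^2(22.5 deg), the QM win rate
```

The game's win probability relates to S by p_win = (2 + S/2)/4 at the quantum optimum, which is where the roughly 85% figure in the thread comes from.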
 
  • #28
Ok, half those angles makes sense. Should've seen that... And I see now how QM wins your game in cos(22.5deg)^2 or 85% of the times. This is cool.
EDIT: I still don't understand what's the go/no go part though.
 
Last edited:
  • #29
georgir said:
Ok, half those angles makes sense. Should've seen that... And I see now how QM wins your game in cos(22.5deg)^2 or 85% of the times. This is cool.
EDIT: I still don't understand what's the go/no go part though.
The go/no-go part is a way to implement the strict timing requirements. We want to do measurements at the two locations A and B, very rapidly, far apart. We want to engineer that there are two parts of a two-component quantum system localized in these times and places when we do those measurements. How to arrange that? It turns out to be very difficult to get quantum systems at distant locations A and B into the good entangled state at the same *prespecified* time "to order", i.e. at a time chosen and fixed in advance. The entanglement-swapping method is just one of many tricks which finesses this difficulty.
 
  • #30
georgir said:
EDIT: I still don't understand what's the go/no go part though.

The go/no-go stuff is there to ensure that we commit to counting a pair in our final results BEFORE we've performed the measurements. That's why this experiment closes the "detection loophole".

Previous experiments left open the possibility that the measurement altered the states of the particles in such a way that we'd fail to count pairs subjected to one measurement more often than we fail to count pairs subjected to the other - in terms of Gill's games, we'd cheat by throwing out some of the trials in which we didn't make the winning play. Here, we're committed to counting a trial before we know the outcome, so that form of cheating is precluded.
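A deliberately crude caricature of that form of cheating (a real detection-loophole model must make the "no detection" decision locally at each station, which this toy does not, but the statistical effect of discarding is the same): take the best classical strategy, capped at 75%, and silently drop half the losing trials.

```python
import random

# Toy illustration of the detection loophole: a local strategy capped at
# 75% *appears* to do much better if losing trials are sometimes reported
# as "no detection" and dropped. (Illustrative numbers only.)
random.seed(0)

def trial():
    a_H, a_T, b_H, b_T = +1, +1, +1, -1       # a best local strategy (75%)
    ca, cb = random.choice("HT"), random.choice("HT")
    a = a_H if ca == "H" else a_T
    b = b_H if cb == "H" else b_T
    return (a == b) == (ca == cb == "H")      # did we win this trial?

results = []
for _ in range(100_000):
    won = trial()
    if not won and random.random() < 0.5:
        continue                               # "detector didn't fire"
    results.append(won)

print(sum(results) / len(results))  # roughly 0.857, above 0.75 with no quantum help
```

Committing to count a trial before its outcome is known, as the event-ready signal does, is precisely what makes this kind of discarding impossible.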
 
  • #31
georgir said:
EDIT: I still don't understand what's the go/no go part though.
QM predicts a violation of the CHSH inequality when the observed pairs are entangled. Ensuring that pairs are entangled is notoriously difficult, particularly as the distance between them increases. This experiment has the nice property that one can generate signals that tell us whether the particles are entangled or not (the go/no-go), just before the settings are randomly chosen and the results read out (or to be more precise, the signal is recorded outside the light cone of the read-out). This means that we can discard all the uninteresting unentangled pairs, which would otherwise just add random noise to the correlations.
 
Last edited:
  • #33
billschnieder said:
And the photons are produced by the microwave pulses hitting the crystals (https://d1o50x50snmhul.cloudfront.net/wp-content/uploads/2015/08/09_12908151-800x897.jpg)? And the post-selection is based on the photons?
No, the photons used for signalling that "the event is ready" are not produced by the microwave pulse that reads out the spin. You should read the paper, and in particular the long texts accompanying the figures.
 
  • #34
billschnieder said:
You will find that it is indeed post-processing by selection of sub-ensembles. You take four ensembles A,B,C,D of particles, with every particle in A entangled with a sibling in B, and every particle in C entangled with a sibling in D. No entanglement between AB and CD pairs. You measure B and C together and based on the joint result of B and C, you select a subset of A and D that would now be entangled with each other. It is obviously post-selection.

1. Of course there is initial entanglement of AB and CD before the entanglement swap to make AD entangled. (You said: "No entanglement between AB and CD pairs.")

2. Of course there is entanglement swapping (which doesn't even exist if you are a local realist). If you simply perform the same measurements on B and C without bringing them together for the swap, nothing happens to cause AD to be entangled. Just route them through separate beam splitters and wait for the otherwise similar signature.
billschnieder said:
You will find that it is indeed post-processing by selection of sub-ensembles. You take four ensembles A,B,C,D of particles, with every particle in A entangled with a sibling in B, and every particle in C entangled with a sibling in D. No entanglement between AB and CD pairs. You measure B and C together and based on the joint result of B and C, you select a subset of A and D that would now be entangled with each other. It is obviously post-selection.

If it were post selection, then you could get the same result by looking for the same arrival "signature" without* allowing the swapping to occur. That cannot happen. Unless the photons are brought together indistinguishably, there is no swapping. No swapping, no Bell inequality violation.

*Just bring them near to each other so the timing is the same when they go through the beam splitter and are detected. Apparently you don't see that the swapping of B&C is causing the entanglement, and yet you acknowledge that the ready events are the ones that lead to Bell inequality violation.
 
  • #35
Heinera said:
This experiment has the nice property that one can generate signals that tells us whether the particles are entangled or not (the go/no go), just before the settings are randomly chosen and the results read out (or to be more precise, the signal is recorded outside the lightcone of the read out).

As I read it: the settings are selected before the swapping is done and the measurement is performed around the time the swapping is done. And as you say, the measurement is performed outside the causal light cone of the swapping, and the swapping is performed outside the causal light cone of the measurement. So neither can affect the other.
 
