georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".
It's a technical term. It has been around for more than 20 years. http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.71.4287

Nugatory said:
Well, it is a very special and unusual sort of "detection". We start with two photon-electron entangled pairs, and if we detect (that is, interact with) the photons in just the right way, we end up with the electrons entangled instead. When this happens, we say that we've "swapped" the photon-electron entanglement that we had for the electron-electron entanglement that we wanted. And as Gill says, the term has been used that way for decades. Googling for "Barrett-Kok entanglement swapping" will bring up a whole bunch of fairly technical stuff on the techniques used in this experiment.

If we had been talking about correlation instead of entanglement, it would not have been weird at all. Suppose particles A and B have equal and opposite momenta whose common magnitude is random. Suppose that, completely independently of this, particles C and D also have equal and opposite, randomly varying, momenta. Now catch particles B and C, and if their momenta are equal and opposite, say "go". It is no surprise that particles A and D have highly correlated momenta if we only look at them both on those occasions when we got the "go" signal. The extraordinary (and beautiful) thing is how the mathematics of quantum entanglement in Hilbert space works in just the same way...
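This classical analogy is easy to simulate. Here is a minimal sketch (the uniform momentum distribution and the selection tolerance are illustrative choices, not taken from any experiment): two independent pairs with equal and opposite momenta, post-selected on B and C being (nearly) equal and opposite, leave A and D almost perfectly (anti-)correlated.

[code]
import random

def classical_swap_demo(n_trials=200_000, tol=0.02):
    """Gill's analogy: (A,B) and (C,D) are independent pairs with equal
    and opposite momenta; post-select on B and C being (nearly) equal
    and opposite, then look at the correlation between A and D."""
    a_vals, d_vals = [], []
    for _ in range(n_trials):
        a = random.uniform(-1, 1)      # momentum of A; B carries -a
        c = random.uniform(-1, 1)      # momentum of C; D carries -c
        b, d = -a, -c
        if abs(b + c) < tol:           # the "go" signal at the middle station
            a_vals.append(a)
            d_vals.append(d)
    n = len(a_vals)                    # roughly 1% of trials survive
    mean_a, mean_d = sum(a_vals) / n, sum(d_vals) / n
    cov = sum((x - mean_a) * (y - mean_d) for x, y in zip(a_vals, d_vals)) / n
    var_a = sum((x - mean_a) ** 2 for x in a_vals) / n
    var_d = sum((y - mean_d) ** 2 for y in d_vals) / n
    return n, cov / (var_a * var_d) ** 0.5

n, r = classical_swap_demo()
print(f"kept {n} trials; corr(A, D) = {r:.3f}")   # close to -1
[/code]

No signalling is involved; the "go" events simply pick out the sub-ensemble in which A and D were opposite all along, which is exactly the intuition being transferred to the entanglement case.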
akhmeteli said:
First, it was noted in another thread that the probability $p$ = 0.019/0.039 is not very impressive.

I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably (see second link). That's what I have read, and today there was another piece discussing the experiment:

Physicists claim 'loophole-free' Bell-violation experiment
http://physicsworld.com/cws/article...claim-loophole-free-bell-violation-experiment

Quantum weirdness proved real in first loophole-free experiment
https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/

"There are still a few ways to quibble with the result. The experiment was so tough that the p-value – a measure of statistical significance – was relatively high for work in physics. Other sciences like biology normally accept a p-value below 5 per cent as a significant result, but physicists tend to insist on values millions of times smaller, meaning the result is more statistically sound. Hanson's group reports a p-value of around 4 per cent, just below that higher threshold. That isn't too concerning, says Zeilinger. 'I expect they have improved the experiment, and by the time it is published they'll have better data,' he says. 'There is no doubt it will withstand scrutiny.'"

bohm2 said:
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably (see second link).

As long as the experimenters only have $N = 245$ pairs of measurements and $S = 2.4$, that p-value is not going to go down. Here is a simple calculation which explains why. An empirical correlation between binary variables, based on a random sample of size $N$, has a variance of $(1 - \rho^2)/N$; the worst case is $\rho = 0$, with variance $1/N$. Here we are looking at four empirical correlations, each approximately equal to $\pm 0.6$. So if we believe that we have four random samples of pairs of binary outcomes, then each empirical correlation has a variance of about $0.64/N$, where $N$ is the number of pairs of observations for each pair of settings. If the four samples are statistically independent, the variance of $S$ is about $4 \times 0.64/N$ with $N = 245/4$. This gives a variance of 0.042 and a standard error of 0.2. We observed $S = 2.4$, but our null hypothesis says that its mean value is no larger than 2. Since $N$ is large enough that the normal approximation is not bad, we have a $0.4/0.2 = 2$ standard deviation departure from local realism. The probability of this occurring by chance is about 0.025.
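That arithmetic is easy to verify numerically. A quick sketch (it takes the ±0.6 correlations and the even split of the 245 trials over the four setting pairs from the post above, not from the paper's raw data):

[code]
from math import erf, sqrt

rho = 0.6                    # approximate magnitude of each empirical correlation
n_per_setting = 245 / 4      # trials per setting pair, assuming an even split

var_corr = (1 - rho**2) / n_per_setting   # variance of one empirical correlation
var_S = 4 * var_corr                      # S sums four independent correlations
se_S = sqrt(var_S)
z = (2.4 - 2.0) / se_S                    # observed S minus the local-realist bound
p = 0.5 * (1 - erf(z / sqrt(2)))          # one-sided normal tail probability

print(f"var(S) = {var_S:.3f}, SE = {se_S:.2f}, z = {z:.2f}, p = {p:.3f}")
# var(S) = 0.042, SE = 0.20, z = 1.96, p = 0.025
[/code]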
bohm2 said:
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably.

Yes, I guess that this was the maximal p-value they felt they could get away with and still establish priority (they are in a race here), and that they are still running the experiment to achieve p-values at the level that is now regarded as the norm in physics (4-5 sigmas).
Michel_vdg said:
Perhaps a stupid question, but do they also do these kinds of tests with un-entangled particles for calibration? And if so, how different are the results from entangled particles?

Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, as in clinical trials when testing the effectiveness of medications or devices?

Michel_vdg said:
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, as in clinical trials when testing the effectiveness of medications or devices?

Nugatory said:
Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.

Yes, the equipment needs to be calibrated, and that's also the case in medicine, but I thought that once it was all calibrated they would also do a non-entangled test run to lay alongside the 245 pairs of measurements they have now realized.
DrChinese said:
You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of (i) photons arriving in different ports of the beam-splitter, and (ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding!

Do I miss something? Whether the photons are indistinguishable or not is measured.

billschnieder said:
And the photons are produced by the microwave pulses hitting the crystals (https://d1o50x50snmhul.cloudfront.net/wp-content/uploads/2015/08/09_12908151-800x897.jpg)? And the post-selection is based on the photons?

Yes, post-selection is based on the photons. But these photons are emitted earlier, before the measurement settings are generated by the RNGs.

zonde said:
Do I miss something? Whether the photons are indistinguishable or not is measured.

In the paper, about Fig. 3b, they write:
"(b) Time-resolved two-photon quantum interference signal. When the NV centres at A and B emit indistinguishable photons (orange), the probability of a coincident detection of two photons, one in each output arm of the beam-splitter at C, is expected to vanish. The observed contrast between the case of indistinguishable versus the case of distinguishable photons of 3 versus 28 events in the central peak yields a visibility of (90±6)% (Supplementary Information)."
gill1109 said:
This gives a variance of 0.042 and a standard error of 0.2. We observed S = 2.4, but our null hypothesis says that its mean value is no larger than 2. Since N is large enough that the normal approximation is not bad, we have a 0.4/0.2 = 2 standard deviation departure from local realism.

Perhaps the null hypothesis is a bit pessimistic. S is a sample statistic, and it could have an expectation much closer to zero than 2 under a no-correlation null hypothesis. Even adding the SD of this statistic to the mix could result in a much higher confidence level. But it could also go down. One approach is to work out the randomization distribution of S (actually, S without the modulus) by randomizing the data between bins. This will give an estimate of the mean and variance of the statistic.

Mentz114 said:
Perhaps the null hypothesis is a bit pessimistic. S is a sample statistic, and it could have an expectation much closer to zero than 2 under a no-correlation null hypothesis. Even adding the SD of this statistic to the mix could result in a much higher confidence level. But it could also go down.

They did also compute a randomization-based, possibly conservative, p-value. It was 0.039.
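For anyone who wants to try this, here is a minimal sketch of such a randomization test (synthetic trial format and a plain shuffle across the four setting bins; this illustrates the idea, not the analysis actually used in the paper):

[code]
import random

def chsh_s(trials):
    """CHSH S (without the modulus) from (a, b, x, y) trials, where
    a, b are the binary settings and x, y are the +/-1 outcomes.
    Assumes every setting pair occurs at least once."""
    sums = {(a, b): [0.0, 0] for a in (0, 1) for b in (0, 1)}
    for a, b, x, y in trials:
        sums[(a, b)][0] += x * y
        sums[(a, b)][1] += 1
    e = {ab: s / n for ab, (s, n) in sums.items()}
    return e[(0, 0)] + e[(0, 1)] + e[(1, 0)] - e[(1, 1)]

def randomization_distribution(trials, n_resamples=10_000):
    """Null distribution of S obtained by shuffling the outcome pairs
    across the setting bins; this destroys any dependence of outcomes
    on settings while keeping the outcomes themselves fixed."""
    settings = [(a, b) for a, b, _, _ in trials]
    outcomes = [(x, y) for _, _, x, y in trials]
    dist = []
    for _ in range(n_resamples):
        random.shuffle(outcomes)
        dist.append(chsh_s([(a, b, x, y)
                            for (a, b), (x, y) in zip(settings, outcomes)]))
    return dist
[/code]

The randomization p-value is then the fraction of resampled S values at least as large as the observed one, and the mean and SD of the resampled distribution give the null expectation Mentz114 is asking about.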
zonde said:
Do I miss something? Whether the photons are indistinguishable or not is measured.

You didn't miss anything. But they are measured indirectly to determine indistinguishability, just not at C. At C they are made to overlap. This is done in the setup. That makes them indistinguishable, and causes the entanglement swap. The measurement is later performed at A and B by observing a violation of a Bell inequality. That demonstrates the entanglement swap occurred, in accordance with theory.
DrChinese said:
Is entanglement swapping simply post-selection? From the paper:

"We generate entanglement between the two distant spins by entanglement swapping in the Barrett-Kok scheme using a third location C (roughly midway between A and B, see Fig. 1e). First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state..."

You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of (i) photons arriving in different ports of the beam-splitter, and (ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: the photons are overlapping and indistinguishable. You could detect without the swapping by bringing the photons together in a manner that is NOT indistinguishable, and there would be no swapping - for example, if their paths do not overlap, or their frequencies are different, etc.

By calling this POST-SELECTION you are really saying that you think a local realistic explanation is viable. But if you are asserting local realism as a candidate explanation, obviously nothing that occurs at C can affect the outcomes at A and B; it occurs too late! So making the photons registered at C overlap and be indistinguishable cannot make a difference to the sub-sample. But it does, because if they are distinguishable no Bell inequality will be violated.

So that is a contradiction. Don't call it post-selection unless you think that the overlap can be done away with and you would still get a sample that shows the same statistics as when the photons are indistinguishable pairs.

Detection at C distinguishes different entanglement states.
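For readers following the paper quote above, the heralding step can be sketched in two lines (a standard textbook rendering of the Barrett-Kok idea, not the paper's full treatment). Each node prepares a spin-photon state [itex]|\Psi\rangle_i = \tfrac{1}{\sqrt{2}}\left(|\uparrow\rangle_i |e\rangle_i + |\downarrow\rangle_i |l\rangle_i\right)[/itex], with e/l the early/late emission time bins, so the joint state is

[tex]
|\Psi\rangle_A \otimes |\Psi\rangle_B = \tfrac{1}{2}\left( |\uparrow\uparrow\rangle|ee\rangle + |\uparrow\downarrow\rangle|el\rangle + |\downarrow\uparrow\rangle|le\rangle + |\downarrow\downarrow\rangle|ll\rangle \right).
[/tex]

Detecting one early and one late photon keeps only the middle two terms, and because the beam-splitter has erased the information about which node emitted which photon, the spins are projected onto [itex]\tfrac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle \pm |\downarrow\uparrow\rangle\right)[/itex], the sign depending on which detector combination fired. If the photons are distinguishable in any degree of freedom, that which-path information survives and the heralded spin state is a mixture rather than an entangled superposition - which is the physical content of DrChinese's point.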
DirkMan said:
Since they used a quantum random number generator instead of human random decisions, can no conclusions be drawn regarding this aspect? Something like... we have as much "free will" as a quantum random number generator? They do mention "free will" in the arXiv paper.

IMHO, it would have been better to choose the settings by a cascade of classical (pseudo-)random number generators, or even just to read them from a big file of pre-generated settings. But they need to get these random settings very fast, and it appears that quantum RNGs are faster than state-of-the-art classical pseudo-RNGs.
zonde said:
But I still don't get your argument against calling detection at C a post-selection.

With post-selection one usually means selection made after the experiment has been done, using knowledge of the results from both wings of the experiment.

zonde said:
Just to be sure about the details, I looked at the paper about their previous experiment - http://arxiv.org/abs/1212.6136
1. They do a lot of things to tune the setups at A and B and make the photons indistinguishable.
2. The success of the tuning is verified by observing HOM interference at C.
3. But I still don't get your argument against calling detection at C a post-selection.
4. And without singling out (post-selecting) just one of those entangled states, there is no Bell inequality violation.
And the action that is occurring at C is interference between two photons (photon modes), so that [itex]|\psi^-\rangle[/itex] and [itex]|\psi^+\rangle[/itex] can be told apart.
Heinera said:
The above loop should run until the number of selected pairs is 1000 or more.

And why is 245 not enough?

billschnieder said:
And why is 245 not enough?

Because we want to minimize the impact of flukes. You understand this, I'm sure.
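Plugging different sample sizes into gill1109's variance estimate from earlier in the thread makes the point concrete (a rough sketch; it assumes the correlations would stay near ±0.6 as more pairs are collected):

[code]
from math import sqrt

def se_of_S(n_total, rho=0.6):
    """Standard error of S for n_total trials split evenly over the
    four setting pairs, using var(corr) = (1 - rho^2) / n per bin."""
    return sqrt(4 * (1 - rho**2) / (n_total / 4))

for n in (245, 500, 1000):
    z = (2.4 - 2.0) / se_of_S(n)
    print(f"N = {n:4d}: SE = {se_of_S(n):.3f}, S = 2.4 is {z:.1f} sigma above 2")
# N = 245: about 2 sigma; N = 1000: about 4 sigma (if S stayed at 2.4)
[/code]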
DrChinese said:
Yes, there is post-selection - no quibble about that. But if swapping were not occurring, that sample would not tell us anything. The local realist denies there is swapping that affects the resulting sample. They would say indistinguishability and overlap do not matter. Those things are ingredients for the swapping but NOT for the post-selection.

Without post-selection, there will be no swapping. You have to understand that swapping is precisely the process of selecting a sub-ensemble which is correlated in a specific way from a larger ensemble which is not correlated.

billschnieder said:
Without post-selection, there will be no swapping. You have to understand that swapping is precisely the process of selecting a sub-ensemble which is correlated in a specific way from a larger ensemble which is not correlated.

Whether there is swapping or not does not really matter here. We could equally well have a theory in which we just happened to produce the two electrons in an entangled state in the first place, and then the measurement of the photons at C would simply confirm this entanglement, with no swapping taking place. The math would be the same. The only thing that matters for an LHV model is that this confirmation is outside the light cone of the experiment performed. So in no way can the settings or the results of the experiment influence the confirmation, nor vice versa.

Heinera said:
Whether there is swapping or not does not really matter here.

It matters to have a proper understanding of what entanglement swapping entails.

Heinera said:
The only thing that matters for an LHV model is that this confirmation is outside the light cone of the experiment performed. So in no way can the settings or the results of the experiment influence the confirmation, nor vice versa.

It matters also that filtering results after the fact, using information from both stations, introduces "nonlocality". The "good" Z1 and Z2 ensembles are therefore nonlocally generated.

billschnieder said:
It matters also that filtering results after the fact, using information from both stations, introduces "nonlocality". The "good" Z1 and Z2 ensembles are therefore nonlocally generated.

But the whole point here is that in the experiment, the results are not filtered after the fact. They are actually filtered prior to the fact (i.e., prior to the performance of the experiment). See my post on the revised Quantum Randi Challenge earlier in this thread.
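The ordering Heinera is pointing at can be made explicit with a toy event-ready trial (all numbers and outcome rules below are placeholders; only the order of operations matters): the herald at C either succeeds or fails before the setting RNGs at A and B have produced anything, so the selection cannot use knowledge of settings or outcomes.

[code]
import random

def run_trial():
    """One toy trial of an event-ready Bell test. The key point is the
    order of operations: heralding at C happens before the settings
    at A and B are generated, so discarding unheralded trials cannot
    depend on the settings or on the measurement results."""
    heralded = random.random() < 0.001          # rare success at C (toy rate)
    if not heralded:
        return None                             # discarded before settings exist
    a, b = random.randint(0, 1), random.randint(0, 1)       # settings from the RNGs
    x, y = random.choice([-1, 1]), random.choice([-1, 1])   # outcomes (toy rule)
    return (a, b, x, y)

kept = [t for t in (run_trial() for _ in range(1_000_000)) if t is not None]
print(f"{len(kept)} event-ready trials kept out of 1,000,000")
[/code]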