- #141
vanesch
Staff Emeritus
Science Advisor
Gold Member
nightlight said: You're trying to make the argument universal, which it is not. It is merely addressing an overlooked effect in the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters).
I'm looking more into Santos and co.'s articles. It's a slow read, but I'm working my way up... so patience :-) BTW, thanks for the lecture notes, they look great!
You're saying that, at some undefined stage of the triggering process of detector DA, the dynamical evolution of fragment B will stop; fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will be resumed.
Not at all. My view (which I have expressed here already a few times in other threads) is quite different, and I don't really think you need a "collapse at a distance" at all - I'm in fact quite a fan of the decoherence program. You just get interference of measurement results when they are compared by the single observer who gets hold of both measurements in order to calculate the correlation. This means that macroscopic systems can be in a superposition, but that's no problem; it's just continuing unitary evolution (this is the essence of the decoherence program). But the point was not MY view :-)
Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).
Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try," so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states, which superpose all photon number states with coefficient magnitudes equal to the square roots of the Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use a sharp number state as the input, and thus show a sharp number state in the output).
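The "square roots of Poissonian probabilities" remark is easy to check numerically from the standard coherent-state expansion |α⟩ = e^(-|α|²/2) Σ αⁿ/√(n!) |n⟩. A minimal sketch (the value of α is an arbitrary choice, taken real for simplicity):

```python
from math import exp, factorial, sqrt

alpha = 1.3              # coherent-state amplitude (arbitrary, real for simplicity)
n_mean = alpha ** 2      # mean photon number |alpha|^2

for n in range(6):
    # coefficient of |n> in the coherent state |alpha>
    c_n = exp(-n_mean / 2) * alpha ** n / sqrt(factorial(n))
    # Poisson probability of n photons at mean n_mean
    p_n = n_mean ** n * exp(-n_mean) / factorial(n)
    print(n, c_n ** 2, p_n)   # the two columns agree: |c_n|^2 = P(n_mean, n)
```

So the photon-number statistics of the pump (and hence of the input to the crystal) are exactly Poissonian, which is the point used below.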
To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers. A single multiphoton capable detector with no dead time would show this same distribution of k for a given average rate n.
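As a sanity check on this splitter-tree setup, here is a small Monte Carlo sketch (stdlib only; the detector/splitter model is an idealized assumption, not a description of any real apparatus). Each try draws a Poissonian photon number; each photon is routed through L levels of ideal 50/50 splitters, i.e. lands on one of ND = 2^L detectors uniformly at random; we then count how many distinct detectors fire. For large ND the distribution of the trigger count k approaches P(n,k):

```python
import random
from math import exp, factorial

def poisson_draw(rng, lam):
    """Sample a Poisson variate via Knuth's product-of-uniforms method."""
    limit = exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def trigger_counts(n, L, tries=200_000, seed=1):
    """Fraction of tries in which exactly k of the 2**L detectors fire."""
    rng = random.Random(seed)
    ND = 2 ** L
    freq = [0] * 8
    for _ in range(tries):
        photons = poisson_draw(rng, n)
        # L levels of 50/50 splitters = uniformly random final sub-path
        fired = {rng.randrange(ND) for _ in range(photons)}
        if len(fired) < 8:
            freq[len(fired)] += 1
    return [f / tries for f in freq]

n, L = 1.0, 6                       # mean n=1 trigger per try, 64 detectors
empirical = trigger_counts(n, L)
theory = [n ** k * exp(-n) / factorial(k) for k in range(8)]
for k in range(4):
    print(k, round(empirical[k], 3), round(theory[k], 3))
```

The agreement is approximate for finite ND (two photons occasionally land on the same detector), and becomes exact in the ND → ∞ limit, matching the single perfect multiphoton detector.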
Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%. Thus, the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".
Your suggestion was to lower (e.g. via adjustments of detector thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get 0.1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.
Thus the probability of multiple ND triggers is now only about 0.5%, i.e. we have roughly 19 times more single triggers than multiple triggers, while before, for n=1, we had only 37/26 ≈ 1.4 times more single triggers than multiple triggers. It appears we have greatly improved the "exclusivity." By lowering n further we can make this ratio as large as we wish, so the counts will appear as "exclusive" as we wish. But does this kind of low-intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave-packet fragments at the other ND-1 detectors as soon as one detector triggers?
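These ratios follow directly from the formula P(n,k) = n^k e^(-n)/k! already quoted above; a minimal stdlib computation gives the exact, un-rounded values:

```python
from math import exp, factorial

def P(n, k):
    """Poisson probability of exactly k triggers at mean trigger number n."""
    return n ** k * exp(-n) / factorial(k)

for n in (1.0, 0.1):
    p_none = P(n, 0)                    # no detector triggers
    p_single = P(n, 1)                  # exactly one detector triggers
    p_multi = 1.0 - p_none - p_single   # two or more detectors trigger
    print(f"n={n}: none={p_none:.1%} single={p_single:.1%} "
          f"multi={p_multi:.1%} single/multi={p_single / p_multi:.1f}")
```

This prints a single/multi ratio of about 1.4 for n=1 and about 19 for n=0.1 (with P(k≥2) ≈ 0.5% in the latter case), confirming that the apparent "exclusivity" grows without bound as n is lowered.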
Of course not. Let's look at what happens under the assumption that each of the ND detectors triggers via its own Poissonian, entirely independently of the others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of the ND detectors will be P(m,k), where m=n/ND is the average trigger rate of each detector. Denote by p0=P(m,k=0) the probability that one (specific) detector will not trigger; thus p0=exp(-m). The probability that this particular detector triggers at all (indicating 1 or more "photons") is then p1=1-p0=1-exp(-m). In your high-"exclusivity" (i.e. low-intensity) limit n->0, we will have m<<1 and p0~1-m, p1~m.
The probability that none of the ND detectors triggers, call it D(0), is thus D(0)=p0^ND=exp(-m*ND)=exp(-n), which is, as expected, the same as the no-trigger probability of a single perfect multiphoton (square-law Poissonian) detector capturing all of "photon 2." Since we can select k detectors in C[ND,k] ways (C[] is the binomial coefficient), the probability of exactly k detectors triggering is D(k)=p1^k*p0^(ND-k)*C[ND,k], a binomial distribution with average number of triggers p1*ND. In the low-intensity limit (n->0) and for large ND (corresponding to perfect multiphoton resolution), D(k) becomes (using the Stirling approximation and p1*ND~m*ND=n) precisely the Poisson distribution P(n,k). Therefore the low-intensity exclusivity you keep bringing up is trivial, since it is precisely what independent triggers of each detector predict, no matter how you divide and combine the detectors (it is, after all, a basic property of the Poissonian distribution).
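The convergence claimed here is easy to verify numerically: with p0 = exp(-n/ND) and p1 = 1 - p0 as defined above, the binomial D(k) approaches the Poissonian P(n,k) as ND grows (a sketch using only the formulas in the text):

```python
from math import comb, exp, factorial

def poisson(n, k):
    """Poisson probability P(n,k) = n^k exp(-n)/k!."""
    return n ** k * exp(-n) / factorial(k)

def D(k, n, ND):
    """Probability that exactly k of ND independent Poissonian detectors
    trigger, each seeing the mean intensity m = n/ND."""
    p0 = exp(-n / ND)      # a given detector stays silent
    p1 = 1.0 - p0          # a given detector triggers (1 or more "photons")
    return comb(ND, k) * p1 ** k * p0 ** (ND - k)

n = 1.0
for ND in (2, 8, 1024):
    print(ND, [round(D(k, n, ND), 4) for k in range(4)])
print("Poisson", [round(poisson(n, k), 4) for k in range(4)])
# Note D(0) = exp(-n) exactly for every ND; the other D(k) converge to P(n,k).
```

So the full splitter tree with independent detectors reproduces, detector count by detector count, the statistics of one ideal multiphoton detector, which is exactly the triviality being argued.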
You're perfectly right, and I acknowledged that already a while ago when I said that there's indeed no way to distinguish "single photon" events that way. What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but it indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time, and with perfect detectors WILL be in one place at a time. It would correspond to the two 511 keV photons in positron annihilation, for example.
I admit that my views are maybe a bit naive for opticians: my background is in particle physics, and currently I work with thermal neutrons, which come nicely in low-intensity Poissonian streams after interference all the way down to the detection spot. So clicks are marbles :-)) There are of course differences with optics: first of all, a reactor rarely emits correlated neutron pairs :-), but on the other hand, I have all the interference stuff (you can have amazing correlation lengths with neutrons!), and the one-click-one-particle detection (with 98% efficiency or more if you want), with background ~ 1 click per hour.
This does not mean that nature somehow conspires to thwart non-locality through some obscure loopholes. It simply means that a particular experimental design has overlooked some effect, and it is all the more likely that the experiment designer will overlook more obscure effects.
In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.
I agree with you here concerning the scientific attitude to adopt, and apart from being a stimulus for learning more quantum optics, it is the main motivation to continue this discussion :-) To me, these experiments don't exclude anything, but they confirm beautifully the quantum predictions. So it is very well possible that completely different theories will have similar predictions; it is "sufficient" to work them out. However, if I were to advise a student (but I won't, because it is not my job) on whether to take that path or not, I'd strongly advise against it, because there's so much work to do first: you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great. On the other hand, we have a beautifully working theory which explains most if not all of it. So it is probably more fruitful to go further along the successful path than to err "where no man has gone before." On the other hand, for a retired professor, why not play with these things :-) I myself wouldn't dare, for the moment: I hope to make more "standard" contributions, and I'm perfectly happy with quantum theory as it stands now - even though I think it isn't the last word, and we will have another theory, 500 years from now. But I can make sense of it, it works great, and that's what matters. Which doesn't mean that I don't like challenges like the one you're proposing :-)
cheers,
Patrick.