Photon "Wave Collapse" Experiment (Yeah sure; AJP Sep 2004, Thorn...)

  • #1
nightlight

There was a recent paper claiming to demonstrate the indivisibility of
photons in a beam splitter experiment (the remote "wave collapse"
upon "measurement" of "photon" in one of the detectors).

1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).
http://marcus.whitman.edu/~beckmk/QM/

The authors claim to violate "classicality" by 377 standard deviations,
which is by far the largest violation ever for this type of experiment.
The setup is an archetype of quantum mystery: A single photon arrives at
a 50:50 beam splitter. One could verify that the two photon wave packet
branches (after the beam splitter) interfere nearly perfectly, yet if
one places a photodetector in each path, only one of the two detectors
will trigger in each try. As Feynman put it - "In reality, it contains
the only mystery." How does "it" do it? The answer is -- "it" doesn't
do "it" and the mysterious appearance is no more than a magic trick.

Unlike the earlier well known variants of this experiment ([2],[3]),
the present one describes the setup in sufficient detail that the
sleight of hand can be spotted. The setup is sketched below, but
you should get the paper since I will refer to figure and formula
numbers there.

Code:
     G photon Source   TR photon    PBS  T 
DG <---------- [PDC] ----------------\----------> DT
                                     |
                                     | R
                                     V
                                     DR
The PDC Source generates two photons, G and TR. The G photon is used
as a "gate" photon, meaning that the trigger of its detector DG defines
the time windows (of 2.5ns centered around the DG trigger) in which
to count the events on the detectors DT and DR, which detect the
photons in the Transmitted and Reflected beams (wave packets). The
"quantum" effect they wish to show is that after a detector, say,
DT triggers, the other detector DR will not trigger. That would
demonstrate the "indivisibility" of the photon and a "collapse"
of the remote wave packet at DR location as soon as the photon
was "found" at the location DT.

In order to quantify the violation of classicality, [1] defines
a coefficient g2 which is a normalized probability of joint
trigger of DT and DR (within the windows defined by DG trigger)
and is given via:

... g2 = P(GTR) / [P(GT)*P(GR)] ... (1)

or in terms of the (ideal) counts as:

... g2 = N(GTR)*N(G) / [N(GT)*N(GR)] ... (2)

where N(GTR) is the count of triple triggers, N(GT) the count of
double triggers of DG and DT, etc. The classical prediction is that g2>=1
(the equality g2=1 would hold for a perfectly steady laser source,
the "coherent light"). This inequality is eq (AJP.3). The quantum
"prediction" (eq AJP.8,13) is that for a single photon state TR,
the g2=0. The paper claims they obtained g2=0.0177 +/- 0.0026.
The accidental (background) coincidences alone would yield
g2a=0.0164, so that the g2-g2a is just 0.0013, well within the
std deviation 0.0026 from the quantum prediction. Perfection.
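
Just to make the arithmetic of (2) concrete, here is a minimal sketch (the
counts below are made-up placeholders, not the paper's actual numbers):

Code:
# Minimal sketch of the g2 estimator in eq. (2); the counts are hypothetical
# placeholders, not values measured in the AJP paper.
def g2_from_counts(N_G, N_GT, N_GR, N_GTR):
    # g2 = N(GTR)*N(G) / [N(GT)*N(GR)], all counts taken in DG-gated windows
    return N_GTR * N_G / (N_GT * N_GR)

print(g2_from_counts(N_G=100_000, N_GT=4_000, N_GR=4_000, N_GTR=3))  # ~0.019
# For comparison, the paper reports g2 = 0.0177 +/- 0.0026 against an
# accidentals-only level of g2a = 0.0164, i.e. an excess of only 0.0013.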

The two tiny, little clouds in this paradise of experimental
and theoretical perfection are:

a) while there is a QED prediction of g2=0, it is not for this
kind of detection setup (that's a separate story which we could
[post=529314]pursue later[/post]), and

b) the experiment doesn't show that (2) yields g2=0, since they
didn't measure at all the actual triple coincidence N(GTR) but
just a small part of it.

Let's see what the sleight of hand in (b) was. We can look at the
coincident detection scheme as sampling of the EM fields T and R,
where the sampling time windows are defined by the triggers of
the gate DG. Here we had a sampling window of 2.5 ns, and they measured
around 100,000 counts/sec on DG (and 8000 c/s on DT+DR). Thus
the sampled EM field represents just 0.25 ms out of each second.
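
That 0.25 ms figure is just the product of the DG rate and the window
width, as a quick check:

Code:
# Sanity check of the sampling duty cycle quoted above
gate_rate = 100_000        # DG triggers per second
window = 2.5e-9            # 2.5 ns sampling window per DG trigger
print(gate_rate * window)  # 2.5e-4 s, i.e. 0.25 ms of EM field sampled per second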

The classical prediction g2>=1 applies for either continuous or
sampled measurements, provided the samples of GT and GR are
taken from the same position in the EM stream. For the coincidences
GT and GR, [1] does seem to select the properly matching sampling
windows since they tuned the GT and GR coincidence units (TAC/SCA,
see AJP p.1215-1216, sect C, Fig 5) to maximize each rate (unfortunately
they don't give any actual counts used for computing their final results
via (2) in AJP, Table I, but we'll grant them this).

Now, one would expect that, having obtained the sequence of properly
aligned GT and GR samples (say, two bit arrays of 0's and 1's of
length N(G)), one would extract the triple coincidence count
N(GTR) by simply adding 1 to N(GTR) whenever both GT and
GR contain a 1 at the same position in the bit arrays.
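
In code, that expected counting procedure would be nothing more than an
AND over the two aligned sample streams (a toy sketch of what one would
expect, not the paper's actual electronics):

Code:
# Toy sketch: counting triples from already-aligned GT and GR samples.
# gt[i], gr[i] = 1 if DT (resp. DR) fired inside the window set by the i-th DG trigger.
def count_triples(gt, gr):
    assert len(gt) == len(gr)          # both sequences have length N(G)
    return sum(1 for t, r in zip(gt, gr) if t and r)

gt = [1, 0, 1, 0, 0, 1]                # made-up data
gr = [0, 0, 1, 1, 0, 0]
print(count_triples(gt, gr))           # -> 1: only the third window has both DT and DR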

But no, that's not what [1] does. For "mysterious" reasons
they add a third, separate coincidence unit (AJP.Fig 5; GTR)
which they tune on its own to extract its own separate sample
of the EM fields. That alone is a gigantic loophole, a secret pocket
in the magician's sleeve. If the sampling windows GT and GR for
the new GTR unit are different enough from the earlier GT/GR
windows (e.g. shifted by just 1.25 ns in opposite directions),
the classical prediction via (2) will also be g2=0 (just background;
see the toy simulation after this paragraph).
And as they're about to finish, they pause at the door with an
'and by the way' look and say "There is one last trick used in setting
up this threefold coincidence unit," where they explain how they
switch optical fibers, from DR to DG, then tune to G+T to stand
in for R+T coincidences, because they say "we expect an absence
of coincidences between T and R" (p 1216). Well, funny they
should mention it, since after all this, somehow, I am also
beginning to expect the absence of any GTR coincidences.
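
To see how much the third unit's window placement matters, here is a toy
classical simulation (entirely my own construction, with made-up rates):
with the GTR windows matched to the GT/GR ones the triples give g2 ~ 1,
while shifting the GTR windows off the pulses leaves only background and
drives g2 via (2) toward 0.

Code:
import random

# Toy classical model (my own, made-up numbers): in each DG-gated window a classical
# pulse is split 50:50, so DT and DR fire independently with probability p; a small
# background rate b can fire in any window position.
random.seed(0)
N_G, p, b = 200_000, 0.08, 0.01

def g2_toy(gtr_aligned):
    n_gt = n_gr = n_gtr = 0
    for _ in range(N_G):
        t = (random.random() < p) or (random.random() < b)   # DT in its GT window
        r = (random.random() < p) or (random.random() < b)   # DR in its GR window
        n_gt += t
        n_gr += r
        if gtr_aligned:
            n_gtr += t and r            # GTR unit samples the same windows as GT/GR
        else:
            # GTR unit samples shifted windows: it misses the pulses, sees only background
            n_gtr += (random.random() < b) and (random.random() < b)
    return n_gtr * N_G / (n_gt * n_gr)

print(g2_toy(True))    # ~1.0: classical g2 >= 1 with properly matched windows
print(g2_toy(False))   # ~0.01: a "violation" manufactured purely by window placement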

Also, unlike the GT and GR units, which operated in START/STOP
TAC mode, the GTR unit operated in START GATE mode, where a
separate pulse from DG is required to enable the acceptance
of DT and DR signals (here the DT was used as START and DR as
STOP input to TAC/SCA, see Fig 5, while in the other two units
G was used for START and T or R for STOP). It surely is getting
curiouser and curiouser, all these seeming redundancies with all
their little differences.


The experiment [3] also had a 3rd GTR unit with its own tuning,
but they didn't give any details at all. The AJP authors [1] give
only a qualitative sketch, and no figures on the T and R sampling
window positions for the GTR unit (e.g. relative to those from the GT
and GR units) were available from the chief author. Since the classical
prediction is sensitive to the sampling window positions, and can
easily produce via (2) anything from g2=0 to g2>1 just by changing
the GTR windows, this is a critical bit of data the experimenters
should provide, or at least mention how they checked it and what
the windows were.

Of course, after that section, I was about ready to drop it
as yet another phony 'quantum magic show'. Then I noticed
at the end they give part numbers for their TAC/SCA units
(p1218, Appendix C): http://www.ortec-online.com/electronics/tac/567.htm . The data sheet for
the model 567 lists the required delay of 10ns for the START
(which was here DT signal, see AJP.Fig 5) from the START GATE
signal (which was here DG signal) in order for START to get
accepted. But AJP Fig 5, and several places in the
text, give their delay line between DG and DT as 6 ns. That
means that when DG triggers at t0, DT will trigger (if at all)
6 ns later (+/- 1.25 ns), but the TAC will ignore it
since it won't be ready for another 4 ns. Then,
at t0+10ns the TAC is finally enabled, but without a START
no event will be registered. The "GTR" coincidence rate
will thus be close to the accidental background (slightly above,
since if the real T doesn't trip DT and a subsequent
background DT hit arrives after t0+10ns, then the DR trigger,
which is now more likely than background, at around t0+12ns
will allow the registration). And that is exactly the g2 they
claim (other papers claim much smaller violations and
only on subtracted data, not the raw data, which is how
it should be: the "nonclassicality" as a Quantum Optics
term of art, not anything nonclassical for real).
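
The whole timing conflict fits in a few lines (a sketch using only the
numbers quoted above: the ORTEC 567's 10 ns START-after-START-GATE
requirement and the paper's stated 6 ns DG-to-DT delay):

Code:
# Sketch of the timing conflict described above (numbers as quoted in the text)
START_GATE_SETUP_NS = 10.0   # ORTEC 567: START accepted only >= 10 ns after START GATE
DG_TO_DT_DELAY_NS   = 6.0    # DG-to-DT delay line as stated in the AJP paper
JITTER_NS           = 1.25   # half-width of the 2.5 ns window

def start_accepted(dt_arrival_ns):
    # DG (START GATE) fires at t = 0; the TAC accepts a START (DT) only after 10 ns
    return dt_arrival_ns >= START_GATE_SETUP_NS

print(start_accepted(DG_TO_DT_DELAY_NS - JITTER_NS))   # False: genuine DT pulse ignored
print(start_accepted(DG_TO_DT_DELAY_NS + JITTER_NS))   # False: still ignored
# Only stray DT hits arriving after t0+10 ns can start the TAC, so the genuine
# GTR triples are cut off and the rate falls to near the accidental background.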

So, as described in [1], the GTR unit settings will cut off
almost all genuine GTR coincidences, yielding the "perfect"
g2. I did ask the chief experimenter about the 10ns delay
and the inconsistency. Oh, he knew it all along, and they had
actually used the proper delay (and not the 6ns stated in the
paper), but the paper was too long for such details. Lemme see,
they wrote the paper and the AJP told them to shorten it to such
and such length. Now, say, the six of them sweated for a day
editing the text, and just had it at about the right length,
except for 5 extra characters. Now, having cut out all they could
think of, they sit at the table wondering, vot to do? Then suddenly
a lightning strikes and they notice that if they were to replace
the delay they actually used (say 15ns, or anything above 11.5,
thus 2+ digit anyway) with 6 ns they could shorten the text by
10 characters. Great, let's do it, and so they edit the paper,
replacing the "real" delay of say 15ns with the fake delay of 6ns.
Yep, that's how the real delay which was actually and truly greater
than 10ns must have become the 6ns reported in the paper in 10 places.
Very plausible - it was the paper length that did it.

And my correspondent ended his reply with: "If I say I measured
triple coincidences (or lack thereof) then I did. Period. End of
discussion." Yes, Sir. Jahwol, Herr Professor.



--- Additional References:

2. J. F. Clauser, ``Experimental distinction between the quantum and
classical field-theoretic predictions for the photoelectric effect,''
Phys. Rev. D 9, 853-860 (1974).

3. P. Grangier, G. Roger, and A. Aspect, ``Experimental evidence for
a photon anticorrelation effect on a beam splitter: A new light on
single-photon interferences,'' Europhys. Lett. 1, 173-179 (1986).

4. R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.
 
  • #2
If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion.

Zz.
 
  • #3
ZapperZ said:
If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion. Zz.

You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had the wrong delays. As to writing to AJP, I am not subscribed to AJP any more (I was getting it for a few years after grad school). I will be generous and let the authors issue their own errata, since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would make the time delay shorter because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.
 
  • #4
nightlight said:
You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had the wrong delays. As to writing to AJP, I am not subscribed to AJP any more (I was getting it for a few years after grad school). I will be generous and let the authors issue their own errata, since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would make the time delay shorter because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.

I have been aware of the paper since the week it appeared online and have been highlighting it ever since. If it is true that they either messed up the timing or simply didn't publish the whole picture, then the referee didn't do as good a job as he/she should have, and AJP should be made aware via a rebuttal. Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Furthermore, the section that this paper appeared in has no page limit as far as I can tell, if one is willing to pay the publication cost. So them saying the paper was getting too long is a weak excuse.

However, what I can gather from what you wrote, and what has been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.

Zz.
 
  • #5
ZapperZ said:
However, what I can gather from what you wrote, and what has been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.

Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply; that doesn't mean you should complain that he forgot to power his devices.
You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

cheers,
Patrick.
 
  • #6
Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Well, if I write I would like to see it, and see the replies, etc.

However, what I can gather from what you wrote, and what has been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained.

It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it way back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim a nearly perfect "quantum" g2 without any subtractions (of accidental coincidences and of unpaired DG triggers, which substantially lowers g2 via eq (2), where N(G) would drop from 100,000 c/s to 8000 c/s). If true, it would imply genuine physical collapse in a spacelike region. If they had it for real, you would be looking at Nobel prize work. But it just happens to be contrary to the facts, experimental or theoretical.

Check for example a recent preprint by Chiao & Kwiat where they do acknowledge in their version that no real remote collapse was shown, although they still like that collapse imagery (for its heuristic and mnemonic value, I suppose). Note also that they acknowledge that a classical model can account for any g2 >= eta (the setup efficiency, when accounting for the unpaired DG singles in the classical model), which is much smaller than 1. Additional subtraction of background accidentals can lower the classical g2 still further (if one accounts for it in the classical model of the experiment). The experimental g2 would have to go below both effects to show anything nonclassical -- or, put simply, the experiment would have to show the violation on the raw data, and that has never happened. The PDC source, though, cannot show anything nonclassical since it can be perfectly modeled by a local semiclassical theory, the conventional Stochastic Electrodynamics (see also the last chapter of Yariv's Quantum Optics book on this).
 
  • #7
vanesch said:
Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply; that doesn't mean you should complain that he forgot to power his devices.
You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

cheers,
Patrick.

If those ARE the missing delays that are the subject here, then yes, I'd agree with you. *I* personally do not consider those in my electronics since we have calibrated them to make sure we only measure the time parameters of the beam dynamics rather than our electronics delays.

Again, the important point here is the central point of the result. Would that significantly change if we consider such things? From what I have understood, I don't see how that would happen. I still consider this as a good experiment for undergrads to do.

Zz.
 
  • #8
nightlight said:
Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Well, if I write I would like to see it, and see the replies, etc.

OK, so now YOU are the one offering a very weak excuse. I'll make a deal with you. If you write it, and it gets published, *I* will personally make sure you get a copy. Deal?

However, what I can gather from what you wrote, and what has been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained.

It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it way back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim a nearly perfect "quantum" g2 without any subtractions (of accidental coincidences and of unpaired DG triggers, which substantially lowers g2 via eq (2), where N(G) would drop from 100,000 c/s to 8000 c/s). If true, it would imply genuine physical collapse in a spacelike region. If they had it for real, you would be looking at Nobel prize work. But it just happens to be contrary to the facts, experimental or theoretical.

There appears to be something self-contradictory here, Caroline. You first do not believe their results, and then you said look, it also works well when explained using a semi-classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

Zz.
 
  • #9
What nightlight is complaining about is that the authors didn't take into account an extra delay in a component.

Not quite so. If the delay between the DG and DT pulses was more than 6ns, say it was 15ns, why was 6ns written in the paper at all? I see no good reason to change the reported delay from 15ns to 6ns. There is no economy in stating it was 6ns.

I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ;

There is no effective vs real delay. It is a plain delay between a DG pulse and a DT pulse. It is not like it would have added any length to the article to speak of.

You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

And the detector has latency and dead time, etc. That's not relevant. Talk about needless elements here: adding a third unit, going to the trouble of tuning its independent sampling of the EM fields on T and R, and describing it all is just fine. But saying that the delay was 15ns instead of 6ns is way too much trouble. Yeah, sure, that sounds very plausible.

If I wanted to do the collapse magic trick, I would add that third unit, too. There are so many more ways to reach the "perfection" with that unit in there than by simply reusing the already obtained samples from the GT and GR units (as an AND-ed signal).

In any case, are you saying you can show violation (g2<1) without any subtractions, on raw data?
 
  • #10
nightlight said:
There appears to be something self-contradictory here, Caroline.

I am not Caroline. For one, my English (which I learned when I was over twenty) is much worse than her English. Our educational backgrounds are quite different, too.

1. It's interesting that you would even know WHO I was referring to.

2. You also ignored the self-contradiction that you made.

3. And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!

Zz.
 
  • #11
There appears to be something self-contradictory here... You first do not believe their results, and then you said look, it also works well when explained using a semi-classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

No, that's not what I said. You can't get what they claim, the violation of g2>=1, without subtractions (the accidentals and unpaired singles; the two can be traded by settings on the detectors, window sizes, etc). But if you do the subtractions, and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2', is now much smaller than 1, but that is not a violation of "classicality" in any real sense, merely a convention of speech (when the adjusted g2' is below 1 we define such a phenomenon as "nonclassical"). But a classical model which also does the same subtraction on its predictions will work fine.


If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6ns and give the correct value, if that is true (which, from their g2 of nearly 0, I don't think it is, unless they had a few backup "last tricks" already active in the setup). But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).
 
  • #12
nightlight said:
There appears to be something self-contradictory here... You first do not believe their results, and then you said look, it also works well when explained using a semi-classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

No, that's not what I said. You can't get what they claim, the violation of g2>=1, without subtractions (the accidentals and unpaired singles; the two can be traded by settings on the detectors, window sizes, etc). But if you do the subtractions, and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2', is now much smaller than 1, but that is not a violation of "classicality" in any real sense, merely a convention of speech (when the adjusted g2' is below 1 we define such a phenomenon as "nonclassical"). But a classical model which also does the same subtraction on its predictions will work fine.

Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction has to be made. Have you ever performed a dark-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6ns and give the correct value, if that is true (which, from their g2 of nearly 0, I don't think it is, unless they had a few backup "last tricks" already active in the setup). But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).

No, you are not being "nice" at all. In fact, all I see is a cop out. Being "nice" means taking the responsibility to correct something that one sees as a possible error or misleading information and informing the community in question about it. You refused to do that. Instead, you whined about it here where it possibly will make ZERO impact on anything. So why even bother in the first place? The authors may not make any corrections at all based on what you have described as their response, so most likely the paper will stand AS IS. They will continue to get the recognition, you get NOTHING.

Zz.
 
  • #13
nightlight said:
1. It's interesting that you would even know WHO I was referring to.

Why is it interesting? Haven't you argued with her here quite a bit, not long ago? And she has been all over the internet on this topic for several years. Despite her handicaps in this field (being neither a physicist nor a male), she is quite a warrior, and at the bottom of it she is right on the experimental claims of QM/QO.

Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She has also repeatedly claimed that "QM is dead" despite admitting she knows very little of it and of classical mechanics.

So the person you championed is ignorant of the very subject she's criticizing, by her own admission.

And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!

Nothing goes to oblivion, not even a butterfly flapping its wings somewhere in Hawaii. Writing to AJP, where they will censor anything they disagree with ideologically, is not the kind of pursuit I care about. I left academia and work in industry to stay away from their kind of self-important politician-scientists. I might post it to usenet, though.

I agree. The kind of info you are advertising fits very nicely in the bastion of the Usenet.

It also appears that this conspiracy theory about physics journals has finally reared its ugly head again. Have you ever TRIED submitting to AJP, ever? Or did you already make up your mind and use that prejudice in deciding what they will do? It is ironic that you are criticizing a paper in which you claim they have possibly made a biased decision on what to select, and yet you practice the very same thing here. I hate to think you do this often in your profession.

Zz.
 
  • #14
nightlight said:
You refused to do that. Instead, you whined about it here where it possibly will make ZERO impact on anything.

Well, let's see if this paper (of which I heard first here) gets trotted out again as a definite proof of collapse or photon indivisibility, or some such. It was a magic trick for kids and now you know it, too.

As to the impact of an AJP letter, it would have zero impact there, if they were to publish it at all, since the authors of [1] would dismiss it as they did in email, and however phony their excuse sounds ("article was already too long" - give me a break), that would be the end of it. No reply allowed. And the orthodoxy vs heretics score is once again 1:0. When, years back, Marshall and Santos challenged Aspect's experiments, after Aspect et al replied, their subsequent submissions (letters and papers) were turned down by the editors as of no further interest. That left the general impression, false as it was (since they had the last word), that Aspect's side won against the critics, thus adding an extra feather to their triumph. Marshall and Santos have been battling it this way and it just doesn't work. The way the "priesthood" (Marshall's label) works, you just help them look better and you end up burned out if you play these games they have set up so that half of the time they win and the other half you lose.

That is correct, they were rebuffed in their attempts to rebut Aspect. And for good reason. They were wrong, although many (including Caroline and yourself) can't see it. It is clear the direction things are going in: more efficiency means greater violation of the local realistic position. So naturally you don't want the Thorn paper to be good, it completely undermines your theoretical objections to Bell tests - i.e. photons are really waves, not particles.

But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory. When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow. Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers. When he added them, the results did not change. A scientist would conclude that a signal is not being sent from one measuring apparatus to the other at a speed of c or less. Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look it as hard as the LR crew has.

I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work? Hmmm.
 
  • #15
nightlight said:
Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She has also repeatedly claimed that "QM is dead" despite admitting she knows very little of it and of classical mechanics.

That has no relation to what I said.

Yes it does, because you were the one who made her out to be what she's not - someone who possesses the knowledge to know what she's talking about. Unless you believe that one can learn physics in bits and pieces, and that physics is not interconnected, knowing one aspect of it does not necessarily mean one has understood it. This is what she does and this is who you are touting.

I agree. The kind of info you are advertising fits very nicely in the bastion of the Usenet... It also appears that this conspiracy theory about physics journals has finally reared its ugly head again...

Why don't you challenge the substance of the critique, instead of drifting into irrelevant ad hominem tangents? This is, you know, the physics section here. If you want to discuss politics there are plenty of other places where you can have fun.

But it IS physics - it is the PRACTICE of physics, which I do every single working day. Unlike you, I cannot challenge something based on what you PERCEIVE as the whole raw data. While you have no qualms about already making a definitive statement that the raw data would agree with a "semi-classical" model, I can't! I haven't seen this particular raw data and simply refuse to make conjectures on what it SHOULD agree with.

Again, I will ask you if you have made ANY dark current or dark count measurements, and have looked at the dark count energy spectrum, AND, based on this, if we CAN make a definitive differentiation between dark counts and actual measurements triggered by the photons we're detecting. And don't tell me this is irrelevant, because it boils down to the justification of making cuts in the detection counts, not only in this type of experiment, but in high energy physics detectors, in neutrino background measurements, in photoemission spectra, etc., etc.

Zz.
 
  • #16
Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction has to be made.

All the PDC source (as well as laser & thermal light) can produce is semi-classical phenomena. You need to read Glauber's derivation (see [4]) of his G functions (called "correlation" functions) and his detection model and see the assumptions being made. In particular, any detection is modeled as quantized EM field-atomic dipole interaction (which is Ok for his purpose) and expanded perturbatively (to 1st order for single detector, n-th order for n-detectors). The essential points for Glauber's n-point G coincidences are:

a) All terms with vacuum-produced photons are dropped (that results in the normally ordered products of creation and annihilation EM field operators). This formal procedure corresponds to the operational procedure of subtracting the accidentals and unpaired singles. For a single detector that subtraction is a non-controversial local procedure and it is built into the design, i.e. the detector doesn't trigger if no external light is incident on its cathode. {This of course is only approximate, since the vacuum and the signal fields are superposed when they interact with electrons, so you can't subtract the vacuum part accurately from just knowing the average square amplitude of the vacuum alone (which is 1/2 hv per EM mode on average) and the square amplitude of the superposition (the vector sum), i.e. knowing only V^2 and (V+S)^2 (V and S are vectors), you can't deduce what S^2, the pure signal intensity, is. Hence, the detector by its design effectively subtracts some average over all possible vectorial additions, which it gets slightly wrong on both sides -- subtractions which are too small result in (vacuum) shot noise, and subtractions which are too large result in missing some of the signal (absolute efficiency less than 1; note that the conventional QE definition already includes background subtractions, so QE could be 1 but with a very large dark rate, see e.g. the QE=83% detector, with noise comparable to the signal photon counts).}

But for multiple detectors, the vacuum removal procedure built into Glauber's "correlations" is non-local -- you cannot design a "Glauber" detector, not even in principle, which can subtract locally the accidental coincidences or unpaired singles. And these are the "coincidences" predicted by Glauber's Gn() functions -- they predict, by definition, the coincidences modified by the QO subtractions. All their nonlocality comes from a non-local formal operation (dropping of terms with absorptions of spacelike vacuum photons), or operationally, from inherently non-local subtractions (you need to gather data from all detectors and only then can you subtract accidentals and unpaired singles). Of course, when you add this same nonlocal subtraction procedure to semi-classical correlation predictions, they have no problem replicating Glauber's Gn() functions. The non-locality of the Gn() "correlations" comes exclusively from non-local subtractions, and it has nothing to do with a particle-like indivisible photon (that assumption never entered Glauber's derivation; it is a metaphysical add-on, with no empirical consequences or a formal counterpart with such properties, grafted on top of the theory for its mnemonic and heuristic value).

b) Glauber's detector (the physical model behind the Gn(), or g2=0, eq. 8 of the AJP paper) produces 1 count if and only if it absorbs the "whole photon" (which is a shorthand for the quantized EM field mode; e.g. an approximately localized single photon |Psi.1> is a superposition of vectors Ck(t)|1k>, where Ck(t) are complex functions of time and of the 4-vector k=(w,kx,ky,kz), and |1k> are Fock states in some basis, which depend on k as well; note that the hv quantum for the photon energy is an idealization applicable to an infinite plane wave). This absorption process in Glauber's perturbative derivation of Gn() is a purely dynamical process, i.e. the |Psi.1> is absorbed as an interaction of the EM field with an atomic dipole, all being purely local EM field-matter field dynamics treated perturbatively (in the 2nd quantized formalism). Thus, to absorb the "full photon" the Glauber detector has to interact with all of the photon's field. There is no magic collapse of anything in this dynamics (the von Neumann classical-quantum boundary is moved one layer above the detector) -- the fields merely follow their local dynamics (as captured in the 2nd quantized perturbative approximation).

Now, what does g2=0 (where g2 is eq AJP.8) mean for the single photon incident field? It means that T and R are the fields of that photon, and the Glauber detector which absorbs and counts this photon, and to which the g2() of eq (AJP.8) applies, must interact with the entire mode, which is |T> + |R>; which means the Glauber detector which counts 1 for this single photon is spread out to interact with/capture both the T and R beams. By its definition this Glauber detector leaves the EM vacuum as the result of the absorption, thus any second Glauber detector (which would have to be somewhere else, e.g. behind it in space, thus no signal EM field would reach it) would absorb and count 0, just the vacuum. That of course is the trivial kind of "anticorrelation" predicted by the so-called "quantum" g2=0 (the g2 of eq AJP.8). There is no great mystery about it; it is simply a way to label the T and R detectors as one Glauber detector for the single photon |Psi.1>=|T>+|R>, and then declare it has count 1 when one or both of DT and DR trigger, and count 0 if neither DT nor DR triggers. It is a trivial prediction. You could do the same (as is often done) with photo-electron counts on a single detector: declare 1 when 1 or more photo-electrons are emitted, 0 if none is emitted. The only difference is that here you would have this G detector spread out to capture the distant T and R packets (and you don't differentiate their photoelectrons regarding the declared counts of the G detector).

The actual setup for the AJP experiment has two separate detectors. Neither of them is the Glauber detector for the single mode superposed of the T and R vectors, since they don't interact with the "whole mode", but only with part of it. The QED model of the detector (which is not Glauber's detector any more) for this situation doesn't predict g2=0 but g2>=1, the same as the semiclassical model does: each detector triggers on average half of the time, independently of the other (within the gate G window, when both T and R carry the same intensity PDC pulse). Since each detector here gets just half of the "signal photon" field, the "missing" energy needed for its trigger is just the vacuum field which is superposed with the signal photon (with its field) on the beam splitter (cf. eq. AJP.10, the a_v operators). If you were to further split each T and R into halves, and then these 1/4 beams into further halves, and so on for L levels, with N=2^L final beams and N detectors, the number of triggers for each G event would follow a Binomial distribution (provided the detection time is short enough that the signal field is roughly constant; otherwise you get a compound Binomial, which is super-Poissonian), i.e. the probability of exactly k detectors triggering is p(k,N)=C(N,k)*p^k*(1-p)^(N-k), where p=p0/N, and p0 is the probability of a trigger of a single detector capturing the whole incident photon (which may be defined as 1 for an ideal detector and an ideal 1-photon state). If N is large enough, you can approximate the Binomial distribution p(k,N) with the Poissonian distribution p(k)=a^k exp(-a)/k!, where a=N*p=p0, i.e. you get the same result as the Poissonian distribution of the photoelectrons (i.e. these N detectors behave as the N electrons of the cathode of a single detector).
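
The split-beam counting statistics from the previous paragraph are easy to tabulate; a short sketch (my own illustration, with p0 = 1 and a modest N) of the Binomial p(k,N) with p = p0/N approaching the Poisson distribution with mean p0:

Code:
from math import comb, exp, factorial

# Counting statistics for the N-way split described above: each of the N detectors
# fires with probability p = p0/N per gate (p0 = 1 taken as the ideal single-detector case).
p0, N = 1.0, 64          # N = 2^L final beams after L = 6 splittings

def binomial(k):
    p = p0 / N
    return comb(N, k) * p**k * (1 - p)**(N - k)

def poisson(k):
    return p0**k * exp(-p0) / factorial(k)

for k in range(4):
    print(k, round(binomial(k), 4), round(poisson(k), 4))
# The two columns agree to ~1%: the N detectors behave like the N photoelectrons of a
# single cathode, i.e. no anticorrelation appears between the split beams.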

In conclusion, there is no anticorrelation effect "discovered" by the AJP authors for the setup they had. They imagined it and faked the experiment to prove it (compare the AJP claim to the much more cautious claim of the Chiao & Kwiat preprint cited earlier). The g2 of their eq (8) which does have g2=0 applies to the trivial case of a single Glauber detector absorbing the whole EM field of the "single photon" |T>+|R>. No theory predicts nonclassical anticorrelation, much less anything non-local, for their setup with the two independent detectors. It is a pure fiction resulting from an operational misinterpretation of QED for optical phenomena in some circles of Quantum Opticians and QM popularizers.

Have you ever performed a dark-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

Engineering S/N enhancements, while certainly important, e.g. when dealing with TV signals, have no place in these types of experiments. In these experiments (be it this kind of beam splitter "collapse" or Bell inequality tests) the "impossibility" is essentially of an enumerative kind, like the pigeonhole principle. Namely, if you can't violate the classical inequalities on raw data, you can't possibly claim a violation on subtracted data, since any such subtraction can be added to the classical model; it is a perfectly classical extra operation.

The only way the "violation" is claimed is to adjust the data and then compare it to the original classical prediction that didn't perform the same subtractions on its predictions. That kind of term-of-art "violation" is the only kind that exists so far (the "loophole" euphemisms aside).

-- Ref

4. R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.
 
  • #17
nightlight said:
Have you ever performed a dark-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

Engineering S/N enhancements, while certainly important, e.g. when dealing with TV signals, have no place in these types of experiments. In these experiments (be it this kind of beam splitter "collapse" or Bell inequality tests) the "impossibility" is essentially of an enumerative kind, like the pigeonhole principle. Namely, if you can't violate the classical inequalities on raw data, you can't possibly claim a violation on subtracted data, since any such subtraction can be added to the classical model; it is a perfectly classical extra operation.

The only way the "violation" is claimed is to adjust the data and then compare it to the original classical prediction that didn't perform the same subtractions on its predictions. That kind of term-of-art "violation" is the only kind that exists so far (the "loophole" euphemisms aside).

Wait... so how did what you wrote answer the question I asked?

This isn't just a S/N ratio issue! It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

I will ask for the third time: Have you performed a dark-count measurement on a photodetector and measured the dark current spectrum from it?

Zz.
 
  • #18
That is correct, they were rebuffed in their attempts to rebut Aspect. And for good reason. They were wrong,

How about substantiating your assertion? Can you give me a specific paper and result and what was "wrong" (maybe you meant "unpopular" or "unfashionable" instead of "wrong") about it?

It is clear the direction things are going in: more efficiency means greater violation of the local realistic position.

Yep, better, like this AJP paper's 377 std deviations, after they short-circuit the triple coincidences out of the loop altogether. There are a handful of fraudsters selling quantum computers to investors. Let me know when any of it actually works. After 30+ years of claims, your claim sounds just like the perpetuum mobile excuses from centuries ago. It's just around the corner, as soon as this friction can be taken out and this reservoir temperature dropped a few thousand degrees lower (before thermodynamics).

So naturally you don't want the Thorn paper to be good, it completely undermines your theoretical objections to Bell tests - i.e. photons are really waves, not particles.

This AJP paper's claim is plain wrong. Their timings don't add up; even the author acknowledges that much. You can check the paper and the data sheet and 'splain to me how that works. Their result contradicts the theory as well -- the g2<1 occurs only for subtracted data, not the raw counts. There is nothing nonclassical about that kind of g2<1, since the semi-classical model can be extended to subtract the same way as is done in the experiment, or as is done in the QO formalism via Glauber's normal-order prescription for his correlation functions.

But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory.

That's what worries me. A bunch of deluded kids trying to become physicists, imagining instant vanishing of EM fields at spacelike distances, just because some far detector somewhere triggered. That effect doesn't exist. The actual collapse of EM fields does occur, as shown in the semi-classical and 2nd quantized theories of the photoeffect -- it occurs through purely local interactions -- the photons do vanish. Check for example what a highly respected Quantum Optician from MIT, Hermann Haus, wrote on this matter:
F. X. Kärtner and H. A. Haus http://prola.aps.org/abstract/PRA/v47/i6/p4585_1
Phys. Rev. A 47, 4585–4592 (1993)

This paper intends to clarify some issues in the theory of quantum measurement by taking advantage of the self-consistent quantum formulation of nonlinear optics. A quantum-nondemolition measurement of the photon number of an optical pulse can be performed with a nonlinear Mach-Zehnder interferometer followed by a balanced detector. The full quantum-mechanical treatment shows that the shortcut in the description of the quantum-mechanical measurement, the so-called ``collapse of the wave function,'' is not needed for a self-consistent interpretation of the measurement process. Coherence in the density matrix of the signal to be measured is progressively reduced with increasing accuracy of the photon-number determination. The quantum-nondemolition measurement is incorporated in the double-slit experiment and the contrast ratio of the fringes is found to decrease systematically with increasing information on the photon number in one of the two paths. The ``gain'' in the measurement can be made arbitrarily large so that postprocessing of the information can proceed classically.

When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

You can tell me when it happens. As it stands in print, it can't work. The g2~0 is due to an experimental error. You can't wave away the 6ns delay figure in 10 places in the paper by blaming the article length restriction. If it was 15ns or 20ns, and not 6ns, the article length difference is negligible. It's the most ridiculous excuse I have heard, at least since the age when dogs ate homework.

I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow.

Anything to back up your faith in the AJP paper's claim?

Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers.

No, that was a red herring, a pretense of fixing something that wasn't a problem. Nobody seriously thought, much less proposed a model or theory, that the far apart detectors or polarizers are somehow communicating which position they are in so that they could all conspire to replicate the alleged correlations (which were never obtained in the first place, at least not on the measured data; the "nonclassical correlations" exist only on vastly extrapolated "fair sample" data, which is, if you will pardon the term, the imagined data). It was a self-created strawman. Why don't they tackle and test the fair sampling conjecture? There are specific proposals for how to test it, you know (even though the "quantum magic" apologists often claim that "fair sampling" is untestable), e.g. check Khrennikov's papers.

Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

Yes, the change is that 30+ years of promises have passed. And oddly, no one wants to touch the "fair sampling" tests. All the fixes fixed what wasn't really challenged. Adding a few std deviations or the aperture depolarization (which PDC fixed relative to the cascades) wasn't a genuine objection. The semiclassical models weren't even speculating in those areas (one would need some strange conspiratorial models to make those work). I suppose it is easier to answer the question you or your friend pose to yourselves than to answer with some substance the challenges from the opponents.


Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look it as hard as the LR crew has.

Term "loophole" is a verbal smokescreen. It either works or doesn't. The data either violates inequalities or it doesn't (what some imagined data does, doesn't matter, no matter how you call it).

I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work?

This experiment is contrary to the QED prediction (see the reply to Zapper, items a and b on Glauber's detectors and the g2=0 setup). You can't obtain g2<1 on raw data. Find someone who does it and you will be looking at a Nobel prize experiment, since it would overturn present QED -- it would imply the discovery of a mechanism to perform spacelike non-interacting absorption of the quantized EM fields, which according to existing QED evolve only locally and can be absorbed only through local dynamics.
 
  • #19
nightlight said:
It is clear the direction things are going in: more efficiency means greater violation of the local realistic position.

Yep, better, like this AJP paper's 377 std deviations, after they short-circuit the triple coincidences out of the loop altogether. There are a handful of fraudsters selling quantum computers to investors. Let me know when any of it actually works. After 30+ years of claims, your claim sounds just like the perpetuum mobile excuses from centuries ago. It's just around the corner, as soon as this friction can be taken out and this reservoir temperature dropped a few thousand degrees lower (before thermodynamics).

etc.

You have already indicated that no evidence will satisfy you; you deny all relevant published experimental results! I don't. Yet, time and time again, the LR position must be advocated by way of apology because you have no actual experimental evidence in your favor. Instead, you explain why all counter-evidence is wrong.

Please tell me, what part of the Local Realist position explains why one would expect all experiments to show false correlations? (On the other hand, please note that all experiments just happen to support the QM predictions out of all of the "wrong" answers possible.)

The fact is, the Thorn et al experiment is great. I am pleased you have highlighted it with this thread, because it is the simplest way to see that photons are quantum particles. Combine this with the Marcella paper (ZapperZ's ref on the double slit interference) and the picture is quite clear: Wave behavior is a result of quantum particles and the HUP.
 
  • #20
Wait... so how did what you wrote answer the question I asked? This isn't just a S/N ratio issue!

Subtracting background accidentals and unpaired singles is irrelevant for the question of the classicality of the experiment, as explained. If a semiclassical model M of the phenomenon explains the raw data RD(M), and you then perform some adjustment operation A and create adjusted data A(RD(M)), then for a classical model to follow, it simply needs to model the adjustment procedure itself on its original prediction, which is always possible (subtracting experimentally obtained numbers from the predicted correlations and removing others which the experiment showed to be unpaired singles -- it will still match because the unadjusted data matched, and the adjustment counts come from the experiment, just as you do to compare to the QO prediction via Glauber's Gn(). The only difference is that Glauber's QO predictions refer directly to subtracted data while the semiclassical ones refer to either, depending on how you treat the noise vs signal separation, but both have to use the experimentally obtained corrections and the raw experimental correlations, and both can match filtered or unfiltered data).
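
A bare-bones sketch of that argument (with hypothetical counts of my own, not the paper's data): if a classical model matches the raw counts, applying the identical subtraction to both the data and the model cannot open a gap between them:

Code:
# Hypothetical counts, not the paper's data: a classical model tuned to match the raw counts.
raw   = {"N_G": 100_000, "N_GT": 4_000, "N_GR": 4_000, "N_GTR": 3}
model = dict(raw)                           # matches the raw data by assumption

def g2(c):
    return c["N_GTR"] * c["N_G"] / (c["N_GT"] * c["N_GR"])

def adjust(c, accidental_triples=2):
    out = dict(c)
    out["N_GTR"] -= accidental_triples      # the same experimentally determined subtraction A
    return out

print(g2(raw),         g2(model))           # equal before the adjustment
print(g2(adjust(raw)), g2(adjust(model)))   # still equal after the same adjustment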

It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

These are interesting and deep topics, but they are just not related to the question of "nonclassicality".

I will ask for the third time: Have you performed a dark counts measurement on a photodetector and measured the dark current spectrum from it?

Yes, I had a whole lot of lab work, especially in graduate school at Brown (the undergraduate work in Belgrade had less advanced technology and more classical kinds of experiments, although it did have a better nuclear physics lab than Brown, one which I avoided as much as I could), including the 'photon counting' experiments with the full data processing, and I hated it at the time. It was only years later that I realized it wasn't all that bad and that I should have gotten more out of it. I guess that wisdom will have to wait for the next reincarnation.
 
  • #21
You have already indicated that no evidence will satisfy you; you deny all relevant published experimental results!

I do? Tell me what measured data violates the Bell inequalities or shows what this AJP "experiment" claims (violation of g2>=1 on non-subtracted data; that's not even Glauber's QED prediction, since his G2(x1,x2) assumes subtraction of all vacuum photon effects, both dark rates and unpaired singles). I ignore the overly optimistic claims based on imagined data under a conjecture ("fair sampling") which they refuse to test.

I don't.

Well, you prefer imagined data, even the non-data resulting from cutting the triple coincidences out via the wrong delay setting. I don't care about any of those kinds, that's true. Sorry. When I am in the mood to read fiction, there is better stuff out there than that.

Yet, time and time again, the LR position must be advocated by way of apology because you have no actual experimental evidence in your favor.

You tell me which Bell test or anticorrelation test violates the classicality constraints. None so far. Adjusting data is a perfectly classical operation -- even Bohr would go for that -- so it can't make classically compatible raw data non-classical.

Instead, you explain why all counter-evidence is wrong.

What evidence? Which experiment claims to be loophole free and thus excludes any local model?

Please tell me, what part of the Local Realist position explains why one would expect all experiments to show false correlations?

You are confusing imagined correlations (the adjusted data) with measured data (the raw counts). There is nothing non-classical in the procedure of subtracting experimentally obtained background rates and unpaired singles. You simply subtract the same thing from the classical prediction, and if it was compatible with raw data, it will stay compatible with the subtracted data.

(On the other hand, please note that all experiments just happen to support the QM predictions out of all of the "wrong" answers possible.)

Irrelevant for the question of experimental proof of non-classicality. The absence of a semiclassical computation modeling some experimental data doesn't imply that the experiment proves 'nonclassicality' (the exclusion of any semiclassical model even in principle), just as not using QED to compute telescope parameters doesn't mean QED is excluded for the kind of light going through a telescope.

Regarding the Bell inequality tests, the only difference is that classical models don't necessarily separate 'noise' from 'signal' along the same line as the QM prediction does. What QM calls accidental, the semiclassical model sees as data. There can be no fundamental disagreement between two theories which predict the same raw data. What they do later with the data, what to call each piece, or where to draw the lines between the labels ('noise', 'accidental', 'unpaired singles', etc.) is a matter of verbal convention. It's not the physics of the phenomenon. You keep persistently confusing the two, despite your tag line.

The fact is, the Thorn et al experiment is great. I am pleased you have highlighted it with this thread, because it is the simplest way to see that photons are quantum particles.

Yes, especially if you use 6ns delays on the TAC START, as claimed, in combination with the 10ns TAC GATE START latency. You get just the background rate of triple coincidences. Imagine what you can get if you cut the wire from the GTR unit going to the counter altogether: a perfect zero GTR rate, with no need to estimate backgrounds to subtract or unpaired singles. Absolute "quantum magic" perfection.
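
For anyone who wants to see what that delay mismatch does, here is a toy timing check. This is only my reading of the numbers quoted above (the 6ns and 10ns figures); the unit's actual internals may differ, and the window value is just an assumed figure for illustration.

Code:
# Toy timing check of the scenario described above (not a model of the actual unit).
# Assumption from the post: the triple-coincidence unit needs its GATE input ~10 ns
# before it will accept a START, while the true START arrives only ~6 ns after the gate.

GATE_LATENCY_NS = 10.0   # time the gate must be open before a START is accepted (from the post)
START_DELAY_NS  = 6.0    # delay of the START relative to the gate, as given in the paper
WINDOW_NS       = 2.5    # coincidence window (assumed value, for illustration only)

def start_accepted(gate_time_ns, start_time_ns):
    """A START is counted only if it falls inside the late-opening gate window."""
    gate_open  = gate_time_ns + GATE_LATENCY_NS
    gate_close = gate_open + WINDOW_NS
    return gate_open <= start_time_ns <= gate_close

# A true triple: gate and START come from the same pair, separated by the fixed 6 ns delay.
print(start_accepted(0.0, START_DELAY_NS))   # False -> true triples are never registered

# An accidental START from an uncorrelated event can still land inside the late window.
print(start_accepted(0.0, 11.2))             # True  -> only accidentals survive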
 
  • #22
nightlight said:
I will ask for the third time: Have you performed a dark counts measurement on a photodetector and measured the dark current spectrum from it?

Yes, I had a whole lot of lab work, especially in graduate school at Brown (the undergraduate work in Belgrade had less advanced technology and more classical kinds of experiments, although it did have a better nuclear physics lab than Brown, one which I avoided as much as I could), including the 'photon counting' experiments with the full data processing, and I hated it at the time. It was only years later that I realized it wasn't all that bad and that I should have gotten more out of it. I guess that wisdom will have to wait for the next reincarnation.

"photon counting experiments"? Did you, or did you not perform dark counts measurement AND dark current energy spectrum? [How many times have I asked that already? Four?]

Zz.
 
  • #23
nightlight said:
Yes, I measured and subtracted dark rates. Not the spectrum, though. What's that got to do with the subject matter anyway? If you think I misstated the facts being discussed, ask about that. You could just as well ask whether I played football or how I did with girls, too. When I get old enough to want to write my biography, I will do it myself, thanks.

But this is crucial, because the dark count spectrum is VERY DIFFERENT from the real count spectrum. The difference is like night and day (no pun intended). So if one is unsure whether simply doing dark counts and then subtracting them out of the actual count is kosher, one can confirm this by looking at the energy spectrum of the raw signal. One can even see signals resembling "false coincidence counts" due to random environmental events! These often occur at a different energy range than the calibrated signal for a particular photodetector!

So while they may not teach you, or maybe they don't tell you, why we can simply cut the raw data -- or maybe you thought this is such a trivial exercise -- there is ample physics behind this methodology. These cuts are NOT swept under the rug and simply dismissed. Spend some time doing this and you will know that this is not voodoo science based on some unproven idea.
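
Just as a toy illustration of the kind of discrimination I'm talking about (the distributions and the threshold here are invented, not taken from any particular detector):

Code:
import numpy as np

# Toy pulse-height (energy) spectra: dark counts pile up at low pulse heights, real
# single-photon counts sit at the calibrated peak. All numbers are invented.
rng = np.random.default_rng(0)

dark = rng.exponential(scale=0.15, size=20000)        # low-energy dark pulses
real = rng.normal(loc=1.0, scale=0.08, size=5000)     # calibrated signal peak (arbitrary units)
measured = np.concatenate([dark, real])

threshold = 0.6                                       # discriminator level read off the spectrum
kept = measured[measured > threshold]

print(f"fraction of dark counts above threshold: {(dark > threshold).mean():.4f}")
print(f"fraction of real counts above threshold: {(real > threshold).mean():.4f}")
print(f"kept {kept.size} of {measured.size} pulses")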

Zz.
 
  • #24
ZapperZ said:
But this is crucial, because the dark count spectrum is VERY DIFFERENT from the real count spectrum. The difference is like night and day (no pun intended). So if one is unsure whether simply doing dark counts and then subtracting them out of the actual count is kosher, one can confirm this by looking at the energy spectrum of the raw signal. One can even see signals resembling "false coincidence counts" due to random environmental events! These often occur at a different energy range than the calibrated signal for a particular photodetector!

So while they may not teach you, or maybe they don't tell you, why we can simply cut the raw data -- or maybe you thought this is such a trivial exercise -- there is ample physics behind this methodology. These cuts are NOT swept under the rug and simply dismissed. Spend some time doing this and you will know that this is not voodoo science based on some unproven idea. Zz.

I know they get subtracted by the detector. That doesn't change the fact that this subtraction is still approximate, since you simply don't have enough information to extract the pure signal intensity out of the intensity of the vector sum (S+V) of the total field, knowing just V^2. Thus you still get dark rates and some failures to detect low intensity signals, especially for short time windows. The sum you have is V^2 + 2VS + S^2, and you can't know the mixed term well enough to subtract that effect of V (you can remove only the V^2 term). For slow detections the mixed term will average out, but not for fast detections, which can be viewed as a manifestation of the energy-time uncertainty relation: if you want to know the signal energy S^2 accurately enough, you have to wait longer. So you will never have an accurate S^2 to send out (or, in the discrete counting mode, you will have wrong decisions, dark rates and missed counts).
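
A quick toy illustration of that cross-term point (all numbers invented): subtract the known mean dark level and watch the 2*S*V term swamp a weak S^2 for short integration windows, while it averages away for long ones.

Code:
import numpy as np

# Toy version of the argument above: the detector can subtract the mean vacuum/dark
# level <V^2>, but not the fluctuating 2*S*V cross term, which only averages away
# over long integration windows. All numbers are made up.
rng = np.random.default_rng(1)

S = 0.3                                    # weak, steady signal amplitude (arbitrary units)

def estimate_signal_power(n_samples):
    V = rng.normal(0.0, 1.0, n_samples)    # zero-mean vacuum/dark fluctuations, <V^2> = 1
    I = (S + V) ** 2                       # what gets integrated: S^2 + 2SV + V^2
    return I.mean() - 1.0                  # subtract the known mean dark level <V^2>

for n in (10, 1000, 100000):               # short vs long integration windows
    est = estimate_signal_power(n)
    print(f"window of {n:6d} samples: estimated S^2 = {est:+.3f}  (true value {S**2:.3f})")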
 
  • #25
nightlight said:
I know they get subtracted by the detector. That doesn't change the fact that this subtraction is still approximate, since you simply don't have enough information to extract the pure signal intensity out of the intensity of the vector sum (S+V) of the total field, knowing just V^2. Thus you still get dark rates and some failures to detect low intensity signals, especially for short time windows. The sum you have is V^2 + 2VS + S^2, and you can't know the mixed term well enough to subtract that effect of V (you can remove only the V^2 term). For slow detections the mixed term will average out, but not for fast detections, which can be viewed as a manifestation of the energy-time uncertainty relation: if you want to know the signal energy S^2 accurately enough, you have to wait longer. So you will never have an accurate S^2 to send out (or, in the discrete counting mode, you will have wrong decisions, dark rates and missed counts).

No, I'm not referring to ANY experiment, and certainly not THIS experiment. I was focusing on the physics of photodetection, on which I've spent considerable effort. What you have described would be a problem even when trying to prove a classical model of ANY kind. If the signal is weak, it is weak, no matter what you are trying to detect. However, that is why we do multiple measurements, and when you look at the energy spectrum over such a period, you CAN distinguish between what you are looking for and what came in at random. Go home, come back, do the spectrum again, and you will see that what was supposed to be there is still there, and what wasn't supposed to be there is now at different locations! Then do this again next week at midnight to make sure you weren't facing some astronomical event causing anomalous systematic counts in your detector -- and yes, I did work with THAT sensitive an instrument.

I've yet to see anyone who has done THIS systematic determination of the dark count spectrum, compared against the real signal, dispute the methodology used in these experiments. And there ARE people whose sole profession is instrumentation physics. If anyone would know what part of the detected signal to trust and what part not to, they certainly would.

Zz.
 
  • #26
nightlight said:
What evidence? Which experiment claims to be loophole free and thus excludes any local model?

Q.E.D. :)

OK, how about Bell Test Experiments which even mentions some of the loopholes commonly asserted.

And what experiment is loophole free anyway? Focus enough energy on anything, and you may plant a seed of doubt. But to what purpose? QM says the entangled photon correlation will be [tex]\cos^2 \theta[/tex]. If that formula is not sufficiently accurate, what better formula do you have? Let's test that!
 
  • #27
nightlight said:
And there ARE people whose sole profession is instrumentation physics.

Yes, I know, I am married to one of these "people". My wife, who is an experimental physicist, worked (until our third kid was born; from there to our fifth she advanced to full-time mother and kids' sports taxi driver) in a couple of instrumentation companies as a physicist and later as a VP of engineering (one for air pollution measurements, another for silicon clean rooms and disk defect detection). They were all heavy users of QO correlation techniques, and it was while visiting there and chatting with their physicists and engineers that it dawned on me that I had no clue what I was talking about regarding quantum Bell tests and the measurement problem (which were the topic of my masters thesis in Belgrade; I switched later to nonperturbative field theory methods, with A. Jevicki and G. Guralnick, when I came to the USA, to Brown grad school). What I knew before was all fiction and mathematical toy models detached completely from reality. When I realized how real-world optical measurements are done (unlike the fake stuff done at university labs, such as this AJP "demonstration" of photon magic), it occurred to me to re-examine the Bell inequality, and within days I "discovered" that it would be easy to violate the inequalities with the right amount of data subtraction, and even completely replicate the QM prediction with the right kind of losses. I wrote a little computer simulation which indeed could reproduce the QM cos^2 correlations on subtracted data. Not long after that I found it was all an old hat: P. Pearle had come up with roughly the same idea a couple of decades earlier, and a bunch of people have rediscovered it since. Then I ran into Marshall and Santos (and the rest of the "heretics"), none of whom I had ever heard of before, even though this problem was my MS thesis material. That's how shielded students are from "politically incorrect" science. I was really POed (and still am, I guess) at the whole sterile, conformist and politicized mentality which (mis)guided me through my academic life. It was exactly the same kind of stalinist mindset (or, as Marshall calls them, the "priesthood") I found in charge on both sides of the "iron curtain." And it's still there, scheming and catfighting as noisily and as vigorously as ever in their little castle in the air.


Then explain to me how these actually occurred:

1. 1986. People were CONVINCED that superconductivity would NEVER occur above 30K (33K to be exact, which was the phonon coupling limit at that time). In fact, the field of superconductivity was thought to be dead -- that we knew all there was to know, and that for the first time in physics a field had actually reached maturity. Yet, within just ONE year, the whole community embraced the high-Tc superconductors.

2. The standard model had ZERO ability to include the neutrino oscillation/mass. Yet, after the Super-K discovery, it is now the hottest area of neutrino physics.

3. Photoelectric effect. The Einstein model clearly says that for photon energies below the work function of the metal, NO electrons can be emitted. This is, after all, what you are taught in school, even in college! Yet I violate this almost every time I run the photoinjector! Not only that, various multiphoton photoemission works violate this ALL the time AND, get this, they even get to appear in physics journals! Horrors!

I can go on and on. All of the above completely overturn what we teach kids in college. If we were to buy your story, NONE of these would have happened, and we would do nothing but regurgitate the identical stuff we were taught in school. Such revolutionary discoveries as the ones above would have been suppressed the same way as these so-called "heretics". They haven't been! These alone are enough to destroy your paranoia about any "priesthood" wanting to maintain any kind of status quo in any part of physics.

Considering that the whole history of physics is filled with the continuing search for where existing theories would FAIL, it is absurd to suggest that all we care about is preserving them! Why would any physicist want to stop the natural evolutionary process of physics where the essential reason why it continues to exist is to reinvent itself?

Again, you haven't addressed my claim that you CAN differentiate between random and dark counts and the actual counts of interest. Look at the energy spectrum of dark currents in a photodetector, and then look at it again when you have a real count. Until then, you are missing a HUGE part of the credibility of your argument.

Zz.
 
  • #28
OK, how about Bell Test Experiments which even mentions some of the loopholes commonly asserted.

Whoa, that place is getting worse every time I visit. Now the opposing view is already a tiny, euphemism- and cheap-ridicule-laced, three-paragraph sectionlette under the funny title "Hypothetical Loopholes." Which idiot wrote that "new and improved" stuff? By my next visit I expect to see it has evolved into "Hypothetical Conspiracy Wingnut Alleged Loopholes" or something like that, with one paragraph on the JFK-Waco-911 linkage of the opposition fully exposed. Shame on whoever did that crap. It's not worth the mouse click to get there.

And what experiment is loophole free anyway?

As explained before, the engineering S/N filtering and data-adjustment procedures are perfectly classically reproducible add-ons. They have no useful place in 'nonclassicality' tests. All of the nonclassicality experimental claims so far are based on a gimmick: make the semiclassical prediction for non-adjusted outcomes, then adjust the outcomes (subtraction of accidentals and unpaired singles), since that is the form of the current QO prediction -- Glauber's "correlations", which have built in, simply by definition, the non-local subtraction of all vacuum-originated photons and their effects -- and then, upon "discovering" that the two don't match (as they ought not to), crow 'nonclassicality', reproduced yet again by 377 std deviations.

Again, to remind you (since your tag line seems to have lost its power over you) -- that is a convention, a map, for data post-processing and labeling, which has nothing to do with the physics of the phenomenon manifesting in the raw data. QO defines the G "correlations" in such a way that they have these subtractions built in, while the semiclassical models don't normally (unless contrived for the purpose of imitating Gn()) distinguish and label data as "accidental" or "signal vs noise" or apply any other form of engineering filtering of 'rejectables' up front, mixed in with the physical model. What is, for the Gn() post-processing convention, an "accidental coincidence" and discardable, is for a semiclassical theory just as good a coincidence as "the good" coincidence of the Gn() scheme (the purified, enhanced, signal-bearing coincidence residue).


QM says the entangled photon correlation will be [tex]\cos^2 \theta[/tex].

And so does the semiclassical theory, when you include the same subtraction conventions as those already built into the Quantum Optics engineering-style signal filtering, which predicts the cos^2 pseudo-correlations (pseudo, since they are not bare correlations of anything in the proper sense of the word -- by man-made verbal definition, terminological convention, the Gn() "correlations" mean data correlation followed by the standard QO signal filtering). I have nothing against data filtering. It is perfectly fine if you're designing TVs, cell phones and such, but it has no usefulness for 'non-classicality' experiments, unless one is trying to make a living publishing "quantum mysteries" books for the dupes, or swindling venture and military bucks for the "quantum computing, quantum cryptography and teleportation" magic trio (and there are a handful of those already, the dregs from the dotcom swindle crash).
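
To show the kind of thing that becomes possible once one side's non-detections are discarded, here is one concrete toy model of that class (a Gisin-Gisin-style detection-loophole model adapted to polarization -- not the simulation I mentioned above, just an example of the same family): every step is local and classical, yet the post-selected coincidences follow cos^2(a-b).

Code:
import numpy as np

# Local hidden-variable toy model with a detection loophole (Gisin-Gisin style,
# adapted to polarization). Alice always detects; Bob's detector fires only with
# probability |cos(lam - 2b)|. Keeping only pairs where both fire (i.e. discarding
# the "unpaired singles") yields P(same outcome) = cos^2(a - b).
rng = np.random.default_rng(2)

def run(a, b, n_pairs=200_000):
    lam = rng.uniform(0.0, 2.0 * np.pi, n_pairs)            # shared hidden variable per pair
    A = np.sign(np.cos(lam - 2.0 * a))                       # Alice always fires
    cb = np.cos(lam - 2.0 * b)
    detected = rng.uniform(0.0, 1.0, n_pairs) < np.abs(cb)   # Bob sometimes fails to fire
    B = np.sign(cb)
    same = (A == B)[detected]                                # keep only the "coincidences"
    return same.mean(), detected.mean()

for deg in (0, 22.5, 45, 67.5, 90):
    theta = np.radians(deg)
    p_same, eff = run(0.0, theta)
    print(f"{deg:5.1f} deg: P(same|coinc) = {p_same:.3f}   cos^2 = {np.cos(theta)**2:.3f}   (B-side efficiency ~{eff:.2f})")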

These kinds of arguments for 'nonclassicality' are based on mixing up and misapplying different data post-processing conventions, and then "discovering" that, when different conventions are used in two different theories, they don't match each other. Yep, so what. Three and a half inches are not three and a half centimeters either. Nothing to crow from the rooftops about with that kind of "discovery".
 
  • #29
nightlight said:
(BTW, I am not interested in discussing these politics-of-science topics, so I'll ignore any further diverging tangents thrown in. I'll respond only to the subject matter of the thread proper.)

Then don't START your rant on it!

It was YOU who was whining about the fact that things which counter the prevailing ideology are not accepted -- or did you have a memory lapse about the fact that at every opportunity you got, you never failed to refer to the "priesthood"? And you complain about going off on a tangent? PUHLEEZE!

I just showed you SEVERAL examples (there are more, even from WITHIN academia). In fact, the fact that these came from outside academia SHOULD make them even HARDER to accept. Yet, in one clear example, within just ONE year it was universally accepted. So there!

You want to go back to physics? Fine! Go do the energy spectrum of dark counts in a photodetector. Better yet, get the ones used to detect the Cerenkov light from passing neutrinos since these are MORE susceptible to dark counts all the time!

Zz.
 
  • #30
ZapperZ said:
Again, you haven't addressed my claim that you CAN differentiate between random and dark counts and the actual counts of interest. Look at the energy spectrum of dark currents in a photodetector, and then look at it again when you have a real count.

Of course I did address it, already twice. I explained why there are limits to such differentiation and consequently why dark counts and missed detections (combined in a tradeoff of the experimenter's choice) are unavoidable. That's as far as the mildly on-topic aspect of that line of questions goes. {And no, I am not going to rush out to find a lab to look at the "power spectra." Can't you get off it? At least in this thread (just start another one on "power spectra" if you really have to).}
 
  • #31
nightlight said:
Of course I did address it, already twice. I explained why there are limits to such differentiation and consequently why dark counts and missed detections (combined in a tradeoff of the experimenter's choice) are unavoidable. That's as far as the mildly on-topic aspect of that line of questions goes. {And no, I am not going to rush out to find a lab to look at the "power spectra." Can't you get off it? At least in this thread (just start another one on "power spectra" if you really have to).}

And why would this be "off topic" in this thread? Wasn't it you who made the claim that such selection of the raw data CAN CHANGE the result? This was WAAAAAY back towards the beginning of this thread. Thus, this is the whole crux of your argument: that AFTER such subtraction the data look very agreeable to the QM predictions, whereas before, you claim, all the garbage looks like the "classical" description. Did you or did you not make such a claim?

If you can question the validity of making such cuts, then why isn't it valid for me to question whether you actually know the physics of the photodetector, and WHY such cuts are valid in the first place? So why would this be off-topic in this thread?

Zz.
 
  • #32
nightlight said:
The photoeffect was modeled by a patent office clerk, who was a reject from academia.

I guess the conspirators silenced him too. Ditto Bell, whose seminal paper received scant attention for years after release.

Anyone who can come up with a better idea will be met with open arms. But one should expect to do their homework on the idea first. Half a good idea is still half an idea. I don't see the good idea here.

The Thorn experiment is simple, copied of course from earlier experiments (I think P. Grangier). If the photons are not quantum particles, I would expect this experiment to clearly highlight that fact. How can you really expect to convince folks that the experiment is generating a false positive? The results should not even be close if the idea is wrong. And yet a peer-reviewed paper has appeared in a reputable journal.

To make sense of your argument essentially requires throwing out any published experimental result anywhere, if the same criterion is applied consistently. Or does this level of criticism apply only to QM? Because the result which you claim to be wrong is part of a very concise theory which displays incredible utility.

I noticed that you don't bother offering an alternative hypothesis to either the photon correlation formula or something which explains the actual Thorn results (other than to say they are wrong). That is why you have half a good idea, and not a good idea.
 
  • #33
ZapperZ said:
And why would this be "off topic" in this thread? Wasn't it you who made the claim that such selection of the raw data CAN CHANGE the result? This was WAAAAAY back towards the beginning of this thread. Thus, this is the whole crux of your argument: that AFTER such subtraction the data look very agreeable to the QM predictions, whereas before, you claim, all the garbage looks like the "classical" description. Did you or did you not make such a claim? Zz.

You have tangled yourself up in the talmudic logic trap of Quantum Optics and its QM-magic non-classicality experimental claims. It is a pure word game, comparing three inches to three centimeters and discovering the two are different, as explained in several recent messages to DrChinese.

You can see the same gimmick in the verbal setup of nearly every such claim -- they define "classical correlations" conventionally, which means the prediction is made assuming no subtractions. Then they do the experiment, do the standard subtractions, and show that the results match the predictions of Glauber's "correlations" Gn(..). But the Gn()'s are defined differently from the quantity used for the "classical" prediction -- the Gn()'s include the subtractions in their definition, so of course they'll match the subtracted data, while the "classical prediction", which again by common convention doesn't include the subtractions, won't match it. Big surprise.

This AJP paper and experiment followed precisely this same common recipe for the good old Quantum Optics magic show. Their eqs. (1)-(3) are "classical", and by definition no subtractions are included in the model there. Then they go to the "quantum" expression for g2, eq. (8), label it the same as the classical one, but it is an entirely different thing -- it is Glauber's normal-ordered expectation value, and that is the prediction for the correlation plus the vacuum-effect subtractions (which operationally are the usual QO subtractions). The authors, following the established QO magic-show playbook, use the same label g2(0) for two different things, one which models the data post-processing recipe and one which doesn't. The two g2's of (1) vs. (8) are entirely different quantities.
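
Schematically (these are the standard textbook forms, not copied from the AJP paper), the two quantities that end up wearing the same label are

[tex]
g^{(2)}_{cl}(0)=\frac{\langle I_T\, I_R\rangle}{\langle I_T\rangle\langle I_R\rangle},
\qquad
g^{(2)}_{Q}(0)=\frac{\langle :\hat n_T\, \hat n_R:\rangle}{\langle \hat n_T\rangle\langle \hat n_R\rangle}
=\frac{\langle \hat a_T^\dagger \hat a_R^\dagger \hat a_R \hat a_T\rangle}{\langle \hat a_T^\dagger \hat a_T\rangle\langle \hat a_R^\dagger \hat a_R\rangle}
[/tex]

and the normal ordering in the second form is exactly where the vacuum-effect subtraction enters by definition.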

But these authors then went one step beyond the tried and true QO show playbook: by adding in the redundant third unit and then misconfiguring the delays, they achieve the Nobel-prize result (if it were real) -- they get nearly maximal violation before they have even subtracted the accidentals or the unpaired singles (on the DG detector, when neither DT nor DR triggers). It's a completely made-up effect, one that doesn't exist in QED theory ([post=529314]as already explained[/post]) or in any experiment (other than the [post=529069]rigged "data"[/post] they put out).

I also already explained what g2(0)=0 on a single photon state means operationally. That prediction of full anticorrelation has nothing to do with the AJP paper's setup and the computation via eq. (14). It is an operationally entirely different procedure to which g2=0 applies (see my [post=529314]earlier reply[/post] for details).

If you can question the validity of making such cuts,

I am not questioning some generic "validity" (say for some other purpose) of the conventional QO subtractions.

I am questioning their comparing of apples and oranges -- defining the "classical" g2() in eq. (AJP.1) so that it doesn't include subtractions, and then labeling the g2() in eq. (AJP.8) the same way, implicitly suggesting they follow the same convention regarding subtractions. Eq. (8) includes the subtractions by definition; eq. (1) doesn't. The inequality (3), which by convention models the non-subtracted data, is not violated by any non-subtracted data. If you want to model the subtracted data via a modified "classical" g2, you can easily do that, but then the inequality (3) is no longer g2>=1 but g2>=eta (the overall setup efficiency), which is much smaller than one (see the Chiao & Kwiat preprint, page 10, eq. (11), corresponding to AJP.14, and page 11, giving g2>=eta, corresponding to AJP.3). Chiao and Kwiat stayed within the limits of the regular QO magic-show rules, thus their "violations" are of the term-of-art kind, a matter of defining the custom term "violation" in this context, without really claiming any genuine nonlocal collapse, as they acknowledge (page 11):
And in fact, such a local realistic model can account for the
results of this experiment with no need to invoke a nonlocal collapse.
Then, after that recognition of the inadequacy of PDC + beam splitter as a proof of collapse, following the tried and true QO magic-show recipe, they do invoke Bell inequality violations, taking the violations for granted as having been shown. But that's another story, not their experiment, and it is also well known that no Bell test data has actually violated the inequality; only the imagined "data" (reconstructed under the "fair sampling" assumption, which no one wishes to put to a test) did violate the inequalities. As with the collapse experiments, the semiclassical model violates the Bell inequalities too, once you allow it the same subtractions that are assumed in the QO prediction (which includes the non-local vacuum-effect subtractions in its definition).
 
  • #34
The Thorn experiment is simple, copied of course from earlier experiments (I think P. Grangier). If the photons are not quantum particles, I would expect this experiment to clearly highlight that fact.

The AJP experiment shows that if you misconfigure the triple coincidence unit timings so that it doesn't get anything but the accidentals, the g2 via eq. (14) will come out nearly the same as the one the accidentals alone would produce (their "best" g2 is within half a std deviation of the g2 for accidentals alone).
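
To see the scale of the effect, here is the count-ratio form of g2 discussed above, fed with invented counts (NOT the paper's data; these are made-up numbers chosen only to illustrate the orders of magnitude):

Code:
def g2(n_g, n_gt, n_gr, n_gtr):
    # count-ratio form discussed in this thread: g2 = N_GTR * N_G / (N_GT * N_GR)
    return n_gtr * n_g / (n_gt * n_gr)

T_total  = 40.0          # total counting time, s (invented)
window   = 2.5e-9        # coincidence window, s (assumed for illustration)
N_G      = 4_000_000     # gate singles
N_GT     = 40_000        # gated DT coincidences
N_GR     = 40_000        # gated DR coincidences
N_R_raw  = 8_000_000     # raw DR singles over the run

# classical expectation for RAW counts: DT and DR trigger independently within the gate
N_GTR_classical = N_GT * N_GR / N_G                       # 400 counts -> g2 = 1
# a triple channel that sees only accidentals: uncorrelated DR firings inside the window
N_GTR_accidental = N_GT * (N_R_raw / T_total) * window    # ~20 counts

print("g2 from raw classical counts :", g2(N_G, N_GT, N_GR, N_GTR_classical))    # 1.0
print("g2 from accidentals only     :", g2(N_G, N_GT, N_GR, N_GTR_accidental))   # ~0.05

With independent T/R triggers the raw-count ratio sits at the classical boundary of 1, while a triple channel that registers only accidentals drags it down to a few percent, which is the kind of "violation" being discussed here.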

How can you really expect to convince folks that the experiment is generating a false positive?

Just read the data sheet and note their 6ns delay, brought up in 10 places in the text plus once in the figure. Show me how it works with the delays given. Or explain why they would put the wrong delay in the paper in so many places while somehow having used the correct delay in the experiment. See the Chiao-Kwiat paper mentioned in my previous message for what this experiment can prove.

To make sense of your argument essentially requires throwing out any published experimental result anywhere, if the same criterion is applied consistently. Or does this level of criticism apply only to QM? Because the result which you claim to be wrong is part of a very concise theory which displays incredible utility.

You're arguing as if we're discussing literary critique. No substance, no pertinent point, just the meta-discussion and psychologizing, euphemizing, conspiracy ad hominem labeling, etc.

I noticed that you don't bother offering an alternative hypothesis to either the photon correlation formula or something which explains the actual Thorn results (other than to say they are wrong).

Did you read this thread? I have [post=529314]explained what g2=0[/post] means operationally and why it is irrelevant for their setup and their use of it via eq. (AJP.14). The classical prediction is perfectly fine: the g2 of unsubtracted data used via (AJP.14) will be >=1. See also the [post=530058]Chiao-Kwiat acknowledgment of that plain fact I already quoted[/post].
 
  • #35
vanesch said:
It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

It wouldn't be unusual for pedagogical materials to cheat, for what in their minds is a justifiable cause. After all, how many phony derivations are done in regular textbooks? It doesn't mean the authors don't know better, but they have to make it simple enough, or at least mnemonic enough, for students to absorb. If they believe the final result is right, they don't hesitate to take cheap shortcuts to get there.

The same probably holds for the experimenters. Except that here, there is no such effect to be observed, [post=529314]in theory[/post] or in [post=530058]practice[/post] (see the section on Chiao-Kwiat paper).

Another bit that raises eyebrows -- the chief author would not provide even a single actual count figure for anything in the experiment. He just stuck with the final results for g2 in Table I, but neither the paper nor the email requests yielded a single actual count figure used to compute them. All the answers I got were in the style I quoted in the initial post -- 'we did it right, don't ask, period.' That doesn't sound like a scientific experiment at all.

There is no way they can claim that their formula (14), in which there are neither accidental subtractions nor subtractions of unpaired singles (since they use N_G for the singles rates; the Chiao-Kwiat experiment subtracts unpaired singles, which would correspond to (N_T+N_R) in the numerator of (AJP.14)), could yield g2<1.

The only hypothesis that fits this kind of 'too good to be true' perfection (contrary to other similar experiments and to the theory) and their absolutely unshakable determination not to expose any count figures used to compute g2, is what I stated at the top of this thread -- they did use the wrong 6ns delay in the experiment, not just in its AJP description.

Their article-length excuse for the timing inconsistency is completely lame -- that explanation would work only if they hadn't given the 6ns figure at all. Then one could indeed say: well, it's one of those experimental details we didn't provide, along with many others. But they did put the 6ns figure in the article, not once but in 10 places, plus in figure 5. Why over-emphasize a completely bogus figure so much, especially if they knew it was bogus? So, clearly, the repetition of 6ns was meant as an instruction to other student labs, a sure-fire magic ingredient which makes it always "work" to absolute "perfection", no matter what the accidentals and the efficiencies are.

-- comment added:

Note that in our earlier discussion you, too, were claiming that this kind of remote nondynamical collapse does occur. I gather from some of your messages in other threads that you have now accepted my locality argument that the local QED field dynamics cannot yield such a prediction without cornering itself into self-contradiction (since, among other things, that would imply that the actual observable phenomena depend on the position of von Neumann's boundary). If you follow that new understanding one more step, you will realize that this implies that von Neumann's general projection postulate for composite systems, such as that used for deducing Bell's QM prediction (the non-interacting projection of a spacelike remote subsystem), fails for the same reason -- it contradicts von Neumann's own deduction of the independence of phenomena from the position of his boundary. In other words, von Neumann's general projection postulate is invalid, since it contradicts the QED dynamics. The only projection which remains valid is one which has a consistent QED dynamical counterpart, such as photon absorption or fermion localization (e.g., a free electron captured into a bound state by an ion).
 
