Young's Experiment: Exploring Wave-Particle Duality

  • Thread starter Cruithne
  • Tags
    Experiment
In summary: this thread discusses a phenomenon observed in Young's experiment that appears to contradict other accounts of the experiment. The mystery is that even when light is treated as individual particles (photons), it still produces behaviour implying that it acts as a wave. It is also noted that the interference patterns produced were not the result of any observation.
  • #106
nightlight said:
ZapperZ No, what YOU think of "dark current" is the one that is fuzzy. The dark current that *I* detect isn't.

I was talking about the usage of the term. Namely, you claimed that in your field of work the contribution of QED vacuum fluctuations doesn't count toward the "dark current." Yet the authors of the detector preprint I cited (as well as the rest of the Quantum Optics literature) count the vacuum fluctuation contributions under the "dark current" term. That disagreement in usage alone demonstrates that the term's usage is fuzzy. QED.

The usage of the term "dark current" as used in PHOTODETECTORS (that is, after all, what we are talking about, aren't we?) has NOTHING to do with "QED vacuum fluctuation". Spew all the theories and ideology that you want. The experimental observations as applied to photodetectors and photocathodes do NOT make such a connection.

This is getting way too funny, because as I'm typing this, I am seeing dark currents from a photocathode sloshing in an RF cavity. Not only that, I can control how much dark current is in there. There's so much of it, I can detect it on a phosphor screen! QED vacuum fluctuations? It is THIS easy?! C'mon now! Let's do a reality check here!

Zz.
 
  • #107
vanesch What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena.

If you recall that Stochastic Quantization starts by adding Gaussian noise to the classical field dynamics (in functional form and with the added imaginary time variable) to construct the quantized field (in path integral form), you will realize that the Marshall-Santos model of Maxwell equations + ZPF (Stochastic Electrodynamics) can go at least as far as QED, provided you drop the external field and external current approximations, thus turning it into a ZPF-enhanced version of Barut's nonlinear self-field ED (which even without the ZPF reproduces the leading order of QED radiative corrections; see the Barut group's KEK preprints; they published much of it in Phys. Rev. as well).

Ord's results provide a much nicer, more transparent, combinatorial interpretation of the role that the analytic continuation (of the time variable) plays in transforming an ordinary diffusion process into the QT wave equations -- they extract the working part of the analytic continuation step in pure form -- all of its trick, the "go" of it, is contained in the simple cyclic nature of the powers of "i", which serve to separate the object's path sections between collisions into 4 sets (this is the same type of role that powers of x perform in combinatorial generating-function techniques: they collect and separate the terms for the same powers of x).

Marshall-Santos-Barut SED model (in the full self-interacting form) is the physics behind the Stochastic Quantization, the stuff that really makes it work (as a model of natural phenomena). The field quantization step doesn't add new physics to "the non-linear DNA" (as Jaynes put it). It is merely a fancy jargon for a linearization scheme (a linear approximation) of the starting non-linear PD equations. For a bit of background on this relation check some recent papers by a mathematician Krzysztof Kowalski, which show how the sets of non-linear PDEs (the general non-linear evolution equations, such as those occurring in chemical kinetics or in population dynamics) can be linearized in the form of a regular Hilbert space linear evolution with realistic (e.g. bosonic) Hamiltonians. Kowalski's results extend the 1930s Carleman and Koopman PDE linearization techniques. See for example his preprints: solv-int/9801018, solv-int/9801020, chao-dyn/9801022, math-ph/0002044, hep-th/9212031. He also has a textbook http://www.worldscibooks.com/chaos/2345.html with much more on this technique.
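As a concrete aside, here is a minimal sketch of the classic Carleman construction that Kowalski's papers generalize (this is only an illustration of the idea, not code from those papers): a scalar nonlinear ODE becomes an infinite linear system in the monomials y_n = x^n, which is then truncated and solved as an ordinary linear evolution.

```python
import numpy as np
from scipy.linalg import expm

# Carleman linearization of the nonlinear ODE  dx/dt = -x - x^2.
# With y_n = x^n one gets the *linear* (infinite) chain  dy_n/dt = -n*y_n - n*y_{n+1},
# truncated here at order N and solved with a matrix exponential.
N = 12
A = np.zeros((N, N))
for i in range(N):              # row i represents n = i + 1
    A[i, i] = -(i + 1)
    if i + 1 < N:
        A[i, i + 1] = -(i + 1)

x0 = 0.5
y0 = np.array([x0 ** (i + 1) for i in range(N)])

k = x0 / (1 + x0)               # exact solution: x(t) = k e^{-t} / (1 - k e^{-t})
for t in (0.5, 1.0, 2.0):
    x_carleman = (expm(A * t) @ y0)[0]
    x_exact = k * np.exp(-t) / (1 - k * np.exp(-t))
    print(f"t={t}:  truncated Carleman {x_carleman:.6f}   exact {x_exact:.6f}")
```

The truncated linear chain converges to the nonlinear solution as N grows; no new dynamics is added by the linearization step, which is the point being made above.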
 
  • #108
nightlight said:
You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters).

No, the logic in my approach is the following.
You claim that we will never have raw EPR data with photons, or with anything for that matter, and you claim that to be something fundamental. While I can easily accept the fact that maybe we'll never have raw EPR data (after all, there might indeed be limits to all kinds of experiments, at least in the foreseeable future), I have difficulties with its fundamental character. After all, I think we both agree that if there are individual technical reasons for this inability to have EPR data, this is not a sufficient reason to conclude that the QM model is wrong. If it is something fundamental, it should be operative on a fundamental level, and not depend on technological issues we understand. After all (maybe that's where we differ in opinion), if you need a patchwork of different technical reasons for each different setup, I don't consider that as something fundamental, but more like the technical difficulties people once had making an airplane go faster than the speed of sound.

You see, what basically bothers me in your approach, is that you seem to have one goal in mind: explaining semiclassically the EPR-like data. But in order to be a plausible explanation, it has to fit into ALL THE REST of physics, and so I try to play the devil's advocate by taking each individual explanation you need, and try to find counterexamples when it is moved outside of the EPR context (but should, as a valid physical principle, still be applicable).

To refute the "fair sampling" hypothesis used to "upconvert" raw data with low efficiencies into EPR data, you needed to show that visible-photon detectors are apparently plagued by a FUNDAMENTAL tradeoff between quantum efficiency and dark current. If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays ; after all, the number of modes (and its "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range. So if it is something that is fundamental, and not related to a specific technology, you should understand my wondering why this happens in the case that interests you, namely the eV photons, and not in the case that doesn't interest you, namely gamma rays. If, after all, this phenomenon doesn't appear for gamma rays, you might not be bothered, because there is another reason why we cannot do EPR experiments well with gamma rays; but to me, you should still explain why a fundamental property of EM radiation at eV energies suddenly disappears in the MeV range when you don't need it for your specific purpose of refuting EPR experiments.

Like with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or way to create the illusion of an energy excess.

Let's say that where we agree is that current EPR data do not exclude classical explanations. However, they conform completely to the quantum predictions, including the functioning of the detectors. It seems that it is rather your point of view which needs to do strange gymnastics to explain the EPR data together with all the rest of physics.
It is probably correct to claim that no experiment can exclude ALL local realistic models. However, they all AGREE with the quantum predictions - quantum predictions that you have difficulties explaining in a fundamental way without causing trouble somewhere else, like the visible-photon versus gamma-photon detection.

I asked you where the point-photon is in QED. You first tried via the apparent coexistence of anti-correlation and coherence on a beam splitter.

No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.
In the first case (which I thought was your point of view, but apparently not), you should find extra correlations beyond the Poisson prediction. But of course in the continuous beam case, you find the same anti-correlations as in the billiard ball photon picture.
However, in this case, I saw another problem: namely if all light beams are continuous beams, then how can we obtain extra correlations when we have a 2-photon process (which I suppose, you deny, and just consider as 2 continuous beams). This is still opaque to me.

You may wonder why it is always so that there is something else that blocks the non-classicality from manifesting. My guess is that it is so because it doesn't exist, and all attempted contraptions claiming to achieve it are bound to fail one way or the other, just as a perpetuum mobile has to.

The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological". I can understand your viewpoint and the comparison to perpetuum mobile, in that for each different construction, it is yet ANOTHER reason. That by itself is no problem. However, each time the "another reason" should be fundamental (meaning, not depending on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by a QE/dark-current tradeoff, the reason should be something fundamental - and you say it is due to the ZPF.
However, my legitimate question then was: where is this problem in gamma photon detectors?

cheers,
Patrick.
 
  • #109
vanesch The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological"

Before energy conservation was declared a fundamental law, you could only refute perpetuum mobile claims on a case-by-case basis. You would have to analyze and measure the forces, friction, etc., and show that the inventor's claims don't add up to a net excess of energy.

The conventional QM doesn't have any such general principle. But in the Stochastic ED, the falsity of the violation immediately follows from the locality of the theory itself (it is an LHV theory).

In nonrelativistic QM you not only lack a general locality principle, but you have non-local state collapse as a postulate, which is the key step in deducing the QM prediction that violates locality. Thus you can't have a fundamental refutation in QM -- it is an approximate theory which not only lacks but explicitly violates the locality principle (via collapse and via the non-local potentials in Hamiltonians).

So, you are asking too much, at least with the current explicitly non-local QM.

The QED (the Glauber Quantum Optics correlations) itself doesn't predict a sharp cos(theta) correlation. Namely, it predicts that for detectors and data normalized (via background subtractions and trigger-level tuning) to the above-vacuum-fluctuations baseline counting, one will have perfect correlation for these kinds of counts.

But these are not the same counts as the "ideal" Bell's QM counts (which assume a sharp and conserved photon number, neither of which is true in a QED model of the setups), since both calibration operations, the subtractions and the above-vacuum-fluctuation detector threshold, remove data from the set of pair events; thus the violation "prediction" doesn't follow without much more work. Or, if you're in a hurry, you can just declare the "fair sampling" principle true (in the face of the fact that semiclassical ED and QED don't satisfy such a "principle") and spare yourself all the trouble. After all, the Phys. Rev. referees will be on your side, so why bother.

On the other hand, the correlations computed without normal ordering, thus corresponding to the setup which doesn't remove vacuum fluctuations, yields Husimi joint distributions which are always positive, hence their correlations are perfectly classical.
 
  • #110
Let's say that where we agree, is that current EPR data do not exclude classical explanations.

We don't need to agree about any conjectures, but only about the plain facts. Euphemisms aside, the plain fact is that the experiments refute all local fair-sampling theories (even though there never were, and are not now, any such theories in the first place).
 
  • #111
No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.

There is nothing different about "classical photon" or QED photon regarding the amplitude modulations. They both propagate via the same equations (Maxwell) throughout the EPR setup. Whatever amplitude modulation the QED photon has there, the same modulation is assumed for the "classical" one. This is a complete non-starter for anything.

Meanwhile, you still avoid answering where you got the point-photon idea from in QED (other than through a mix-up with the "photons" of the pre-1920s Old Quantum Theory, since you're using the kind of arguments used in that era, e.g. "only one grain of silver blackened" modernized to "only one detector trigger", etc.). It doesn't follow from QED any more than point phonons follow from exactly the same QFT methods used in solid state theory.
 
  • #112
vanesch If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays ; after all, the number of modes (and its "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range.

But the fundamental interaction constants don't scale along. The design of the detectors and the analyzers depends on the interaction constants. For example, for visible photons there is too little space left between (1/2)hv1 and 1 hv1 compared to atomic ionization energies. At the MeV level, there is plenty of room, in terms of atomic ionization energies, to insert between (1/2)hv2 and 1 hv2. The tunneling rates for these two kinds of gaps are hugely different since the interaction constants don't scale. Similarly, the polarization interaction is too negligible for MeV photons to affect their energy-momentum for analyzer design... etc.


In the current QM you don't have a general locality principle with which to refute the Bell locality-violation claims outright. So the only thing one can do is refute particular designs, case by case. The detection problem plagues the visible-photon experiments. Analyzer problems plague the MeV photons.

To refute the perpetuum mobile claims before the energy conservation principle, one had to find the friction or gravity or temperature or current or whatever other design-specific mechanism ended up balancing the energy.

The situation with EPR Bell tests is only different in the sense that the believers in non-locality are on top, as it were, so even though they have never shown anything that actually violates locality, they insist that the opponents show why that whole class of experiments can't be improved in the future and made to work. That is quite a bit larger burden on the opponents. If they just showed anything that explicitly appeared to violate locality, one could just look at their data and find the error there (if locality holds). But there is no such data. So one has to look at the underlying physics of the design and show how that particular design can't yield anything decisive.
 
  • #113
nightlight said:
There is nothing different about "classical photon" or QED photon regarding the amplitude modulations. They both propagate via the same equations (Maxwell) throughout the EPR setup. Whatever amplitude modulation the QED photon has there, the same modulation is assumed for the "classical" one. This is a complete non-starter for anything.

No, this is only true for single-photon situations. There is no classical way to describe "multi-photon" situations, and that's what I was aiming at. If you consider these "multi-photon" situations as just the classical superposition of modes, then there is no way to get synchronized hits beyond the Poisson coincidence rate. Nevertheless, that HAS been found experimentally. Or I misunderstand your picture.

cheers,
Patrick.
 
  • #114
vanesch said:
The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological". I can understand your viewpoint and the comparison to perpetuum mobile, in that for each different construction, it is yet ANOTHER reason. That by itself is no problem. However, each time the "another reason" should be fundamental (meaning, not depending on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by a QE/dark-current tradeoff, the reason should be something fundamental - and you say it is due to the ZPF.
However, my legitimate question then was: where is this problem in gamma photon detectors?

cheers,
Patrick.

Here's a bit more ammunition for you, vanesch. If I substitute the photocathode in a detector from, oh, let's say, tungsten to diamond, and keep everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of material near it! So forget about going from visible-light detectors to gamma-ray detectors. Even just for a visible-light detector, you can already manipulate the dark current level. There is no QED theory on this!

I suggest we write something and publish it in Crank Dot Net. :)

Zz.
 
  • #115
ZapperZ Here's a bit more ammunition for you, vanesch. If I substitute the photocathode in a detector from, oh, let's say, tungsten to diamond, and keep everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of material near it!

You change the rate of tunneling, since the ionization energies are different in those cases.
 
  • #116
nightlight said:
vanesch If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays ; after all, the number of modes (and its "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range.

But the fundamental interaction constants don't scale along.

Yes, if we take into account atoms. But the very fact that we use atoms to make up our detection apparatus might 1) very well be at the origin of our inability to produce raw EPR data - I don't know (I even think so) - but 2) this is nothing fundamental, is it? It is just because we earthlings have to work with atomic matter. The description of the fundamental behaviour of EM fields shouldn't be dealing with atoms, should it?

To refute the perpetuum mobile claims before the energy conservation principles, one had to find friction or gravity or temperature or current of whatever other design specific mechanism ended up balancing the energy.

Well, let us take the following example. Imagine that perpetuum mobiles DO exist when working, say, with dilithium crystals. Imagine that I make such a perpetuum mobile, where on one side I put in 5000 V and 2000 A, and on the other side I have a huge power output on a power line, 500,000 V and 20,000 A.

Now I cannot make a direct voltmeter measurement of 500,000 V, so I use a voltage divider with a 1 GOhm resistor and a 1 MOhm resistor, and I measure 500 V over my 1 MOhm resistor. Also, I cannot pump 20,000 A into my ammeter, so I use a shunt resistance of 1 milliOhm in parallel with 1 Ohm, and I measure 20 A through the 1 Ohm branch.
Using basic circuit theory, I calculate from my measurements that the 500 V on my voltmeter times my divider factor (1000) gives me 500,000 V, and that my 20 A times my shunt factor (1000) gives me 20,000 A.

Now you come along and say that these shunt corrections are fudging the data, that the raw data give us 500 V x 20 A = 10 kW output while we put 5000 V x 2000 A = 10 MW into the thing, and that hence we haven't shown any perpetuum mobile.
I personally would be quite convinced that these 500 V and 20 A, with the shunt and divider factors, are correct measurements.
Of course, it is always possible to claim that shunts don't work beyond 200 A, and dividers don't work beyond 8000 V, in those specific cases where we apply them to such kinds of perpetuum mobile. But I find that far-fetched, if you accept shunts and dividers in all other cases.
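For the record, here is a quick check of the scaling arithmetic in this analogy (just a restatement of the numbers above, with the divider and shunt factors computed from the stated resistor values):

```python
# Voltage divider: 1 GOhm over 1 MOhm; the meter reads the voltage across the 1 MOhm leg.
R_high, R_low = 1e9, 1e6
divider_factor = (R_high + R_low) / R_low        # ~1001 (rounded to 1000 in the text)
V_inferred = 500 * divider_factor                # ~5.0e5 V

# Current shunt: 1 mOhm in parallel with a 1 Ohm meter branch; the meter reads 20 A.
R_shunt, R_meter = 1e-3, 1.0
shunt_factor = (R_shunt + R_meter) / R_shunt     # ~1001
I_inferred = 20 * shunt_factor                   # ~2.0e4 A

print(f"inferred output: {V_inferred:.3g} V x {I_inferred:.3g} A = {V_inferred*I_inferred/1e6:.3g} MW")
print(f"raw meter readings alone: 500 V x 20 A = {500*20/1e3:.0f} kW; input = {5000*2000/1e6:.0f} MW")
```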

cheers,
Patrick.
 
  • #117
nightlight said:
ZapperZ Here's a bit more ammunition for you, vanesch. If I substitute the photocathode in a detector from, oh, let's say, tungsten to diamond, and keep everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of material near it!

You change the rate of tunneling, since the ionization energies are different in those cases.

Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it had something to do with "QED vacuum fluctuation"! Obviously, you saw nothing wrong with dismissing things without finding out what they are.

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Zz.
 
  • #118
vanesch No, this is only true for single-photon situations. There is no classical way to describe "multi-photon" situations, and that's what I was aiming at. If you consider these "multi-photon" situations as just the classical superposition of modes, then there is no way to get synchronized hits beyond the Poisson coincidence rate. Nevertheless, that HAS been found experimentally. Or I misunderstand your picture.

We already went over "sub-Poissonian" distributions a few messages back -- it is the same problem as with anti-correlation. This is purely the effect of the vacuum fluctuation subtraction (for the correlation functions, by using normal operator ordering), yielding negative probability regions in the Wigner joint distribution functions. The detection procedures and setups corresponding to correlations computed this way adjust the data and tune the detectors so that in the absence of a signal all counts are 0.

The semiclassical theory which models this data normalization for coincidence counting also predicts the sub-Poissonian distribution, in the same way. (We already went over this at some length earlier...)
 
  • #119
Nightlight


Marshall-Santos-Barut SED model (in the full self-interacting form) is the physics behind the Stochastic Quantization , the stuff that really makes it work (as a model of natural phenomena). The field quantization step doesn't add new physics to "the non-linear DNA" (as Jaynes put it).
It is merely a fancy jargon for a linearization scheme (a linear approximation) of the starting non-linear PD equations.
For a bit of background on this relation check some recent papers by a mathematician Krzysztof Kowalski, which show how the sets of non-linear PDEs ... can be linearized in the form of a regular Hilbert space linear evolution with realistic (e.g. bosonic) Hamiltonians. .



Finally, I understand (maybe not :) ) that you agree that ODEs, PDEs and their nonlinearities may be rewritten in the Hilbert space formalism. See your example, Kowalski's paper (ODE) solv-int/9801018.

So it is the words "linear approximation" that seem not to be correct, as the reformulation of ODEs/PDEs with nonlinearities in Hilbert space is equivalent.
In fact, I understand that what you may reject in the Hilbert space formulation is the following:

-- H is an hermitian operator in id/dt|psi>=H|psi> <=> the time evolution is unitary (what you may call the linear approximation?).

And if I have understood the paper, Kowalski has demonstrated (a fact well known to Hilbert space users) that with a simple gauge transformation (a selection of other p,q-type variables), you can change a non-hermitian operator into a hermitian one (his relations (4.55) to (4.57)). Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.
So I still conclude that even a unitary evolution of the Schroedinger equation may represent a nonlinear ODE.
Therefore, I still do not understand what you call the "linear approximation", as in all cases I get exact results (approximation means, for me, the existence of small errors).

--All the possible solutions given by the Schrödinger type equation id/dt|psi>=H|psi> (and its derivatives in the second quantization formalism).
At least the solutions that are incompatible with local hidden variable theories.

So, if we interpret your proposed physical interpretation, stochastic electrodynamics, as giving the true results, what we just need to do is rewrite it formally in the Hilbert space formalism and look at the difference with QED?


Seratend.

P.S. For example, with this comparison, we can see effectively if the projection postulate is really critical or only a way of selecting an initial state.

P.P.S. We may thus also check with rather good confidence whether the stochastic electrodynamics formulation does not itself contain a hidden "hidden variable".

P.P.P.S. And then, if there is a really fundamental difference that we identify, we can update the Schroedinger equation :) as well as the other physical equations.
 
  • #120
nightlight said:
We already went over "sub-Poissonian" distributions a few messages back -- it is the same problem as with anti-correlation. This is purely the effect of the vacuum fluctuation subtraction (for the correlation functions, by using normal operator ordering), yielding negative probability regions in the Wigner joint distribution functions.

Yes, I know, but I didn't understand it.

cheers,
Patrick.
 
  • #121
vanesch Yes, if we take into account atoms. But the very fact that we use atoms to make up our detection apparatus might 1) very well be at the origin of our inability to produce raw EPR data - I don't know (I even think so) - but 2) this is nothing fundamental, is it? It is just because we earthlings have to work with atomic matter. The description of the fundamental behaviour of EM fields shouldn't be dealing with atoms, should it?

It is merely related to the specific choice of the setup which claims a (potential) violation. Obviously, why the constants are what they are is fundamental.

But why the constants (in combination with the laws) happen to block this or that claim in just that particular way is a function of the setup, of what physics was overlooked in picking that design. That isn't fundamental.

Now you come along and say that these shunt corrections are fudging the data, that the raw data give us 500 V x 20 A = 10 kW output while we put 5000 V x 2000 A = 10 MW into the thing, and that hence we haven't shown any perpetuum mobile.

Not quite an analogous situation. The "fair sampling" is plainly violated by the natural semiclassical model (see the earlier message on the sin/cos split) as well as in Quantum Optics, which has exactly the same amplitude propagation through the polarizers to the detectors and follows the same square-law detector trigger probabilities. See also Khrennikov's paper, cited earlier, on a proposed test of "fair sampling."

So the situation is as if you picked some imagined behavior contrary to the existing laws and claimed that in your setup you will assume the property holds.

Additionally, there is no universal "fair sampling" law that you can just invoke without examining whether it applies in your setup. And in the EPR-Bell setups it should at least be tested, since the claim is so fundamental.
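To make "unfair sampling" concrete, here is a small Monte Carlo sketch (an illustration in the spirit of the well-known Pearle/Gisin-Gisin detection-loophole models, not the specific sin/cos model referred to above): a purely local hidden-variable model in which one detector's probability of firing depends on the hidden polarization reproduces the quantum correlation cos 2(a-b) among the coincidences, at an average single-side efficiency of 2/pi, about 0.64.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
lam = rng.uniform(0.0, np.pi, n)                  # hidden polarization angle of each pair

def correlate(a, b):
    """Local model: Alice always detects; Bob detects with probability |cos 2(lam - b)|."""
    A = np.sign(np.cos(2 * (lam - a)))
    cb = np.cos(2 * (lam - b))
    B = np.sign(cb)
    detected = rng.uniform(0, 1, n) < np.abs(cb)  # the "unfair" sampling
    return np.mean(A[detected] * B[detected]), detected.mean()

for delta in (0.0, np.pi / 8, np.pi / 4, 3 * np.pi / 8):
    E, eff = correlate(0.0, delta)
    print(f"delta = {np.degrees(delta):5.1f} deg   E(coincidences) = {E:+.3f}   "
          f"cos(2*delta) = {np.cos(2 * delta):+.3f}   Bob's efficiency = {eff:.2f}")
```

The coincidence statistics alone cannot distinguish this local model from the quantum prediction; only the detection efficiency (and the unpaired singles it implies) gives it away, which is exactly what the "fair sampling" assumption waves off.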
 
  • #122
nightlight You change the rate of tunneling, since the ionization energies are different in those cases.

ZapperZ Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it had something to do with "QED vacuum fluctuation"! Obviously, you saw nothing wrong with dismissing things without finding out what they are.

Complete non sequitur. You stated that if you change detector materials, you will get different dark currents and that this would imply (if vacuum fluctuations had a role in dark currents) that vacuum fluctuations were changed.

It doesn't mean that the vacuum fluctuations changed. If you change the material and thus change the gap energies, the tunneling rates will change even though the vacuum stayed the same, since the tunneling rates depend on the gap size, which is a material property (in addition to depending on the vacuum fluctuation energy).
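As a rough numerical illustration of how strongly a field-emission ("dark current") rate depends on the barrier while the field itself stays fixed, here is a sketch using the standard Fowler-Nordheim form; the barrier heights and the local field-enhancement factor below are purely illustrative, not measured values for any particular cathode.

```python
import math

# Fowler-Nordheim-type field emission: J ~ (a * F**2 / phi) * exp(-b * phi**1.5 / F),
# with the usual constants a ~ 1.54e-6 A eV V^-2 and b ~ 6.83e9 eV^-1.5 V/m.
a, b = 1.54e-6, 6.83e9

def fn_current_density(F, phi):
    """Field-emitted current density (A/m^2) for local field F (V/m) and barrier phi (eV)."""
    return (a * F ** 2 / phi) * math.exp(-b * phi ** 1.5 / F)

F_macro = 50e6          # ~50 MV/m macroscopic RF field (illustrative)
beta = 100              # illustrative field-enhancement factor at surface asperities
F_local = beta * F_macro

# Note: emission comes from tiny asperity areas, so the total dark current stays small
# even when the local current density looks large.
for phi in (4.5, 3.0):  # illustrative effective barriers for two different cathode surfaces
    print(f"phi = {phi} eV:  J ~ {fn_current_density(F_local, phi):.2e} A/m^2")
```

The exponential factor changes the emission by orders of magnitude when the effective barrier drops, with nothing about the surrounding vacuum being altered; that is the point being made in this exchange.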

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Take any textbook on Quantum Optics, read up on photodetector noise, and find out for yourself.
 
  • #123
seratend And if I have understood the paper, Kowalski has demonstrated (a fact well known to Hilbert space users) that with a simple gauge transformation (a selection of other p,q-type variables), you can change a non-hermitian operator into a hermitian one (his relations (4.55) to (4.57)). Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.

That was not it. The point is the linearization procedure, which is the step of constructing M, which is a linear operator. That step occurs in the transition between (4.44) and (4.46), where M is obtained. The stuff you quote is the comparatively trivial part about making (4.45) look like a Schroedinger equation with a hermitian operator (instead of M).

So I still conclude that even a unitary evolution of the Schroedinger equation may represent a nonlinear ODE.
Therefore, I still do not understand what you call the "linear approximation", as in all cases I get exact results (approximation means, for me, the existence of small errors).


The point of his procedure is to take as input any set of non-linear (evolution) PD equations and create a set of linear equations which approximate the nonlinear set (an iterative procedure which arrives at an infinite set of linear approximations). The other direction, from linear to non-linear, is relatively trivial.

So, if we interpret your proposed physical interpretation, stochastic electrodynamics, as giving the true results, what we just need to do is rewrite it formally in the Hilbert space formalism and look at the difference with QED?

If you want to try, check first Barut's work which should save you lots of time. It's not that simple, though.
 
  • #124
nightlight said:
nightlight You change the rate of tunneling, since the ionization energies are different in those cases.

ZapperZ Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it had something to do with "QED vacuum fluctuation"! Obviously, you saw nothing wrong with dismissing things without finding out what they are.

Complete non sequitur. You stated that if you change detector materials, you will get different dark currents and that this would imply (if vacuum fluctuations had a role in dark currents) that vacuum fluctuations were changed.

It doesn't mean that the vacuum fluctuations changed. If you change the material and thus change the gap energies, the tunneling rates will change even though the vacuum stayed the same, since the tunneling rates depend on the gap size, which is a material property (in addition to depending on the vacuum fluctuation energy).

Eh? Aren't you actually strengthening my point? It is EXACTLY what I was trying to indicate: changing the material SHOULD NOT change the nature of the vacuum beyond the material - and I'm talking about the order of 1 meter from the material here! If you argue that the dark currents are due to "QED vacuum fluctuations", then the dark current should NOT change simply because I switch the photocathode since, by your own account, the vacuum fluctuations haven't changed!

But the reality is, IT DOES! I detect this all the time! So how can you argue that the dark currents are due to QED vacuum fluctuation? Simply because a book tells you? And you're the one who is whining about other physicists being stuck with some textbook dogma?

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Take any textbook on Quantum Optics, read up on photodetector noise, and find out for yourself.

... and why don't you go visit an experimental site that you are criticizing and actually DO some of these things? It appears that your confinement within theoretical boundaries is cutting you off from physical reality. Grab a photodiode and make a measurement for yourself! And don't tell me that you don't need to know how it works to be able to comment on it. You are explicitly questioning its methodology and what it can and cannot measure. Without actually using it, all you have is a superficial knowledge of what it can and cannot do.

Comments such as the one above come from exactly the kind of theorist that Harry Lipkin criticized in his "Who Ordered Theorists?" essay. When faced with an experimental result, such theorists typically say, "Oh, that can't happen because my theory says it can't." It's the same thing here. When I tell you that I SEE dark currents, and LOTS of them, and they have nothing to do with QED vacuum fluctuations, all you can tell me is to go look up some textbook? Oh, it can't happen because that text says it can't!

If all the dark currents I observe in the beamline were due to QED vacuum fluctuation, then our universe would be OPAQUE!

Somehow, there is a complete disconnect between what's on paper and what is observed. Houston, we have a problem...

Zz.
 
  • #125
nightlight said:
Not quite an analogous situation. The "fair sampling" is plainly violated by the natural semiclassical model (see the earlier message on the sin/cos split) as well as in Quantum Optics, which has exactly the same amplitude propagation through the polarizers to the detectors and follows the same square-law detector trigger probabilities.

Well, the fair sampling is absolutely natural if you accept photons as particles, and you only worry about the spin part. It is only in this context that EPR excludes semiclassical models. The concept of "photon" is supposed to be accepted.

I've been reading up a bit on a few of Barut's articles, which are quite interesting, and although I don't have the time and courage to go through all of the algebra, it is impressive indeed. I have a way of understanding his results: in QED language, when it is expressed in the path-integral formalism, he's working with the full classical solution, and not working out the "nearby paths in configuration space". However, this technique has an advantage over the usual perturbative expansion, in which the non-linearity of the classical solution is also expanded. So on one hand he neglects "second quantization effects", but on the other hand he includes non-linear effects fully, which are usually only dealt with in a perturbative expansion. There might even be cancellations of "second quantization effects", I don't know. In the usual path-integral approach, these are entangled (the classical non-linearities and the quantum effects).

However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

cheers,
Patrick.
 
  • #126
vanesch said:
However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

I guess the crux is given by equation 27 in quant-ph/9711042,
which gives us the coincidence probability for 2 photon detector clicks in 2 coherent beams. However, this is borrowed from quantum physics, no ? Aren't we cheating here ?
I still cannot see naively how you can have synchronized clicks with classical fields...

cheers,
Patrick.

EDIT: PS: I just ordered Mandel's quantum optics book to delve a bit deeper in these issues, I'll get it next week... (this is a positive aspect for me of this discussion)
 
  • #127
nightlight ... Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.

That was not it. The point is the linearization procedure, which is the step of constructing M, which is a linear operator. That step occurs in the transition between (4.44) and (4.46), where M is obtained. The stuff you quote is the comparatively trivial part about making (4.45) look like a Schroedinger equation with a hermitian operator (instead of M).


OK, for me the redefinition of the M of (3.2) into the "M'" of (4.46) was trivial :). In fact, I considered §D and §E as only a method for a well-known problem of vector spaces: how to diagonalize certain non-hermitian operators (i.e. where solving for the time evolution operator is easy, in our case).
Starting from a non-hermitian operator M = M1 + iM2, where M1 and M2 are hermitian, we may search for a linear transformation on the Hilbert space under which M becomes hermitian (and thus may be diagonalized with a further transformation).

So, the important point for me is that I think you accept that a PDE/ODE may be reformulated in the Hilbert space formalism, but without the obligation to get a unitary time evolution (i.e. i d/dt|psi> = M|psi>, where M may or may not be hermitian).

nightlight The point of his procedure is to take as input any set of non-linear (evolution) PD equations and create a set of linear equations which approximate the nonlinear set (an iterative procedure which arrives at an infinite set of linear approximations).

So "approximation" for you means an infinite set of equations to solve (like computing the inverse of an infinite-dimensional matrix). But theoretically, you know that you have an exact result. OK, let's take this definition. Well, you may accept more possibilities than I do: that a non-unitary time evolution (i d/dt|psi> = M|psi>) may be changed, with an adequate transformation, into a unitary one.

So, stochastic electrodynamics may in theory be transposed into an exact i d/dt|psi> = H|psi> (maybe in a second-quantized view) where H (the hypothetical Hamiltonian) may not be hermitian (and I think you assume it is not).

So if we assume (very quickly) that H is non-hermitian (H = H1 + iH2, with H1 and H2 hermitian and H1 >> H2 for all practical purposes), we thus have a Schroedinger equation with a kind of diffusion term (H2) that may approach some of the proposed modifications found in the literature to justify the "collapse" of the wave function (the decoherence program; see for example Joos, quant-ph/9908008, eq. 22, and many other papers).

Thus stochastic electrodynamics may (?) be checked more easily with experiments that measure the decoherence time of quantum systems than with the EPR experiments, where you have demonstrated that it is not possible to tell the difference.


Thus I think that the promoters of this theory should work on a Hilbert space formulation (at least an approximate one, but with enough precision to get a non-hermitian Hamiltonian). They could then apply it to other experiments to demonstrate the difference (i.e. different predictions between classical QM and stochastic electrodynamics through, maybe, decoherence experiments), rather than working on experiments that cannot separate the theories and verifying that the two theories agree with the experiment:
It is their job, as their claim seems to be that classical QM is not OK.

Seratend.
 
  • #128
vanesch Well, the fair sampling is absolutely natural if you accept photons as particles, and you only worry about the spin part. It is only in this context that EPR excludes semiclassical models. The concept of "photon" is supposed to be accepted.

I don't see how the "natural" part helps you restate the plain facts: the experiments refuted the "fair sampling" theories. It seems the plain fact still stands as is.

Secondly, I don't see it as natural for particles either. Barut has some EPR-Bell papers for spin-1/2 point particles in a classical Stern-Gerlach (SG) apparatus, and he gets unfair sampling. In that case it was due to the dependence of the beam spread in the SG on the hidden classical magnetic moment. This yields particle paths in the ambiguous, or even the wrong, region, depending on the orientation of the hidden spin relative to the SG axis. This makes the coincidence accuracy sensitive to the angle between the two SGs, i.e. the "loophole" was in the analyzer violating fair sampling (the same analyzer problem happens for high-energy photons).

It seems what you mean by "particle" is strict unitary evolution with a sharp particle number, and in the Bell setup with a sharp value of 1 for each side. That is not true of the QED photon number (you have neither a sharp nor a conserved photon number; and specifically for the proposed setups you either can't detect the photons reliably enough or you can't analyze them on a polarizer reliably enough, depending on photon energy; these limitations are facts arising from the values of the relevant atomic interaction constants and cross sections, which are what they are, i.e. you can't assume you can change them; you can only come up with another design which works around any existing flaw that was due to the relevant constants).

However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

It helps in the sense that if you can show that Barut's self-field ED agrees with QED beyond the perturbative orders relevant for Quantum Optics measurements (which is actually the case), you can immediately conclude, without having to work out a detailed Quantum Optics/QED prediction (which is accurate enough to predict exactly what would actually be obtained for correlations with the best possible detectors and polarizers consistent with the QED vacuum fluctuations and all the relevant physical constants of the detectors and the polarizers) that a local model of the experiment exists for any such prediction -- the model would be the self-field ED itself.

Thus you would immediately know that however you vary the experimental setup/parameters, within the perturbative orders to which QED and the self-field ED agree, something will be there to block it from violating locality. You don't need to argue about detector QE or polarizer efficiency; you know that anything they cobble together within the given perturbative orders will subject any conjecture about future technology being able to improve the efficiency of a detector (or other component) beyond threshold X to reductio ad absurdum. Thus the experimenter would have to come up with a design which relies in an essential way on predictions of QED perturbative orders going beyond the orders of agreement with the self-field ED. Note also that the existing experiments are well within the orders of agreement of the two theories, thus for this type of test there will always be a "loophole" (i.e. the actual data won't violate the inequality).

This is the same kind of conclusion that you would make about a perpetuum mobile claim which asserts that some particular design will yield 110% efficiency if the technology gets improved enough to have some sub-component of the apparatus work with accuracy X or better. The Bell tests make exactly this kind of assertion (without the fair sampling conjecture, which merely helps them claim that experiments exclude all local "fair sampling" theories, and that is too weak to exclude anything that exists, much less all future local theories) -- we could violate the inequality if we can get X = 83% or better for the overall setup efficiency (which accounts for all types of losses, such as detection, polarizer and aperture efficiency, photon number spread, ...).
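The 83% figure here presumably corresponds to the standard detection-efficiency threshold for a loophole-free CHSH test, eta = 2/(1 + sqrt(2)) = 2(sqrt(2) - 1), about 0.828 (the Garg-Mermin bound); a quick check of that arithmetic, under the usual no-enhancement assumption:

```python
import math

# Without fair sampling, counting only coincidences, the CHSH bound becomes |S| <= 4/eta - 2
# for symmetric detection efficiency eta. The quantum maximum 2*sqrt(2) exceeds this bound
# only if eta > 2 / (1 + sqrt(2)).
eta_threshold = 2 / (1 + math.sqrt(2))
print(f"threshold efficiency: {eta_threshold:.4f}")            # ~0.8284

for eta in (0.80, 0.83, 0.85):
    bound = 4 / eta - 2
    print(f"eta = {eta:.2f}:  local bound = {bound:.4f},  "
          f"violation possible: {2 * math.sqrt(2) > bound}")
```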

Now, since in the perpetuum mobile case you can invoke the general energy conservation law, you can immediately tell the inventor that (provided the rest of his logic is valid) his critical component will not be able to achieve accuracy X, since that would violate energy conservation. You don't need to go into a detailed survey of all conceivable relations between the relevant interaction constants in order to compute the limits on the accuracy X for all conceivable technological implementations of that component. You would know it can't work, since assuming otherwise would lead to reductio ad absurdum.
 
  • #129
vanesch I guess the crux is given by equation 27 in quant-ph/9711042,
which gives us the coincidence probability for 2 photon detector clicks in 2 coherent beams. However, this is borrowed from quantum physics, no ? Aren't we cheating here ?


The whole of section 2 is a derivation of the standard Quantum Optics prediction (in contrast to a toy QM prediction) expressed in terms of Wigner functions, which are a perfectly standard tool in QED or QM. So, of course, it is all standard Quantum Optics, just using Wigner's joint distribution functions instead of the more conventional Glauber correlation functions. The two are fully equivalent, thus it is all standard, entirely uncontroversial material. They are simply looking for a more precise expression of the standard Quantum Optics prediction, showing the nature of the non-classicality in a form suitable for their analysis.

The gist of their argument is that if for a given setup you can derive a strictly positive Wigner function for the joint probabilities, there is nothing else to discuss: all the statistics are perfectly classical. The problem is how to interpret the other case, when there are negative probability regions of W (which show up in PDC or any sub-Poissonian setup). To do that they need to examine what kind of detection and counting the Wigner distribution corresponds to operationally (to make predictions with any theoretical model you need operational mapping rules of that kind). With the correspondence established, their objective is to show that the effects of the background subtractions and the above-vacuum threshold adjustments on individual detectors (the detector adjustments can only set the detector's Poisson average; there is no sharp decision for each try, thus there is always some loss and some false positives) are in combination always sufficient to explain the negative "probability" of the Wigner measuring setups as artifacts of the combined subtraction and losses (the two can be traded off by the experimenter, but he will always have to choose between increased losses and increased dark currents, thus subtract the background and the unpaired singles in proportions of his choice, but with the total subtraction not reducible below the vacuum fluctuations limit).

That is what the subsequent sections 3-5 deal with. I recall discussing the problems of that paper with them at the time, since I thought that the presentation (especially in section 5) wasn't sharp and detailed enough for readers who weren't up to speed on their whole previous work. They have since sharpened their detector analysis with more concrete detection models and detailed computations; you can check their later preprints in quant-ph if sections 4 and 5 don't seem sufficient for you.

Note that they're not pushing the SED model explicitly here, but rather working in the standard QO scheme (even though they know from their SED treatment where to look for the weak spots in the QO treatment). This is to make the paper more publishable, easier to get through hostile gatekeepers (who might just say "SED who?" and toss it all out). After all, it is not a question of whether the experiments have produced any violation -- they didn't. In a fully objective system that would be it; no one would have to refute anything, since nothing was shown that needs refuting.

But in the present system the non-locality side has the advantage, so their handwave about "ideal" future technology is taken as a "proof" that it will work some day, which the popularizers and the educators then translate into "it worked" (which is mostly the accepted wisdom you'll find in these kinds of student forums). Unfortunately, with this kind of bias, the only remaining way for critics of the prevailing dogma is to show, setup by setup, why a given setup, even when granted the best conceivable future technology, can't produce a violation.

I still cannot see naively how you can have synchronized clicks with classical fields...

You can't get them, and you don't get them with the raw data, which will always have either too much uncorrelated background or too many pair losses. You can always lower the sensitivity to reduce the background noise in order to have smaller explicit subtractions, but then you will be discarding too many unpaired singles (events which don't have a matching event on the other side).
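The trade-off being described can be seen in a purely schematic toy model (not a model of any real photodetector): pulses riding on baseline fluctuations, with a single discriminator threshold. Raising the threshold suppresses false triggers ("dark counts") only at the price of missing more genuine pulses, which then show up as unpaired singles on the other side.

```python
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 1_000_000)   # baseline fluctuations (arbitrary units)
pulses = rng.normal(3.0, 1.0, 1_000_000)     # pulse heights when a real signal arrives

print("threshold   detection efficiency   false-trigger fraction")
for thr in (0.5, 1.5, 2.5, 3.5, 4.5):
    eff = np.mean(pulses > thr)
    dark = np.mean(baseline > thr)
    print(f"{thr:9.1f}   {eff:20.3f}   {dark:.2e}")
```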
 
  • #130
nightlight said:
The gist of their argument is that if for a given setup you can derive a strictly positive Wigner function for the joint probabilities, there is nothing else to discuss: all the statistics are perfectly classical.

This is what I don't understand (I'm waiting for my book on quantum optics..., hence my silence). After all, these Wigner functions describe - if I understand well - the Fock states that correspond closely to classical EM waves. However, the statistics derived from these Fock states for photon detection assume a second quantization, no? So how can you 1) use the statistics derived from second quantization (admittedly, for cases that correspond to classical waves) to 2) argue that you don't need a second quantization??

You see, the way I understand the message (and maybe I'm wrong) is the following procedure:
-Take a classical EM wave.
-Find the Fock space description corresponding to it (with the Wigner function).
-From that Fock space description, calculate coincidence rates and other statistical properties.
-Associate with each classical EM wave, such statistics. (*)
-Take some correlation experiment with some data.
-Show that you can find, for the cases at hand, a classical EM wave such that when we apply the above rules, we find a correct prediction for the data.
-Now claim that these data are explainable purely classically.

To me, the last statement is wrong, because of step (*)
You needed second quantization in the first place to derive (*).

cheers,
Patrick.

EDIT: I think that I've been too fast here. It seems indeed that two-photon correlations cannot be described that way, and that you always end up with the equivalence of a single-photon repeated experiment, the coherent-state experiment, and the classical <E^-(r1,t1) E^+(r2,t2)> intensity cross-correlation.
 
  • #131
nightlight said:
I still cannot see naively how you can have synchronized clicks with classical fields...

You can't get them, and you don't get them with the raw data, which will always have either too much uncorrelated background or too many pair losses. You can always lower the sensitivity to reduce the background noise in order to have smaller explicit subtractions, but then you will be discarding too many unpaired singles (events which don't have a matching event on the other side).

How about this:

http://scotty.quantum.physik.uni-muenchen.de/publ/pra_64_023802.pdf

look at figure 4

I don't see how you can explain these (raw, I think) coincidence rates unless you assume (and I'm beginning to understand what you're pointing at) that the classical intensities are "bunched" in synchronized intensity peaks.
However, if that is the case, a SECOND split of one of the beams towards 2 detectors should show a similar correlation, while the quantum prediction will of course be that they are anti-correlated.


2-photon beam === Pol. Beam splitter === beam splitter -- D3
                         |                     |
                         D1                    D2

Quantum prediction: D1 is correlated with (D2+D3) in the same way as in the paper ; D2 and D3 are Poisson distributed, so at low intensities anti-correlated.

Your prediction: D1 is correlated with D2 and with D3 in the same way, and D2 is correlated with D3 in the same way.

Is that correct ?
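For the semiclassical side of this proposed test, here is a toy Monte Carlo (an illustration only, with an arbitrary click probability per counting window): if a fluctuating classical intensity is shared by a 50/50 beam splitter and each detector clicks with probability proportional to its share of the intensity, the D2-D3 cross-correlation can only be g2 >= 1 (about 2 for thermal-like fluctuations), i.e. correlated clicks, never the anti-correlation predicted for a single photon.

```python
import numpy as np

rng = np.random.default_rng(2)
n_windows = 1_000_000
# Semiclassical picture: a fluctuating classical intensity I hits the 50/50 beam splitter;
# each detector clicks in a short window with probability proportional to its share of I.
I = rng.exponential(1.0, n_windows)     # thermal-like ("bunched") intensity per window
eta = 0.05                              # illustrative click-probability scale per window

p_each = eta * I / 2                    # each output of the 50/50 splitter sees I/2
click_D2 = rng.uniform(0, 1, n_windows) < p_each
click_D3 = rng.uniform(0, 1, n_windows) < p_each

g2_cross = np.mean(click_D2 & click_D3) / (np.mean(click_D2) * np.mean(click_D3))
print(f"cross-correlation g2(D2, D3) = {g2_cross:.3f}   (classical bound: >= 1)")
```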

cheers,
Patrick.
 
  • #132
vanesch After all, these Wigner functions describe - if I understand well - the Fock states that correspond closely to classical EM waves.

No, the Wigner (or Husimi) joint distributions and their differential equations are a formalism fully equivalent to the Fock space formulation. The so-called "deformation quantization" (an alternative form of quantization) is based on this equivalence.

It is Glauber's "coherent states" which correspond to classical EM theory. Laser light is a template for coherent states. For them the Wigner functions are always positive, thus there is no non-classical prediction for these states.
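A minimal numerical illustration of that contrast (just an illustration, using the standard closed-form Wigner functions with hbar = 1 and the convention a = (x + ip)/sqrt(2)): the Wigner function of a coherent state is a displaced Gaussian, positive everywhere, while the one-photon Fock state dips to -1/pi at the origin.

```python
import numpy as np

# Closed-form Wigner functions evaluated on a phase-space grid:
#   coherent state |alpha>:  W(x,p) = (1/pi) * exp(-(x-x0)^2 - (p-p0)^2),  x0 + i p0 = sqrt(2)*alpha
#   one-photon Fock  |1>  :  W(x,p) = (1/pi) * exp(-(x^2+p^2)) * (2*(x^2+p^2) - 1)
x = np.linspace(-4, 4, 401)
X, P = np.meshgrid(x, x)
R2 = X**2 + P**2

alpha = 1.0 + 0.5j
x0, p0 = np.sqrt(2) * alpha.real, np.sqrt(2) * alpha.imag
W_coherent = np.exp(-(X - x0)**2 - (P - p0)**2) / np.pi
W_fock1 = np.exp(-R2) * (2 * R2 - 1) / np.pi

print("min W, coherent state:", W_coherent.min())   # non-negative everywhere
print("min W, |1> Fock state:", W_fock1.min())      # ~ -1/pi = -0.318 at the origin
```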

So how can you 1) use the statistics derived from second quantization (admittedly, for cases that correspond to classical waves) to 2) argue that you don't need a second quantization??

The Marshall-Santos PDC paper you cited uses standard Quantum Optics (in the Wigner formalism) and a detection model to show that there is no non-classical correlation predicted by Quantum Optics for the PDC sources. They're not adding to or modifying the PDC treatment, merely rederiving it in the Wigner function formalism.

The key is in analyzing, with more precision and finesse than the usual engineering-style QO, the operational mapping rules between the theoretical distributions (computed under the normal operator-ordering convention, which corresponds to the Wigner functions) and the detection and counting procedures. (You may also need to check a few of their other preprints on detection for more details and specific models and calculations.) Their point is that the conventional detection and counting procedures (with the background subtractions and the tuning to [almost] no-vacuum detection) amount to the full subtraction needed to produce the negative probability regions (conventionally claimed as non-classicality) of the Wigner distributions, and thus the standard QO predictions, for the PDC correlations.

The point of these papers is to show that, at least for the cases analyzed, Quantum Optics doesn't predict anything non-classical, even though PDC, sub-Poissonian distributions, anti-bunching, ... are a soft non-classicality (they're only suggestive, e.g. at the superficial engineering or pedagogical level of analysis, but not decisive in the way a violation of Bell's inequality would be, which absolutely no classical theory, deterministic or stochastic, can produce).

The classical-Quantum Optics equivalence for thermal (or chaotic) light has been known since the 1950s (this was clarified during the Hanbury Brown and Twiss effect controversy). A similar equivalence was established in 1963 for the coherent states, making all laser-light effects (plus linear optical elements and any number of detectors) fully equivalent to a classical description. Marshall and Santos (and their students) have extended this equivalence to the PDC sources.

Note also that 2nd quantization is, in these approaches (Marshall-Santos SED, Barut self-field ED, Jaynes neoclassical ED), viewed as a mathematical linearization procedure for the underlying non-linear system, and not as something that adds any new physics. After all, the same 2nd quantization techniques are used in solid state physics and other areas for entirely different underlying physics. The 1st quantization is seen as a replacement of point particles by matter fields, thus there is no point in "quantizing" the EM field at all (it is a field already), or the Dirac matter field (again).

As background to this point, I mentioned (a few messages back) some quite interesting results by the mathematician Krzysztof Kowalski, which show explicitly how classical non-linear ODE/PDE systems can be linearized in a form that looks just like a bosonic Fock space formalism (with creation/annihilation operators, a vacuum state, particle number states, coherent states, a bosonic Hamiltonian and standard quantum state evolution). In that case it is perfectly transparent that there is no new physics brought in by the 2nd quantization; it is merely a linear approximation of a non-linear system (it iteratively yields an infinite number of linear equations from a finite number of non-linear equations). While Kowalski's particular linearization scheme doesn't show that QED is a linearized form of non-linear equations such as Barut's self-field, it provides an example of this type of relation between the Fock space formalism and non-linear classical equations.

You see, the way I understand the message (and maybe I'm wrong) is the following procedure:
-Take a classical EM wave.


No, they don't do that (here). They're using the standard PDC model and treating it fully within QO, just in the Wigner function formalism.
 
Last edited:
  • #133
vanesch I don't see how you can explain these (raw, I think) coincidence rates

Those are not even close to raw counts. Their pairs/singles ratio is 0.286 and their QE is 0.214. Avalanche diodes are very noisy, and to reduce background they have used a short coincidence window, resulting in a quite low (for any non-classicality claim) pairs/singles ratio. Thus the combination of the background subtractions and the unpaired singles is much larger than what the Marshall-Santos PDC classicality requires (which assumes only the vacuum-fluctuation noise subtraction, with everything else granted as optimal). Note that M-S SED (which is largely equivalent in predictions to Wigner distributions with positive probabilities) is not some contrived model made to get around some loophole, but a perfectly natural classical model, a worked out and refined version of Planck's 2nd Quantum Theory (of 1911, where he added an equivalent of (1/2)hν of noise per mode).
 
  • #134
Brief question about the Aharonov-Bohm effect

Hello Nightlight,

I follow your exceptional discussion with Vanesch and a couple of other contributors. It is maybe the best dispute that I have read on this forum so far. In particular, it looks as if you and Vanesch are really getting 'somewhere'. I am looking forward to the final verdict on whether or not the reported proof of spooky action at a distance is in fact valid.

Anyway, I consider the Aharonov-Bohm effect to be as fundamental a non-local manifestation of QM as the (here strongly questioned) Bell violation.

To put it a bit more provocatively, it also looks like one of those 'quantum mysteries' which you seem to despise.

My question is: Are you familiar with this effect and, if yes, do you believe in it (whatever that means) or do you similarly think that there is a sophisticated semi-classical explanation?

Roberth
 
  • #135
Roberth Anyway, I consider the Aharonov-Bohm effect to be as fundamental a non-local manifestation of QM as the (here strongly questioned) Bell violation.

Unfortunately, I haven't studied it in any depth beyond the shallow coverage in an undergraduate QM class. I have always classified it among the "soft" non-locality phenomena, those which are superficially suggestive of non-locality (after all, both Psi and A still evolve fully locally) but lack a decisive criterion (unlike Bell's inequalities).
 
  • #136
nightlight said:
Their pairs/singles ratio is 0.286 and their QE is 0.214. Avalanche diodes are very noisy, and to reduce background they have used a short coincidence window, resulting in a quite low (for any non-classicality claim) pairs/singles ratio.

Aaah, stop with your QE requirements. I'm NOT talking about EPR kinds of experiments here, because I have to fight the battle upstream with you :smile: normally, people accept second quantization, and you don't. So I'm looking at possible differences between what first quantization and second quantization can do. And in the meantime I'm learning a lot of quantum optics :approve: which is one of the reasons why I continue here with this debate. You have 10 lengths of advantage over me there (but hey, I run fast :devil:)

All single-photon situations are indeed fully compatible with classical EM, so I won't be able to find any difference in prediction there. Also, I have to live with low-efficiency photon detectors, because otherwise you object about fair sampling, so I'm looking at the possibility of a feasible experiment with today's technology that proves (or refutes ?) second quantization. I'm probably a big naive guy, and this must have been done before, but as nobody is helping here, I have to do all the work myself :bugeye:

What the paper that I showed proves to me is that we can get correlated clicks in detectors way beyond simple Poisson coincidences. I now understand that you picture this as correlated fluctuations in intensity in the two classical beams.
But if the photon picture is correct, I think that after the first split (with the polarizing beam splitter) one photon of the pair goes one way and the other one goes the other way, giving correlated clicks, superposed on about 3 times more uncorrelated clicks, and with a detection probability of about 20% or so. You picture that instead as two intensity peaks in the two beams, giving rise to enhanced detection probabilities (and hence coincidences) at the two detectors (is that right ?).
Ok, up to now, the pictures are indistinguishable.
But I proposed the following extension of the experiment:

In the photon picture, each of the two branches just contains "single photon beams", right ? So if we put a universal beam splitter in one such branch (not polarizing), we should get uncorrelated Poisson streams in each one, so the coincidences between these two detectors (D2 and D3) should be those of two independent Poisson streams. However, D2 PLUS D3 should give a signal which is close to what the old detector gave before we split the branch again. So we will have a similar correlation between (D2+D3) and D1 as we had in the paper, but D2 and D3 shouldn't be particularly correlated.

In your "classical wave" picture, the intensity peak in our split branch splits in two half intensity peaks, which should give rise to a correlation of D2 and D3 which is comparable to half the correlation between D2 and D1 and D3 and D1, no ?

Is this a correct view ? Can this distinguish between the photon picture and the classical wave picture ? I think so, but as I'm not an expert in quantum optics, nor an expert in your views, I need an agreement here.
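To make the comparison concrete, here is a toy Monte Carlo of the two naive pictures just described (hypothetical efficiency and background numbers; no ZPF or subthreshold effects included). It tallies, per heralded window, the normalized coincidence rate alpha = P(D2&D3) / (P(D2)*P(D3)): well below 1 if the single photon picks one exit port, at or above 1 if a classical pulse feeds both detectors.

```python
# Toy Monte Carlo (hypothetical numbers, no ZPF / subthreshold effects).
# Compares the anticorrelation parameter
#   alpha = P(D2 & D3) / (P(D2) * P(D3))  per heralded window
# for (a) "one photon exits one port" and (b) "classical pulse splits in two".
import math
import random
random.seed(0)

TRIALS, ETA, BG = 200_000, 0.2, 1e-3   # heralded windows, detector efficiency, background prob.

def alpha(picture):
    n2 = n3 = n23 = 0
    for _ in range(TRIALS):
        if picture == "photon":
            to_d2 = random.random() < 0.5          # the photon exits one port only
            c2 = (to_d2 and random.random() < ETA) or random.random() < BG
            c3 = ((not to_d2) and random.random() < ETA) or random.random() < BG
        else:
            pulse = random.expovariate(1.0)        # fluctuating classical pulse height
            p = 1 - math.exp(-ETA * pulse / 2)     # square-law response to half the pulse
            c2 = random.random() < p or random.random() < BG
            c3 = random.random() < p or random.random() < BG
        n2 += c2; n3 += c3; n23 += (c2 and c3)
    return (n23 / TRIALS) / ((n2 / TRIALS) * (n3 / TRIALS))

print("photon picture:          alpha ~", round(alpha("photon"), 2))     # << 1
print("classical-pulse picture: alpha ~", round(alpha("classical"), 2))  # >= 1
```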

cheers,
Patrick.
 
  • #137
In your "classical wave" picture, the intensity peak in our split branch splits in two half intensity peaks, which should give rise to a correlation of D2 and D3 which is comparable to half the correlation between D2 and D1 and D3 and D1, no ?

You're ignoring a few additional effects here. One is that the detector counts are Poissonian (which is significant for visible-range photons). Another is that you don't have a sharp photon number state but a superposition with at least a 1/2-photon equivalent spread.

Finally, the "classical" picture with ZPF allows a limited form of sub-Poissonian statistics for the adjusted counts (or the extrapolated counts, e.g. if you tune your detectors to a higher trigger threshold to reduce the explicit background subtractions in which case you raise the unpaired singles counts and have to extrapolate). This is due to the sub-ZPF superpositions (which enter the ZPF averaging for the ensemble statistics) of the signal and the ZPF in one branch after the splitter. Unless you're working in a high-noise, high sensitivity detector mode (which would show you, if you're specifically looking for it, a drop in the noise coinciding with the detection in the other branch), all you would see is an appearance of the sub-Poissonian behavior on the subtracted/extrapolated counts. But this is exactly the level of anticorrelation that the classical ZPF model predicts for the adjusted counts.

The variation you're describing was done for exactly that purpose in 1986 by P. Grangier, G. Roger and A. Aspect ("Experimental evidence for a photon anticorrelation effect on a beam splitter: a new light on single photon interference", Europhys. Lett., Vol. 1 (4), pp. 173-179, 1986). For the pair source they used their original Bell-test atomic cascade. Of course, the classical model they tested against, to declare the non-classicality, was the non-ZPF classical field, which cannot reproduce the observed level of anticorrelation in the adjusted data. { I recall seeing another preprint of that experiment (dealing with the setup properties prior to the final experiments) which had more detailed noise data indicating a slight dip in the noise in the 2nd branch, for which they had some aperture/alignment type of explanation. }

Marshall & Santos had several papers following that experiment where their final Stochastic Optics (the SED applied for Quantum Optics) had crystallized, including their idea of "subthreshold" superposition, which was the key for solving the anticorrelation puzzle. A longer very readable overview of their ideas at the time, especially regarding the anticorrelations, was published in Trevor Marshall, Emilio Santos "Stochastic Optics: A Reaffirmation of the Wave Nature of Light" Found. Phys., Vol 18, No 2. 1988, pp 185-223, where they show that a perfectly natural "subthreshold" model is in full quantitative agreeement with the anticorrelation data (they also relate their "subthreshold" idea to its precursors, such as the "empty-wave" by Selleri 1984 and the "latent order" by Greenberger 1987; I tried to convince Trevor to adopt a less accurate but catchier term "antiphoton" but he didn't like it). Some day, when this present QM non-locality spell is broken, these two will be seen as Galileo and Bruno of our own dark ages.
 
Last edited:
  • #138
nightlight said:
You're ignoring a few additional effects here. One is that the detector counts are Poissonian (which is significant for visible-range photons). Another is that you don't have a sharp photon number state but a superposition with at least a 1/2-photon equivalent spread.

You repeated that already a few times, but I don't understand it. After all, I can just as well work in the Hamiltonian eigenstates. In quantum theory, if you lower the intensity of the beam enough, you have 0 or 1 photon, and there is no such thing, to my knowledge, as a half-photon spread, because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit). After all, gamma photons are nothing else but Lorentz-transformed visible photons, so what is true for visible photons is also true for gamma photons; it is sufficient to have the detector speeding towards you (ok, there are some practical problems doing that in the lab :-)
Also, if the beam intensity is low enough (but unfortunately the background scales with the beam intensity), it is a fair assumption that there is only one photon at a time in the field. So I'm pretty sure about what I said about the quantum predictions:

Initial pair -> probability e1 to have a click in D1
probability e2/2 to have a click in D2 and no click in D3
probability e3/2 to have a click in D3 and no click in D2

This is superposed on an independent probability b1 to have a click in D1, a probability b2 to have a click in D2, and a probability b3 to have a click in D3, all proportional to some beam power I.

It is possible to have a negligible background by putting the thresholds high enough and isolating well enough from any light source other than the original power source. This can cost some efficiency, but not much.

If I is low enough, we consider that, due to the Poisson nature of the events, no independent events occur together (that is, we neglect probabilities that go as I^2). After all, this is just a matter of spreading the statistics over longer times.

So: rate of coincidences as predicted by quantum theory:
a) D1 D2 D3: none (order I^2)
b) D1 D2 no D3: I x e1 x e2/2 (+ order I^2)
c) D1 no D2 D3: I x e1 x e3/2 (+ order I^2)
d) D1 no D2 no D3: I x (b1 + e1 x (1- e2/2 - e3/2))
e) no D1 D2 D3: none (order I^2)
f) no D1 D2 no D3: I x (b2 + (1-e1)x e2/2)
g) no D1 no D2 D3: I x (b3 + (1-e1)xe3/2)
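Spelled out with placeholder numbers (e1..e3, b1..b3 and I below are illustrative values only, not data from the paper), the list amounts to:

```python
# First-order-in-I coincidence rates from the list above.
# e1..e3 (efficiencies), b1..b3 (background coefficients) and I (beam power)
# are hypothetical illustration values, not data from any experiment.
e1, e2, e3 = 0.2, 0.2, 0.2
b1, b2, b3 = 0.01, 0.01, 0.01
I = 1e-3

rates = {
    "D1  D2  D3 ": 0.0,                                   # leading term is higher order in I
    "D1  D2  !D3": I * e1 * e2 / 2,
    "D1  !D2 D3 ": I * e1 * e3 / 2,
    "D1  !D2 !D3": I * (b1 + e1 * (1 - e2 / 2 - e3 / 2)),
    "!D1 D2  D3 ": 0.0,                                   # order I^2
    "!D1 D2  !D3": I * (b2 + (1 - e1) * e2 / 2),
    "!D1 !D2 D3 ": I * (b3 + (1 - e1) * e3 / 2),
}
for pattern, rate in rates.items():
    print(f"{pattern}  {rate:.2e}")
```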


A longer, very readable overview of their ideas at the time, especially regarding the anticorrelations, was published in Trevor Marshall and Emilio Santos, "Stochastic Optics: A Reaffirmation of the Wave Nature of Light," Found. Phys., Vol. 18, No. 2, 1988, pp. 185-223,

If you have it, maybe you can make it available here ; I don't have access to Foundations of Physics.

cheers,
Patrick.
 
Last edited:
  • #139
vanesch After all, I can just as well work in the Hamiltonian eigenstates.

The output of a PDC source is not the same as freely picking a state in Fock space. That is why they restricted their analysis to PDC sources, where they can show that the resulting states will not have a Wigner distribution more negative than what the detection & counting, calibrated to a null result for the 'vacuum fluctuations alone', would produce. That source doesn't produce eigenstates of the free Hamiltonian (consider also the time resolution of such modes with sharp energy). It also doesn't produce gamma photons.

because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit).

You're trying to make the argument universal, which it is not. It is merely addressing an overlooked effect in the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters). The interaction constants, cross sections, tunneling rates,... don't scale with the photon energy. You can have a virtually perfect detector for gamma photons. But you won't have a perfect analyzer or beam splitter. Thus, for gammas you can get nearly perfect particle-like behavior (and very weak wave-like behavior), which is no more puzzling or non-classical than the mirror with holes in the coating scanned by a thin light beam mentioned earlier.

To preempt loose argument shifts of this kind, I will recall the essence of the contention here. We're looking at a setup where a wave packet splits into two equal, coherent parts A and B (packet fragments in orbital space). If brought together in a common area, A and B will produce perfect interference. If any phase shifts are inserted in the paths of A or B, the interference pattern will shift depending on the relative phase shift between the two paths, implying that in each try the two packet fragments propagate along both paths (this is also the propagation that the dynamical/Maxwell equations describe for the amplitude).

The point of contention is what happens if you insert two detectors DA and DB in the paths of A and B. I am saying that the two fragments propagate to their respective detectors, interact with them, and each detector triggers or doesn't trigger, regardless of what happens at the other detector. The dynamical evolution is never suspended, and the triggering is solely the result of the interaction between the local fragment and its detector.

You're saying that, at some undefined stage of the triggering process of detector DA, the dynamical evolution of fragment B will stop; fragment B will somehow shrink/vanish even if it is light-years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will resume.

The alleged empirical consequence of this conjecture would be the "exclusion" of trigger B whenever trigger A occurs. The "exclusion" is such that it cannot be explained by the local mechanism of independent detection under the uninterrupted dynamical evolution of each fragment and its detector.

Your subsequent attempt to illustrate this "exclusion" unfortunately mixes in entirely trivial forms of exclusion, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square-law detectors (which apply at the photon energies relevant here, i.e. those for which there are nearly perfect coherent splitters).

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try", so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states, which superpose all photon number states with coefficient magnitudes given by the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use a sharp number state as the input, and thus show a sharp number state in the output).

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter to split the beam, then add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square-law detectors are the relevant kind here) triggering in a given try is P(n,k) = n^k exp(-n)/k!, where n is the average number of triggers. A single multiphoton-capable detector with no dead time would show this same distribution of k for a given average rate n.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%. Thus, the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".

Your suggestion was to lower (e.g. via adjustments of the detector thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get 0.1 triggers among the ND detectors for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Thus the probability of multiple ND triggers is now only about 0.5%, i.e. we have roughly 19 times more single triggers than multiple triggers, while before, for n=1, we had only 37/26 = 1.4 times more single triggers than multiple triggers. It appears we have greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, so the counts will appear as "exclusive" as we wish. But does this kind of low-intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave packet fragments at all the other ND-1 detectors as soon as 1 detector triggers?

Of course not. Let's look at what happens under the assumption that each of the ND detectors triggers via its own Poissonian, entirely independently of the others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of the ND detectors will be P(m,k), where m = n/ND is the average trigger rate of each detector. Let's denote by p0 = P(m,k=0) the probability that one (specific) detector will not trigger; thus p0 = exp(-m). The probability that this particular detector will trigger at all (indicating 1 or more "photons") is then p1 = 1 - p0 = 1 - exp(-m). In your high-"exclusivity" (i.e. low-intensity) limit n->0, we will have m<<1 and p0 ~ 1-m, p1 ~ m.

The probability that none of the ND detectors will trigger, call it D(0), is thus D(0) = p0^ND = exp(-m*ND) = exp(-n), which is, as expected, the same as the no-trigger probability of a single perfect multiphoton (square-law Poissonian) detector capturing all of "photon 2". Since we can select k detectors in C[ND,k] ways (C[,] is the binomial coefficient), the probability of exactly k detectors triggering is D(k) = p1^k * p0^(ND-k) * C[ND,k], which is a binomial distribution with average number of triggers p1*ND. In the low-intensity limit (n->0) and for large ND (corresponding to perfect multiphoton resolution), D(k) becomes (using the Stirling approximation and p1*ND ~ m*ND = n) precisely the Poisson distribution P(n,k). Therefore, the low-intensity exclusivity which you keep bringing up is trivial, since it is precisely what independent triggering of each detector predicts, no matter how you divide and combine the detectors (it is, after all, the basic property of the Poissonian distribution).
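As a minimal numerical check of this bookkeeping (assuming ideal, independent, square-law detectors and a lossless splitting tree), the few lines below reproduce the 37% / 37% / 26% split for n=1, the roughly 9% / 90% / 0.5% split for n=0.1, and show the binomial D(k) over ND independent detectors collapsing onto the same Poissonian P(n,k):

```python
# Minimal check of the Poisson / binomial bookkeeping above, assuming ideal,
# independent, square-law detectors and a lossless 1 -> ND splitting tree.
from math import comb, exp, factorial

def poisson(n, k):
    # P(n,k) = n^k e^-n / k! : k triggers of one perfect multiphoton detector
    return n ** k * exp(-n) / factorial(k)

def D(n, ND, k):
    # exactly k of ND independent detectors fire, each seeing mean m = n/ND
    m = n / ND
    p1, p0 = 1 - exp(-m), exp(-m)
    return comb(ND, k) * p1 ** k * p0 ** (ND - k)

ND = 1024                                  # e.g. L = 10 levels of splitting
for n in (1.0, 0.1):
    single, none = poisson(n, 1), poisson(n, 0)
    multiple = 1 - single - none
    print(f"n = {n}: single {single:.3f}, none {none:.3f}, "
          f"multiple {multiple:.4f}, singles/multiples = {single / multiple:.1f}")
    for k in range(3):                     # binomial over ND detectors vs. Poissonian
        print(f"   k = {k}:  D(k) = {D(n, ND, k):.4f}   P(n,k) = {poisson(n, k):.4f}")
```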

The real question is how to deal with the apparent sub-Poissonian cases, such as PDC. That is where these kinds of trivial arguments don't help. One has to, as Marshall & Santos do, look at the specific output states and find the precise degree of the non-classicality (which they express, for convenience, in the Wigner function formalism). Their ZPF-based ("vacuum fluctuations" in the conventional treatment) model of detection and coincidence counting allows for a limited degree of non-classicality in the adjusted counts. Their PDC series of papers shows that for PDC sources all non-classicality is of this apparent type (the same holds for laser/coherent/Poissonian sources and chaotic/super-Poissonian sources).

Without the universal locality principle, you can only refute the specific overlooked effects of a particular claimed non-classicality setup. This does not mean that nature somehow conspires to thwart non-locality claims through obscure loopholes. It simply means that a particular experimental design has overlooked some effect, and the more obscure the effect, the more likely the experiment's designer is to have overlooked it.

In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.
 
Last edited:
  • #140
If you have it, maybe you can make it available here ; I don't have access to Foundations of Physics.

I have only a paper preprint and no scanner handy that could make a usable electronic copy of it. The Los Alamos archive has their more recent preprints. Their preprint "The myth of the Photon" also reviews the basic ideas and contains a citation to a Phys. Rev. version of that Found. Phys. paper. For an intro to Wigner functions (and the related pseudo-distributions, the Husimi and Glauber-Sudarshan functions) you can check http://web.utk.edu/~pasi/davidovich.pdf and a longer paper with more on their operational aspects.
 
Last edited:
