Young's Experiment: Exploring Wave-Particle Duality

  • Thread starter Cruithne
  • Start date
  • Tags
    Experiment
In summary: this thread discusses a phenomenon observed in Young's experiment that appears to contradict other accounts of the experiment. The mystery is that even when light is treated as individual particles (photons), it still behaves as if it were a wave. Additionally, the interference patterns produced were not the result of any observations.
  • #141
nightlight said:
You're trying to make the argument universal which it is not. It is merely addressing an overlooked effect for the particular non-classicality claim setup (which also includes particular type of source and nearly perfectly efficient polarizer and beam splitters).

I'm looking more into Santos and co.'s articles. It's a slow read, but I'm working my way up... so patience :-) BTW, thanks for the lecture notes, they look great!


You're saying that, at some undefined stage of triggering process of the detector DA, the dynamical evolution of the fragment B will stop, the fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will be resumed.

Not at all. My view (which I have expressed here already a few times in other threads) is quite different, and I don't really think you need a "collapse at a distance" at all - I'm in fact quite a fan of the decoherence program. You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation. This means that macroscopic systems can be in a superposition, but that's no problem, just continuing the unitary evolution (this is the essence of the decoherence program). But the point was not MY view :-)

Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states which superpose all photon number states, using for coefficient magnitudes the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use a sharp number state as the input, and thus show a sharp number state in the output).

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers. A single multiphoton capable detector with no dead time would show this same distribution of k for a given average rate n.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%. Thus, the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Thus the probability of multiple ND triggers is now only about 0.5%, i.e. we have roughly 19 times more single triggers than multiple triggers, while before, for n=1, we had only 37/26 ≈ 1.4 times more single triggers than multiple triggers. It appears we have greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, thus the counts will appear as "exclusive" as we wish. But does this kind of low intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave packet fragments on all ND-1 detectors as soon as the 1 detector triggers?

Of course not. Let's look at what happens under the assumption that each of the ND detectors triggers via its own Poissonian, entirely independently of the others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of the ND detectors will be P(m,k), where m=n/ND is the average trigger rate of each of the ND detectors. Let's denote by p0=P(m,k=0) the probability that one (specific) detector will not trigger. Thus p0=exp(-m). The probability that this particular detector will trigger at all (indicating 1 or more "photons") is then p1=1-p0=1-exp(-m). In your high "exclusivity" (i.e. low intensity) limit n->0, we will have m<<1 and p0~1-m, p1~m.

The probability that none of ND's will trigger, call it D(0), is thus D(0)=p0^ND=exp(-m*ND)=exp(-n), which is, as expected, the same as no-trigger probability of the single perfect multiphoton (square law Poissonian) detector capturing all of the "photon 2". Since we can select k detectors in C[ND,k] ways (C[] is a binomial coefficient), the probability of exactly k detectors triggering is D(k)=p1^k*p0^(ND-k)*C[ND,k], which is a binomial distribution with average number of triggers p1*ND. In the low intensity limit (n->0) and for large ND (corresponding to a perfect multiphoton resolution), D(k) becomes (using Stirling approximation and using p1*ND~m*ND=n) precisely the Poisson distribution P(n,k). Therefore, this low intensity exclusivity which you keep bringing up is trivial since it is precisely what the independent triggers of each detector predict no matter how you divide and combine the detectors (it is, after all, the basic property of the Poissonian distribution).
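As a quick numeric check of the counting argument above (a minimal sketch assuming only the stated Poissonian square-law model; the value of ND is illustrative and nothing here comes from an experiment):

```python
# Numeric check of the counting argument above, assuming the stated model:
# a square-law detector seeing mean rate n triggers k times with
# P(n,k) = n^k exp(-n)/k!, and each of the ND detectors behind the splitter
# tree fires independently with mean m = n/ND.
from math import exp, factorial, comb

def poisson(n, k):
    return n**k * exp(-n) / factorial(k)

def D(n, ND, k):
    """P(exactly k of ND independent detectors fire), each with mean m = n/ND."""
    p1 = 1.0 - exp(-n / ND)            # a given detector fires at least once
    return comb(ND, k) * p1**k * (1.0 - p1)**(ND - k)

for n in (1.0, 0.1):
    p0, p1 = poisson(n, 0), poisson(n, 1)
    multi = 1.0 - p0 - p1
    print(f"n={n}: P(k=0)={p0:.3f}  P(k=1)={p1:.3f}  P(k>1)={multi:.3f}  "
          f"single/multi = {p1/multi:.1f}")

# the ND independent detectors reproduce the same count distribution:
n, ND = 0.1, 2**10
for k in range(4):
    print(k, f"{D(n, ND, k):.4f}", f"{poisson(n, k):.4f}")
```

Already at ND = 2^10 the binomial D(k) and the single-detector P(n,k) agree to the printed precision, which is the triviality being pointed out.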

You're perfectly right, and I acknowledged that already a while ago when I said that there's indeed no way to distinguish "single photon" events that way. What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such an exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time, and with perfect detectors WILL be in one place at a time. It would correspond to the two 511 keV photons in positron annihilation, for example. I admit that my views are maybe a bit naive for quantum opticians: my background is in particle physics, and currently I work with thermal neutrons, which come nicely in low-intensity Poissonian streams after interference all the way down to the detection spot. So clicks are marbles :-)) There are of course differences with optics: first of all, out of a reactor rarely come correlated neutron pairs :-), but on the other hand, I have all the interference stuff (you can have amazing correlation lengths with neutrons!), and the one-click-one-particle detection (with 98% efficiency or more if you want), background ~ 1 click per hour.


This does not mean that the nature somehow conspires to thwart non-locality through some obscure loopholes. It simply means that a particular experimental design has overlooked some effect and that it is more likely that the experiment designer will overlook more obscure effects.

In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.

I agree with you here concerning the scientific attitude to adopt, and apart from being a stimulus for learning more quantum optics, it is the main motivation to continue this discussion :-) To me, these experiments don't exclude anything, but they confirm beautifully the quantum predictions. So it is very well possible that completely different theories will have similar predictions; it is "sufficient" to work them out. However, if I were to advise a student (but I won't, because it is not my job) on whether to take that path or not, I'd strongly advise against it, because there's so much work to do first: you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great. On the other hand, we have a beautifully working theory which explains most if not all of it. So it is probably more fruitful to go further on the successful path than to err "where no man has gone before". On the other hand, for a retired professor, why not play with these things :-) I myself wouldn't dare, for the moment: I hope to make more "standard" contributions and I'm perfectly happy with quantum theory as it stands now - even though I think it isn't the last word, and we will have another theory, 500 years from now. But I can make sense of it, it works great, and that's what matters. Which doesn't mean that I don't like challenges like the one you're proposing :-)


cheers,
Patrick.
 
Last edited:
  • #142
I might have misunderstood an argument you gave. After reading your text twice, I think we're not agreeing on something.

nightlight said:
Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Nope, I am assuming 100% efficient detectors. I don't really know what you mean by "Poissonian square law detectors" (I guess you mean some kind of bolometer which gives a Poissonian click rate as a function of incident energy). I'm within the framework of standard quantum theory and I assume "quantum theory" detectors. You can claim they don't exist, but that doesn't matter: I'm talking about a QUANTUM THEORY prediction. This prediction can be adapted to finite quantum efficiency, and assumes fair sampling. Again, I'm not talking about what really happens or not, I'm talking about standard quantum theory predictions, whether correct or not.

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2."

Well, the quantum prediction with 100% efficient detectors is 100% correlation, because there are EXACTLY as many photons, at the same moment (at least on the time scale of the window), in both beams. The photons can really be seen as marbles, in the same way as the two 511 keV photons from a positron annihilation can be seen, pairwise, in a tracking detector, or the triton and proton fragments can be seen when a He3 nucleus interacts with a neutron.

Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states which superpose all photon number states, using for coefficient magnitudes the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use a sharp number state as the input, and thus show a sharp number state in the output).

Yes, but this is the only Poissonian source, and we turn down the production rate of pairs (which form a Poissonian source with a much lower rate than the incident beam, which has to be rather intense). So the 2-photon states do indeed also come in a Poissonian superposition (namely the state |0,0>, the state |1,1>, the state |2,2> ...), where |n,m> indicates n blue and m red photons, but with coefficients which come from a much lower rate Poissonian distribution, which means that essentially only |0,0> and |1,1> contribute. So one can always take the low intensity limit and work with a single state.
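To put rough numbers on this (a sketch only: the Poissonian pair-number model is the one stated above, and the mean pair number mu per window is an assumed illustrative value, not a measured one):

```python
# With a Poissonian pair-number distribution of mean mu pairs per window,
# the |2,2> admixture is suppressed relative to |1,1> by a factor mu/2,
# so at low pumping only |0,0> and |1,1> matter.
from math import exp, factorial

def poisson(mu, k):
    return mu**k * exp(-mu) / factorial(k)

for mu in (0.1, 0.01):
    w0, w1, w2 = (poisson(mu, k) for k in range(3))
    print(f"mu={mu}: P(|0,0>)={w0:.4f}  P(|1,1>)={w1:.4f}  P(|2,2>)={w2:.6f}  "
          f"ratio |2,2>/|1,1> = {w2/w1:.3f}")
```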

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers.

This is correct, for a single-photon coherent beam, if we take A RANDOMLY SELECTED TIME INTERVAL. It is just a dead time calculation, in fact.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%.

No, this is not correct, because there is a 100% correlation between the photon-1 trigger and the sum of all the photon-2 clicks. THE TIME INTERVAL IS NOT RANDOM! You will have AT LEAST 1 click in one of the detectors (and maybe more, if we hit a |2,2> state).
So you have to scale up the above Poissonian probabilities by 1/(1 - P(k=0)) = e/(e-1), i.e. by a factor of about 1.6.

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Again, because of the trigger on detector 1, we do not have a random time interval, and we have to scale the probabilities up by 1/(1 - P(k=0)), here a factor of about 10. So the probability of seeing a single ND trigger is about 95%, and the probability of having more than 1 is about 5%. The case of no triggers is excluded by the perfect correlation.
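Worked out numerically (a sketch that stays inside the same Poissonian model; reading "the case of no triggers is excluded" as conditioning on at least one photon-2 count in the heralded window):

```python
# Conditioning the Poissonian on "at least one count in the window":
# excluding k=0 and renormalizing by 1/(1 - P(k=0)) gives roughly a factor
# 1.6 at n=1 and roughly a factor 10 at n=0.1.
from math import exp, factorial

def poisson(n, k):
    return n**k * exp(-n) / factorial(k)

for n in (1.0, 0.1):
    p0 = poisson(n, 0)
    scale = 1.0 / (1.0 - p0)                  # renormalization after excluding k=0
    single = poisson(n, 1) * scale            # exactly one trigger, given the herald
    multi = (1.0 - p0 - poisson(n, 1)) * scale  # two or more triggers, given the herald
    print(f"n={n}: scale-up = {scale:.2f}  P(single|herald) = {single:.2f}  "
          f"P(multi|herald) = {multi:.2f}")
```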


cheers,
Patrick.
 
  • #143
vanesch You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation.

And how did the world run and pick what to do before there was anyone to measure, so that they could interfere their results? I don't think the universe is being run by some kind of magnified Stalin, lording over the creation and every now and then erasing fallen comrades from the photos to make a different, more consistent history.

This means that macroscopic systems can be in a superposition, but that's no problem, just continuing the unitary evolution (this is the essence of the decoherence program).

Unless you suspend the fully local dynamical evolution (ignoring the non-relativistic approximate non-locality of Coulomb potential and such), you can't reach, at least not coherently, a conclusion of non-locality (a no-go for purely local dynamics).

The formal decoherence schemes have been around since at least the early 1960s. Without adding ad hoc, vague (no less so than the collapse) super-selection rules to pick a preferred basis, they still have no way of making a unitary evolution pick a particular result out of a superposition. And without the pick for each instance, it makes no sense to talk of statistics of many such picks. You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

It is just another try at coming up with new and improved mind-numbing verbiage, more mesmerizing and slippery than the old one which got worn out, to uphold the illusion of being in possession of a coherent theory for just a bit longer, until there is something truly coherent to take its place.

I am ashamed to admit it, but I was once taken in, for a couple of years or so, by one of these "decoherence" verbal shell games, Prigogine's version, and was blabbing senselessly about "superoperators" and "subdynamics" and "dissipative systems" and the "Friedrichs model" ... to any poor soul I could corner, gave a seminar, then a lecture to undergraduates as their QM TA (I hope it didn't take),...

What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such an exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time,

You seem to be talking about time modulation of the Poissonian P(n,k), where n=n(t). That does correlate the 1 and 2 trigger rates, but that kind of exclusivity is equally representative of fields and particles. In the context of QM/QED, where you already have a complete dynamics for the fields, such informal duality violates Occam's razor { classical particles can be simulated in all regards by classical fields (and vice versa); it is the dual QM kind that lacks coherence }.

you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great.

The explorers don't have to build roads, bridges and cities, they just discover new lands and if these are worthy, the rest will happen without any of their doing.

On the other hand, we have a beautifully working theory which explains most if not all of it.

If you read the Jaynes passage quoted a few messages back (or his other papers on the theme, or Barut's views, or Einstein's and Schroedinger's, even Dirac's, and those of some contemporary greats as well), "beautiful" isn't an attribute that goes anywhere in the vicinity of QED, in any role and under any excuse. Its chief power is in being able to wrap tightly around any experimental numbers which come along, thanks to a rubbery scheme which can as happily "explain" a phenomenon today as it will its exact opposite tomorrow (see Jaynes for the full argument). It is not a kind of power directed forward to the new unseen phenomena, the way Newton's or Maxwell's theories were. Rather, it is more like a scheme for post hoc rationalizations of whatever came along from the experimenters (as Jaynes put it -- the Ptolemaic epicycles of our age).

On the other hand, for a retired professor, why not play with these things :-)

I've met a few like that, too. There are other ways, though. Many physicists in the USA ended up, after graduate school or maybe one postdoc, on Wall Street or in the computer industry, or created their own companies (especially in software). They don't live by the publish-or-perish dictum and don't have to compromise any ideas or research paths to academic fashions and politicking. While they have less time, they have more creative freedom. If I were to bet, I'd say that the future physics will come precisely from these folks (e.g. Wolfram).

even though I think it isn't the last word, and we will have another theory, 500 years from now.

I'd say it's around the corner. Who would be going into physics if he believed otherwise? (Isn't there a little Einstein hiding in each of us?)
 
  • #144
nightlight said:
vanesch You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation.

And how did the world run and pick what to do before there was anyone to measure so they can interfere their results?

It just continued in unitary evolution. It started collapsing when I was born, it is collapsing all the time now that I'm living, and it will continue to run unitarily after I die. If ever I reincarnate it will start collapsing again. Nobody else can do the collapse but me, and nobody else is observing but me. How about that ? It is a view of the universe which is completely in sync with my egocentric attitudes. I never even dreamed of physics giving me a reason to be that way :smile:

You should look a bit more at the recent decoherence work by Zeh and Joos, for instance. Their work is quite impressive. I think you're mixing up the relative state view (which dates mostly from the sixties) with their work which dates from the nineties.

cheers,
Patrick.
 
  • #145
nightlight said:
You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

This is not true. If you take the probability of a series of 1000 flips together as one observation, in a decoherence-like way, then this probability is exactly equal to the probability you would get classically when considering each flip at a time and making up series of 1000, meaning the series with about 30% heads in them will have a relatively high probability as compared to series in which, say, you'll find 45% heads. It depends whether I personally observe each flip or whether I just look at the record of the result of 1000 flips. But the results are indistinguishable.
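A purely classical toy check of that bookkeeping point (the 30% bias and the 1000-flip record come from the example above; the 280-320 window is just an arbitrary illustrative band):

```python
# Whether you record each flip as it happens or only look at the whole
# 1000-flip record at the end, the statistics of "number of heads" are the
# same binomial distribution (p = 0.3 per flip, as in the 30% example).
import random
from math import comb

p, n_flips, n_records = 0.3, 1000, 5000
random.seed(0)

# flip-by-flip: sample every individual flip, then count heads per record
heads = [sum(random.random() < p for _ in range(n_flips)) for _ in range(n_records)]

# whole-record-at-once: the binomial probability of k heads in one record
def binom(k):
    return comb(n_flips, k) * p**k * (1 - p)**(n_flips - k)

print("simulated mean heads:", sum(heads) / n_records, " binomial mean:", n_flips * p)
sim = sum(280 <= h <= 320 for h in heads) / n_records
theory = sum(binom(k) for k in range(280, 321))
print(f"P(280 <= heads <= 320): simulated {sim:.3f} vs binomial {theory:.3f}")
```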

cheers,
Patrick.
 
  • #146
nightlight said:
I am ashamed to admit it, but I was once taken in, for a couple of years or so, by one of these "decoherence" verbal shell games, Prigogine's version, and was blabbing senselessly about "superoperators" and "subdynamics" and "dissipative systems" and the "Friedrichs model" ... to any poor soul I could corner, gave a seminar, then a lecture to undergraduates as their QM TA (I hope it didn't take),...
...
I'd say it's around the corner. Who would be going into physics if he believed otherwise? (Isn't there a little Einstein hiding in each of us?)

Like you're now babbling senselessly about Santos and Barut's views ? :-p :-p :-p
Ok, that one was easy, I admit. No, these discussions are really fun, so I should refrain from provoking namecalling games :rolleyes:
As I said, it makes me learn a lot of quantum optics, and you seem to know quite well what you're talking about.

cheers,
Patrick.
 
  • #147
nightlight said:
Its chief power is in being able to wrap tightly around any experimental numbers which come along, thanks to a rubbery scheme which can as happily "explain" a phenomenon today as it will its exact opposite tomorrow (see Jaynes for the full argument).

Yes, I read that, but I can't agree with it. When you read Weinberg, QFT is derived from 3 principles: special relativity, the superposition principle and the cluster decomposition principle. This completely fixes the QFT framework. The only things you plug in by hand are the gauge group and its representation (U(1)xSU(2)xSU(3)) and a Higgs potential, and out pops the standard model, complete with all its fields and particles, from classical EM, over beta decay, to nuclear structure (true, the QCD calculations in the low energy range are still messy, but lattice QCD is starting to give results).
I have to say that I find this impressive, that from a handful of parameters you can build up all of known physics (except gravity of course). That doesn't exclude the possibility that other ways exist, but you are quickly in awe of the monumental work such a task would involve.

cheers,
patrick.
 
  • #148
vanesch said:
Again, because of the trigger on detector 1, we do not have a random time interval, and we have to scale the probabilities up by 1/(1 - P(k=0)), here a factor of about 10. So the probability of seeing a single ND trigger is about 95%, and the probability of having more than 1 is about 5%. The case of no triggers is excluded by the perfect correlation.

I'd like to point out that a similar reasoning (different from the "Poissonian square law" radiation detectors) holds even for rather low photon detection efficiencies. If the efficiencies are, say, 20% (we take them all equal), then in our above scheme we will have a probability of exactly one detector triggering of about 0.2 x 0.95 = 19%, a probability of exactly two detectors triggering of about 0.2 x 0.2 x 0.05 = 0.2%, etc...
So indeed, there is some "Poisson-like" distribution due to the finite efficiencies, but it is a FIXED suppression of coincidences by factors of 0.2, 0.04...
At very low intensities, the statistical Poisson coincidences should be much lower than these fixed suppressions (which are the quantum theory way of saying "fair sampling"), so we'd still be able to discriminate between the anticoincidences due to the fact that each time there's only one marble in the pipe, and the anticoincidences that would arise merely from low efficiency if each detector were generating its Poisson series on its own.
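A toy Monte Carlo of that discrimination (a sketch only: the 20% efficiency is the figure used above, ND = 8 is an arbitrary choice, the |2,2> admixture is left out, and both models are tuned to approximately the same mean total click number):

```python
# Model A ("marble"): one heralded photon that can fire at most one of the ND
# detectors, detected with 20% efficiency. Model B: each detector fires on its
# own, independently of the others, with the same overall mean click number.
import random
from math import exp

random.seed(1)
ND, eta, trials = 8, 0.2, 200_000

def marble_trial():
    # one marble in the pipe: at most one click, seen with probability eta
    return 1 if random.random() < eta else 0

def independent_trial():
    # each detector fires independently with probability 1 - exp(-eta/ND),
    # i.e. the independent-trigger model discussed earlier in the thread
    p_click = 1.0 - exp(-eta / ND)
    return sum(random.random() < p_click for _ in range(ND))

for name, trial in (("marble", marble_trial), ("independent", independent_trial)):
    counts = [trial() for _ in range(trials)]
    print(f"{name:12s} P(1 click) = {sum(c == 1 for c in counts)/trials:.3f}   "
          f"P(2+ clicks) = {sum(c >= 2 for c in counts)/trials:.4f}")
```

The marble model never produces a double click, while the independent model produces them at the percent level for the same overall click rate; that gap is what the coincidence test looks for.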

A way to picture this is to use, instead of beam splitters, a setup which causes diffraction of the second photon, and a position-sensitive photomultiplier (which is just an array of independent photomultipliers) looking at the diffraction picture.
You will slowly build up the diffraction picture with the clicks synchronized to detector 1 (which looks at the first photon of the pair); of course, each time detector 1 clicks, you will only have a chance of 0.2 to find a click on the position-sensitive PM. But if the beam intensity is low enough, you will NOT find a second click on that PM of the order of 0.04 of the time. This is something that can only be achieved with a particle and is a prediction of QM.

cheers,
Patrick.
 
  • #149
vanesch You should look a bit more at the recent decoherence work by Zeh and Joos, for instance. Their work is quite impressive. I think you're mixing up the relative state view (which dates mostly from the sixties) with their work which dates from the nineties.

Zeh was already a QM guru pontificating on QM measurement when my QM professor was a student, and he still appears to be in a superposition of views. I looked up some of his & Joos' recent preprints on the theme. Irreversible macroscopic apparatus, 'coherence destroyed very rapidly', ... basically the same old stuff. Still the same problem as with the original Daneri, Loinger and Prosperi macroscopic decoherence scheme of 1962.

It is a simple question. You have |Psi> = a1 |A1> + a2 |A2> = b1 |B1> + b2 |B2>. These are two equivalent orthogonal expansions of the state Psi, for two observables [A] and [B] of some system (where the system may be a single particle, an apparatus with a particle, the rest of the building with the apparatus and the particle,...). On what basis does one declare that we have value A1 of [A] for a given individual instance (you need this to be able even to talk about statistics of a sequence of such values)?

At some strategically placed point within their mind-numbing verbiage these "decoherence" folks will start pretending that it was already established that a1|A1>+a2|A2> is the "true" expansion and A1 is its "true" result, in the sense that allows them to talk about statistical properties of a sequence of outcomes at all, i.e. in exactly the same sense in which, in order to say the word "baloney" has 7 letters, you have (tacitly at least) assumed that the word has a first letter, a second letter,... Yet these slippery folks will fight tooth and nail against such a conclusion, or even against the assumption that there is an individual word at all; yet this word-non-word still somehow manages to have exactly seven letters-non-letters. (The only innovation worth noting in the "new and improved" version is that they start by telling you right up front that they're not going to do, no sir, not ever, absolutely never, that kind of slippery maneuver.)

No thanks. I can't buy any of it. Consider that the only rationale precluding one from plainly saying that an individual system has the definite properties all along is Bell's "QM prediction" (which in turn cannot be deduced without assuming the non-dynamical collapse/projection postulate, the very collapse postulate which is needed to solve the "measurement" problem, i.e. the problem of the absence of definite properties in a superposition, thus a problem which still exists solely because of Bell's "QM prediction").

If you drop the non-dynamical collapse postulate, you don't have Bell's "prediction" (you would still have the genuine "predictions" of the kind that actually predict the data obtained, warts and all). There is no other result at that point preventing you from interpreting the wave function as a real matter field, evolving purely, without any interruptions, according to the dynamical equations (which happen to be nonlinear in the general coupled case) and thus representing the local "hidden" variables of the system. The Born rule would be reshuffled from its pedestal of a postulate to a footnote in scattering theory, the way and the place it got into QM. It is an approximate rule of thumb, its precise operational meaning depending ultimately on the apparatus design and the measuring and counting rules (as actually happens in any application, e.g. with the operational interpretations of the Glauber P vs Wigner vs Husimi phase space functions), just as it would have been if someone had introduced it into the classical EM theory of light scattering in the 19th century. The 2nd quantization is then only an approximation scheme for these coupled matter-EM fields, a linearization algorithm (similar to Kowalski's and virtually identical to the QFT algorithms used in solid state and other branches of physics), adding no more new physics to the coupled nonlinear fields than, say, the Runge-Kutta numeric algorithm adds to the fluid dynamics Navier-Stokes equations.

Interestingly, in one of his superposed eigenviews, master guru Zeh himself insists that the wave function is a regular matter field and definitely not a probability "amplitude" -- see his paper "There is no "first" quantization", where he characterizes the "1st quantization" as merely a transition from a particle model to a field model (the way I did several times in this thread; which is of course how Schroedinger, Barut, Jaynes, Marshall & Santos, and others have viewed it). Unfortunately, somewhere down the article, his |Everett> state superposes in, nudging very gently at first, but eventually overtaking his |Schroedinger> state. I hope he makes up his mind in the next forty years.
 
  • #150
nightlight said:
Unless you suspend the fully local dynamical evolution (ignoring the non-relativistic approximate non-locality of Coulomb potential and such), you can't reach, at least not coherently, a conclusion of non-locality (a no-go for purely local dynamics).

But that is exactly how locality is preserved in my way of viewing things! I pretend that there is NO collapse at a distance, but that the record of remote measurements remains in a superposition until I look at the result and compare it with the other record (also in a superposition). It is only the power of my mind that forces a LOCAL collapse (beware: it is sufficient that I look at you and you collapse :devil:)

The formal decoherence schemes have been around since at least the early 1960s. Without adding ad hoc, vague (no less so than the collapse) super-selection rules to pick a preferred basis, they still have no way of making a unitary evolution pick a particular result out of a superposition.

I know, but that is absolutely not the issue (and often people who do not know exactly what decoherence means make this statement). Zeh himself is very keen on pointing out that decoherence by itself doesn't solve the measurement problem! It only explains why - after a measurement - everything looks as if it were classical, by showing which is the PREFERRED BASIS to work in. Pure relative-state fans (Many Worlds fans, of which I was one until a few months ago) think that somehow they will, one day, get around this issue. I think it won't happen if you do not add in something else, and the something else I add in is that it is my consciousness that applies the Born rule. In fact this comes very close to the "many minds" interpretation, except that I prefer the single-mind interpretation :biggrin: exactly in order to be able to preserve locality. After all, I'm not aware of any other awareness except the one I'm aware of, namely mine :redface:.



And without the pick for each instance, it makes no sense to talk of statistics of many such picks. You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

I think you're misreading the decoherence program, which you seem to confuse with hardline many-worlders. The decoherence program tells you that whether you consider the collapse at each individual flip, or only at the level of the 1000-flip series, the result will be the same, because the non-diagonal terms in the density matrix vanish at a monstrously fast rate for any macroscopic system (UNLESS, of course, WE ARE DEALING WITH EPR-LIKE SITUATIONS!). But in order to be able even to talk about a density matrix, you need to assume the Born rule (in its modern version). So the knowledgeable proponents of decoherence are well aware that they'll never DERIVE the Born rule that way, because they USE it. They just show the equivalence between two different ways of using it.
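To give a feel for the rate claim (a toy model only: one two-state system imprinting itself on N environment qubits, each of which distinguishes the two pointer states only partially; the values of c and N are illustrative, not derived from any real environment):

```python
# Toy model of the off-diagonal suppression: for a two-state system whose two
# pointer states each imprint themselves on N environment qubits, with a
# per-qubit overlap c = <e0|e1>, the off-diagonal element of the reduced
# density matrix is suppressed by c**N, i.e. exponentially in N.
c = 0.99            # per-qubit overlap: each qubit barely tells the states apart
rho01 = 0.5         # initial coherence of an equal superposition
for N in (10, 100, 1_000, 10_000, 100_000):
    print(f"N = {N:6d} environment qubits -> |rho01| = {rho01 * c**N:.3e}")
```

Even an environment whose individual degrees of freedom barely discriminate the two states (c = 0.99) wipes out the coherence once N reaches a few thousand.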


cheers,
Patrick.
 
  • #151
nightlight said:
It is a simple question. You have |Psi> = a1 |A1> + a2 |A2> = b1 |B1> + b2 |B2>. These are two equivalent orthogonal expansions of the state Psi, for two observables [A] and [B] of some system (where the system may be a single particle, an apparatus with a particle, the rest of the building with the apparatus and the particle,...). On what basis does one declare that we have value A1 of [A] for a given individual instance (you need this to be able even to talk about statistics of a sequence of such values)?


What the decoherence program indicates is that once you're macroscopic enough, certain *coherent* states survive (by factorization) the coupling with their environment, while others get hopelessly mixed up and cannot factorize out. It is the interaction hamiltonian of the system with the environment that determines this set of preferred (sometimes called coherent) states. This is the preferred basis problem which is then solved, and is the essential result of decoherence. But again, a common misconception is that decoherence deduces the Born rule and the projection which is not the case.

A simple example is the position of a charged particle at macroscopic distances. A superposition of macroscopic position states will entangle very quickly (through the Coulomb interaction) with its environment, so states with macroscopically distinguishable positions for a charged particle will not be able to get factorized out. However, a localized position state (even though it doesn't have to be a Dirac pulse) will not be affected by this interaction.
So the "position" basis is preferred because it factors out.

There is no other result at that point preventing you from interpreting the wave function as a real matter field, evolving purely, without any interruptions, according to the dynamical equations (which happen to be nonlinear in the general coupled case) and thus representing the local "hidden" variables of the system.

Well, you still have the small problem of how a real matter field (take neutrons) always gives point-like observations. How do you explain the 100% efficient (or close) detection of spot-like neutron interactions from a neutron diffraction pattern that can measure 4 meters across (I'm working right now on such a project) ? And here, the flux is REALLY LOW, we're often at a count rate of a few counts per second, with a time resolution of 1 microsecond, a background of 1 per day, and a detection efficiency of 95%.

The 2nd quantization is then only an approximation scheme for these coupled matter-EM fields, a linearization algorithm (similar to Kowalski's and virtually identical to the QFT algorithms used in solid state and other branches of physics), adding no more new physics to the coupled nonlinear fields than, say, the Runge-Kutta numeric algorithm adds to the fluid dynamics Navier-Stokes equations.

I already said this a few times, but you ignored it. There is a big difference between 2nd quantization and not. It is given by the Feynman path integral. If you do not consider second quantization, you take the integral only over the *classical* solution (which is the solution of the non-linear field equations you are always talking about). If you do take into account second quantization, you INTEGRATE OVER ALL THE POSSIBLE NON-SOLUTIONS, with a weight factor given by exp(i (S - S0)/hbar), with S the action calculated for a particular non-solution, and S0 the action of your solution (the action from the Lagrangian that gives your non-linear coupled EM and Dirac field equations). So this must make a difference.

Interestingly, in one of his superposed eigenviews, master guru Zeh himself insists that the wave function is a regular matter field and definitely not a probability "amplitude" -- see his paper "There is no "first" quantization", where he characterizes the "1st quantization" as merely a transition from a particle model to a field model (the way I did several times in this thread; which is of course how Schroedinger, Barut, Jaynes, Marshall & Santos, and others have viewed it).

But OF COURSE. This is the way quantum field theory is done! The old quantum fields are replaced by REAL MATTER FIELDS, and then we apply quantization (which is called second quantization, but is in fact the first time we introduce quantum theory). So there's nothing exceptional in Zeh's statements. Any modern quantum field theory book treats the solutions of the Dirac equation on the same footing as the classical EM field. What is called "classical" in a quantum field book, is what you are proposing: namely the solution to the nonlinearly coupled Dirac and EM field equations

But you then need to QUANTIZE those fields in order to extract the appearance of particles. And yes, if you take this in the non-relativistic limit, you find back the Schroedinger picture (also with multiple particle superpositions and all that)... AFTER you introduced "second" quantization.

cheers,
Patrick.
 
  • #152
vanesch This is not true. If you take the probability of a series of 1000 flips together as one observation

And what if you don't take it just so? What happens in one flip? What dynamical equation is valid for that one instance? You can't hold that there is an instance of a thousand flips with definite statistics, or anything definite at all, while insisting there is no single flip within that instance with a definite result.

What if I define a kiloflip to consist of a sequence of a thousand regular flips (doing exactly as before), which I call one instance of a measurement of a variable that can take values 0 to 1000. And then I do 1000 kiloflips to obtain statistics of the values. Does each kiloflip have a definite result? Even an approximate value, say around 300? Or was that good only before I called what I was doing a kiloflip, and now, under the name kiloflip, you have to treat "a series of 1000 kiloflips" as one observation to be able to say...

It is plain nonsense.

I was simply asking what the dynamics (in the splitter setup) of wave fragment B is in one instance, as it reaches the detector (viewed as a physical system) and interacts with it. If the joint dynamics proceeds uninterrupted, it yields either a trigger or no trigger based solely on the precise local state of all the fields, regardless of what goes on in the interaction between the physical system Detector-A and the packet fragment A.

To put it even more clearly, imagine we are not "measuring" but simply want to compute the dynamical evolution of a combined system (A, B, splitter, DA, DB) in the manner of a fluid dynamics simulation. We have a system of PDEs, we put in some initial and boundary conditions, and we run a program to compute what is going on in one instance, under the assumed I&B conditions. In each run, the program follows the combined packet (with whatever time-slicing we set it to do) as it splits into A and B, follows the fragments as they enter the detectors, the first ionization, then the cascade, if these happen to occur in this run for the given I&B conditions (of the full system). As it makes multiple runs, the program also accumulates the statistics of coincidences.

Clearly, the A-B statistics computed this way will always be classical. The sharpest they can be, for any given intensity with expectation value n of photo-electrons emitted per try (on DA or DB), is the Poissonian distribution, which has variance (sigma squared) equal to n (if n varies from try to try, you get a compound Poissonian). This is also precisely the prediction of both the semiclassical and the QED model of this process.
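As a quick numeric illustration of that variance statement (a sketch; the mean n = 1 and the range over which n fluctuates are arbitrary choices):

```python
# Poissonian counts have variance equal to their mean n; if n itself varies
# from try to try, the resulting compound Poissonian has variance > mean.
import random
from math import exp

random.seed(2)
tries = 100_000

def poisson_draw(lam):
    # Knuth's method: count how many uniform draws it takes for their
    # running product to drop below exp(-lam)
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

fixed    = [poisson_draw(1.0) for _ in range(tries)]
compound = [poisson_draw(random.uniform(0.5, 1.5)) for _ in range(tries)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

for label, xs in (("fixed n = 1.0", fixed), ("fluctuating n", compound)):
    m, v = mean_var(xs)
    print(f"{label:14s} mean = {m:.3f}  variance = {v:.3f}")
```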

The point I am making is that if you claim that you are not going to suspend the program at any stage, but let it go through all the steps including counting for any desired number of runs (to collect statistics), you cannot get anything but the classical correlation in the counts.

Explain how you can claim that you will let the program run uninterrupted and that it will show anything other than classical, Poissonian (at best) statistics. We're not talking "measurement" or "postulates" but simply about the computation of the PDE problem, so don't introduce non sequiturs such as "perfect detector", or that to understand it I now need to imagine 1000 computers together as one computer,...

in a decoherence-like way

Oh, yeah, that's it. Silly me, how come I didn't think of so simple a solution. I see.

It depends whether I personally observe each flip or whether I just look at the record of the result of 1000 flips. But the results are indistinguishable.

I see, as a fallback, if the "decoherence-like way" fails to mesmerize, then we're all constructs of your mind, and your mind has constructed all these constructs to contain a belief construct of every other construct as just a construct of the last construct's construct of mind.
 
Last edited:
  • #153
vanesch What the decoherence program indicates is that once you're macroscopic enough, certain *coherent* states survive (by factorization) the coupling with their environment,

The "enviroment" is assumed in |Psi>. A subsystem can trivially follow a non-unitary evolution in the subsystem's factor during the interaction. If our |Psi> includes all that you ever plan to add in, including yourself (whose mind has apparently constructed all of us anyway; do always argue as much with the other constructs of your mind?), you're back exactly where you started -- two different decompositions of |Psi> and you need to specify which is the "true" one and how would a precise postulate be formulated out of such criteria (saying which variables/observables, when and how, gain or lose definitive values, so that you can make sense when talking about a sequence of such values, statistics and such).

It is the interaction hamiltonian of the system with the environment that determines this set of preferred (sometimes called coherent) states. This is the preferred basis problem which is then solved, and is the essential result of decoherence.

The combined hamiltonian basis was claimed as preferred in older approaches (as were other variants, such as integrals of motion, or Prigogine's "subdynamics", which represents a closed classical-like reduced/effective dynamics of the macroscopic apparatus whose variables can have definite values, a kind of emergent property). Any basis you claim as preferred will have the same problems with the non-commuting observables if you decide to allow definite values for the preferred basis. It would be as if you declared the Sz observable of particle A and/or B in the EPR-Bell model as the "preferred" one that has a definite value and claimed that this somehow solves the problem. The problems always re-emerge once you include the stuff you kept outside to help decohere the subsystem. The outer layer never decoheres.

Well, you still have the small problem of how a real matter field (take neutrons) always gives point-like observations. How do you explain the 100% efficient (or close) detection of spot-like neutron interactions from a neutron diffraction pattern that can measure 4 meters across (I'm working right now on such a project) ? And here, the flux is REALLY LOW, we're often at a count rate of a few counts per second, with a time resolution of 1 microsecond, a background of 1 per day, and a detection efficiency of 95%.

The localization problem for matter fields hasn't been solved (even though there are heuristic models indicating some plausible mechanisms, e.g. Jaynes and Barut had toy models of this kind). If your counts are Poissonian (or super-Poissonian) for the buildup of the high-visibility 4 m diffraction pattern, there should be no conceptual problem in conceiving of a purely local self-focusing or some kind of topological/structural unravelling mechanism which could at least superficially replicate such point-like detections along with the diffraction. After all, the standard theory doesn't have an explanation here other than saying that this is how it is.

There is a big difference between 2nd quantization and not. It is given by the Feynman path integral. If you do not consider second quantization, you take the integral only over the *classical* solution (which is the solution of the non-linear field equations you are always talking about). If you do take into account second quantization, you INTEGRATE OVER ALL THE POSSIBLE NON-SOLUTIONS, with a weight factor given by exp(i (S - S0)/hbar), with S the action calculated for a particular non-solution, and S0 the action of your solution (the action from the Lagrangian that gives your non-linear coupled EM and Dirac field equations). So this must make a difference.

You're not using the full system nonlinear dynamics (a la Barut's self-field) for QED in the path integral representation, i.e. the S0 used is not computed for Barut's full nonlinear solution but for the (iteratively) linearized approximations. The difference is even more transparent in the canonical quantization via Fock space, where it is obvious that you are forming the Fock space from the linear approximations (external fields/current approximation) of the nonlinear fields.

But OF COURSE. This is the way quantum field theory is done! The old quantum fields are replaced by REAL MATTER FIELDS, and then we apply quantization (which is called second quantization, but is in fact the first time we introduce quantum theory). So there's nothing exceptional in Zeh's statements.

Well, it is not quite so simple to weasel out. For the QM reasoning, such as Bell's QM prediction, you had used the same matter fields as the probability "amplitudes" (be it for the Dirac or for its approximation, the Schroedinger-Pauli particle) and insisted they are not local matter fields since they can non-dynamically collapse. How do you transition your logic from the "amplitude", and all that goes with it, to just a plain matter field right before you go on to quantize it as a plain classical system?

If it were a classical system all along, like the classical EM field, then the issue of collapse when the observer learns the result "in a decoherence-like way" or any other way, is plain nonsense. We never did the same kind of probability "amplitude" talk with the Maxwell field before quantizing it. It was just a plain classical field with no mystery, no collapse in a "decoherence-like way" or a jump-like way... There was no question of being able to deduce the no-go for LHVs by just using the dynamics of that field. Yet you somehow claim you can do that for the Dirac field in Stern-Gerlach, without having to stop or suspend its purely local dynamics (the Dirac equation). Does it again come back to your mind, in a decoherence-like way, to have it do the collapse of the superposed amplitudes?

Then suddenly, both fields are declared just plain classical fields (essentially equivalent except for slightly different equations), which we proceed to quantize. There is a dichotomy here (and it has nothing to do with the switch from the Schroedinger to the Dirac equation).

That is precisely the dichotomy I discussed a few messages back when responding to your charge of heresy for failing to show the officially required level of veneration and befuddlement with the Hilbert product space.

Any modern quantum field theory book treats the solutions of the Dirac equation on the same footing as the classical EM field.

Yes, but my Jackson ED textbook doesn't treat EM fields as my Messiah QM textbook treats Dirac or Schroedinger matter field. The difference in treatment is hugely disproportionate to just the difference implied by the different form of equations.

Note also that, so far there is no experimental data showing that the local dynamics of these fields has to be replaced by a non-local one. There is such data for the "fair sampling" type of local theories, but neither Dirac nor Maxwell fields are of that "fair sampling" type.

What is called "classical" in a quantum field book, is what you are proposing: namely the solution to the nonlinearly coupled Dirac and EM field equations

Not at all. What is called classical in a QED book is the Dirac field in the external EM field approximation and the EM field in the external current approximation. These are linear approximations of the non-linear fields. It is these linear approximations which are being (second) quantized (be it canonically or via path integrals), not the full non-linear equations. Only then, with the Fock space defined, is the interaction iteratively phased in via a perturbative expansion which is defined in terms of the quantized linear fields. The whole perturbative expansion phasing in the interaction, with all the rest of the (empirically tuned, ad hoc) rules on how to do everything just so in order to come out right, is what really defines QED, not the initial creation of the Fock space from the linearized "classical" fields.

That is exactly how Kowalski's explicit linearization via the Fock space formalism appears. In that case the Fock space formalism was a mere linear approximation (an infinite set of linear equations) to the original nonlinear system; no new effect could exist in the quantized formalism beyond what was already present in the classical nonlinear system. In particular, one would have no non-locality even for sharp boson number states of the quantized formalism (the usual trump card for the QM/QED non-locality advocates). Any contradiction or Bell-like no-go theorem for the classical model, should one somehow deduce any such for Kowalski's Fock space bosons, would simply mean a faulty approximation. (Not that anyone has ever deduced Bell's QM "prediction" without the instantaneous and interaction-free collapse of the remote state, just using the QED dynamics and any polarizer and detector properties & interactions with the quantized EM field.)

And yes, if you take this in the non-relativistic limit, you find back the Schroedinger picture (also with multiple particle superpositions and all that)... AFTER you introduced "second" quantization.

The non-relativistic limit, i.e. the Dirac equation to Schroedinger-Pauli equation limit, is irrelevant. The Bell EPR setup and conclusion for a spin-1/2 particle are exactly the same for the Dirac particle as for its Schroedinger-Pauli approximation.
 
Last edited:
  • #154
nightlight said:
The "enviroment" is assumed in |Psi>. A subsystem can trivially follow a non-unitary evolution in the subsystem's factor during the interaction. If our |Psi> includes all that you ever plan to add in, including yourself

It is the "including yourself" part that distinguishes the decoherence part from hardline Many Worlds and at that point, in the decoherence program, the projection postulate has to be applied. At that point, you have included so many many degrees of freedom that you WILL NOT OBSERVE, that you restrict your attention to the reduced density matrix, which has become essentially diagonal in the "preferred basis of the environment".

It would be as if you declared the Sz observable of particle A and/or B in the EPR-Bell model as the "preferred" one that has a definite value and claimed that this somehow solves the problem. The problems always re-emerge once you include the stuff you kept outside to help decohere the subsystem. The outer layer never decoheres.

Exactly. That's why the outer layer has to apply the projection postulate. And that outer layer is the conscious observer. Again, the decoherence program doesn't solve the measurement problem by replacing the projection postulate. It only indicates which observables of the subsystem are going to be observable for a while (they factor out from all the rest, which is not observed).

The localization problem for matter fields hasn't been solved (even though there are heuristic models indicating some plausible mechanisms, e.g. Jaynes and Barut had toy models of this kind). If your counts are Poissonian (or super-Poissonian) for the buildup of the high-visibility 4 m diffraction pattern, there should be no conceptual problem in conceiving of a purely local self-focusing or some kind of topological/structural unravelling mechanism which could at least superficially replicate such point-like detections along with the diffraction. After all, the standard theory doesn't have an explanation here other than saying that this is how it is.

I would be VERY SURPRISED if you could construct such a thing, because it is the main reason for the existence of quantum theory. Of course, once this is done, I'd be glad to consider it, but let us not forget that it is the principal reason to quantize fields in the first place!

You're not using the full system nonlinear dynamics (a la Barut's self-field) for QED in the path integral representation, i.e. the S0 used is not computed for Barut's full nonlinear solution but for the (iteratively) linearized approximations.

No, in PERTURBATIVE quantum field theory, both the nonlinear "classical dynamics" and the quantum effects (from the path integral) are approximated in a series development, simply because we don't know how to do otherwise in full generality, except for some toy models. But in non-perturbative approaches, the full non-linear dynamics is included (for example, solitons are a non-linear solution to the classical field problem).

The difference is even more transparent in the canonical quantization via Fock space, where it is obvious that you are forming the Fock space from the linear approximations (external fields/current approximation) of the nonlinear fields.

No, it is a series development. If you consider external fields, that's a semiclassical approach, and not full QFT.

Well, it is not quite so simple to weasel out. For the QM reasoning, such as Bell's QM prediction, you had used the same matter fields as the probability "amplitudes" (be it for the Dirac or for its approximation, the Schroedinger-Pauli particle) and insisted they are not local matter fields since they can non-dynamically collapse. How do you transition your logic from the "amplitude", and all that goes with it, to just a plain matter field right before you go on to quantize it as a plain classical system?

Because there is an equivalence between, on the one hand, the linear non-relativistic part of the classical field equation read as the quantum wave equation of a single particle and, on the other, the single-particle states of the second-quantized field. It only works if you are sure you are working with one, or a fixed number of, particles. This is a subtle issue about which others here know more than I do. I will have to look it up again in more detail.

If it were a classical system all along, like the classical EM field, then the issue of collapse when the observer learns the result "in a decoherence-like way" or any other way, is plain nonsense. We never did the same kind of probability "amplitude" talk with the Maxwell field before quantizing it. It was just plain classical field with no mystery, no collapse "decoherence-like way" or jump-like way... There was no question of being able to deduce the no-go for LHVs by just using the dynamics of that field.

No, you need a multiparticle wave function (because we're working with a 2-particle system), which is essentially non-local, OR you need a classical field which is second-quantized. The second approach is more general, but the first one was sufficient for the case at hand. If there's only ONE particle in the game, there is an equivalence between the classical field and the one-particle wave function: the Maxwell equations describe the one-photon situation (but not the two-photon situation).

Then suddenly, both fields are declared just plain classical fields (essentially equivalent except for slightly different equations), that we proceed to quantize.

It is because of the above-mentioned equivalence.


Yes, but my Jackson ED textbook doesn't treat EM fields the way my Messiah QM textbook treats the Dirac or Schroedinger matter field.

Messiah still works with "relativistic quantum mechanics" which is a confusing issue at best. In fact there exists no such thing. You can work with non-relativistic fixed-particle situations, non-relativistic quantum field situations or relativistic quantum field situations, but there's no such thing as relativistic fixed-particle quantum mechanics.

The difference in treatment is hugely disproportionate to just the difference implied by the different form of equations.

No, Jackson treats the classical field situation, and QFT (the modern approach) quantizes that classical field. The difference comes from the quantization, not from the field equation itself.

Not at all. What is called classical in a QED book is the Dirac field in the external EM field approximation and the EM field in the external current approximation. These are linear approximations of the non-linear fields.
It is these linear approximations which are being (second) quantized (be it canonically or via path integrals), not the full non-linear equations. Only then, with Fock space defined, the interaction is iteratively phased in via perturbative expansion which is defined in terms of the quantized linear fields. The whole perturbative expansion phasing in the interaction, with all the rest of the (empirically tuned ad hoc) rules on how to do everything just so in order to come out right, is what really defines the QED, not the initial creation of the Fock space from linearized "classical" fields.

We must be reading different books on quantum field theory :-)
Look at the general formulation of the path integral (a recent exposition is by Zee, but any modern book such as Peskin and Schroeder, or Hatfield will do).
The integral clearly contains the FULL nonlinear dynamics of the classical fields.


cheers,
patrick.
 
  • #155
I would like to add something here, because the discussion is taking a turn that is in my opinion wrongheaded. You're trying to attack "my view" of quantum theory (which is 90% standard and 10% personal: the "standard" part being the decoherence program, and the personal part the "single mind" part). But honestly that's not very interesting, because I don't take that view very seriously myself. To me it is a way, at the moment, with the current theories, to have an unambiguous view on things. Although it is very strange because no ontological view is presented - everything is epistemology (and you qualify it as nonsense), it is a consistent view that gives me "peace of mind" with the actual state of things. If one day another theory takes over, I'll think up something else. At the time of Poisson, there was a lot of philosophy on the mechanistic workings of the universe, an effort that was futile. In the same way, our current theories shouldn't give us such a fundamental view that gives a ground to do philosophy with, because they aren't the final word. After all, what counts is the agreement with experiment, and that's all there is to it. If we have two different theories with the same experimental success, of course the temptation is to go for the one that fits best with our mental preferences. In fact, the best choice is dictated by what allows us to expand the theory more easily (and correctly).
That's why I was interested in what you are telling, in that I wasn't aware that you could build a classical field theory that gives you equivalent results with quantum field theory for all experimentally verified results. Honestly I have a hard time believing it, but I can accept that you can go way further than what is usually said with such a model.

What is called second quantization (but which is in fact a first quantization of classical fields) has as its main aim to explain the wave-particle duality when you can have particle creation and annihilation. You are right that the Fock space is built up on the linear part of the classical field equations; however, that's not a linearization of the dynamics, but almost the DEFINITION of what the particles associated with the matter field are. Indeed, particles only have a well-defined meaning when they are propagating freely through space, without interaction (or where the interaction has been renormalized away). The fact that these states are used to span the Fock space should simply be seen as the classical analogue of doing, say, a Fourier transform on the fields you're working with. You simply assume that your solutions are going to be written in terms of sines and cosines, not that the solutions ARE sines and cosines. So that, by itself, is not a "linearisation of the dynamics", it is just a choice of basis, in a way. But we're not that free in the choice: at the end of the day we observe particles, so we observe in that basis.

The only way that is known to find the particles associated with fields (at least to me, which doesn't exclude others of course) is the trick with the creation and annihilation operators of a harmonic oscillator. That's why I said that I would be VERY SURPRISED INDEED if you can crank out a non-linear classical field theory (even including ZPF noise or whatever) that gives us nice particle-like solutions with the correct mass and energy-momentum relationship, which then propagate nicely throughout space, but which act for the rest like what quantum theory predicts (at least in those cases where it has been tested).
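To make that harmonic-oscillator trick concrete, here is a minimal numerical sketch (just an illustration in a truncated number basis; the dimension N and every number in it are arbitrary assumptions, not anything from the experiments discussed):

```python
import numpy as np

N = 8                                        # truncated Fock-space dimension (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator

number_op = ad @ a                           # the "particle number" observable
comm = a @ ad - ad @ a                       # canonical commutator [a, a+]

print(np.round(np.diag(number_op), 12))      # 0, 1, 2, ... : the particle-number ladder
# [a, a+] = 1 holds exactly away from the truncation edge
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))
```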

That's why I talked about gammas and then about neutrons. Indeed, once you consider that in one way or another you're only going to observe lumps of fields when you've integrated enough "field" locally, and that you're then going to generate a Poisson series of clicks, you can do a lot with a "classical field", and it will be indistinguishable from any quantum prediction as long as you have independent single-particle situations. The classical field then looks a lot like a single-particle wave function. But to explain WHY you integrate "neutron-ness" until you have a full neutron, and then click Poisson-like, looks to me like quite a challenging task if no mechanism is built into the theory that makes you have steps of neutrons in the first place. And as I said, the only way I know how to do that is through the method of "second quantization"; and that supposes linear superposition of particle states (such as EPR-like states).

cheers,
patrick.
 
  • #156
vanesch Nope, I am assuming 100% efficient detectors.

If you recall that 85% QE detector cooled to 6K, check the QE calculation -- it subtracts the dark current, which ran at exactly 20,000 triggers/sec, the same as the incident photon count. Thus your 100% efficient detector is no good for anticorrelations if half the triggers might be vacuum noise. You can look at their curves and see if you get a more suitable QE-to-noise tradeoff (they were looking only to max out the QE).

Unfortunately, your comments in this message show you are off again on an unrelated tangent, ignoring the context you're disputing, and I expect gamma photons to be trotted out any moment now.

The context was -- I am trying to pinpoint exactly the place where you depart from the dynamical evolution in the PDC beam splitter case in order to arrive at the non-classical anticorrelation. Especially since you're claiming that the dynamical evolution is not being suspended at any point. My aim was to show that this position is self-contradictory -- you cannot obtain a different result from the classical one without suspending the dynamics and declaring collapse.

In order to analyse this, the monotonous mantras of "strict QM", "perfect 100% detectors"... are much too shallow and vacuous to yield anything. We need to look at the "detectors" DA and DB as physical systems, subject to QM/QED (as anything else) in order to see whether they could do what you say they could without violating logic or any agreed upon facts (theoretical or empirical) and without bringing in your mind/consciousness into the equations, the decoherence-like way or otherwise.

To this end we are following a wave packet (in coordinate representation) of PDC "photon 2" and we are using "photon 1" as an indicator that there is a "photon 2" heading to our splitter. This part of the correlation between 1 and 2 is purely classical amplitude-based correlation, i.e. a trigger of D1 at time t1 indicates that the incident intensity of "photon 2", I2(t), will be sharply increased within some time window T starting at t1 (with an offset from t1, depending on path lengths).

The only implication of this amplitude correlation used here is that we have a time window for DB, defined via the "photon 1" trigger as [t1, t1+T]. During this window the incident intensity of "photon 2" can be considered some constant I, or, in terms of the average photon counts on DA/DB, denoted as n. The constancy assumption within the window simplifies the discussion and it is favorable to the stronger anticorrelation anyway (since a variable rate would result in a compound Poissonian, which has greater variance for any given expectation value). This is again just a minor point.

I don't really know what you mean with "Poissonian square law detectors".

I mean the photon detectors for the type of photons we're discussing (PDC, atomic cascades, laser,...). The "square law" refers to the trigger probability in a given time window [t,t+T] being proportional to the incident EM energy in that time window. The Poissonian means that the ejected photo-electron count has a Poissonian distribution, i.e. the probability of k electrons being ejected in a given time interval is P(n,k)=n^k exp(-n)/k!, where n = average number of photoelectrons ejected in that time interval. In our case, the time window is the short coincidence window (defined via the PDC "photon 1" trigger) and we assume n=constant in this window. As indicated earlier, n(t) varies with time t; it is low generally, except for the sharp peaks within the windows defined by the triggers of the "photon 1" detector.

Note that we might assume an idealized multiphoton detector which amplifies optimally all photo-electrons, thus for the case of k electron ejections we would then have exactly k triggers. Alternatively, as suggested earlier, we can divide the "photon 2" beam via the L-level binary tree of beam splitters, obtaining ND=2^L reduced beams on which we place simpler "1-photon" detectors (which only indicate yes/no). These simple 1-photon detectors would receive intensity I1=I/ND and thus have the photo-electron Poissonian with n1=n/ND. But since their output doesn't differentiate k=1 from k=2,3,... we would have to count 1 when they trigger at all, 0 if they don't trigger. For "large" enough ND (see earlier msg for discussion) the single ideal multiphoton detector is equivalent to the array of ND 1-photon detectors.
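As a quick sanity check of that equivalence, here is a small Monte Carlo sketch under the Poissonian square-law assumption described above (the mean n, the number of splitting levels L and the trial count are arbitrary illustrative choices):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

n = 2.0                  # mean photoelectron number per gated window (assumed)
L = 10                   # levels of beam splitting
ND = 2 ** L              # number of yes/no detectors in the final layer
trials = 200_000

# Ideal multiphoton detector: k photoelectrons per window, k ~ Poisson(n).
k_multi = rng.poisson(n, size=trials)

# Splitter-tree array: each detector sees mean n/ND and reports 1 if it
# ejects at least one photoelectron; we count how many detectors fired.
p_fire = 1.0 - exp(-n / ND)
k_array = rng.binomial(ND, p_fire, size=trials)

print(" k   multi   array   P(n,k)")
for k in range(6):
    print(f"{k:2d}  {np.mean(k_multi == k):.4f}  {np.mean(k_array == k):.4f}"
          f"  {n ** k * exp(-n) / factorial(k):.4f}")
```

For large ND the binomial count of fired detectors approaches the same Poissonian P(n,k) as the single ideal multiphoton detector, which is the sense in which the two arrangements are interchangeable here.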

For brevity, I'll assume the multiphoton detector (optimal amplification and photon number resolution). The rest of your comments indicate some confusion on what precisely this P(n,k) means, what does it depend on and how does it apply to our case. Let's try clearing that a bit.

There are two primary references which analyze photodetection from the ground up: [1] is semiclassical and [2] is a full QED derivation. Mandel & Wolf's textbook [3] has one chapter for each approach, with more readable coverage and nearly full detail of the derivations. Papers [4] and [5] discuss in more depth the relation between the two approaches, especially the operational meaning of the normal ordering convention and of the resulting Glauber correlation functions (the usual Quantum Optics correlations).

Both approaches yield exactly the same conclusion, the square law property of photo-ionization and the Poisson distribution (super-Poisson for mixed or varying fields within the detection window) of the ejected photo-electrons. They all also derive detector counts (proportional to electron counts/currents) for single and multiple detectors in arbitrary fields and any positions and time windows (for correlations).

The essential aspect of the derivations relevant here is that the general time-dependent photo-electron count distribution (photo-current) of each detector depends exclusively on the local field incident on that detector. There is absolutely no interruption of the purely local continuous dynamics, and the photo-ejections depend solely on the local fields/QED amplitudes, which also never collapse or deviate in any way from the local interaction dynamics. (Note: the count variance is due to averaging over the detector states => the identical incident field in multiple tries repeats at best only up to a Poissonian distribution.)

The QED derivations also show explicitly how the vacuum contribution to the count yields 0 for the normal operator ordering convention. That immediately provides operational meaning of the quantum correlations as vacuum-normalized counts, i.e. to match the Glauber "correlation" function, all background effects need to be subtracted from the obtained counts.

The complete content of the multi-point correlations is entirely contained in the purely classical correlations of the instantaneous local intensities, which throughout follow purely local evolution. There are two main "quantum mysteries" often brought up here:

a) The "mysterious" quantum amplitude superposition (which the popular/pedagogical accounts make a huge ballyhoo out of) for multiple sources is a simple local EM field superposition (even in the QED treatment) which yields the local intensity (which, naturally, is not a sum of the non-superposed intensities). The "mystery" here is simply due to "explaining" to the student that he must imagine particles (for which the counts would add up) instead of fields which superpose into the net amplitude, which then gets squared to get the count (resulting in the "mysterious" interference, the non-additivity of counts for separate fields, duh).

b) Negative probabilities, anti-correlations, sub-Poissonian light -- The multi-detector coincidence counts are computed (in QED and semiclassical derivations) by constructing a product of individual instantaneous counts -- a perfectly classical expression (same kind as Bell's LHV). These positive counts are then expressed via local intensities (operators in QED), which are still fully positive definite. It is at this point that the QED treatment introduces the normal ordering convention (which simplifies the integrals by canceling out the vacuum-sourced terms in each detector's count integrals; see [4],[5]), redefining thus the observable whose expectation value is being computed, while retaining the classical coincidence terminology they started with, resulting in much confusion and bewilderment (in popular and "pedagogical" retelling, where to harmonize their "understanding" with the facts they had to invent "ideal" detectors endowed with the magical power of reproducing G via plain count correlations).

The resulting Glauber "correlation" function G(X1,X2,...) (the standard QO "correlation" function) is not the correlation of the counts at X1,X2,... but a correlation-like expression extracted (through the vacuum terms removal via [a+],[a] operator reordering) from the expectation value of the observable corresponding to the count correlations (which, naturally, shows no negative counts or non-classical statistics).

-----
1. L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations," Proc. Phys. Soc. 84 (1964), pp. 435-444 (reproduced also in P.L. Knight's 'Concepts of Quantum Optics').

2. P.L. Kelley, W.H. Kleiner, "Theory of Electromagnetic Field Measurement and Photoelectron Counting," Phys. Rev. 136 (1964), pp. A316-A334.

3. L. Mandel, E. Wolf, "Optical Coherence and Quantum Optics," Cambridge Univ. Press, 1995.

4. L. Mandel, "Physical Significance of Operators in Quantum Optics," Phys. Rev. 136 (1964), pp. B1221-B1224.

5. C.L. Mehta, E.C.G. Sudarshan, "Relation between Quantum and Semiclassical Description of Optical Coherence," Phys. Rev. 138 (1965), pp. B274-B280.
 
  • #157
vanesch I would like to add something here, because the discussion is taking a turn that is in my opinion wrongheaded. You're trying to attack "my view" of quantum theory (which is 90% standard and 10% personal: the "standard" part being the decoherence program, and the personal part the "single mind" part). But honestly that's not very interesting because I don't take that view very seriously myself.

Agree on this, that's a waste of time. I would also hate to have to defend the standard QM interpretation. Even arguing from this side, against its slippery language is like mud-wrestling a lawyer. It never goes anywhere.
 
  • #158
nightlight said:
To this end we are following a wave packet (in coordinate representation) of PDC "photon 2" and we are using "photon 1" as an indicator that there is a "photon 2" heading to our splitter. This part of the correlation between 1 and 2 is purely classical amplitude-based correlation, i.e. a trigger of D1 at time t1 indicates that the incident intensity of "photon 2", I2(t), will be sharply increased within some time window T starting at t1 (with an offset from t1, depending on path lengths).

I know that that is how YOU picture things. But it is not the case in the quantum description, where the two photons are genuine particles.

I don't really know what you mean with "Poissonian square law detectors".

I mean the photon detectors for the type of photons we're discussing (PDC, atomic cascades, laser,...). The "square law" refers to the trigger probability in a given time window [t,t+T] being proportional to the incident EM energy in that time window.

Again, that's your view of a photon detector, and it is not the view of quantum theory proper. So I used that distinction to point out a potential difference in predictions. A quantum photon detector detects a marble, or not. A finite quantum efficiency photon detector has QE chance of seeing the marble when it hits, and (1-QE) chance of not seeing it. But if the marble isn't there, it doesn't see it. This was exactly what I tried to point out in the anti-coincidence series. If the detectors are "square law" then they have a different behaviour than if they are "marble or no marble", so such an experiment can discriminate between both.

The Poissonian means that the ejected photo-electron count has a Poissonian distribution, i.e. the probability of k electrons being ejected in a given time interval is P(n,k)=n^k exp(-n)/k!, where n = average number of photoelectrons ejected in that time interval. In our case, the time window is the short coincidence window (defined via the PDC "photon 1" trigger) and we assume n=constant in this window. As indicated earlier, n(t) varies with time t; it is low generally, except for the sharp peaks within the windows defined by the triggers of the "photon 1" detector. For simplicity you can assume n(t)=0 outside the window and n(t)=n within the window (we're not putting numbers in yet, so hold on to your |1,1>, |2,2>...).

Exactly, that's a "square law" detector, and not a quantum photon detector. But statistically you cannot distinguish the two if you have no trigger, so the semiclassical description works in that case for both. However, this is NOT the case for a 2-photon state, because then, BEFORE TAKING INTO ACCOUNT THE PHOTOELECTRON distribution, you have somehow to decide whether the first photon was there or not. If it is not there, nothing will happen, and if it is there, you draw from the photoelectron distribution, which gives you a finite probability of having detected the photon or not.

Both approaches yield exactly the same conclusion, the square law property of photo-ionization and the Poisson distribution (super-Poisson for mixed or varying fields within the detection window) of the ejected photo-electrons. They all also derive detector counts (proportional to electron counts/currents) for single and multiple detectors in arbitrary fields and any positions and time windows (for correlations).

I don't disagree with the Poissonian distribution of the photoelectrons of course, IN THE CASE A PHOTON DID HIT. The big difference is that you consider streams of 1/N photons (intensity peaks which are 1/N of the original, pre-split peak) which give rise to INDEPENDENT detection probabilities at each detector, while I claim that pure quantum theory predicts individually the same rates, with the same distributions, but MOREOVER ANTICORRELATED, in that the click (the indivisible photon) can be only at one place at a time. This discriminates between both approaches, and is perfectly realizable with finite-efficiency detectors.
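To put the two pictures side by side numerically, here is a rough sketch (the mean count n and the efficiency are arbitrary assumptions, and the "marble" branch is just the simplest all-or-nothing caricature of the quantum photon, not a full QED calculation) of the gated singles and coincidence rates behind a 50/50 splitter in each model:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 1_000_000
n = 0.2      # mean photoelectron number per gated window before the splitter (assumed)
eta = 1.0    # detector efficiency in the "marble" model (assumed ideal)

# (i) Independent square-law Poissonian detectors, each side seeing mean n/2.
kA = rng.poisson(n / 2, size=trials)
kB = rng.poisson(n / 2, size=trials)
singles_i = np.mean(kA >= 1), np.mean(kB >= 1)
coinc_i = np.mean((kA >= 1) & (kB >= 1))

# (ii) "Marble" model: one indivisible photon per gate, sent to A or B at random,
#      seen with probability eta; by construction it never fires both sides.
side = rng.random(trials) < 0.5
seen = rng.random(trials) < eta
singles_ii = np.mean(side & seen), np.mean(~side & seen)
coinc_ii = 0.0

print(singles_i, coinc_i, singles_i[0] * singles_i[1])  # coincidences factorize
print(singles_ii, coinc_ii)
```

The quantity that separates the two is the ratio of measured coincidences to the product of the singles rates: it stays near 1 for the independent square-law model and collapses toward 0 for the indivisible-photon model.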

The essential aspect of the derivations relevant here is that the general time dependent photo-electron count (photo-current) of each detector depends exclusively on the local field incident on that detector.

Not in 2-photon states. They are not independent, in quantum theory. I'm pretty sure about that, but I'll try to back up my statements with other people's opinions in how QUANTUM THEORY works (how nature works is a different issue which can only be decided by experiment).

cheers,
Patrick.
 
  • #159
vanesch I know that that is how YOU picture things. But it is not the case in the quantum description, where the two photons are genuine particles.

A one-photon state spans infinite space and time and would require an infinite detector and infinite time to detect. Any finite-time, finite-space field is necessarily a superposition of multiple photon number states (also called Fock states). The "photon" is not what photodetectors count, despite the suggestive name and the pedagogical/popular literature mixups. That's just jargon, a shorthand. They count photo-electrons. The theory of detectors shows what these counts are and how the QED (or classical) field amplitudes relate to the photodetector counts. Of course, the integrated absolute squares of the field amplitudes are proportional to the expectation value of the "photon number operator" [n], so one can loosely relate the average "photon number" <[n]> to the photodetector counts, but this is a relation between the average of an observable [n] and a (Poisson) distribution P(ne,k), which is a much too soft connection for the kind of tight enumerative reasoning required in the analysis of the anti-correlations or of the Bell inequality tests.

There is nothing corresponding operationally to the "photon detector". And to speak of coincidence correlations of |1,1> at places X1,X2 at times t1 and t2 is vacuous to the square.

What photodetector counts are, is quite relevant if you wish to interpret what the Quantum Optics experiments are showing or even to state what the QED predictions are, at all.

Ignoring the toy "prediction" based on simple reasoning with |1,1>, |2,2> and such, the QED prediction for coincidences is given precisely by the Glauber's correlation functions (which includes their operational interpretation, the usual QO vacuum effects subtractions).

You can't predict any coincidences with |1,1> since |1,1> represents plane waves of infinite duration and infinite extent (they are expansion basis sensitive, too). All you can "predict" with such mnemonic devices is what the real coincidence prediction given via G() might roughly look like. Coincidence count is meaningless for |1,1>, it is operationally empty.

You did manage to miss the main point (which in retrospect appears a bit too fine a point), though, which is that what is shown in the papers cited (and explicitly stated in [4]) is that the "Quantum Optics correlation" G(X1,X2,..) is not the (expectation value of the) observable representing the correlation of coincidence counts. The observable which does correspond to the correlation of coincidence counts (the pre-normal-ordered correlation function) yields perfectly local (Bell's LHV type) predictions, no negative counts, no sub-Poissonian statistics, no anti-correlations. That is the result of both the QED and the semiclassical treatment of multi-detector coincidences.

Debating whether there is a "photon" and whether it is hiding inside the amplitude somehow (since the formalism doesn't have a counterpart for it), is like debating the count of dancing angels on a head of a pin. All you can say is that fields have such and such amplitudes (which are measurable for any given setup, e.g. via tomographic methods based on Wigner functions). The theory of detection then tells you how such amplitudes relate to counts (of photo-electrons) produced by the photo-detector.

While it is perfectly fine to imagine a "photon" somewhere inside the amplitude (if such a mnemonic helps you), you can't call the counts produced by the photodetectors the counts of your mnemonic devices (without risking a major confusion). That simply is not what those counts are, as the references cited show clearly and in great detail. After you clear up this bit, you're not done, since there is the next layer of confusion to get through, which is the common mixup between the Glauber G() "correlation" function and the detector count correlations, which [4] and [5] should help clear up. And after that, you will come to still finer layers of mixups which Marshall and Santos address in their PDC papers.
 
  • #160
nightlight said:
The observable which does correspond to the correlation of coincidence counts (the pre-normal ordered correlation function) yields prefectly local (Bell's LHV type) predictions, no negative counts, no sub-Poissonian statistics, no anti-correlations. That is the result of both QED and semiclassical treatment of multi-detector coincidences.

My Mandel and Wolf is in the mail, but I checked the copy in our library. You are quite wrong concerning the standard quantum predictions. On p706, section 14.5, it is indicated that the anticoincidence in a 1-photon state is perfect, as calculated from the full quantum formalism.
On p 1079, section 22.4.3, it is indicated that PDC, using one photon of the pair as a trigger, gives you a state which is an extremely close approximation to a 1-photon state. So stop saying that what I told you are not the predictions of standard quantum theory.

I know YOUR predictions are different, but standard quantum mechanical predictions indicate a perfect anticorrelation in the hits. You should be happy about that, because it indicates an experimentally verifiable claim of your approach, and not an unverifiable claim such as "you will never be able to build a photon detector with 92% efficiency and negligible dark current".

cheers,
Patrick.

PS: I can see the confusion if you apply equation 14.3-5 in all generality. It has been deduced in the case of COHERENT states only, which have a classical description.
 
  • #161
while I claim that pure quantum theory predicts individually the same rates, with the same distributions, but MOREOVER ANTICORRELATED, in that the click (the indivisible photon) can be only at one place at a time. This discriminates between both approaches, and is perfectly realizable with finite-efficiency detectors.

That is not a QED prediction, but a shorthand for the prediction, which needs a grain of salt before use. If you check the detector foundations in the references given, you will find out that even preparing absolutely identical field amplitudes in each try, you will still get (at best) the Poisson distribution of photoelectrons in each try, thus the Poissonian for the detector's count (equivalent to the tree-split count of activated detectors). The reason for the non-repeatability of the exact count is the averaging over the states of the detector. The only thing which remains the same for the absolutely identical preparation in each try is the Poissonian average n=<k>.

Therefore the beam splitter, which can only approximate the absolutely identical packets in the two paths, will also yield in each try the independent Poissonian counts on each side, with only the Poissonian average n=<k> being the same on the two sides.

The finer effect that PDC (and some anticorrelation experiments) specifically add in this context is the apparent excess of the anti-correlation, the apparent sub-Poissonian behavior on the coincidence data processed by the usual Quantum Optics prescription (the Glauber's prescription of vacuum effects removal, corresponding to the normal ordering). Here you need to recheck Marshall & Santos PDC and detector papers for full details and derivations, I will only sketch the answer here.

First one needs to recall that the Glauber correlation (mapped to the experimental counts via the usual vacuum-effects subtractions) is not the (expected value of the) observable corresponding to the count correlation. It is obtained by modifying the observable for the count correlations through normal ordering of the operators (to, in effect, remove the vacuum-generated photons, the vacuum fluctuations).

As pointed out in [4], these vacuum-term subtractions from the true count correlation observable do not merely make G(X1,X2,..) depart from the true correlation observable; if one were to rewrite G() in the form of a regular correlation function of some abstract "G counts", one would need to use negative numbers for these abstract "G counts" for some conceivable density operators [rho] (G is the expectation value of Glauber's observable [G] over the density operator [rho], i.e. G=Tr([rho] [G])). There is never a question, of course, of the regular correlation observable [C] requiring negative counts for any [rho] -- it is defined as a proper correlation function observable of the photo-electron counts (as predicted by the detection theory), which are always non-negative.

These abstract "G counts" lead to exactly the same kind of "paradoxes" (if confused with the counts) that the superposition mystery presents (item (a) a couple of messages back). Namely, in the "superposition mystery" your counts are always (proportional to) C=|A|^2, where A is the total EM field amplitude. When you have two sources yielding in separate experiments amplitudes A1 and A2 and the corresponding counts C1=|A1|^2 and C2=|A2|^2, then if you have both sources turned on, the net amplitude is A=A1+A2 and the net count is C=|A1+A2|^2, which is generally different from C1+C2=|A1|^2+|A2|^2.

If one wants to rewrite C formally as a sum of two abstract counts G1 and G2, one can say C=G1+G2, but one should not confuse C1 and C2 with the abstract counts G1 and G2. In the case of negative interference you could have A1=-A2, so that C=0. If one knows C1 and also misattributes G1=C1, G2=C2, then one would need to set G2=-G1, a negative abstract count. Indeed, the negative interference is precisely the aspect the students will be most puzzled by.
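For what it's worth, that bookkeeping fits in a few lines of code (the amplitudes are arbitrary illustrative values):

```python
A1 = 1.0 + 0.5j            # illustrative amplitude from source 1 (assumed value)
A2 = -A1                   # fully destructive case discussed above

C1, C2 = abs(A1) ** 2, abs(A2) ** 2   # counts with each source alone
C = abs(A1 + A2) ** 2                 # count with both sources on

# Writing C formally as G1 + G2 while misattributing G1 = C1 forces G2 negative:
G1 = C1
G2 = C - G1
print(C, C1 + C2, G2)                 # 0.0  2.5  -1.25
```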

It turns out the PDC generates [rho] states which require <[G]> to use negative "abstract G counts" if one were to write it as a proper correlation function correlating these abstract "G counts". The physical reason (in Marshall-Santos jargon) it does so is that the PDC generation uses two vacuum modes (turning them into down-converted photons under the phase-matching conditions), thus the vacuum fluctuation noise alone (without these down-converted photons) is actually smaller in the PDC photon detection region. Therefore the conventional QO vacuum-effects subtractions from the observed counts, aiming to reconstruct Glauber's <[G]> and implicitly its abstract "G counts", oversubtract here, since the remaining vacuum fluctuation effects are smaller in the space region traveling with the PDC photons (they are modified vacuum modes, where the modification is done by the nonlinear crystal via absorption and re-emission of the matching vacuum modes).
 
  • #162
vanesch My Mandel and Wolf is in the mail, but I checked the copy in our library. You are quite wrong concerning the standard quantum predictions. On p706, section 14.5, it is indicated that the anticoincidence in a 1-photon state is perfect, as calculated from the full quantum formalism.

The textbook doesn't dwell on the fine points of the distinction between the Glauber pseudo-correlation function <[G]> and the true count coincidence correlation observable <[C]>, but uses the conventional Quantum Optics shorthand (which tacitly includes vacuum effects subtractions by standard QO procedures, to reconstruct <[G]> from the obtained count correlation <[C]>).

This same Mandel, of course, wrote about these finer distinctions in ref [4].

Although the Mandel-Wolf textbook is better than most, it is still a textbook, for students just learning the material, and it uses a didactic toy "prediction" there, not a scientific prediction one would find in a paper as something experimenters ought to test against. Find me a real paper in QO which predicts (seriously) such anticorrelation for the plane wave, then goes on to measure coincidences on them with an infinite detector in infinite time. It is a toy derivation, for students, not a scientific prediction that you can go out and measure and hope to get what you "predicted".

On p 1079, section 22.4.3, it is indicated that PDC, using one photon of the pair as a trigger, gives you a state which is an extremely close approximation to a 1-photon state. So stop saying that what I told you are not the predictions of standard quantum theory.

Well, the "extremely close" could be, say, about half a photon short of a single photon (generally it will be at best as close as the Poissonian distribution allows, which has a variance <[n]>).

Again, you're taking didactic material, toy models, without a sufficient grain of salt. I cited Mandel & Wolf as a more readable overview of photodetection theory than the master references in the original papers. It is not a scientific paper, though (not that those are The Word from Above, either). You can't take everything it says at face value and in the most literal way. It is a very nice and useful book (much nicer for a physicist than Yariv's QO textbook), if one reads it maturely and factors out the unavoidable didactic presentation effects.

I know YOUR predictions are different, but standard quantum mechanical predictions indicate a perfect anticorrelation in the hits. You should be happy about that, because it indicates an experimentally verifiable claim of your approach, and not an unverifiable claim such as "you will never be able to build a photon detector with 92% efficiency and negligible dark current".

I was describing what the standard detection theory says, what the counts are, and what the difference is between <[G]> and <[C]>. You keep bringing up the same didactic toy models and claims, taken in the most literal way.

The shorthand "correlation" function for <[G]> in Quantum Optics is just that. You need a grain of salt to translate what it means in terms of detector counts (it is the standard QO reconstruction of <[G]> from the measurement of the count correlation observable <[C]>). It is not a correlation function of any counts -- [G] is not an observable which measures correlations in photo-electron counts. It is an observable whose value is reconstructed from the measurement of the count correlation observable [C] (see [4] for the distinctions).

The [G] is defined by taking [C] and commuting all field amplitude operators A+ to the left and all A to the right. The two, [C] and [G] are different observables. <[C]> is measured directly as the correlation among the counts (the measured photocurrents approximate the "ideal" photo-electron counts assumed in [C]). Then the <[G]> is reconstructed from <[C]> through the standard QO subtractions procedures.

As a rough shorthand one can think of <[G]> as a correlation function. But [G] is not a correlation observable of anything that can be counted (that observable is [C]). So one has to watch how far one goes, with such rough shorthand, otherwise one may end up wondering why the "counts" which correlate as <[G]> says they ought to, come out negative. They don't. There are no such "G counts", since [G] isn't a genuine correlation observable, thus <[G]> is not a genuine correlation function. You can, of course, rewrite it purely formally as a correlation of some abstract G_counts, but there is no direct operational mapping of these abstract G_counts to anything that can be counted. In contrast, for the observable [C], which is defined as a proper correlation observable of the photo-electron counts -- the kind of counts you can assign operational meaning to (i.e. map to something which can be counted: the photo-electrons, or approximately their amplified currents) -- the correlation function <[C]> has no such problems as the puzzling non-classical correlations or the mysterious negative counts.
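As a single-mode toy illustration of the ordering distinction (this is not the multi-detector Glauber function itself, just a sketch in a truncated Fock basis with an assumed coherent state), one can check numerically that the plain intensity product and its normally ordered counterpart differ by exactly the commutator term that the normal ordering drops:

```python
import numpy as np
from math import factorial

N = 40                                   # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, N)), 1) # annihilation operator in the number basis
ad = a.conj().T

alpha = 1.3                              # illustrative coherent-state amplitude
psi = np.array([alpha ** k / np.sqrt(factorial(k)) for k in range(N)])
psi = psi / np.linalg.norm(psi)          # normalize (truncation makes this safe)

def ev(op):                              # expectation value in the state psi
    return float(np.real(psi.conj() @ op @ psi))

I_op = ad @ a                            # intensity (photon-number) operator
plain = ev(I_op @ I_op)                  # <(a+ a)(a+ a)> : count-like ordering
normal = ev(ad @ ad @ a @ a)             # <a+ a+ a a>    : Glauber-style ordering

print(plain, normal, plain - normal, ev(I_op))   # the difference equals <a+ a>
```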
 
  • #163
vanesch Look at the general formulation of the path integral (a recent exposition is by Zee, but any modern book such as Peskin and Schroeder, or Hatfield will do).
The integral clearly contains the FULL nonlinear dynamics of the classical fields.


You may be overinterpreting the formal path integral "solutions" here. The path integral computes the Green functions of the regular QFT (the linear evolution model in Fock space with its basis from the linearized classical theory), and not (explicitly) the nonlinear classical fields solution. The multipoint Green function is a solution only in the sense of approximating the nonlinear solution via the multipoint collisions (of quasiparticles; or what you call particles such as "photons", "phonons" etc) while the propagation between the collisions is still via the linear approximations of the nonlinear fields, the free field propagators. The collisions merely re-attach the approximate free (linearized) propagators back to the nonlinear template, or as I labeled it earlier, it is a piecewise linear approximation (for a funny tidbit on this, check L.S. Schulman "Techniques and applications of Path Integration" Wiley 1981; pp 39-41, the Appendix for the WKB chapter, where a bit too explicit re-attachment to the classical solution is labeled "an embarrassment to the purist"). If you already have the exact solutions of the classical nonlinear fields problem, you don't need the QFT propagators of any order at all.

For example, the Barut's replication of QED radiative corrections in his nonlinear fields model purely as the classical solution effects, makes redundant the computation of these same effects via the QED expansion and propagators (to say nothing of making a whole lot more sense). You can compute the QFT propagators if you absolutely want to do so (e.g. to check the error behavior of the perturbative linearized approximation). But you don't need them to find out any new physics that is missing in the exact solutions of the nonlinear classical fields that you already have.

Note also that path integrals and the Feynman diagrams with their 'quasiparticle' heuristic/mnemonic imagery are pretty standard approximation tool for many nonlinear PDE systems outside of the QFT, and even outside of physics (cf. R.D. Mattuck "A Guide to Feynman Diagrams in the Many-Body Problem" Dover 1976; or various math books on nonlinear PDE systems; Kowalski's is just one general treatment of that kind where he emphasized the linearization angle and the connection to the Backlund transformation and inverse scattering methods, which is why I brought him up earlier).
 
  • #164
nightlight said:
As a rough shorthand one can think of <[G]> as a correlation function. But [G] is not a correlation observable of anything that can be counted (that observable is [C]). So one has to watch how far one goes, with such rough shorthand, otherwise one may end up wondering why the "counts" which correlate as <[G]> says they ought to, come out negative.

My opinion is that you make an intrinsically simple situation hopelessly complicated. BTW, |<psi2| O | psi1>|^2 and |<psi2| :O: |psi1>|^2 are both absolute values squared of complex numbers, so are both positive definite. Hence the normal ordering cannot render G negative.

Nevertheless, this stimulated me to have another look at quantum field theory, which I thought was locked up in particle physics. But I think I'll bail out of this discussion for a while until I'm better armed to tear your arguments to pieces :devil: :devil:

cheers,
Patrick.
 
  • #165
nightlight said:
The multipoint Green function is a solution only in the sense of approximating the nonlinear solution via the multipoint collisions (of quasiparticles; or what you call particles such as "photons", "phonons" etc) while the propagation between the collisions is still via the linear approximations of the nonlinear fields, the free field propagators.

No, the multipoint Green function IS the path integral with the full nonlinear solution and all the neighboring non-solutions. It is its series development in the interaction constants (the Feynman graphs) that you seem to be talking about, not the path integral itself.

But I will tell you a little secret which will make you famous if you listen carefully. You know, in QCD, the difficulty is that the series development in the coupling constant doesn't work well at "low" energies (however, it works quite well in the high energy limit). Now, the stupid lot of second quantization physicists are doing extremely complicated things in order to try to get a glimpse of what might happen at low energies. They don't realize that it is sufficient to solve a classical non-linear problem. If they would realize this, they would be able to simplify the calculations enormously, because you could then apply finite-element techniques. That would then allow them to calculate nuclear structure without any difficulty; compared to what they try to do right away, it would be easy. So, hint hint, solve the classical non-linear problem of the fields, say, for 3 up quarks and 3 down quarks and you'll find... deuterium ! :smile:

cheers,
Patrick.
 
  • #166
vanesch No, the multipoint Green function IS the path integral with the full nonlinear solution and all the neighboring non-solutions. It is its series development in the interaction constants (the Feynman graphs) that you seem to be talking about, not the path integral itself.

Formally the path integral with the full S contains implicitly the full solution of the nonlinear PDE system, just as the symbolic sum from 0 to infinity of a Taylor series is formally the exact function. But each specific multipoint G is only a scattering-matrix-type piecewise linearization of the exact nonlinear problem. The multipoint Green function approximates the propagation between the collisions using the free field propagators (which are only the solutions of the linearized approximation of the nonlinear equation, not solutions of the exact nonlinear equation), while the interaction part, the full nonlinear system, is "turned on" only at a finite number of points (and in their infinitesimal vicinity), as in scattering theory (this is all explicit in the S matrix formulations of QED).

That is roughly analogous to each x^n term of a Taylor series approximating some function f(x) with the next higher polynomial, where the full function f(x) is "turned on" only at the point x=0 and in its "infinitesimal" vicinity to obtain the needed derivatives.

So, hint hint, solve the classical non-linear problem of the fields, say, for 3 up quarks and 3 down quarks and you'll find... deuterium !

Before deciding that solving the general equations of interacting nonlinear QCD fields exactly is a trivial matter, why don't you try a toy problem of 3 simple point particles with simple Newtonian gravitational interaction and given masses m1, m2, m3 and positions r1, r2, r3. It is an infinitely simpler problem than the QCD nonlinear fields problem, so maybe, say in ten minutes, you could come back and give us the exact three-body solution in closed form in the parameters as given? Try it, it ought to be trivial. I'll check back to see the solution.
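To be clear about which half of that challenge is the hard one: numerically integrating the three-body system is routine (a throwaway sketch below, with arbitrary illustrative masses and initial conditions in G=1 units); it is the closed-form solution in the given parameters that nobody can produce.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = np.array([1.0, 1.0, 1.0])            # illustrative masses (assumed), G = 1 units

def rhs(t, y):
    r = y[:9].reshape(3, 3)              # positions of the three bodies
    v = y[9:].reshape(3, 3)              # velocities
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

# Arbitrary illustrative initial conditions.
r0 = np.array([[1.0, 0.0, 0.0], [-0.5, 0.8, 0.0], [-0.5, -0.8, 0.0]])
v0 = np.array([[0.0, 0.4, 0.0], [-0.35, -0.2, 0.0], [0.35, -0.2, 0.0]])

sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([r0.ravel(), v0.ravel()]), rtol=1e-9)
print(sol.y[:3, -1])                     # final position of body 1: numbers, not a formula
```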

Note also that even the solutions to the much simpler nonlinear coupled Maxwell-Dirac equations in 3+1 or even reduced dimensions haven't budged an inch (in terms of getting exact solutions) despite decades of pretty good mathematicians banging their heads against them. That's why physicists invented the QED expansion, to at least get some numbers out of it. Compare Barut's QED calculations to the conventional QED calculations of the same numbers and see which is easier and why it wasn't done via the nonlinear equations. The nonlinear field approach is only a conceptual simplification, not a computational one. The computational simplification was the QED.
 
  • #167
vanesch My opinion is that you make an intrinsically simple situation hopelessly complicated. BTW, |<psi2| O | psi1>|^2 and |<psi2| :O: |psi1>|^2 are both absolute values squared of complex numbers, so are both positive definite. Hence the normal ordering cannot render G negative.

It is not <[G]> that is being made negative (just as in the superposition example (a), it wasn't C that was made negative). What I said is that if you take <[G]> and re-express it formally as if it were an actual correlation function of some fictitious/abstract "G_counts" (i.e. re-express it in the form of a sum or an integral of the products GN1*GN2... over a sequence of time intervals/coincidence windows), it is these fictitious G_counts, the GN's, that may have to become negative if they were to reproduce <[G]> (which in turn is reconstructed from the measured <[C]>, the real correlation function) for some density operators (such as PDC). This is not an inherent problem of the definition of <[G]> or its practical use (since nothing is counted as GN's, they're merely formal variables without any operational meaning), but it is a common mixup in the operational interpretation of Glauber/QO correlations (which confuses the genuine correlation observable [C] with the Glauber pseudo-correlation observable [G]).

This is the same type of negative probability on abstract counts as in the interference example (a) where, if you were to express the combined-source count formally as a sum of fictitious counts, i.e. C=G1+G2, then some of these fictitious counts may have to be negative (see the earlier msg) to reproduce the measured C. C itself is not negative, though.

But I think I'll bail out of this discussion for a while ...

I am hitting a busy patch in my day job, too. It was fun and it helped me clear up a few bits for myself. Thanks for the lively discussion, and to all the other participants as well.
 
  • #168
vanesch No, the multipoint Green function IS the path integral with the full nonlinear solution and all the neighboring non-solutions. It is its series development in the interaction constants (the Feynman graphs) that you seem to be talking about, not the path integral itself.

But I will tell you a little secret which will make you famous if you listen carefully...


Your mixup here may be between the "full nonlinear solution" of the Lagrangian in the S of the path integral (which is a single classical particle dynamics problem, a nonlinear ODE) and the full nonlinear solution of the classical fields (a nonlinear PDE). These two are quite different kinds of "classical" nonlinear equations (ODE vs PDE), and they're only formally related via the substitution of the particle's momentum p (a plain function in the particle ODE) with the partial derivative iD/Dx for the field PDE equations. That's the only way I can see any kind of rationale (erroneous as it may be) behind your comments above (and the related earlier comments).
 
  • #169
How can we be sure that we send only one electron, and that the diffraction then makes the interference pattern?
 
  • #170
nightlight said:
vanesch
Your mixup here may be between the "full nonlinear solution" of the Lagrangian in the S of the path integral (which is a single classical particle dynamics problem, a nonlinear ODE) and the full nonlinear solution of the classical fields (a nonlinear PDE).

And I was not going to respond... But I can't let this one pass :-p

The QED path integral is given by:
(there might be small errors, I'm typing this without any reference, from the top of my head)

Lagrangian density:

L = -1/4 F_uv F^uv + psi-bar (i gamma^mu D_mu - m) psi

(the mass term enters linearly, as m, in this way of writing it)

with D_mu the covariant derivative: D_mu = d_mu - i q A_mu

Clearly, this is the Lagrangian density which gives you the coupled Maxwell-Dirac equations if you work out the Euler-Lagrange equations. Note that these are the non-linear PDE you are talking about, no?
(indeed, the coupling is present through the term A in D_mu)

the action is defined:

S[A_mu,psi] = Integral over spacetime (4 coordinates) of L, given a solution (or non-solution) of fields A_mu and psi.

S is extremal if A_mu and psi are the solutions to your non-linear PDE.

Path integral for an n-point correlation function (I take an example: a 4-point function, which is taken to be an electron field, a positron field and two photon fields, for instance corresponding to pair annihilation; but also Compton scattering - however, it doesn't matter what it represents in QED; it is just the quantum amplitude corresponding to a product of 4 fields)

<0| psibar[x1] psi[x2] A[x3] A[x4] |0> = Path integral over all possible field configurations of A_mu and psi of {Exp[ i / hbar S] psibar[x1] psi[x2] A[x3] A[x4]}

For the classical solution, the path integral reduces to one single field configuration, namely the classical solution psi_cl[x] and A_cl[x], and we find of course that the 4-point correlation function is then nothing else but
psibar_cl[x1] psi_cl[x2] A_cl[x3] A_cl[x4] (times a phase factor exp(iS0), with S0 the action when you fill in the classical solution). This classical solution is the one that makes S extremal, and hence for which psi_cl and A_cl obey the non-linear PDE which I think you propose.
But the path integral also includes all the non-solutions to the classical problem, with their phase weight, and it is THAT PROBLEM which is so terribly hard to solve and for which not much is known except series development, given by Feynman diagrams. If it were only a matter of finding the classical solution to the non-linear PDE, it would be peanuts to solve :biggrin:.

Now, there is one thing I didn't mention, and that is that because of the fermionic nature of the Dirac field, its field values aren't taken to be 4-tuples of complex numbers at each spacetime point, but are taken to be anticommuting Grassmann variables; as such, indeed, the PDE is different from the PDE where the Dirac equation takes its solutions as complex fields. You can do away with that, and then you'd have some kind of bosonic QED (for which the quantum case falls on its face due to the spin-statistics theorem, but for which you can always find classical solutions).

But it should now be clear that the non-linear PDE that you are always talking about IS FULLY TAKEN INTO ACCOUNT as a special case in the path integral.

cheers,
patrick.

PS: I could also include nasty remarks such as not to confuse pedagogical introductions to the path integral with the professional use by particle physicists, but I won't :devil: :smile:


EDIT: there were some errors in my formulas above. The most important one is that the correlation is not given by the path integral alone: we also have to divide by the path integral of exp(iS) without extra factors (that takes out the factor exp(iS0) I talked about).
The second one is that one has a path integral over psi, and an independent one over psi-bar as if it were an independent variable. Of course, for the classical solution it doesn't change anything.
Finally, x1, x2, x3 and x4 have to be in time order. Otherwise, we can say that we are taking the correlation of the time-ordered product.
 
  • #171
Wouldn't it be easier to think of the electron in Young's experiment as a 'speedboat' passing through the vacuum, but one that also generates a wave in the same way as a boat does on water?
What would the result be if you used a ripple tank and water to recreate Young's experiment? And how would we interpret it?
 
  • #172
Ian said:
Wouldn't it be easier to think of the electron in Young's experiment as a 'speedboat' passing through the vacuum, but one that also generates a wave in the same way as a boat does on water?
What would the result be if you used a ripple tank and water to recreate Young's experiment? And how would we interpret it?

We know what happens when an electron (or any charged particle) generates a wake. The double-slit result is NOT due to a wake.

Zz.
 
  • #173
vanesch said:
But it should now be clear that the non-linear PDE that you are always talking about IS FULLY TAKEN INTO ACCOUNT as a special case in the path integral.

I would like to add that the classical solution often has the main contribution to the path integral, because the classical solution makes S stationary, which means that up to second order, all the neighboring field configurations (which are non-solutions, but are "almost" solutions) have almost the same S value and hence the same phase factor exp(iS). They add up constructively in the path integral as such. If the fields are far from the classical solution, their neighbours will have S values which change in first order, and hence they have different phase factors, and tend to cancel out.
So for certain cases, it can be that the classical solution gives you a very good approximation (or even the exact result, I don't know) to the full quantum problem, especially if you limit yourself to a series development in the coupling constant. It can then be that the first few terms give you identical results. It is in that light that I see Barut's results.
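A one-variable caricature of that stationary-phase argument (an assumed toy "action" S(x) = (x-1)^2 whose stationary point stands in for the classical solution; nothing field-theoretic about it):

```python
import numpy as np

# Sum exp(i S(x)/hbar) over a grid of "configurations"; as hbar shrinks,
# only the neighbourhood of the stationary point x = 1 keeps contributing.
def osc_sum(hbar, lo, hi, num=400_000):
    x, dx = np.linspace(lo, hi, num, retstep=True)
    return np.sum(np.exp(1j * (x - 1.0) ** 2 / hbar)) * dx

for hbar in (1.0, 0.1, 0.01):
    full = osc_sum(hbar, -20.0, 20.0)
    near = osc_sum(hbar, 1.0 - 5.0 * np.sqrt(hbar), 1.0 + 5.0 * np.sqrt(hbar))
    print(hbar, abs(full), abs(near), np.sqrt(np.pi * hbar))   # all three nearly agree
```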

cheers,
Patrick.
 
  • #174
nightlight said:
vanesch My Mandel and Wolf is in the mail, but I checked the copy in our library. You are quite wrong concerning the standard quantum predictions. On p706, section 14.5, it is indicated that the anticoincidence in a 1-photon state is perfect, as calculated from the full quantum formalism.

The textbook doesn't dwell on the fine points of the distinction between the Glauber pseudo-correlation function <[G]> and the true count coincidence correlation observable <[C]>, but uses the conventional Quantum Optics shorthand (which tacitly includes vacuum effects subtractions by standard QO procedures, to reconstruct <[G]> from the obtained count correlation <[C]>).

Well, it seems that pedagogical textbook student knowledge is closer to experimental reality than true professional scientific publications then :smile: :smile:

Ok, I asked the question on sci.physics.research and the answer was clear, not only about the quantum prediction (there IS STRICT ANTICORRELATION), but also about its experimental verification. Look up the thread "photodetector coincidence or anticoincidence" on s.p.r.

cheers,
Patrick.
 
  • #175
vanesch
<0| psibar[x1] psi[x2] A[x3] A[x4] |0> = Path integral over all possible field configurations of A_mu and psi of {Exp[ i / hbar S] psibar[x1] psi[x2] A[x3] A[x4]}

Thanks for clearing up what you meant. I use the term "path integral" when the sum is over paths and "functional integral" when it is over field configurations. Feynman's original QED formulation was in terms of "path integrals", and no new physics is added to it by re-expressing it in an alternative formalism, such as "functional integrals" (just as the path integral formulation doesn't add any new physics to the canonically quantized QED or to the S matrix formalism).

Therefore the state evolution in Fock space generated by the obtained multipoint Green functions is still a piecewise linearized evolution (the linear sections are generated by the Fock space H), with the full interaction being turned on only within the infinitesimal scattering regions. If you have a nice physical picture, though, of what goes on here in terms of your favorite formalism, I wouldn't mind learning something new.
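
As a side note on the terminology, here is a rough sketch of what a "sum over paths" looks like operationally, under toy assumptions (a single non-relativistic degree of freedom, a discretized Euclidean action for a harmonic oscillator with mass, frequency and hbar set to 1, sampled by a Metropolis walk; the lattice size, spacing and step width are arbitrary illustrative choices):

```python
import numpy as np

# Illustrative sketch: Metropolis sampling of discretized imaginary-time paths x(tau)
# for a 1D harmonic oscillator, weighted by exp(-S_E).  Not specific to QED.

rng = np.random.default_rng(0)
N, a, n_sweeps, step = 64, 0.5, 4000, 0.7
x = np.zeros(N)                      # one discretized periodic path x_0 ... x_{N-1}

def dS(x, i, new):
    """Change in the Euclidean action when site i is moved to `new`."""
    old = x[i]
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    kin = lambda v: ((right - v) ** 2 + (v - left) ** 2) / (2 * a)
    pot = lambda v: a * 0.5 * v ** 2
    return (kin(new) + pot(new)) - (kin(old) + pot(old))

x2_samples = []
for sweep in range(n_sweeps):
    for i in range(N):
        new = x[i] + rng.uniform(-step, step)
        if rng.random() < np.exp(-dS(x, i, new)):   # Metropolis accept/reject
            x[i] = new
    if sweep > 500:                  # crude thermalization cut
        x2_samples.append(np.mean(x ** 2))

print(np.mean(x2_samples))           # ~ <x^2> of the ground state (0.5, up to lattice artefacts)
```

In QED the analogous construction runs over field configurations rather than single-particle paths, which is the distinction drawn above.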

It is important to distinguish here that even though the 2nd quantization by itself adds no new physics beyond what was already present in the nonlinear fields (it is a general-purpose linearization algorithm used in various forms in many other areas, an empty mathematical shell like Kowalski's Hilbert space linearization or, as Jaynes put it, like Ptolemaic epicycles), new physics that wasn't in the initial nonlinear fields model can be (and likely is being) added within the techniques for working out the details of the scattering (in S-matrix imagery), since in QED those techniques were tweaked over the years to fit the experimental data. Obviously, this new physics, distilled from the experiments and absorbed into the computational rules of QED, is by necessity in a form shaped by the 2nd quantization formalism it was fitted into. The mathematical unsoundness and logical incoherence of the overall scheme as it evolved only added to its flexibility, letting it wrap around whatever experimental data turned up.

That is why it is likely that Barut's self-fields are not the whole story, the most notable missing pieces being the charge quantization and the electron localization (not that QED has an answer beyond 'that's how it is' and the hocus-pocus that makes the infinities go away). In principle, had the nonlinear classical fields been the starting formalism instead of the 2nd quantization (in fact they were the starting formalism and the most natural way to look at the Maxwell-Schroedinger/Dirac equations, and that's how Schroedinger understood it from day one), all the empirical facts and new physics accumulated over the decades would have been incorporated into it and would have taken a form appropriate to that formalism, e.g. some additional interaction terms or new fields in the nonlinear equations. But this kind of minor tweaking is not where the physics will go; it's been done and that way can go only so far. My view is that Wolfram's NKS points in the direction of the next physics, recalling especially the http://pm1.bu.edu/~tt/publ.html cellular automata modelling of physics (note that his http://kh.bu.edu/qcl/ is wrong on his home page; see for example http://kh.bu.edu/qcl/pdf/toffolit199064697e1d.pdf) and Garnet Ord's interesting models (which reproduce the key Maxwell, Dirac and Schroedinger equations as purely enumerative and combinatorial properties of plain random walks, without using an imaginary time/diffusion constant; these can also be re-expressed in the richer modelling medium of cellular automata and predator-prey eco networks, where they're much more interesting and capable). Or another little curiosity, again from a mathematician, Kevin Brown, who shows the relativistic velocity-addition formula as a simple enumerative property of set unions and intersections.

But it should now be clear that the non-linear PDE that you are always talking about IS FULLY TAKEN INTO ACCOUNT as a special case in the path integral.

The "fully taken into account" is in the same sense that Taylor or Fourier series cofficients "fully take into account" the function the series is trying to approximate. In either situation, you have to take fully into account, explictly or implicitly, that which you are trying to approximate. How else could the algorithm know it isn't approximating something else. In the functional integrals formulation of QED, this tautological trait is simply more manifest than in the path integrals or the canonical quantization. There is thus nothing nontrivial about this "fully taking into account" tautological phenomenon you keep bringing up.

If this more explicit manifestation of the full nonlinear dynamics does anything for the argument, it only emphasizes my view, by pointing more clearly to what it is that the algorithm is ultimately trying to get at, what the "go" of it is. And there it is.

But the path integral also includes all the non-solutions to the classical problem, with their phase weight,

If you approximate a parabola with a Fourier series, you will be including (adding) a lot of functions which are not even close to a parabola. That doesn't mean that all this inclusion of non-parabolas amounts, after all is said and done, to anything more than what was already contained in the parabola. In other words, the Green functions do not generate in the Fock space the classical nonlinear PDE evolution but only a series of piecewise linearized approximations of it, none of which is the same as the nonlinear evolution (note also that obtaining a Green function can in general be more useful in various ways than having just one classical solution of the nonlinear system). This kind of overshoot/undershoot busywork is a common trait of approximation.
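
A small sketch of the parabola analogy (illustrative only), using the textbook Fourier series of f(x) = x^2 on [-pi, pi]: none of the cosines being summed looks anything like a parabola individually, yet their sum reproduces the parabola and nothing beyond it:

```python
import numpy as np

# Fourier series of x^2 on [-pi, pi]: pi^2/3 + 4 * sum_{n>=1} (-1)^n cos(n x) / n^2
x = np.linspace(-np.pi, np.pi, 2001)
target = x ** 2

approx = np.full_like(x, np.pi ** 2 / 3)
for n in range(1, 51):
    approx += 4 * (-1) ** n * np.cos(n * x) / n ** 2   # each term is a cosine, not a parabola

print(np.max(np.abs(approx - target)))  # small residual, shrinking as more terms are kept
```

Keeping more terms only shrinks the residual; the non-parabola terms never add up to anything that wasn't already contained in the parabola being approximated.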
 
