Young's Experiment: Exploring Wave-Particle Duality

In summary: this thread discusses a phenomenon observed in Young's experiment that contradicts what other renditions of the experiment would suggest. The mystery is that even when light is treated as individual particles (photons), it still produces behaviour implying it is acting as a wave. Additionally, the interference patterns produced were not the result of any observations.
  • #176
vanesch said:
Ok, I asked the question on sci.physics.research and the answer was clear, not only about the quantum prediction (there IS STRICT ANTICORRELATION), but also about its experimental verification. Look up the thread "photodetector coincidence or anticoincidence" on s.p.r.

Now there's an authority. I was lucky to be fighting merely the little points and thoughts of von Neumann, Bohr, Heisenberg, Feynman, Bell, Zeh, Glauber, ... What do I do now that sci.physics.research is coming?

Should we go look for a few other selected pearls of wisdom from over there? Just to give it some more weight. If you asked here, you would get the same answer, too. If you had asked about the shape of the Earth at some point in history, everyone would have agreed it was flat.

Physics (thankfully) isn't literary criticism, where you can just declare "Derrida said..." and you win. In physics and mathematics the facts and logic have to stand or fall on their own merits. Give them a link here if you need help, let them read the thread, let them check and pick apart the references, and then 'splain it to me how it really works.
 
  • #177
nightlight said:
Therefore the state evolution in Fock space generated by the obtained multipoint Green functions is still piecewise linearized evolution (the linear sections are generated by the Fock space H) with the full interaction being turned on only within the infinitesimal scattering regions. If you have a nice physical picture, though, of what goes on here in terms of your favorite formalism, I wouldn't mind learning something new.

You seem to have missed what I pointed out. The "piecewise linearised evolution" is correct when you consider the PERTURBATIVE APPROXIMATION of the functional integral (you know, in particle physics everybody calls it the path integral) using Feynman diagrams. Feynman diagrams (or, for that matter, Wick's theorem) are a technique to express each term in the series expansion of the FULL CORRELATION FUNCTION as combinations of the FREE CORRELATION FUNCTIONS, which are indeed the exact solutions to the linear quantum field parts. But I wasn't talking about that. I was talking about the full correlation function itself. That one doesn't have any "piecewise linear approximations" in it and contains the FULL DYNAMICS. Only, except in special circumstances, nobody knows how to solve that problem directly; but the quantity itself is no approximation and contains no linearisation at all, and it contains, as a very special case, the full nonlinear classical solution - I thought my previous message made that clear.
There are some attempts (which work out better and better) to tackle the problem differently than by series expansion, such as lattice field theory, but it still requires enormous amounts of CPU power compared to nonlinear classical problems such as fluid dynamics. I don't know much about these techniques myself, but there are some specialists around here. But in all cases we are trying to solve the same problem, which is a much more involved problem than the classical nonlinear field equations: calculating the full correlation functions as I wrote them out.
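Schematically (a generic sketch, not specific to QED; here φ stands for the whole field content and S[φ] for the full interacting action), the object in question is

$$G^{(n)}(x_1,\ldots,x_n) \;=\; \langle 0|\,T\,\hat\phi(x_1)\cdots\hat\phi(x_n)\,|0\rangle \;=\; \frac{\int\!\mathcal{D}\phi\;\phi(x_1)\cdots\phi(x_n)\,e^{iS[\phi]}}{\int\!\mathcal{D}\phi\;e^{iS[\phi]}}$$

Perturbation theory only enters when one expands $e^{iS_{\rm int}[\phi]}$ in powers of the coupling and reduces everything to free propagators via Wick's theorem; the quantity on the left is defined without any such expansion.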

If you take your opinions and information from the sixties, indeed, you can have the impression that QFT is a shaky enterprise whose rules change as data come in. At a certain point it looked like that. By now, however, it is on much firmer ground (although problems remain). First of all, the amount of data explained by it has exploded; 30 years of experimental particle physics confirm the techniques. If it were a fitting procedure, by now we'd have thousands of different rules to apply to fit the data. That isn't the case. There are serious mathematical problems too, if QFT is taken as a fundamental theory -- but not if you suppose that there is a real high-energy cutoff determined by whatever comes next. And all this thanks to second quantization :-p


cheers,
Patrick.
 
  • #178
nightlight said:
In physics and mathematics the facts and logic have to stand or fall on their own merits. Give them a link here if you need help, let them read the thread, let them check and pick apart the references, and then 'splain it to me how it really works.

Well, you could start by reading the article about the experiment by Thorn
:-p

cheers,
Patrick.

EDIT: Am. J. Phys., Vol. 72, No. 9, September 2004.
They did EXACTLY the experiment I proposed - don't forget that it is my mind that can collapse any wavefunction :smile: :smile:

The experiment is described in painstaking detail, because it is meant to be a guide for an undergrad lab. There is no data tweaking at all.

They use photodetectors with a QE of 50% and a dark count rate of 250 counts per second.

They find about 100,000 cps of first-photon (gate) triggers, and about 8800 cps of coincidences between the first photon and one of the two others, within a coincidence window of 2.5 nanoseconds.

The coincidences are counted by hardwired logic gates.
If the first-photon (gate) clicks are N_G, the two individual coincidences between the gate and the second photon (left or right) are N_GT and N_GR, and the triple coincidences are N_GTR, then they calculate (no subtractions, no efficiency corrections, nothing):

g(2)(0) = N_GTR * N_G / (N_GT * N_GR)

In your model, g(2)(0) has to be at least 1.

They find: 0.0177 +/- 0.0026 for a 40 minute run.

Now, given the coincidence window of 2.5 ns and the rates of the different detectors, they also calculate what they expect as "spurious coincidences". They find 0.0164.
Now if that doesn't nicely demonstrate the full anticorrelation I told you about, I don't know what ever will.
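For concreteness, here is the arithmetic with the numbers quoted above (a sketch, not from the paper itself: the even split of the 8800 cps of coincidences between the transmitted and reflected arms is my assumption, and the triples rate is back-solved from the published g(2)):

```python
# g2(0) from the raw rates quoted in this post (Thorn et al., AJP 72, 9 (2004)).
N_G   = 100_000.0   # gate (first-photon) triggers, counts/s
N_GT  = 4_400.0     # gate-transmitted coincidences, counts/s (assumed half of 8800)
N_GR  = 4_400.0     # gate-reflected coincidences, counts/s (assumed half of 8800)
N_GTR = 3.4         # triple coincidences, counts/s (implied by the published g2)

g2 = N_GTR * N_G / (N_GT * N_GR)
print(f"g2(0) ~ {g2:.4f}")   # ~0.018, vs. the published 0.0177 +/- 0.0026

# Any classical intensity model obeys g2(0) >= 1, i.e. it would need at least
# N_GT * N_GR / N_G ~ 194 triple coincidences per second here.
```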
 
  • #179
vanesch said:
Ok, I asked the question on sci.physics.research and the answer was clear, not only about the quantum prediction (there IS STRICT ANTICORRELATION), but also about its experimental verification. Look up the thread "photodetector coincidence or anticoincidence" on s.p.r.

I just checked; look at the kind of argument he gives:

But for this, look at Davis and Mandel, in _Coherence and Quantum Optics_ (ed. Mandel and Wolf, 1973). This is a very careful observation of "prompt" electrons in the photoelectric effect, that is, electrons emitted before there would be time, in a wave model, for enough energy to build up (even over the entire cathode!) to overcome the potential barrier. The experiment shows that the energy is *not* delivered continuously to a photodetector, and that this cannot be explained solely by detector properties unless one is prepared to give up energy conservation.

This is the Old Quantum Theory (Bohr-atom era) argument for Einstein's 'light quantum'. The entire photoeffect is fully within the semiclassical model -- Schroedinger atoms interacting with classical EM waves. Any number you can pull out of QED/Quantum Optics on this you can get from the semiclassical model (usually much more easily; check refs [1] & [2] on detector theory to see the enormous difference in the effort needed to reach the same result). The only thing you won't get is a lively story about a "particle" (and after all the handwaving, there won't be any number coming out of all that 'photon' particle ballyhoo). And, of course, you won't get to see the real photons, shown clear as day in the Feynman diagrams.

What a non-starter. Then he quotes the Grangier et al. 1986 paper that I myself told you to check out.
 
  • #180
vanesch said:
Well, you could start by reading the article about the experiment by Thorn

Abstract: "We observe a near absence of coincidence counts between the two detectors—a result inconsistent with a classical wave model of light, "

I wonder if they tested it against the Marshall-Santos classical model (with ZPF) for the PDC, which covers any multipoint coincidence experiment you can do with a PDC source, plus any number of mirrors, lenses, splitters, polarizers, ... (and any other linear optical elements) with any number of detectors. Or was it just the usual strawman "classical model"?

Even if I had AJP access, I doubt it would have been worth the trouble, considering the existence of general results for thermal, laser and PDC sources -- unless they explicitly faced head-on the no-go result of Marshall-Santos and showed how their experiment gets around it. It is easy to fight and "win" if you get to pick the opponent (the usual QO/QM "classical" caricature), while knowing that no one will challenge your pick since the gatekeeper is you.

You can't, not even in principle, get any measured count correlation in Quantum Optics which violates classical statistics (the measured correlation approximates the expectation value <[C]>, where [C] is the observable for the correlations in the numbers of photo-electrons ejected at multiple detectors; see refs [1] & [4] in the detector theory message). That observable (the photo-electron count correlation) has all counts strictly non-negative (they're proportional to the EM field intensity in the photo-detection area), and the counts are in the standard correlation form Sum(N1*N2), the same form as Bell's LHVs. Pure classical correlation.
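To illustrate the claim numerically -- a minimal Monte Carlo toy model (mine, not from any of the referenced papers): photo-electron counts drawn as Poisson variables proportional to a fluctuating classical intensity always give Sum(N1*N2)-type correlations with g2 >= 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semiclassical detection model: a fluctuating classical intensity I is
# split 50/50; each detector ejects photo-electrons with Poisson statistics
# proportional to the intensity it sees (square-law detection).
n = 1_000_000
I = rng.exponential(scale=2.0, size=n)   # thermal-like intensity fluctuations
eta = 0.5                                # detection efficiency (arbitrary choice)

N1 = rng.poisson(eta * I / 2)            # photo-electron counts, detector 1
N2 = rng.poisson(eta * I / 2)            # photo-electron counts, detector 2

g2 = (N1 * N2).mean() / (N1.mean() * N2.mean())
print(g2)   # ~2 for thermal light; Cauchy-Schwarz (<I^2> >= <I>^2) forces g2 >= 1
```

With a constant intensity the same model gives g2 = 1; no classical intensity distribution can push it below 1.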

You can only reconstruct Glauber's pseudo-correlation function from the measured count-correlation observable (via Glauber's prescription for subtracting the vacuum-generated modes, the standard QO way of coincidence "counting"). If you were to express that reconstructed function <[G]> formally as a correlation function of some imaginary G_Counts, i.e. write it as <[G]> = Sum(GN1*GN2), some of these G_Counts (the GN's) would need to be negative: for the PDC, the standard Glauber QO "counting" prescription, equivalent to subtracting the effects of vacuum-generated modes, over-subtracts (since the PDC pair actually uses up the phase-matching vacuum modes). The G_Counts have no direct operational meaning; there is no G_Count detector which counts them. There is not even a design for one (Glauber's 1963 QO founding papers suggest a single atom might do it, although he stopped short of calling on the experimenters to hurry up and make one).

Note that for a single hypothetical "photon counter" device these G_Counts would be the values of the "pure incident" photon number operator. And, as luck would have it, the full quantum field photon number operator, a close relative of the one the design calls for, is a Hermitian operator in the Fock space, thus an "observable" -- which means it is observable, which, the teacher adds, means, kids, that this photo-detector we have right here in our very own lab gives us counts which are the values of this 'pure incident photon number observable'. (The "pure incident" photons, interacting with the detector's atoms, are imagined in this "photon counter" concept as somehow separated out from all the vacuum modes, even from those vacuum modes superposed with the incident EM field -- not by any physical law or known data, but by the sheer raw willpower of the device designer. Glauber: "they [vacuum modes] have nothing to do with the detection of photons" -- and presto! the vacuum modes are all gone, even the superposed ones. Miracle! Hallelujah! Praise the Almighty.) The G_Counts are a theoretical construct, not something any device actually counts. And they are certainly not what the conventional detectors of the AJP article's experiment did -- those counted, as always (or at least since 1964), just the plain old boring classical [C], the photo-electron count correlation observable.

The photo-electron count correlation observable [C], the one their detectors actually report as the count correlation <[C]> (and which is approximately measured as the correlation of the amplified photo-currents; see [1], [2], [4]), is fully, 100% classical: no negative counts required, no non-classical statistics required to explain anything they can ever show.
 
  • #181
vanesch said:
But I wasn't talking about that. I was talking about the full correlation function itself. That one doesn't have any "piecewise linear approximations" in it and contains the FULL DYNAMICS. Only, except in special circumstances, nobody knows how to solve that problem directly; but the quantity itself is no approximation and contains no linearisation at all, and it contains, as a very special case, the full nonlinear classical solution - I thought my previous message made that clear.

I know what you were saying. The question is: for these few exactly solvable toy models, where you can get the exact propagator (which would have to be a non-linear operator in the Fock space) and compute the exact classical field evolution -- do the two evolutions differ at all?

This is just a curiosity, though, not a decisive matter for QED or QCD. Namely, there could conceivably be a toy model where the two evolutions differ, since it is not given from above that any particular formalism (which is always a huge generalization of the empirical data) has to work correctly in all of its farthest reaches. More decisive would be to see whether a Barut-type model could replicate the QED predictions to at least another order or two, and if not, whether experiment can pick one or the other.

vanesch said:
If you take your opinions and information from the sixties,

Wait a minute, that's the lowest of the low blows :)

vanesch said:
indeed, you can have the impression that QFT is a shaky enterprise whose rules change as data come in.

Jaynes had Eugene Wigner as his PhD advisor, and for decades he was in the circle of, and a close friend of, the people whose names are attached to the theorems and models and theories in the QED textbooks. Read what he was still saying well into the 1990s. In comparison, I would be classified as a QED fan.

vanesch said:
And all this thanks to second quantization

Or, maybe, despite it. I have nothing against it as an effective algorithm, unsightly as it is. Unfortunately, too much physics has been ensnared in its evolving computational rules, largely implicitly, for anyone to be able to disentangle it all and teach the cleaned-up leftover algorithm in an applied math class. But it shouldn't be taken as a sacred oracle, a genuine foundation, or the way forward.
 
  • #182
vanesch The "piecewise linearised evolution" is correct when you consider the PERTURBATIVE APPROXIMATION of the functional integral (you know, in particle physics everybody calls it the path integral) using Feynman diagrams. Feynman diagrams (or, for that matter, Wick's theorem) are a technique to express each term in the series expansion of the FULL CORRELATION FUNCTION as combinations of the FREE CORRELATION FUNCTIONS, which are indeed the exact solutions to the linear quantum field parts.

Re-reading that comment, I think you too ought to pause for a moment, step back a bit, and connect the dots you already have in front of you. I will just list them, since there are several longer messages in this thread with an in-depth discussion of each one -- a hard-fought battle back and forth, where each point took your and the other contributors' best shots. You can read them back in the light of the totality listed here (and probably as many lesser dots I didn't list) and see what the outcome was.

a) You recognize here that the QED perturbative expansion is a 'piecewise linear approximation'. The QED field-amplitude propagation can't be approximating another piecewise linear approximation, or linear fields. And each new order of the perturbation invalidates the previous order's approximation as a candidate for the final, exact evolution. Therefore there must be a limiting nonlinear evolution, not equal to any finite order of QED -- an "underlying" (at the bottom) nonlinear evolution of the QED amplitudes in the coordinate basis, approximated by the QED perturbative expansion. Since nothing else is left once every finite order of the expansion is ruled out as the last word, this limit is "some" local classical nonlinear field theory (we're not interpreting what this limit-evolution means, just establishing its plausible existence).

b) You also read, within the last ten days, at least some of Barut's results on radiative corrections, including the Lamb shift, the crown jewel of QED. There ought to be no great mystery, then, about what kind of nonlinear field evolution it is that the QED amplitudes are actually piecewise-linearizing.

c) What is the most natural thing that, say, a classical mathematician would have considered if someone had handed him the Dirac and Maxwell equations and told him: here are our basic equations; what ought to be done? Would he consider the EM field potential A occurring in the Dirac equation external, or the Dirac currents in the Maxwell equations external? Or would he do exactly what Schroedinger, Einstein, Jaynes, Barut, ... thought needed to be done and tried to do -- see them as a set of coupled nonlinear PDEs to be solved (the system is written out after this list)?

d) Oddly, this most natural and conceptually simplest way to look at these classical field equations -- nonlinear PDEs fully formed in the 1920s -- already had the high-precision result of the Lamb shift in it, with nothing to be added or changed (all it needed was someone to carry out the calculations for a problem already posed along with the equations, the problem of the H atom). It contained an experimental fact that would only arrive twenty years later.

e) In contrast, the Dirac-Jordan-Heisenberg-Pauli quantum theory of EM fields at the time of the Lamb shift discovery (1947), a vastly more complex edifice, failed the prediction, and had to be reworked thoroughly in the years after the result, until Dyson's QED finally managed to put all the pieces together.

f) Quantum Optics may not have the magic it shows off with. Even if you haven't yet gotten to the detector papers, or Glauber's QO foundation papers, you at least have grounds for rational doubt and more questions: what is really being shown by these folks? Especially in view of dots (a)-(e), which imply the Quantum Opticians may be wrong, overly enthusiastic with their claims.

g) Reinforcing point (f): the plain classical nonlinear fields, the Maxwell-Dirac equations, already had the correct QED radiative corrections right from the start -- and those are much closer to the core of QED, its crown jewels, a couple of orders beyond the orders at which Quantum Optics operates (which is barely at the level of the Old QED).

h) In fact, the toppled Old QED of (e) already has everything Quantum Optics needs, including the key to the Quantum Magic, the remote state collapse. Yet that Old QED flopped and the Maxwell-Dirac worked. Is it plausible that for the key pride of the QED predictions, the Lamb shift -- the one which toppled the Old QED -- the Maxwell-Dirac just got lucky? What about (b) & (c): lucky again that QED appears to be approximating it in the formalism through several orders, well beyond the Quantum Optics level? All just luck? And the Maxwell-Dirac nonlinear fields are also the simplest and most natural approach (d)?

i) Glauber's correlations <[G]>, the standard QO "correlations", may not be what they are claimed to be (correlations of something literally and actually counted). Their flat-out vacuum-removal procedure over-subtracts whenever there is a vacuum-generated mode. And this is the way the Bell test results are processed.

j) The photo-detectors are not counting "photons"; they are counting photo-electrons, and the established, non-controversial detector theory neither shows nor claims any non-classical photo-electron count results (it is defined as a standard correlation function with non-negative p-e counts). The entire non-classicality in QO comes from the reconstructed <[G]>, which does not correlate anything literally and actually counted (the <[C]> is what is being counted). Keep in mind also dots (a)-(i).

k) The Bell theorem tests seem to have been stalled for over three decades. So far they have managed to exclude only the "fair sampling" type of local theories (no such theory ever existed). The Maxwell-Dirac theory, the little guy from (a)-(j) which does happen to exist, is not a "fair sampling" theory, and, more embarrassingly, it even predicts perfectly well what the actual counting data show and agrees perfectly with the <[C]> correlations, the photo-electron count correlations -- which are what the detectors actually and literally produce (within the photo-current amplification error limits).

l) Bell's QM prediction is not a genuine prediction, in the sense of giving the range of its own accuracy. It is a toy derivation (a minimal sketch of the textbook version appears after this list), a hint for someone to go and do the full QO/QED calculation. Such calculations do exist, and if one drops <[G]> and simply keeps the bare quantitative prediction of detector counts (which matches fairly well the photo-electron counts obtained), QO/QED predicts the obtained raw counts well and does not predict a violation either. There is no prediction of detector counts (the photo-electron counts, the stuff that detectors actually count) which would violate Bell's inequality -- not even in principle, not with the most ideal photo-electron counter conceivable, not even with 100% QE. No such prediction exists, and it cannot be deduced even in principle for anything that can actually be counted.

m) Bell's QM "prediction" does require the remote non-local projection/collapse. Without it, it can't be derived.

n) The only reason we need collapse at all is the allegedly verified Bell QM prediction saying we can't have LHVs. Otherwise the variables could simply have values (such as the Maxwell-Dirac fields), just not known ones, while remaining local and classical. The same lucky Maxwell-Dirac of the other dots.

o) Without the general 'global non-local collapse' postulate, Bell could not get the state of particle (2) to become |+> for the sub-ensemble of particle-(2) instances for which particle (1) gave the (-1) result (and he does assume he can get that state, by Gleason's theorem, by which the statistics determine the state; he assumes the statistics of the |+> state on the particle-2 sub-ensemble for which particle 1 gave -1). Isn't it a bit odd that to deduce the non-local QM prediction one needs to use non-local collapse as a premise? How could any conclusion other than the exclusion of locality be reached, with the starting premise of non-locality?

p) Without the collapse postulate there is no Bell QM prediction, thus no measurement problem (the Bell no-go for LHVs), thus no reason to keep collapse at all. The Born rule as an approximate operational rule would suffice (e.g. the way a 19th-century physicist might have defined it for light measurements: the incident energy is proportional to the photocurrent, which is correct empirically and theoretically -- the square-law detection).

q) The Poissonian counts of photo-electrons in all Quantum Optics experiments preclude a Bell inequality violation, ever, in photon experiments. The simple classical model, the same Maxwell-Dirac from points (a)-(p), the lucky one, somehow predicts exactly what you actually measure, the detector counts, with no need for untested conjectures or handwaving or euphemisms; all it uses is the established detector theory (QED-based or Maxwell-Dirac-based) and a model of an unknown but perfectly existent polarization. And it gets lucky again. While QO needs to apologize and promise yet again that it will get there just as soon as the detectors which count Glauber's <[G]> get constructed -- soon, no question about it.

r) Could it all be luck for Maxwell-Dirac? All the points above, just dumb luck? The nonlinear fields don't actually contradict any quantitative QED prediction; in fact they agree with QED to an astonishing precision. The QED amplitude evolution even converges to Maxwell-Dirac, as far as anyone can see and as precisely as anything that gets measured in this area. The Maxwell-Dirac disagrees only with the collapse postulate (the general non-local projection postulate), for which there is no empirical evidence, and for which there is no theoretical need of any sort other than the conclusions it creates by itself (such as Bell's QM prediction or the various QO non-classicalities based on <[G]> and collapse).

s) Decades of physicists have banged their heads trying to figure out how QM can work like that. And there is no good way out but multiple universes or the observer's mind, once all shots have been fired and all the other QM "measurement theory" defense lines have fallen. It can't be defended with a straight face. That means there is no way to solve it as long as the collapse postulate is there; otherwise someone would have thought it up. And the only thing holding up the whole puzzle is the general non-local collapse postulate. Why do we need it? As an approximate operational rule with only local validity (as Bell himself advocated in his final QM paper) it would be perfectly fine -- no puzzle. What does it do, other than uphold the puzzle of its own making, to earn its pricey keep? What is that secret, invaluable function it serves -- so secret and invaluable that no one can explain it as plainly and directly as 2+2, yet everyone believes that someone else knows exactly what it is? Perhaps, just maybe, if I may dare a wild conjecture here: there is none?

t) No single dot above, or even a few of them, may be decisive. But all of them? What are the odds?
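For reference, the coupled system mentioned in point (c) above -- the Maxwell-Dirac equations in one standard convention, with the potential A_mu feeding the Dirac equation and the Dirac current feeding the Maxwell equations:

$$\left[\gamma^\mu\left(i\,\partial_\mu - e\,A_\mu\right) - m\right]\psi = 0,\qquad \partial_\nu F^{\nu\mu} = e\,\bar\psi\,\gamma^\mu\,\psi,\qquad F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu.$$

The nonlinearity is plain: psi sources A through its current, while A acts back on psi in the Dirac equation -- a closed, coupled, nonlinear system.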
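And a minimal sketch of the "Bell QM prediction" referred to in point (l) -- just the textbook spin-singlet formula at the conventional CHSH angles (my illustration, not a full QO/QED calculation):

```python
import numpy as np

# Textbook QM correlation for the spin singlet with ideal analyzers:
# E(a, b) = -cos(a - b). At the standard CHSH angles this exceeds the
# local-hidden-variable bound |S| <= 2.
def E(a, b):
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two analyzer settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyzer settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2
```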

PS: I will have to take at least a few weeks' break from this forum. Thanks again to you, 'vanesch', and all the folks who contributed their challenges to make this a very stimulating discussion.
 
  • #183
Ok, you posted your summary; I will post mine, and I think that after that I'll stop with this discussion too. It has been interesting, but the subjects are getting worn out in that we now seem to be camping on our positions, and it is taking a lot of time.

a) Concerning the Lamb shift: it is true that you get it out of the Dirac-Maxwell equations, because now I remember that that was how my old professor did it (I can't find my notes from long ago; they must be at my parents' home, 1000 km from here, but I'm pretty sure he did something like Barut). The name of my old professor is Jean Reignier; he's long retired now (but I think he's still alive).

b) The fundamental reason for "second quantization" (meaning: quantizing fields) was of course not the Lamb shift. Its principal prediction is the existence of electrons (and positrons) as particles. As I said, I don't know of any other technique to get particles out of fields. As a bonus, you also get the photon (which you don't like -- not my fault). I think that people tried and tried and didn't manage to get multi-particle-like behaviour out of classical fields. There are of course soliton solutions to things like the Korteweg-de Vries equation, but to my knowledge there has never been a satisfying solution to multiparticle situations in the case of the classical Dirac-Maxwell equations. Second quantization brilliantly solved that problem; but once that was done, with the huge mathematical difficulties of the complex machinery set up, how to get nice predictions beyond the obvious "tree level"? It is here that the Lamb shift came in: the fact that you ALSO get it out of QFT! (Historically, of course, they were first, but that doesn't matter.)

c) I can be wrong, but I don't think the classical approach you propose can even solve something like the absorption lines of helium (a 2-electron system) or another low-count multi-electron atom, EVEN if we allow for the existence of a particle-like nucleus (which is another problem you'd have to solve: there's more to this world than electrons and EM radiation, and if everything is classical fields, you'd have to show me where the nucleus comes from and why it doesn't leak out).
Remember that you do not reduce to 2-particle non-relativistic QM if you take the classical Dirac equation (but you do in the case of second quantization). You have ONE field, in which you'll have to produce the 2-particle state in a Coulomb field -- I suppose with two bumps in the field that whirl around, or I don't know what. I'd be pretty sure you cannot get anything reasonable out of it.

d) Concerning the anticoincidence: come on, you said there wouldn't be anticoincidence of clicks, and you even claimed that quantum theory (as used by professionals, not by those who write pedagogical stuff for students) also predicted that. You went off explaining that I confuse lots of quantities whose meaning in QO I don't even know, and asked me to show you a single paper where 1) the quantum prediction of anticoincidence was made and 2) the experiment was performed. I showed you (thanks to professor Carlip, from s.p.r.) an article on very recent research (September 2004) where the RAW COINCIDENCE g(2)(0) (which classical Maxwell theory requires to be at least 1) is something of the order of 0.017 (much, much better than Grangier, and this time with no detector coefficients, background subtraction, etc.).
Essentially, they find about 100,000 "first photon" triggers per second, about 8800 coincidences per second between the first photon and one of the two others (so these are the identified pairs), and, hold your breath, about 3 or 4 triple coincidences per second (of which I said there wouldn't be any) -- which, moreover, are explained as Poissonian double pairs within the coincidence window of about 2.5 ns, but that doesn't matter.
If we take it that in classical electromagnetism both intensity profiles are split evenly by the beam splitter, and that the detectors are square-law devices responding to these intensities, you'd expect about 200 triple coincidences per second (the arithmetic is sketched in the snippet after this list).
Remember: no background subtractions, no efficiency corrections. These are raw clicks. No need for Glauber functions or whatever. Marbles do the trick!
Two marbles in; one triggers, the other chooses its way at the beam splitter. If it goes left, a click left (or not) but no click right. If it goes right, a click right (or not) but no click left. Never a click left and right. Well, we almost never see a click left and right. And those few clicks can even be explained by the probability of having two pairs of marbles in the pipe.
According to you, we would never find this anticoincidence as a prediction of "professional" quantum theory in the first place, and of course never measure it. Now that we have measured it, it suddenly doesn't mean anything, because you expected as much.

e) Once you acknowledge the existence of particles arising from fields, as proposed by second quantization, the fair-sampling hypothesis becomes much, much more natural, because you realize (or not! depends on the openness of mind) that photodetectors are particle detectors with a finite probability of seeing the particle or not. I acknowledge, of course, that the EPR-like experiments today do not exclude local realistic theories. They are only a strong indication that entanglement over spacelike intervals is real.

f) Nobody believes that QFT is ultimately true, and everybody thinks that something else will come after it. Some think that the strict linearity of QM will remain (string theorists, for instance), while others (like Penrose) think that one will have to do something about it, and hope that gravity will. But I think that simplistic proposals such as classical field theory (with added noises borrowed from QFT or whatever you want to add) are deluded approaches that will fail before they achieve anything. Nevertheless, why don't you guys continue? You still have a lot of work to do, after all, before it turns into a working theory and gets you Nobel prizes (or goes down the drain of lost ink and sweat...). Really, try the helium atom and the hydrogen molecule. If that works, try something slightly more ambitious, such as a benzene molecule. If that also works, you are really on the way to replacing quantum theory by classical field physics, so then tackle solid state physics -- in the beginning, simply with metals. Then try to work out the photoelectric effect to substantiate all the would-be's in your descriptions of how photodetectors work.

g) QFT is not just a numerical technique for finding classical field solutions in a disguised way; it solves a quite different (and much more involved) problem. In certain circumstances, however, it can produce results which are close to the classical field solution (after all, there is the correspondence principle, which states that QFT goes over into the classical theory as h -> 0). I know that in QCD, for instance, the results don't work out that way. People have tried it.
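To back up the arithmetic in point (d) above (a sketch; the rates are those quoted in this thread, the even split of the 8800 cps between the two arms is my assumption, and the accidental figure is the paper's published estimate):

```python
# Classical square-law expectation vs. observed triples (point d).
N_G  = 100_000.0   # gate ("first photon") triggers per second
N_GT = 4_400.0     # gate-transmitted coincidences per second (assumed half of 8800)
N_GR = 4_400.0     # gate-reflected coincidences per second (assumed half of 8800)

# Even intensity split + square-law detectors => g2(0) >= 1, so at least:
classical_floor = N_GT * N_GR / N_G
print(classical_floor)            # ~194 triples/s -- the "about 200" above

# The paper's accidental estimate g2_acc = 0.0164 corresponds to:
print(0.0164 * classical_floor)   # ~3.2 triples/s -- the observed 3-4 per second
```

So essentially all of the observed triple coincidences are accounted for as accidentals, while a classical intensity model would demand roughly fifty times more.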

cheers,
Patrick.
 