Aspect's Experiment Was Flawed

In summary, the validity of quantum mechanics has been established through many experiments, not solely Aspect's. QM accurately describes the world we live in, but it does not provide an explanation for it. It is a recipe or owner's manual, and it has been tested and proven to work through results such as the detailed description of black-body radiation, the discovery of spin, and the use of QM in modern electronics. While some may question the philosophical implications of QM, its accuracy and usefulness have been demonstrated.
  • #106
vanesch said:
As I explained during that long discussion (which I'm not going to repeat here), [...]
It would be helpful to other readers - such as myself - if we could find this earlier discussion. Do you have a link? If not, maybe approx when the discussion was, or the name of the thread? I'd be happy to search in the PF archives to find it, if you could give me some pointers.
 
  • #107
Nereid said:
It would be helpful to other readers - such as myself - if we could find this earlier discussion. Do you have a link? If not, maybe approx when the discussion was, or the name of the thread? I'd be happy to search in the PF archives to find it, if you could give me some pointers.

The name of the thread was "Young's experiment" or something close, and the discussion was with someone with the nickname "nightlight"

cheers,
Patrick.
 
  • #108
vanesch said:
One shouldn't deny that there are "loopholes" in the Aspect-like experiments. But as Dr. Chinese points out, experiments are "evidence" and not "mathematical proof" for scientific theories. It is the entire body of "evidence" that makes theories stand out or not, and not one single type of experiment. It now happens that the way people correct for detection efficiencies (the major source of loopholes) is what has always been considered acceptable; only NOW does it seem to be unacceptable, in order to show that EPR-like results are not violating any Bell inequalities. Of course, the point can be made, but a reasonable explanation *within the frame of the rest of physics* should be given for why this accepted correction suddenly becomes unacceptable.

Exactly, well said. This is what I was trying to point out, that there is a double standard by the anti-Aspect group when it comes to evidence.

(By the way, Vanesch, I found some interesting discussions including you on some of these matters while Googling around last night - they were in the PhysicsForums archives.)

Let's review the situation so we can see it in a little perspective. All angles below refer to a relative polarizer setting of 22.5 degrees, where 0 degrees means perfect correlation.

a. The QM prediction for EPR correlations at 22.5 degrees is cos^2(22.5°), or .8536.
b. The local realistic prediction would need to be .7500 to satisfy Bell's Theorem. (I plan to start another thread to discuss this in more detail)
c. Classical optics, as best as I can see it applies, would predict exactly the same as a., or .8536.
d. The "flawed, loophole laden" actual experimental value for correlations is .85 with margin of error of about 2%.

To my thinking, it is good evidence when a theory makes a prediction that is closely matched by experiment (a. and d.) and also matches classical formulas (c.) known for over a hundred years. It is a bad theory that makes a prediction that cannot be confirmed by experiment (b. and d.).

Yet the local realists - including Caroline - argue that the .85 result measured is actually proof of the .75 result. Somehow, her "chaotic ball" model ALWAYS causes all local realistic results to be restated so as to arrive at exactly the QM predicted value - even though the QM theory itself is supposedly wrong.

Does this about sum it up? It defies my common sense that, of all of the possible biased values you could measure if Caroline's chaotic ball model is correct - and the QM value is just one of many that could result - the one we actually measure is... the QM value. I mean, we could have seen results of .8 or .9 or .7 or .6 (since we never measure the local realistic predicted value of .75 anyway) with the chaotic ball or its sister models. But no, the one value we consistently get is the QM expectation value. Hmmm.
 
  • #109
vanesch said:
The name of the thread was "Young's experiment" or something close, and the discussion was with someone with the nickname "nightlight"

cheers,
Patrick.

https://www.physicsforums.com/archive/topic/t-44964_Young's_experiment.html

Yes, that was the same discussion I saw last night - a lot of work went into that by you! By the way, if I recall, nightlight also holds a diehard anti-Aspect position similar to Caroline's.

Vanesch, I was researching that thread for another reason. I could use your assistance, and that of some of the others, on a minor question I have - but I will place that in a new thread as it is unconnected to this one. If you have time to look at it, it will be titled "Question about Other Tests of EPR Paradox". Thanks!

-DrC
 
  • #110
DrChinese said:
... there is a double standard by the anti-Aspect group when it comes to evidence.

(By the way, Vanesch, I found some interesting discussions including you on some of these matters while Googling around last night - they were in the PhysicsForums archives.)

Let's review the situation so we can see it in a little perspective. All angles below refer to a relative polarizer setting of 22.5 degrees, where 0 degrees means perfect correlation.

a. The QM prediction for EPR correlations at 22.5 degrees is cos^2(22.5°), or .8536.
b. The local realistic prediction would need to be .7500 to satisfy Bell's Theorem. (I plan to start another thread to discuss this in more detail)
c. Classical optics, as best as I can see it applies, would predict exactly the same as a., or .8536.
d. The "flawed, loophole laden" actual experimental value for correlations is .85 with margin of error of about 2%.

I'm afraid the above figures merely confuse the issue, since you can't base a Bell test on just one correlation value. You need at least three, and in practice four.
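For concreteness, here is how four correlation values combine in the standard CHSH statistic - a minimal sketch in Python, where E(θ) = cos 2θ is the QM prediction for polarization-entangled pairs and the four angles are the standard test settings:

[code]
import numpy as np

# QM correlation for polarization-entangled photons at relative angle t (radians)
E = lambda t: np.cos(2 * t)

a, a2, b, b2 = np.radians([0.0, 45.0, 22.5, 67.5])  # standard CHSH test angles

S = E(a - b) - E(a - b2) + E(a2 - b) + E(a2 - b2)
print(S)  # 2*sqrt(2) = 2.828..., above the local realist bound of 2
[/code]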

The simplest way of comparing predictions is to look at the "visibility" of the coincidence curve, i.e. (max - min)/(max + min), as you vary the angle between detector settings. Under the conditions generally assumed, QM predicts a visibility of 1.0 whilst classical optics, using local realist logic, predicts 0.5. [I don't know how you are managing to obtain a different classical optics figure from the local realist one.]

Anyway, we have essentially two different loopholes that can explain why the observed visibility is nearer to 1.0 than 0.5.

Subtraction of accidentals
In certain experiments there were large numbers of "accidentals", and these were adjusted for by assuming them to have the same effect for all time intervals between detections. [It must be remembered that in practice the supposedly synchronous detections are actually separated in time by a random amount, with a limit whose interpretation depends on what theory you are going by.] This constant number of accidentals per second was subtracted from the counts before calculation of the Bell test statistic, increasing the calculated value. If we're looking at visibility instead of an actual Bell test, it is clear that what we're doing is shifting the whole graph downwards till it almost hits the x axis. The process increases the visibility. It can easily be shown (as in quant-ph/9903066) that the raw data curve has visibility almost exactly 0.5 in the experiments for which this data is available, viz. Aspect's 1981 one and Tittel's 1997 one (http://arxiv.org/abs/quant-ph/9707042). After subtraction the visibility increases to over the local realist limit, which is, iirc, 0.71 for this statistic.
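To see numerically how a constant subtraction raises visibility, here is a minimal sketch in Python; the 0.10 accidental rate is purely illustrative, not Aspect's actual figure:

[code]
import numpy as np

theta = np.linspace(0, np.pi, 181)       # relative polarizer angle
raw = 0.25 + 0.125 * np.cos(2 * theta)   # raw coincidence curve with visibility 0.5
accidentals = 0.10                       # hypothetical constant accidental rate

def visibility(c):
    return (c.max() - c.min()) / (c.max() + c.min())

print(visibility(raw))                # 0.5 before subtraction
print(visibility(raw - accidentals))  # ~0.83 after, above the ~0.71 limit
[/code]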

Detection loophole
It is not so easy to prove from published results that this loophole is the true cause of the high visibilities in most of the remaining experiments, for which either accidental rates were low or not subtracted. Here there is an urgent need for more information. We need to know just how the total of the four coincidence rates -- the figure used as denominator when estimating the quantum correlation -- varied with the difference between detector settings. When this has been tested, what angles have been looked at? I suspect that most experimenters have not fully understood why the test for constancy was needed and have restricted themselves to looking at just the four Bell test angles. Aspect reported in his PhD thesis that the total was not quite constant, but the variations were within one standard deviation so could, he thought, be ignored. He did not state what angles he had looked at, though, nor whether or not there was any hint of a consistent pattern between different repeats of the experiment. If only he'd decided to do sufficient repeats to reduce the SD of the mean to less than the observed discrepancy!

If only, too, all Bell test experiments had been repeated using a range of different settings for the detector efficiencies! If the results from these were available, whether or not the loophole is in operation might have become clear. QM predicts no change in either Bell test statistic or visibility as you change the efficiency. Local realism predicts that as you increase the detector efficiency you decrease both.

DrChinese said:
... the local realists - including Caroline - argue that the .85 result measured is actually proof of the .75 result. Somehow, her "chaotic ball" model ALWAYS causes all local realistic results to be restated so as to arrive at exactly the QM predicted value - even though the QM theory itself is supposedly wrong.

I don't in general give any quantitative predictions. All I say is that the loopholes mean that local realist models for the observed values exist. The exact predictions depend on the exact conditions of each experiment.

Caroline
 
  • #111
vanesch said:
Yes, I'm aware of these papers. I'm also aware (although not an expert) of stochastic electrodynamics and things like that. But you agree with me that this is NOT classical optics ...

Yes, but I think it best not to discuss SED here. It's best not to have any preconceived theory other than a general framework of local causality and a wave model of light. Both are, after all, supported by vast amounts of evidence.

vanesch said:
New ideas ARE introduced - such as the fact that we are exposed to background radiation, with an intensity comparable to the intensity of sunlight, but apparently, we calibrated this away in all our sensors, thermometers and so on so as not to notice it; a lot of physics is to be rewritten that way: suddenly we don't understand statistical physics, atomic physics, solid state physics anymore. These new ideas should be backed up by specific predictions, and you cannot deny that you're left with the impression that the ONLY reason for introducing them is to find a way to explain away the behaviour of light without photons in certain circumstances.

Yes, any challenge to QM means re-writing a great deal of physics, but I think it needs to be done. In optical areas I don't see it presenting any problem. Clearly when it comes to modelling actual particles there are going to be difficulties. I don't think they are insuperable, but, be this as it may, I think QM makes a big mistake in trying to apply the same theory to optics as it does to particles. Why not, as a start, just hive off optics from QM and return it to classical physics?

vanesch said:
And THAT is done because, as you point out, denying photons is the only hope to get around EPR. I'm sorry, it all gives the strong impression that this is to cling onto a religiously held belief in what you call "local realism".
The main reason for denying the photon is that I have never seen any evidence that it exists! [See my web site, especially http://freespace.virgin.net/ch.thompson1/History/forgotten.htm]

vanesch said:
... How do you rewrite particle physics without second quantization? What happens to the Standard Model? Do you see the mind-boggling scale of the attempt you propose?

I've no idea! This is not my area.

Re
J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck, "Observing the quantum behavior of light in an undergraduate laboratory", Am. J. Phys. 72 (9), September 2004,
I can't find any mention of these authors in arxiv.org so can't get hold of the paper without considerable effort. From what you say, though, it sounds as if it is no different from a number of other experiments, and I should dearly like to know just what beamsplitter they used.

vanesch said:
Now tell me, how does it occur that a wave generated from a PDC splits "left-right" according to a feature of the beamsplitter, but when you shine a "classical" beam on that beamsplitter, it doesn't (in the sense that you can produce interference effects with the split beams, so you cannot send your bullet left or right at the beamsplitter)...
I don't know quite what you mean here. There is not supposed to be any essential difference in the nature of light output by PDC from light produced by, say, a laser, if you look at just one or other of the output beams.

vanesch said:
Also, don't you think that if beamsplitters were also spectral filters, that would have been noticed already a few times in undergraduate labs?
Yes indeed, though what I had in mind might be too subtle for easy detection. I won't try to explain, partly because I haven't tried to work out the details.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #112
Caroline Thompson said:
I'm afraid the above figures merely confuse the issue, since you can't base a Bell test on just one correlation value. You need at least three, and in practice four.

Despite what you (and others) might think, you don't need to change polarizer settings in flight or otherwise vary the angles to test Bell's Theorem. You only need to calculate the correlation percentages at three particular angle settings (these can be done fully independently). Then combine a la Bell.

Varying is only necessary if you are asserting that the measurement devices are (or might be) communicating with each other so as to affect the outcome of the correlation tests. We already know from Aspect that this doesn't happen, because he did the experiments both ways and there was no difference in the outcomes! That alone should stand as a definitive conclusion of Aspect's work. Further, regarding the varying issue:

a. If you are a local realist, I would assume that wouldn't be much of an issue to you, since you think there are classical, intuitive explanations for everything anyway - strange new types of communication between measuring devices should not trouble you.
b. If, on the other hand, you follow the Copenhagen interpretation, varying also shouldn't matter as you don't isolate out communication with other parts of the measurement apparatus for any other type of experiment (such as double slit) either.
c. Also, if you believe the correlation is non-local then the varying analyzers are superfluous.
d. And finally, if you are a local non-realist like me :) then you already believe that the only "real" component being measured is the angle between the remote polarizers anyway i.e. the measurement is fundamental to the process.

So yes, we can meaningfully talk about a single correlation value, and the one I choose to discuss is 22.5 degrees, because that (along with its sister angle 67.5 degrees) is where the differences between the realistic expectation value, the QM expectation value, and the actual experimental values are most highlighted. To be specific, where A is a detection at one polarizer and C is a detection at the other:

[X1] A+ C+
[X2] A+ C-
[X3] A- C+
[X4] A- C-

That would be (X1+X4)/(X1+X2+X3+X4). Please note that we don't care at all about loopholes or other practical issues, just the LR and QM expectation values and whatever it is that Aspect is giving us a measurement of.

QM gives an expectation value of .8536, and Aspect measured a value very close to this. By my calculation, one should assert a LR expectation value of the correlation at that angle of .7500 if you want to be within the Bell Inequality range.
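In code, the statistic and the QM expectation are simply this (a Python sketch; the counts below are made up for illustration):

[code]
import math

def correlation(x1, x2, x3, x4):
    # fraction of coincidences with matching outcomes: (X1 + X4) / total
    return (x1 + x4) / (x1 + x2 + x3 + x4)

print(math.cos(math.radians(22.5)) ** 2)  # QM expectation: 0.8536...
print(correlation(427, 73, 73, 427))      # 0.854 for hypothetical counts
[/code]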

For Caroline: I will explain how I get this in a separate thread. I wonder if you will agree with .7500.
 
  • #113
DrChinese said:
Despite what you (and others) might think, you don't need to change polarizer settings in flight or otherwise vary the angles to test Bell's Theorem.
[I never suggested that changing during flight mattered, since your point (a) applies.]

DrChinese said:
... d. And finally, if you are a local non-realist like me :) then you already believe that the only "real" component being measured is the angle between the remote polarizers anyway i.e. the measurement is fundamental to the process.
What a curious belief! The detector angles are set by the experimenter so are of very little interest indeed to me.

I have met on Wikipedia one other person (Frank Wappler) who seems to have thought like you, though, so I have learned to follow the idea. Somehow you use the QM prediction in reverse, deducing the angle from the coincidence rates, but what is the point? And how can you modify the idea to cover the case where you do not have rotational invariance, so that the coincidence rate is not a function of the difference in angles?

DrChinese said:
... Please note that we don't care at all about loopholes or other practical issues, just the LR and QM expectation values and whatever it is that Aspect is giving us a measurement of.
But if you want to talk about what Aspect measured you cannot avoid the matter of accidentals! He measured one set of counts, subtracted a substantial number of accidentals, and then calculated his Bell test statistic.

DrChinese said:
QM gives an expectation value of .8536, and Aspect measured a value very close to this.
I would advise turning to my paper http://arxiv.org/abs/quant-ph/9903066 if you want the true figures.

DrChinese said:
For Caroline: I will explain how I get this in a separate thread. I wonder if you will agree with .7500.
You have succeeded for now in confusing me! However, I trust my papers and look forward to your new thread.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #114
Caroline Thompson said:
But if you want to talk about what Aspect measured you cannot avoid the matter of accidentals! He measured one set of counts, subtracted a substantial number of accidentals, and then calculated his Bell test statistic.

It matters to you, but doesn't matter quite so much to the rest of us. Why?

Because, as I and others said above, it is the totality of the evidence that matters, and Aspect's results are just one part of it. It is in the context of this totality that your arguments fall apart. When the arguments for both sides are viewed in this context, LR fails.

More importantly: after Bell, I would not accept local realism even if an experiment were never performed! What else is Bell but this point? So after Bell sank in, the debate about LR was over for most scientists anyway. That is why Aspect was the final nail in the coffin. If you totally unwound Aspect, it still would not change most scientists' minds in favor of LR. That is why your "loopholes" are off base.

You need an experiment in favor of a different value for the observed correlations to convince anyone at this point. There is obviously an observed pattern: predict it and measure it! That is what Aspect did.
 
  • #115
DrChinese said:
For Caroline: I will explain how I get this in a separate thread. I wonder if you will agree with .7500.

(After looking at Caroline's appendix C, "Integration of the standard realist formula", in http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066.pdf, which uses the optical approach)

She will probably agree to one of the following, depending on the situation:


A) 0.8536 (!)

In the situation you seem to sketch, where the photon source is
linearly polarized at 0 degrees. In this case Malus' law gives
[itex] \cos^2(0) \cos^2(22.5)[/itex], where the two normalized intensities (= photon
rates) after the polarizers are multiplied with each other to give
the coincidence rate.


B) 0.67677

In the situation handled in appendix C: the polarization of the
photons is random (but equal for the two entangled photons).
In this case one must apply Malus' law for all angles [itex]\lambda[/itex] and then
integrate over them:

[tex] A+B+ \ \ = \ \ \frac{1}{2\pi} \int \cos^2(\lambda-0) \cos^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} + \frac{1}{8} \cos(45) \ \ = \ \ 0.338388[/tex]
[tex] A+B- \ \ = \ \ \frac{1}{2\pi} \int \cos^2(\lambda-0) \sin^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} - \frac{1}{8} \cos(45) \ \ = \ \ 0.161611[/tex]
[tex] A-B+ \ \ = \ \ \frac{1}{2\pi} \int \sin^2(\lambda-0) \cos^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} - \frac{1}{8} \cos(45) \ \ = \ \ 0.161611[/tex]
[tex] A-B- \ \ = \ \ \frac{1}{2\pi} \int \sin^2(\lambda-0) \sin^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} + \frac{1}{8} \cos(45) \ \ = \ \ 0.338388[/tex]


Adding A+B+ and A-B- then gives 0.67677
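A quick numerical check of these integrals, as a Python sketch using scipy, in case anyone wants to verify the figures:

[code]
import numpy as np
from scipy.integrate import quad

a = np.radians(22.5)  # relative polarizer angle

# average the Malus-law products over a uniformly random polarization angle
pp = quad(lambda l: np.cos(l)**2 * np.cos(l - a)**2, 0, 2*np.pi)[0] / (2*np.pi)
pm = quad(lambda l: np.cos(l)**2 * np.sin(l - a)**2, 0, 2*np.pi)[0] / (2*np.pi)

print(pp, pm)  # 0.338388 and 0.161612, matching the integrals above
print(2 * pp)  # A+B+ plus A-B- = 0.676777
[/code]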


Regards, Hans

P.S. The setup in (B) is also the one I presumed in the 3 photon
experiment with three instead of two polarizers. This is basically
the optical approach for the calculation of the intensities and
then presuming that the photon rate = intensity.

P.P.S: Tip: use http://integrals.wolfram.com/ to get the integrals.
 
  • #116
Hans de Vries said:
She will probably agree to one of the following, depending on the situation:


A) 0.8536 (!)


B) 0.67677

In the situation handled in appendix C: the polarization of the
photons is random (but equal for the two entangled photons).
In this case one must apply Malus' law for all angles [itex]\lambda[/itex] and then
integrate over them:

[tex] A+B+ \ \ = \ \ \frac{1}{2\pi} \int \cos^2(\lambda-0) \cos^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} + \frac{1}{8} \cos(45) \ \ = \ \ 0.338388[/tex]
[tex] A+B- \ \ = \ \ \frac{1}{2\pi} \int \cos^2(\lambda-0) \sin^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} - \frac{1}{8} \cos(45) \ \ = \ \ 0.161611[/tex]
[tex] A-B+ \ \ = \ \ \frac{1}{2\pi} \int \sin^2(\lambda-0) \cos^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} - \frac{1}{8} \cos(45) \ \ = \ \ 0.161611[/tex]
[tex] A-B- \ \ = \ \ \frac{1}{2\pi} \int \sin^2(\lambda-0) \sin^2(\lambda-22.5) d\lambda \ \ = \ \ \frac{1}{4} + \frac{1}{8} \cos(45) \ \ = \ \ 0.338388[/tex]


Adding A+B+ and A-B- then gives 0.67677


Regards, Hans

P.S. The setup in (B) is also the one I presumed in the 3 photon
experiment with three instead of two polarizers. This is basically
the optical approach for the calculation of the intensities and
then presuming that the photon rate = intensity.

P.P.S: Tip: use http://integrals.wolfram.com/ to get the integrals.

Way to go, Hans! That is what I am talking about: a specific value, pulled from the opposition, that is within the Bell Inequality range. I don't think I made it clear above, but the .7500 value I mentioned was the highest value (by my estimate) that this could be and still satisfy Bell's Theorem. The .6767 value is a great start. I am sure you can see that it sets up a great reference point - there is naturally a huge difference between it and the QM and experimental values! After all of Caroline's complaints about the errancy of QM, and about why Aspect's results are no good even though they match QM to the penny... she now has to explain why experiments do not support her predictions. Of course, I have a feeling we will see a bit of waffling on this point. She is quick to talk down Aspect; let's see her come forward with something positive for the debate rather than negative. Perhaps some experimental results?
 
  • #117
There's much more information on Aspect's experimental setup here:

http://chaos.swarthmore.edu/courses/phys6_2004/QM/17_EPR_Bell_Details.pdf

It shows that the experiment uses Wollaston prisms instead of the
usual polarizers (page 310), and that the total setup has 2x4 detectors
instead of 2x2 (page 316). (Like you suggested: let's forget the
in-flight switching.)

It also seems (on page 310) that the Wollaston prisms don't have
an angle-dependent intensity loss like polarizers have.


Regards, Hans
 
  • #118
Hans de Vries said:
There's much more information on Aspect's experimental setup here:
http://chaos.swarthmore.edu/courses/phys6_2004/QM/17_EPR_Bell_Details.pdf

It shows that the experiment uses Wollaston prisms instead of the
usual polarizers (page 310) and the total setup has 2x4 detectors
instead of 2x2 (page 316).

From what you say it appears that the paper covers only one of Aspect's experiments, or, possibly, confuses them all.

His first, Physical Review Letters 47, 460 (1981), used parallel-plate polarisers, with only the '+' results counted. It used the CH74 Bell inequality (see http://en.wikipedia.org/wiki/Clauser_and_Horne's_1974_Bell_test).

The second, Physical Review Letters 49, 91-94 (1982), used Wollaston prisms (though he did not call them by that name; they were "polarising cubes"). These have two outputs, and both '+' and '-' outcomes are counted. It used the CHSH test (see http://en.wikipedia.org/wiki/CHSH_inequality).

The third, PRL 49, 1804 (1982), used parallel plates again. This was the one with time switching: two possible routes for the beam on each side, leading to detectors set at two different angles. The path was switched effectively randomly between the two routes, so that for each experimental run there were four counts to be analysed, but logically the setup is (from a local realist point of view) just the same as his first. Only '+' results are counted. It again used the CH74 test.

It is the first that I have analysed (in http://arXiv.org/abs/quant-ph/9903066) with and without subtraction of accidentals, this being the only one for which sufficient data is available (from Aspect's PhD thesis).

Incidentally, Aspect presents a pretty comprehensive description of his experiments at:
A. Aspect, “Bell’s theorem: the naïve view of an experimentalist”, Text prepared for a talk at a conference in memory of John Bell, held in Vienna in December 2000. Published in Quantum [Un]speakables – From Bell to Quantum information, R. A. Bertlmann and A. Zeilinger (eds.), (Springer, 2002); http://arxiv.org/abs/quant-ph/0402001

As far as I remember, though, his coverage of the subtraction of accidentals leaves much to be desired (if, indeed, it is mentioned at all?). More interestingly from my point of view, it is clear from his description of the different Bell inequalities that, by using for the CH74 inequality the derivation given in the 1969 paper instead of the simpler one of 1974, he has persuaded himself that the CH74 test is at least as bad as the CHSH one when it comes to the detection loophole. As far as the logic covered by my Chaotic Ball model is concerned, this is not true.

I discuss the matter in:
and

Hans de Vries said:
(Like you suggested: let's forget the
in-flight switching.)

It also seems (on page 310) that the Wollaston prisms don't have
an angle-dependent intensity loss like polarizers have.

What's the difference supposed to be between Wollaston prisms (which were used, incidentally, by Weihs et al. in their experiment with a more genuinely random switching system) and a polarising cube?

By a "polariser", do you mean something such as a polarising filter that you can use on sunglasses?

Caroline
 
Last edited by a moderator:
  • #119
DrChinese said:
Way to go, Hans! That is what I am talking about: a specific value, pulled from the opposition, that is within the Bell Inequality range. I don't think I made it clear above, but the .7500 value I mentioned was the highest value (by my estimate) that this could be and still satisfy Bell's Theorem. The .6767 value is a great start. I am sure you can see that it sets up a great reference point - there is naturally a huge difference between it and the QM and experimental values! After all of Caroline's complaints about the errancy of QM, and about why Aspect's results are no good even though they match QM to the penny... she now has to explain why experiments do not support her predictions. Of course, I have a feeling we will see a bit of waffling on this point. She is quick to talk down Aspect; let's see her come forward with something positive for the debate rather than negative. Perhaps some experimental results?

You're quite right, and this is all covered in my various papers. The calculated classical prediction is for the "perfect" case, with detectors such that the probability of detection is exactly proportional to the input intensity, and with the light emerging from the polarisers with intensity exactly complying with Malus' Law. In the situations in which the detection loophole pushes the local realist value up above the Bell limit, what must be happening is that either Malus' Law is not quite appropriate and/or the detector response is not quite proportional to the input intensity. The discrepancies, between them, cause the effective law to be not exactly Malus' Law, which depends on cos^2, but a slightly different one, in which the troughs of the cos^2 curve are relatively wide. The same mathematics (Appendix C of http://arXiv.org/abs/quant-ph/9903066) then leads to higher visibilities for the coincidence curves.
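To illustrate the mechanism, here is a short numerical sketch in Python; the cos^4 law below is just a made-up stand-in for a response with widened troughs, not a fitted model:

[code]
import numpy as np
from scipy.integrate import quad

def visibility(response):
    # coincidence visibility for a given single-side response law,
    # averaged over a uniformly random shared polarization angle
    def rate(a):
        return quad(lambda l: response(l) * response(l - a), 0, 2*np.pi)[0] / (2*np.pi)
    cmax, cmin = rate(0.0), rate(np.pi / 2)
    return (cmax - cmin) / (cmax + cmin)

malus = lambda l: np.cos(l)**2  # ideal Malus-law response
sharp = lambda l: np.cos(l)**4  # hypothetical law with widened troughs

print(visibility(malus))  # 0.5
print(visibility(sharp))  # ~0.84, above the local realist limit
[/code]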

You need, incidentally, to be very careful when interpreting the published figures. All will have been "normalised" in some manner, and some will in addition have had accidentals subtracted.

Anyway, if you read my various papers you will find that here and there I suggest how the experiments could be extended so as to give local realism a chance of showing its colours. It will make different predictions from QM if, for example, you vary the beam intensity and/or the kind of detector used. Before any actual figures can be predicted, the local realist model needs to be completed by inserting (empirically-determined?) functions to replace the assumed cos^2 terms. Even without any quantitative predictions, though, we can make the qualitative one that the CHSH statistic will (other things being equal) increase as you decrease detector efficiency, and decrease as you increase it.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #120
Caroline Thompson said:
Yes, but I think it best not to discuss SED here. It's best not to have any preconceived theory other than a general framework of local causality and a wave model of light. Both are, after all, supported by vast amounts of evidence.

Well, my viewpoint is exactly the opposite! If your aim is to show that, for specific setups, explanations other than the one given by QM are possible, I won't argue with this; the scientific method doesn't, in any way, allow one to say the opposite. Even if your claim is that experiments didn't rule out LR, I will agree with you, and even say that I don't care too much.
The scientific method requires you to have a theory that can spit out numerical predictions of measured quantities in experiments, and we have one such theory, namely quantum theory. If you want to propose something else, you must come up with a specific theory, and then we'll compare. First we'll compare with all established results where QM gave the right answer, and see if your theory does the same. And, as I pointed out, there's a huge amount of data to be explained: spectroscopy of atoms and molecules, quantum chemistry, solid state physics (semiconductors, phonons...), optics, particle physics... Remember that ALL of this forms, within the framework of quantum theory, one single machinery. You should come up with a viable alternative, from which we can calculate predictions in all the above-mentioned cases.

Caroline Thompson said:
Yes, any challenge to QM means re-writing a great deal of physics, but I think it needs to be done. In optical areas I don't see it presenting any problem. Clearly when it comes to modelling actual particles there are going to be difficulties. I don't think they are insuperable, but, be this as it may, I think QM makes a big mistake in trying to apply the same theory to optics as it does to particles. Why not, as a start, just hive off optics from QM and return it to classical physics?

What do you gain? First of all, I'd say that the more unified your view of the physical world, the better. But OK, let's go for it.

So you want to save local realism. You know that if you keep the superposition principle of quantum mechanics, you are going to have at least a theoretical problem (cf. Bell's inequalities and the quantum predictions, which are not limited to optics, or even to spin). So taking out electromagnetism still leaves you with exactly the same conceptual problem with, say, electrons (which is, however, much harder to test experimentally).

There's no discussion about the wavelike nature of "particles" (electrons, ...).
I'm into thermal neutron stuff right now, and what we do all day is diffraction of neutrons on matter (crystals, soft matter, etc.). So you will have to accept some wavy matter stuff a la Schroedinger. But single-particle waves will do fine for you. However, multiparticle superpositions are going to be unacceptable for you (they automatically lead to entanglement).
This already gives you a serious problem in the prediction of, say, the helium spectrum, where there is a significant difference between the prediction of the lines with and without the so-called "cross terms". You'll have to find a way to reproduce the results of QM, without using it, using single-particle matter waves or something of the kind. In a similar way, the quantum prediction of bonding and anti-bonding orbitals in molecules (which works out very well in quantum chemistry) is entirely based upon entangled electrons.

You will object that this is microscopic, and that there, you can use QM. But then you have to explain to me why you can use multiparticle superpositions there, and not when it menaces local realism?
Worse: if you go to solid state physics, you get massive entanglement of electrons, giving rise to most of semiconductor behaviour. So again, why can we use it there, but not when it doesn't suit you?

You are going to have one hell of a difficult task, and it is not sufficient to demonstrate that certain properties you don't like in QM might not be absolutely essential: you will have to put a hard alternative on the table and do the calculations. Personally, I'm so convinced that it won't work that I cannot spend much time on it. But your mindset is different, so why don't you go ahead? After all, if you find ways to do so, maybe they lead to calculations which are much easier than in QM, and maybe that opens up methods and techniques to tackle problems that are, today, too hard to solve the QM way. So you would not only be famous, you'd also be very rich: think of all the chemical and pharmaceutical companies that would like to use your faster molecular modelling!

Next step: electron-positron annihilation.
The only way people have found to reconcile the wavelike behaviour of matter, the lumpedness of matter (the energy-momentum relationship), and pair creation/annihilation is a quantum field. Feel free to think up another technique. This is an honest challenge. People don't know - in the sense of being completely ignorant - how to describe the behaviour of electrons in any way other than with a quantum field. Maybe there are other ways; good luck.
Also, people have found only one way to make a quantum field interact with EM, and that is by considering EM also as a quantum field. If you do that, you describe very well e+/e- annihilation and all other particle interactions.

The problem is that, if you accept special relativity, there is no difference between the gamma pulse that comes out of this annihilation and a light pulse in an optics system (Doppler effect). So the description should be the same. But the description that works very well in the case of e+e- annihilation is quantum field theory, so it is a logical inconsistency to set optics apart. IT IS NOT POSSIBLE TO SET OPTICS APART FROM THE REST OF PHYSICS.

Either you rewrite all of it, or you rewrite none of it.

This is science, so nobody stops you from doing so. Nothing is carved in stone. But you should realize the scope of the undertaking. I wouldn't bet on it, honestly.

Caroline Thompson said:
The main reason for denying the photon is that I have never seen any evidence that it exists!

No, the main reason you deny it is that accepting it would take away all the ammunition you can fire at the loopholes of Aspect-like experiments, and you want to cling onto LR at all costs. You don't seem to have similar conceptual difficulties with, say, the chemical bond, or the quantum Hall effect.
The evidence for the photon is that it is part of a theory which turns out to be successful in atomic and molecular physics, solid state physics, elementary particle physics, and nuclear physics, and we haven't got the slightest bit of a clue how we could achieve a similar success without it. I repeat myself: nobody stops you from trying, as long as you realize what adventure you are embarking on :smile:

Caroline Thompson said:
I can't find any mention of these authors in arxiv.org so can't get hold of the paper without considerable effort. From what you say, though, it sounds as if it is no different from a number of other experiments, and I should dearly like to know just what beamsplitter they used.

You're right, it is such a cube. Have a look at their website
http://people.whitman.edu/~beckmk/QM/

Maybe you can propose that they change it for a half-silvered mirror.


Caroline Thompson said:
I don't know quite what you mean here. There is not supposed to be any essential difference in the nature of light output by PDC from light produced by, say, a laser, if you look at just one or other of the output beams.

What I meant was the following:
the classical picture of the two photons coming out of a PDC is just a correlated pulse in intensity, and in fact, the only reason for using the PDC is to have a time correlation of the intensity peaks in the two beams.
If photodetectors just produce clicks with a probability given by the incoming intensity - so-called square-law detectors (the only way to match the photon count rate to the classical intensity) - and we consider classical waves, there are two things that can happen at the beam splitter:

Either it splits the intensity into two halves (that's the classical description of a beam splitter), but that would mean that both photodetectors see the same intensity. Given a finite efficiency, neither can click, one can click, or both can click, and given the square law, the number of cases where both click is a function of the number of cases where one clicks. THIS IS NOT OBSERVED EXPERIMENTALLY.

Or something funny happens, and sometimes the whole intensity is sent left, and sometimes right. So only one detector can click at a time. THIS IS OBSERVED. However, if the beamsplitter sends intensity pulses sometimes to the right and sometimes to the left, then the same beamsplitter cannot give rise to any (classical) interference! Nevertheless, interference has been demonstrated for these beamsplitters by every first-year student.

You cannot at the same time have an equal-intensity split that gives rise to interference without, using square-law detectors, also generating a very predictable number of double clicks. The very fact that these double clicks are absent illustrates that what is observed is a one-photon state that has no classical description.
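The square-law argument can be made concrete with a toy Monte Carlo in Python; the exponential intensity distribution is an arbitrary assumption, but any classical intensity distribution gives a ratio of at least 1:

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
intensity = rng.exponential(1.0, size=n)  # classical pulse intensities (toy model)
eta = 0.05                                # small detection-efficiency scale

# classical 50/50 split: each square-law detector sees half of every pulse
p = np.clip(eta * intensity / 2, 0.0, 1.0)
a = rng.random(n) < p
b = rng.random(n) < p

# double-click rate relative to independent singles; prints ~2 here,
# and is >= 1 for ANY classical intensity distribution, whereas
# single-photon anticorrelation experiments observe << 1
print((a & b).mean() / (a.mean() * b.mean()))
[/code]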
 
  • #121
vanesch said:
Have a look at their website
http://people.whitman.edu/~beckmk/QM/

Experiments like the Quantum Eraser may turn out to be much
more effective in convincing people:

http://people.whitman.edu/~beckmk/QM/qe/qe.pdf

Switching an interference pattern on and off at one place
by manipulating a [itex]\lambda/2[/itex] half-wave plate at another place,
in another beam, that went the other way, never to return
to the place where the interference happens...


Regards, Hans
 
  • #122
To add one small comment to vanesch's post (and thanks to Dr Chinese for finding those 'old' threads): fortunately, eating an elephant is easier if you take it one bite at a time.

If Caroline (or anyone else) wants to develop an alternative, it may be sensible to start with something 'easy', and just do an order-of-magnitude (OOM, a.k.a. back-of-the-envelope) calculation. It's highly likely that if your favourite alternative doesn't come within an OOM or two, it won't work out when you do the detailed calculations ... best to avoid wasting more time on it; put it to one side, and try another. The good thing about OOMs is that they can often (usually?) be done in a day or two.
 
  • #123
Nereid said:
If Caroline (or anyone else) wants to develop an alternative, it may be sensible to start with something 'easy', and just do an order-of-magnitude (OOM, a.k.a. back-of-the-envelope) calculation. It's highly likely that if your favourite alternative doesn't come within an OOM or two, it won't work out when you do the detailed calculations ... best to avoid wasting more time on it; put it to one side, and try another. The good thing about OOMs is that they can often (usually?) be done in a day or two.


I am indeed absolutely in favour of that - it would be extremely exciting to see a working alternative. My personal problem with it is one of motivation: I'm so convinced that it won't work out that I cannot spend much effort on it. But people like Caroline, who are convinced that 99% of all physicists have been deluding themselves for about 80 years now, should jump on the enormous opportunity that presents itself to them. I tried to point that out. My intuition would be that any local realist theory would be computationally simpler than quantum theory, and if that is the case, it would be a revolution in computational chemistry, solid state physics and so on. Their methods would be monstrously more efficient. Think of the power of it, to model, say, macromolecules and their interactions!

However, they concentrate just on arguing that "there might STILL be possibilities to develop alternatives", without presenting any. The only one I've seen is Stochastic Electrodynamics. This is essentially Maxwell's electrodynamics, together with the postulate that we are exposed to radiation which comes down to half a photon in each mode, and the argument that photodetectors are calibrated to "observe" only what goes beyond this intensity. I know a bit about it, but before it can convince me, it should first predict "basic" stuff where intuition says it will have difficulties, such as thermodynamics (given the huge flux of energy, how come we don't boil off any glass of water in no time - probably naive, but these things should be addressed). I'm not very impressed with its success in exploiting the efficiency loopholes in the Aspect-like experiments, because _it was invented for that purpose_. I would be more impressed if it were shown to fit in with the rest of physics.
Stochastic Electrodynamics, together with a classical Dirac field, is supposed to supplant QED, and it seems to make a correct prediction of the Lamb shift (Barut) (I didn't verify it) if you leave out the stochastic part :smile:; however, it doesn't even arrive at predicting the existence of the electron, and I have challenged them to come up with the correct spectrum for helium.

So we get a lot of blahblah about how there are still possibilities for local realist theories etc., but we don't get to see any that work! It is not even a matter of OOM calculations; there simply ISN'T any proposal of an alternative. Just an argument that its potential existence is not yet 100% ruled out. Big deal.

cheers,
Patrick.
 
  • #124
vanesch said:
I am indeed absolutely in favour of that - it would be extremely exciting to see a working alternative. My personal problem with it is one of motivation: I'm so convinced that it won't work out that I cannot spend much effort on it. But people like Caroline, who are convinced that 99% of all physicists have been deluding themselves for about 80 years now, should jump on the enormous opportunity that presents itself to them.

The classical explanation of actual "quantum eraser" experiments does not require any new model, once you've allowed for the properties of outputs from PDC sources. I have no theory of the physics of what goes on in a nonlinear crystal, but I have reason to think that nobody else has either! I disagree with the Stochastic Electrodynamics explanation as well as with quantum theory. Can't we go back to the situation that existed 200 years ago, when people first started trying to explain the polarisation properties of "Iceland spar"? Various people had various ideas, which were discussed and tested. It was admitted that we did not know the truth!

Anyway, the absence of a satisfactory theory of the physics of the interaction of light with nonlinear crystals does not prevent us using them for interesting experiments. Instead of theory, though, we have to rely on the observed behaviour to find the empirical laws governing the output.

Unfortunately the quantum theorists have, it seems, from an early stage decided on their model and insisted on interpreting all that they see within this narrow framework! This has led them into all sorts of apparent paradoxes, quantum erasers being just one of them.

My understanding of the properties of a particular class of PDC output -- that produced in the "degenerate case", when the frequencies of both "photons" are the same -- was initially a logical deduction from experiments on "induced coherence". The key properties are covered in:
Thompson, C H, “Rotational invariance, phase relationships and the quantum entanglement illusion”, http://arxiv.org/abs/quant-ph/9912082
and a paper I'm on the point of putting on my web site.

I think I'd better break off from PhysicsForums to do this! The paper is:
Homodyne detection and parametric down-conversion: a classical approach applied to proposed “loophole-free” Bell tests​

Don't worry too much if you've never met "homodyne detection" before. If you haven't met parametric down-conversion, though, perhaps now is the time to remedy the situation!

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #125
Caroline Thompson said:
Unfortunately the quantum theorists have, it seems, from an early stage decided on their model and insisted on interpreting all that they see within this narrow framework!

And guess what? This narrow framework has, up to now, always correctly predicted all experimental outcomes (which is the aim of a scientific theory). I'd be happy to learn of any narrow or broad framework which does the same, for the reasons I already explained: it might open up new ways of handling problems, lead to new calculational techniques, etc.
Unfortunately, I don't know of ANY other such framework.
 
  • #126
vanesch said:
And guess what? This narrow framework has, up to now, always correctly predicted all experimental outcomes (which is the aim of a scientific theory). I'd be happy to learn of any narrow or broad framework which does the same, for the reasons I already explained: it might open up new ways of handling problems, lead to new calculational techniques, etc.
Unfortunately, I don't know of ANY other such framework.

When it comes to "quantum optics" experiments, the framework you need is local realism plus a classical wave model of light plus empirical functions to model the behaviour of apparatus such as beamsplitters and detectors. Oh, and you also need empirical functions to model the output from pumped nonlinear crystals. Given these, there are no special calculational difficulties.

I wonder if you have encountered the following useful little handbook on polarisation?
Shurcliff, W. A. and Ballard, S. S., "Polarized Light", Van Nostrand, 1964
You can deduce from this how the notions of "projection operators" and the use of matrices came into quantum theory. They were there already in classical theory. The difference is that in classical theory it is accepted that the matrices won't give you exactly correct answers -- that you have to use empirical results in real applications.
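For illustration, here is Malus' Law emerging from the classical projection-matrix (Jones calculus) formalism - an idealised Python sketch, of course, since in real applications the matrices need empirical corrections:

[code]
import numpy as np

def polarizer(theta):
    # Jones projection matrix for an ideal linear polarizer at angle theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

E_in = np.array([1.0, 0.0])            # field polarized along x
E_out = polarizer(np.radians(22.5)) @ E_in
print(np.linalg.norm(E_out) ** 2)      # transmitted intensity: cos^2(22.5 deg) = 0.8536
[/code]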

Anyway, further to this morning's message, you can now see my new paper on my web site. I had hoped to get it in HTML as well as PDF format, but I think maybe I've exceeded my web space: the diagrams in the HTML version don't work. The paper illustrates once again my approach to the analysis of real optical experiments. I don't attempt to analyse any other kind, but I strongly suspect that something equivalent is needed in other areas of fundamental physics.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
