Aspect's Experiment Was Flawed

In summary, the validity of quantum mechanics has been established through many experiments, not solely Aspect's. QM accurately describes the world we live in, but it does not provide an explanation for it: it is a recipe or owner's manual, and it has been tested and proven to work through experiments ranging from the detailed description of black-body radiation to the discovery of spin and the use of QM in modern electronics. While some may question the philosophical implications of QM, its accuracy and usefulness have been demonstrated.
  • #71
ZapperZ said:
Hey, I like your website. It looks quite useful in the sense that you have the historical collection of the EPR stuff. I am definitely putting your site as one of the links in the Yahoo e-Group that I run, so thanks for the effort.
Zz.

Zz, let us know more about your e-group -- what's its name, and how does one join it?
 
  • #72
selfAdjoint said:
Sure, but an FTL signal is nonlocal, that is, it violates relativity. Quantum mechanics explains the experimental results without violating relativity, but you have to give up the notion that the particles "really" had a polarization before they were measured. Quantum mechanics says not; it says they were in a superposition of polarization states, which was neither one nor the other nor both, but rather something different made of the two possible states and already correlated at the time the particle pair was created.

Yes, this is how I understand that what Gell-Mann calls "the modern interpretation of quantum mechanics" is supposed to work.

Also, in a short overview article Zurek wrote called "Decoherence and the Transition from the Quantum to the Classical", he says:

And the experiments that show that such nonseparable quantum correlations violate Bell's inequalities (Bell 1964) are demonstrating the following key point: The states of the two spins in a system described by |Φc> are not just unknown, but rather they cannot exist before the "real" measurement (Aspect et al. 1981, 1982). We conclude that when a detector is quantum, a superposition of records exists and is a record of a superposition of outcomes—a very nonclassical state of affairs.

http://arxiv.org/abs/quant-ph/0306072

A superposition of records is an interesting conclusion! I think understanding what superpositions really mean is important in understanding EPR and the like. :smile:
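As a concrete illustration of "no polarization before measurement" (my own sketch, not from Zurek's paper): for an entangled pair, each photon on its own carries no definite polarization, yet the joint state is perfectly correlated. Tracing one photon out of the Bell state leaves the other in the maximally mixed state.

```python
import numpy as np

# |Phi> = (|HH> + |VV>)/sqrt(2) in the basis |HH>, |HV>, |VH>, |VV>
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)                  # joint density matrix (pure state)

# Partial trace over photon B: reshape to (a, b, a', b') and sum over b = b'
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_A)   # [[0.5, 0.], [0., 0.5]] -- maximally mixed, no definite polarization
```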
 
  • #73
caribou said:
A superposition of records is an interesting conclusion! I think understanding what superpositions really mean is important in understanding EPR and the like. :smile:

This is the VERY reason why one cannot just learn physics, and especially QM, in bits and pieces. You cannot understand why an EPR-type experiment differs from simple classical conservation of angular momentum if you do not understand quantum superposition/Schrodinger Cat-type experiments. There is an interconnectedness to QM that is essential to it as a single, coherent picture. It has always been the single biggest source of frustration (at least on my part) when someone picks on one aspect of QM without bothering to understand all the connected ideas surrounding it.

You can't understand physics this way, and you sure as hell cannot understand quantum mechanics this way.

Zz.
 
  • #74
I haven't read through most of this thread, but it reminded me of the experiments done earlier this year by Shahriar Afshar (sp?) on a slit apparatus, where he detected a photon without collapsing its wavefunction. Another experimenter later came up with a rebuttal, but I think his experiment was intrinsically different in the way he identified the path of the photon.

Haven't really followed up on this though, so does anyone know what the final word on this is?
 
  • #75
Gokul43201 said:
I haven't read through most of this thread, but it reminded me of the experiments done earlier this year by Shahriar Afshar (sp?) on a slit apparatus, where he detected a photon without collapsing its wavefunction. Another experimenter later came up with a rebuttal, but I think his experiment was intrinsically different in the way he identified the path of the photon.

Haven't really followed up on this though, so does anyone know what the final word on this is?

The damn thing still hasn't appeared in any peer-reviewed journal, in spite of all the advance hype. :)

Zz.
 
  • #76
To follow with the "non-classical" points mentioned by ZapperZ and others above...

Bell's Theorem has portions which relate to both locality and reality. Specifically, the likelihood X of any correlation in a "classical" (local realistic) world must satisfy:

0 <= X <= 1

Every time you try to restore classical determinism - in any form - you still run into this point because the empirical evidence does not support the above constraint. In other words, it is not just that the angle between the polarizers determines the results of the experiment. It is "as if" the original photon polarity was exactly matched (or anti-matched) to one or the other of the polarizers, and no other polarizer angle setting.
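For concreteness, here is a quick numerical sketch of the same point in its usual CHSH form (my own illustration, not part of Bell's own derivation): any local realistic model must satisfy |S| <= 2, while the QM correlation cos 2(a-b) for polarization-entangled photons gives S = 2*sqrt(2) at the standard angle settings.

```python
import numpy as np

def E(a, b):
    """QM correlation for polarization-entangled photons with
    polarizers at angles a and b (radians)."""
    return np.cos(2 * (a - b))

# Standard CHSH settings: 0 and 45 deg for Alice; 22.5 and 67.5 deg for Bob
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.4f}")   # 2.8284 = 2*sqrt(2), beyond the local-realistic bound of 2
```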

You don't get these results from the "Theory of Elementary Waves" as I read it. He doesn't really discuss this point while trying to sell his interpretation. Moreover, in specific cases it no more predicts the observed polarity than the man in the moon does. So where is his claimed local determinism anyway? (As an aside, Lewis Little also claims that General Relativity does not stem from the curvature of space-time.) Ultimately, this paper goes nowhere when it acknowledges Bell as correct but denies Aspect. This is just another paper doing that same job, and all fall victim to the same reality:

a) Aspect is repeatable;
b) The results are clearly in line with the predictions of QM;
c) The results are outside the predictions of classical locality;
d) The greater the precision of the experiment, the greater the disagreement between classical and quantum worlds.
 
  • #77
There are no "action at a distance" terms in the Standard Model.
They would simply render the path integral mechanism useless by
interconnecting all points in space-time.

We are dealing with a QM interpretation issue here, and there is
a certain element of denial of the QM laws which then leads to
these "action at a distance" conclusions:


What QM says:

Position and momentum are not defined to arbitrary accuracy simultaneously.
The spread in one is inversely proportional to the spread in the other.

What people often think: (the denial in my opinion)

Yes, but somehow both must still be there accurately. And we've
got the freedom to decide which one we measure accurately. It is
only if we measure one accurately that we cannot measure the
other one accurately.


So, in this interpretation it becomes a measurement issue. And then
the problem arises that the measurements not only exclude each
other locally, but also at any distance.

Heisenberg's position/momentum and time/energy relations are best
handled as a property of the Fourier transform. There are many
ordinary situations which are governed by the same rule: the spectrum
analyser on your audio set cannot determine the audio spectrum in
an infinitesimally small amount of time. There's nothing mysterious here.

If momentum is defined by a Fourier component then Heisenberg's
law follows automatically. If taken literally, then there's no way that
"somehow, both must still be there accurately".

If two particles obtain the same spread (during entanglement) in
their spin components, then the outcome of the experiments will
show the correlation we see in today's experiments, without the
need for any action at a distance.

Each individual particle must be presumed to have a certain
spread in one physical quantity, and an opposed spread in the
other associated quantity.

The spread in, for instance, position is lost if a particle hits the
wall at an exact x,y position. This doesn't mean that the spread
was not there. It's wrong to say that we have chosen to
"measure the position exactly". It could have hit the wall at another
nearby position.

The x,y measurement has an error from the average position, and
only repeated measurements will reveal the size of the error and
the spread. There is no way that the experiment can reduce the
spread the particle had during flight (at the expense of the
spread of the momentum).

Still, it's this interpretation, that we have chosen to measure the
position exactly, that leads to the idea that "both quantities must
somehow still be there accurately" and "we can choose which of the
two we measure exactly at the expense of the other".



Regards, Hans.
 
  • #78
ZapperZ said:
The damn thing still hasn't appeared in any peer-reviewed journal, in spite of all the advance hype. :)

Zz.

What! :eek: I looked around and couldn't find it anywhere, but thought I wasn't trying hard enough.
 
  • #79
You mean this?

http://www.irims.org/quant-ph/030503/

Hmmm... Afshar's experiment is a bit like the standard textbook two-slit experiment, but with wires in the dark bands, the screen removed, and interferometer arms added to keep the particle's superposition until a much later detection.

The "detection" of the interference pattern by not interacting with the wires involves... er... no interaction, so it doesn't sounds like a true violation of complementarity to me.
 
  • #80
danitaber said:
I would like to take a moment to remind everyone of the basic fact that Quantum Mechanics does not explain the world we live in, it just accurately describes it. It is much like an owner's manual or (and this is overused, but I'll use it again) a recipe. The point is, it works. The previous posts do a better job of explaining why and how, so I'll leave that to them.

Maybe I'm wrong, but you can see that QM is absolutely epistemological just by looking at the math. One could think that the answer you get from asking (measuring) a system (at the quantum level) means nothing: the answer comes as an eigenvalue of the operator related to the measurement, and an eigenvalue may be pictured as a representation of the operator in the corresponding eigenspace, so you would just have the same question in a new form. But the whole system collapses to the corresponding eigenspace, so it is not just a new form of the question; otherwise you couldn't have the eigenvalue as the desired answer, since the eigenvalue would then not be a representation of the operator. You end this line of thought by concluding that to measure is to shape the system into some eigenspace of the operator that corresponds to the variable you want.

Trying to figure out quantum mechanics is impossible if you take a realistic classical view. To measure is like asking somebody a question: the answer was not previously in his head, but comes as he starts to think about it; afterwards he has a personal position on the theme of the question. Noncommuting operators are like asking for the answer to a paradox: the answers to a paradox are simply contradictory, which means they rest on alternative grounds. It's like having two operators that don't share the same eigenspaces. I think almost all of the weird things about quantum mechanics can be pictured if you use human minds, or something like them, to explain it to the layman.
I think the final conclusion is: you can't have a reductionist or materialist approach if you want to grasp the fundamental concepts of contemporary physics.

thank you.
 
Last edited:
  • #81
I'm quite possibly repeating, but the experiments and bodies of knowledge supporting QM are legion: chemical bonding, molecular dynamics, theory of matter -- solid state physics, superconductivity, superfluids, semiconductors; atomic spectra, nuclear composition, the Lamb shift, the Casimir force, and on and on and on. Like it or not, QM will be around forever. Modified? Probably. Any change in the interpretation of QM will have a lot of explaining to do. There is an astonishing stability and solidity to QM, as it has been practiced. It's a great theory, maybe the best ever.
Regards,
Reilly Atkinson
 
  • #82
How about those "accidentals"?

DrChinese said:
... it is not just that the angle between the polarizers determines the results of the experiment. It is "as if" the original photon polarity was exactly matched (or anti-matched) to one or the other of the polarizers, and no other polarizer angle setting.

This would be true if the experiments really did behave as claimed, but doesn't the title of this thread imply that they don't? In any event, my own studies have confirmed that there are enough loopholes in all the actual experiments to allow for explanations using ordinary ideas about polarisation and the accepted way in which light, as an electromagnetic wave, interacts with polarisers.

DrChinese said:
You don't get these results from the "Theory of Elementary Waves" as I read it. He doesn't really discuss this point while trying to sell his interpretation. Moreover, it no more predicts the observed polarity than the man in the moon in specific cases. So where is his claimed local determinism anyway?

I agree that Lewis Little's ideas don't help, but why are you assuming that Aspect's experiments really did support quantum mechanics? I had hoped that you had realized that there were serious flaws.

DrChinese said:
a) Aspect is repeatable;
b) The results are clearly in line with the predictions of QM;
c) The results are outside the predictions of classical locality;
d) The greater the precision of the experiment, the greater the disagreement between classical and quantum worlds.

(a) is true and so, in a sense, is (b), though it might have been interesting to see more results using different settings for parameters such as beam intensities and detector efficiencies. (c), however, is not, since the results analysed were not the raw data but the data after subtraction of "accidentals". There is very good reason (as people in the field now agree) to think that this is not, in the context of Bell tests, a legitimate procedure. It can be shown (see http://arXiv.org/abs/quant-ph/9903066) that the raw results in Aspect's first experiment did not exceed the Bell limits. It is extremely likely that those of the third experiment did not do so either, but the data to check this is not available. The only experiment in which the subtraction played no significant part was the second, and this one, using 2-channel polarisers and the CHSH test, was subject to the "detection loophole". As my work (confirming that of Pearle in 1970) has shown, the use of this test and low efficiency detectors is not valid. See http://arXiv.org/abs/quant-ph/9611037 and other papers on my web site.

The net result is that none of Aspect's three experiments can be said to have truly violated Bell's inequality. None of the tests can be considered to be valid unless one accepts a number of assumptions that are, to a local realist, unacceptable.

I don't know on what grounds you claim (d). Knowing how the detection loophole works, one thing that does seem clear is that increased detector efficiency will increase the gap between the quantum theory prediction and reality! With 100% efficiency, no Bell inequality will (unless other loopholes are introduced!) be violated.

Caroline
 
Last edited by a moderator:
  • #83
Caroline Thompson said:
(a) is true and so, in a sense, is (b), though it might have been interesting to see more results using different settings for parameters such as beam intensities and detector efficiencies.

(c), however, is not, since the results analysed were not the raw data but the data after subtraction of "accidentals". There is very good reason (as people in the field now agree) to think that this is not, in the context of Bell tests, a legitimate procedure. It can be shown (see http://arXiv.org/abs/quant-ph/9903066) that the raw results in Aspect's first experiment did not exceed the Bell limits. It is extremely likely that those of the third experiment did not do so either, but the data to check this is not available. The only experiment in which the subtraction played no significant part was the second, and this one, using 2-channel polarisers and the CHSH test, was subject to the "detection loophole". As my work (confirming that of Pearle in 1970) has shown, the use of this test and low efficiency detectors is not valid. See http://arXiv.org/abs/quant-ph/9611037 and other papers on my web site.

The net result is that none of Aspect's three experiments can be said to have truly violated Bell's inequality. None of the tests can be considered to be valid unless one accepts a number of assumptions that are, to a local realist, unacceptable.

I don't know on what grounds you claim (d). Knowing how the detection loophole works, something that does seem clear is that increased detector efficiency will increase the gap between the quantum theory prediction and reality! With 100% efficiency, no Bell inequality will (unless other loopholes are introduced!) be violated.

Caroline

Hi Caroline!

a) & b) we are in sufficient agreement on.

c) The results of the experiments are clearly outside the predictions of local reality. That is why the Aspect experiment is important and why you look for loopholes. Your argument is that a closer look at the raw material might somehow show a different story, if you were allowed to exclude some of the runs. But the papers stand, and as such, you really can't argue that evidence has not been presented.

d) With the recent Innsbruck experiments showing greater precision and showing greater disagreement with your predictions, you really have to ask yourself how the greater disagreement is occurring if you are right all along. In other words, a reasonable person would have a hard time justifying a contrary position as the margin of error gets smaller but the differences get bigger. If your position is valid, greater counting efficiency should return us to the zone in which there is no violation of the Bell Inequality. That clearly isn't happening. Funny: the bias you allege in the experiments just happens to take us to the QM predictions, even though there is no obvious connection.

I freely acknowledge that there are tacit assumptions in the Bell Tests. Perhaps this should have been e). These may turn out to be loopholes, maybe not.

In other words: you could be right about Bell test loopholes. Perhaps future evidence will indicate that fair sampling is not happening. Or that the accidentals make the difference. Further, it is possible that once the hypothesized loopholes are plugged, local reality will emerge as a valid possibility again. But I think that my a) b) c) d) are a fair and accurate summary of where things are today, and you clearly have a big hurdle to overcome.
 
Last edited by a moderator:
  • #84
Recent Bell test experiments

ZapperZ said:
You DO know that EPR-type experiments have progressed SIGNIFICANTLY beyond the Aspect experiment, and that more accurate tests by Zeilinger & Co. have produced even more accurate confirmation of QM, don't you?

Yes, I most certainly do know about recent experiments. I also know that the search for a truly valid one (euphemistically termed a "loophole-free" one) is continuing, and I have just finished a paper based on:

R. García-Patrón Sánchez, J. Fiurácek, N. J. Cerf, J. Wenger, R. Tualle-Brouri, and Ph. Grangier, “Proposal for a Loophole-Free Bell Test Using Homodyne Detection”, Phys. Rev. Lett. 93, 130409 (2004)
http://arxiv.org/abs/quant-ph/0403191

This experiment (in marked contrast to any other recent one) really does look as if it would be loophole-free. Unfortunately, though, the argument they use to suggest that the light is going to be "non-classical" has serious flaws. I can show that the symptom they are going to use as an indicator of non-classicality is a natural consequence of the way homodyne detection works. The experiment will not, therefore, settle the matter one way or the other, as both quantum theorists and local realists will agree (once they've understood my paper!) that the whole thing is classical. Neither side will be surprised to find that the Bell test is not violated.

I'll be posting the paper soon on my web site, when the experts concerned have had a chance to review it.

Caroline
 
  • #85
DrChinese said:
Hi Caroline!

a) & b) we are in sufficient agreement on.

c) The results of the experiments are clearly outside the predictions of local reality.

[Correction: "The published results are clearly outside the predictions of local realism." But more of this later.]

DrChinese said:
That is why the Aspect experiment is important and why you look for loopholes. Your argument is that a closer look at the raw material might somehow show a different story, if you were allowed to exclude some of the runs.

Hmmm ... But you've got this the wrong way around! It is Aspect who, by subtracting accidentals, is effectively trying to exclude some of the runs. As my paper (http://arXiv.org/abs/quant-ph/9903066) explains, the raw data available (from his first experiment, though the same applies logically to his third) was well within the region expected under local realism. The same can be said of the first experiment published by the Geneva group showing long-distance correlations. They did not analyse the raw data, which did not infringe the Bell inequality. In later papers they published both raw and adjusted results, in recognition of the fact that I was right: the adjustment was suspect.
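A toy calculation (numbers invented, just to show the mechanism): accidental coincidences are uncorrelated, so they contribute E = 0 to each measured correlation and dilute it by the factor n_true/(n_true + n_acc). Subtracting them rescales every E upward, which can turn a raw S that respects the Bell limit into an adjusted S that "violates" it.

```python
import numpy as np

E_ideal = np.array([0.707, -0.707, 0.707, 0.707])   # four CHSH correlations at the QM angles
S = lambda E: E[0] - E[1] + E[2] + E[3]

n_true, n_acc = 6000, 4000     # invented: 40% of recorded coincidences are accidentals
E_raw = E_ideal * n_true / (n_true + n_acc)

print(f"S after subtracting accidentals: {S(E_ideal):.2f}")   # 2.83, 'violates' 2
print(f"S from the raw data:             {S(E_raw):.2f}")     # 1.70, does not
```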

As far as I know, no recent experiment has used adjusted data, but what they have done instead is use tests that rely on the fair sampling assumption.

DrChinese said:
d) With the recent Innsbruck experiments showing greater precision and showing greater disagreement with your predictions, you really have to ask yourself how the greater disagreement is occurring if you are right all along. In other words, a reasonable person would have a hard time justifying a contrary position as the margin of error gets smaller but the differences get bigger. If your position is valid, greater counting efficiency should return us to the zone in which there is no violation of the Bell Inequality. That clearly isn't happening. Funny: the bias you allege in the experiments just happens to take us to the QM predictions, even though there is no obvious connection.

Greater counting efficiency would help in some experiments, and, by counting something completely different but to which (if my analysis is correct) Bell's argument still applies, the latest proposed loophole-free test (http://arxiv.org/abs/quant-ph/0403191 -- see other message) manages to achieve 100% efficiency, in that every pair that is analysed produces a +1 or -1 result. They apply a "belt and braces" approach, having "event-ready" detectors as well as effectively 100% efficiency.

However, in most actual experiments there are other possible loopholes. How can you claim a "Bell test has been violated" when the assumptions on which that test is based are either clearly (in the view of realists) not valid or, at least, recognised as suspect?

DrChinese said:
In other words: you could be right about Bell test loopholes. Perhaps future evidence will indicate that fair sampling is not happening.

This is a matter of logic rather than the need for more experimental evidence, though the latter does come into the story. There are tests relating to "fairness" that could be done but are, in my opinion, either not being done at all or not being done appropriately. It is no use testing for constancy of the sample using only the angles used for the Bell tests, since everyone agrees that these are likely to be constant. They need to look at the total counts for the intermediate angles.

DrChinese said:
Or that the accidentals make the difference. Further, it possible that once the hypothesized loopholes are plugged, local reality will emerge as a valid possibility again. But I think that my a) b) c) d) are a fair and accurate summary of where things are today, and you clearly have a big hurdle to overcome.

OK! I've devoted over 10 years of my life to it so far and am prepared to continue until death or glory!

Caroline
 
  • #86
I think the viewpoint of Zurek and Omnes and quite possibly others like Gell-Mann and Hartle is that in EPR and similar experiments, there is a superposition of measurement outcomes in the measuring devices. This superposition then decoheres and one measurement result occurs. Or both occur if you like your many-worlds real. A lot like Schrodinger's Cat.

But that's really just my impression at the moment. :smile:

I'm wondering if an ideal von Neumann experiment could recreate a superposition in EPR and what this would mean. Something for me to think about, I guess.
 
Last edited:
  • #87
caribou said:
I think the viewpoint of Zurek and Omnes and quite possibly others like Gell-Mann and Hartle is that in EPR and similar experiments, there is a superposition of measurement outcomes in the measuring devices. This superposition then decoheres and one measurement result occurs. Or both occur if you like your many-worlds real.

That's close to my opinion on the issue, in that the von Neumann measurement occurs when the final observation of correlation is executed (the transported distant "measurement results" remain in superposition). I've discussed this a few times here, some months ago...

cheers,
Patrick.
 
  • #88
Caroline Thompson said:
OK! I've devoted over 10 years of my life to it so far and am prepared to continue until death or glory!

Caroline

Caroline,

I think what you are trying to do with the Bell tests is very noble, and certainly not a waste of your time. I don't always agree with your characterization of the state of the debate, though.

In medicine, experiments are routinely done on groups of people that are not randomly selected in the purest sense of the term "random". The question always arises: is it a fair sample? Because it is nearly impossible to get a true random sample, experimentalists do their best and are always looking to improve their sampling methods. Even without a true random sample, and without a rigorous proof that theirs is a fair sample, the results are considered useful. It is still good science. That does not mean it can't be improved upon, and it does not mean that some incorrect results won't later be laid at the feet of a biased sample.

The same applies with the Bell tests. You can say all day long that the sample is biased, but you actually have shown nothing more than that the results COULD POSSIBLY be biased enough to render an erroneous conclusion. You really aren't showing any actual bias in the results.

OK, I think everyone recognizes this.

But the march of science in this area is moving away from your personal position of local realism - for which you lack even a shred of actual evidence of equal stature to the Aspect or Innsbruck tests. After all, if you are right, why do 100% of test results of local realism point AWAY from it? In other words, you cling to a position for which there are NO supporting tests and argue against a position for which there is at least SOME strong evidence. Who is really biased here?

I think your assessment of the state of Bell tests misses the mark by a wide margin, even though you make some valid points.

-DrC
 
  • #89
DrChinese said:
Caroline,
In medicine, experiments are routinely done on groups of people that are not randomly selected in the purest sense of the term "random". The question always arises: is it a fair sample? Because it is nearly impossible to get a true random sample, experimentalists do their best and are always looking to improve their sampling methods. Even without a true random sample, and without a rigorous proof that theirs is a fair sample, the results are considered useful. It is still good science. That does not mean it can't be improved upon, and it does not mean that some incorrect results won't later be laid at the feet of a biased sample.

Have you read my Chaotic Ball paper? A recent version can be found at http://arxiv.org/abs/quant-ph/0210150 .

We are not talking about the ordinary kind of sampling bias here, where the experimenter is free to choose his sampling method. The sample is effectively chosen for him, and, if something like the assumption I make in my model is anywhere near correct, it is always going to be biased and will inevitably cause an increase in the Bell test statistic. If the detection loophole is simply assumed away (as is the general practice) then this means that the interpretation is being biased in favour of quantum theory.

This is absurd! Until Aspect inaugurated use of the CHSH test in 1982, it was generally understood that this bias was unacceptable. Other versions of the Bell test were used. Though the experiments all had loopholes, this obvious source of bias was avoided.

DrChinese said:
But the march of science in this area is moving away from your personal position of local realism - for which you lack even a shred of actual evidence of equal stature to the Aspect or Innsbruck tests. After all, if you are right, why do 100% of test results of local realism point AWAY from it? In other words, you cling to a position for which there are NO supporting tests and argue against a position for which there is at least SOME strong evidence. Who is really biased here?

I think you know my answer! Agreed, there is no hard evidence for my case, other than all the phenomena we have ever encountered in other contexts. All our everyday experience tells us that everything is local and real. Quantum theorists seem to be like men in the Middle Ages, prepared to believe that something over which they do not yet have full experimental control must work by magic. There must be dragons out there, where they have not yet explored.

But to get back to reality, there could be supporting tests. For the past 10 years I have been trying to tell experimenters what needs to be done in order to prove that the loopholes really are there and that alternative local realist explanations really do exist.

Proving the detection loophole open is easy -- as could have been known since 1970. All you need do is repeat the experiment with different detector efficiencies and see whether the Bell test statistic increases, stays the same, or decreases as efficiency increases. Quantum theory predicts that it stays the same. Local realism predicts that, other things being equal, it will decrease.
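To show how the detection loophole can operate, here is a small simulation, not of my Chaotic Ball model but along the lines of the local model published by N. Gisin and B. Gisin in 1999: both outcomes are fixed locally by a hidden unit vector, one side always detects, and the other detects with probability |b.lambda|. The detected subsample then reproduces the full quantum correlation E = -a.b, so |S| reaches 2*sqrt(2) from a purely local model with roughly 50% efficiency on one side.

```python
import numpy as np

rng = np.random.default_rng(0)

def direction(theta):
    """Measurement direction at angle theta in the x-z plane."""
    return np.array([np.sin(theta), 0.0, np.cos(theta)])

def E(theta_a, theta_b, lam):
    """Correlation computed over *detected* pairs only."""
    a, b = direction(theta_a), direction(theta_b)
    A = np.sign(lam @ a)                          # Alice: local outcome, always detected
    B = -np.sign(lam @ b)                         # Bob: local outcome...
    det = rng.random(len(lam)) < np.abs(lam @ b)  # ...detected with prob |b.lambda|
    return np.mean(A[det] * B[det])

# Hidden variables: unit vectors uniform on the sphere
lam = rng.normal(size=(1_000_000, 3))
lam /= np.linalg.norm(lam, axis=1, keepdims=True)

t = np.pi / 4
S = E(0, t, lam) - E(0, 3*t, lam) + E(2*t, t, lam) + E(2*t, 3*t, lam)
print(f"CHSH S from a purely local model: {S:.3f}")   # about -2.83, so |S| > 2
```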

Testing for other loopholes is equally straightforward. The reason the tests have not been conducted is, I think, that most of the people who have contributed to the literature on the subject have been theorists. They have not felt qualified to comment on the experimental details. Most have never even heard of the "subtraction of accidentals" loophole, or stopped to think whether the system for deciding that we have a "coincidence" might be introducing bias.

DrChinese said:
I think your assessment of the state of Bell tests misses the mark by a wide margin, even though you make some valid points.

Time will tell!

Incidentally, if you want to know just a little more on the experimental side, you could do worse than consult wikipedia. Last summer I contributed a few pages, the key one being http://en.wikipedia.org/wiki/Bell's_Theorem . From here links cover the main variations on the Bell test, actual experiments and, last but not least, the various loopholes.

Caroline
 
  • #90
Caroline Thompson said:
1. We are not talking about the ordinary kind of sampling bias here, where the experimenter is free to choose his sampling method. The sample is effectively chosen for him, and, if something like the assumption I make in my model is anywhere near correct, it is always going to be biased and will inevitably cause an increase in the Bell test statistic. If the detection loophole is simply assumed away (as is the general practice) then this means that the interpretation is being biased in favour of quantum theory.

2. Incidentally, if you want to know just a little more on the experimental side, you could do worse than consult wikipedia. Last summer I contributed a few pages, the key one being http://en.wikipedia.org/wiki/Bell's_Theorem . From here links cover the main variations on the Bell test, actual experiments and, last but not least, the various loopholes.

Caroline

1. Your model is pure speculation (I don't mean that as an insult). As such it is not proof and it is certainly not a counter-example to Aspect's actual experimental evidence. You have to admit that there may in fact be no significant bias against local realism in Aspect's samples or methods - you just think there could be.

2. I want to talk to you about that. I looked at what you have done in Wikipedia to the Bell's Theorem page and was quite disappointed. In my opinion, you have essentially hijacked what should be a non-controversial page and used it to further your own non-mainstream ideas. Bell's Theorem is barely mentioned or discussed!

I fully support the spreading of your message - even though I personally disagree with its content - because I think that it helps to keep everyone on their toes. As you know, I even link to your site from my own page EPR, Bell & Aspect: The Original References. But I think Wikipedia's Bell Theorem slot is the wrong place for it and your content there probably violates the POV neutrality policy. I hope you will voluntarily shift your contributions on the subject there to a more suitable slot and return Bell's Theorem back to how it was.
 
  • #91
wikipedia Bell's Theorem page

DrChinese said:
I looked at what you have done in Wikipedia to the Bell's Theorem page and was quite disappointed. In my opinion, you have essentially hijacked what should be a non-controversial page and used it to further your own non-mainstream ideas. Bell's Theorem is barely mentioned or discussed!

I fully support the spreading of your message - even though I personally disagree with its content - because I think that it helps to keep everyone on their toes. As you know, I even link to your site from my own page EPR, Bell & Aspect: The Original References. But I think Wikipedia's Bell Theorem slot is the wrong place for it and your content there probably violates the POV neutrality policy. I hope you will voluntarily shift your contributions on the subject there to a more suitable slot and return Bell's Theorem back to how it was.

I strongly disagree, and if the above is what you feel, the place to say it is in the wikipedia "talk" pages. The theorem is and ought to remain controversial, since it marks a point of bifurcation in the development of theoretical physics -- the point at which theory went wrong because people did not work hard enough at searching for local realist models. It was local realism that Bell himself expected to win. I don't know why he decided (reluctantly) to accept the general opinion that it had failed.

He once wrote that

“[The] entirely unauthorised `Bell's limit' sometimes plotted along with experimental points [is to be understood as relating to some] more or less ad hoc extrapolation [of the theory]”. Bell, John A., Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 1987, p. 60

Caroline
 
  • #92
Caroline Thompson said:
I strongly disagree, and if the above is what you feel, the place to say it is in the wikipedia "talk" pages. The theorem is and ought to remain controversial, since it marks a point of bifurcation in the development of theoretical physics -- the point at which theory went wrong because people did not work hard enough at searching for local realist models.

As far as I can tell, it is the policy of both PhysicsForums and Wikipedia that non-mainstream positions be placed in suitable context so as to identify that they are not mainstream.

I would not have noticed your contributions to Wikipedia had you not mentioned it above. It is my intention to determine if other members of PhysicsForums might desire to work with me to bring back a mainstream version of Bell's Theorem. However, I plan to do this outside of this thread.

It is my recommendation to you that you label your positions as non-mainstream when you present them in places in which others might be otherwise misled. I encourage you to continue presenting your ideas both here and elsewhere but you should respect the intent of the rules.

For anyone wondering what non-mainstream position of Caroline's I am referring to: She is a local realist who denies the existence of photons. ('Nuff said.)
 
  • #93
DrChinese said:
As far as I can tell, it is the policy of both PhysicsForums and Wikipedia that non-mainstream positions be placed in suitable context so as to identify that they are not mainstream.
Yes, this is definitely the policy at Wikipedia--see the section on how entries should express a "neutral point of view" below:

http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view

Here's one relevant part:

What is the neutral point of view?

What we mean isn't obvious, and is easily misunderstood.

There are many other possible valid understandings of what "unbiased," "neutral," etc. mean. The notion of "unbiased writing" that informs Wikipedia's policy is "presenting conflicting views without asserting them." This needs further clarification, as follows.

First, and most importantly, consider what it means to say that unbiased writing presents conflicting views without asserting them. Unbiased writing does not present only the most popular view; it does not assert the most popular view as being correct after presenting all views; it does not assert that some sort of intermediate view among the different views is the correct one. Presenting all points of view says, more or less, that p-ists believe that p, and q-ists believe that q, and that's where the debate stands at present. Ideally, presenting all points of view also gives a great deal of background on who believes that p and q and why, and which view is more popular (being careful not to associate popularity with correctness). Detailed articles might also contain the mutual evaluations of the p-ists and the q-ists, allowing each side to give its "best shot" at the other, but studiously refraining from saying who won the exchange.
So, if Caroline Thompson presents any non-mainstream views, she should label them very clearly as non-mainstream views (presumably this would include views about how strongly different experiments demonstrate a violation of Bell's Inequality). I haven't looked at the Wikipedia article on Bell's Theorem very carefully, so I don't know if she does this or not.

Anyway, the discussion of "neutrality" is worth reading in full, because it goes into a lot more detail.
 
Last edited:
  • #94
JesseM said:
Yes, this is definitely the policy at Wikipedia--see the section on how entries should express a "neutral point of view" below:
http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view
Yes, I'm well aware of this, and there has been some discussion in wikipedia on the "neutrality" of my contributions. I'm happy to admit that my views are not "mainstream", but where would I state this? The entries are usually (almost) anonymous, though one can generally find out who is mainly responsible by looking at the "history" page.

But, perhaps more importantly, my "views" are merely "little known facts". Almost all these facts are already known, some having been known since 1970 or earlier. Are not "facts" in themselves neutral? I can't help it if they happen to be little known! Hasn't the public the right to be told facts in preference to opinion? Before I came on the scene, the Bell test pages in wikipedia were strongly biased in favour of the quantum-mechanical point of view and riddled with factual inaccuracies.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
Last edited by a moderator:
  • #95
Caroline Thompson said:
Before I came on the scene the Bell test pages in wikipedia were strongly biased in favour of the quantum-mechanical point of view ...

I think that sums it up, LOL!
 
  • #96
Caroline Thompson said:
Yes, I'm well aware of this, and there has been some discussion in wikipedia on the "neutrality" of my contributions. I'm happy to admit that my views are not "mainstream", but where would I state this? The entries are usually (almost) anonymous, though one can generally find out who is mainly responsible by looking at the "history" page.
Well, the guidelines suggest that any non-mainstream views should be clearly flagged as such--you don't have to say "I, Caroline Thompson, believe X", but you should indicate something like "some dissenters to the mainstream opinion on the Aspect experiment believe X".
Caroline Thompson said:
But, perhaps more importantly, my "views" are merely "little known facts". Almost all these facts are already known, some having been known since 1970 or earlier. Are not "facts" in themselves neutral? I can't help it if they happen to be little known! Hasn't the public the right to be told facts in preference to opinion? Before I came on the scene, the Bell test pages in wikipedia were strongly biased in favour of the quantum-mechanical point of view and riddled with factual inaccuracies.
If the facts are agreed upon by everyone then sure, they're neutral, but the implications of some facts are still a matter of opinion. For example, perhaps mainstream physicists would agree that there are small loopholes in existing tests, but think that there is very little reason to think these loopholes cast significant doubt on the results, perhaps because you'd need a very contrived set of local laws in order to take advantage of these loopholes, or because successive tests keep on narrowing the loopholes and confirming the violation of Bell's Inequality to greater and greater accuracy. If this is the case, it should be explained along with the loopholes themselves, in order to present the mainstream view fairly.

Again, I haven't gone over the wikipedia entry or the arguments about loopholes very carefully myself, so I don't know to what extent you have or haven't done this.
 
  • #97
ZapperZ said:
2. You cited a rather dubious source (C.H. Thompson) regarding the validity of the EPR experiment interpretation. Having had an "encounter" with her, ...

Zz.

ZapperZ,

It hurts me to say this: a) YOU were RIGHT about Caroline; and b) I'm throwing in the towel on her. I naively thought she would have enough professionalism to know where the line is with her opinions. She doesn't, and I have decided to remove that link as a result. Thanks for your input.

-DrC
 
  • #98
JesseM said:
Well, the guidelines suggest that any non-mainstream views should be clearly flagged as such--you don't have to say "I, Caroline Thompson, believe X", but you should indicate something like "some dissenters to the mainstream opinion on the Aspect experiment believe X". If the facts are agreed upon by everyone then sure, they're neutral, but the implications of some facts are still a matter of opinion. For example, perhaps mainstream physicists would agree that there are small loopholes in existing tests, but think that there is very little reason to think these loopholes cast significant doubt on the results, perhaps because you'd need a very contrived set of local laws in order to take advantage of these loopholes ...
What I try to emphasise is the fact that you do not need any "contrived" set of local laws to explain the violation of those Bell tests for which the detection loophole is open. Not many people know this! Surely it is only right that more people should have access to this information, and, equally, surely there is no real justification for prejudicing readers against the idea? "Accepted" opinion has been formed in ignorance of some of the facts. An empirically important loophole -- that concerning the subtraction of accidentals -- was first mentioned with hardly any publicity back in 1985 but seems not to have come to the attention of the community.

Should science progress on the basis of belief and ignorance, or on the basis of as full a version of the facts as possible?

JesseM said:
... or because successive tests keep on narrowing the loopholes and confirming the violation of Bell's Inequality to greater and greater accuracy. If this is the case, it should be explained along with the loopholes themselves, in order to present the mainstream view fairly.

I think if you read my wikipedia pages carefully you will see that claims of greater and greater "accuracy" are not true. What we have is observation of violations of the CHSH test by ever greater margins relative to the standard error, but if the detectors are not 100% efficient the violation has no significance due to the need for the fair sampling assumption. If you read my Chaotic Ball papers you will see why this assumption is just not reasonable.

JesseM said:
Again, I haven't gone over the wikipedia entry or the arguments about loopholes very carefully myself, so I don't know to what extent you have or haven't done this.

I hope you will now remedy this situation!

Caroline
http://freespace.virgin.net/ch.thompson1/
 
Last edited by a moderator:
  • #99
Caroline Thompson said:
Should science progress on the basis of belief and ignorance, or on the basis of as full a version of the facts as possible?

There are those of us who think YOU are the one espousing belief and ignorance, and that it is you who is trying to present a highly edited version of the "facts" instead of a more complete one. :smile:
 
  • #100
DrChinese said:
There are those of us who think YOU are the one espousing belief and ignorance, and that it is you who is trying to present a highly edited version of the "facts" instead of a more complete one. :smile:

DrChinese, as you know, I've devoted over 10 years now to the study of the actual Bell test experiments. One way or another, I have found out enough about optics and how the various pieces of apparatus work to feel that I am on a par with most physicists working in the area. If you doubt this claim, please write privately and I can tell you some of the experts with whom I have had contact. I cannot offhand think of any who have not shown me respect, treating me almost as an equal. I think it likely that I know more facts in the area than you do. I am not ignorant, and what I know has never conflicted with what was, before the modern tendency to mystification of physics took root, generally considered a feature of the real world and hence a necessary feature of any fundamental theory: local realism.

Please specify the facts that you think I have misrepresented.

Caroline
 
  • #101
Caroline Thompson said:
DrChinese, as you know, I've devoted over 10 years now to the study of the actual Bell test experiments. One way or another, I have found out enough about optics and how the various pieces of apparatus work to feel that I am on a par with most physicists working in the area. If you doubt this claim, please write privately and I can tell you some of the experts with whom I have had contact. I cannot offhand think of any who have not shown me respect, treating me almost as an equal. I think it likely that I know more facts in the area than you do. I am not ignorant, and what I know has never conflicted with what was, before the modern tendency to mystification of physics took root, generally considered a feature of the real world and hence a necessary feature of any fundamental theory: local realism.

Please specify the facts that you think I have misrepresented.

Caroline

What is a fact? What is evidence? Your definitions exclude evidence accepted by the physics community. Specifically, evidence in favor of Bell Inequality violation by Aspect and others.

Even in a court of law, flawed evidence is considered evidence. For example, eyewitness testimony is often unreliable - yet it may be the best evidence available. If I testify I saw a man commit a crime, you may try to cast doubt by saying it was not him - it was an imposter made up to look like the defendant. A jury listens and decides. A verdict is rendered and life goes on. There is a right to appeal, but until it is overturned the man is guilty.

Same in science.

I will start a new thread tomorrow to discuss the local realistic view of Bell tests. I have some questions of substance I wish to pose to you on the matter.
 
  • #102
DrChinese said:
Even in a court of law, flawed evidence is considered evidence. For example, eyewitness testimony is often unreliable - yet it may be the best evidence available. If I testify I saw a man commit a crime, you may try to cast doubt by saying it was not him - it was an imposter made up to look like the defendant. A jury listens and decides. A verdict is rendered and life goes on. There is a right to appeal, but until it is overturned the man is guilty.

Same in science.

If I may inject my 2 cents: a few months ago, I spent (way too) much time discussing with another anti-EPR fan here on this board. The problem seems to be not so much the loopholes in the Aspect-like experiments as what I would call "the united view of physics".
One shouldn't deny that there are "loopholes" in the Aspect-like experiments. But as DrChinese points out, experiments are "evidence" and not "mathematical proof" for scientific theories. It is the entire body of "evidence" that makes theories stand out or not, and not one single type of experiment. It now happens that the way people correct for detection efficiencies (the major source of loopholes) is what has always been considered acceptable; only NOW does it seem to be unacceptable, in order to show that EPR-like results are not violating any Bell inequalities. Of course, the point can be made, but a reasonable explanation *within the frame of the rest of physics* should be given for why this accepted correction suddenly becomes unacceptable.

In that long discussion I had, it turned out that the main discordance with anti-EPR proponents is not about Bell's inequality. It is about the existence of photons as particles or not. They usually work with classical EM waves, and it is true that in that case the efficiency corrections seem much more dubious. However, once photons are recognised as particles, it is much harder to find arguments against the fair sampling hypothesis that underlies the efficiency corrections in EPR experiments.

And the existence of photons, as correlated clicks, is very difficult to deny, not only from a theoretical point of view, but there are also very recent experiments that indicate very strongly the particle-like nature of light:

Am. J. Phys. Vol 72, No 9, September 2004.

The point of the experiment is the following:
A PDC (parametric down-converting crystal) generates an "entangled pair of photons", also called a 2-photon state. One detector (the "trigger") detects one of the photons of the pair, and the other photon is sent onto a beam splitter.
The point is that in the case of a hit at the trigger, there is one photon in the other beamline (the one with the splitter), and as such a double hit is essentially impossible (except by Poisson coincidence, which is a known function of the incident beam intensity) if the photon is a particle, but statistically possible if it is a continuous wave, the essence of a particle being that it can only be detected once. The article points out the very low double coincidence rate, which is further exactly explained by Poisson coincidence.
The nice thing about it is that no corrections by efficiencies are needed: raw data are presented, and they are clean enough to prove the point. Of course, this is not an EPR experiment. It is just an experiment that makes it extremely difficult to deny the existence of photons as particles. Indeed, in the classical wave picture, the energy in the second beam is split evenly by the beam splitter, and there's no real reason why there shouldn't be cases of triggering of both detectors, which independently see an incident radiation flux. The fact that there is a strong anti-coincidence indicates that a choice was made at the beamsplitter, and the choice is the path the photon took (naively; more professionally, it is the detection of the one-photon state which is a non-classical state: it isn't a coherent state).
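For reference, the quantity such experiments report is the degree of second-order coherence conditioned on the trigger; in Thorn et al. it is estimated from raw counts as g2(0) = N_GTR * N_G / (N_GT * N_GR). A classical field must give g2(0) >= 1, while a one-photon state drives it toward 0. A minimal sketch, with invented counts:

```python
def g2_zero(N_G, N_GT, N_GR, N_GTR):
    """Conditioned degree of second-order coherence from raw counts.
    N_G: gate (trigger) singles; N_GT, N_GR: gate-transmitted and
    gate-reflected coincidences; N_GTR: triple coincidences."""
    return (N_GTR * N_G) / (N_GT * N_GR)

# Counts below are invented for illustration only:
print(g2_zero(N_G=100_000, N_GT=4_000, N_GR=4_100, N_GTR=3))  # ~0.018 << 1
```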

The first step in the discussion with an anti-EPR supporter should be about the existence of photons. Either photons exist or they don't, and if they exist in one place, they exist everywhere -- in EPR experiments too.

I think that people who deny the existence of photons will have a very hard time having a reasonable discussion here. I have yet to encounter anyone who is an anti-EPR fan but accepts the existence of photons.
 
  • #103
DrChinese said:
I will start a new thread tomorrow to discuss the local realistic view of Bell tests. I have some questions of substance I wish to pose to you on the matter.

Good!

Caroline
 
  • #104
vanesch said:
... as DrChinese points out, experiments are "evidence" and not "mathematical proof" for scientific theories. It is the entire body of "evidence" that makes theories stand out or not, and not one single type of experiment. It now happens that the way people correct for detection efficiencies (the major source of loopholes) is what has always been considered acceptable; only NOW does it seem to be unacceptable, in order to show that EPR-like results are not violating any Bell inequalities. Of course, the point can be made, but a reasonable explanation *within the frame of the rest of physics* should be given for why this accepted correction suddenly becomes unacceptable.

I think the reason you have not previously heard much about the objections is partly historical accident, partly the great difficulty that people with views similar to mine have had in getting them published. Objections to the assumption of fair sampling (needed to get around the detection loophole) have been known since 1970 and are, I presume, the main reason that tests in which this loophole was open were not used for the first 10 years of the Bell test experiments (1972-1981). It was only in 1982 that Aspect started using the CHSH test and the trouble became serious. The reasons for this change I have not managed to ascertain, despite correspondence with several of the people concerned.

Local realists at the time seem to have been represented by Marshall, Santos and Selleri. Unfortunately, their seminal article objecting to the QM interpretation of Aspect's experiments did not directly explain why fair sampling could not be assumed and went off at a tangent, concentrating on the idea that the assumption of "no enhancement" was flawed. It is only recently, on re-reading their paper, that I discovered the reason for this: they had tried to analyse the published results, which were based on adjusted data. Though they did (iirc) register their objection to this adjustment, they don't appear to have realized how serious it was.

Their paper was:
T. W. Marshall, E. Santos and F. Selleri, “Local Realism has not been Refuted by Atomic-Cascade Experiments”, Physics Letters A 98, 5-9 (1983)

vanesch said:
... the main discordance with anti-EPR proponents is not about Bell's inequality. It is about the existence of photons as particles or not. They usually work with classical EM waves, and it is true that in that case the efficiency corrections seem much more dubious. However, once photons are recognised as particles, it is much harder to find arguments against the fair sampling hypothesis that underlies the efficiency corrections in EPR experiments.

Very true!

vanesch said:
... the existence of photons, as correlated clicks, is very difficult to deny, not only from a theoretical point of view, but there are also very recent experiments that indicate very strongly the particle-like nature of light:

Am. J. Phys. Vol 72, No 9, September 2004.

I should be most grateful if you could tell me the author, or where I can find this online? I lost my rights to access such journals a year ago, but perhaps there is a copy in http://arxiv.org?

I am familiar with this kind of experiment and with the usual arguments re coincidence rates after beamsplitters. I am not entirely sure of the true explanation for the low observed rates -- it may not always be the same. Marshall et al., with their Stochastic Electrodynamics theory, put it all down to the effect of superposition of the test beams with components of the zero-point field. I favour at present an idea that may be mathematically equivalent: that the proportions in which the intensity is divided depend partly on the state of the beamsplitter.

Incidentally, it may well be worthwhile to make a study of how those beamsplitters actually work. They are not just half-silvered plates. If "polarising cubes" are used, there are many layers of dielectric and/or metal on the diagonal interface between the two prisms, with thicknesses carefully engineered to be exact half or quarter wavelengths. Clearly the idea is to selectively encourage constructive or destructive interference of the partially-reflected or transmitted waves at each surface. This is a purely wave effect, yet it is used to make the system "simulate" quantum theory! Which component (reflected or transmitted) dominates might depend on the exact wavelength. Perhaps careful analysis would reveal that the spectra of the two output beams are slightly different? [This last idea is a new one I had just now! I've had others at various times, but all depend on this kind of factor.]
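Whether such a multilayer coating splits different wavelengths differently is something one can check with the standard transfer-matrix method of thin-film optics. Here is a minimal sketch; the indices, layer count and design wavelength are invented and do not describe any real commercial cube:

```python
import cmath, math

def layer_matrix(n, d, lam):
    """Characteristic matrix of one dielectric layer at normal incidence:
    refractive index n, physical thickness d, vacuum wavelength lam."""
    delta = 2 * math.pi * n * d / lam          # phase thickness of the layer
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def reflectance(layers, n_in, n_out, lam):
    """Intensity reflectance of a stack of (index, thickness) layers."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        M = matmul(M, layer_matrix(n, d, lam))
    num = n_in * M[0][0] + n_in * n_out * M[0][1] - M[1][0] - n_out * M[1][1]
    den = n_in * M[0][0] + n_in * n_out * M[0][1] + M[1][0] + n_out * M[1][1]
    return abs(num / den) ** 2

# Quarter-wave high/low-index stack designed for 700 nm (invented design).
LAM0 = 700e-9
stack = [(n, LAM0 / (4 * n)) for n in (2.3, 1.38) * 3]  # alternating H/L layers

for lam in (650e-9, 700e-9, 750e-9):
    print(f"{lam*1e9:.0f} nm: R = {reflectance(stack, 1.0, 1.5, lam):.3f}")
```

The reflectance, and hence the splitting ratio, does vary across the band; that much is ordinary wave optics. Whether the variation is large enough to matter in the experiments under discussion is a separate, quantitative question.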

Caroline
 
  • #105
Caroline Thompson said:
Their paper was:
T. W. Marshall, E. Santos and F. Selleri, “Local Realism has not been Refuted by Atomic-Cascade Experiments”, Physics Letters A 98, 5-9 (1983)

Yes, I'm aware of these papers. I'm also aware (although not an expert) of stochastic electrodynamics and things like that. But you agree with me that this is NOT classical optics. New ideas ARE introduced - such as the claim that we are exposed to a background radiation with an intensity comparable to that of sunlight, but that we have apparently calibrated it away in all our sensors, thermometers and so on, so as not to notice it. A lot of physics has to be rewritten that way: suddenly we no longer understand statistical physics, atomic physics, or solid state physics. These new ideas should be backed up by specific predictions, and you cannot deny the impression that the ONLY reason for introducing them is to find a way to explain away the behaviour of light without photons in certain circumstances. And THAT is done because, as you point out, denying photons is the only hope of getting around EPR. I'm sorry, but it all gives too much the impression of clinging to a religiously held belief in what you call "local realism".
As I explained during that long discussion (which I'm not going to repeat here), second quantization of fields is very difficult to avoid. You can do so for specific situations, but you wipe away too much verified physics in trying to cling to a classical field description of the world. Second quantization explains a lot of things extremely well, and I truly have difficulty imagining what tricks could take us back to classical fields. How do you rewrite particle physics without second quantization? What happens to the Standard Model? Do you see the mind-boggling scale of the attempt you propose?



Caroline Thompson said:
I should be most grateful if you could tell me the author, or where I can find this online? I lost my rights to access such journals a year ago, but perhaps there is a copy on http://arxiv.org?

Here's the abstract, but I don't have the right to give you the article, unfortunately:


Observing the quantum behavior of light in an undergraduate laboratory

J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck
Department of Physics, Whitman College, Walla Walla, Washington 99362

(Received 4 December 2003; accepted 15 March 2004)

While the classical, wavelike behavior of light (interference and diffraction) has been easily observed in undergraduate laboratories for many years, explicit observation of the quantum nature of light (i.e., photons) is much more difficult. For example, while well-known phenomena such as the photoelectric effect and Compton scattering strongly suggest the existence of photons, they are not definitive proof of their existence. Here we present an experiment, suitable for an undergraduate laboratory, that unequivocally demonstrates the quantum nature of light. Spontaneously downconverted light is incident on a beamsplitter and the outputs are monitored with single-photon counting detectors. We observe a near absence of coincidence counts between the two detectors—a result inconsistent with a classical wave model of light, but consistent with a quantum description in which individual photons are incident on the beamsplitter. More explicitly, we measured the degree of second-order coherence between the outputs to be g^(2)(0) = 0.0177 ± 0.0026, which violates the classical inequality g^(2)(0) ≥ 1 by 377 standard deviations. ©2004 American Association of Physics Teachers.
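For orientation: in the heralded scheme such experiments use (a gate detector G on the idler beam, detectors T and R on the two beamsplitter outputs), g^(2)(0) is estimated from raw count numbers roughly as follows. The counts in this sketch are invented for illustration and are NOT the paper's data:

```python
# Sketch of how g2(0) is estimated in a heralded beamsplitter test:
# gate detector G on the idler; T and R on the two signal outputs.
# All count numbers below are made up for illustration.
N_G   = 100_000   # gate (herald) singles
N_GT  = 4_000     # gate AND transmitted coincidences
N_GR  = 4_100     # gate AND reflected coincidences
N_GTR = 3         # triple coincidences within the gate window

# Classical wave theory requires g2(0) >= 1; a one-photon state gives ~0.
g2 = (N_GTR * N_G) / (N_GT * N_GR)
print(f"g2(0) = {g2:.4f}")   # about 0.018 here, far below the classical bound
```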





Caroline Thompson said:
I am familiar with this kind of experiment and with the usual arguments re coincidence rates after beamsplitters. I am not entirely sure of the true explanation for the low observed rates -- it may not always be the same. Marshall et al, with their Stochastic Electrodynamics theory, put it all down to the effect of superposition of the test beams with components of the zero-point field. I favour at present an idea that may be mathematically equivalent: that the proportions in which the intensity is divided depend partly on the state of the beamsplitter.

Incidentally, it may well be worthwhile to make a study of how those beamsplitters actually work. They are not just half-silvered plates. If "polarising cubes" are used, there are many layers of dielectric and/or metal on the diagonal interface between the two prisms, with thicknesses carefully engineered to be exact half or quarter wavelengths. Clearly the idea is to selectively encourage constructive or destructive interference of the partially-reflected or transmitted waves at each surface. This is a purely wave effect, yet it is used to make the system "simulate" quantum theory! Which component (reflected or transmitted) dominates might depend on the exact wavelength. Perhaps careful analysis would reveal that the spectra of the two output beams are slightly different? [This last idea is a new one I had just now! I've had others at various times, but all depend on this kind of factor.]

Do you realize how twisted that explanation is?

Now tell me: how is it that a wave generated from a PDC goes "left or right" at the beamsplitter according to some feature of the beamsplitter, yet when you shine a "classical" beam on that same beamsplitter, it doesn't (in the sense that you can produce interference effects with the split beams, so the beamsplitter cannot have sent your "bullet" either left or right)...
Also, don't you think that if beamsplitters were also spectral filters, that would have been noticed already a few times in undergraduate labs ?
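The contrast can be put as a toy Monte Carlo. In the photon picture the beamsplitter routes each whole quantum to one output, so the two detectors never fire together; in a naive classical-pulse picture each half of the split pulse triggers its detector independently, so coincidences necessarily remain. Everything below (the efficiency, both trial models) is invented for illustration:

```python
import random

random.seed(1)
N = 100_000
ETA = 0.1   # detection probability for a full quantum of energy (made up)

# (a) Photon picture: the beamsplitter sends the whole photon to ONE
# output (50/50), so both detectors can never fire on the same trial.
def photon_trial():
    goes_T = random.random() < 0.5
    return (goes_T and random.random() < ETA,
            (not goes_T) and random.random() < ETA)

# (b) Classical pulse: the energy is split in half, and each half triggers
# its detector independently with probability scaled by its intensity.
def wave_trial():
    return (random.random() < ETA / 2, random.random() < ETA / 2)

for name, trial in (("photon", photon_trial), ("wave", wave_trial)):
    coincidences = sum(1 for _ in range(N) if all(trial()))
    print(name, coincidences / N)
# photon: exactly 0; wave: about (ETA/2)**2 -- normalized by the singles
# rates this gives g2(0) = 1, the classical lower bound.
```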

That's what I mean by "the united view of physics". You cannot invent, ad hoc, an explanation for what annoys you in one particular case without considering it in all generality and applying it systematically to all of physics. If it helps you explain 2 experiments but screws up 70% of the rest of physics, the idea goes into the dust bin. Thinking that you'll fix up that 70% is heroic, but close to hopeless. Most physicists think that when you have to do that 2 or 3 times in a row, you're simply on the wrong track.
I'm one of them :-p
 
