Questions about Bell: Answering Philosophical Difficulties

  • Thread starter krimianl99
  • Tags
    Bell
In summary, Bell's inequalities follow from assumptions 1, 2 and 3, and they are contradicted by the quantum mechanical predictions; it is difficult to keep 1, 2 and 3 and avoid Bell's inequalities.
  • #36
Ian Davis said:
The possibility that quantum waves can travel backwards is being explored by:

http://faculty.washington.edu/jcramer/NLS/NL_signal.htm

It is also possible that light can be persuaded to travel backwards in time. The light peak shown in the following video exits a light-conveying medium some distance from where it enters, before that peak has even arrived at the medium. It appears, superficially at least, to be moving backwards in time at a velocity of ~ -2c. Stacking such light-conveying mediums in series might allow the peak of a light pulse to be transmitted an arbitrary distance in zero time, thus sending a signal backwards in time.

It is pretty obvious that one CAN'T signal faster-than-light in quantum theory. You can prove this mathematically, at least as long as all interactions are Lorentz-invariant. But this is Cramer, with his transactional interpretation, which sees backward-in-time justifications everywhere, even in trivial optical experiments like Afshar's. Look at the references: it isn't particularly PRL or something.


Now, this seems to me a particularly misleading exposition, because the way it is represented, it would be EASY to signal faster-than-light: if the pulse exits an arbitrarily long fibre even before it enters, or even before we LET IT ENTER, then it would be sufficient to show that it exits even if we don't let it enter :smile:

But if you read the article carefully, you see that there is ACTUALLY already part of the pulse inside, which is not shown on the animation, which does suggest faster-than-light (backward in time) transmission.

If the guy says that *theory predicts such a behaviour* then for sure it is not backwards in time, or FTL, as this can be proven in QED.

The presumption that nature abhors a paradox is somewhat anthropomorphic. It is rather we who abhor paradoxes; once we know a paradox exists, we find means of explaining it, which ends up convincing us that it was not in fact a paradox in the first place, but rather a fault in our initial understanding.

The last sentence is correct: paradoxes are only misunderstandings (by the whole scientific community, or just by some individuals who don't understand the explanation). And btw, Bell's theorem is NOT a paradox at all.

There is a risk in saying that because X is impossible, it therefore cannot happen. As a scientist one is better advised to believe that the impossible can happen, and then try to find out how to make the impossible happen.

You can never say anything for sure about nature. But you can say things for sure about a THEORY: you can say that this or that theory will not allow this or that to happen.

You know, I asked for funding to continue my research on human body levitation by thought only, and it is refused each time. I know that current theories don't allow this (especially that silly old theory of Newtonian gravity and so on), but one should let the impossible be researched. Think of the enormous advantage: we could all float to the office by just concentrating. Think of the reduction in fossil fuel emissions! If my research works out, I would solve one of the biggest problems of humanity! I should get huge grants and I don't receive anything!
 
Last edited:
  • #37
vanesch said:
Now, imagine I give you 3 iron bars, one 1 meter long, one 2 meters long, and one 50 meters long, and I ask you if it is possible to construct a triangle with them.

You can do this by trying all possible orders of 1m, 2m and 50m for AB, AC, and BC, and figure out if you satisfy, for a given case, all 3 inequalities above. I won't do it here explicitly, but it's going to be obvious that it won't work out.

So, can I now conclude, yes or no, that with a bar of 1m, a bar of 2m and a bar of 50m, I won't be able (in a Euclidean space) to make a triangle ?
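The triangle test in this example can be sketched in a couple of lines (a hedged illustration; the function name is mine). Since the three inequalities are symmetric in the sides, one check covers every ordering of the bars:

```python
def can_form_triangle(a, b, c):
    """A (non-degenerate) triangle exists iff each side is shorter
    than the sum of the other two -- the triangle inequality."""
    return a < b + c and b < a + c and c < a + b

# The 1 m, 2 m and 50 m bars fail, whatever the ordering: 50 > 1 + 2.
print(can_form_triangle(1, 2, 50))  # False
print(can_form_triangle(3, 4, 5))   # True
```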

Your example here is an interesting one in the case of Bell's theorem, because it demonstrates the way that assumptions can slide into one's thinking processes, and so cloud the truth of the matter before one. I personally would love to know whether a set of rulers 30 light years, 40 light years and 50 light years long formed a triangle in the universe we live in, irrespective of where we placed that rather large triangle. My lay reading on the matter suggests we should be much more surprised if it did than if it didn't, because it is not predicted that we should discover space-time to be flat everywhere in space.

You are also using negative reasoning in the above example. You take a specific, rather extreme example of trying to form a triangle with one side more than ten times the sum of the other two sides, as an example of why one can't create triangles that violate the triangle inequality. To defend the triangle inequality using this approach you would have to consider all possible sets of three lengths and show that only those satisfying the inequality formed triangles.. certainly an exercise invented by the Terrible Trivium (allusion to The Phantom Tollbooth). A wiser though perhaps equally ineffectual approach would be to look for one counter example, find it, and then announce done.

The classic case in point is Euclid's postulate about parallel lines never meeting. People spent thousands of years trying to prove this conjecture from his other axioms, without any success whatsoever, and it was the discovery of non-Euclidean geometries (by Lobachevsky and Bolyai, later generalized by Riemann) that showed why it couldn't be done. It was this discovery which gave rise to the notion of Euclidean space as one space among many - a space people had played with for thousands of years -- but more importantly to all those other types of spaces which no one in those thousands of years had ever considered relevant to the question at hand, or thought to use as vehicles for proving Euclid's postulate, as initially expressed, false.

I readily agree that Bell's theorem says that, statistically, given enough measurements, X is impossible, and that those exploring quantum mechanics see X happening. But that to me is what makes both Bell's theorem and the quantum mechanical observations so interesting. I do not personally imagine that because things seem contradictory to me they are indeed in any real sense paradoxical. What I rather imagine is that all would be made plain to me if I just knew more.

But my initial commentary was limited to your claim that nothing useful could be discovered without using negative reasoning. Give me rulers long enough and a means of reading them, and I think I could prove the universe is not flat, without recourse to negative reasoning. I think you would probably agree. Ergo, proof by contradiction :-)
 
  • #38
vanesch said:
You know, I asked for funding to continue my research on human body levitation by thought only, and it is refused each time. I know that current theories don't allow this (especially that silly old theory of Newtonian gravity and so on), but one should let the impossible be researched. Think of the enormous advantage: we could all float to the office by just concentrating. Think of the reduction in fossil fuel emissions! If my research works out, I would solve one of the biggest problems of humanity! I should get huge grants and I don't receive anything!

I read a science fiction story as a child that, once read, was never forgotten. It was about a government conspiracy to create the appearance that a brilliant but fictitious scientist had created an anti-gravity device, only to die before he could get his results into print. The government fabricated this story, filmed the fictitious man using this device, constructed the fictitious man's residence and library, and filled this library with all the books they could imagine might be relevant to someone seeking to create an anti-gravity device, together with all sorts of quasi-theoretic ravings on random sheets of paper scattered wildly everywhere. They then implored the world's greatest scientists, as a matter of some urgency, to visit the dead man's residence and duplicate this dead scientist's astonishing feat.

Of course, the story being fiction, it was a small step from there to have this subterfuge result in real, tangible advances in the till-then-understood principles of gravity.

All of this was done to alter the perception, in the minds of the remaining scientists, that what they had always imagined to be impossible might in fact be possible. For until we believe that the impossible is possible, all things that currently seem impossible will be dismissed by the vast majority of scientists as not worth exploring. Believing something impossible is not the way to conduct research, even for the few willing to explore the impossible: it creates a self-fulfilling prophecy. Better to believe that self same thing possible, only to prove it otherwise by a later, unanticipated proof by contradiction.

Great scientists throughout history are those who reformulated that which seemed impossible into that which was the only possibility. Impossible that one could distinguish gold from lead by placing the object in a bath and weighing it. Impossible that a cannon ball should fall at the same rate as a coin. Impossible that things, once pushed, should not eventually stop moving. Impossible that time was other than a universally experienced thing. Impossible that the universe might have a beginning or an end. Impossible that the universe might be expanding at a more than linear rate, given that gravity could only retard such expansion. Impossible that the Pioneer space probes should not travel distances as predicted by the laws of force and gravity. Even in such absurd studies as Newton's attempts to turn lead into gold, it was not the study itself that was absurd but the attempt at producing the result while lacking a proper understanding of the mechanisms needed to transmute one element into another.

I hope to live long enough to see the next impossible shattered. Thus I dream of impossible things. I think scientists should be more ready to listen to the advice (given, I think, by Sherlock Holmes) that when you have eliminated the impossible, whatever remains, however improbable, must be the truth.

Good luck getting that funding for your anti-gravity research.
 
  • #39
vanesch said:
Now, this seems to me a particularly misleading exposition, because the way it is represented, it would be EASY to signal faster-than-light: if the pulse exits an arbitrary long fibre even before it enters, or even before we LET IT ENTER, then it would be sufficient to show that it exits even if we don't let it enter :smile:

True, but I find just as intriguing the question as to which way that same pulse of light is traveling within the constructed medium. The explanation that the tail of the signal somehow contains all the information necessary to construct the resulting complex wave observed, and the coincidence that the back wave visually intersects the entering wave precisely as it does, without in any way interfering with the arriving wave, seems to me a lot less intuitive than that the pulse, on arriving at the front of the medium, travels with a velocity of ~ -2c to the other end, and then exits. The number 2, of all numbers, also seems strange. Why not 1? It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does the tail of the incoming signal derive the strength to form such a strong back signal? Is the signal fractal, in the sense that within some small part is the description of the whole? These are questions I can't answer, not being a physicist, but still questions that trouble me with the standard explanations given, about it all being smoke and mirrors.

Likewise I find Feynman's suggestion -- that what in our world view is spontaneous creation and destruction of positron-electron pairs is in reality electrons changing direction in time as a consequence of absorbing/emitting a photon -- both intriguing and rather appealing.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why, despite appearances, things like light, electrons and signals cannot move backwards in time. My primary interest is in the question of time itself. I'm not well equipped to understand answers to this question, but it seems to me that time is the one question most demanding and deserving of serious thought by physicists, even if that thought produces no subsequent answers.
 
  • #40
vanesch said:
You can in principle say that the absorbers didn't even exist (or didn't take their measurement positions) at the time of the emission of the pairs. So there must be a common cause in the past that dictates both: it can not be a direct effect of the existence of the absorbers on the source, right ?

Right.

But this means that that "cause in the past" will determine both the emission of pairs in the source, and whatever process we decided upon to set the absorbers. These can be, in principle, macroscopic processes, like throwing dice, "making up your mind", whatever.

Not exactly. That "cause in the past" will fix the position of a microscopic particle (the absorber). This will put a constraint on the experimenter's freedom, but it is by no means the only constraint at work (energy conservation is another one). A better way of expressing this is to say that the decision must be in agreement with all physical laws, including the newly proposed one. The experimenter cannot decide not to feel the force of gravity, or to produce energy from nothing, etc. In the same way, he cannot decide to place an absorber where it is not supposed to be.

Imagine the experiment on a very large scale, with the arms light years long. So now we have a year to decide which button we are going to push. At Alice, we could use the same selection process that is used to decide whether a patient in a medical test gets medicine A, medicine B or a placebo, to decide whether to push button A, B or C; and at Bob, we could look at the results of another set of patients (from another test, of course) and push A, B or C depending on whether it was the group given medicine A, medicine B or the placebo that is now "most healed".

If there is a common cause in the past that influences the pushings of the buttons and the emitted pair, then there is hence a common cause in the past that decides as well what we give to a patient, and whether he gets better or not.

I am sorry, but I cannot follow your reasoning here. Anyway, what you need to show is that a generic medical test (not one explicitly based on an EPR experiment to make the decisions) is meaningless if superdeterminism is true.
 
  • #41
vanesch said:
Ok, but you still have to make it such, that whenever Alice pushes A and bob pushes A, they get always the same result. And same for B and C. How are you imposing this condition ? I don't see how you built this in. What we are interested in is not the precise format of Lambda, but of its EFFECTS (in terms of red/green when Alice and bob push a button). What's the effect of your table ?

It is from THIS condition that the 8 possibilities follow. We can only consider 8 different possible EFFECTS. Hence we can group all the different Lambdas in 8 groups.

What are those 16 possibilities ? Mind you that I'm not talking about the number of possible Lambdas (they can run in the gazillions), I'm talking about the different possible effects they can have at Alice and at Bob. I can only find 8, which I listed before.

Can you give me the list of 16 different possibilities (that is, the 16 different pairs of probabilities at Alice and Bob to have red), in such a way that we still obtain that each time they push the same button, they get always the same result ?

As soon as you said OK here, you are acknowledging the possibility of D(1,2,3,4) rather than D(1,2,3), meaning four independent variables and 16 possibilities, not the 8 possibilities that come from only 3 independent variables.
I understand you only find 8 IF you use D(1,2,3); but to justify saying you “can only find 8” you must eliminate D(1,2,3,4) as the possible minimum solution. Nothing I’ve seen does that.


There are only four possible results (RR, RG, GR, GG). You cannot take the results of a fixed configuration -- Alice pushes B and Bob pushes B, giving (0, .4, .6, 0) -- and then factor .4 x .6 to get .24 for RR: as you defined B;B, the RR option already comes up 0 times. And by your “OK” I think you’re recognizing that the RG & GR probabilities here have no direct bearing on the odds of getting an RR observation. It requires a change in an independent variable, such as the button selection made by Alice or another independent choice made by Bob, to allow the possibility of an RR result.

All I’m saying is that the binary approach described here is not rigorously complete enough to justify it as proof against the Einstein LR claims when such a simple counter example can be provided.

Also important to note:
The counter example not only claims that Einstein LR may yet be correct, it also indicates that the Niels Bohr claim of “Completeness” for Copenhagen HUP QM is still intact and not shown to be wrong!

Remember that the Bohr claim was not that CQM was more complete than LR, but that it was complete, and that NO other theory would be able to show anything meaningfully more complete than what CQM already defined.
IMO nothing so far has, and if this binary proof were complete, it would be showing a level of completeness beyond that of which CQM is capable. Although there are interpretations that might be more complete (MWI, BM, TI, etc.), I’ve seen nothing that indicates that any of those, or this binary approach, is conclusively correct and that CQM is wrong in claiming ‘completeness’.

Also, the scientific community IMO does not in general take both Einstein and Bohr as definitively wrong, or we would not see grants and experiments still moving forward to close “loopholes” in the EPR Bell Aspect type experiments. (The Kwiat Group at Illinois worked on one of these last year, but I’m not aware of any results.)

If this boils down to different opinions of how to interpret this approach and what assumptions can be made without detailed proof, then let’s just keep our own opinions; I don’t think there is enough remaining in this for either of us to change our opinion to merit further debate.
 
  • #42
Originally Posted by ThomasT:
... is it that there is a statistical dependence between coincidental detections and associated angular differences between polarizers (that's the only way that I've seen the correlations mapped)?
vanesch said:
Yes. Of course. Event per event.
So these are the only correlations that we're talking about: how the rate of coincidental detection varies as you vary the angular difference between polarizer settings. (There's no correlation between events at A and events at B? That is, the detection attribute at one end isn't necessarily affected by the setting at the other, and the setting at one end isn't necessarily affected by the detection attribute at the other, and so on.)

A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes. (And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

Originally posted by ThomasT:
The spatially separated data streams aren't paired randomly, are they? That is, they're determined by, eg., coincidence circuitry; so that a detection at one end severely limits the sample space at the other end.
vanesch said:
This is true in *experiments*. That is because experiments can only approximate "ideal EPR situations", and the most severe problem is the low efficiency of light detectors, as well as with the sources of entangled photons.

But we are not talking here about experiments (which are supposed to confirm/falsify quantum predictions); we are talking here about purely formal things. We can imagine sending out, at well-known times, a single pair of entangled systems, whether these be electrons, neutrons, photons, baseballs or whatever. This is theoretically possible within the frame of quantum theory. Whether there exists an experimental technique to realize this in the lab is a different matter of course, but it is in principle possible to have such states. We analyse the statistical predictions of quantum mechanics for these states, and see that they don't comply with what one would expect under the Bell conditions.
We compare the different formulations to each other and we compare them to the results of actual experiments. Quantum theory more closely approximates experimental results. We're trying to ascertain why.

As I had written in a previous post:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).
vanesch said:
Well, not really.
What is not really? Don't you think these assumptions are part (vis a vis classical optics) of the quantum mechanical approach?
vanesch said:
You don't NEED to assume a common cause, but it would a priori be the most practical way to EXPLAIN perfect anti-correlations in the case of identical settings.
I didn't say you NEED to assume a common cause, just that that assumption was part of the development of the quantum mechanical treatment. Quantum theory says that if the commonly caused optical disturbances are emitted in opposite directions then they can be quantitatively linked via the conservation of angular momentum.
vanesch said:
It would be even more puzzling if INDEPENDENT random events at each side would give perfect anti-correlations, wouldn't it ?

This is what some people in this thread seem to forget: we start from the premise that we find PERFECT CORRELATIONS in the case of identical settings on both sides. These perfect correlations are already a prediction of quantum mechanics. This is experimentally verified, BTW. But the question is: are these correlations a problem in themselves ? STARTING from these perfect correlations, would you take as an assumption that the observed things are independent, or that they somehow have a common origin ? If you start by saying they are independent, you ALREADY have a big problem: how come they are perfectly correlated ?
So you can "delay" the surprise by saying that the perfect correlation is the result of a common cause (the Lambda in the explanation). Of course, if there is a common cause, it is possible to have perfect anti-correlations. If there is no common cause, we are ALREADY in trouble, no ?

But if we now analyze what it means, that the outcomes are determined by a common cause, well then the surprise hits us a bit later, because it implies relations between the OTHER correlations (when Alice and Bob don't make the same measurement).
The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics? That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?
 
  • #43
RandallB said:
As soon as you said OK here you are acknowledging the possibility of D(1,2,3,4) rather than D(1,2,3) meaning four independent variables 16 possibilities not the 8 possibilities from only 3 independent variables.
I understand you only find 8 IF you use D(1,2,3); but to justify saying you “can only find 8” you must eliminate D(1,2,3,4) as the possible minimum solution. Nothing I’ve seen does that.

No, not at all: the 8 doesn't come from the fact that there are 3 "slots" in the D-function! It comes from the fact that there are only 8 different ways of assigning an outcome (red or green) to each of the three buttons A, B and C that Alice can push, together with the assumption of perfect correlation, which means that Bob needs to get the same result if he pushes the same button.

Try to re-read the proof carefully. I think you totally misunderstood the mathematical reasoning.
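The counting argument can be checked mechanically (a sketch; the variable names and the use of 'R'/'G' for red/green are mine). Under perfect correlation, an "effect" is fully specified by the colour assigned to each of the three buttons, so there are exactly 2^3 = 8 of them, and each one makes Alice and Bob agree on at least 1/3 of the unequal-button pairs:

```python
from itertools import product

# Under perfect correlation, a hidden state is characterized by the
# colour ('R' or 'G') it assigns to each button A, B, C; Bob's table
# must equal Alice's.  That leaves 2**3 = 8 distinct effects, however
# many different Lambdas realize them.
strategies = list(product('RG', repeat=3))
assert len(strategies) == 8

pairs = [(0, 1), (0, 2), (1, 2)]  # the three unequal-button choices

# With three buttons and two colours, at least two buttons share a
# colour, so every strategy agrees on at least one of the three pairs.
worst = min(sum(s[i] == s[j] for i, j in pairs) / 3 for s in strategies)
print(worst)  # 1/3 -- quantum mechanics can push this down to 1/4
```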
 
  • #44
Ian Davis said:
The classic case in point is Euclid's postulate about parallel lines never meeting. People spent thousands of years trying to prove this conjecture from his other axioms, without any success whatsoever, and it was the discovery of non-Euclidean geometries (by Lobachevsky and Bolyai, later generalized by Riemann) that showed why it couldn't be done. It was this discovery which gave rise to the notion of Euclidean space as one space among many - a space people had played with for thousands of years -- but more importantly to all those other types of spaces which no one in those thousands of years had ever considered relevant to the question at hand, or thought to use as vehicles for proving Euclid's postulate, as initially expressed, false.

I would say the opposite: the fact that people saw that they couldn't derive the 5th postulate from the others, even though they thought they could (and even occasionally thought they HAD done it, but were then quickly forced into seeing that they had made a hidden assumption), means that in formal reasoning on a few pages, hidden assumptions don't survive.
 
  • #45
Ian Davis said:
Of course, the story being fiction, it was a small step from there to have this subterfuge result in real, tangible advances in the till-then-understood principles of gravity.

And then the opposite has been so long the case too. Astrology is an example.

In as much as *nature* can surprise us sometimes, and *falsify* theories about how nature works, I think you would have a much greater difficulty in giving an impressive list of FORMAL arguments which turned out to be wrong.

Bell's theorem doesn't say anything about *nature*. It tells something about a *theory*, which is quantum mechanics: that this theory makes predictions which cannot be obtained by *another* hypothetical theory which satisfies certain conditions.
 
  • #46
Ian Davis said:
It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does the tail of the incoming signal derive the strength to form such a strong back signal?

What makes me wary with the given "explanations" is that apparently everything still fits within KNOWN THEORY. Otherwise these guys would be up for a Nobel. If they could say "hey, quantum optics predicts THIS, and we find experimentally THAT, and we've verified several aspects, and it is not a trivial error; we simply find SOMETHING ELSE than what known theory predicts", now that would be very interesting: we would have, after about 80 years, falsified quantum theory, or at least quantum optics! But that's NOT what they say; they say: things behave AS THEORY PREDICTS, only, the stuff is traveling at 2c backwards. Well, that's nonsense, because current theory doesn't allow for that. So if their results DO follow current theory, then their interpretation is surely wrong - or at best, it is only one possible interpretation amongst others.

Likewise I find Feynman's suggestion -- that what in our world view is spontaneous creation and destruction of positron-electron pairs is in reality electrons changing direction in time as a consequence of absorbing/emitting a photon -- both intriguing and rather appealing.

Sure, but then, in modern quantum field theory, there is a mathematical construction which you can interpret BOTH WAYS: or as an electron that changes time direction, or as the creation or annihilation of an electron-positron pair.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why despite appearances things like light, electrons and signals cannot move backwards in time.

This is true, but current theory allows for a view, in all circumstances, in which you can see all particles go forward in time. The "price to pay" is that you have to accept pair creation and annihilation. But in fact, this view is a bit more universal than the "backwards in time" view: while fermions (electrons, quarks...) CAN be seen as going backward and forward in time, and as such "explain" creation and destruction, with bosons (photons, gluons) this doesn't work anymore. They DO suffer creation and destruction in any case. So what we thought we could win by allowing "back in time" with fermions (namely, the "explanation" for pair creation and annihilation) screws up as an explanation in any case with bosons. Which makes the "back in time" view an unnecessary view.

Again, modern QFT can entirely be looked upon as "going forward in time". As such, if people come up with an experiment that is "conform to modern theory", but "clearly shows that something is traveling backwards in time", then they are making an elementary error of logic.
 
  • #47
ThomasT said:
A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes.

I have no idea why you think that the correlation function should be linear ?

(And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.
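To make that concrete (a hedged sketch; the 0°, 60°, 120° settings are the standard Mermin-style choice, not something specified in this thread): for polarization-entangled photons in the state (|HH> + |VV>)/√2, the quantum prediction is P(same outcome) = cos²(θ_A − θ_B), which equals 1 for identical settings but only 1/4 for crossed settings -- below the 1/3 that any common-cause assignment can achieve:

```python
import numpy as np

def p_same(theta_a_deg, theta_b_deg):
    """Quantum prediction for the (|HH> + |VV>)/sqrt(2) photon pair:
    probability that both sides give the same outcome."""
    return np.cos(np.radians(theta_a_deg - theta_b_deg)) ** 2

angles = [0, 60, 120]  # the three "buttons"

# Identical settings: perfect correlation, as the premise requires.
for a in angles:
    assert np.isclose(p_same(a, a), 1.0)

# Crossed settings all differ by 60 or 120 degrees, so cos^2 gives 1/4,
# strictly below the local common-cause bound of 1/3.
crossed = [p_same(a, b) for a in angles for b in angles if a != b]
print(np.mean(crossed))  # ~0.25 (up to float rounding)
```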

Originally posted by ThomasT:
The spatially separated data streams aren't paired randomly, are they? That is, they're determined by, eg., coincidence circuitry; so that a detection at one end severely limits the sample space at the other end.

We compare the different formulations to each other and we compare them to the results of actual experiments. Quantum theory more closely approximates experimental results. We're trying to ascertain why.

As I had written in a previous post:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).

Why "not really"? Don't you think these assumptions are part (vis-à-vis classical optics) of the quantum mechanical approach?

I didn't say you NEED to assume a common cause, just that that assumption was part of the development of the quantum mechanical treatment. Quantum theory says that if the commonly caused optical disturbances are emitted in opposite directions then they can be quantitatively linked via the conservation of angular momentum.

The experimental optical implementation, with imperfect sources and detectors, is only a very approximate approach to the very simple quantum-mechanical question of what happens to entangled pairs of particles, which are in the quantum state:

|up>|down> - |down>|up>

In the optical experiment, we are confronted with the fact that our source of entangled photons emits randomly, in time and in different directions, so that we can capture only a small fraction of the pairs, and we have no a priori timing information. But that is not a limitation of principle; it is a practical limitation of the lab.

So we use time coincidence as a way to ascertain a high probability that we are dealing with "two parts of an entangled pair". We also have to live with the limited detection efficiency of the photon detectors, which means they don't fire every time they receive a member of an entangled pair. But we can obtain a certain sample of pairs of which we are pretty sure that they ARE entangled pairs, as they show perfect (anti-)correlation, which would be inexplicable if they were of different origin.

It would be simpler, and it is in principle entirely possible, to SEND OUT entangled pairs of particles ON COMMAND, but we simply don't know how to make such an efficient source.
It would also be simpler if we had 100% efficient particle detectors. In that case, our experimental setup would resemble more closely the "black box" machine of Bell.

The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics?

Well, how do you explain perfect anti-correlation purely on the grounds of classical optics? If you have, say, an incident pulse on both sides with identical polarisation, which happens to be at 45 degrees with respect to the (identical) polariser axes at Bob and Alice, this would normally give Alice a 50% chance to see "up" and a 50% chance to see "down", and Bob too. How come, then, that they find the SAME outcome each time? That is, how come that when Alice sees "up" (remember, with 50% chance), Bob ALSO sees "up", and when Alice sees "down", Bob also sees "down"? You'd expect a total lack of correlation in this case, no?

Now, of course, the source won't always send out pairs polarised 45 degrees away from Alice's and Bob's axes, but sometimes it will. So how come that we find perfect correlation?
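To put numbers on the classical picture just described, here is a minimal Monte Carlo sketch (illustrative only; the 50/50 outcomes are the Malus-law probabilities cos²(45°) = 1/2 at each polariser):

```python
import random

random.seed(0)
N = 100_000
# Classical picture: each side's outcome is an independent coin flip, as
# classical optics would suggest for a pulse polarised at 45 degrees to
# the local polariser axis.
alice = [random.random() < 0.5 for _ in range(N)]
bob = [random.random() < 0.5 for _ in range(N)]
frac_same = sum(a == b for a, b in zip(alice, bob)) / N
# frac_same comes out near 0.5: no correlation at all, in contrast to the
# perfect correlation actually observed for entangled pairs.
print(frac_same)
```

The independent-coin-flip model lands at roughly 50% agreement, which is exactly the "total lack of correlation" the question points to.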

That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?

No, this is exactly what Bell analysed ! This would of course have been the straightforward explanation, but it doesn't work.
 
  • #48
vanesch said:
In fact, this view is more universal than the "backwards in time" view: while fermions (electrons, quarks...) CAN be seen as going backward and forward in time, and in that way "explain" creation and destruction, this doesn't work for bosons (photons, gluons). They suffer creation and destruction in any case. So what we thought we had won by allowing "back in time" for fermions (namely, an "explanation" for pair creation and annihilation) fails as an explanation for bosons anyway. Which makes the "back in time" view an unnecessary one.

I am not sure I understand what you mean by emphasising that the creation and destruction of bosons is problematic in the context of time reversal. I'd understand bosons to be absorbed and emitted (converted to/from energy absorbed or emitted by a fermion of the correct type), but I don't see how this understanding in any sense "screws up as an explanation in any case with bosons". Fermions and bosons seem such different things that one might equally well argue that bosons' liking to be in the same state screws up the idea that fermions hate to be in the same state. Trying to understand the behaviour of fermions as if they were bosons, or vice versa, seems to be comparing apples and oranges. Or is there some deep connection between bosons and fermions which relates the notion of change of charge in electrons to the emission/absorption of photons, and which is violated if positrons are imagined to be electrons moving backwards in time?

I had thought that photons were particularly good candidates to consider as being time reversed, because under time reversal none of their fundamental characteristics change, so logically we can't even say which way in time a photon is travelling. I've no idea how gluons would manifest themselves under time reversal, but at a guess they'd provide the nuclear forces associated with anti-protons, etc. Time-reversed gravitons, at a guess, would be good candidates to explain the force of dark energy, because their exchange viewed from our perspective would be pulling things apart, while from the perspective of backwards time they would behave just like gravitons in creating an attraction between masses. All very lay reasoning, with no math to back it, but I've not encountered the notion that it is more unreasonable to imagine bosons moving backwards in time than fermions, so I wish to know more, the better to improve my understanding of what can and can't be.
 
Last edited:
  • #49
vanesch said:
No, not at all, the 8 doesn't come from the fact that there are 3 "slots" in the D-function! It comes from the fact that there are only 8 different cases of Alice pushing A,B or C and getting green or red, together with the assumption of perfect correlation which means that Bob needs to get the same result if he pushes the same button.

Try to re-read the proof carefully. I think you totally misunderstood the mathematical reasoning.

Sorry, now you have me totally confused, and rereading your proof and prior posts is of no help. Given this current statement, I am at a loss to understand what the purpose of the D-function was in your prior posts.
Nor can I figure out what kind of math you used to take three options by Alice times three options by Bob and get 8 different cases. My off-the-cuff gorilla math gives 3 x 3 = 9.

Perhaps I missed the point that the 3 functions for Alice must be identical to the 3 functions used by Bob. That would unfairly eliminate the ability of Alice or Bob to randomly select, from say a population of 180 different functions, which ones they use for their private options A, B & C.

Sure, it may be beneficial to reduce the options for calculation purposes. But if the options are contrived so as to eliminate any independence in selection between Alice and Bob, such as would exist in reality, then the results obtained from such contrived options would need to be doubled to account for blocking that independence from the calculation, in order to represent a minimum possible result if that degree of independence were allowed in, as should be expected in reality.

If you're willing to do a bit of rereading yourself, notice in my example as given in post #9 that only the first of nine outcomes requires that both Alice and Bob only see RED if the other does too.
The kind of "toy examples" you describe, where (AA) (BB) (CC) always has Alice and Bob seeing Red together, requires a form of superdeterminism you should be avoiding! I consider demanding that A, B & C be exactly the same A, B, C functions used by the other observer an unrealistic elimination of a real indeterminate variability that cannot be ignored, i.e. a "toy example" that cannot be expected to be equivalent to real life.

As I said before, I don't see how my doubts on this point can be changed.
And if, after due reflection, you still have no doubt at all about the assumptions used and conclusions made in this method, then you won't be changing your mind either.
That simply leaves us with differing opinions about whether there is any scientific doubt remaining in this approach and its claims.
I think there is doubt and you do not; that is OK by me.

We both gave it our best shot; no need for either of us to struggle on in a pointless debate that has no hope of changing the other's opinion. So I will leave it at that.
 
  • #50
RandallB said:
The kind of "toy examples" you describe, where (AA) (BB) (CC) always has Alice and Bob seeing Red together, requires a form of superdeterminism you should be avoiding! I consider demanding that A, B & C be exactly the same A, B, C functions used by the other observer an unrealistic elimination of a real indeterminate variability that cannot be ignored, i.e. a "toy example" that cannot be expected to be equivalent to real life.
Aren't A, B & C supposed to stand for different possible angles the experimenters can choose to measure the spins of their respective particles? Do you agree that QM predicts that when you have two particles with entangled spins, if we measure each particle's spin on the same axis, they should always have opposite spins with 100% probability?
 
  • #51
Well, sure, but what's that got to do with not allowing Alice and Bob the opportunity to select the angles for A, B & C independently?
Are you saying A, B & C must all be the same (0 or 90 degree shifts) as the three functions used by the other observer?
Don't you think that predetermining the types of tests Alice and Bob are allowed to use, down to a set of three identical functions, eliminates an important element of independence between them?

From such a starting point it seems more like a gimmick designed to produce (I'm sure not intentionally) an expected or wanted result than a rational evaluation of all the independent variables possible in the problem. I really don't see where it is any better than the von Neumann proof.
 
  • #52
RandallB said:
Well, sure, but what's that got to do with not allowing Alice and Bob the opportunity to select the angles for A, B & C independently?
They do select them independently. Say on each trial Alice has a 1/3 chance of choosing A, a 1/3 chance of choosing B, and a 1/3 chance of choosing C, and the same goes for Bob. Then on some trials they will make different choices, like Alice-B and Bob-C. But on other trials they will happen to make the same choice, like both choosing B. What I'm saying is that if we look at the subset of trials where they both happen to choose the same angle, they are 100% guaranteed to get opposite spins (or guaranteed to get the same color lighting up in vanesch's example--either way the correlation is perfect). Do you agree?
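That trial structure can be sketched in a few lines of Python (a hypothetical simulation, assuming each pair carries predetermined, anti-correlated answers for all three angles, as local realism requires):

```python
import random

random.seed(1)
results = []
for _ in range(10_000):
    # Local-realist assumption: one predetermined answer per angle, fixed at
    # the source; Bob's particle carries the opposite answers to Alice's.
    hidden = {angle: random.choice((+1, -1)) for angle in "ABC"}
    a_set, b_set = random.choice("ABC"), random.choice("ABC")
    results.append((a_set, b_set, hidden[a_set], -hidden[b_set]))

# Subset of trials where Alice and Bob happened to pick the same angle:
same_setting = [(x, y) for a, b, x, y in results if a == b]
print(len(same_setting))                      # roughly a third of the trials
print(all(x == -y for x, y in same_setting))  # True: perfect anti-correlation
```

With independent 1/3 choices on each side, about a third of the trials use matching settings, and on every one of those the predetermined answers give opposite results, which is exactly the perfect correlation being discussed.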
RandallB said:
Are you saying A, B, & C must all be the same (0 or 90 degree shifts) from the three functions used by the other observer?
I don't understand what you're asking here. A, B & C are three distinct angles, like 0, 60, 90 or something. When you say they "must all be the same", are you talking about the assumption that each of the two particles must have a predetermined response for how it will behave if it's measured on any of the three angles? If so, this is just something we must assume if we want to believe in local realism and still explain how the particles' responses are always perfectly correlated when the experimenters happen to pick the same angle.
RandallB said:
Don’t you think that eliminates an important element of independence between Alice and Bob to predetermine the types of tests they are allowed to use down to a set of three identical functions?
I'm not sure what you're asking here either, maybe if you clarify what you meant in the previous part it'll become more clear to me.
 
Last edited:
  • #53
Ian Davis said:
I had thought that photons were particularly good candidates to consider as being time reversed, because under time reversal none of their fundamental characteristics change, so logically we can't even say which way in time a photon is travelling. I've no idea how gluons would manifest themselves under time reversal, but at a guess they'd provide the nuclear forces associated with anti-protons, etc. Time-reversed gravitons, at a guess, would be good candidates to explain the force of dark energy, because their exchange viewed from our perspective would be pulling things apart, while from the perspective of backwards time they would behave just like gravitons in creating an attraction between masses. All very lay reasoning, with no math to back it, but I've not encountered the notion that it is more unreasonable to imagine bosons moving backwards in time than fermions, so I wish to know more, the better to improve my understanding of what can and can't be.
All the known fundamental laws of physics are already either time-symmetric (invariant under time-reversal) or CPT-symmetric (invariant under a combination of time reversal, matter/antimatter charge reversal, and parity inversion). For a time-symmetric set of laws, what this means is that if you take a movie of a system obeying those laws and play it backwards, there will be no way for another physicist to know for sure that you are playing the movie backwards rather than forwards, since the system's behavior in the backwards movie is still obeying exactly the same laws (though the backwards movie may appear statistically unlikely if it shows entropy decreasing in an isolated system). This is true of gravitation, which is perfectly time-symmetric--a backwards movie of a gravitating system will not involve the appearance of any kind of "antigravity", despite what you might think. I discussed this in post #68 here:
Actually, gravity is time-symmetric, meaning the laws of gravity are unchanged under a time-reversal transformation--in physical terms, this means that if you look at a film of objects moving under the influence of gravity, there's no way (aside from changes in entropy) to determine if you're watching the film being played forwards or if it's being played backwards. The reason it seems asymmetric is because of entropy, like how a falling object will smack the ground and dissipate most of its kinetic energy as sound and heat--if a falling object had a perfectly elastic collision with the ground so that no kinetic energy was dissipated in this way, each time it hit the ground it would bounce back up to an equal height as before, so this would look the same forwards as backwards (and the reversed version of the collision where kinetic energy is dissipated is not ruled out by the laws of physics, it's just statistically unlikely that waves of sound and the random jostling of molecules due to heat would converge to give a sudden push to an object that had been previously been resting on the ground...if it did happen, though, it would look just like a reversed movie of an object falling to the ground and ending up resting there). Likewise, any situation where no collisions are involved, like orbits, will still be consistent with the laws of gravity when viewed in reverse.
The idea behind CPT-symmetry is basically similar--if you take a movie of a system obeying CPT-symmetric laws, then play it backwards and take the mirror image so that the +x direction is now labeled -x, the +y now labeled -y and the +z now labeled -z (parity inversion) and you reverse the labels of particles and antiparticles (so that electrons in the original movie are now labeled as positrons in the reversed movie, and vice versa), then the new altered movie will still appear to be obeying the exact same laws as in the unaltered version.
 
  • #54
Ian Davis said:
I am not sure if I understand what you mean by emphasising that creation and destruction of bosons, being problematic in the context of time reversal. I'd understand bosons to be absorbed and emitted (converted to/from energy absorbed emitted by the fermion of the correct type), but I don't see how this understanding in any sense "screws up as an explanation in any case with bosons".

No, that's not what I wanted to say. I wanted to say that with fermions, one might "hate the idea" to have creation and annihilation, and then one can find an "explanation" for it, which is that fermions sometimes travel back in time. As such, one can then eliminate the need to consider "creation" and "annihilation".
But even allowing "traveling back in time", one cannot eliminate the need to consider "creation" and "annihilation" of bosons. So if you ANYHOW have to consider creation and annihilation for bosons (the very thing you wanted to avoid, and adopted the "back in time" explanation for), then you can just as well accept it for fermions, and any NEED to consider back-in-time propagation vanishes, as its explanatory power (its ability to do away with creation and annihilation) was never working for bosons in any case.

In other words, the assumption that particles go back in time is never needed, as it doesn't explain anything. And we can explain everything in QFT with particles going forward in time, and considering creation/annihilation.

I had thought that photons were particularly good candidates to consider as being time reversed because under time reversal none of their fundamental characteristics change, so logically we can't even suggest which way in time a photon is travelling.

This is correct, so we can just as well take it that it goes forward, no ? It will not be possible to DEMONSTRATE that it goes backward in time, and that it CAN'T be seen as traveling forward in time. And this brings us back to the original article: something that complies with actual theory can never PROVE that it went back in time !
 
  • #55
RandallB said:
Sorry, now you have me totally confused, and rereading your proof and prior posts is of no help. Given this current statement, I am at a loss to understand what the purpose of the D-function was in your prior posts.

The whole idea of Bell's proof is that whether the red or the green light lights up at the Alice box is given by a probability that is determined by the "local inputs", which are two-fold: an input that comes from the "central box", and the button that Alice pushes.

That is, GIVEN these inputs, so given the message from the central box, and the choice of Alice, this gives us a probability for there to be "red" as a result (and hence, the complementary probability to have "green" of course).

Now, this can be a genuine probability, like, say, 0.6, or it can be a certainty, which comes down to the probability to be 0 (green for sure) or 1 (red for sure). We leave this open.

So GIVEN the message from the central box (lambda1 if you want), and GIVEN the choice by Alice (X, which is A, B or C), we have a function, which is P(X,lambda1), and gives us that famous probability.

We can hold the same reasoning at Bob's, where the function will be Q(Y,lambda2).

Now, D is the expectation value of the correlation function of Alice's and Bob's outcomes, when they have picked respectively X and Y, and when the message lambda1 was sent to Alice, and the message lambda2 was sent to Bob.

D is nothing else but the probability to have (red,red) times +1, plus the probability to have (green,green) times +1, plus the probability to have (red,green) times -1, plus the probability to have (green,red) times -1, under the assumption that Alice pushed X, that Bob pushed Y, that lambda1 was sent to Alice, and that lambda2 was sent to Bob.

As we assume that the "drawing" is done locally (all "common information" is already taken care of by the messages lambda1 and lambda2, so we only look at the REMAINING uncertainties), we can assume that the probability to have, say, (red,red) is given by:

P(X,lambda1) x Q(Y,lambda2).

The probability to have, red-green is given by:
P(X,lambda1) x (1 - Q(Y,lambda2) )

etc...

And from this, we can calculate the above D function (the expectation over the remaining probabilities, given X, Y, lambda1 and lambda2) and we find:

D(X,Y, lambda1, lambda2) = ( 1 - 2 x P(X,lambda1) ) x (1 - 2 x Q(Y, lambda2))
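A quick numerical sanity check of this identity (a sketch, with p and q standing for the probabilities P(X,lambda1) and Q(Y,lambda2)):

```python
import random

random.seed(2)
# E[correlation] = pq + (1-p)(1-q) - p(1-q) - (1-p)q, which should equal
# (1 - 2p)(1 - 2q) for any probabilities p and q, as derived in the post.
for _ in range(1_000):
    p, q = random.random(), random.random()
    lhs = p*q + (1 - p)*(1 - q) - p*(1 - q) - (1 - p)*q
    rhs = (1 - 2*p) * (1 - 2*q)
    assert abs(lhs - rhs) < 1e-12
print("identity holds for 1000 random (p, q) pairs")
```

Expanding the left-hand side gives 4pq - 2p - 2q + 1, which factors exactly as (1 - 2p)(1 - 2q).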

Now there is a triviality, which seems to be confusing you, which I applied:
we can define a new mathematical structure: lambda = { lambda1, lambda2 }. If lambda1 is a real number and lambda2 is a real number, then lambda can be seen as a 2-dim vector. If lambda1 is a text file and lambda2 is a text file, then lambda can be seen as the concatenation of the two text files. It is just NOTATION.

Now, if in all generality, you have a function f(x), you can ALWAYS define a function g(x,y) which is equal to f(x) for all values of y, of course.
So if P(X,lambda1) is a function of lambda1, you can ADD lambda2 as an argument, which doesn't do anything: P'(X,lambda1,lambda2) = P(X,lambda1).
Same for Q, we can define Q'(Y,lambda1,lambda2) = Q(Y, lambda2).

But we have the "vector" notation lambda which stands for {lambda1, lambda2}, so we can write P'(X,lambda) and Q'(Y,lambda). They just have one "useless" extra argument, but they are the same functions, just as g(x,y) is in fact just f(x), and y doesn't play a role. But if this confuses you, I will continue to write lambda1, lambda2.

So we can write:
D(X,Y, lambda1, lambda2) = ( 1 - 2 x P'(X,lambda1,lambda2) ) x (1 - 2 x Q'(Y, lambda1,lambda2))

And we can drop the ', and call P', simply P, and Q' simply Q.

So we can write:
D(X,Y, lambda1, lambda2) = ( 1 - 2 x P(X,lambda1,lambda2) ) x (1 - 2 x Q(Y, lambda1,lambda2))

Ok, so D was the expectation value of the correlation, GIVEN the choice of Alice and Bob, and GIVEN the (hidden) messages sent from the central box.

It is important to note that D is always a real number between -1 and +1. This comes from the fact that P and Q are probabilities, and hence between 0 and 1.

Now, we assume that those messages themselves are randomly sent out with a given probability distribution. That means, there's a certain probability Pc(lambda1,lambda2) to send out a specific couple of messages, namely {lambda1,lambda2}.

Given that Alice and Bob can't see those messages, THEIR correlation function (for a given choice X and Y) will be the expectation value of D over this probability distribution of the couples (lambda1, lambda2), right? Bob and Alice will "average" their correlation function over the messages.

So how does this work out? Well, you have to sum, of course, each value of D(X,Y,lambda1,lambda2) multiplied by the probability that the messages sent out are {lambda1,lambda2}. THIS will give you the correlation function that Bob and Alice find when they picked X and Y, in other words, C(X,Y).

So we have that:

[tex]
C(X,Y) = \sum_{(\lambda_1,\lambda_2)} D(X,Y,\lambda_1,\lambda_2) \, P_c(\lambda_1,\lambda_2)
[/tex]

This "sum" can be an integral over whatever is the set of the couples (lambda1,lambda2). It can be a huge set. In the case of text files, we have to sum over all thinkable couples of textfiles (but some might have probability Pc=0 of course). In the case of real numbers, we have to integrate over the plane. It doesn't matter.

The above expression is valid for the 9 different C(X,Y) values: for C(A,A), for C(A,B),...

But we KNOW certain C values: C(A,A) = 1 for instance. Does C(A,A) = 1 impose a condition on D or on Pc ?

Yes, it does. This is the whole point. Let us write out the above expression for the case C(A,A):

[tex]
C(A,A) = 1 = \sum_{(\lambda_1,\lambda_2)} D(A,A,\lambda_1,\lambda_2) \, P_c(\lambda_1,\lambda_2)
[/tex]

Now,
[tex]
\sum_{(\lambda_1,\lambda_2)} P_c(\lambda_1,\lambda_2) = 1
[/tex]

because it is a probability distribution. All Pc values are between 0 and 1, and D(A,A,lambda1,lambda2) is a number between -1 and 1, so such a weighted sum can only be equal to 1 if ALL D(A,A,lambda1,lambda2) values are equal to 1 (at least for those lambda1 and lambda2 for which Pc is not equal to 0).
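The averaging step can be illustrated with a tiny numeric sketch (the weights below are made up, purely for illustration): if the weights are nonnegative and sum to 1, and every value lies between -1 and 1, then a single value below 1 with nonzero weight drags the weighted mean below 1.

```python
# Hypothetical Pc weights over three message-couples, summing to 1.
weights = [0.2, 0.5, 0.3]
# Hypothetical D(A,A,...) values: one of them is below 1 ...
values = [1.0, 1.0, 0.9]
mean = sum(w * v for w, v in zip(weights, values))
print(mean)  # ... so the weighted mean drops strictly below 1
```

Turning this around: since the observed C(A,A) equals exactly 1, no D(A,A,...) value with nonzero weight can be below 1.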

So we know that D(A,A,lambda1, lambda2) = 1 for all lambda1, and all lambda2.

But we also know that D(A,A,lambda1,lambda2) = ( 1 - 2 x P(A,lambda1,lambda2) ) x (1 - 2 x Q(A, lambda1,lambda2))

So we have that:
( 1 - 2 x P(A,lambda1,lambda2) ) x (1 - 2 x Q(A, lambda1,lambda2)) = 1 for all lambda1, and lambda2.

Well, (1 - 2 x) (1 - 2 y), with x and y between 0 and 1, can only be equal to 1 in two different cases:

x = y = 1 OR

x = y = 0.

This means that for each couple (lambda1, lambda2) we have only 2 possibilities:

EITHER
P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 1

OR
P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0

Of course, if you take a random lambda1 and lambda2, it can be, say 1, and if you take another lambda1 and lambda2, it can be 0, but it is in each case one of both.

So this means we can split the whole set of (lambda1,lambda2) couples into two parts:
those couples that give P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 1 and then the other couples, which necessarily give: P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0.

Concerning P(A,lambda1,lambda2), we hence don't need to know precisely what are lambda1, and lambda2 (text files, numbers,...), but just whether they fall in the first part, or in the second, because in the first part, P(A,lambda1,lambda2) will be equal to 1, and in the second part, it will be 0. In ANY case, P(A,lambda1,lambda2) = Q(A,lambda1,lambda2).

So if we know in which of the part the couple (lambda1,lambda2) falls, we know enough about it to know the value of P(A,lambda1,lambda2) and Q(A,lambda1,lambda2). It is either 1 or 0. So the split of the set of couples (lambda1,lambda2) comes about because of the fact that we deduced that in any case, P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) can only take up 2 possible values.

Now, we apply the same reasoning to C(B,B) = 1 and then to C(C,C) = 1, and we will then have 3 "partitions in two" of the set of (lambda1,lambda2) couples. The first partition, as we showed, determines the value of P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0 or 1. The second partition determines the value of P(B,lambda1,lambda2) = Q(B,lambda1,lambda2) = 0 or 1. And the last one does so for P(C,lambda1,lambda2) = Q(C,lambda1,lambda2) = 0 or 1.

Now, if you apply 3 different partitions in 2 parts to any set, you will end up with at most 8 pieces. So our entire set of couples (lambda1,lambda2) is now cut in 8 pieces, and if we know in which piece a couple falls, we know what will be the results for the 6 functions:
P(A,lambda1,lambda2), P(B,lambda1,lambda2), P(C,lambda1,lambda2), Q(A,lambda1,lambda2), Q(B,lambda1,lambda2), Q(C,lambda1,lambda2).

Each of these functions is constant over each of the 8 different pieces of the set of (lambda1,lambda2) couples (either it is 1 or it is 0).

Now, if we know these 6 values, we know also the 9 values of
D(A,A,lambda1,lambda2), D(A,B,lambda1,lambda2), D(A,C,lambda1,lambda2) ...
D(C,C,lambda1,lambda2).

Each of these functions is CONSTANT over each of the 8 different pieces of our (lambda1,lambda2) set, because they depend on the P and Q functions which are constant. We can call these constant values D(X,Y,firstslice), D(X,Y,secondslice) ...
D(X,Y,8thslice)

Now, pick one of these, say, D(A,B,lambda1,lambda2). This function can take on at most 8 different values, because we have only 8 different slices. But in fact it can take on only 4, because the 8 slices also distinguish the value of P(C,lambda1,lambda2), which doesn't enter into the calculation of D(A,B,lambda1,lambda2); so among our 8 different "slices", the two slices that differ only in P(C,lambda1,lambda2) give the same value of D.
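These slices can be enumerated directly in a few lines (a sketch of the counting argument above):

```python
from itertools import product

# Each slice fixes one value (0 or 1) per button, shared by P and Q because
# of the perfect-correlation argument; D(X,Y) = (1 - 2 P(X)) (1 - 2 Q(Y)).
d_ab_values = set()
for pa, pb, pc in product((0, 1), repeat=3):
    P = {"A": pa, "B": pb, "C": pc}
    D = {(x, y): (1 - 2*P[x]) * (1 - 2*P[y]) for x in "ABC" for y in "ABC"}
    assert D["A", "A"] == D["B", "B"] == D["C", "C"] == 1
    d_ab_values.add(D["A", "B"])
print(sorted(d_ab_values))  # [-1, 1]: D(A,B) is always +1 or -1
```

The enumeration confirms both facts used in the derivation: the diagonal D(X,X) is 1 on every slice, and a crossed D like D(A,B) only ever takes the values +1 and -1.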

Now, if we go back to
[tex]
C(X,Y) = \sum_{(\lambda_1,\lambda_2)} D(X,Y,\lambda_1,\lambda_2) \, P_c(\lambda_1,\lambda_2)
[/tex]

and split the sum over the entire set of couples (lambda1,lambda2) into the 8 different slices:

[tex]
C(X,Y) = \sum_{(\lambda_1,\lambda_2) \in \text{first slice}} D(X,Y,\lambda_1,\lambda_2) \, P_c(\lambda_1,\lambda_2) \; +
[/tex]
[tex]
\sum_{(\lambda_1,\lambda_2) \in \text{second slice}} D(X,Y,\lambda_1,\lambda_2) \, P_c(\lambda_1,\lambda_2) \; + \; \dots \; +
[/tex]
[tex]
\sum_{(\lambda_1,\lambda_2) \in \text{8th slice}} D(X,Y,\lambda_1,\lambda_2) \, P_c(\lambda_1,\lambda_2)
[/tex]

But within the first slice, D is constant! And within the second slice, too...
So we can bring this outside:

[tex]
C(X,Y) = D(X,Y,\text{first slice}) \sum_{(\lambda_1,\lambda_2) \in \text{first slice}} P_c(\lambda_1,\lambda_2) \; +
[/tex]
[tex]
D(X,Y,\text{second slice}) \sum_{(\lambda_1,\lambda_2) \in \text{second slice}} P_c(\lambda_1,\lambda_2) \; + \; \dots \; +
[/tex]
[tex]
D(X,Y,\text{8th slice}) \sum_{(\lambda_1,\lambda_2) \in \text{8th slice}} P_c(\lambda_1,\lambda_2)
[/tex]

And now the sums that remain, are nothing else but the sum of probabilities of each of the (lambda1,lambda2) couples in the first slice (which we call p1), of each of the (lambda1,lambda2) couples in the second slice (which we call p2), ...

So:
[tex]
C(X,Y) = D(X,Y,\text{first slice})\, p_1 + D(X,Y,\text{second slice})\, p_2 + \dots + D(X,Y,\text{8th slice})\, p_8
[/tex]

But let us look a bit deeper into D(X,Y,firstslice). In the first slice, we have that P(A,lambda1,lambda2) = 1 = Q(A,lambda1,lambda2) AND
P(B,lambda1,lambda2) = 1 = Q(B,lambda1,lambda2) AND
P(C,lambda1,lambda2) = 1 = Q(C,lambda1,lambda2)

So this means that D(X,Y,firstslice) = 1 for all X and Y !

Now in the second slice, we have that:
P(A,lambda1,lambda2) = 1 = Q(A,lambda1,lambda2) AND
P(B,lambda1,lambda2) = 1 = Q(B,lambda1,lambda2) AND
P(C,lambda1,lambda2) = 0 = Q(C,lambda1,lambda2)

So this means that D(A,B,secondslice) = 1, D(A,C,secondslice) = -1, ...

Etc,...

In fact, we will find that those famous constants are just 1 or -1, and we can calculate them (using D(X,Y) = (1-2P(X)) (1-2Q(Y)) ) in each slice. So there aren't even 4 possibilities for D, but only 2!

Given this, it means that we can calculate each of the 9 functions:
C(X,Y) as sums and differences of p1, p2, p3, ... p8.
But of course, we already know that C(A,A) = C(B,B) = C(C,C) = 1, because we imposed this. If you do the calculation (do it as an exercise!), you will find that each time they come out to be p1 + p2 + ... + p8 = 1. That is because D(A,A,...) = 1 for all of the slices, and likewise D(B,B,...) = 1 and D(C,C,...) = 1, as we already deduced before.
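The exercise can also be checked by machine (a sketch; the weights p1..p8 below are an arbitrary random probability distribution over the 8 slices):

```python
import random
from itertools import product

random.seed(3)
slices = list(product((0, 1), repeat=3))   # the 8 slices: one 0/1 value per button
raw = [random.random() for _ in slices]
p = [w / sum(raw) for w in raw]            # arbitrary p1..p8, summing to 1

def C(x, y):
    """C(X,Y) = sum over slices of p_i * D(X,Y, slice_i), with
    D = (1 - 2 P(X)) (1 - 2 Q(Y)) and P = Q constant on each slice."""
    idx = {"A": 0, "B": 1, "C": 2}
    return sum(pi * (1 - 2*s[idx[x]]) * (1 - 2*s[idx[y]])
               for pi, s in zip(p, slices))

for setting in "ABC":
    assert abs(C(setting, setting) - 1) < 1e-12
print("C(A,A) = C(B,B) = C(C,C) = 1 for any distribution p1..p8")
```

Whatever distribution p1..p8 you choose, the diagonal correlations come out as p1 + p2 + ... + p8 = 1, exactly as claimed; only the crossed correlations C(A,B), C(B,C), C(A,C) depend on the distribution.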
 
Last edited:
  • #56
Originally Posted by ThomasT
A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes.

vanesch said:
I have no idea why you think that the correlation function should be linear ?

Where did I say that I think it should be linear? I said that a perfect correlation would be linear. But I wouldn't expect that.

Originally Posted by ThomasT
(And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

vanesch said:
But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

I don't think that's what Bell's theorem actually analyses, or maybe I just don't understand what you're saying. Anyway, let's continue.

vanesch said:
It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.

Keep in mind that we're not correlating what happens at A with what happens at B. We're correlating angular difference with coincidental detection.

If you only plot coincidence rates corresponding to 0 and 90 degree angular difference, then connect the dots, then you get a straight line, don't you? What does that tell you? It doesn't tell me much of anything necessarily.

vanesch said:
The experimental optical implementation, with approximate sources and detectors, is only a very approximative approach to the very simple quantum-mechanical question of what happens to entangled pairs of particles, which are in the quantum state:

|up>|down> - |down> |up>

In the optical experiment, we are confronted with the fact that our source of entangled photons emits randomly, in time and in direction; we can capture only a small fraction of the pairs, and we have no a priori timing information. But that is not a limitation of principle, it is a limitation of practicality in the lab.

So we use time coincidence as a way to ascertain that we have a high probability of dealing with "two parts of an entangled pair". The photon detectors also have limited efficiency, so they do not fire every time a member of an entangled pair arrives. But we can select a sample of pairs of which we are pretty sure that they ARE entangled pairs, as they show perfect (anti-)correlation, which would be unexplainable if they were of different origin.

It would be simpler, and it is in principle entirely possible, to SEND OUT entangled pairs of particles ON COMMAND, but we simply don't know how to make such an efficient source.
It would also be simpler if we had 100% efficient particle detectors. In that case, our experimental setup would more closely resemble the "black box" machine of Bell.

I take it, although I'm not sure, that you don't agree with:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).

So, I'll ask you again:
Don't you think these assumptions, (1) and (2) above, are part (vis a vis classical optics) of the quantum mechanical approach?

Originally Posted by ThomasT
The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics?

vanesch said:
Well, how do you explain perfect anti-correlation purely on the grounds of classical optics? Suppose an incident pulse arrives on both sides with identical polarisation, which happens to be 45 degrees wrt the (identical) polariser axes at Bob and Alice. That normally gives Alice a 50% chance to see "up" and a 50% chance to see "down", and Bob too. How come that they find each time the SAME outcome? That is, how come that when Alice sees "up" (remember, with 50% chance), Bob ALSO sees "up", and when Alice sees "down", Bob also sees "down"? You'd expect a total lack of correlation in this case, no?

Now, of course, the source won't always send out pairs which are 45 degrees away from Alice's and Bob's axis, but sometimes it will. So how come that we find perfect correlation ?

I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference. That wouldn't mean anything. The rate is always (in the ideal) a certain number associated with a certain angular difference. In ascertaining the correlation between angular dependence and coincidence rate you would want to plot as many rates with respect to different angular differences as you could.

If you know (have produced) the polarization of the incident light, then you can use a classical treatment, can't you? The problem is that we don't know anything about the incident pulses. Quantum theory makes two assumptions: (1) they had a common source, and (2) they are, in effect, the same thing.

Anyway, I was talking about viewing the relationship between angular dependence and coincidence rate from the perspective of classical optics -- not actually calculating the results using classical optics.
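As an aside, the classical-optics prediction vanesch describes (independent Malus-law outcomes at each detector) can be checked with a quick Monte Carlo sketch. This is my own toy code; the function name, seed, and trial count are illustrative:

```python
import math
import random

def classical_agreement(trials=200_000, axis=0.0, seed=0):
    """Local stochastic model: both pulses share one random polarization
    theta; each detector *independently* answers 'up' with Malus-law
    probability cos^2(theta - axis).  Returns how often Alice and Bob
    agree when their analyser axes are identical."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        theta = rng.uniform(0.0, math.pi)
        p_up = math.cos(theta - axis) ** 2
        alice_up = rng.random() < p_up
        bob_up = rng.random() < p_up  # independent coin with the same bias
        agree += alice_up == bob_up
    return agree / trials

# Averaged over theta, E[p^2 + (1-p)^2] = 3/4: independent Malus-law
# coins agree only about 75% of the time, never the observed 100%.
print(classical_agreement())
```

The point of the sketch is the one vanesch makes: a shared polarization plus independent detection statistics at each side cannot reproduce the perfect correlation seen with equal settings.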

Originally Posted by ThomasT
That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?

vanesch said:
No, this is exactly what Bell analysed ! This would of course have been the straightforward explanation, but it doesn't work.

I think you're wrong about this, because it happens to be exactly what the developers of quantum theory did assume. However, in order to do accurate calculations and develop a consistent mathematical framework for the theory it was necessary to leave out certain details (about polarization for example) that were part of the classical theory, but which led to calculational problems when applied to quantum experimental phenomena. One simply can't say anything about the angle of polarization of the light incident on the polarizers-analyzers.

In place of all the metaphysical stuff of classical physics we have the quantum superposition of states (which doesn't pretend to be anything other than a mathematical contrivance).
 
  • #57
So what exactly is the difference between determinism and superdeterminism?
 
  • #58
ThomasT said:
So what exactly is the difference between determinism and superdeterminism?
I wrote something about this in post #29 of this thread:
From reading the wikipedia article I get the impression that superdeterminism is basically the same as the notion of a "conspiracy" in the initial conditions of the universe, which ensures that the hidden-variables state in which two particles are created will always be correlated with the "choice" of measurements that the experiments decide to make on them. So, for example, in any trial where the experimenters were predetermined to measure the same spin axis, the particles would always be created with opposite spin states on that axis, but in trials where the experimenters were not predetermined to measure the same spin axis, the hidden spin states of the two particles on any given axis would not necessarily be opposite.

Since in a deterministic universe the state of an experimenter's brain which determines his "choice" of what to measure on a given trial can be influenced by a host of factors in his past which have nothing to do with the creation of the particle (what he had for lunch that day, for example), the only way for such correlations to exist would be to pick very special initial conditions of the universe--the correlations would not be explained by the laws of physics alone (unless this constraint on the initial conditions is itself somehow demanded by the laws of physics).
 
  • #59
ThomasT said:
After reading a quote (from a BBC interview) of Bell about superdeterminism, I still don't understand the difference between superdeterminism and determinism. From what Bell said they seem to be essentially the same.

Is it that experimental violations of Bell inequalities show that the spatially separated data streams are statistically dependent?

Or, is it that there is a statistical dependence between coincidental detections and associated angular differences between polarizers (that's the only way that I've seen the correlations mapped)?
My understanding is that the "statistical independence" that is violated in superdeterminism is the independence of each particle's state prior to being measured (including the state of any 'hidden variables' associated with the particle) from each experimenter's choice of what angle to set their detector when measuring the particle. In other words, whatever it is that determines the particle's state, it must act as if it does not "know in advance" how the experimenter is going to choose to measure it. If there is a spacelike separation between the two experimenters' measurements, then if we find that the particles always give opposite results whenever the experimenters both happen to choose the same angle, the only way to explain this in a local realist universe is if both particles had predetermined answers to what result they'd give when measured on that angle, and both were assigned opposite predetermined answers when they were created at a common location. But if nature acts as if it doesn't "know in advance" what angles the experimenters will choose, then the only conclusion for a local realist must be that on every trial the particles are assigned predetermined (opposite) answers for what result they'll give when measured on any possible choice of angle.

Do you disagree with any of this?
 
  • #60
JesseM said:
My understanding is that the "statistical independence" that is violated in superdeterminism is the independence of each particle's state prior to being measured (including the state of any 'hidden variables' associated with the particle) from each experimenter's choice of what angle to set their detector when measuring the particle. In other words, whatever it is that determines the particle's state, it must act as if it does not "know in advance" how the experimenter is going to choose to measure it. If there is a spacelike separation between the two experimenters' measurements, then if we find that the particles always give opposite results whenever the experimenters both happen to choose the same angle, the only way to explain this in a local realist universe is if both particles had predetermined answers to what result they'd give when measured on that angle, and both were assigned opposite predetermined answers when they were created at a common location. But if nature acts as if it doesn't "know in advance" what angles the experimenters will choose, then the only conclusion for a local realist must be that on every trial the particles are assigned predetermined (opposite) answers for what result they'll give when measured on any possible choice of angle.

Do you disagree with any of this?

No, I don't disagree. But I still wouldn't be able to answer the question: what is the definition of superdeterminism. So, I don't agree either. :smile:

Thanks for the effort. I was looking for something a bit shorter. Is there a clear, straightforward definition for the term or isn't there?
 
  • #61
ThomasT said:
No, I don't disagree. But I still wouldn't be able to answer the question: what is the definition of superdeterminism. So, I don't agree either. :smile:

Thanks for the effort. I was looking for something a bit shorter. Is there a clear, straightforward definition for the term or isn't there?
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
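That lack of statistical independence is easy to caricature in code. The following is my own toy sketch, not any published model; the 25% figure is illustrative (it matches the 1/4 quantum value in the lotto-card example later in the thread). A source that "knows" both future settings can hand out purely local, predetermined outcomes and still mimic any target statistics:

```python
import random

def conspiratorial_source(a_setting, b_setting, p_opposite=0.25, rng=random):
    """Superdeterministic toy: the source already 'knows' both future
    detector settings, so it can give each particle a predetermined
    local outcome -- guaranteed opposite for equal settings, and
    opposite only 25% of the time otherwise.  No signalling happens
    at measurement time; the 'conspiracy' is all in the preparation."""
    alice = rng.choice([+1, -1])
    if a_setting == b_setting:
        return alice, -alice  # always anti-correlated
    bob = -alice if rng.random() < p_opposite else alice
    return alice, bob
```

This is exactly why superdeterminism evades Bell's theorem: the theorem's derivation assumes the hidden state is statistically independent of the settings, and this toy source violates that assumption by construction.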
 
  • #62
JesseM said:
I wrote something about this in post #29 of this thread:
Thanks, that thread was most helpful. My take on it is that the ideas of superdeterminism and determinism, for the purpose of ascertaining the meaning of Bell's theorem, are essentially synonymous, and, more importantly, unnecessary.
 
  • #63
JesseM said:
I wrote something about this in post #29 of this thread:

JesseM said:
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
Put it in general form.
 
  • #64
ThomasT said:
Put it in general form.
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible, the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
 
  • #65
JesseM said:
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible, the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?
 
  • #66
JesseM said:
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible, the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
Random is defined at the instrumental level, isn't it? That being so, then the polarizer settings are random. But, the coincidence rates aren't random.

I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
These are the assumptions that quantum theory makes, and this is as far as it can go in talking about what is happening independent of observation. These assumptions come from the perspective of classical optics, and from these assumptions (and appropriate experimental designs) we would expect to see the observed angular dependency.

So, I don't think I need superdeterminism to avoid nonlocality.
 
  • #67
ThomasT said:
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?
I think that's all Bell meant by superdeterminism (see here and here), although different authors might not mean exactly the same thing by that word. Sometimes people talk about superdeterminism as a rejection of "counterfactual definiteness", meaning physics can no longer address questions of what would have happened if a different measurement had been made on the system, but I suppose this is just another way of saying that we cannot assume statistical independence between the choice of measurement on a system and the state of the system prior to measurement. Basically I think this amounts to a limitation on allowable initial conditions for the system and the experimenter, in statistical mechanics terms you can no longer assume that all microstates consistent with a given observed macrostate are physically allowable.
 
  • #68
ThomasT said:
Random is defined at the instrumental level, isn't it? That being so, then the polarizer settings are random. But, the coincidence rates aren't random.
No, the randomness here is about whether there's a correlation between the "hidden states" of particles prior to measurement and the experimenter's choice of what measurement setting to use, over a large number of trials. This is not a question that can be addressed "instrumentally", since by definition we have no way to find out what the hidden states on a given trial actually are. But if we take the perspective of an imaginary omniscient observer who knows the hidden states on each trial, it must be true that the observer either will or won't see a correlation between the complete state of a particle prior to measurement and the experimenter's choice of how to measure it--i.e. the particle either will or won't act as if it can "anticipate" in advance what the experimenter will choose.
ThomasT said:
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
But that's the whole idea that Bell's theorem intends to refute. Bell starts by imagining that the perfect correlation when both experimenters use the same detector setting is due to a common cause--each particle is created with a predetermined answer to what result it will give on any possible angle, and they are always created in such a way that they are predetermined to give opposite answers on each possible angle. But if you do make this assumption, it leads you to certain conclusions about what statistics you'll get when the experimenters choose different detector settings, and these conclusions are violated in QM.

Perhaps it would help if you looked at the example involving scratch lotto cards that I gave on another thread:
The key to seeing why you can't explain the results by just imagining the electrons had preexisting spins on each axis is to look at what happens when the two experimenters pick different axes to measure. Here's an analogy I came up with on another thread (for more info, google 'Bell's inequality'):

Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get opposite results--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a lemon.

Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card is always the opposite of the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the opposite of the other--if the first card was created with hidden fruits A+,B+,C-, then the other card must have been created with the hidden fruits A-,B-,C+.

The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find opposite fruits on at least 1/3 of the trials. For example, if we imagine Bob's card has the hidden fruits A+,B-,C+ and Alice's card has the hidden fruits A-,B+,C-, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be:

Bob picks A, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

Bob picks A, Alice picks C: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks B, Alice picks A: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks B, Alice picks C: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks C, Alice picks A: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks C, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

In this case, you can see that in 1/3 of trials where they pick different boxes, they should get opposite results. You'd get the same answer if you assumed any other preexisting state where there are two fruits of one type and one of the other, like A+,B+,C-/A-,B-,C+ or A+,B-,C-/A-,B+,C+. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, like A+,B+,C+/A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get opposite fruits with probability 1. So if you imagine that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C-/A-,B+,C+ while other pairs are created in homogeneous preexisting states like A+,B+,C+/A-,B-,C-, then the probability of getting opposite fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get opposite answers in less than 1/3 of trials where you scratch different boxes, provided you assume that each card has such a preexisting state with "hidden fruits" in each box.

But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got opposite fruits 1/4 of the time! That would be the violation of Bell's inequality, and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have opposite fruits in a given box.
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)? And if you agree with this much, do you agree or disagree with the conclusion that if you predetermine the answers in this way, this will necessarily mean that when they pick different boxes to scratch they must get opposite fruits at least 1/3 of the time?
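The 1/3 lower bound can be checked exhaustively in a few lines. This is a sketch of my own; the +1/-1 fruit encoding and the function name are just conventions:

```python
from itertools import permutations, product

def opposite_rate(card):
    """Exact fraction of the 6 different-box choices giving opposite
    fruits, when Alice holds `card` (fruit as +1/-1 per box) and Bob
    holds the mirror card with every fruit flipped."""
    pairs = list(permutations("ABC", 2))  # different boxes only
    opposite = sum(card[a] != -card[b] for a, b in pairs)
    return opposite / len(pairs)

# Scan all 8 possible predetermined cards: the rate is always >= 1/3,
# which is the Bell-type bound that a quantum prediction of 1/4 violates.
rates = [opposite_rate(dict(zip("ABC", bits)))
         for bits in product([+1, -1], repeat=3)]
assert min(rates) == 1 / 3 and max(rates) == 1.0
```

Mixed cards (two fruits of one kind, one of the other) give exactly 1/3; homogeneous cards give 1. No assignment of predetermined opposite fruits gets below 1/3, which is the content of the inequality.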

By the way, I also extended the scratch lotto analogy to a different Bell inequality in post #8 of this thread, if it helps.
 
  • #69
JesseM said:
They do select them independently.

I don't understand what you're asking here.
Of course you don't understand - you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles and expect that to represent "They do select them independently". We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a nonlocal site such as the other observer. In your example that means a selection of 6 angles (or at least five), such as ALICE (0, 60, 90) and BOB (0, 45, 120). I can see allowing one angle (like 0) to be considered to come up the same by chance. But all three, no; that would risk oversimplification of the problem to the point of making the conclusions unreliable. All I've been saying is that this has been oversimplified and leaves the conclusion incomplete.

I see no point in rereading the same explanations of the same thing with the same predetermined restrictions being enforced on the separate, should-have-been-independent observers.
Maybe the two of you believe this binary example is conclusive; IMO it is not.

I will close my input to this thread by requesting a binary opinion choice to clarify our differences and confirm our opinions really are different.

Last year the Kwiat Team at Illinois received and spent over $70,000 in funding on scientific testing aimed at closing “loopholes” in the EPR-Bell question represented in this example:

The Opinion choice is RED OR BLUE. pick only one

THE RED OPINION: And my opinion; agrees with scientists such as those on the Kwiat Team that do not consider any existing proof (including this binary one) conclusive. And that additional funding and experimental work on Bell EPR issues such as the tests at Illinois are justified.

THE BLUE OPINION: Your apparent position; that this binary proof is conclusive. Thus the efforts being expended and any additional funding of scientific testing of EPR-Bell issues is no longer justified. Such experiments as they exist along with this binary proof belong in an undergraduate teaching environment. And advanced labs should be concerned with more important work rather than rehashing old news no one has any doubts about.

Are you guys in fact picking BLUE as your opinion?

That is all I want your choice on this opinion RED or BLUE. No Green, no Gray, no Red&Blue, no explanations.
I'm satisfied that my choice of Red is reasonable and that a significant number of real practicing scientists share it.

If your choice really is Blue:
Please, I need no further matrix of explanations. Address your concerns to the active scientists who obviously feel differently as new advanced Bell-EPR type testing efforts continue. If you are successful in convincing any of those doing such testing to publicly agree with you that their testing has been unjustified and future funding of that type is no longer justified, then I'll know I need to relook at your arguments on this approach. No need to add them to this thread; just refer us to any papers you may publish to make your point with the scientists that need to stop wasting their efforts. If the details in your papers are enough to convince the scientific community to change their opinion to BLUE, it will be good reading for the rest of us.

I think we have shared more than enough on this with each other.
Other than looking for your opinion choice RED or BLUE I will unsubscribe from this thread.
 
  • #70
RandallB said:
Of course you don't understand - you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles
I'm not insisting there are only three possible angles, it's just a condition of the experiment that the two experimenters agree ahead of time that they will choose between three particular angles, even though there are many other possible angles they might have measured.
RandallB said:
and expect that to represent "They do select them independently".
Yes, they choose which of the three independently. Obviously, the three angles that they are choosing between were not themselves selected independently by the experimenters, as I said they made an agreement ahead of time along the lines of "on each trial, we'll always choose one of the three angles 0, 60, 90" or whatever.
RandallB said:
We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a nonlocal site such as the other observer.
What do you mean by "functions"? They could design their experiment so that each of the three buttons automatically set the detector to one of the three angles--button A might set it to 0, button B might set it to 60, and button C might set it to 90. It doesn't make sense to argue about the setup itself, because Bell's proof assumes this sort of setup, and then shows that the results QM predicts the experimenters will get when using this particular setup are inconsistent with local realism. Are you arguing that given this experimental setup, the results predicted by QM are not inconsistent with local realism?
RandallB said:
In your example that means a selection of 6 angles (or at least five). Such as ALICE ( 0, 60, 90)and BOB (0 45 120).
Again, it's just part of the assumed setup that they have each agreed to choose between the same three angles on each trial. If Alice is choosing between 0, 60, and 90, then Bob must have agreed to choose between 0, 60 and 90 as well. So on one trial you might have Alice-60 and Bob-90, on another trial you might have Alice-90 and Bob-0, but there will never be a trial where either of them picks an angle that isn't 0, 60, or 90 (if these are the three angles they have agreed in advance to pick between).
RandallB said:
I will close my input to this thread by requesting a binary opinion choice to clarify our differences and confirm our opinions really are different.

Last year the Kwiat Team at Illinois received and spent over $70,000 in funding on scientific testing aimed at closing “loopholes” in the EPR-Bell question represented in this example:

The Opinion choice is RED OR BLUE. pick only one

THE RED OPINION: And my opinion; agrees with scientists such as those on the Kwiat Team that do not consider any existing proof (including this binary one) conclusive. And that additional funding and experimental work on Bell EPR issues such as the tests at Illinois are justified.

THE BLUE OPINION: Your apparent position; that this binary proof is conclusive. Thus the efforts being expended and any additional funding of scientific testing of EPR-Bell issues is no longer justified. Such experiments as they exist along with this binary proof belong in an undergraduate teaching environment. And advanced labs should be concerned with more important work rather than rehashing old news no one has any doubts about.
I am not addressing the issue of whether actual experiments sufficiently resemble Bell's idealized thought-experiment to constitute experimental refutations of local realism, I'm just talking about theoretical predictions here. In Bell's thought-experiment, Bell's theorem shows definitively that any local realist theory must respect the Bell inequalities, and quantum theory definitively predicts the Bell inequalities will be violated in this experiment. When people talk about "loopholes" in EPR experiments that require better tests, they are pointing out ways in which previous experiments may have fallen short of the ideal thought-experiment (not successfully detecting every pair of particles, for example); they are not arguing that the predicted violations of Bell inequalities by quantum theory don't definitively prove that QM is incompatible with local realism (but experiments are needed to check if QM's predictions are actually correct in the real world). Do you agree that on a theoretical level, Bell's theorem shows beyond a shadow of a doubt that the predictions of QM are inconsistent with local realism?
 
