Questions about Bell: Answering Philosophical Difficulties

  • Thread starter krimianl99
  • Start date
  • Tags
    Bell
In summary, Bell's inequalities are in contradiction with the quantum mechanical predictions, and it is difficult to keep assumptions 1, 2 and 3 and not end up with Bell's inequalities.
  • #71
ThomasT said:
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?

Sort of. Superdeterminism is sometimes offered as a "solution" to Bell's Theorem that restores locality and realism. The problem is that it replaces it with something which is infinitely worse - and makes no sense whatsoever. Superdeterminism is not really a theory so much as a concept: like God, its value lies in the eyes of the beholder. As far as I know, no actual working theory has ever been put forth that passes even the simplest of tests.
 
  • #72
ThomasT said:
I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference.

Hum, no offense, but I think you totally misunderstood the EPR-Bell type experiments. There's a common source, two opposite arms (or optical fibers or anything) and two experimental setups: Alice's and Bob's.
Each experimental setup can be seen to consist of a polarizing beam splitter which splits the incoming light into an "up" part and a "down" part, and to each of these two channels, there's a photomultiplier. The angle of the polarizing beam splitter can be rotated.

Now, if a photon/lightpulse/... comes in which is "up" wrt the orientation of the beamsplitter, then that photon is (ideally) going to make the "up" photomultiplier click, and not the down one. If the photon is in the "down" direction, then it is going to make the down photomultiplier click and not the up one. If the photon is polarized at 45 degrees, then it will randomly either make the up one click and not the down one, or make the down one click and not the up one.

So, if a photon is detected, in any case one of the two photomultipliers at Alice will click. Never both. That's verified. But sometimes neither does, because of finite efficiency.

At Bob, we have the same.

Now, we look only at those pulses which are detected both at Alice and Bob: if at Alice something clicks, but not at Bob, we reject it, and also vice versa. This is an item which receives some criticism, but it is due to the finite efficiency of the photomultipliers.

However, what one notices is that if both Alice's and Bob's analyzers are parallel, then EACH TIME there is a click at Alice and at Bob, it is the SAME photomultiplier that clicks on both sides. That is, each time that Alice's "up" photomultiplier clicks, it is also Bob's "up" multiplier that clicks, NEVER the "down" one. And each time it is Alice's "down" photomultiplier that clicks, it is also Bob's "down" multiplier that clicks, never his "up" one.

THIS is what one means with "perfect correlation".

This can easily be explained if both photons/lightpulses are always either perfectly aligned with Bob's and Alice's analysers, or "anti-aligned". But any couple of photons that would be "in between", say at 45 degrees, will be hard to explain in a classical way: if each of them has a 50-50% chance to go up or down, at Alice or at Bob, why do they do *the same thing* on both sides?

Moreover, this happens (with the same source) for all angles, as long as Alice's and Bob's analyzers are parallel. So if the source was at first emitting only photon pairs perfectly aligned or anti-aligned with Alice's and Bob's angles both at, say, 20 degrees (so they are parallel), then it is hard to explain with classical optics that when both Alice and Bob turn their angles to, say, 45 degrees, they STILL find perfect correlation, no?
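(If it helps to see the point with numbers, here is a little toy simulation - just my own sketch of the two pictures, not of any real apparatus: a classical model where each pulse answers its own analyser independently, versus the perfect correlation one actually observes with parallel analysers.)

Code:
# Toy comparison (sketch only): independent classical responses vs. the
# observed perfect correlation, for PARALLEL analysers.
import math
import random

def classical_independent(theta_pulse_deg, theta_analyser_deg, n=100000):
    # Each pulse independently clicks "up" with P = cos^2 of the angle
    # between its polarisation and the analyser, as classical optics suggests.
    p_up = math.cos(math.radians(theta_pulse_deg - theta_analyser_deg)) ** 2
    same = sum((random.random() < p_up) == (random.random() < p_up)
               for _ in range(n))
    return same / n

print(classical_independent(45, 0))  # ~0.5: half the time Alice and Bob disagree
# The observed (and quantum-mechanical) result for parallel analysers is that
# they NEVER disagree: the fraction of "same" outcomes is 1.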
 
Last edited:
  • #73
RandallB said:
Of course you don't understand - you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles and expect that to represent "They do select them independently". We are not talking about pushing a button independently, we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a non-local site such as the other observer.

But we're not talking about angles here! I'm talking about a thought experiment with a box which has 3 buttons! No photons. No polarizers.

We just have a black box machine of which we don't know how it works, but we suppose that it complies with some general ideas (the famous Bell assumptions of locality etc...)
Just 3 buttons on each side, labeled A, B or C, an indicator that the experiment is ready, and a red and a green light.

And then the conditions of functioning, that each time Alice and Bob happen to push the same button, they ALWAYS find that the same light lights up. It never happens that Alice and Bob both push the button C, and at Alice the green light lights up, and at Bob, the red one.
And also the condition that over a long run, at Alice, for the cases where she pushed A, she got on average about 50% red and 50% green, in the cases where she pushed B, the same, and in cases where she pushed C, the same.

These are the elements GIVEN for a thought experiment. It's the description of a thinkable setup. I can build you one with a small processor and a few wires and buttons which does this, so it is not an impossible setup.
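(Here is the kind of program such a small processor could run - just one possible sketch, using predetermined shared answers; it is exactly this sort of mechanism whose consequences the derivation then examines:)

Code:
# Sketch of a black box satisfying the stated conditions: before each run,
# a hidden common answer list (one colour per button) is drawn; both sides
# simply read their copy out.
import random

def new_run():
    return {button: random.choice(["red", "green"]) for button in "ABC"}

def press(shared_answers, button):
    return shared_answers[button]

run = new_run()
print(press(run, "A"), press(run, "A"))  # same button: ALWAYS the same light
print(press(run, "B"), press(run, "C"))  # different buttons: may or may not agree
# Over many runs, each button taken separately gives about 50% red, 50% green.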

The question is, what can we derive as conditions for the list of events where Alice happened to push A and Bob happened to push B? And for the other list where Alice happened to push B and Bob happened to push C? Etc...

THIS is the derivation of Bell's theorem (or rather, of Bell's inequalities). He derives some conditions on those lists, given the setup and given the conditions.

IT IS A REASONING ON PAPER. So I'm NOT talking about any *experimental* observations in the lab. That's a different story.

Now, it is true of course that the "setup" here corresponds more or less to a setup of principle where there is a common "emitter of pairs of entangled particles", and then two experimental boxes where one can choose between 3 settings of angles (that's equivalent to pushing A, B or C), and get a binary output each time (red or green). It is this idealized setup which is quantified in a simple quantum-mechanical calculation, and which is (more or less well) approximated by real experiments. So these "experimental physics" issues are of course the inspiration for our reasoning. But I repeat: the reasoning presented here has a priori nothing to do with particles, angles, polarizers or anything: just with a black box setup which has certain properties, and of which we try to deduce other properties, under a number of assumptions of the workings of the black box.
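(For reference, the standard quantum-mechanical result for that idealised polarisation-entangled photon pair, with analyser angles theta_A and theta_B, is

$$P(\text{same outcome}) = \cos^2(\theta_A - \theta_B),$$

so perfect correlation whenever the analysers are parallel, and correlations at other angle differences which, for suitably chosen settings, violate the inequalities.)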

So I can answer your "red or blue" question: concerning GENUINE EXPERIMENTS, of course it is a good idea to try to bring the experiment closer to the ideal situation, which is still relatively far away. So yes, in as much as the proposed experiments are indeed improvements, it is a good idea to fund them. But that's a question of how NATURE behaves (does it deviate from quantum mechanics or not).

However, concerning the *formal reasoning*, no there is not much doubt. Quantum mechanics (as a theory) is definitely not compatible with the assumptions of Bell.

In about the same way as there is not much doubt that Pythagoras' theorem follows from the assumptions (axioms) of Euclidean geometry, whether or not "real space" is well described by that Euclidean model. So while spending money on an experiment that tests whether or not physical space follows the Euclidean prescription might be sensible, spending money to see whether Pythagoras' theorem (on paper) follows from the Euclidean axioms would, I think, be a waste. And there is no link between the two! It is not because the Euclidean axioms turn out not to be correct in real physical space that, suddenly, Pythagoras' proof from Euclid's axioms is wrong!
 
Last edited:
  • #74
JesseM said:
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)?

The nice thing about the proof presented earlier in this thread (which some here don't seem to understand) is that a priori, one even leaves in place the possibility of some random element in the generation of the results: the assumption of locality only means that the *probability* of having a cherry or a banana is determined, but not necessarily the outcome, and it FOLLOWS from the above requirement of perfect (anti-)correlation that the outcomes must be pre-determined.

I say this because sometimes a (senseless) objection to Bell's argument is that he *assumes* determinism. No assumption of determinism is necessary: it FOLLOWS from the perfect correlation that the probabilities must be 0 or 1 (once the common information is taken into account).
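(In symbols: write p_A(lambda) and p_B(lambda) for the probabilities of, say, "red" at Alice and at Bob given the common information lambda. Perfect correlation means

$$\int d\lambda\, \rho(\lambda)\Big[p_A(\lambda)\,p_B(\lambda) + \big(1-p_A(\lambda)\big)\big(1-p_B(\lambda)\big)\Big] = 1,$$

and since the bracket can never exceed 1, it must equal 1 for (almost) every lambda, which is only possible if p_A(lambda) = p_B(lambda) is 0 or 1. The determinism comes out as a conclusion, not as an assumption.)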
 
  • #75
DrChinese said:
Superdeterminism is sometimes offered as a "solution" to Bell's Theorem that restores locality and realism.

From a logical point of view it is a solution. No quotes needed.

The problem is that it replaces it with something which is infinitely worse - and makes no sense whatsoever.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

Superdeterminism is not really a theory so much as a concept: like God, its value lies in the eyes of the beholder.

Superdeterminism is nothing but the old, classical determinism with a requirement of logical consistency added.

As far as I know, no actual working theory has ever been put forth that passes even the simplest of tests.

This is true, but it says nothing about the possibility that such a theory might exist.
 
  • #76
ueit said:
From a logical point of view it is a solution. No quotes needed.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

Logically, it is true that there is no argument why superdeterminism should not hold in a deterministic theory. After all, everything in a deterministic frame is a function of the initial conditions, which can always be picked in exactly such a way as to obtain any correlation you want. It is on this kind of reasoning that astrology finds its ground.

However, as I pointed out already a few times, it is an empirical observation that things which don't seem to have a direct or indirect causal link happen to be statistically independent. This is the only way that we can "disentangle" cause-effect relationships (namely, by observing correlations between the "randomly" selected cause, and the observed effect). In other words, "coincidences" obey statistical laws.

It is sufficient that one single kind of phenomenon doesn't follow this rule, and as a consequence, no single cause-effect relationship can be deduced anymore. Simply because this single effect can always be included in the "selection chain" of any cause-effect relationship and hence "spoil" the statistical independence in that relationship.

So *if* superdeterminism is true, then it is simply amazing that we COULD deduce cause-effect relationships at all, in just any domain of scientific activity. Ok, this could be part of the superdeterminism too, but it would be even MORE conspiratorial: superdeterminism that mimics determinism. Call it "hyperdeterminism" :smile:

Now that I come to think of it, it could of course explain a lot of crazy things that happen in the world... :-p
 
  • #77
Having looked up where it was brought up, Super-determinism is not a special case of determinism at all and is actually a fairly simple fourth assumption.

Such a term shouldn't even be used to describe this possibility, it is actually a whole other assumption that is unrelated to the other 3. Perhaps the name is just a way to try and hide this fact.

The assumption being referred to is that there was not something in the past that both caused the person to choose the detection settings and caused the particles to behave the way they did.

The implications of that being the case are a little far-fetched, but other than that it is just plain old determinism. It is not the same thing as the objective reality assumption, since it could just be in this one case.
 
Last edited:
  • #78
krimianl99 said:
since it could just be in this one case.

No, not really. If there is an influence that STRONGLY CORRELATES just ANY technique that I use to make the choices at Bob, the choices at Alice, and the particles sent out, then this means that there are such correlations EVERYWHERE. As I said in another post, nothing stops me in principle from using the medicine/placebo selection in a medical test to determine at the same time the settings at Bob, and using the results of the medical tests on another set of ill people at Alice. If there have to be correlations between the choices of Alice and Bob in all cases, then also in THIS case, and hence between any medicine/placebo selection procedure on one hand, and medical tests on the other.

But this would mean that any correlation between the outcome of a medical test and whether or not a person received a treatment is never a proof of the medicine working, as I found such correlations already between two DIFFERENT sets of patients (namely those at Bob who get the medicine on one hand, and those at Alice who, whether they got better or not, determined Alice's choice).
 
  • #79
vanesch said:
If there is an influence that STRONGLY CORRELATES just ANY technique that I use to make the choices at Bob, the choices at Alice, and the particles sent out, then this means that there are such correlations EVERYWHERE.

How so? Just because you don't know what could cause such a correlation doesn't mean the correlation would always be there just because it is there when tested. Maybe an event that causes the particles to become entangled radiates a mind control wave to the experimenters, but only at times when the particles are about to be tested. It's not much more far-fetched than the whole thing to start with.

It's just a fourth assumption that has nothing to do with the others. Furthermore it illustrates my point about the differences between the limits of induction and people just making mistakes with deduction.

With more experiences, different points of view, and a lot of practice understanding the limits of induction and using them, the human race can definitely reduce uncertainty caused by the limits of induction.

But that is TOTALLY different than checking for errors in DEDUCTIVE reasoning. In one case you are checking for something similar to a typo, and in the other you are being totally paranoid that anything that you haven't already thought of could be going on.

In a real proof, all induction is limited to the premises. As long as you don't have a reason to doubt the premises, the proof holds. In so called "proof" by negation the whole thing is subject to the limits of induction.
 
Last edited:
  • #80
krimianl99 said:
In a real proof, all induction is limited to the premises. As long as you don't have a reason to doubt the premises, the proof holds. In so called "proof" by negation the whole thing is subject to the limits of induction.
What do you mean here? Would you deny that "proof by contradiction" is a deductive argument rather than an inductive one? It's often used in mathematics, for example (look at this proof that there is no largest prime number). And Bell's theorem can be understood as a purely theoretical argument to show that certain classes of mathematical laws cannot generate the same statistical predictions as the theory of QM.
 
  • #81
Ian Davis said:
True, but I find just as intriguing the question as to which way that same pulse of light is traveling within the constructed medium. The explanation that somehow the tail of the signal contains all the necessary information to construct the resulting complex wave observed, and the coincidence that the back wave visually intersects precisely as it does with the entering wave, without in any way interfering with the arriving wave, seems to me a lot less intuitive than that the pulse, on arriving at the front of the medium, travels with a velocity of ~ -2c to the other end, and then exits. The number 2 of all numbers also seems strange. Why not 1? It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does the tail of the incoming signal derive the strength to form such a strong back signal? Is the signal fractal in the sense that within some small part is the description of the whole? Questions I can't answer, not being a physicist, but still questions that trouble me with the standard explanations given, about it all being smoke and mirrors.

Likewise I find Feynman's suggestion that what in our world view is the spontaneous creation and destruction of positron-electron pairs is in reality electrons changing direction in time, as a consequence of absorbing/emitting a photon, both intriguing and rather appealing.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why despite appearances things like light, electrons and signals cannot move backwards in time. My primary interest is in the question of time itself. I'm not well equipped to understand answers to this question, but it seems to me that time is the one question most demanding and deserving of serious thought by physicists even if that thought produces no subsequent answers.

Re Feynman; the notion of going backwards in time is simply a metaphor. It turns out that the manipulations required to make a Dirac Hamiltonian with only positive energy eigenvalues are equivalent to having negative energy solutions travel backwards in time -- this is nothing more than turning (-E) into (+E) in the expression exp(iEt). If you go back and review first the old-fashioned perturbation theory, and then its successor, modern covariant field theory, you can see very clearly the origins of Feynman's metaphor. Among other things, you will see how the old-fashioned perturbation theory diagrams combine to produce the usual covariant Feynman diagrams of, say, the Compton effect. You will get a much better idea of what "backwards in time" brings to the table -- in my judgment the idea is a creative fiction, but a very powerful one.

QFT is a somewhat difficult subject. To get even a basic understanding you need to deal with both the technical as well as the conceptual aspects. I highly recommend Chapter I of Vol. I of Weinberg's Quantum Theory of Fields -- he gives a good summary of the basic stuff you need to know to start to understand QFT. Quite frankly, the physics community embraced Feynman's metaphor rather quickly -- along with the Schwinger and Tomonaga versions -- almost 100%, and it became part of "what everybody knows", as in tacit physics knowledge, as in no big deal. As a metaphor, Feynman's idea is brilliant and powerful; as a statement describing reality it is at least suspect.

Does the usual diagram of an RLC circuit mirror the physical processes of the circuit?

Regards,
Reilly Atkinson
 
Last edited:
  • #82
ueit said:
From a logical point of view it is a solution. No quotes needed.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

This is true, but it says nothing about the possibility that such a theory might exist.

There is no theory called superdeterminism which has anything to do with particle theory. There is an idea behind it, but no true theory called something like "superdeterministic quantum theory" exists. That is why quotes are needed. You cannot negate a theory which assumes that which it seeks to prove.

Note that superdeterminism is a totally ad hoc theory with no testable components. Adds nothing to our knowledge of particle behavior. And worse, if true, would require that every particle contain a complete history of the entire universe so it would be capable of matching the proper results for Bell tests - while remaining local.

In addition, there would need to be connections between forces - such as between the weak and the electromagnetic - that are heretofore unknown and not a part of the Standard Model. That is because superdeterminism would lead to all kinds of connections and would itself impose constraints.

Just as Everett's MWI required substantial work to be fleshed out into something that could be taken seriously, and Bohm's mechanics is still being worked on, the same would be required of a "superdeterministic" theory before it would really qualify as viable. I have yet to see a single paper published which seriously takes apart the idea of superdeterminism in a critical manner and builds a version which meets scientific rigors.

Here is a simple counter-example: The detectors of Alice and Bob are controlled by an algorithm based on radioactive decay of separate uranium samples. Thus, randomness introduced by the weak force (perhaps the time of decay) controls the selection of angle settings. According to superdeterminism, those separated radioactive samples actually independently contain the blueprint for the upcoming Bell test and work together (although locally) to ensure that what appears to be a random event is actually connected.

Please, don't make me laugh any harder. :)
 
  • #83
vanesch said:
Logically, it is true that there is no argument why superdeterminism should not hold in a deterministic theory. After all, everything in a deterministic frame is a function of the initial conditions, which can always be picked in exactly such a way as to obtain any correlation you want.

This is not what I have in mind. I don't see EPR explained by the initial conditions, but by a new law of physics that holds regardless of those parameters.

It is on this kind of reasoning that astrology finds its ground.

May be but that is not what I propose.

However, as I pointed out already a few times, it is an empirical observation that things which don't seem to have a direct or indirect causal link happen to be statistically independent.

1. In order to suspect a causal link you need a theory. In Bohm's theory the motion of one particle directly influences the motion of another no matter how far. The transactional interpretation has an absorber-emitter information exchange that goes backwards in time. None of these is obvious or intuitive. But we accept them (more or less) because the theory says so. Now, if a theory says that emission and absorption events share a common cause in the past then they "seem" causally related because the theory says it is so.

2. We are speaking about microscopic events. We have no direct empirical observation of this world, and the Heisenberg uncertainty introduces more limitations. So clearly you need a theory first, and then to decide what is causally related and what is not.

This is the only way that we can "disentangle" cause-effect relationships (namely, by observing correlations between the "randomly" selected cause, and the observed effect). In other words, "coincidences" obey statistical laws.

So you wouldn't believe that a certain star can produce a supernova explosion until you "randomly" select a star and start throwing matter in it, right?

It is sufficient that one single kind of phenomenon doesn't follow this rule, and as a consequence, no single cause-effect relationship can be deduced anymore.

I strongly disagree. This is like saying that if a non-local theory is true then we cannot do science anymore because our experiments might be influenced by whatever a dude in another galaxy is doing. All interpretations bring some strange element but this is not necessarily present in an obvious way at macroscopic level.

Simply because this single effect can always be included in the "selection chain" of any cause-effect relationship and hence "spoil" the statistical independence in that relationship.

So, please show me how the assumption that any emitter-absorber pair has a common "ancestor" "spoils" the statistical independence in a medical test. I think you will need to also assume that a patient has all the emitters and the medic all the absorbers (or at least most of them) to deduce such a thing. But maybe you have some other proof in mind.

So *if* superdeterminism is true, then it is simply amazing that we COULD deduce cause-effect relationships at all, in just any domain of scientific activity. Ok, this could be part of the superdeterminism too, but it would be even MORE conspiratorial: superdeterminism that mimics determinism. Call it "hyperdeterminism" :smile:

I think you are using a double standard here. All interpretations have this kind of conspiracy. We have a non-deterministic theory that mimics determinism, a non-local theory that mimics a local one, and a multiverse theory that mimics a single, 4D-universe theory. Also there is basically no difference between determinism and superdeterminism except the fact that the first one can be proven to be logically inconsistent.

I think that the main error in your reasoning comes from a huge extrapolation from the microscopic to the classical domain. You may have statistical independence at the macroscopic level in a superdeterministic theory just like you can have a local universe based on a non-local fundamental theory.
 
  • #84
ueit said:
1. In order to suspect a causal link you need a theory. In Bohm's theory the motion of one particle directly influences the motion of another no matter how far. The transactional interpretation has an absorber-emitter information exchange that goes backwards in time. None of these is obvious or intuitive. But we accept them (more or less) because the theory says so. Now, if a theory says that emission and absorption events share a common cause in the past then they "seem" causally related because the theory says it is so.

Yes, and it is about that class of theories that Bell's inequalities tell us something.

2. We are speaking about microscopic events.

No, we are talking about black boxes with choices by experimenters, binary results, and the correlations between those binary results as a function of the choices of the experimenter. Bell's inequalities are NOT about photons, particles or anything specific. They are about the link that can exist between *choices of observers* on one hand, and *correlations of binary events* on the other hand.

We have no direct empirical observation about this world and Heisenberg uncertainty introduces more limitations. So clearly you need a theory first and then to decide what is causaly related and what it is not.

We consider a *class* of theories: namely those that are local, in which we do not consider superdeterminism of the kind where a distant choice by a distant observer can have a statistical correlation with a choice of a local observer and with a possible "central source" (given locality, this can then only happen through "special initial conditions"), and in which we consider that there are genuine binary outcomes each time. We consider now that whatever theory describes the functioning of our black box experiment, it is part of this class of theories. Well, if that's the case, then there are relations between certain correlations one can observe that way. The particular relation that interests us here is the one where it is given that for identical choices (Alice A and Bob A, for instance), the correlation is complete.
It then turns out that one has conditions on the OTHER correlations.

I strongly disagree. This is like saying that if a non-local theory is true then we cannot do science anymore because our experiments might be influenced by whatever a dude in another galaxy is doing.

But this is TRUE! The only way in which Newtonian gravity gets out of this is that influences diminish with distance. If gravity didn't fall off as 1/r^2, but went, say, as ln(r), it would be totally impossible to deduce the equivalent of Newton's laws, ever!


I think that the main error in your reasoning comes from a huge extrapolation from microscopic to classical domain. You may have statistical independence at macroscopic level in a superdeterministic theory just like you can have a local universe based on a non-local fundamental theory.

Of course! The only thing Bell is telling us, is that given the quantum-mechanical predictions, it will not be possible to do this with a non-superdeterministic, local, etc... theory. That's ALL.
 
  • #85
krimianl99 said:
How so? Just because you don't know what could cause such a correlation doesn't mean the correlation would always be there just because it is there when tested. Maybe an event that causes the particles to become entangled radiates a mind control wave to the experimenters, but only at times when the particles are about to be tested. It's not much more far-fetched than the whole thing to start with.

Well, indeed, we make the assumption that no such thing happens. That's the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.
 
  • #86
DrChinese said:
Note that superdeterminism is a totally ad hoc theory with no testable components. Adds nothing to our knowledge of particle behavior.

Rather like the "theory" of intelligent design in biology.
 
  • #87
vanesch said:
Well, indeed, we make the assumption that no such thing happens. That's the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.

Vanesch, that's nice of you to say. (And I mean that in a good way.)

But really, I don't see that "no superdeterminism" is a true assumption any more than it is to assume that "the hand of God" does not personally intervene to hide the true nature of the universe each and every time we perform an experiment. There must be a zillion similar assumptions that could be pulled out of the woodwork ("the universe is really only 10 minutes old, prior history is an illusion"). They are basically all "ad hoc", and in effect, anti-science.

In no way does the presence or absence of this assumption change anything. Anyone who wants to believe in superdeterminism can, and they will still not have made one iota of change to orthodox quantum theory. The results are still as predicted by QT, and are different than would have been expected by EPR. So the conclusion that "local realism holds" ends up being a Pyrrhic victory (I hope I spelled that right).
 
  • #88
DrChinese said:
Vanesch, that's nice of you to say. (And I mean that in a good way.)
But really, I don't see that "no superdeterminism" is a true assumption any more than it is to assume that "the hand of God" does not personally intervene to hide the true nature of the universe each and every time we perform an experiment. There must be a zillion similar assumptions that could be pulled out of the woodwork ("the universe is really only 10 minutes old, prior history is an illusion"). They are basically all "ad hoc", and in effect, anti-science.

Well, the aim of what I wanted to show in this thread is that there is a logical conclusion that one can draw from a certain number of assumptions (and the meta-assumption that logic and so on hold, of course). Whether these assumptions are "reasonable", "evident" or whatever doesn't change the fact that they are, or are not, necessary in the logical deduction. And one needs the assumption of no superdeterminism in two instances:
1) When one writes that the residual probability of the couple, say, (red,red) is the product of the probability to have red at Alice (taking into account the common information) and the probability to have red at Bob: in other words, the statistical independence of these two unrelated events
2) When one assumes that the distribution of lambda itself is statistically independent of the residual probabilities (we can weight D over this probability) and of the choices at Alice and Bob (so it is not a function of X and Y).
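(Written out, with X and Y the choices at Alice and Bob, a and b the outcomes, and lambda the common information, these two uses are

$$P(a,b \mid X,Y,\lambda) = P_A(a \mid X,\lambda)\,P_B(b \mid Y,\lambda) \qquad \text{and} \qquad \rho(\lambda \mid X,Y) = \rho(\lambda):$$

the first expresses the statistical independence of the two residual outcomes, the second that the distribution of lambda does not depend on the choices.)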

This is like showing Pythagoras' theorem: it is not because you might find the fifth axiom of Euclid "so evident as not to be a true assumption", that you don't need it in the logical proof!
 
Last edited:
  • #89
vanesch said:
... the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.
Could you put this in observable terms? Something like, the assumption is made (in the formulation of Bell inequalities) that there is no connection between a pair of polarizer settings and the paired detection attributes associated with those polarizer settings?

I feel like I'm getting farther and farther away from clarifying this stuff for myself, and I still have to respond to some replies to my queries by you and Jesse. And thanks, by the way.

Anyway, the experimental results seem to make very clear the (quantitative) relationship between joint polarizer settings and associated joint detection attributes. The theoretical approach and test preparation methods inspired by quantum theory yield very close agreement between the qm predictions and the results. This quantum theoretical approach assumes a common cause, and involves the instrumental analysis-filtering of (assumed) like physical entities by like instrumental analyzers-filters, and timer-controlled pairing techniques.

We know that there is a predictable quantitative relationship between joint polarizer settings and pairs of appropriately associated joint detection attributes. And so, it's assumed that there is a qualitative relationship also. This is the basis for the assumption of common cause and common filtration of common properties.

The experimental violation of Bell inequalities has shown that the assumptions of common cause and common filtration of common properties can't be true if one uses a certain sort of predictive formulation wherein one also assumes that events at A and B (for any given set of paired detection attributes) are independent of each other, so that the probability of coincidental detections would be the product of the separate probabilities at A and B. Of course, the experimental design(s) necessary to produce entanglement preclude such independence -- and the quantum mechanical predictive formulation, together with the first two (common cause) assumptions and considered in light of the experimental results, supports the common cause assumption(s) (and therefore similar or identical disturbances moving from emitter to filter during any given coincidence interval).
 
Last edited:
  • #90
Originally Posted by ThomasT
I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference.

vanesch said:
Hum, no offense, but I think you totally misunderstood the EPR-Bell type experiments.
No offense taken. I realize that I can be a bit, er, dense at times. I'm here to learn, to pass on anything that I have learned and think is ok, and especially to put out here for criticism any insights that I think I might have. I very much appreciate you mentors and advisors, etc., taking the time to explain things.
vanesch said:
There's a common source, two opposite arms (or optical fibers or anything) and two experimental setups: Alice's and Bob's.
Each experimental setup can be seen to consist of a polarizing beam splitter which splits the incoming light into an "up" part and a "down" part, and to each of these two channels, there's a photomultiplier. The angle of the polarizing beam splitter can be rotated.

Now, if a photon/lightpulse/... comes in which is "up" wrt the orientation of the beamsplitter, then that photon is (ideally) going to make the "up" photomultiplier click, and not the down one. If the photon is in the "down" direction, then it is going to make the down photomultiplier click and not the up one. If the photon is polarized at 45 degrees, then it will randomly either make the up one click and not the down one, or make the down one click and not the up one.

So, if a photon is detected, in any case one of the two photomultipliers at Alice will click. Never both. That's verified. But sometimes neither does, because of finite efficiency.

At Bob, we have the same.

Now, we look only at those pulses which are detected both at Alice and Bob: if at Alice something clicks, but not at Bob, we reject it, and also vice versa. This is an item which receives some criticism, but it is due to the finite efficiency of the photomultipliers.

However, what one notices is that if both Alice's and Bob's analyzers are parallel, then EACH TIME there is a click at Alice and at Bob, it is the SAME photomultiplier that clicks on both sides. That is, each time that Alice's "up" photomultiplier clicks, it is also Bob's "up" multiplier that clicks, NEVER the "down" one. And each time it is Alice's "down" photomultiplier that clicks, it is also Bob's "down" multiplier that clicks, never his "up" one.

THIS is what one means with "perfect correlation".
Ok. I understand what you're talking about wrt perfect correlation now. This is only applicable when the analyzers are aligned. And, in this case we're correlating 'up' clicks at A with 'up' clicks at B and 'down' clicks at A with 'down' clicks at B.

vanesch said:
This can easily be explained if both photons/lightpulses are always either perfectly aligned with Bob's and Alice's analysers, or "anti-aligned". But any couple of photons that would be "in between", say at 45 degrees, will be hard to explain in a classical way: if each of them has a 50-50% chance to go up or down, at Alice or at Bob, why do they do *the same thing* on both sides?
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get. Quantum mechanics gets around the problem of disturbance-filter angular relationship by not saying anything about specific emission angles. It just says that if the optical disturbances are emitted in opposite directions, then the analyzers will be dealing with essentially the same thing(s). And classical optics tells us that if the light between analyzer A and analyzer B is of a sort, then the results at detector A and detector B will be the same for any given set of paired detection attributes: if there's a detection at A then there will be a detection at B, and if there's no detection at A, then there will be no detection at B.

vanesch said:
Moreover, this happens (with the same source) for all angles, as long as Alice's and Bob's analyzers are parallel. So if the source was at first emitting only photon pairs perfectly aligned or anti-aligned with Alice's and Bob's angles both at, say, 20 degrees (so they are parallel), then it is hard to explain with classical optics that when both Alice and Bob turn their angles to, say, 45 degrees, they STILL find perfect correlation, no?
We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.

I think that one can get an intuitive feel for why the quantum mechanical predictions work by viewing them from the perspective of the applicable classical optics laws and experiments. Don't you think so?
 
  • #91
ThomasT said:
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get.

Uh, no, not at all! That's the whole point. If you send identical but independent light pulses to an analyser, it will *randomly* click "up" or "down", but the probabilities depend on the precise orientation between the analyser and the polarisation of the pulses.

In other words, imagine that two identical pulses arrive, one after the other, on the same analyser. You wouldn't expect this analyser to click twice "up" or twice "down" in a row, right ? You would expect the responses to be statistically independent. Well, the same for two identical pulses sent out to two different (but identical) analysers.
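(Putting numbers on it: if both pulses are polarised at angle theta and both analysers are set at angle phi, independent classical responses give

$$P(\text{same outcome}) = \cos^4(\theta - \phi) + \sin^4(\theta - \phi),$$

which equals 1 only when theta - phi is a multiple of 90 degrees, and is only 1/2 at 45 degrees. So "same pulse polarisation, same analyser angle" is not, by itself, a classical explanation of perfect correlation at *every* angle.)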

We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.

If you take the classical description, then you KNOW what the two pulses are going to do, no ?
 
  • #92
Originally Posted by ThomasT
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get.
vanesch said:
Uh, no, not at all! That's the whole point. If you send identical but independent light pulses to an analyser, it will *randomly* click "up" or "down", but the probabilities depend on the precise orientation between the analyser and the polarisation of the pulses.

In other words, imagine that two identical pulses arrive, one after the other, on the same analyser. You wouldn't expect this analyser to click twice "up" or twice "down" in a row, right ? You would expect the responses to be statistically independent. Well, the same for two identical pulses sent out to two different (but identical) analysers.
If you're talking about a setup where you have a polarizer between the emitter and the analyzing polarizer, then ok. However, in that setup, the opposite-moving pulses wouldn't be considered identical following their initial polarization in the same sense that they can be considered identical if they remain unaltered until they hit their respective analyzing polarizers. I'm considering the quantum EPR-Bell type setups (eg. the Aspect et al experiment using time-varying analyzers, 1982).

For use as an analogy to the EPR-Bell experiments (at least the simplest optical ones) I'm thinking of a polariscopic type setup. It's the only way to be sure that you've got the same optical disturbance incident on (extending between) both polarizers during a certain coincidence interval. I'm trying to understand, among other things, why Heisenberg alludes so frequently to classical optics in various writings on quantum theory. It would seem to give a basis for the so called projection postulate among other things. I mean Heisenberg, Schroedinger, Dirac, Born, Bohr, Pauli, Jordan, etc. didn't just snatch stuff out of thin air. They had reasons for adopting the methods they did, and if something worked then it was retained in the theory. Of course, sometimes their reasons aren't so clear to mortals such as myself. :smile: Heisenberg's explanations are particularly hard for me to understand sometimes. I'm not sure how much of this is due to his style of expression, and the fact that I don't know much German and must rely on translations.

Anyway, I understand that one can't quantitatively reproduce the results of EPR-Bell tests using strictly classical methods. Quantitative predictability isn't the sort of understanding that I'm aiming at here. Quantum theory already gives me that.

Returning to my analogy (banal though it might be), a simple EPR-Bell optical setup might look like this:

detector A <--- polarizer A <--- emitter ------> polarizer B ---> detector B

And a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector

I'll get to the details of my analogy in a future post (if the connection doesn't immediately jump out at you :smile:). It provides a means of understanding that nonlocality (in the spacetime sense of the word) is not necessary to explain the results of EPR-Bell tests.

Originally Posted by ThomasT
We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.
vanesch said:
If you take the classical description, then you KNOW what the two pulses are going to do, no ?

Sorry that I didn't state this clearly at first. I'm not looking for a classical description per se. I don't think that's possible. Quantum theory is necessary.

I'm looking more for the classical basis for certain aspects of quantum theory, because, as far as I can tell, the meaning of Bell's theorem is that we can't, in a manner of speaking, count our chickens before they're hatched (sometimes called, most confusingly I think, Bell's realism assumption). Which is one important reason why quantum theoretical methods (eg. superposition of states) are necessary.

The realistic or hidden variable approach is actually, in contrast with quantum theory, the metaphysical speculation approach. Which so far has turned out to be not ok when applied to quantum experimental results.
 
  • #93
JesseM said:
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
Can I paraphrase the above as:

Superdeterminism says that, with regard to, say, the 1982 Aspect et al experiment that used time-varying analyzers, variables associated with photon production and variables associated with detector setting selection are statistically dependent (ie., strongly correlated).

But how is this different from regular old garden variety determinism?

And, of course, this is true. Isn't it? A photon pair (a coincidental detection) is produced during a certain time interval. This (temporal proximity) is how they're chosen to be paired. Even though the settings of the analyzing polarizers are varying perhaps several times during the photon production interval (and while the optical disturbances are en route from emitter to polarizer), there's one and only one setting associated with each photon of the pair (which is determined by temporal proximity to the detection event).
 
Last edited:
  • #94
ThomasT said:
But how is this different from regular old garden variety determinism?
It would be a bizarre form of determinism where nature would have to "predict" the future choices of the experimenters (which presumably would depend on a vast number of factors in the past light cone of their brain state at the moment they make the choice, including things like what they had for lunch) at the moment the photons are created, and select their properties accordingly.
ThomasT said:
And, of course, this is true. Isn't it? A photon pair (a coincidental detection) is produced during a certain time interval.
Wait, are you equating the detection of the photons with their being "produced"? The idea of a local hidden variables theory is to explain the fact that the photons always give the same results when identical detector settings are used by postulating that the photons are assigned identical predetermined answers for each possible detector setting at the moment they are emitted from the source--nature can't wait until they are actually detected to assign them their predetermined answers, because there'd be no way to make sure they get the same answers without FTL being involved (as there is a spacelike separation between the two detection-events). So if you define superdeterminism as:
Superdeterminism says that, with regard to, say, the 1982 Aspect et al experiment that used time-varying analyzers, variables associated with photon production and variables associated with detector setting selection are statistically dependent (ie., strongly correlated).
...this can only be the correct definition if "photon production" refers to the moment the two photons were created/emitted from a common location, not the moment they were detected. Nature must have assigned them predetermined (and identical) answers for the results they'd give on each detector setting at that moment (there is simply no alternative under local realism--do you understand the logic?), and superdeterminism says that when assigning them their answers, nature acts as if it "knows in advance" what combination of detector settings the two experimenters will later use, so if we look at the subset of trials where the experimenters went on to choose identical settings, the statistical distribution of preassigned answers would be different in this subset than in the subset of the trials where the experimenters went on to choose different settings.
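(In symbols, if that helps: with lambda standing for the pair of preassigned answer lists, and X and Y for the detector settings the experimenters later choose, the "no superdeterminism" assumption is just

$$\rho(\lambda \mid X, Y) = \rho(\lambda),$$

i.e. the distribution of preassigned answers does not depend on the later setting choices; superdeterminism is the claim that this equality fails.)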
 
  • #95
ThomasT said:
Anyway, I understand that one can't quantitatively reproduce the results of EPR-Bell tests using strictly classical methods. Quantitative predictability isn't the sort of understanding that I'm aiming at here. Quantum theory already gives me that.

Returning to my analogy (banal though it might be), a simple EPR-Bell optical setup might look like this:

detector A <--- polarizer A <--- emitter ------> polarizer B ---> detector B

And a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector

Yes, THIS setup will give you the same results as the EPR setup. But, you realize that we are doing here the measurements on the SAME light pulse, while in the EPR setup, there are two SEPARATE pulses, right? And that in the second setup there's no surprise that polarizer A will have an influence on the light pulse that will be incident on polarizer B, given that it passed through A.

However, in the EPR setup, we are talking about 2 different light pulses, and the light pulse that went to B has never seen setup A.

edit:
So this is a bit as if, when someone would demonstrate the use of a faster-than-light telephone, where he talks to someone on Alpha Centauri and gets immediately an answer, you would say that there is nothing surprising about that, because you can think of a similar setup, where you have a telephone to the room next door, and it functions in the same way :smile:
 
Last edited:
  • #96
Originally Posted by ThomasT
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.

Originally Posted by JesseM
But that's the whole idea that Bell's theorem intends to refute. Bell starts by imagining that the perfect correlation when both experimenters use the same detector setting is due to a common cause--each particle is created with a predetermined answer to what result it will give on any possible angle, and they are always created in such a way that they are predetermined to give opposite answers on each possible angle. But if you do make this assumption, it leads you to certain conclusions about what statistics you'll get when the experimenters choose different detector settings, and these conclusions are violated in QM.

Originally Posted by vanesch
But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.


---------------------
Ok, let's forget about super-duper-quasi-hyper-whatever determinism for the moment. The first statement in this post isn't refuted by any formal treatment (Bell or other, involving choices-settings and binary results), and is the situation that the quantum mechanical treatment assumes in dealing with (at least certain sorts of) entangled pairs.

I've tried to show how this can be visualized by using the analogy of a polariscopic setup to the simplest optical Bell-EPR setup.

Viewed from this perspective, there's no mystery at all concerning why the functional relationship between angular difference and coincidence rate is the same as Malus' Law.

The essential lesson I take from experimental violations of Bell inequalities is that physics is a long way from understanding the deep nature of light -- but the general physical bases of quantum entanglement can be understood (in a qualitative, not just quantitative, sense) now.

To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
 
Last edited:
  • #97
ThomasT said:
The first statement in this post isn't refuted by any formal treatment (Bell or other, involving choices-settings and binary results)
If you're talking about the statement "I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing", that absolutely is refuted by Bell--do you still not understand that Bell started by assuming the very sort of "common cause" you're talking about, and proved that under local realism, it could not possibly explain the correlations seen by QM?

Did you ever look over the example involving scratch lotto cards I gave in post #68 on this thread? If so, do you see that the whole point of the example was to try to explain the perfect correlations in terms of a common cause--a source manufacturing cards so that the fruit behind each possible square on two matched cards would always be opposite? Do you see that this assumption naturally leads to the conclusion that when the experimenters pick different boxes to uncover, they will get opposite results at least 1/3 of the time? I gave a slight variation on this proof with an example involving polarized light in post #22 of this thread if you'd find that helpful. It would also be helpful to me if you answered the question I asked at the end of the post with the scratch lotto card example:
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)? And if you agree with this much, do you agree or disagree with the conclusion that if you predetermine the answers in this way, this will necessarily mean that when they pick different boxes to scratch they must get opposite fruits at least 1/3 of the time?
ThomasT said:
Viewed from this perspective, there's no mystery at all concerning why the functional relationship between angular difference and coincidence rate is the same as Malus' Law.
Of course there is. Malus' law doesn't talk about the probability of a yes/no answer, it talks about the reduction in intensity of light getting through a polarizer--to turn it into a yes/no question you'd need something like a light that would go on only if the light coming through the polarizer was above a certain threshold of intensity (like in the example from post #22 of the other thread I mentioned above), or which had a probability of turning on based on the intensity that made it through the polarizer (which could ensure that if the wave is polarized at angle theta and the polarizer is set to angle phi, then the probability the light would go on would be cos^2[theta - phi]). But even if you did this, there'd be no possible choice of the waves' angle theta such that, if two experimenters at different locations set their two polarizer angles to phi and xi and measured two waves with identical polarization angle theta, the probability of both getting the same yes/no answer would be equal to cos^2[phi - xi]; Bell's theorem proves that it's impossible to reproduce this quantum relationship (which is not Malus' law) under local realism. If you don't see why, I'd ask again that you review the lotto card analogy and tell me if you agree that the probabilistic claim about getting correlated results at least 1/3 of the time when the two people pick different boxes to scratch should be guaranteed to hold under local realism; if you agree in that example but don't see how it extends to the case of polarized waves, I can elaborate on that point if you wish.
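(If it helps, here is a quick brute-force check of that "at least 1/3" claim - just a sketch that enumerates every possible set of predetermined answers on Alice's card, with Bob's card forced to be the opposite:)

Code:
# Brute-force check of the lotto-card bound: for ANY predetermined answers
# (with Bob's card the exact opposite of Alice's), scratching DIFFERENT
# boxes yields opposite fruits at least 1/3 of the time.
from itertools import product

worst = 1.0
for alice in product("+-", repeat=3):                     # answers for boxes A, B, C
    bob = tuple("-" if s == "+" else "+" for s in alice)  # the opposite card
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    frac_opposite = sum(alice[i] != bob[j] for i, j in pairs) / len(pairs)
    worst = min(worst, frac_opposite)

print(worst)  # 0.333... : no assignment of predetermined answers does better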
ThomasT said:
To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
It contradicts the idea that any sort of "common cause" explanation which is consistent with local realism can match the results predicted by QM.
 
  • #98
ThomasT said:
ThomasT said:
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
Ok, let's forget about super-duper-quasi-hyper-whatever determinism for the moment. The first statement in this post isn't refuted by any formal treatment (Bell or other, …….

I've tried to show how this can be visualized by using the analogy of a polariscopic setup to the simplest optical Bell-EPR setup. ... Viewed from this perspective, there's no mystery at all ……

To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
No, the “common emission properties” defined in Local and Realistic (LR) terms are exactly what EPR-Bell experiments are intended to search for. And observations so far, as applied to the problem, have yet to reveal any LR means of explaining the Bell inequality violations.

I suspect you have been plowing through ideas like “Superdeterminism” and “Local Vs. Non-local” without really understanding the EPR-Bell issues. Example: I don’t think anyone knows what you mean by “polariscopic” where you say:
a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector
Someone should have told you that you cannot send a photon through a second polarizer as the first one completely randomizes the polarization to a new alignment. No useful information can be gained from using a second 'polarizer B'.

Strongly recommend you review the Bell notes like those at http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm .
Focus on figure 3 there, and on explaining the inequality line, especially how the LR approach has yet to resolve the measurements at 22.5 and 67.5 degrees, before claiming you know what “Bell's theorem doesn't contradict”.
Don’t bother with the “easy math A, B, C approach”; stick with the material based on real experiments.
 
  • #99
JesseM said:
If you're talking about the statement "I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing", that absolutely is refuted by Bell--do you still not understand that Bell started by assuming the very sort of "common cause" you're talking about, and proved that under local realism, it could not possibly explain the correlations seen by QM?

It [Bell's theorem] contradicts the idea that any sort of "common cause" explanation which is consistent with local realism can match the results predicted by QM.

Right, under the assumptions of (1) statistical independence and (2) the validity of extant attempts at a description of the reality underlying the instrumental preparations, then "it could not possibly explain the correlations seen by QM".

Those assumptions are wrong, that's all. Isn't that what we've been talking about? For a Bell inequality to be violated experimentally, one or more of the assumptions involved in its formulation must be incorrect.

I don't think that analyzing this stuff with analogies like washing socks, or lotto cards, etc., though I appreciate your efforts, will provide any insight into what's happening in optical Bell experiments. The light doesn't care about probabilities of yes/no answers. The light doesn't care how Bertlmann washed his socks. The light in optical Bell experiments is behaving much as it behaves in an ordinary polariscopic setup. The question is, why.

It isn't known what's happening at the level of the light-polarizer interaction. QM assumes only that polarizers A and B are analyzing the same optical disturbance for a given coincidence interval. Along with this goes the assumption of a common emitter for any given pair. (In the Aspect et al. 1982 experiment using time-varying analyzers the emitters were calcium atoms. Much care was taken in the experimental preparation to ensure that paired photons corresponded to optical disturbances emitted from the same atom.)
 
  • #100
RandallB said:
Someone should have told you that you cannot send a photon through a second polarizer as the first one completely randomizes the polarization to a new alignment. No useful information can be gained from using a second 'polarizer B'.
The first polarizer is adjusted to transmit the maximum intensity. Varying the angle between polarizer A and polarizer B, and then measuring the intensity of the light after transmission (or maybe not) by polarizer B, results in a cos^2 angular dependence. This is how Malus' Law was discovered a few hundred years ago, and it is, in my view, strikingly similar to what's happening in simple A-B optical Bell tests.
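For reference, the polariscope behaviour being described here is just Malus' law; a minimal sketch of the intensity measured after polarizer B as it is rotated relative to A (illustrative values only) might look like this:

```python
import numpy as np

I0 = 1.0   # intensity transmitted by polarizer A (A set for maximum transmission)
for deg in (0, 22.5, 45, 67.5, 90):
    delta = np.radians(deg)                        # angle between polarizers A and B
    print(deg, round(I0 * np.cos(delta) ** 2, 3))  # Malus' law: I = I0 * cos^2(delta)
```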
 
  • #101
ThomasT said:
Right, under the assumptions of (1) statistical independence and (2) the validity of extant attempts at a description of the reality underlying the instrumental preparations, then "it could not possibly explain the correlations seen by QM".
(1) Only if by "statistical independence" you are referring to the idea that at the moment the particles are created, whatever properties they are assigned (the 'common cause' which makes sure they both later give the same answers when measured on the same angle) are statistically independent of the future choices of the experimenters about what angle to set their polarizers--the source does not have a "crystal ball" to see into the future behavior of the experimenters as in superdeterminism. No other assumptions of statistical independence are being made here.

(2) Can you be more specific about what you mean by "extant attempts at a description of the reality underlying the instrumental preparations"? The only reality assumed by Bell was local realism, and the fact that each particle must, when created, have been given predetermined answers to what response they'd give to each possible detector angle, with the predetermined answers being the same for each particle (the common cause). The second follows from the first, as there is no other way to explain how particles could always give the same response to the same detector angle besides predetermined answers, if you rule out FTL conspiracies between the particles.
ThomasT said:
I don't think that analyzing this stuff with analogies like washing socks, or lotto cards, etc., though I appreciate your efforts, will provide any insight into what's happening in optical Bell experiments.
This seems like a knee-jerk reaction you haven't put any thought into. After all, if the light has predetermined answers to what response it will give to each of the three possible polarizer settings (as must be true under local realism--do you disagree?), then this is very much like the case where each card has a predetermined hidden fruit under each square. Where, specifically, do you think the analogy breaks down?
ThomasT said:
The light in optical Bell experiments is behaving much as it behaves in an ordinary polariscopic setup.
No it isn't. In an ordinary polariscopic setup it is impossible to set things up so that each of two experimenters always gets a yes/no answer on each trial, each picks between three possible detector settings, when they pick the same detector setting they always get the same answer, but when they pick different detector settings they get the same answer less than 1/3 of the time. And yet this is what QM predicts can happen for entangled photons if the three polarizer angles are 0, 60, and 120.
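For the specific angles quoted, a quick check, assuming the standard quantum rule that the same-answer probability for these entangled photons is cos^2 of the angle difference:

```python
import numpy as np

settings = [0, 60, 120]   # the three polarizer angles, in degrees
for i in range(3):
    for j in range(i + 1, 3):
        diff = np.radians(settings[i] - settings[j])
        print(settings[i], settings[j], round(np.cos(diff) ** 2, 3))
# every pair of *different* settings gives 0.25, below the local-realist bound of 1/3
```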
ThomasT said:
It isn't known what's happening at the level of the light-polarizer interaction. QM assumes only that polarizers A and B are analyzing the same optical disturbance for a given coincidence interval.
What are you talking about? Bell didn't make any assumptions about "optical disturbances", he just pointed out that under local realism, if experimenters always get the same answer when they choose the same detector setting and both make their measurements at the same time, that must be because the "things" (you don't have to assume they're 'optical disturbances' or anything so specific) that are measured by each detector must have been assigned identical predetermined answers to what result they'd give for each detector setting at some point when they were in the same local region of space. Do you think it is possible for local realism to be true yet this second assumption to be false? If so, then you haven't thought things through carefully enough, but I can explain why this assumption follows necessarily from local realism if you wish.
 
  • #102
ThomasT said:
This is how Malus' Law was discovered a few hundred years ago, and it is, in my view, strikingly similar to what's happening in simple A-B optical Bell tests.
Well yah duh,
But saying it is “strikingly similar … to Bell” only tells me you do not understand what Bell was looking for, let alone the point the observations apparently make.
Take some time to read Bell and understand what the straight Bell inequality line from 100% to 0% is,
and how QM predictions and actual observations produce a “violation” by going above and below that line with a sine-shaped curve across the same range.

And finally, how local theories can only predict the same sine-wave shape but limited to a range of 75% to 25%, which keeps the curve inside (on the 50% side of) the Bell inequality line. The point is that a true local theory has yet to explain how to match the observations as non-local ones can.
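One reading of that comparison (the straight inequality line running from 100% at 0 degrees to 0% at 90 degrees, the QM cos^2 curve, and a Malus-style local model confined to the 25%-75% band) can be sketched at the two angles mentioned earlier, 22.5 and 67.5 degrees; this is an interpretation of the cited figure, not a quote from it:

```python
import numpy as np

for deg in (22.5, 67.5):
    d = np.radians(deg)
    line = 1 - deg / 90                     # straight Bell-inequality reference line
    qm = np.cos(d) ** 2                     # quantum prediction for matching results
    local = 0.25 + 0.5 * np.cos(d) ** 2     # toy Malus-style local model, range 0.25-0.75
    print(deg, round(line, 3), round(qm, 3), round(local, 3))
# QM sits above the line at 22.5 degrees and below it at 67.5 degrees,
# while the toy local curve stays on the 50% side of the line at both angles
```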

You need to bring yourself up to speed on the real experiments and these issues or you will never be able to keep up with the discussions. Certainly, before claiming you know what “Bell's theorem doesn't contradict”, you have a lot more questions to ask and understanding to gain.
Otherwise you will only generate pointless arguments here.
 
  • #103
JesseM said:
(1) Only if by "statistical independence" you are referring to the idea that at the moment the particles are created, whatever properties they are assigned (the 'common cause' which makes sure they both later give the same answers when measured on the same angle) are statistically independent of the future choices of the experimenters about what angle to set their polarizers--the source does not have a "crystal ball" to see into the future behavior of the experimenters as in superdeterminism. No other assumptions of statistical independence are being made here.

The assumption of statistical independence in Bell's formulation has to do with setting the probability of coincidental detection equal to the product of the probability of detection at A and the probability of detection at B: P(A,B) = P(A) P(B).

Factorability of the joint probability has been taken to represent locality. But, it doesn't represent locality. It represents statistical independence between events (probability of detection) at A and events (probability of detection) at B during any given coincidence interval.

That there is no such statistical independence is evident, and is dictated by the experimental design(s) necessary to test Bell's theorem (i.e., necessary to produce entangled pairs).

So, as far as I can tell, Bell didn't make a locality assumption per se.


JesseM said:
(2) Can you be more specific about what you mean by "extant attempts at a description of the reality underlying the instrumental preparations"? The only reality assumed by Bell was local realism, and the fact that each particle must, when created, have been given predetermined answers to what response they'd give to each possible detector angle, with the predetermined answers being the same for each particle (the common cause). The second follows from the first, as there is no other way to explain how particles could always give the same response to the same detector angle besides predetermined answers, if you rule out FTL conspiracies between the particles.

We know that attempts, to date, to describe the light-polarizer interactions in realistic mathematical expressions have not matched the qm predictions for all settings.

It's interesting that if we just don't say anything realistic about these interactions (except that the polarizers are interacting with the same disturbance during a given coincidence interval), then, vis-à-vis quantum theory, the probability of joint detection can be fairly accurately calculated.

Apparently whatever is being emitted and is incident on the polarizers is not behaving according to classical polarization theories.


JesseM said:
This seems like a knee-jerk reaction you haven't put any thought into. After all, if the light has predetermined answers to what response it will give to each of the three possible polarizer settings (as must be true under local realism--do you disagree?), then this is very much like the case where each card has a predetermined hidden fruit under each square. Where, specifically, do you think the analogy breaks down?
It just seems to me like an unproductive way to think about this stuff. After all, people have been mulling over Bell's theorem for half a century with no agreement as to its meaning. Why not try a different perspective?

Seeing the connection between simple A-B optical Bell tests and the classic polariscope might prove to be quite, er, fruitful. :smile:

JesseM said:
No it isn't. In an ordinary polariscopic setup it is impossible to set things up so that each of two experimenters always gets a yes/no answer on each trial, each picks between three possible detector settings, when they pick the same detector setting they always get the same answer, but when they pick different detector settings they get the same answer less than 1/3 of the time. And yet this is what QM predicts can happen for entangled photons if the three polarizer angles are 0, 60, and 120.

The similarity between the two setups that I see involves the same light extending between polarizers A and B (forget about the emitter in the optical Bell tests), and the same detection rate angular dependence.

JesseM said:
What are you talking about? Bell didn't make any assumptions about "optical disturbances", he just pointed out that under local realism, if experimenters always get the same answer when they choose the same detector setting and both make their measurements at the same time, that must be because the "things" (you don't have to assume they're 'optical disturbances' or anything so specific) that are measured by each detector must have been assigned identical predetermined answers to what result they'd give for each detector setting at some point when they were in the same local region of space. Do you think it is possible for local realism to be true yet this second assumption to be false? If so, then you haven't thought things through carefully enough, but I can explain why this assumption follows necessarily from local realism if you wish.

The statistical independence representation and the assignment of specific emission property values together constitute what's usually called, misleadingly I think, the assumption of local realism.

The (obviously wrong) statistical independence assumption has nothing to do with locality. We're left with the possibility of realistic representations being contradicted.

If experimenters always get the same answer when they choose the same polarizer setting during a coincidence interval, that might be because the polarizers are analyzing the same thing (which was produced during the same emission interval). This is what quantum theory assumes.
 
  • #104
RandallB said:
Certainly, before claiming you know what “Bell's theorem doesn't contradict”, you have a lot more questions to ask and understanding to gain.
One of my contentions is that Bell's theorem doesn't actually make a locality assumption. If you think it does, then point out where you think it is in his formulation.

If Bell's locality condition isn't, in reality, a locality condition, then Bell's theorem doesn't contradict locality.
 
  • #105
ThomasT said:
The assumption of statistical independence in Bell's formulation has to do with setting the probability of coincidental detection equal to the product of the probability of detection at A and the probability of detection at B: P(A,B) = P(A) P(B).
You'd have to be more specific about what "A" and "B" are supposed to represent here. For example, if A="experimenter 1 measures at angle 120, gets result spin-up", and B="experimenter 2 measures at angle 120, gets result spin-down" then it is certainly not true that Bell assumed that P(A,B) = P(A)*P(B)...if each experimenter has a 1/3 chance of choosing angle 120, then P(A) = P(B) = 1/6 (because on any given angle, there is a 1/2 chance of getting spin-up and a 1/2 chance of getting spin-down), but P(A,B) is not 1/6*1/6 = 1/36, but rather 1/18 (because there's a 1/3*1/3 = 1/9 chance that both experimenters choose angle 120, but if they both do it's guaranteed they'll get opposite spins, so there's a 1/2 chance experimenter 1 will get spin-up and experimenter 2 will get spin-down, and a 1/2 chance experimenter 1 will get spin-down and experimenter 2 will get spin-up).
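The arithmetic in that example can be checked directly with exact fractions (a small sketch):

```python
from fractions import Fraction as F

p_choose = F(1, 3)                        # each experimenter picks angle 120 with probability 1/3
p_outcome = F(1, 2)                       # spin-up and spin-down equally likely at any single detector
p_A = p_choose * p_outcome                # experimenter 1 at 120 and gets spin-up
p_B = p_choose * p_outcome                # experimenter 2 at 120 and gets spin-down
p_joint = p_choose * p_choose * F(1, 2)   # both pick 120, then the outcomes are perfectly anti-correlated
print(p_A, p_B, p_A * p_B, p_joint)       # 1/6 1/6 1/36 1/18 -> the joint probability does not factorize
```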
ThomasT said:
We know that attempts, to date, to describe the light-polarizer interactions in realistic mathematical expressions have not matched the qm predictions for all settings.
Really? In what experiments have QM predictions not matched the results?
ThomasT said:
It's interesting that if we just don't say anything realistic about these interactions (except that the polarizers are interacting with the same disturbance during a given coincidence interval), then, vis-à-vis quantum theory, the probability of joint detection can be fairly accurately calculated.
I really have no idea what you're talking about here. Can you actually give the specifics of this calculation that you say gives a "fairly accurate" match for the probability of joint detection?
JesseM said:
This seems like a knee-jerk reaction you haven't put any thought into. After all, if the light has predetermined answers to what response it will give to each of the three possible polarizer settings (as must be true under local realism--do you disagree?), then this is very much like the case where each card has a predetermined hidden fruit under each square. Where, specifically, do you think the analogy breaks down?
ThomasT said:
It just seems to me like an unproductive way to think about this stuff. After all, people have been mulling over Bell's theorem for half a century with no agreement as to its meaning.
Where did you get that idea? As far as I know, all physicists agree on the meaning: that Bell's theorem shows the predictions of QM for entangled particles are inconsistent with local realism. So what lack of agreement are you referring to?

Also, just waving away my argument with the word "unproductive" again suggests a knee-jerk reaction that you haven't put any thought into. It would be like if I presented a proof that there can be no largest prime number, and instead of addressing flaws in the proof, you just said "it seems like an unproductive way to think about prime numbers...people have been mulling over the prime numbers for years with no agreement as to their meaning." Sorry, but proofs are proofs, you have to address the specifics if you want to dispute their conclusions. And everything about my example involving lotto cards maps directly to the real experimental setup involving particles, if you don't see how this works I can explain, though it should be pretty obvious if you give it any thought (to get you started, the two cards map to the two particles being measured, the three possible boxes either person can scratch map to the three possible detector angles each experimenter can choose from, and the hidden fruits behind each box map to the notion that each particle has been assigned a predetermined answer to the result it will give when measured on any of the three possible angles).
ThomasT said:
Seeing the connection between simple A-B optical Bell tests and the classic polariscope might prove to be quite, er, fruitful. :smile:
What "connection" is that? All your statements are so hopelessly vague. Please give specifics, like numerical predictions that you think are the same.
ThomasT said:
The similarity between the two setups that I see involves the same light extending between polarizers A and B (forget about the emitter in the optical Bell tests), and the same detection rate angular dependence.
What detection rate angular dependence? You never give any specifics. There is nothing in the classical optics setup that says that if one experimenter has his polarizer set to angle phi and the other has his polarizer set to angle xi, then the probability of them getting the "same result" in some sense will be cos^2(phi - xi); that is a purely quantum rule for the detection rate angular dependence, which has nothing to do with Malus' law (which has to do with the angle between the polarization of the wave and the detector, not to do with the angle between two detectors).
ThomasT said:
The statistical independence representation and the assignment of specific emission property values together constitute what's usually called, misleadingly I think, the assumption of local realism.
As I said, it seems to me that the kind of "statistical independence" you referred to above is not assumed by Bell or any other physicist--I think you're just confused, or else you're being overly vague about what "A" and "B" are supposed to represent.
ThomasT said:
If experimenters always get the same answer when they choose the same polarizer setting during a coincidence interval, that might be because the polarizers are analyzing the same thing (which was produced during the same emission interval). This is what quantum theory assumes.
That's what Bell assumed too, that they were both analyzing two things with properties that were sufficiently "the same" to ensure they had the same predetermined answers to any possible measurement (or opposite answers, depending on the specific experiment being discussed). But he showed that even if you assume this, then under local realism this leads to conflicts with QM over the predicted statistics in trials where the experimenters choose different detector settings. I already explained this with the lotto card example, which you apparently refuse to look at--would you be happier if I performed some trivial editing on that example so that it was no longer about lotto cards, but instead about particles whose spin are measured at two different detectors? None of the math would need to be any different, but if you have some kind of psychological block about mapping analogies to the actual physical situation they're supposed to be analogous to, perhaps it would help you to see that the proof really is completely straightforward.
 
