A Bell experiment would somehow prove non-locality and FTL information transfer?

In summary: Bell's theorem states that a theory cannot be both "local" and "realistic." You have to give up one or the other, if you accept the validity of Bell's Theorem.
  • #141
heusdens said:
I.

If the detectors have equal settings, then the results are either ++ or --. The positive correlation (= same result from both detectors) is 100% (or 1).

{Questions:
a. Can we still assume that each individual detector produces a truly random result?
b. Does the correlation only hold for exactly simultaneous results?
c. Are the chances of ++ and -- 50% each?

It would be weird if b and/or c did not hold while a still holds...
}

Second remarkable thing:

II.

If the detectors have unequal settings, then we find results of +- or -+, that is, negative correlation (= unequal results from the detectors), happening with a chance of 25% (or 0.25).

{Same questions as above, but for c now: are the chances of +- and -+ 50% each?}

a. Each detector sees a random pattern, always.
b. There is a coincidence window used for detections, and the setup is calibrated so the middles of the windows are equivalent. But it does not actually matter at all if the detections are simultaneous.
c. The likelihood of + or - at any detector is always 50%. Just to be specific: when we have Type I PDC entangled photons, then we have perfect correlations (at identical settings). So you get ++ or -- almost all of the time.

In the II.c. case, there are 4 permutations: ++/-- and +-/-+, i.e. matches and non-matches. Matches can drop as low as 25% for certain settings (usually specified as 0, 120, 240 degrees).
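A minimal Python sketch of those numbers, assuming only the cos^2 rule for the match probability that is invoked later in the thread (the function name and printout are mine, purely for illustration):

```python
import math

# Match probability for a Type I PDC pair = cos^2 of the angle between the
# two detector settings (the rule quoted later in this thread).
def match_probability(setting_a_deg, setting_b_deg):
    delta = math.radians(setting_a_deg - setting_b_deg)
    return math.cos(delta) ** 2

print(match_probability(0, 0))      # identical settings -> 1.0 (always ++ or --)
print(match_probability(0, 120))    # 120-degree difference -> 0.25 (matches drop to 25%)
print(match_probability(120, 240))  # any pair of the 0/120/240 settings differs by 120 -> 0.25
```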
 
  • #142
heusdens said:
Now how can we explain this??

We first try to find independent explanations for the separate remarks.

First let us look at I. (detector settings equal)

We can make all kinds of suggestions about how this could be the case.

1. For example, we could assume that both detectors have exactly the same algorithm with which to produce the data. Each data point separately is a random result, but the results on both sides are always the same. The algorithm works because what the detectors still have in common is time, and possibly also other easily overlooked common sources (an external light source common to both observers, and other such common sources).

{the assumption here is that if we take the detector results 'out of sync', for instance data of detector 1 at time t and data of detector 2 at time t + delta t (delta t > 0), the correlated results are not produced -- is that a realistic assumption??}

A less trivial approach is to suspect that detector 1 has somehow received a signal from detector 2, knows that its setting is the same, and can produce the positively correlated result. The signal need not be instantaneous to explain it (if the signal contains the timestamp). The weird thing about this explanation is that it breaks the symmetry, since we could equally suppose that detector 2 somehow gets a signal from detector 1. If we assume symmetry, both signals would occur in this explanation. But then how could we get the correlation as we see it, based on both signals? In the asymmetric case, we would have no trouble finding a possibility for correlation, since then only one detector would have to adjust to produce the corresponding output. It is more difficult for this to happen in the symmetric case (both adjustments would cancel out), but if we assume that the setup is symmetric, we have to assume just that. This can then be shown to be equivalent to the case in which both detectors receive a simultaneous signal telling them that the detector settings match, so both detectors can make equal and simultaneous adjustments. This is like postulating that exactly in the middle between those detectors (in order for simultaneous arrival) there is a receiver/transmitter, which receives the detector signals and transmits back whether they are equal or not.


Now we look at II. (detector settings unequal)

2. We get a .25 chance that we have unequal results (+- or -+).
This is the same as a .75 chance of having equal results (++ or --).

Both detectors individually (if I assume correctly) still produce random results, but the results from the two detectors are now equal in 3 out of 4 cases on average, which is the same as being unequal in 1 out of 4 cases on average.

In principle we can now suppose that the same kinds of things that were supposed to explain the outcomes in the previous case also happen here, with the exception that the output generated is not always ++ or --, but only in 3 out of 4 cases.
This is then just assuming a different algorithm to produce that result.


3. Now we try to combine explanations I and II.

For each of explanations I and II separately, we could assume that something purely internal generated the outcomes. But if both I and II occur, we have no way of explaining this purely internally.

4. So this already urges us to assume that the detector states (settings) are being transmitted to a common source, exactly in the middle (that is, in the orthogonal plane which intersects the line between both detectors at the midpoint of that line).

We can also verify in the experiment (by placing the detectors very far apart) that this hypothetical signal is effectively instantaneous, by changing the detector settings simultaneously and still getting instant correlations.
To cope with that, the hypothetical assumption is that this is like a signal that travels back in time from one detector to a common source, and then forward in time to the other detector.

Conclusion:
5. Although we did not set up this imaginary experiment with such a common source, it already follows from the results of the experiment that such a common source must be assumed, one which communicates back and forth between the detectors.

1. Actual experiments are designed to rule out other sources.

2. It actually should be .25 for matches and .75 for mismatches if we have 1.0 for the matches in your I. case. This comes from the cos^2 rule with 120 degrees as the difference in settings.

3. This is correct, we need to consider both of these together.

4. Good, you are seeking possible explanations for the results. And now we find ourselves considering new physical phenomena not otherwise known... such as backwards in time signaling and hypothetical effects derived from previously unknown sources. But these have severe theoretical problems too, since they only appear for entangled particles.

5. It is not a requirement that there is a common source, but that is certainly one possibility.

Your general line of approach is definitely improving.
 
  • #143
JesseM said:
Although I tailored the short proofs I gave above to a particular thought-experiment, it's quite trivial to change a few words so they cover any situation where two people can measure one of three properties and they find that whenever they measure the same property they get opposite results. If you don't see how, I can do this explicitly if you'd like. I am interested in the physics of the situation, not in playing a sort of "gotcha" game where if we can show that Bell's original proof did not cover all possible local hidden variable explanations then the whole proof is declared null and void, even if it would be trivial to modify the proof to cover the new explanations we just thought up as well. I'll try reading his paper to see what modifications, if any, would be needed to cover the case where measurement is not merely revealing preexisting spins, but in the meantime let me ask you this: do you agree or disagree that if we have two experimenters with a spacelike separation who have a choice of 3 possible measurements which we label A,B,C that can each return two possible answers which we label + and - (note that these could be properties of socks, downhill skiers, whatever you like), then if they always get opposite answers when they make the same measurement on any given trial, and we try to explain this in terms of some event in both their past light cone which predetermined the answer they'd get to each possible measurement with no violations of locality allowed (and also with the assumption that their choice of what to measure is independent of what the predetermined answers are on each trial, so their measurements are not having a backwards-in-time effect on the original predetermining event, as well as the assumption that the experimenters are not splitting into multiple copies as in the many-worlds interpretation), then the following inequalities must hold:

1. Probability(Experimenter #1 measures A and gets +, Experimenter #2 measures B and gets +) plus Probability(Experimenter #1 measures B and gets +, Experimenter #2 measures C and gets +) must be greater than or equal to Probability(Experimenter #1 measures A and gets +, Experimenter #2 measures C and gets +)

2. On the trials where they make different measurements, the probability of getting opposite answers must be greater than or equal to 1/3

Dear Jesse,

1. YES; I agree that an experiment with your boundary conditions can deliver the results you claim. But you seem not to agree that a wholly classical experiment with the same boundary conditions can deliver a different result?

2. Could I therefore take up your offer and ask you to present the case where Alice and Bob get identical results for identical settings? (I think it will help us all, especially heusdens, being in my experience easier to discuss and follow than the ''opposite'' case.)

3. Then, given your interest in the physics of the situation, could I ask you to use more maths in your presentation? (That's a long sentence of yours, there, above)

4. So would you be happy to deliver the following inequality:

(1) P(BC = +1|bc) - P(AC = -1|ac) - P(AB = +1|ab) less than or equal to 0?

Here P(BC = +1|bc) denotes the probability of Alice and Bob getting the same result (+1, +1 or -1, -1) under the respective test conditions b and c.

5. (1) is just my way of attempting to standardise the way BT is presented. And I'm happy to accept most boundary conditions; say, consistent with common-sense.

Regards, wm
 
  • #144
DrChinese said:
Einstein of course supported what you call "pre-measurement values". This is because he said:

"I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

I cannot find a date and exact source.

OK. The point I make is that Einstein realism nowhere prohibits a ''measurement'' perturbing the pristine system.

So (in my view) when he says something like ''physical reality exists independent of substantiation and perception'', he is not prohibiting measurement perturbation, a phenomenon well known throughout classical and quantum physics. wm
 
  • #145
DrChinese said:
I tend to support a rejection of realism rather than a rejection of locality (in order to reconcile with Bell's Theorem). I do not know if there are beables, but there definitely are observables. I do not know, for instance, if there is a one-to-one mapping of observables to beables. My guess would be that there is not, since there can be a nearly infinite number of observables for a single particle,

I definitely do not agree that it is a closed question (i.e. true by definition) that an observable must be observer dependent. That is one of the questions we seek the answer to. I happen to think it is, but I do not expect others to necessarily agree with this position. I believe that the Heisenberg Uncertainty Principle essentially calls for this position.

OK; we differ: I reject Bell realism and (so) locality remains unfettered.

To me there must be beables (from BEING), for how else do we deliver observables? By ''deliver'' I allow that sometimes we may deliver the beable ''unperturbed'' (say, charge) and most generally we deliver the beable perturbed (say, polarisation).

By ''observer dependent'' I meant the process whereby a beable becomes an observable. No process, no observable: a closed question, I'd like to think.

And YES, the HUP seems applicable to (even proof of) my position where quanta are involved. For how can a quantum be given or received without change = perturbation; and how can there be ''observation'' without a quantum change? wm
 
  • #146
DrChinese said:
Ah, but the Heisenberg Uncertainty Principle (HUP) is quite present in such cases! Note that we cannot learn MORE information than the HUP allows about one particle by studying its entangled twin!

Doc, this seems a strange use of the HUP? It had not occurred to me that you were using it that way.

And surely you are not correct? By testing one particle I can observe its pristine reaction to setting a. By testing the other particle I can observe its pristine reaction to setting b. I have THUS learned something MORE about each twin!

Given one particle only, the HUP says this is impossible: and I agree; it's those quanta again; a particle is pristine ONCE ONLY. BUT: given two, it's surely common sense that we learn MORE about each?

Am I missing something here? wm
 
  • #147
wm said:
Dear Jesse,

1. YES; I agree that an experiment with your boundary conditions can deliver the results you claim. But you seem not to agree that a wholly classical experiment with the same boundary conditions can deliver a different result?
What do you mean? I didn't suggest a specific experiment, just a general idea of two experimenters doing two measurements where they each have a choice of three measurement settings A,B,C, and there are only two possible results + or - on each measurement. I was asking if you agreed the two inequalities I gave must be satisfied for any classical experiment in which they always get opposite results when they make the same measurement, not if they can be satisfied. If you think they could be violated for a classical experiment, can you present an example, or just point out where you see an error in the proofs I gave?
wm said:
2. Could I therefore take up your offer and ask you to present the case where Alice and Bob get identical results for identical settings? (I think it will help us all, especially heusdens, being in my experience easier to discuss and follow than the ''opposite'' case.)
Well, if Alice and Bob get identical results for identical settings, this is different from what happens when you measure spins of entangled particles on the same axis, which always have opposite spins on that axis. But sure, I can come up with some inequalities for this case. If they always get the same result for the same setting, then under the conditions I described above, the following inequalities must be obeyed:

* Probability(Bob measures A and gets +, Alice measures B and gets -) plus Probability(Bob measures B and gets +, Alice measures C and gets -) greater than or equal to Probability(Bob measures A and gets +, Alice measures C and gets -)

* When they pick different settings, the probability they get identical results must be greater than or equal to 1/3.

Do you think there would be any classical situation where locality is obeyed but either of these inequalities could be violated? Do you see a flaw in the proofs I gave that these inequalities cannot be violated?
wm said:
3. Then, given your interest in the physics of the situation, could I ask you to use more maths in your presentation? (That's a long sentence of yours, there, above)
Can you tell me which part of the sentence you'd like to see elaborated? Most of it was just discussing the precise conditions of what I mean by a "local hidden variables theory", but if you just think in terms of a classical situation that obeys locality, you'll almost certainly be obeying those conditions. Another shorthand way of stating the condition is that we're assuming the reason both experimenters get identical results for the same setting is because the two objects/signals they receive were prepared in some state where the outcome of each possible measurement was predetermined, and the states are such that they are predetermined to get identical results with the same setting. Again, if you present an example, if it violates one of the conditions I discussed, I'll explain why, but as long as you think in classical terms it's unlikely there'll be a problem.
wm said:
4. So would you be happy to deliver the following inequality:

(1) P(BC = +1|bc) - P(AC = -1|ac) - P(AB = +1|ab) less than or equal to 0?

Here P(BC = +1|bc) denotes the probability of Alice and Bob getting the same result (+1, +1 or -1, -1) under the respective test conditions b and c.
So I assume P(AC = -1) means the probability that they each get different results? This is somewhat confusing notation since you're using + and - both for individual results and for whether they both get the same or different results--would you mind if we use S for same and D for different instead? Also, when you say "under respective test conditions b and c", do you just mean that Bob uses the measurement setting B and Alice uses the measurement setting C? Assuming this is what you meant, then yes, I agree your inequality must be satisfied for any local classical experiment. The reason is that in order for (BC = S | bc) (using my S and D notation) to be satisfied, the thing they are measuring must be in one of the following four types of predetermined states:

1. A+ B+ C+
2. A- B+ C+
3. A+ B- C-
4. A- B- C-

(here a predetermined state of type A+ B- C- just means any state in which it is predetermined that if the experimenter chooses setting A she'll get +, if she chooses setting B or C she'll get -).

Now, notice that if it's in predetermined state 2 or 3, it will also satisfy (AC = D| ac), while if it's in predetermined state 1 or 4, it will also satisfy (AB = S| ab). So, any possible predetermined state that satisfies the first must also satisfy one of the other two (and either or both of the other two could be satisfied without satisfying the first, as with a predetermined state of type A+ B+ C-), so this makes it clear that P(BC = S | bc) must be less than or equal to P(AC = D| ac) + P(AB = S| ab) under any local hidden variables theory, regardless of whether measurement disturbs the state or not.
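A small Python check of this counting argument, for readers who prefer code; it simply enumerates the eight possible predetermined-answer states and verifies the implication used above (the "same results for same settings" case is assumed, so a single triple of predetermined answers fixes both experimenters; the variable names are mine):

```python
from itertools import product

# Enumerate all 8 predetermined states: each of A, B, C is predetermined to give + or -.
# Check that every state satisfying "B and C give the same result" also satisfies
# "A and C give different results" OR "A and B give the same result".
for a, b, c in product('+-', repeat=3):
    bc_same = (b == c)
    ac_diff = (a != c)
    ab_same = (a == b)
    implication_holds = (not bc_same) or (ac_diff or ab_same)
    print(f"A{a} B{b} C{c}  BC=S:{bc_same}  AC=D or AB=S:{ac_diff or ab_same}  ok:{implication_holds}")

# Since the implication holds for every one of the 8 states, any probability mixture of
# such states obeys P(BC=S|bc) <= P(AC=D|ac) + P(AB=S|ab).
```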
 
  • #148
JesseM said:
What do you mean? I didn't suggest a specific experiment, just a general idea of two experimenters doing two measurements where they each have a choice of three measurement settings A,B,C, and there are only two possible results + or - on each measurement. I was asking if you agreed the two inequalities I gave must be satisfied for any classical experiment in which they always get opposite results when they make the same measurement, not if they can be satisfied. If you think they could be violated for a classical experiment, can you present an example, or just point out where you see an error in the proofs I gave? Well, if Alice and Bob get identical results for identical settings, this is different from what happens when you measure spins of entangled particles on the same axis, which always have opposite spins on that axis. But sure, I can come up with some inequalities for this case. If they always get the same result for the same setting, then under the conditions I described above, the following inequalities must be obeyed:

* Probability(Bob measures A and gets +, Alice measures B and gets -) plus Probability(Bob measures B and gets +, Alice measures C and gets -) greater than or equal to Probability(Bob measures A and gets +, Alice measures C and gets -)

* When they pick different settings, the probability they get identical results must be greater than or equal to 1/3.

Do you think there would be any classical situation where locality is obeyed but either of these inequalities could be violated? Do you see a flaw in the proofs I gave that these inequalities cannot be violated? Can you tell me which part of the sentence you'd like to see elaborated? Most of it was just discussing the precise conditions of what I mean by a "local hidden variables theory", but if you just think in terms of a classical situation that obeys locality, you'll almost certainly be obeying those conditions. Another shorthand way of stating the condition is that we're assuming the reason both experimenters get identical results for the same setting is because the two objects/signals they receive were prepared in some state where the outcome of each possible measurement was predetermined, and the states are such that they are predetermined to get identical results with the same setting. Again, if you present an example, if it violates one of the conditions I discussed, I'll explain why, but as long as you think in classical terms it's unlikely there'll be a problem. So I assume P(AC = -1) means the probability that they each get different results? This is somewhat confusing notation since you're using + and - both for individual results and for whether they both get the same or different results--would you mind if we use S for same and D for different instead? Also, when you say "under respective test conditions b and c", do you just mean that Bob uses the measurement setting B and Alice uses the measurement setting C? Assuming this is what you meant, then yes, I agree your inequality must be satisfied for any local classical experiment. The reason is that in order for (BC = S | bc) (using my S and D notation) to be satisfied, the thing they are measuring must be in one of the following four types of predetermined states:

1. A+ B+ C+
2. A- B+ C+
3. A+ B- C-
4. A- B- C-

(here a predetermined state of type A+ B- C- just means any state in which it is predetermined that if the experimenter chooses setting A she'll get +, if she chooses setting B or C she'll get -).

Now, notice that if it's in predetermined state 2 or 3, it will also satisfy (AC = D| ac), while if it's in predetermined state 1 or 4, it will also satisfy (AB = S| ab). So, any possible predetermined state that satisfies the first must also satisfy one of the other two (and either or both of the other two could be satisfied without satisfying the first, as with a predetermined state of type A+ B+ C-), so this makes it clear that P(BC = S | bc) must be less than or equal to P(AC = D| ac) + P(AB = S| ab) under any local hidden variables theory, regardless of whether measurement disturbs the state or not.

1. You have now reframed your question to: ''Do you agree the two inequalities I gave must be satisfied for any classical experiment in which they always get opposite results when they make the same measurement.'' My answer is: NO.

2. The simpler ''same settings, same results'' experiment has to do with photons.

3. You gave the general case which, if valid, must include my specific case. I suggest, since you want to understand the physics, let's be specific and use more maths.

4. SO, to ensure that I am clear on your position: You insist that all classical experiments with the new boundary conditions must satisfy the following Bellian inequality:

(1) P(BC = +1|bc) - P(AC = -1|ac) - P(AB = +1|ab) less than or equal to 0.

Here P(BC = +1|bc) denotes the probability of Alice and Bob getting the same (S) individual results (+1, +1) xor (-1, -1) under the respective test conditions b (Alice) and c (Bob); that is, the first given result is Alice's (here B); the second Bob's (here C); etc.

5. A counter-example would then be a refutation of your position AND Bell's inequality.

PS: To be very clear: I accept the boundary conditions but not the limiting assumptions that you associate with them.

Are we agreed? wm
 
  • #149
DrChinese said:
a. Each detector sees a random pattern, always.
b. There is a coincidence window used for detections, and the setup is calibrated so the middles of the windows are equivalent. But it does not actually matter at all if the detections are simultaneous.
c. The likelihood of + or - at any detector is always 50%. Just to be specific: when we have Type I PDC entangled photons, then we have perfect correlations (at identical settings). So you get ++ or -- almost all of the time.

This last means with a 50/50 chance each, I suppose?

And if it does not matter whether there is coincidence, then there is something I don't understand. My reasoning would be that it matters a lot that they are exactly in sync. How could out-of-sync measurements be statistically random (for every separate measurement) AND correlated with the other measurement?

That is not clear to me.

In the II.c. case, there are 4 permutations: ++/-- and +-/-+, i.e. matches and non-matches. Matches can drop as low as 25% for certain settings (usually specified as 0, 120, 240 degrees).

And what about the other probabilities?

++ / -- has 50/50 (relative) probability?

+- / -+ also 50/50 (relative) probability?

(in my imaginary experiment, I just assume this to be the case, as well as that the negatively correlated fraction has a probability of 25%)
 
  • #150
wm said:
1. You have now reframed your question to: ''Do you agree the two inequalities I gave must be satisfied for any classical experiment in which they always get opposite results when they make the same measurement.'' My answer is: NO.

2. The simpler ''same settings, same results'' experiment has to do with photons.

3. You gave the general case which, if valid, must include my specific case. I suggest, since you want to understand the physics, let's be specific and use more maths.

4. SO, to ensure that I am clear on your position: You insist that all classical experiments with the new boundary conditions must satisfy the following Bellian inequality:

(1) P(BC = +1|bc) - P(AC = -1|ac) - P(AB = +1|ab) less than or equal to 0.

Here P(BC = +1|bc) denotes the probability of Alice and Bob getting the same (S) individual results (+1, +1) xor (-1, -1) under the respective test conditions b (Alice) and c (Bob); that is, the first given result is Alice's (here B); the second Bob's (here C); etc.
Right, assuming we've switched from the assumption that Bob and Alice always get opposite results when they perform the same measurement to your new assumption that they always get the same result when they perform the same measurement. Again though, it's really confusing to have +1 represent both a possible result of one person's measurement and an outcome where they both got the same results, so I suggest using my notation S and D instead.
wm said:
5. A counter-example would then be a refutation of your position AND Bell's inequality.

PS: To be very clear: I accept the boundary conditions but not the limiting assumptions that you associate with them.

Are we agreed? wm
By "boundary conditions" you mean the things I said about the two experimenters having three properties to measure, always getting the same results when they pick the same property, and with the assumption that only classical phenomena obeying locality are involved? (This is a nonstandard use of the phrase 'boundary conditions', which usually refers to conditions on the spatial or temporal boundary of a physical system, like the system's initial conditions.) And by "limiting assumptions" you mean my claim that the various inequalities must necessarily hold true given these conditions? (Of course this was not an 'assumption', it was something I tried to give a proof for.) If so, then yes, I agree. If you think you have a counterexample, or see a flaw in the short proof I gave, please present it.
 
  • #151
wm said:
Doc, this seems a strange use of the HUP? It had not occurred to me that you were using it that way.

And surely you are not correct? By testing one particle I can observe its pristine reaction to an a setting. By testing the other particle I can observe its pristine reaction to a b setting. I have THUS learned something MORE about each twin!

Given one particle only, HUP says this is impossible: and I agree; its that quanta again; a particle is pristine ONCE ONLY. BUT: Given two, its surely common-sense that we learn MORE about each?

Am I missing something here? wm

Alice tests her particle in the "pristine" condition for setting A, and then Bob tests his particle in the "pristine" condition for setting B. Yes, it is common sense that we learned something about Bob's particle from the test we did on Alice. But that would in fact violate the HUP, and so it turns out that the common sense explanation is false.

For us to have learned about Bob's particle at setting A, we would need to be able to perform another test on that particle at setting A and get the answer we expect due to our test on Alice at setting A. If you actually perform such a test, you do not get any higher match rate (we would be looking for more perfect correlations at that point).

The above holds true REGARDLESS of the order or timing of the 2 measurements. So the HUP (Heisenberg Uncertainty Principle) is not violated.

In my opinion, both the HUP and Relativity are fundamental and important principles that guide how we observe particle events. These are both consistent with Bell's Theorem.
 
  • #152
DrChinese said:
1. Actual experiments are designed to rule out other sources.

2. It actually should be .25 for matches and .75 for mismatches if we have 1.0 for the matches in your I. case. This comes from the cos^2 rule with 120 degrees as the difference in settings.

3. This is correct, we need to consider both of these together.

4. Good, you are seeking possible explanations for the results. And now we find ourselves considering new physical phenomena not otherwise known... such as backwards in time signaling and hypothetical effects derived from previously unknown sources. But these have severe theoretical problems too, since they only appear for entangled particles.

5. It is not a requirement that there is a common source, but that is certainly one possibility.

Your general line of approach is definitely improving.
Yeah, I'm learning :smile:

And some remarks:

2. I got it the other way round in my imaginary case, but the arguments I pose are unaffected, so regard the experiment as having a .25 chance of matches and .75 of mismatches.

3. Yes. Otherwise, we could just reason that nothing but something entirely within the box calculates the values. Although I assume, at the basis, that there IS a (hidden?) common source, which is TIME.
Therefore - in that reasoning - I expect that the simultaneity DOES matter significantly (it must match EXACTLY).

4. Well, that is a bit of a hypothesis, but not entirely my own, since I remember remarks by Feynman, who claimed that a positron just behaves (in the mathematical description) as an electron moving backward in time. Now a photon would be its own anti-particle...

[ And as a side note, usually we would say (from common sense, or formal logic) that it is nonsense, since this sort of thing would assume that an effect could predate a cause. Again, that is something which dialectics has no trouble understanding, which is another example of why these preconditions of formal logic and/or formal thinking often limit us in seeing what happens. ]

5. Not a requirement? I think that it follows from 3... We have by definition a common source, since time is a common source. But that is not in itself enough; there must be a way for each detector to be influenced by the other detector (an influence on the value we obtain from the detector). That is the whole point I tried to make.
 
  • #153
heusdens said:
This last means, with 50/50 chance I suppose??

And if it does not matter if there is coincidence, there is something I don't understand then. My reasoning would be, it matters a lot that they are exactly in sync. How could an out of sync measurements be statistically random (for every separate measurement) AND correlated with the other measurement?

That is not clear to me.



And what about the other probabilities?

++ / -- has 50/50 (relative) probablity?

+- / -+ also 50/50 (relative) probability?

(in my imaginary experiment, I just assume to be the case, as well as that the negative correlated fraction has probability of 25%)

1. The odds of the ++ case always equals the -- case.
2. The odds of the +- case always equals the -+ case.
3. BUT odds of the ++ case DOES NOT equal the +- case (except at very specific settings which are not worth discussing). Ditto for the other permutations.

Yes, it is strange. Each stream of outcomes follows a perfectly random sequence. Each sequence will individually contain an equal number of + and - outcomes. Yet there will be a discernible pattern when these streams are correlated and it WILL violate Bell's Inequality.
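One way to see how two individually random streams can still carry a setting-dependent correlation is a toy simulation that merely reproduces the observed statistics (it is not a local model, and the function names and counts here are mine): Alice's outcomes are fair coin flips, and Bob's are then made to match hers with the cos^2 probability.

```python
import math
import random

def simulate(n, alice_deg, bob_deg, seed=0):
    """Reproduce the observed statistics only (NOT a local hidden-variable model):
    Alice's outcome is a fair coin flip; Bob's outcome then agrees with hers
    with probability cos^2 of the setting difference."""
    rng = random.Random(seed)
    p_match = math.cos(math.radians(alice_deg - bob_deg)) ** 2
    alice, bob = [], []
    for _ in range(n):
        a = rng.choice('+-')
        b = a if rng.random() < p_match else ('-' if a == '+' else '+')
        alice.append(a)
        bob.append(b)
    return alice, bob

alice, bob = simulate(100_000, 0, 120)
print(alice.count('+') / len(alice))                          # ~0.5: Alice's stream looks random
print(bob.count('+') / len(bob))                              # ~0.5: Bob's stream looks random
print(sum(a == b for a, b in zip(alice, bob)) / len(alice))   # ~0.25: yet the streams are correlated
```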
 
  • #154
heusdens said:
Although I assume, at the basis, that there IS a (hidden?) common source, which is TIME.
Therefore - in that reasoning - I expect that the simultaneity DOES matter significantly (it must match EXACTLY).

There is a "common source" of the photons themselves, because they are actually described by a single wave function until one of them is observed. At that point the combined wave function collapses.

Now, when it comes to the time of measurement/observation, it does NOT matter to the results whether one is measured first or not. (The order of the observations is not relevant to Bell tests at least, although it might be noticeable for Quantum Erasers or certain other experimental setups.)

The easiest way to see this is by examining the following 3 cases:

A. Alice and Bob's detectors are exactly 10 meters from the PDC source (in opposite directions).
B. Alice's detector is exactly 10 meters from the PDC source, while Bob's detector is exactly 50 meters from the PDC source (in opposite directions).
C. Alice's and Bob's detectors are exactly 10 meters from the PDC source, and are co-located, but Bob's photon is routed through fiber for an extra 40 meters before arriving at the detector.

All Bell tests will yield exactly the same results in all 3 of the above cases. In all cases, we always define a time window for coincidence counting - usually something like +/- 20 nanoseconds. The actual choice is made based on the particular experimental setup, the laser intensity, etc.
 
  • #155
DrChinese said:
There is a "common source" of the photons themselves, because they are actually described by a single wave function until one of them is observed. At that point the combined wave function collapses.

Now, when it comes to the time of measurement/observation, it does NOT matter to the results whether one is measured first or not. (The order of the observations is not relevant to Bell tests at least, although it might be noticeable for Quantum Erasers or certain other experimental setups.)

The easiest way to see this is by examining the following 3 cases:

A. Alice and Bob's detectors are exactly 10 meters from the PDC source (in opposite directions).
B. Alice's detector is exactly 10 meters from the PDC source, while Bob's detector is exactly 50 meters from the PDC source (in opposite directions).
C. Alice's and Bob's detectors are exactly 10 meters from the PDC source, and are co-located, but Bob's photon is routed through fiber for an extra 40 meters before arriving at the detector.

All Bell tests will yield exactly the same results in all 3 of the above cases. In all cases, we always define a time window for coincidence counting - usually something like +/- 20 nanoseconds. The actual choice is made based on the particular experimental setup, the laser intensity, etc.

This assumes a continuous stream of photons?

What if we change the setup so that there is one photon per time unit?

I assume that either the outcomes will then be totally different, or, in that case, the sync must match.

I mean it would be rather like taking the data of one of the detectors and placing them in a queue - putting them out of sync on purpose - before we qualify them as either matching or non-matching [at the 'coincidence' monitor], etc., which cannot give the same result.
Or, put differently: we just make the output of both detectors visible and separate.
If we combine the results to see if there is coincidence, we must of course match corresponding results, and if we do not, this cannot give the same probability for coincidences.
Place them in two tables, like this:

Detector #1

Seqno Setting Result
1 A +
2 B +
3 C -

etc.

and same for

Detector #2

Seqno Setting Result
1 C +
2 B +
3 A -

etc.

Now we should of course combine detector results with the same seqno.
If we make an arbitrary choice, like combining detector #1 seqno 1 with detector #2 seqno 3, we of course get invalid results.
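heusdens' point about pairing by sequence number can be illustrated with a toy sketch: if the records of both detectors match perfectly when paired by seqno, pairing them arbitrarily instead destroys the correlation. The sketch only reproduces the perfect-correlation statistics at identical settings and is not a physical model; all names and counts are illustrative.

```python
import random

rng = random.Random(1)

# Toy records at identical settings: each pair matches perfectly when lined up by seqno,
# while each individual stream is still a fair coin flip.
outcomes = [rng.choice('+-') for _ in range(100_000)]
detector1 = list(outcomes)   # detector #1, record with seqno i
detector2 = list(outcomes)   # detector #2, the matching partner record with seqno i

def match_rate(xs, ys):
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

print(match_rate(detector1, detector2))   # 1.0 when paired by the same seqno

shuffled = detector2[:]
rng.shuffle(shuffled)                     # pair the records arbitrarily instead
print(match_rate(detector1, shuffled))    # ~0.5: the correlation disappears
```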
 
  • #156
JesseM said:
Right, assuming we've switched from the assumption that Bob and Alice always get opposite results when they perform the same measurement to your new assumption that they always get the same result when they perform the same measurement. Again though, it's really confusing to have +1 represent both a possible result of one person's measurement and an outcome where they both got the same results, so I suggest using my notation S and D instead. Agreed. If you think you have a counterexample, or see a flaw in the short proof I gave, please present it.

Jesse, the flaw that I see is this: Your example relies for its success on the very limiting notion of Bell-reality; aka naive or strong reality. I have a counter-example, and am waiting for clarification re the rules (my post #74) here, so will contact you off-PF. (PS: I'm seeking to avoid the need for retyping it here.) wm
 
  • #157
And another remark. Suppose we emit like 100 discrete photons.
Does each get measured at both sides?
We assume the time unit is large enough that we have a measurement within the time unit at both detectors.

Suppose, in my example (previous post) of the tables of outcomes, that we just store the data somewhere until all the photons are measured, and only then do a coincidence count.

In that case, and if we could then also make an arbitrary synchronisation, I would be really baffled; that would mean sheer magic happens (already printed results would then have to change suddenly AFTER we did the experiment!).
 
  • #158
wm said:
Jesse, the flaw that I see is this: Your example relies for its success on the very limiting notion of Bell-reality; aka naive or strong reality.
If you're referring to the idea that measurement doesn't disturb the state, I specifically avoided using that assumption in my proof. I said that "a predetermined state of type A+ B- C- just means any state in which it is predetermined that if the experimenter chooses setting A she'll get +, if she chooses setting B or C she'll get -". So if you have a predetermined state X which is "of type A+ B- C-", you aren't assuming that the state X involves spin-up on the A axis and spin-down on the B and C axis prior to measurement, you're just assuming that given the initial state X and a measurement on the A axis this will deterministically cause the experiment to register spin-up, and given a measurement on the B or C axis this will deterministically cause the experiment to register spin-down. That's assuming A, B, C are measurements of spins, but the point would remain the same if you were talking about some other properties--I'm not assuming the measurement reveals a preexisting property, just that each possible initial state will give a determinate response to each of the three measurements.
wm said:
I have a counter-example, and am waiting for clarification re the rules (my post #74) here, so will contact you off-PF. (PS: I'm seeking to avoid the need for retyping it here.) wm
I think the rules would allow you to post what you think is a classical example that violates a Bell inequality, provided you present it in a tentative way where you're asking for feedback on the example and willing to be shown that it doesn't really violate Bell's theorem, rather than presenting it as a definitive disproof of mainstream ideas (note that heusdens' 'three datastreams' idea was not edited or deleted by the moderators, for example).

I'd encourage you to write up your example and then post it here so we all can see it (presenting it in the tentative way I suggested), and then if it's deleted by the moderators you could always resend it to me via PM.
 
  • #159
DrChinese said:
Alice tests her particle in the "pristine" condition for setting A, and then Bob tests his particle in the "pristine" condition for setting B. Yes, it is common sense that we learned something about Bob's particle from the test we did on Alice. But that would in fact violate the HUP, and so it turns out that the common sense explanation is false.

For us to have learned about Bob's particle at setting A, we would need to be able to perform another test on that particle at setting A and get the answer we expect due to our test on Alice at setting A. If you actually perform such a test, you do not get any higher match rate (we would be looking for more perfect correlations at that point).

The above holds true REGARDLESS of the order or timing of the 2 measurements. So the HUP (Heisenberg Uncertainty Principle) is not violated.

In my opinion, both the HUP and Relativity are fundamental and important principles that guide how we observe particle events. These are both consistent with Bell's Theorem.

I see no violation of HUP here. I see no need to abandon common-sense.

Does this help: If we carry out identical tests on the separated singlet-correlated twins (particles, say photons), then we confirm that such pristine twins do indeed return a certain (identical) polarisation.

HUP says you can never get such confirmation on a single pristine particle -- it being no longer pristine after the first (even single quanta) interaction.

Of course the situation is different if you have Bell/naive/strong-realism in mind. So: Rather than abandon common-sense, just abandon that erroneous realism (I say). wm
 
  • #160
heusdens said:
This assumes a continuous stream of photons?

What if we change the setup so that there is one photon per time unit?

I assume either, the outcomes then will be totally different, or in that case, the sync must match.

I mean it would be rather like taking the data of one of the detectors and placing them in a queue - putting them out of sync on purpose - before we qualify them as either matching or non-matching [at the 'coincidence' monitor], etc., which cannot give the same result.
Or, put differently: we just make the output of both detectors visible and separate.
If we combine the results to see if there is coincidence, we must of course match corresponding results, and if we do not, this cannot give the same probability for coincidences.
Place them in two tables, like this:

Detector #1

Seqno Setting Result
1 A +
2 B +
3 C -

etc.

and same for

Detector #2

Seqno Setting Result
1 C +
2 B +
3 A -

etc.

Now we should of course combine detector results with the same seqno.
If we make an arbitrary choice, like combining detector #1 seqno 1 with detector #2 seqno 3, we of course get invalid results.
Each photon has only a single entangled "twin" that it can be matched with, you don't have a choice of which measurement of Alice's to match with which measurement of Bob's, if that's what you're asking. The spin-entanglement is because the two photons were emitted simultaneously by a single atom...they don't necessarily have to be measured at the same time though (Alice and Bob could be different distances from the atom the photons were emitted, so the photons would take different times to reach them).
 
  • #161
heusdens said:
This assumes a continuous stream of photons?

What if we change the setup so that there is one photon per time unit?

I assume either, the outcomes then will be totally different, or in that case, the sync must match.

As you come to learn how the experiment is calibrated, it becomes easier to visualize. There are a number of sub-steps in the actual process that are usually skipped over for the sake of brevity and comprehension.

It is easy to know when the apparatus is properly calibrated: you get perfect correlations at identical angle settings. Keep that in mind. That way there is no possibility of a mistake.

1. A PDC outputs perhaps 2,000* entangled pairs per second. They occur at semi-random intervals. Each photon detected is recorded as being at a precise timestamp, along with whether it is a + or a -.

2. A window of 1/2000 would be too big, there might be 2 pairs in it (and that would be no good). So you reduce the time window to a much smaller amount: 100 ns or less. There are 10,000,000 such windows in one second. This allows you to be sure you are matching up the right pair. If you are not ending up with near 100% matches, you have more calibration to do.

3. Once you have a time calibrated system, you know how to match up the pairs - regardless of the polarizer settings at either end. They must be calibrated as well so that 0 degrees on one is comparable to 0 degrees on the other. The same principle is used for such calibration.

4. So now you have a time and angle calibrated system, and you are ready to perform your Bell test.

*There are *many* more photons coming from the laser pump (the input to the PDC). These are easily diverted because a PDC crystal (generously) sends the desired entangled pairs out along a slightly different path than the non-entangled photons. It is as if all the desired photons come out one door, and the unused remainder (the ones you don't want to see anyway) go out another. You know you have cut through all the technical issues in the end, because you always start with the perfect correlations.
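To make the time-window pairing in steps 1-3 a little more concrete, here is a toy Python sketch of coincidence counting by timestamp. The 100 ns window and the event data are illustrative only; real analysis software is considerably more involved.

```python
# Toy coincidence counting: each detector records (timestamp_ns, outcome); two detections
# count as a pair when their timestamps fall within the coincidence window.
WINDOW_NS = 100  # illustrative window width

alice_events = [(1_000, '+'), (501_200, '-'), (1_002_350, '+')]
bob_events   = [(1_030, '+'), (501_260, '-'), (1_002_410, '+'), (1_750_000, '-')]

def coincidences(a_events, b_events, window_ns):
    pairs, j = [], 0
    for t_a, r_a in a_events:
        while j < len(b_events) and b_events[j][0] < t_a - window_ns:
            j += 1                                   # skip Bob's events too early to match
        if j < len(b_events) and abs(b_events[j][0] - t_a) <= window_ns:
            pairs.append((r_a, b_events[j][1]))      # within the window: treat as a pair
            j += 1
    return pairs

print(coincidences(alice_events, bob_events, WINDOW_NS))
# [('+', '+'), ('-', '-'), ('+', '+')]  -- Bob's unmatched late event is simply discarded
```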
 
  • #162
wm said:
HUP says you can never get such confirmation on a single pristine particle -- it being no longer pristine after the first (even single quanta) interaction.

That is correct. One of the observations is always first. At that point the entangled wave function collapses, and they are two independent photons. Once you work through it, you will see that the observed results are identical regardless of which is actually measured first. And you NEVER learn anything more about one photon than the Heisenberg Uncertainty Principle (HUP) allows. This can be easily demonstrated.

As ZapperZ has said many times, there is nothing to prevent you from measuring non-commuting particle attributes to any level of precision you desire (apparently defeating the HUP). The problem is that you still don't violate the HUP: because you are NOT learning anything more about that particle. Each measurement changes the state of the particle. If you like, I can explain this point further.
 
  • #163
heusdens said:
And another remark. Suppose we emit like 100 discrete photons.
Does each get measured at both sides?

If you get 100 entangled pairs, you will have a very high rate of matches (seen at both detectors within the time window). The very small number that are not matched fit within the inefficiency of the overall apparatus.

I *strongly* suggest you do not attempt to challenge the actual experimental methodology until you *fully* understand it. It has some complexity to it, and you will need to read at least half a dozen experiments in detail to get a fair idea of what is actually going on. Suffice it to say that the theoretical and practical optics have been carefully reviewed by hundreds of the world's leading scientists. (This is not really the right thread to be working through such issues anyway. And if you want to critique the experiments, you should do your homework thoroughly first.)
 
  • #164
DrChinese said:
That is correct. One of the observations is always first. At that point the entangled wave function collapses, and they are two independent photons. Once you work through it, you will see that the observed results are identical regardless of which is actually measured first. And you NEVER learn anything more about one photon than the Heisenberg Uncertainty Principle (HUP) allows. This can be easily demonstrated.

As ZapperZ has said many times, there is nothing to prevent you from measuring non-commuting particle attributes to any level of precision you desire (apparently defeating the HUP). The problem is that you still don't violate the HUP: because you are NOT learning anything more about that particle. Each measurement changes the state of the particle. If you like, I can explain this point further.

Further explanation would be appreciated; for I'm still in disagreement. NOT violating HUP (that's for sure), but surely learning more about the other twinned particle? wm
 
  • #165
wm said:
Further explanation would be appreciated; for I'm still in disagreement. NOT violating HUP (that's for sure), but surely learning more about the other twinned particle? wm

Sure, I can explain.

We have Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120.

NOPE. That is demonstrably wrong. If it were true, then we could measure Bob at 0 degrees now and expect to get a + every time. But that is not what happens! We get many pluses, but enough minuses to convince us that we don't know anything about Bob at 0 degrees at all.

On the other hand, we can re-measure Bob at 120 degrees all day long. Each re-test will give exactly the same results, a - at 120 degrees! So obviously the physical act of having the photon move through a polarizer is not itself changing the photon - because the photon does not appear to be changing.

So we did not, in the end, learn anything more than is allowed about a single particle.

The HUP is the limiting factor. Most of the time, the influence of the HUP is ignored in discussion of Bell... but it shouldn't be. This is the exact point that EPR attempted to exploit originally.

So what actually happened in our example above? Below is the correct explanation.

We have entangled Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a + at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. This will occur 75% of the time when we have Bob as a + at 0 degrees. Malus' Law - cos^2 theta - governs this. It does not matter that we did not measure Bob at 0 degrees; Bob acts as if we did that measurement first anyway... because we measured Alice (Bob's entangled partner) at 0 degrees. Once that measurement is performed on Alice, Bob acts accordingly; but the pair are no longer entangled. You may not like the explanation, but that is in fact the mechanics of how to look at it - and predict the results.
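A toy collapse-style simulation along the lines DrChinese describes (the measure() function and its collapse rule are illustrative assumptions, not a claim about the real mechanism): Bob behaves as if already measured '+' at 0 degrees; a 120-degree test then gives '-' about 75% of the time, a re-test at 120 degrees always repeats the same answer, but a later 0-degree test is no longer certain to give '+'.

```python
import math
import random

rng = random.Random(2)

def measure(pol_deg, setting_deg):
    """Toy polariser: returns '+' with probability cos^2(setting - polarisation) (Malus' law),
    and the photon's polarisation collapses to the setting (on '+') or setting + 90 (on '-')."""
    p_plus = math.cos(math.radians(setting_deg - pol_deg)) ** 2
    if rng.random() < p_plus:
        return '+', setting_deg % 180
    return '-', (setting_deg + 90) % 180

first_at_120, later_at_0 = [], []
for _ in range(100_000):
    pol = 0.0                      # Bob acts as if he were '+' at 0 degrees (Alice measured '+' at 0)
    r1, pol = measure(pol, 120)    # first test of Bob, at 120 degrees
    r2, pol = measure(pol, 120)    # a re-test at 120 degrees repeats the same answer
    assert r1 == r2
    r3, _ = measure(pol, 0)        # a later 0-degree test is no longer guaranteed to be '+'
    first_at_120.append(r1)
    later_at_0.append(r3)

print(first_at_120.count('-') / len(first_at_120))   # ~0.75, as stated above
print(later_at_0.count('+') / len(later_at_0))       # well below 1.0 (mostly '+', but not always)
```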
 
  • #166
JesseM said:
I think the rules would allow you to post what you think is a classical example that violates a Bell inequality, provided you present it in a tentative way where you're asking for feedback on the example and willing to be shown that it doesn't really violate Bell's theorem, rather than presenting it as a definitive disproof of mainstream ideas (note that heusdens' 'three datastreams' idea was not edited or deleted by the moderators, for example).

I'd encourage you to write up your example and then post it here so we all can see it (presenting it in the tentative way I suggested), and then if it's deleted by the moderators you could always resend it to me via PM.

Jesse, thanks for this. I'd welcome critical, correctional and educational comments on the following first-DRAFT.

In response to recent posts: Is this a classical refutation of Bell's theorem?

1. Let's modify a typical Aspect/Bell test (using photons) in the following way, retaining no significant connection between Alice's detector (oriented a, orthogonal to the line-of-flight axis) and Bob's (oriented b, orthogonal to the line-of-flight axis). (a and b are unit vectors, freely chosen.)

2. We place the Aspect-style singlet-source in a box. The LH and RH sides of the box (facing Alice and Bob respectively) each contain a dichotomic-polariser, the principal axis of which is aligned with the principal axis (say) of the box. (We thus have a classical source of correlated photon-pairs in identical but unknown states of linear polarisation.)

3. Unbeknown to Alice, her dichotomic polariser-analyser (detector) is yoked to the box such that her freely-chosen detector-setting (principal axis at unit-vector a) becomes also the setting of the principal axis of the box.

4. Typically (and beyond their control), Alice's results vary +1 xor -1; Bob's results likewise. Our experimental setting thus satisfies the boundary conditions for Bellian-inequalities based on this +1 xor -1 relation (cf Peres, Quantum Theory 1995: 162).

5. From classical analysis, and in a fairly obvious notation, the following equations hold:

(1) P(AB = S|ab) = cos^2(a, b),
(2) P(AB = D|ab) = sin^2(a, b);

where P = Probability. S = Same result (+1, +1) or (-1, -1) for Alice and Bob; D = Different result (+1, -1) or (-1, +1). Thus P(BC = S|bc) would denote the probability of Alice and Bob getting the same (S) individual results (+1, +1) xor (-1, -1) under the respective test conditions b (Alice) and c (Bob); that is, the first given result is Alice's (here B); the second Bob's (here C); etc.

6. Now: The boundary conditions that we have satisfied yield (via a typical Bellian analysis, deriving a typical Bellian-inequality -- cf https://www.physicsforums.com/showpost.php?p=1215927&postcount=151 ):

(3) P(BC = S|bc) - P(AC = D|ac) - P(AB = S|ab) less than or equal to 0.

7. However: For the differential-direction set {(a, b) = 67.5°, (a, c) = 45°, (b, c) = 22.5°} we have from (1) and (2):

(4) LHS (3) = 0.85 - 0.5 - 0.15 = 0.2.

Comparing RHS (3) with (4), we conclude: The Bellian-inequality (3) is (in general) FALSE!
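(A quick numerical check of step 7, using only equations (1) and (2) above; the 0.85 / 0.5 / 0.15 figures are rounded values of these, and the helper names below are mine.)

```python
import math

def cos2(deg): return math.cos(math.radians(deg)) ** 2
def sin2(deg): return math.sin(math.radians(deg)) ** 2

# (a,b) = 67.5, (a,c) = 45, (b,c) = 22.5 degrees, as in step 7.
lhs = cos2(22.5) - sin2(45) - cos2(67.5)   # P(BC=S|bc) - P(AC=D|ac) - P(AB=S|ab)
print(round(lhs, 3))                       # ~0.207, i.e. the 0.2 quoted in (4)
```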

Any and all comments will be appreciated, wm
 
  • #167
It's obviously not a refutation of Bell's theorem. The question is if it's a classical violation of Bell's theorem. It's not, because one of the hypotheses (parameter independence) is violated: P(B=+|ab) is dependent on a.
 
  • #168
DrChinese said:
Sure, I can explain.

We have Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120.

NOPE. That is demonstrably wrong. If it were true, then we could measure Bob at 0 degrees now and expect to get a + every time. But that is not what happens! We get many pluses, but enough minuses to convince us that we don't know anything about Bob at 0 degrees at all.

On the other hand, we can re-measure Bob at 120 degrees all day long. Each re-test will give exactly the same results, a - at 120 degrees! So obviously the physical act of having the photon move through a polarizer is not itself changing the photon - because the photon does not appear to be changing.

So we did not, in the end, learn anything more than is allowed about a single particle.

The HUP is the limiting factor. Most of the time, the influence of the HUP is ignored in discussion of Bell... but it shouldn't be. This is the exact point that EPR attempted to exploit originally.

So what actually happened in our example above? Below is the correct explanation.

We have entangled Alice and Bob. We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a + at 0 degrees. We measure Bob at 120 degrees and see that Bob is a -. This will occur 75% of the time when we have Bob as a + at 0 degrees. Malus' Law - cos^2 theta - governs this. It does not matter that we did not measure Bob at 0 degrees; Bob acts as if we did that measurement first anyway... because we measured Alice (Bob's entangled partner) at 0 degrees. Once that measurement is performed on Alice, Bob acts accordingly; but the pair are no longer entangled. You may not like the explanation, but that is in fact the mechanics of how to look at it - and predict the results.

DocC; No, no, no; surely not? Why talk about classical objects like Alice and Bob; why not talk about a single twinned-pair of particles that they ''measure''? I truly believe that you are caught up in Bellian-realism:

FOR, using your terms, you say: We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees.

My realism allows me to deduce no such thing. Alice is a+ after a perturbing measurement interaction at 0 degrees. (Think of measuring how high she jumps in the 0 direction when hit on the toe with a sledge-hammer.) So I deduce that Bob will perturb to a+ when measured at 0 degrees.

You say: We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120. But Bob (heretofore pristine) has only been perturbed by the 120 degree measurement; he hasn't experienced the sledge-hammer ''measurement''.

So you are assigning perturbing measurement outcomes to pristine unperturbed objects. My realism does not allow me to do that. What am I missing? For it seems to me that you are hoist on your own use of HUP and the related perturbing effect of a single quantum; more impactful on a photon than a sledge-hammer on your toe? wm
 
  • #169
Hurkyl said:
It's obviously not a refutation of Bell's theorem. The question is if it's a classical violation of Bell's theorem. It's not, because one of the hypotheses (parameter independence) is violated: P(B=+|ab) is independent on a.

Do you mean ''dependent''?

But P(B=+|ab) = one-half for any a; which is independence (as it should be)?

That is: P(B=+|ab) = P(B=+|b) = one-half. Yes?
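For what it's worth, that one-half does check out in the yoked-source picture spelled out in #171 below: if Bob's photon arrives polarised along a or orthogonal to a with equal probability, then at setting b his + rate is 0.5 cos^2(a-b) + 0.5 sin^2(a-b) = 0.5 for every a. A short numerical confirmation (the fixed value of b is arbitrary, not from the original posts):

```python
import math

# Sketch: Bob's photon arrives polarised along a or orthogonal to a with equal
# probability; a dichotomic polariser at b then gives "+" with probability
# cos^2(a-b) or sin^2(a-b) respectively.
b_deg = 30.0                                  # arbitrary fixed setting for Bob
for a_deg in (0, 22.5, 45, 67.5, 120):
    d = math.radians(a_deg - b_deg)
    p_plus = 0.5 * math.cos(d) ** 2 + 0.5 * math.sin(d) ** 2
    print(a_deg, round(p_plus, 6))            # 0.5 every time, whatever a is
```

The joint statistics, by contrast, still track a through the source orientation, which seems to be the dependence #167 points to.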

Thanks, wm
 
  • #170
wm said:
Jesse, thanks for this. I'd welcome critical, correctional and educational comments on the following first-DRAFT.

In response to recent posts: Is this a classical refutation of Bell's theorem?

1. Let's modify a typical Aspect/Bell test (using photons) in the following way, retaining no significant connection between Alice's detector (oriented a, orthogonal to the line-of-flight axis) and Bob's (oriented b, orthogonal to the line-of-flight axis). (a and b are unit vectors, freely chosen.)

2. We place the Aspect-style singlet-source in a box. The LH and RH sides of the box (facing Alice and Bob respectively) each contain a dichotomic-polariser, the principal axis of which is aligned with the principal axis (say) of the box. (We thus have a classical source of correlated photon-pairs in identical but unknown states of linear polarisation.)
I'm afraid I don't know enough about optics to follow this--what is a "dichotomic-polariser", for example? Also, if you say this is a purely classical situation which doesn't depend on specifically quantum properties of entangled photons, would it be possible to restate it in terms of Alice and Bob having *simulated* detectors on computers, and when they type in what measurement they want to make, the computer chooses what results to display based on a classical signal from the source (a string of 1's and 0's encoding information) which tells the computer the relevant properties of each simulated photon?
wm said:
3. Unbeknown to Alice, her dichotomic polariser-analyser (detector) is yoked to the box such that her freely-chosen detector-setting (principal axis at unit-vector a) becomes also the setting of the principal axis of the box.
But if the photons leave the box before Alice makes her choice of detector setting, how is that possible without the effects of Alice's choice traveling backwards in time? And since as I said I'm not very familiar with optics, can you explain the significance of the "principal axis of the box"? Are you just talking about the box emitting ordinary classical polarized light in such a way that if a polarization filter is oriented parallel to the axis of the box 100% of the light gets through, and if it's oriented at 90 degrees relative to the axis of the box 0% gets through, as in the demo here? If so, then in terms of my simulation question above, could you just have the source sending signals which tell Alice's computer the angle that each simulated photon is polarized, and when Alice inputs the angle of her simulated polarization filter the computer calculates the probability the simulated photon gets through?
 
  • #171
JesseM said:
I'm afraid I don't know enough about optics to follow this--what is a "dichotomic-polariser", for example?

Given a set of random photons, incident on a dichotomic polariser: half will pass, polarised in line with the principal axis; and half will pass, polarised orthogonal to the principal axis.

Also, if you say this is a purely classical situation which doesn't depend on specifically quantum properties of entangled photons, would it be possible to restate it in terms of Alice and Bob having *simulated* detectors on computers, and when they type in what measurement they want to make, the computer chooses what results to display based on a classical signal from the source (a string of 1's and 0's encoding information) which tells the computer the relevant properties of each simulated photon?

Recall that Alice sets the orientation of the source. So you'd need that computer setting fed to the source. Then a twinned-pair of identical strings of randomised +1 and -1, each digit linked with the source setting, one string to each computer.

But if the photons leave the box before Alice makes her choice of detector setting, how is that possible without the effects of Alice's choice traveling backwards in time?

Good point; I should add something to the effect that: Alice's re-orientation time is short in relation to the detector dwell time. So the mismatch -- the number of ''prior-orientation'' photons in transit -- has little effect on the overall probability distribution.

And since as I said I'm not very familiar with optics, can you explain the significance of the "principal axis of the box"?

The principal axis of the box is a reference-axis linking the orientation of the polarisers in the box to Alice's detector orientation. (For Alice unknowingly orients the box's princ. axis.) INCIDENTALLY: That unknowing bit is to ensure that Alice and Bob think that they're working with a genuine unmodified singlet source; for such is the correlation.

Are you just talking about the box emitting ordinary classical polarized light in such a way that if a polarization filter is oriented parallel to the axis of the box 100% of the light gets through, and if it's oriented at 90 degrees relative to the axis of the box 0% gets through, as in the demo here?

Sort of. The box emits photons in pairs, each pair polarised in the direction of the principal axis xor orthogonal thereto. The detectors have dichotomic polarisers; the setting of these axes allows the normal Malus' Law distribution. BUT 100% of photons pass due to the dichotomicity!

If so, then in terms of my simulation question above, could you just have the source sending signals which tell Alice's computer the angle that each simulated photon is polarized, and when Alice inputs the angle of her simulated polarization filter the computer calculates the probability the simulated photon gets through?

The design is such that Alice detects 100% of the principal-axis photons as polarised on her princ. axis (+1); and 100% of the orthogonal photons as polarised on her orthogonal axis (-1). The overall distribution is random; approx. 50% of each.

Hope this helps, wm
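To put this in the simulated-detector terms asked about above, here is a minimal Monte Carlo sketch of the setup as described; everything here (names, the choice of settings, the trial count) is illustrative, and it assumes the source is yoked to Alice's setting a, emits pairs polarised along a or orthogonal to it with equal probability, Alice's dichotomic polariser reads those deterministically, and Bob's follows Malus' Law:

```python
import math
import random

def match_rate(a_deg, b_deg, n=200_000):
    """Monte Carlo sketch of the yoked-source setup (illustrative only).

    Alice's setting a also orients the source.  Each pair leaves polarised
    along a or orthogonal to a, 50/50.  Alice's dichotomic polariser at a
    therefore reads +1 / -1 deterministically; Bob's polariser at b sends
    each photon to its +1 channel with Malus-law probability.
    """
    same = 0
    for _ in range(n):
        ortho = random.random() < 0.5                 # orthogonal-to-a pair?
        alice = -1 if ortho else +1                   # deterministic at setting a
        photon_angle = a_deg + (90 if ortho else 0)   # pair's polarisation axis
        p_plus = math.cos(math.radians(photon_angle - b_deg)) ** 2
        bob = +1 if random.random() < p_plus else -1
        same += (alice == bob)
    return same / n

for a, b in [(0, 22.5), (0, 45), (0, 67.5)]:
    print(a, b, round(match_rate(a, b), 3),
          round(math.cos(math.radians(a - b)) ** 2, 3))  # compare to cos^2(a-b)
```

Run this way the match rate comes out close to cos^2(a - b), the same curve as the singlet statistics; in this sketch that agreement exists only because Alice's setting is fed to the source.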
 
  • #172
JesseM said:
Each photon has only a single entangled "twin" that it can be matched with, you don't have a choice of which measurement of Alice's to match with which measurement of Bob's, if that's what you're asking. The spin-entanglement is because the two photons were emitted simultaneously by a single atom...they don't necessarily have to be measured at the same time though (Alice and Bob could be different distances from the atom the photons were emitted, so the photons would take different times to reach them).

That is the answer I was looking for. So it assumes perfect synchronization for the experiment to work.
 
  • #173
heusdens said:
That is the answer I was looking for. So it assumes perfect synchronization for the experiment to work.

You have to know which measurement goes with which one, because the measurements have to be matched upon entangled pairs. However, there are many ways to know this. With 100% efficient detectors, you simply have to COUNT the events (the 33rd event at Alice will go with the 33rd event at Bob).
Or you can send off "tags" which contain the pair number to both Alice and Bob. There doesn't need to be any time relationship, however.
Alice can keep her photons in different boxes for years, and THEN do the measurement - this wouldn't alter any result (if, of course, Alice had a box in which to keep photons for such a long time...).
This kind of experiment has been done (on smaller scales) by sending one of the photons in a long optical fibre, which is rolled up in a box. Of course the delay was not a year!
 
  • #174
wm said:
Jesse, thanks for this. I'd welcome critical, correctional and educational comments on the following first-DRAFT.

In response to recent posts: Is this a classical refutation of Bell's theorem?

...

6. Now: The boundary conditions that we have satisfied yield (via a typical Bellian analysis, deriving a typical Bellian-inequality -- cf https://www.physicsforums.com/showpost.php?p=1215927&postcount=151 ):

(3) P(BC = S|bc) - P(AC = D|ac) - P(AB = S|ab) ≤ 0.

7. However: For the differential-direction set {(a, b) = 67.5°, (a, c) = 45°, (b, c) = 22.5°} we have from (1) and (2):

(4) LHS (3) = 0.85 - 0.5 - 0.15 = 0.2.

Comparing RHS (3) with (4), we conclude: The Bellian-inequality (3) is (in general) FALSE!

Any and all comments will be appreciated, wm

wm,

The math you have is exactly correct. The conclusion you draw from it is not.

Your (3) is a standard presentation of the Bell Inequality.

Your (4) is a standard presentation of the predictions of QM, as relates to (3).

This shows that QM is incompatible with the Bell Inequality. It does NOT show that Bell's Inequality is violated for classical situations.
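One way to see that last point, assuming the usual local-realistic reading in which each pair carries predetermined answers A, B, C (shared by both wings) for the three settings: a brute-force enumeration shows the pointwise bound behind (3), so no mixture of such pairs can make the left-hand side positive. A sketch:

```python
from itertools import product

# Sketch: each pair carries predetermined answers A, B, C (shared by both
# wings) for the three settings.  Pointwise, [B == C] <= [A != C] + [A == B],
# so averaging over any distribution of such triples gives inequality (3).
for A, B, C in product((+1, -1), repeat=3):
    assert int(B == C) <= int(A != C) + int(A == B)
print("Bound holds for all 8 predetermined-outcome assignments.")
```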
 
  • #175
wm said:
DocC; No, no, no; surely not? Why talk about classical objects like Alice and Bob; why not talk about a single twinned-pair of particles that they ''measure''? I truly believe that you are caught up in Bellian-realism:

FOR, using your terms, you say: We measure Alice at 0 degrees and she is a +. We deduce (because Alice and Bob are twins) that Bob is also a plus at 0 degrees.

My realism allows me to deduce no such thing. Alice is a + after a perturbing measurement interaction at 0 degrees. (Think of measuring how high she jumps in the 0 direction when hit on the toe with a sledge-hammer.) So I deduce that Bob will perturb to a + when measured at 0 degrees.

You say: We measure Bob at 120 degrees and see that Bob is a -. So now we know Bob is a + at 0 and a - at 120. But Bob (heretofore pristine) has only been perturbed by the 120 degree measurement; he hasn't experienced the sledge-hammer ''measurement''.

So you are assigning perturbing measurement outcomes to pristine unperturbed objects. My realism does not allow me to do that. What am I missing? For it seems to me that you are hoist on your own use of HUP and the related perturbing effect of a single quantum; more impactful on a photon than a sledge-hammer on your toe? wm

Perhaps I was not clear. Alice and Bob are intended to be synonymous with the entangled photons and their measurement. So I am providing a description of quantum objects. Yes, I totally agree that these are only "pristine" (to use your term) once. So no disagreement there.

But you say that Bob is not affected when the "sledgehammer" is applied to Alice. Oh, but that is NOT true at all! Bob absolutely acts as if he was given the same sledgehammer as Alice! That is the non-local collapse of the wave function, and it is definitely and demonstrably instantaneous. If this did not happen, we wouldn't have anything interesting to discuss in this thread. :smile:
 
