Simple proof of Bell's theorem

In summary, the conversation discusses the Bell inequality and its implications for the locality assumption in physics. The SPOT detectors are used to demonstrate that there is a limit on how different the outputs of two detectors can be if they receive the same inputs and if the number of bits flipped depends only on the angle of the detector. The experiment shows a mismatch 25 percentage points greater than any local theory allows, contradicting the prediction of a local theory. The conversation also touches on the concept of entanglement and how it affects the correlation between measurements.
  • #71
Zafa Pi said:
Thank you for replying, and hopefully you can clear some things up for me.

That may well be, but what I am trying to figure out is what it means to say "give up locality". A simple and common meaning of locality is no FTL influence or communication.
1) So does "giving up locality" mean that FTL communication is possible, like my quikfone in post #40?
2) If not, why? (how does it conflict with nature?)
3) If so, does that not provide a non-local realistic way to replicate the correlations in any of the Bell examples (including the GHZ example, see post #40)?

You may find this hard to accept, but not all theories featuring non-local elements are the same. Just saying a theory is non-local does not even come close to explaining Bell Inequality or Leggett Inequality violations. They might, but it really depends on the nature of the non-locality, don't you think? Perhaps there is signalling from Alice to Bob, but Bob does nothing on getting the signal. Or maybe Bob sometimes acts but not always. Maybe sometimes he listens to Chris and Dale instead of Alice. Who's to say? Ultimately you do when constructing such a theory, but until you do and present it, we can't really address it. The point is: what are the parameters of your theory, and is it realistic in the sense of the referenced paper?

Ones that follow the parameters described in the referenced paper are excluded by experiment. Others that are also non-local are not.
 
  • #72
stevendaryl said:
There is no real distinction between one variable versus two or 100. You can always lump them all together into a single variable. I don't see how it would make any difference.
If one of the variables is pre-existing and the other is random until measured, how can they be lumped together?
 
  • Like
Likes edguy99
  • #73
Jilang said:
If one of the variables is pre-existing and the other is random until measured, how can they be lumped together?

The whole idea of hidden variables is to explain EPR in terms of purely local interactions. So if it's purely local, then Alice's result depends on [itex]\lambda[/itex], which is state information that she shares with Bob, [itex]\alpha[/itex], which is Alice's choice of settings, and possibly [itex]\lambda_A[/itex], which is some other variable local to Alice (maybe it describes the details of Alice's detector). Similarly, Bob's result depends on [itex]\lambda[/itex], [itex]\beta[/itex], which is Bob's choice of settings, and possibly [itex]\lambda_B[/itex], some details about Bob's local situation.

So what I think you're suggesting is that [itex]\lambda_A[/itex] and [itex]\lambda_B[/itex] might be random, determined at the moment that Alice and Bob, respectively, perform their measurements. I'm pretty sure that can't possibly make any difference, unless you somehow say that the settings of [itex]\lambda_A[/itex] and [itex]\lambda_B[/itex] are correlated, which would just push the problem back to how their correlations are enforced.

In any case, the perfect anti-correlations of EPR imply that [itex]\lambda_A[/itex] and [itex]\lambda_B[/itex] can have no effect in a local model.

Let [itex]P_A(A | \alpha, \lambda, \lambda_A)[/itex] be the probability that Alice gets measurement result [itex]A[/itex] ([itex]\pm 1[/itex]) given shared hidden variable [itex]\lambda[/itex], setting [itex]\alpha[/itex] and local random variable [itex]\lambda_A[/itex]. Similarly, let [itex]P_B(B | \beta, \lambda, \lambda_B)[/itex] be the probability that Bob gets [itex]B[/itex] given his setting [itex]\beta[/itex], the value of the shared hidden variable, [itex]\lambda[/itex] and his local random variable [itex]\lambda_B[/itex].

The anti-correlated EPR probabilities tell us that if [itex]\alpha = \beta[/itex], then there is no possibility of Alice and Bob getting the same result. What that means is that for all possible values of [itex]\lambda[/itex], the product

[itex]P_A(A | \alpha, \lambda, \lambda_A) P_B(A | \alpha, \lambda, \lambda_B) = 0[/itex]

This means that if [itex]P_A(A | \alpha, \lambda, \lambda_A) \neq 0[/itex], then [itex]P_B(A |\alpha, \lambda, \lambda_B) = 0[/itex]. Since there are only two possible results for Bob, if one of the results has probability 0, then the other ([itex]-A[/itex]) has probability 1. So we conclude:

If [itex]P_A(A | \alpha, \lambda, \lambda_A) \neq 0[/itex], then [itex]P_B(-A |\alpha, \lambda, \lambda_B) = 1[/itex].

But it's also true that [itex]P_A(-A | \alpha, \lambda, \lambda_A) P_B(-A | \alpha, \lambda, \lambda_B) = 0[/itex], so if [itex]P_B(-A | \alpha, \lambda, \lambda_B) = 1[/itex], then [itex]P_A(-A | \alpha, \lambda, \lambda_A) = 0[/itex] and so [itex]P_A(+A | \alpha, \lambda, \lambda_A) = 1[/itex]. So we have:

If [itex]P_A(A |\alpha, \lambda, \lambda_A) \neq 0[/itex] then [itex]P_A(A | \alpha, \lambda, \lambda_A) = 1[/itex]

Similarly,

If [itex]P_B(B |\alpha, \lambda, \lambda_B) \neq 0[/itex] then [itex]P_B(B | \alpha, \lambda, \lambda_B) = 1[/itex]

That means that the probabilities for Alice's possible results are either 0 or 1, and similarly for Bob. That means that Alice's result is actually a deterministic function of the parameters [itex]\alpha, \lambda, \lambda_A[/itex], and similarly, Bob's result is a deterministic function of [itex]\beta, \lambda, \lambda_B[/itex]. So there are two functions, [itex]F_A(\alpha, \lambda, \lambda_A)[/itex], which returns +1 or -1, giving the result of Alice's measurement, and a second function, [itex]F_B(\beta, \lambda, \lambda_B)[/itex], giving the result of Bob's measurement.

Now, again, perfect anti-correlation means that if [itex]\alpha = \beta[/itex], then [itex]F_A(\alpha, \lambda, \lambda_A) = - F_B(\alpha, \lambda, \lambda_B)[/itex]. That has to be true for all values of [itex]\lambda_A[/itex] and [itex]\lambda_B[/itex]. That means that [itex]F_A(\alpha, \lambda, \lambda_A)[/itex] doesn't actually depend on [itex]\lambda_A[/itex], and similarly [itex]F_B(\beta, \lambda, \lambda_B)[/itex] doesn't actually depend on [itex]\lambda_B[/itex]. So extra hidden variables, if they are local and uncorrelated, have to be irrelevant.
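One way to see where this determinism leads: for any fixed ##\lambda##, a local deterministic model just assigns an outcome ##\pm 1## to each setting, and a brute-force enumeration shows that no such assignment can push a CHSH-type combination of correlations past 2. Averaging over ##\lambda## is a convex mixture, so it cannot exceed the bound either. A quick sketch (the variable names are mine, not from any post in this thread):

```python
from itertools import product

# For a fixed hidden variable lambda, a local deterministic model assigns
# each of Alice's two settings an outcome +1 or -1, and likewise for Bob.
# Enumerate every such assignment and compute the CHSH combination
#   S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
# where each E is just the product of the two deterministic outcomes.
outcomes = [+1, -1]
max_S = 0
for A1, A2, B1, B2 in product(outcomes, repeat=4):
    S = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
    max_S = max(max_S, abs(S))

print(max_S)  # 2 -- no deterministic local assignment exceeds the bound
```

Writing ##S = A_1(B_1 - B_2) + A_2(B_1 + B_2)## makes the same point by hand: one of the parentheses is always 0 and the other ##\pm 2##.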
 
  • Like
Likes zonde
  • #74
I quite agree. However it is not the perfect anti-correlation that causes the issues, is it?
 
  • #75
Jilang said:
I quite agree. However it is not the perfect anti-correlation that causes the issues, is it?

Well, you don't have to have perfect anticorrelations in order to violate Bell's inequality. I'm just saying that in the case of perfect anticorrelations, you may as well assume that the output is a deterministic function of the setting and the hidden variable.
 
  • #76
jeremyfiennes said:
Quantum variables, where there is only a probability of getting a given result, are therefore non-realistic.
If my interpretation is correct, quantum properties that affect the probability distribution of outcomes are undefined before the outcomes occur. If you could flip a quantum coin, while it is spinning through the air both faces would be blank; neither heads nor tails.
 
  • #77
jeremyfiennes said:
The thread I wanted to post my question on got closed. Recapitulating:

The best (simplest) account I have found to date for the Bell inequality (SPOT stands for Single Photon Orientation Tester):
Imagine that each random sequence that comes out of the SPOT detectors is a coded message. When both SPOT detectors are aligned, these messages are exactly the same. When the detectors are misaligned, "errors" are generated and the sequences contain a certain number of mismatches. How these "errors" might be generated is the gist of this proof.

Step One: Start by aligning both SPOT detectors. No errors are observed.

Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

Step Three: Return the A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.

Step Four: Return the B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees so that the combined angle between them is 60 degrees.

What is now the expected mismatch between the two binary code sequences? We assume, following John Bell's lead, that REALITY IS LOCAL. Assuming a local reality means that, for each A photon, whatever hidden mechanism determines the output of Miss A's SPOT detector, the operation of that mechanism cannot depend on the setting of Mr B's distant detector. In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone. And the same goes for Mr B. The locality assumption means that any changes that appear in coded sequence B when Mr B rotates his SPOT detector are caused only by his actions and have nothing to do with how Miss A decided to rotate her SPOT detector. So with this restriction in place (the assumption that reality is local), let's calculate the expected mismatch at 60 degrees.

Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%. In fact the mismatch should be less than 50%, because if the two errors happen to occur on the same photon, a mismatch is converted to a match. Thus, simple arithmetic and the assumption that Reality is Local leads one to confidently predict that the code mismatch at 60 degrees must be less than 50%. However, both theory and experiment show that the mismatch at 60 degrees is 75%. The code mismatch is 25% greater than can be accounted for by independent error generation in each detector. Therefore the locality assumption is false. Reality must be non-local.


Great. Finally an explanation of Bell's theorem that even I can understand! My question relates to the following part: "Imagine that each random sequence that comes out of the SPOT detectors is a coded message. When both SPOT detectors are aligned, these messages are exactly the same. When the detectors are misaligned, "errors" are generated and the sequences contain a certain number of mismatches." A "mismatch" however would be a mismatch with respect to the code emitted by the other detector, implying a communication between the two. Does not this violate their independence?
Thanks.
Where did you get the statement (e.g. in your Step One) that tilting one detector so as to reach 25% errors occurs at 30 degrees?
 
  • #78
ljagerman said:
Where did you get the statement (e.g. in your Step One) that tilting one detector so as to reach 25% errors occurs at 30 degrees?
In this toy model it's arbitrary what the mismatch at any angle is. The point being made is that the mismatch when both detectors are tilted should not exceed twice the mismatch when one detector is tilted, no matter what it is.
 
  • #79
ljagerman said:
Where did you get the statement (e.g. in your Step One) that tilting one detector so as to reach 25% errors occurs at 30 degrees?
This toy model parallels the entangled-photon Bell experiment, so this is the prediction of QM for entangled photons. The mismatch rate changes as ##\sin^2(\alpha-\beta)##.
For more details you can take a look at this paper: https://arxiv.org/abs/quant-ph/0205171
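As a sanity check on those numbers, here is a quick sketch (the function name is mine, not from the paper) evaluating the ##\sin^2(\alpha-\beta)## rule at the angles used in the toy proof:

```python
import math

# Mismatch rate predicted by QM for entangled photons: sin^2 of the
# relative angle between the detectors (angles given here in degrees).
def qm_mismatch(angle_deg):
    return math.sin(math.radians(angle_deg)) ** 2

single_tilt = qm_mismatch(30)   # one detector tilted alone: 0.25
local_bound = 2 * single_tilt   # a local model allows at most 0.50 at 60 deg
both_tilted = qm_mismatch(60)   # QM prediction at 60 degrees: 0.75

print(single_tilt, local_bound, both_tilted)
```

The 0.75 mismatch at 60 degrees is exactly the value that overshoots the local bound of 0.50 in the toy proof.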
 
  • #80
DrChinese said:
You may find this hard to accept, but all theories featuring non-local elements are not the same. Just saying it is non-local does not even come close to explaining Bell Inequality or Leggett Inequality violations. They might, but it really depends on the nature of the non-locality, don't you think?
Indeed I do. I believe the type of non-locality of BM (which does not allow my quikfone #40) is different than Leggett's.
DrChinese said:
The point is: what are the parameters of your theory, and is it realistic in the sense of the referenced paper?
I think(?) I see what you mean. The existence of my quikfone doesn't provide a consistent theory to duplicate quantum correlations.
If Alice and Bob know the entangled state of the photons and know what settings to employ then they can use the quikfone to mimic the quantum correlations. However, they in general do not have that info and thus can not in general conspire to get the appropriate correlations.

Nevertheless, given any Bell type theorem whose conclusion can be violated by QM, then I can produce a single algorithm, using the quikfone, that will violate all such theorems. It will not in general agree with a QM violation and will likely fall short of a consistent theory in other ways. It is realistic.
But in the case of GHZ the algorithm provides the same violation as QM (there is no leeway).
 
Last edited:
  • #81
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
 
  • #82
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
When you say "this case", do you mean the toy model that you were using at the start of this thread, or do you mean a real Bell-type experiment done with pairs of entangled particles?
 
  • #83
Neither, but rather "the predictions of Quantum Mechanics" that the experimental results support.
 
  • #84
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
The predictions aren't "premises/assumptions", they are predicted experimental results. Quantum mechanics predicts that Bell's inequality (or related inequalities such as CHSH) will be violated under some conditions. All local hidden variable theories predict that these inequalities will not be violated.

Experiments show that the inequalities are violated, so they support the predictions of quantum mechanics. Give me a moment and I'll dig up a specific example.

Edit: here's an example: https://arxiv.org/pdf/1508.05949v1.pdf
Look at equation #1; quantum mechanics predicts that ##S## will take on values as large as ##2\sqrt{2}## while all local hidden variable theories predict that ##S## will never exceed 2.
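The ##2\sqrt{2}## value can be reproduced directly from the QM correlation function for polarization-entangled photons, ##E(a,b)=\cos 2(a-b)##, at the standard CHSH settings. A short sketch (the variable names and the specific settings are my choice; the settings are the usual maximizing ones):

```python
import math

# Correlation for polarization-entangled photons as a function of the
# relative angle between the two analyzers: E(a, b) = cos(2*(a - b)).
def E(a, b):
    return math.cos(2 * (a - b))

# Standard CHSH settings (in radians) that maximize the quantum value.
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # about 2.828, i.e. 2*sqrt(2), above the local-realist bound of 2
```

Each of the four correlations contributes ##\pm\frac{\sqrt{2}}{2}## with the signs aligning, which is how the sum reaches ##2\sqrt{2}##.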
 
Last edited:
  • #85
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?
The minimal formalism of Quantum Mechanics gives only statistical predictions. I would say that QM is a sophisticated phenomenological theory rather than a fundamental theory.
Any LHV theory, on the other hand, is supposed to be a fundamental theory.
 
  • #86
jeremyfiennes said:
"No physical theory of Local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics". In what essential way do the premisses/assumptions of Quantum Mechanics differ from those of Local Hidden Variable theories, in this case?

Here's the way I would put it: According to a local realistic theory, the outcome of any measurement depends only on conditions local to that measurement. So if Alice is measuring the spin of one particle, and far away, Bob is measuring the spin of another particle, then Alice's result depends only on facts about her particle (and measurement equipment, and maybe other things near Alice), and Bob's result depends only on facts about his particle (and measurement equipment, etc.). Alice's result does not depend on anything happening at Bob's location, and vice-versa.

"Depend" here is not in the sense of causality, but in the sense of prediction. In a locally realistic theory, knowing something about Bob shouldn't allow you to predict anything about Alice that isn't already captured by the local situation at Alice. A violation of local realism would be if knowing something about Bob allowed you to predict Alice's measurement result, but this information cannot be deduced from anything local to Alice. EPR violates local realism, because knowing Bob's result for a spin measurement of his particle allows you to predict Alice's result, and there is nothing in the region near Alice that would allow you to make this prediction.
 
  • #87
Thanks all three. I am with you on experimental violations. My present query is however theoretical. Bell formulated his theorem before it was possible to test it experimentally. The premisses/assumptions of LHV theories predict straight-line limits to the coincidence/angle relation, the inequalities. Whereas QM predicts an S-curve that violates those limits. This has now been confirmed by experiment. My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that leads to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?
 
  • #88
jeremyfiennes said:
My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that lead to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?
Start with equation #1 in Bell's original paper: http://www.drchinese.com/David/Bell_Compact.pdf
This equation is stating an assumption that is common to all LHV theories: that the result at either detector depends on properties of the particle arriving at that detector (which may be correlated with various properties of the particle arriving at the other detector, because the two particles share a common origin) and the way that detector has been set up, but not on the way the other detector has been set up. Bell starts with that assumption and ends up with his inequality; and because all LHV theories share that assumption then all LHV theories must obey the inequality.

However, quantum mechanics predicts that the probability of getting a coincidence between the two detectors is a function of the angle between the two detectors: for example, in the photon polarization experiments the probability that both photons will pass (or both will be blocked) is ##\cos^2\theta##, where ##\theta## is the angle between the two detectors. Note that the state of both detectors goes into this calculation, so quantum mechanics is not making the assumption in equation #1. Furthermore, for some values of ##\theta## the ##\cos^2\theta## correlation violates the inequality; so not only does QM not require the #1 assumption, but also the #1 assumption cannot be consistent with QM.

So the key distinction between QM and the LHV theories that are precluded by Bell's theorem is the #1 assumption.
 
Last edited:
  • #89
jeremyfiennes said:
The premisses/ assumptions of LHV theories predict straight-line limits to the coincidence/angle relation, the inequalities. Whereas QM predicts an S-curve that violates those limits. This has now been confirmed by experiment. My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that lead to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?

This is one of many confusing points about Bell's theorem. He mentions in his discussion of the EPR experiment that one would expect a linear relationship between distant measurements in the case of a classical model, while QM predicts a nonlinear relationship. However,
  1. He doesn't (as far as I know) explain why the relationship should always be linear in the classical case.
  2. He doesn't actually use the linearity in the proof of his inequality, anyway.
Fact #2 means that you can just forget about linearity and you don't really miss anything. But his remark about linearity is a little mysterious.

You can prove linearity for a very specific toy hidden-variables model. The toy model is this:
  • Associated with each twin-pair of anti-correlated spin-1/2 particles is a unit vector [itex]\vec{\lambda}[/itex].
  • The value of [itex]\vec{\lambda}[/itex] is a random unit vector, with equal probability density for pointing in any direction.
  • When Alice measures the component of the spin of one particle along axis [itex]\vec{\alpha}[/itex], she gets [itex]+\frac{\hbar}{2}[/itex] if [itex]\vec{\lambda} \cdot \vec{\alpha} > 0[/itex], and she gets [itex]-\frac{\hbar}{2}[/itex] if [itex]\vec{\lambda} \cdot \vec{\alpha} < 0[/itex].
  • Bob measuring the component of spin of the other particle along axis [itex]\vec{\beta}[/itex] gets the opposite of Alice: [itex]-\frac{\hbar}{2}[/itex] if [itex]\vec{\lambda} \cdot \vec{\beta} > 0[/itex], and [itex]+\frac{\hbar}{2}[/itex] if [itex]\vec{\lambda} \cdot \vec{\beta} < 0[/itex]
You can prove, under these assumptions, that if the angle between Alice's axis, [itex]\vec{\alpha}[/itex], and Bob's axis, [itex]\vec{\beta}[/itex], is [itex]\theta[/itex], then the strength of the anti-correlation decreases linearly with [itex]|\theta|[/itex] from a maximum at [itex]\theta = 0[/itex].
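A quick Monte Carlo run of the toy model checks this linearity claim; the sketch below is mine (names and structure are not from the post), and it measures the correlation in units of [itex](\hbar/2)^2[/itex]:

```python
import math
import random

random.seed(0)  # reproducible runs

def random_unit_vector():
    # Isotropic direction: normalize a 3D Gaussian sample.
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    return (x / r, y / r, z / r)

def correlation(theta, n=50_000):
    """Monte Carlo estimate of E[A*B] (in units of (hbar/2)^2) when
    Alice's and Bob's measurement axes are separated by angle theta."""
    ax, ay = 1.0, 0.0                           # Alice's axis
    bx, by = math.cos(theta), math.sin(theta)   # Bob's axis
    total = 0
    for _ in range(n):
        lx, ly, lz = random_unit_vector()       # shared hidden variable
        A = 1 if lx * ax + ly * ay > 0 else -1
        B = -1 if lx * bx + ly * by > 0 else 1  # Bob is anti-correlated
        total += A * B
    return total / n

# Toy model prediction: E(theta) = -(1 - 2*theta/pi), linear in theta.
for theta in (0.0, math.pi / 3, math.pi / 2, math.pi):
    print(theta, correlation(theta))
```

For comparison, QM predicts [itex]-\cos\theta[/itex], which agrees with the straight line only at [itex]0[/itex], [itex]\pi/2[/itex] and [itex]\pi[/itex] and is more strongly anti-correlated in between (e.g. -1/2 versus -1/3 at [itex]\theta = \pi/3[/itex]).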

I think that the linearity is more general, though. But I don't know the mathematical argument.
 
  • #90
jeremyfiennes said:
My question is: what is the essential difference between the premisses/assumptions of LHV theories and those of QM, that lead to the latter predicting an S-curve rather than a straight-line coincidence/angle relation?

Your question is actually backwards. Your question should be: what is the essential difference between the premises/assumptions of LHV theories and those of QM, that lead to the former predicting a straight-line rather than an S-curve coincidence/angle relation?

QM predicts the "S-curve" relationship due to specific theoretical considerations (which I will not go into). There is no specific LHV theory which predicts the straight-line relation, because it has been known for over 200 years that it is incorrect as compared to observation (Malus, ca. 1809). The reason the straight-line relation is even brought up is that it would MINIMIZE the delta to the QM prediction (and observation), and it gives the same answers at certain key settings. And it would in fact satisfy Bell. It is the simplest too. But it bears no connection to reality and would not even be discussed except in relation to Bell.
 
Last edited:
  • Like
Likes zonde and RockyMarciano
  • #91
DrChinese said:
Your question is actually backwards. Your question should be: what is the essential difference between the premises/assumptions of LHV theories and those of QM, that lead to the former predicting a straight-line rather than an S-curve coincidence/angle relation?

QM predicts the "S-curve" relationship due to specific theoretical considerations (which I will not go into). There is no specific LHV theory which predicts the straight-line relation, because it has been known for over 200 years that it is incorrect as compared to observation (Malus, ca. 1809).

I think that's a little bit misleading. Malus' equation is about sequential operations on a single beam of light---send it through a polarizing filter at this orientation, then send it through a filter at that orientation. But Bell's remarks about linear relationships are about correlations between distant measurements. It happens to be true that for the twin-photon EPR experiment, the correlations between measurements performed on correlated pairs of photons are described by Malus' equation as well, but that prediction was certainly not made 200 years ago. They didn't know how to produce entangled photon pairs 200 years ago, did they?
 
  • Like
Likes zonde
  • #92
I found an en.wikipedia quote that sums up nicely my doubt:
"All Bell inequalities describe experiments in which the predicted result assuming entanglement differs from that following from local realism."
What exactly does "assuming entanglement" here involve, in everyday terms?
 
  • #93
jeremyfiennes said:
I found an en.wikipedia quote that sums up nicely my doubt:
"All Bell inequalities describe experiments in which the predicted result assuming entanglement differs from that following from local realism."
What exactly does "assuming entanglement" here involve, in everyday terms?
To be sure, you'd have to ask the author of that quote (although it appears elsewhere on the internet, so there is some possibility that whoever added it to wikipedia was copying and pasting without complete understanding).

However, it seems likely that they're trying to say that the situations in which the quantum mechanical predictions will be different from the predictions of a theory that agrees with equation #1 in Bell's paper (which is to say, any LHV theory) will be the situations that involve entanglement. Thus, any experiment that will go one way if QM is right and another way if there is a valid LHV theory will involve entanglement.
 
  • #94
jeremyfiennes said:
What exactly does "assuming entanglement" here involve, in everyday terms?

I would adopt Nugatory's interpretation here with the proviso that, strictly speaking, if one is only interested in violating the inequality then entanglement is not actually necessary.

That seems like it runs counter to accepted wisdom, but I believe it's important to understand because it highlights the essential features of QM from which the possibility of violation emerges.

If we look at the maths of Bell's proof there's a very critical step which is the locality assumption. In the maths it's the bit where ##P(A| \alpha , \beta , \lambda )## gets written as ##P(A| \alpha , \lambda )##. Here ##\alpha## and ##\beta## are the settings of the detectors, ##A## is the result at Alice's detector and ##\lambda## stands for the hidden variables. So we're making the assumption that the probability of getting a certain result at Alice, conditioned upon the device settings and the hidden variables, does not depend upon the setting of the remote device.

There's no requirement that the devices of Alice and Bob are spacelike separated - it's irrelevant for the proof of the inequality. The ansatz that probabilities of results 'here' are not affected by settings 'there', the locality assumption, is assumed to hold whether or not the devices are spacelike separated.

Now it's possible that there is some unknown, and strange, mechanism that allows the device 'here' to know about the settings 'there' - some unknown field that carries the information about remote settings whatever experiment we set up and for whatever measurement device. In this case we couldn't make our ansatz because the existence of something like this field would affect the probabilities.

The importance of the spacelike separation step is to force any information about remote settings to have to be transmitted FTL. Now it becomes a very big deal. Before this step we could, conceivably, have some hitherto unknown weird and wonderful physics going on that allows the probabilities to be affected. With this spacelike separation step this hypothesized new physics would have to violate the principles of relativity.

So what about entanglement? Well if we ditch the requirement for spacelike separated measurements then it's possible to observe a Bell inequality violation with single, non-entangled, particles. The violation occurs in this instance between the preparation statistics of Alice and the measurement statistics of Bob. I won't go into the details but suffice it to say that it's possible. What this is telling us is that the violation of the mathematical inequality is not dependent on the devices being spacelike separated (which we already knew from the maths anyway). Furthermore, it's telling us that in this case we can obtain violations even without entangled particles. So something about QM allows this violation even without considerations of entanglement.

The spacelike separation - a very critical step if you want to rule out local hidden variable theories - is the icing on the cake - but it's not the essential reason why we see a violation of the math inequality. Nor is entanglement, per se.

If you want to see violation for spacelike separated measurements, then you need entanglement.
 
  • Like
Likes Nugatory
  • #95
stevendaryl said:
I think that's a little bit misleading. Malus' equation is about sequential operations on a single beam of light---send it through a polarizing filter at this orientation, then send it through a filter at that orientation. But Bell's remarks about linear relationships are about correlations between distant measurements. It happens to be true that for the twin-photon EPR experiment, the correlations between measurements performed on correlated pairs of photons are described by Malus' equation as well, but that prediction was certainly not made 200 years ago. They didn't know how to produce entangled photon pairs 200 years ago, did they?

Yes, all true. But what I said was not misleading, as there was never a point in time (certainly after 1809) at which the polarization we are talking about was considered "straight-line". The starting point for entanglement (I think electron entanglement was first) was always a cos function of some type. So it was probably only since the 1940s, perhaps, that this was specifically considered.
 
  • #96
Thanks all. Thinking-cap time needed.
 