#71 Lynch101 (Gold Member)
Thank you PD. Although I don't quite follow your reasoning, these are the kinds of insights I was hoping to get, to help me build a better understanding.
PeterDonis said:
Yes, and doing so will change the predicted average outcome. It has to: that's just math.

I understand that, which is why I was playing around with the weighting to see if it would change the predicted average outcome to match the predictions of quantum theory.
PeterDonis said:
What changing the weighting won't do is give you an average that violates the Bell inequalities or their equivalents; that's impossible on any weighting. In the original table that @DrChinese was commenting on in post #36, every line in the table would contribute at least 0.333 to the average, so it is mathematically impossible to get an average less than 0.333 no matter how you weight the lines. And 0.333 does not violate the relevant inequality for that case. But the QM prediction for that case is 0.25, which does violate the inequality.

I can see that every line in DrC's table would contribute at least 0.333. However, I was under the (incorrect?) impression that DrC's table represents the case of a single photon passing through consecutive filters, while (some) tests of Bell's Inequality involve photon pairs, each passing through a single misaligned filter, such that:
when the detectors at both sides are set at the same angle we get the opposite results (+ at one detector, - at the other) every single time, probability 100%, no exceptions.

This led me to attempt to make a table representing that, by reversing the signs.
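As a concrete check on the 0.333 floor described in the quote above, here is a minimal sketch of that arithmetic. It assumes the usual presentation of the argument: three settings (say 0°, 120° and 240°) and eight lines of predetermined +/- outcomes, one outcome per setting; nothing here is taken from the actual tables in the thread.

```python
# Sketch (illustration only): why each line of the original table contributes
# at least 0.333 to the "same result at two different settings" average.
from itertools import product, combinations

lines = list(product("+-", repeat=3))       # 8 lines of predetermined outcomes at settings A, B, C
pairs = list(combinations(range(3), 2))     # the 3 ways to choose two different settings

for line in lines:
    # fraction of the different-setting pairs on which the two outcomes agree
    same = sum(line[i] == line[j] for i, j in pairs) / len(pairs)
    print(line, f"contributes {same:.3f}")

# Mixed lines contribute 0.333 and the all-+ / all-- lines contribute 1.000, so any
# weighted average over the lines is at least 1/3, whereas the QM prediction quoted
# above for this case is 0.25.
```

The floor is set line by line, not by the weights, which is why no re-weighting of that particular table can reach 0.25.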
If the photon pairings from cases [1] and [8] are possible pairings, then those lines would contribute less than 0.333.
I'm not saying they are possible. I was just filling in values which would satisfy the criterion that
when the detectors at both sides are set at the same angle we get the opposite results (+ at one detector, - at the other) every single time, probability 100%, no exceptions.
I was hoping that, if they're not actually possible, then someone might point that out and I would thereby get a better understanding, in much the same way you were able to see at a glance that I was inadvertently invoking superdeterminism in a previous post.
PeterDonis said:
In your tables in posts #62 and #63, you've reversed the signs, so to speak, from the original table that @DrChinese was commenting on, so 0.25 or less does not violate the relevant inequality; so the fact that you can get averages of 0.25 or less by changing the weightings, while true, is pointless. You need to compute the correct inequality for that case and then look at what range of averages you are able to obtain by varying the weightings in your table; then you will see that no matter how you vary the weightings, you cannot get an average that violates the correct inequality for that case. And if you compute the correct QM prediction for that case, you will see that it does violate the inequality.

Am I incorrect in my understanding that the inequality tells us what the (minimum) predicted average outcome would be?
Can this not also be read from a table representing the sample space* for statistically independent events?

*I'm using "sample space" here because the table represents a finite set, not the abstract infinite ensemble.
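Following the suggestion in the quote above, here is a similar sketch for the reversed-sign case. The table structure is an assumption on my part: each line gives one photon a predetermined +/- outcome at each of three settings and gives its partner the opposite outcome, so same-angle measurements always disagree. The point is only to see what range of averages different weightings can produce.

```python
# Sketch (illustration only, assumed table structure): the range of averages obtainable
# by re-weighting the lines of a reversed-sign (perfectly anti-correlated) table.
from itertools import product, combinations

lines = list(product([+1, -1], repeat=3))   # photon 1's outcomes at settings A, B, C; photon 2 gets the opposite
pairs = list(combinations(range(3), 2))     # the 3 different-setting pairs

for line in lines:
    # photon 1 gives line[i], photon 2 gives -line[j]; the results are the SAME iff line[i] != line[j]
    same = sum(line[i] != line[j] for i, j in pairs) / len(pairs)
    print(line, f"contributes {same:.3f}")

# Every line contributes either 0.000 or 0.667 to the same-result average at different
# settings, so no choice of weights can push that average above 2/3.
```

If the pairs are taken to be in the anti-correlated polarization Bell state (again an assumption about the intended setup), the QM probability of same results at settings 120° apart works out to sin²(120°) = 0.75, which does exceed the 2/3 bound, consistent with the last quoted remark.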