#36 nightlight
vanesch said: "Good. So your claim is that we will never find raw data which violates Bell's inequality."
Just to highlight the implications of Sica's theorem a bit for the experimental tests of Bell's inequality.
Say you have an ideal setup with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 into arrays A[n] and B[n]. Since p(+)=p(-)=1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.
You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction to obtain data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n]; it must have its +1s and -1s arranged differently (although still in the 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself while avoiding, in some way, arranging itself as A1[n].
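To see why the repeat is forbidden, note the arithmetic fact behind Sica's theorem: for any three +1/-1 arrays a, b1, b2 of the same length n, |Sum(a*b1) - Sum(a*b2)| <= n - Sum(b1*b2) holds term by term. So if A2[n] reproduced A1[n], the inequality could not be violated no matter what the B arrays contain. Here is a quick check (my own sketch in Python, not Sica's code):

Code:
import random

def dot(x, y):
    # integer sum of products of two +/-1 arrays (n times the correlation <xy>)
    return sum(p * q for p, q in zip(x, y))

n = 20
for _ in range(100000):
    a  = [random.choice((-1, 1)) for _ in range(n)]
    b1 = [random.choice((-1, 1)) for _ in range(n)]
    b2 = [random.choice((-1, 1)) for _ in range(n)]
    # finite-data Bell inequality; exact in integer arithmetic, no sampling error
    assert abs(dot(a, b1) - dot(a, b2)) <= n - dot(b1, b2)
print("no violation in 100000 random triples")

The assert can never fire, because |a_i*(b1_i - b2_i)| = |b1_i - b2_i| = 1 - b1_i*b2_i whenever all entries are +/-1, so the bound is a term-by-term identity, not a statistical statement.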
Now, if you hadn't done the (A,B1) test, there would be no such constraint on what A2[n] can be. To paraphrase a kid's response when told that a thermos bottle keeps hot liquids hot and cold liquids cold -- "How do it know?"
Or, another twist: you take 99 different angles for B and obtain data sets A1[n],B1[n]; A2[n],B2[n]; ... A99[n],B99[n]. Now you're ready for the angle B100. This time A100[n] has to keep rearranging itself to avoid matching all 99 previous arrays Ak[n].
Then you extend the above and, say, collect r=2^n data sets for 2^n different angles (they could all be the same angle, too). This time, at the next angle B_(2^n+1), the data array A_(2^n+1)[n] would have to avoid all 2^n previous arrays Ak[n], which it can't do, since there are only 2^n distinct +1/-1 arrays of length n (see the sketch after this paragraph). So you get that in each such test there would be at least one failed QM prediction, for at least one angle, since that Bell inequality would not be violated.
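For small n you can check the counting directly (a toy enumeration of my own; n is kept tiny so it stays feasible):

Code:
from itertools import product
from math import comb

n = 4
all_arrays = set(product((-1, 1), repeat=n))
print(len(all_arrays))                  # 2^n = 16 arrays exist in total
balanced = {a for a in all_arrays if sum(a) == 0}
print(len(balanced), comb(n, n // 2))   # only C(n, n/2) = 6 have the 50:50 split

# after 2^n runs that each produced a previously unseen array,
# run number 2^n + 1 has nothing new left (pigeonhole):
record = set(all_arrays)
next_run = (1, -1, -1, 1)               # any +/-1 array of length n whatsoever
assert next_run in record

Note that the 50:50 constraint leaves even less room than 2^n arrays, only C(n, n/2) of them.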
Then you take 2^n*2^n previous tests, ... and so on. As you go up, it gets harder for the inequality violator; its count of failed tests has a guaranteed growth. Also, I think this theorem is not nearly restrictive enough and the real situation is much worse for the inequality violator (as simple computer enumerations suggest when counting the percentages of violation cases for finite data sets; a sketch of such a count follows below).
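In the spirit of those simple computer enumerations, here is one way such a count could look (again my own sketch, not anyone's published program): with the A array forced to repeat between the two runs, the violation percentage is exactly zero, while with A free to rearrange, violating records do exist -- the entire "violation" lives in the rearrangement of A.

Code:
from itertools import product

def dot(x, y):
    # integer sum of products of two +/-1 arrays
    return sum(p * q for p, q in zip(x, y))

def violates(a1, b1, a2, b2, n):
    # finite-data Bell inequality with <b1*b2> computed from the records
    return abs(dot(a1, b1) - dot(a2, b2)) > n - dot(b1, b2)

n = 4
arrays = list(product((-1, 1), repeat=n))

# Case 1: the A record repeats between the two runs (A2 = A1)
same = sum(violates(a, b1, a, b2, n)
           for a in arrays for b1 in arrays for b2 in arrays)

# Case 2: the A record is free to rearrange between the runs
free = sum(violates(a1, b1, a2, b2, n)
           for a1 in arrays for b1 in arrays
           for a2 in arrays for b2 in arrays)

print(f"violations with A2 = A1:  {same} of {len(arrays)**3}")   # always 0
print(f"violations with A2 free: {free} of {len(arrays)**4}")    # nonzero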
Or, you go back and start testing, say, angle B7 again. Now the QM magician in heaven has to allow the new A7[n] to be the same as the old A7[n], which was prohibited up to that point. You switch B to B9, and now the QM magician has to disallow the match with A7[n] again and allow the match with the old A9[n], which was prohibited until now.
Where is the memory for all that? And what about the elaborate mechanisms or the infrastructure needed to implement the avoidance scheme? And why? What is the point of remembering all that stuff? What does it (or anyone/anything anywhere) get in return?
The conjectured QM violation of Bell's inequality basically looks sillier and sillier once these kinds of implications are followed through. It is no longer mysterious or puzzling but plainly ridiculous.
And what do we get from the absurdity? Well, we get the only real confirmation of the collapse, since Bell's theorem uses collapse to produce the QM prediction which violates the inequality. And what do we need the collapse for? Well, it helps "solve" the measurement problem. And why is there a measurement problem? Well, because Bell's theorem shows you can't have LHVs to produce definite results. Anything else empirical from either? Nope. What a deal.
The collapse postulate first lends a hand to prove Bell's QM prediction, which in turn, via the LHV prohibition, creates a measurement problem, which the collapse then "solves" (thank you very much). So the collapse postulate creates a problem and then solves it. What happens if we take out the collapse postulate altogether? No Bell's theorem, hence no measurement problem, hence no problem at all. Nothing else is sensitive to the existence (or the lack) of the collapse but the Bell inequality experiment. Nothing else needs the collapse. It is a parasitic historical relic in the theory.