Is action at a distance possible as envisaged by the EPR Paradox?

In summary: John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #1,016
billschnieder said:
Oh, not your mistake. I was agreeing with you from a different perspective; there is a missing "also" somewhere in there!
Oh, my bad. I can read it with that interpretation now also. The "also" would have made it clear on the first read.
 
  • #1,017
JenniT said:
Dear Dmitry67, many thanks for quick reply. I put 2 small edits in CAPS above.

Hope that's correct?

But I do not understand your "imagine" ++ example.

Elaboration in due course would be nice.

Thank you,

JenniT

Some strings define real numbers. For example,

4.5555
pi
min root of the following equation: ... some latex code...

As any string is a word over a finite alphabet, the set of all possible strings is countable, like the integers. However, the set of real numbers has the power of the continuum.

Call the set of all strings which define real numbers E.
Call the set of all real numbers defined by E as X.
Now exclude X from R (the set of all real numbers). The result (set U) is not empty (because R has the power of the continuum and X is countable). It is even uncountable.

So you have a set with an infinite number of elements. For example... for example... well, if you could provide an example by writing a number itself (that is also a string) or by defining it in any possible way, then you could find that string in E and the corresponding number in X. Hence no such example exists in U.

So you have a very weird set U. No element of it can be given as an example. U also illustrates that the Axiom of Choice can be counter-intuitive (even though intuitively most people accept it). Imagine that some property P is true only for elements in U, and always false for elements in X. In such a case you get ghosts like the Banach-Tarski paradox...
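A compact restatement of the cardinality argument (a sketch, assuming the strings in E are finite words over some finite alphabet \Sigma, with \Sigma^* the set of all finite words over \Sigma):

$$|E| \le |\Sigma^*| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}| \quad\Rightarrow\quad U = \mathbb{R}\setminus X \text{ is nonempty, and in fact } |U| = 2^{\aleph_0}.$$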
 
  • #1,018
Dmitry67 said:
Some strings define real numbers. For example,

4.5555
pi
min root of the following equation: ... some latex code...

As any string is a word over a finite alphabet, the set of all possible strings is countable, like the integers. However, the set of real numbers has the power of the continuum.

Is this quite right?

The fact that you include pi suggests that your understanding of `string' allows a string to be countably infinite in length. If so, then it is not true that the set of all possible strings is countable, even if the alphabet is finite: the set of all infinite sequences over the two-letter alphabet {1, 0} has the power of the continuum.

On the other hand, if we restrict our attention to finite strings, then the set of all finite strings in a finite alphabet *is* indeed countable. Indeed, the set of all finite strings in a *countable* alphabet is countable.
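A minimal sketch of that countability claim for finite strings, enumerating the two-letter alphabet {0, 1} in shortlex order (the function name and alphabet are mine, purely for illustration):

from itertools import count, product

def shortlex_strings(alphabet=("0", "1")):
    # Enumerate all finite strings over a finite alphabet: length 0, then 1, then 2, ...
    for length in count(0):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

# The enumeration assigns every finite string a unique natural-number index,
# which is exactly what "countable" means.
gen = shortlex_strings()
print([next(gen) for _ in range(8)])  # ['', '0', '1', '00', '01', '10', '11', '000']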
 
  • #1,019
You can define pi using a finite string: you just don't need to write all the digits; you can simply write the string

sqrt(12)*sum(k from 0 to inf: ((-3)**(-k))/(2k+1))
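A quick numerical check of that string's series (a sketch, assuming the intended series is pi = sqrt(12) * sum over k >= 0 of (-1/3)^k / (2k + 1)):

import math

# Partial sum of the series; the terms shrink by a factor of 3 each step,
# so 50 terms are far more than enough for double precision.
total = sum((-1/3)**k / (2*k + 1) for k in range(50))
print(math.sqrt(12) * total)  # 3.141592653589793, i.e. pi to machine precision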
 
  • #1,020
I see what you're saying now and what construction you're giving: 'string' refers to the two-letter symbol `pi' rather than the infinite expansion. So the strings are finite.

I don't want to derail this thread, but I thought using the notion of definability without indexing it to a language ('definability-in-L', with this concept being part of the metalanguage) leads to paradoxes.
 
  • #1,021
my_wan said:
DrC, two questions,
1) Do you agree that "fair sampling" assumptions exist, irrespective of validity, that do not involve the assumption that photon detection efficiencies are less than perfect?
2) Do you agree that, averaged over all possible settings, not just some chosen subset of settings, the QM and classical correlation limits lead to the same overall total number of detections?

The Fair Sampling Assumption is: due to some element(s) of the collection and detection apparatus, either Alice or Bob (or both) did not register a member of an entangled photon pair that "should" have been seen. AND FURTHER, that photon, if detected, was one which would support the predictions of local realism and not QM. The Fair Sampling concern makes no sense if the sampling is not misleading us.

It is not the Fair Sampling Assumption to say that the entire universe is not sampled. That is a function of the scientific method and applies to all experiments. Somewhat like saying the speed of light is 5 km/hr but that experiments measuring the usual value are biased because we chose an unfair day to sample. The requirement for science is that the experiment is repeatable, which is not in question with Bell tests. The only elements of Fair Sampling that should be considered are as I describe in the paragraph above.

So I think that is a YES to your 1), as you might detect a photon but be unable to match it to its partner. Or it might have been lost before arriving at the detector.

For 2), I am not sure I follow the question. I think QM and LR would make the same predictions for likelihood of detection. But I guess that to make the LR model work out, you have to find some difference. But no one was looking for that until Bell tests started blowing away LR theories.
 
  • #1,022
DevilsAvocado said:
You could see it that way. You could also see it as: the very tricky nature then has to wobble between "increasing/decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...

So true. It seems sort of strange to suggest that some photons are over-counted and some are under-counted... and that this depends on the angle between the settings. And whether they are over- or under-counted depends on whether they support LR or QM.

We know that for some angles - say 60 degrees - the predictions are noticeably different for QM (25%) vs LR (33.3%). Considering these are averages of correlated and uncorrelated pairs, that means that 1 in 4 of the correlated pairs was undercounted - but NONE of the uncorrelated pairs was undercounted! That is reeeeeeeeaaaaaallllllly asking a lot if you think about it.

But then at 30 degrees it works the OTHER way. It's QM (75%) vs LR (66.6%) now. So suddenly 1 in 4 of the UNcorrelated pairs is undercounted - but NONE of the correlated pairs is undercounted!
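For concreteness, a small sketch reproducing those numbers, assuming the usual cos^2 rule for the QM match fraction and the straight-line 1 - theta/90 benchmark quoted above for the LR limit:

import math

def qm_match(theta_deg):
    # QM prediction for the same-outcome fraction at relative angle theta
    return math.cos(math.radians(theta_deg)) ** 2

def lr_linear_match(theta_deg):
    # Straight-line local-realist benchmark: 1 - theta/90
    return 1 - theta_deg / 90

for theta in (30, 60):
    print(theta, round(qm_match(theta), 3), round(lr_linear_match(theta), 3))
# 30 deg: QM 0.75 vs LR 0.667;  60 deg: QM 0.25 vs LR 0.333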
 
  • #1,023
I really really don't get the equivocation in your responses, unless it's to intentionally maintain a conflation. I'll demonstrate:
(Going to color code sections of your response, and index my response to those colors.)

DrChinese said:
The Fair Sampling Assumption is: due to some element(s) of the collection and detection apparatus, either Alice or Bob (or both) did not register a member of an entangled photon pair that "should" have been seen. AND FURTHER, that photon, if detected, was one which would support the predictions of local realism and not QM. The Fair Sampling concern makes no sense if the sampling is not misleading us.
{RED} - The question specifically avoided any detection failures whatsoever. It has no bearing whatsoever on the question, but OK. Except that you got less specific in blue for some reason.
{BLUE} - Here, when you say "would support", by what model would the "would support" qualify? In fact, one of the many assumptions contained in the "would support" qualifier involves how you chose to define the equivalence of simultaneity between two spatially separated time intervals. Yet with "would support" you are tacitly requiring a whole range of assumptions to be the a priori truth. It logically simplifies to the statement: it's true because I chose definitions to make it true.


DrChinese said:
It is not the Fair Sampling Assumption to say that the entire universe is not sampled. That is a function of the scientific method and applies to all experiments. Somewhat like saying the speed of light is 5 km/hr but that experiments measuring the usual value are biased because we chose an unfair day to sample. The requirement for science is that the experiment is repeatable, which is not in question with Bell tests. The only elements of Fair Sampling that should be considered are as I describe in the paragraph above.
{RED} - But you did not describe any element in the paragraph above. You merely implied such elements are contained in the term "would support", and left it to our imagination that since "would support" defines itself to contain proper methods and assumptions, and since "would support" contains its own truth specifier, it must be valid. Intellectually absurd.

DrChinese said:
So I think that is a YES to your 1), as you might detect a photon but be unable to match it to its partner. Or it might have been lost before arriving at the detector.
{RED} - I did not specify "a" partner was detected. I explicitly specified that BOTH partners are ALWAYS detected. Yet that still doesn't explicitly require the timing of those detections to match.
{BLUE} - Note the blue doesn't specify that the detection of the partner photon didn't fail. I explicitly specified that this partner detection didn't fail, and that only the time window to specify it as a partner failed.

So granted, you didn't explicitly reject that both partners were detected, but you did explicitly introduce interpretations which were explicitly defined not to exist in this context, while being vague about detections of both partners with correlation failures.

So, if I accept your yes answer, what does it mean? Does it mean both partners of a correlated pair can be detected, and still not register as a correlation? Does it mean you recognized the truth in the question, and merely chose to conflate the answer with interpretations that are by definition invalid in the stated question, so that you can avoid an explicitly false answer while implicitly justifying the conflation of an entirely different interpretation that was experimentally and a priori defined to be invalid?

I still don't know how to take it, and I think it presumptuous of me to assume a priori that "yes" actually accepts the question as stated. You did after all explicitly input what the question explicitly stated could not possibly be relevant, and referred to pairs in singular form while not acknowledging the success of the detection of both photons. This is a non-answer.

DrChinese said:
For 2), I am not sure I follow the question. I think QM and LR would make the same predictions for likelihood of detection. But I guess that to make the LR model work out, you have to find some difference. But no one was looking for that until Bell tests started blowing away LR theories.
True, I clearly and repeatedly, usually beginning with "Note", clarified that it did not in any way demonstrate the legitimacy of any particular LR model. All it does is demonstrate that, even if photon detection efficiency is 100%, a model that only involves offsets in how the photon detection pairs are correlated need not result in an excess or undercount of total photon detections. It was specifically designed, and failed, as an attempt to invalidate a "fair sampling" argument when that "fair sampling" argument did not involve missing detections.

There may be, as previously noted, other ways to rule out this type of bias. If this is happening, then by recording and comparing the actual time stamps of the uncorrelated photon detections, the time-window spread between nearly correlated photon pairs should statistically appear to increase as the angle difference increases. If pairs of uncorrelated detections were truly uncorrelated, there should be no such statistical pattern in the timing of the pairs of time stamps. The assumption that they are correlated, even when not measured to be so, is what would make such a statistical variance possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded or merely hits/misses.
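A sketch of the kind of check being proposed here, assuming a hypothetical table of near-coincidence candidates with columns (relative angle in degrees, time difference in ns); the column layout and the synthetic numbers are my assumptions, not any real dataset:

import numpy as np

def spread_vs_angle(records, bin_width_deg=10):
    # records: array with columns (relative_angle_deg, time_difference_ns)
    angles, dts = records[:, 0], records[:, 1]
    for lo in range(0, 90, bin_width_deg):
        mask = (angles >= lo) & (angles < lo + bin_width_deg)
        if mask.any():
            print(f"{lo:2d}-{lo + bin_width_deg:2d} deg: std of dt = "
                  f"{dts[mask].std():.3f} ns (n = {mask.sum()})")

# Synthetic example, constructed so the spread grows with angle, purely to
# show what the proposed signature would look like if it were present.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 90, 10_000)
dts = rng.normal(0.0, 1.0 + angles / 90, 10_000)
spread_vs_angle(np.column_stack([angles, dts]))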
 
  • #1,024
my_wan said:
There may be, as previously noted, other ways to rule out this type of bias. If this is happening, then by recording and comparing the actual time stamps of the uncorrelated photon detections, the time-window spread between nearly correlated photon pairs should statistically appear to increase as the angle difference increases. If pairs of uncorrelated detections were truly uncorrelated, there should be no such statistical pattern in the timing of the pairs of time stamps. The assumption that they are correlated, even when not measured to be so, is what would make such a statistical variance possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded or merely hits/misses.

That is actually what I want to do in order to demonstrate the difficulties involved with the Unfair Sampling Hypothesis. There should be a pattern to the bias if it is tenable.

The time stamps are recorded at each station whenever a detection is made. There are 2 detectors for Alice and 2 for Bob, 4 total. That way there is no question. I have actual data but it is in raw form. I expect it will be a while before I have much to share with everyone.

In the meantime, I can tell you that Peter Morgan has done a lot of analysis on the same data. He has not looked for that specific thing, but very very close. He analyzed delay characteristics for anomalies. There were some traces, but they were far far too weak to demonstrate a Fair Sampling issue. Peter has not published his result yet, as I recall.
 
  • #1,025
my_wan said:
{RED} - The question specifically avoided any detection failures whatsoever. It has no bearing whatsoever on the question, but OK. Except that you got less specific in blue for some reason.
{BLUE} - Here, when you say "would support", by what model would the "would support" qualify? In fact, one of the many assumptions contained in the "would support" qualifier involves how you chose to define the equivalence of simultaneity between two spatially separated time intervals. Yet with "would support" you are tacitly requiring a whole range of assumptions to be the a priori truth. It logically simplifies to the statement: it's true because I chose definitions to make it true.

If there are no detection failures, then where is the sampling coming into play? You have detected all there are to detect!

As to the supporting idea: obviously, if there is no bias, then you get the same conclusion whether you look at the sample or the universe. If you push LR, then you are saying that an unusual number of "pro QM" pairs are detected and/or an unusual number of "pro LR" pairs are NOT detected. (Except that relationship varies all over the place.) So I guess I don't see what that has to do with assumptions. Just seems obvious that there must be a bias in the collection if the hypothesis is to be tenable.
 
  • #1,026
my_wan said:
If this is happening, then by recording and comparing the actual time stamps of the uncorrelated photon detections, the time-window spread between nearly correlated photon pairs should statistically appear to increase as the angle difference increases. If pairs of uncorrelated detections were truly uncorrelated, there should be no such statistical pattern in the timing of the pairs of time stamps. The assumption that they are correlated, even when not measured to be so, is what would make such a statistical variance possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded or merely hits/misses.

Some work has been done towards this, and the results appear to support your suspicion. See Appendix A of this paper (starting at page 19 http://arxiv.org/abs/0712.2565, J. Phys. Soc. Jpn. 76, 104005 (2007))
 
  • #1,027
billschnieder said:
Some work has been done towards this, and the results appear to support your suspicion. See Appendix A of this paper (starting at page 19 http://arxiv.org/abs/0712.2565, J. Phys. Soc. Jpn. 76, 104005 (2007))

We are all working off the same dataset and this is a very complicated subject. But there is not the slightest evidence that there is a bias sufficient to account for a LR result. So, no, there is no basis - at this time - for my_wan's hypothesis. However, it is my intent to document this more clearly. As I mention, it is very complicated and not worth debating without going through the whole process from start to finish. Which is a fairly massive project.

All of the teams looking at this are curious as to whether there might be a very small actual bias. But if it is there, it is small. But any at all could mean a potential new discovery.
 
  • #1,028
DrChinese said:
If there are no detection failures, then where is the sampling coming into play? You have detected all there are to detect!
Oh, so I was fully justified in thinking I was being presumptuous in taking your "yes" answer at face value.

Whether or not a coincidence is detected is independent of whether or not a photon is detected. Coincidence detection is wholly dependent on the time window, while photon detection is dependent only on detecting the photon at 'any' proper time. Thus, in principle, a correlation detection failure can occur even when both correlated photons are detected.

This is a bias, and qualifies as a "fair sampling" argument even when 100% of the photons are detected.

DrChinese said:
As to the supporting idea: obviously, if there is no bias, then you get the same conclusion whether you look at the sample or the universe. If you push LR, then you are saying that an unusual number of "pro QM" pairs are detected and/or an unusual number of "pro LR" pairs are NOT detected. (Except that relationship varies all over the place.) So I guess I don't see what that has to do with assumptions. Just seems obvious that there must be a bias in the collection if the hypothesis is to be tenable.
Again, I am specifying a 100% detection rate. The "bias" is only in the time window in which those detections take place: not a failure of 'photon' detection, but a failure of time synchronization to qualify a correlated pair of detections as correlated.

This is to illustrate the invalidity of applying the empirical invalidity of one type of "fair sampling" bias to all forms of "fair sampling" bias. It's not a claim of an LR solution to BI violations. It may be a worthy investigation, though, because it appears to have empirical consequences that might be checked.

Any number of mechanisms can lead to this, such as the frame dependence of simultaneity, a change in the refractive index of polarizers as the angle changes relative to a photon's polarization, etc. The particular mechanism is immaterial to testing for such effects, and immaterial to the illegitimacy of assuming that ruling out missing photon detections automatically rules out missing correlation detections even when both photon detections are recorded. Missing a time window to qualify as a correlation detection is an entirely separate issue from missing a photon detection.
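A toy illustration of that separation (hypothetical detection times, nothing measured): every photon of every pair is detected, yet how many pairs get counted as coincidences depends entirely on the chosen time window.

# Hypothetical detection times in ns; all five photons on each side are detected.
alice_t = [100.0, 200.0, 300.0, 400.0, 500.0]
bob_t   = [100.2, 200.1, 303.5, 400.3, 505.0]  # two partners arrive "late"

def coincidences(a, b, window_ns):
    # Count pairs whose detection times differ by less than the window.
    return sum(abs(ta - tb) < window_ns for ta, tb in zip(a, b))

print(coincidences(alice_t, bob_t, 1.0))   # 3 of 5 pairs register as coincidences
print(coincidences(alice_t, bob_t, 10.0))  # 5 of 5 pairs register as coincidences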

DrChinese said:
That is actually what I want to do in order to demonstrate the difficulties involved with the Unfair Sampling Hypothesis. There should be a pattern to the bias if it is tenable.

The time stamps are recorded at each station whenever a detection is made. There are 2 detectors for Alice and 2 for Bob, 4 total. That way there is no question. I have actual data but it is in raw form. I expect it will be a while before I have much to share with everyone.

In the meantime, I can tell you that Peter Morgan has done a lot of analysis on the same data. He has not looked for that specific thing, but very very close. He analyzed delay characteristics for anomalies. There were some traces, but they were far far too weak to demonstrate a Fair Sampling issue. Peter has not published his result yet, as I recall.
This is interesting, and a logical next step for EPR issues. I haven't got such raw data. Perhaps, if the statistical signature is too weak, better constraints could be derived from the variance across various specific settings. The strongest correlations should occur the closer the two detector settings are to each other, though they must differ somewhat, in the ideal case, to get uncorrelated photon detection sets to compare. The variance of 'near hit' time stamps should increase as the relative angle increases. It would be useful to rule this out, but invalidating a fair-sampling bias that involves missing detections doesn't do it. It still falls within the "fair sampling" class of models, which is the main point I wanted to make.
 
  • #1,029
DevilsAvocado said:
You could see it that way. You could also see it as: the very tricky nature then has to wobble between "increasing/decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...

DrChinese said:
So true. Seems sort of strange to suggest that some photons are over-counted and some are under-counted... and that depends on the angle between. And whether they support LR or QM as to whether they are over or under counted.

my_wan said:
Physically it's exactly equivalent to a tennis ball bounced off a wall taking a longer route back as the angle at which it hits the wall increases. It only requires the assumption that the more offset a polarizer is, the longer it takes the photon to tunnel through it. It doesn't really convince me either without some testing, but it's certainly not something I would call nature being tricky. At least not any more tricky than even classical physics is known to be at times. Any sufficiently large set of dependent variables is going to be tricky, no matter how simple the underlying mechanisms. Especially if it looks deceptively simple on the surface.


my_wan & DrChinese, wouldn’t a very simple (almost silly) way of ruling out all these questions around the possible 'weaknesses' in the setup (angles / time window / etc.) be to run the tests with non-entangled pairs?

If there’s something 'wrong', the same biases must logically show up for 'normal' photons also, right...??


(And if I’m right – this has of course already been done.)
 
  • #1,030
DrChinese said:
The De Raedt simulation is an attempt to demonstrate that there exists an algorithm whereby (Un)Fair Sampling leads to a violation of a BI - as observed - while the full universe does not (as required by Bell).


I have only done a Q&D inspection of the code, and I’m probably missing something here, but to me it looks like angle2 is always at a fixed (user) offset to angle1:

angle2 = angle1 + Radians(Theta) ' fixed value offset always

Why? It would be fairly easy to save two independently random angles for angle1 & angle2 in the array for result, and after the run sort them out for the overall statistics...

Or, are you calling the main function repeatedly with different random values for argument Theta...? If so, why this solution...?

Or, is angle2 always at a fixed offset of angle1? If so, isn’t this an extreme weakness in the simulation of the "real thing"??
 
  • #1,031
DevilsAvocado said:
my_wan & DrChinese, wouldn’t a very simple (almost silly) way of ruling out all these questions around the possible 'weaknesses' in the setup (angles / time window / etc.) be to run the tests with non-entangled pairs?

If there’s something 'wrong', the same biases must logically show up for 'normal' photons also, right...??


(And if I’m right – this has of course already been done.)
There may be ways to check specific mechanisms, like refractive index, but in this case the bias is not presumed to miss any photon detections. The only bias is in the time window that determines whether we define two correlated photons to be correlated or not. Thus the general case, involving how we test correlations, can only be tested with photons we can reasonably assume are correlated.

You might also try passing femtosecond pulses through a polarizer and checking how it affects the spread of the pulse. A mechanism may also involve something similar to squeezed light, which, due to the Uncertainty Principle, maximizes the uncertainty of measurables. The photons with the largest offsets relative to the polarizer may effectively be squeezed more, thus inducing a spread (higher momentum uncertainty) in the output.

Still, a general test must involve correlations, and mechanisms can be investigated once an effect is established. Uncorrelated photon sources may, in some cases, be able to test specific mechanisms, but not the general case involving EPR correlation tests.
 
  • #1,032
DevilsAvocado said:
my_wan & DrChinese, wouldn’t a very simple (almost silly) way of ruling out all these questions around the possible 'weaknesses' in the setup (angles / time window / etc.) be to run the tests with non-entangled pairs?

If there’s something 'wrong', the same biases must logically show up for 'normal' photons also, right...??


(And if I’m right – this has of course already been done.)

Pretty much all of these variations are run all the time, and there is no hint of anything like this. Unentangled and entangled photons act alike, except for correlation stats. This isn't usually written up because it is not novel or interesting to other scientists - ergo not too many papers to cite on it. I wouldn't publish a paper saying the sun came up this morning either.

You almost need to run variations with and without entanglement to get the apparatus tuned properly anyway. And it is generally pretty easy to switch from one to the other.
 
  • #1,033
my_wan said:
There may be ways to check specific mechanisms, like refractive index, but in this case the bias is not presumed to miss any photon detections. The only bias is in the time window that determines whether we define two correlated photons to be correlated or not. Thus the general case, involving how we test correlations, can only be tested with photons we can reasonably assume are correlated.

You might also try passing femtosecond pulses through a polarizer and checking how it affects the spread of the pulse. A mechanism may also involve something similar to squeezed light, which, due to the Uncertainty Principle, maximizes the uncertainty of measurables. The photons with the largest offsets relative to the polarizer may effectively be squeezed more, thus inducing a spread (higher momentum uncertainty) in the output.

Still, a general test must involve correlations, and mechanisms can be investigated once an effect is established. Uncorrelated photon sources may, in some cases, be able to test specific mechanisms, but not the general case involving EPR correlation tests.

I don't see why you say that uncorrelated sources cannot be used in the general case. I think that should not be an issue, as you can change from uncorrelated to correlated almost at the flip of an input polarizer setting.
 
  • #1,034
DevilsAvocado said:
I have only done a Q&D inspection of the code, and I’m probably missing something here, but to me it looks like angle2 is always at a fixed (user) offset to angle1:

angle2 = angle1 + Radians(Theta) ' fixed value offset always

Why? It would be fairly easy to save two independently random angles for angle1 & angle2 in the array for result, and after the run sort them out for the overall statistics...

Or, are you calling the main function repeatedly with different random values for argument Theta...? If so, why this solution...?

Or, is angle2 always at a fixed offset of angle1? If so, isn’t this an extreme weakness in the simulation of the "real thing"??

I wanted to graph every single degree from 0 to 90. Since it is a random test, it doesn't matter from trial to trial. I wanted to do X iterations for each theta, and sometimes I wanted fixed angles and sometimes random ones. The De Raedt setup sampled a little differently, and I wanted to make sure that I could see clearly the effect of changing angles. A lot of their plots did not have enough data points to suit me.
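A rough sketch of that sampling scheme (illustrative only; simulate_pair below is a placeholder stand-in, not the De Raedt model or the actual code being discussed):

import math
import random

def simulate_pair(angle1_deg, angle2_deg):
    # Placeholder per-pair model: 'match' with probability cos^2 of the relative angle.
    rel = math.radians(angle2_deg - angle1_deg)
    return 1 if random.random() < math.cos(rel) ** 2 else 0

def run_sweep(iterations_per_theta=2000, random_angle1=True):
    # Sweep theta over every degree from 0 to 90, many iterations per theta,
    # with angle1 either random or fixed and angle2 = angle1 + theta.
    results = {}
    for theta in range(91):
        hits = 0
        for _ in range(iterations_per_theta):
            angle1 = random.uniform(0.0, 360.0) if random_angle1 else 0.0
            angle2 = angle1 + theta  # fixed offset, as in the quoted code line
            hits += simulate_pair(angle1, angle2)
        results[theta] = hits / iterations_per_theta
    return results

stats = run_sweep()
for theta in (0, 30, 60, 90):
    print(theta, round(stats[theta], 3))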
 
  • #1,035
DrChinese said:
I don't see why you say that uncorrelated sources cannot be used in the general case. I think that should not be an issue, as you can change from uncorrelated to correlated almost at the flip of an input polarizer setting.

Of course I constantly switch between single beams that are polarized and unpolarized, as well as mixed polarizations, and from polarizer setting to stacked polarizers, to counterfactual assumptions with parallel polarizers, to correlations in EPR setups. Of course you can compare correlated and uncorrelated cases.

The problem is that in the correlation case, involving variances in the time windows used to establish that such a correlation exists, any corresponding effect in uncorrelated cases is very dependent on the specific mechanism, realistic or not, inducing the time-window offsets in the EPR case. Thus testing time-window variances in correlation detections tests for the existence of the effect independent of the mechanism inducing it. Testing uncorrelated beam cases can only test very specific sets of mechanisms in any given test design. It may be there in the uncorrelated beam case, but I would want to know there was an effect to look for before I went through a myriad of uncorrelated beam tests to search for it.

It's not unlike the PEAR group at Princeton deciding that they should investigate the applications of telekinesis without actually establishing that such an effect exists. At least non-realists have a real effect to point at, and a real definition to work with. :biggrin:
 
  • #1,036
my_wan said:
Oh, so I was fully justified in thinking I was being presumptuous in taking your "yes" answer at face value.

Whether or not a coincidence is detected is independent of whether or not a photon is detected. Coincidence detection is wholly dependent on the time window, while photon detection is dependent only on detecting the photon at 'any' proper time. Thus, in principle, a correlation detection failure can occur even when both correlated photons are detected.

This is a bias, and qualifies as a "fair sampling" argument even when 100% of the photons are detected.

OK, sure, I follow now. Yes, assuming 100% of all photons are detected, you still must pair them up. And it is correct to say that you must use a rule of some kind to do this. Time window size then plays a role. This is part of the experimentalist's decision process, that is true. And in fact, this is what the De Raedt model uses as its exploit.

I don't think there is too much here, but again I have not personally looked at the dataset (I am still figuring out the data format and have been too busy to get that done in past weeks). But I will report when I have something meaningful.

In the meantime, you can imagine that there is a pretty large time interval between events. In relative terms, of course. There may be 10,000 pairs per second, so perhaps an average of 10-100 million picoseconds between events. With a window on the order of 20 ps, you wouldn't expect to have a lot of difficulty pairing them up. On the other hand, as the window increases, you will end up with pairs that are no longer polarization entangled (because they are distinguishable in some manner). Peter tells me that a Bell Inequality is violated with windows as large as 100 ps.

My point being - again - that it is not so simple to formulate your hypothesis in the presence of so many existing experiments. The De Raedt simulation shows how hard this really is.
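A back-of-envelope check of those numbers, assuming Poisson arrivals at the quoted 10,000 pairs per second (the picosecond windows are the values mentioned in this post; the 50 ns value is added only for comparison with the nanosecond-scale windows discussed elsewhere in the thread):

# Probability that an unrelated event lands inside the coincidence window,
# for Poisson arrivals at the given rate: roughly rate * window.
rate_hz = 1e4
for window_s in (20e-12, 100e-12, 50e-9):
    print(f"window {window_s:.0e} s: accidental-overlap chance ~ {rate_hz * window_s:.0e}")
# ~2e-07, ~1e-06 and ~5e-04 respectively: pairing events up is essentially unambiguous.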
 
  • #1,037
DrChinese said:
I don't think there is too much here, but again I have not personally looked at the dataset (I am still figuring out the data format and have been too busy to get that done in past weeks). But I will report when I have something meaningful.
I have significant doubts myself, for a myriad of reasons. Yet having the actual evidence in hand would still be valuable. The point in the debate here was the legitimacy of applying the "fair sampling" no-go based on photon detections to time-window correlation coupling, which is also a form of "fair sampling", i.e., over-generalizing a no-go.

DrChinese said:
Peter tells me that a Bell Inequality is violated with windows as large as 100 ps.
That, I believe, is about a 3 cm spread at light speed. I'm also getting a 100 million picosecond average between events; the time window needs to be significantly less than that to avoid significant errant correlation counts. Still, correlations being effectively washed out beyond 100 ps, about one millionth of the average event spacing, seems awfully fast. I need to recheck my numbers. What's the wavelength of the light in this dataset?

It might be interesting to consider only the misses in a given dataset, with tight initial time-window constraints, and see if a time-window shift will recover a non-random percentage of correlations in just that non-correlated subset. That would put some empirical constraints on photon detection timing variances, for whatever physical reason. That correlations would be washed out when averaged over enough variation is not surprising, but my napkin numbers look a bit awkward.

DrChinese said:
My point being - again - that it is not so simple to formulate your hypothesis in the presence of so many existing experiments. The De Raedt simulation shows how hard this really is.
Yes, it is hard, and interesting. I'm not one to make claims about what De Raedt's simulation actually means. I'm only warning against presumptions about meaning based on over-generalizations of the constraints we do have, for the very difficulties stated. This was the point of formulating a "fair sampling" argument that explicitly avoided missing photon detections. It wasn't to claim a classical resolution to BI violations, but this caution also applies to the meaning of BI violations.
 
  • #1,038
DrChinese said:
Pretty much all of these variations are run all the time, and there is no hint of anything like this. Unentangled and entangled photons act alike, except for correlation stats. This isn't usually written up because it is not novel or interesting to other scientists - ergo not too many papers to cite on it. I wouldn't publish a paper saying the sun came up this morning either.

You almost need to run variations with and without entanglement to get the apparatus tuned properly anyway. And it is generally pretty easy to switch from one to the other.

my_wan said:
There may be ways to check specific mechanisms, like refractive index, but in this case the bias is not presumed to miss any photon detections. The only bias is in the time window that determines whether we define two correlated photons to be correlated or not. Thus the general case, involving how we test correlations, can only be tested with photons we can reasonably assume are correlated.

my_wan said:
Pre-existing raw data might be sufficient, depending on whether time stamps were recorded, or merely hit/misses recorded.


Thanks guys. It looks like the sun will come up tomorrow as well. :smile:

From this we can draw the conclusion that all talk about "unfair angles" is a dead end. All angles treat every photon alike, whether it’s entangled or not.

I’ve found this (not peer-reviewed) paper that thoroughly examines wide and narrow window coincidences on raw data from EPR experiments conducted by Gregor Weihs and colleagues, with tests on window sizes spanning from 1 ns to 75,000 ns using 3 different rules for identifying coincidences:

http://arxiv.org/abs/0906.5093

A Close Look at the EPR Data of Weihs et al
James H. Bigelow
(Submitted on 27 Jun 2009)

Abstract: I examine data from EPR experiments conducted in 1997 through 1999 by Gregor Weihs and colleagues. They used detection windows of 4-6 ns to identify coincidences; I find that one obtains better results with windows 40-50 ns wide. Coincidences identified using different windows have substantially different distributions over the sixteen combinations of Alice's and Bob's measurement settings and results, which is the essence of the coincidence time loophole. However, wide and narrow window coincidences violate a Bell inequality equally strongly. The wider window yields substantially smaller violations of no-signaling conditions.
 
  • #1,039
DrChinese said:
I wanted to graph every single degree from 0 to 90. Since it is a random test, it doesn't matter from trial to trial. I wanted to do X iterations for each theta, and sometimes I wanted fixed angles and sometimes random ones. The De Raedt setup sampled a little differently, and I wanted to make sure that I could see clearly the effect of changing angles. A lot of their plots did not have enough data points to suit me.

Ahh! That makes sense! Thanks.
 
  • #1,040
DevilsAvocado said:
Thanks guys. It looks like the sun will come up tomorrow as well. :smile:

From this we can draw the conclusion that all talk about "unfair angles" is a dead end. All angles treat every photon alike, whether it’s entangled or not.
Not sure what an unfair angle is, but valid empirical data from any angle is not unfair :-p

DevilsAvocado said:
I’ve found this (not peer-reviewed) paper that thoroughly examines wide and narrow window coincidences on raw data from EPR experiments conducted by Gregor Weihs and colleagues, with tests on window sizes spanning from 1 ns to 75,000 ns using 3 different rules for identifying coincidences:
http://arxiv.org/abs/0906.5093

A Close Look at the EPR Data of Weihs et al
James H. Bigelow
(Submitted on 27 Jun 2009)

Abstract: I examine data from EPR experiments conducted in 1997 through 1999 by Gregor Weihs and colleagues. They used detection windows of 4-6 ns to identify coincidences; I find that one obtains better results with windows 40-50 ns wide. Coincidences identified using different windows have substantially different distributions over the sixteen combinations of Alice's and Bob's measurement settings and results, which is the essence of the coincidence time loophole. However, wide and narrow window coincidences violate a Bell inequality equally strongly. The wider window yields substantially smaller violations of no-signaling conditions.
Very cool! This is something new to me :!)

A cursory glance and it's already intensifying my curiosity. I don't even want to comment on the information in the abstract until I get a chance to review it more. Interesting :smile:
 
  • #1,041
DrChinese said:
OK, sure, I follow now. Yes, assuming 100% of all photons are detected, you still must pair them up. And it is correct to say that you must use a rule of some kind to do this. Time window size then plays a role. This is part of the experimentalist's decision process, that is true.

Even if you succeeded in pairing them up, it is not sufficient to avoid the problem I explained in post #968 here: https://www.physicsforums.com/showpost.php?p=2783598&postcount=968

A triplet of pairs extracted from a dataset of pairs is not guaranteed to be equivalent to a triplet of pairs extracted from a dataset of triples. Therefore, even for 100% detection with perfect matching of the pairs to each other, you will still not obtain a fair sample, fair in the sense that the terms within the CHSH inequality correspond to the terms calculated from the experimental data.

I have shown in the above post how to derive Bell's inequalities from a dataset of triples without any other assumptions. If a dataset of pairs can be used to generate the terms in the inequality, it should be easy to derive Bell's inequalities based only on the assumption that we have a dataset of pairs, and anyone is welcome to try to do that -- it is not possible.
 
  • #1,042
DrC,
Maybe this will reconcile some incongruities in my napkin numbers. Is the dataset you spoke of from Weihs et al (1998) and/or (2007)? This would put the time windows in the ns range. The Weihs et al data was drawn from settings that were randomly reset every 100 ns, with the 14 ns of data collected during the switching time discarded.

The data, as presented by the preprint DevilsAvocado posted a link to, in spite of justifying certain assumptions about detection time variances, still has some confounding features I'm not clear on yet.

I may have to get this raw data from Weihs et al and put it in some kind of database in raw form, so I can query the results of any assumptions.
 
  • #1,043
my_wan said:
Not sure what an unfair angle is, but valid empirical data from any angle is not unfair :-p
Sorry, I meant unfair "tennis ball"... :-p

my_wan said:
Very cool! This is something new to me :!)
Yeah! It looks interesting!

my_wan said:
I may have to get this raw data from Weihs et al and put it in some kind of database in raw form, so I can query the results of any assumptions.
Why not contact James H. Bigelow (http://www.rand.org/about/people/b/bigelow_james_h.html)? He looks like a nice guy, with a dead serious occupation in computer modeling and analysis of large data files, and modeling of training for Air Force fighter pilots (plus a Ph.D. in operations research and a B.S. in mathematics).

(= As far as you can get from Crackpot Kracklauer! :smile:)
 
  • #1,044
billschnieder said:
Even if you succeeded in pairing them up, it is not sufficient to avoid the problem I explained in post #968 here: https://www.physicsforums.com/showpost.php?p=2783598&postcount=968

A triplet of pairs extracted from a dataset of pairs is not guaranteed to be equivalent to a triplet of pairs extracted from a dataset of triples. Therefore, even for 100% detection with perfect matching of the pairs to each other, you will still not obtain a fair sample, fair in the sense that the terms within the CHSH inequality correspond to the terms calculated from the experimental data.

I have shown in the above post how to derive Bell's inequalities from a dataset of triples without any other assumptions. If a dataset of pairs can be used to generate the terms in the inequality, it should be easy to derive Bell's inequalities based only on the assumption that we have a dataset of pairs, and anyone is welcome to try to do that -- it is not possible.

I re-read your post, and still don't follow. You derive a Bell Inequality assuming realism. Same as Bell. So that is a line in the sand. It seems that should be respected with normal triples. Perhaps if you provide a dataset of triples from which doubles chosen randomly would NOT represent the universe - and the subsample should have a correlation rate of 25%. That would be useful.
 
  • #1,045
my_wan said:
... put it in some kind of database in raw form

If you do get your hands on the raw data, maybe you should consider using something that’s a little more "standard" than AutoIt...?

One alternative is MS Visual Studio Express + MS SQL Server Express, which are both completely free. You get the latest top-of-the-line RAD environment, including powerful tools for developing Windows applications, and extensive help from IntelliSense (autocompletion), etc.

The language in Visual Basic Express has many similarities to the BASIC language in AutoIt.

SQL Server Express is a very powerful SQL DBMS, with the only real limit being the 10 GB database size (compared to the standard editions).

Visual Studio 2010 Express

SQL Server 2008 R2 Express


(P.S. I’m not employed by MS! :wink:)
 
  • #1,046
I run MySQL (not MS), and Apache with PHP. I used to run Visual Studio (before .NET) and tried the Express version. I don't really like it, though it was easy enough to program in.

MySQL would be fine, and will work with AutoIt, DOS, or PHP.

These days I use AutoIt a lot, simply because I can rapidly write anything I want, with or without a GUI, and compile or just run the script. It doesn't even have to be installed with an installer. Its functions can do things that take intense effort working with APIs in lower-level languages, including Visual Studio. I can do things in an hour that would take me weeks in most other languages, and some things I've never figured out how to do in other languages. I've also run it on Linux under Wine. Not the fastest or most elegant, but it suits my needs so perfectly 99.9% of the time. It's effectively impossible to hide source code from any knowledgeable person, but that's not something that concerns me.
 
  • #1,047
my_wan said:
suits my needs so perfectly 99.9% of the time
Okidoki, if you say so.
my_wan said:
impossible to hide source code
My thought was to maybe make it easier to show the code to the world, if you find something very interesting... :rolleyes:
 
  • #1,048
DrChinese said:
You derive a Bell Inequality assuming realism. Same as Bell.
Absolutely NOT! There is no assumption about locality or realism. We have a dataset from an experiment in which we collected data for three boolean variables x, y, z. That is, each data point consists of 3 values, one each for x, y and z, with each value either 1 or 0. We could say our dataset is (x_i, y_i, z_i), i = 1..n. Our task is then to derive inequalities which sums of products of pairs extracted from this dataset of triplets must obey. From our dataset we can generate pair products (xy, yz, xz). Note that there is no mention of the type of experiment; it could be anything, a poll in which we ask three (yes, no) questions, or the EPR situation. We completely divorce ourselves from the physics or specific domain of the experiment and focus only on the mathematics. Note also that there is no need for randomness here; we are using the full universe of the dataset to obtain our pair products. We do that and realize that the inequalities obtained are Bell-like. That is all there is to it.

The question then is: if a dataset violates these inequalities, what does it mean? Since there were no physical assumptions in their derivation, violation of the inequalities can mean ONLY that the dataset which violates the inequalities is not mathematically compatible with the dataset used to generate the inequalities.

The example I presented involving doctors and patients, shows this clearly.

DrChinese said:
Perhaps if you provide a dataset of triples from which doubles chosen randomly would NOT represent the universe - and the subsample should have a correlation rate of 25%. That would be useful.
I'm not sure you understand the point yet. The whole point is to show that any pairs extracted from a dataset of triples MUST obey the inequalities, but pairs from a dataset of just pairs need not! So your request is a bit strange. Do you agree that in Aspect-type experiments no triples are ever collected, only pairs? In other words, each data point consists of only two values of dichotomous variables, not three?

This shows that there is a simple mathematical reason why Bell-type inequalities are violated by experiments, which has nothing to do with locality or "realism" -- i.e., Bell's inequalities are derived assuming three values per data point (a, b, c); however, in experiments only pairs (a, b) are ever measured; therefore the dataset from the experiments is not mathematically compatible with the one assumed in Bell's derivation.

So if you are asking for a dataset of pairs which violates the inequalities, I will simply point you to all Bell-test experiments ever done in which only pairs of data points were obtained, which amounts to 100% of them.

Now, you could falsify the above in the following way:
1) Provide an experiment in which triples were measured and Bell's inequalities were violated
2) OR, derive an inequality assuming a dataset of pairs right from the start, and show that experiments still violate it

EDIT:
DrC,
It should be easy for you to simulate this and verify the above to be true since you are a software person too.
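A minimal numerical sketch of the simulation being suggested (my own illustration, with an assumed +/-1 coding and assumed correlation values; it is not anyone's actual dataset): any single list of triples satisfies the Bell-type expression |<xy> - <xz>| <= 1 - <yz>, while three separately generated pair datasets with prescribed correlations can exceed it.

import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Part 1: any dataset of +/-1 triples (x, y, z) satisfies |<xy> - <xz>| <= 1 - <yz>.
# The bound holds term by term (|xy - xz| and 1 - yz both take the value 0 when
# y = z and the value 2 when y != z), so it holds for the averages, with no
# physics assumed.
x = rng.choice([-1, 1], N)
y = np.where(rng.random(N) < 0.7, x, -x)                      # arbitrary correlations
z = np.where(rng.random(N) < 0.3, y, rng.choice([-1, 1], N))
lhs = abs((x * y).mean() - (x * z).mean())
rhs = 1 - (y * z).mean()
print(lhs <= rhs)  # True for any triple dataset whatsoever

# Part 2: three *independent* pair datasets, each with its own prescribed
# correlation, are not constrained this way, because no single (x, y, z)
# triple underlies them.  The values cos(60 deg) and cos(120 deg) are purely
# illustrative choices.
def pair_products(rho, n):
    # Return u*v for +/-1 pairs (u, v) constructed so that E[u*v] = rho.
    u = rng.choice([-1, 1], n)
    v = np.where(rng.random(n) < (1 + rho) / 2, u, -u)
    return u * v

E_xy = pair_products(np.cos(np.radians(60)), N).mean()    # ~ +0.5
E_xz = pair_products(np.cos(np.radians(120)), N).mean()   # ~ -0.5
E_yz = pair_products(np.cos(np.radians(60)), N).mean()    # ~ +0.5
print(abs(E_xy - E_xz), 1 - E_yz)  # ~1.0 vs ~0.5: the same expression is exceeded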
 
  • #1,049
Not to deliberately pull this thread in a different direction, but another thing to consider when discussing whether action at a distance is possible is the effect that multiple universe theory has on this concept.

I'm not an expert, but from what I have gathered about MU, each time an entanglement is formed, each universe contains a matched set of entangled particles (i.e. in the universe where particle A is produced with spin 'up', entangled particle B will have spin 'down'). Since all possible outcomes are produced, in proportion to the probability of each outcome, there will necessarily be universes with each possible 'combination' of entangled pairs. Then when we measure the entangled attributes of one of the particles, we are not actually having any effect on the other entangled particle at all. The 'effect' is local, and is on us, the observer, as we are now cut off from the other universes that contain the other results. So, for example, since we now observe only the universe where particle A has spin 'up', we know that when we observe entangled particle B (or someone else in our observable universe observes particle B and passes the information to us) we will see the complementary measurement to our observed particle A.

So, in this theory, no spooky action at a distance, just local interaction between the observer and the observed particle, which occurs at lightspeed or slower.
 
  • #1,050
DougW said:
Not to deliberately pull this thread in a different direction, but another thing to consider when discussing whether action at a distance is possible is the effect that multiple universe theory has on this concept.
Why consider it? The multiverse is just an ontological construct crafted so as not to actually say anything new (provide any new physics) that QM doesn't already contain. So if we're considering the set of all possible models consistent with QM, for the purposes of BI, then QM already covers this special case. Unless you can say exactly what it entails in terms of BI in a relevant way.
 
