Is action at a distance possible, as envisaged by the EPR paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #491
DrChinese said:
Score: Avocado 1, DrC 1.

Okay, I give up, you are right (as always)... :redface:

Score: (Smashed)Avocado ≈1, DrC >1.

:smile:
 
  • #492
DrChinese said:
But there is some evidence of delay on the order of a few ns. This is far too small to account for pairing problems.

It is not my intention to discuss the De Raedt model here intensively, but the time window used in the simulation and in the real experiment seems to be on the order of a few nanoseconds, so in the same range as the evidence you mention (I haven't seen the articles with this evidence yet). Or am I misreading your statement?

But more important to this thread I think is that when his time tag calculation is set off, the event by event simulation can be used to test other LR theories.
 
  • #493
ajw1 said:
It is very common to use pseudo-random numbers in these kinds of simulations, and often not worth the effort to get real random values. I don't think this is really an issue, provided that your pseudo-random generator is OK for the purpose.

Okay, you have spent a lot more time on this than me. At the same time this is interesting, as pseudo-random numbers are deterministic in the sense that if we know the seed, we can calculate the 'future' in a deterministic way. To my understanding, this is exactly what an LHV theory does, right?

Conclusion: If we can make a computer version of EPR/BTE to produce the correct statistics with pseudo-random numbers, we have then automatically proved that (N)LHVT is correct!
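On the determinism point, here is a minimal sketch (Python, purely illustrative; not from DrChinese's or De Raedt's code) showing that a seeded pseudo-random generator reproduces its 'future' exactly, which is what makes it feel LHV-like:

```python
import random

# Two generators seeded identically produce identical 'futures'.
# (Illustration only; no claim about any EPR simulation here.)
rng1 = random.Random(42)
rng2 = random.Random(42)

seq1 = [rng1.random() for _ in range(5)]
seq2 = [rng2.random() for _ in range(5)]

print(seq1 == seq2)  # True: identical seed, identical sequence
```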

ajw1 said:
So in real experiments one has to use a time frame for determining whether the clicks at two detectors belong to each other or not.
There are indications that particles are delayed by the angle of the filter. This delay time is used by de Raedt, and he obtains the exact QM prediction for this setup (well, similar results as the real experiment, which more or less follows the QM prediction).

Yes I know, coincidence counting (http://en.wikipedia.org/wiki/Coincidence_counting_(physics)) is needed in real experiments, where detections must be sorted into time bins. This is an interesting problem because, as we all know, there is no noise or disturbance in sequentially executed code without bugs, and there is no problem getting 100% detection (unless we do a BASIC GOTO SpaghettiMessUpRoutine() :smile:):
Start
...
Detection1
...
Detection2
...
EvaluateDetections
...
End​
(= almost impossible to fail)
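For contrast with the ideal sequential code above, here is a minimal sketch of what real experiments must do: pair time-tagged clicks from the two sides using a coincidence window. The `coincidences` helper is hypothetical (not anyone's actual analysis code); the 6 ns default follows the window mentioned for Weihs et al.:

```python
# Sketch only: pair detector clicks by a coincidence time window.
# Time tags are in nanoseconds; tags_a and tags_b are sorted lists.

def coincidences(tags_a, tags_b, window_ns=6.0):
    """Pair each click at A with the nearest click at B within the window."""
    pairs = []
    j = 0
    for ta in tags_a:
        # advance j to the B click closest to ta
        while j + 1 < len(tags_b) and abs(tags_b[j + 1] - ta) <= abs(tags_b[j] - ta):
            j += 1
        if abs(tags_b[j] - ta) <= window_ns:
            pairs.append((ta, tags_b[j]))
    return pairs

# Example: three clicks per side; the third pair falls outside the window.
a = [100.0, 200.0, 300.0]
b = [103.0, 195.0, 450.0]
print(coincidences(a, b))  # [(100.0, 103.0), (200.0, 195.0)]
```

The point is that unlike the bug-free sequential program, which pairs every detection by construction, the window size here is a free parameter of the analysis.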

So what does de Raedt do? He implements the 'weakness' of real experiments, and that's maybe okay. What I find 'peculiar' is how pseudo-random numbers × measurement has anything to do with real time bins and coincidence counting... I don't get it...

ajw1 said:
I don't know about 'all-purpose'. It seems to me that a De Raedt like simulation structure should be able to obtain the datasets DrChinese often mentions, for all kinds of new LR ideas.

Okay, if you say so.
 
Last edited by a moderator:
  • #494
DrChinese said:
1. The delay issue is complicated, but the bottom line is this is a testable hypothesis. I know of several people who are investigating this by looking at the underlying data using a variety of analysis techniques. I too am doing some work in this particular area (my expertise is in the data processing side). At this time, there is no evidence at all for anything which might lead to the bias De Raedt et al propose. But there is some evidence of delay on the order of a few ns. This is far too small to account for pairing problems.

DrC, I haven’t had the time to test/modify your code (I need a new/bigger HDD to install Visual Studio), but what happens if you completely skip "Coincidence counting" in the code?

(To me it seems very strange to build a whole scientific theory on noise and the 'troubles' of real measurements... :confused:)


EDIT: Ahh! I see ajw1 just answered the question...
 
Last edited:
  • #495
ajw1 said:
... but the time window used in the simulation and in the real experiment seems to be in the order of a few nano seconds ...

How can you convert "a few nano seconds" to this code?

6oztpt.png
 
  • #496
DevilsAvocado said:
How can you convert "a few nano seconds" to this code?

6oztpt.png

The remark is based on http://rugth30.phys.rug.nl/pdf/shu5.pdf , see for example page 8.
 
Last edited by a moderator:
  • #497
ajw1 said:
The remark is based on http://rugth30.phys.rug.nl/pdf/shu5.pdf , see for example page 8.

Okay, thanks. It could most certainly be my lack of knowledge of polarizers and physics that makes me see this as "strange", but I can’t help it – how can anyone derive "a few nanoseconds" from this? It’s just a mystery to me. There are no clocks or timing in the sequential code, just Pi, Cos and Radians?
5.4 Time Delay

In our model, the time delay t_{n,i} for a particle is assumed to be distributed uniformly over the interval [t0, t0 + T]. In practice, we use uniform pseudo-random numbers to generate t_{n,i}. As in the case of the angles ξn, the random choice of t_{n,i} is merely convenient, not essential. From (2), it follows that only differences of time delays matter. Hence, we may put t0 = 0. The time-tag for the event n is then t_{n,i} ∈ [0, T]. There are not many reasonable options to choose the functional dependence of T. Assuming that the particle "knows" its own direction and that of the polarizer only, T should be a function of the relative angle only. Furthermore, consistency with classical electrodynamics requires that functions that depend on the polarization have period π [27]. Thus, we must have T(ξn − θ1) = F((S_{n,1} · a)²) and, similarly, T(ξn − θ2) = F((S_{n,2} · b)²), where b = (cos β, sin β). We found that T(x) = T0 |sin 2x|^d yields the desired results [15]. Here, T0 = max_θ T(θ) is the maximum time delay and defines the unit of time used in the simulation. In our numerical work, we set T0 = 1.


To me, this looks like "trial & error", but I could be catastrophically wrong...
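For what it's worth, the time-delay rule quoted above fits in a few lines. This is only a sketch of the quoted formula, not the authors' code; the function name `time_tag` and the exponent d = 2 are my own choices (the quote leaves d as a model parameter):

```python
import math
import random

# Sketch of section 5.4 as quoted: the maximum delay is
# T(x) = T0 * |sin(2x)|**d, with x the relative angle between the
# particle's polarization and the polarizer, and the time tag drawn
# uniformly from [0, T(x)]. T0 = 1 defines the unit of time.

def time_tag(xi, theta, d=2.0, t0_max=1.0, rng=random):
    x = xi - theta                      # relative angle (radians)
    t_max = t0_max * abs(math.sin(2 * x)) ** d
    return rng.uniform(0.0, t_max)      # uniform pseudo-random tag in [0, T]

# At x = 45 degrees the delay window is maximal (|sin 90| = 1);
# at x = 0 it vanishes, so parallel settings always get time tag 0.
print(time_tag(math.pi / 4, 0.0) <= 1.0)  # True
print(time_tag(0.0, 0.0))                 # 0.0
```

So the "a few nanoseconds" never appears in the model itself; it comes from matching the dimensionless unit T0 to the window used in the analysis of the real data.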
 
Last edited by a moderator:
  • #498
You simulation guys might be interested in this paper: Corpuscular model of two-beam interference and double-slit experiments with single photons,

where they demonstrate single particle interference with a computer model which models the particles as "information carriers" exchanging information with the experimental apparatus. No wave function or non-local effects are assumed.

I like the idea that particles might exchange protocols like packets in a wifi network, but it seems a bit unlikely :smile:
 
  • #499
Thanks UN, I have to leave for shorter break, but I’ll check the link later!
 
  • #500
ajw1 said:
It is not my intention to discuss the De Raedt model here intensively, but the time window used in the simulation and in the real experiment seems to be in the order of a few nano seconds, so in the same range as the evidence you mention (I haven't seen the articles with this evidence yet). Or am I misreading your statement?

As I say, it is a bit complicated. Keep in mind that the relevant issue is whether the delay is MORE for one channel or not. In other words, similar delays on both sides have little effect. I use the setup of Weihs et al as my "golden standard".

Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger (Submitted on 26 Oct 1998)
http://arxiv.org/abs/quant-ph/9810080

As to the size of the window itself: Weihs uses 6 ns for their experiment. As there are about 10,000 detections per second, the average separation between clicks might be on the order of 25,000 ns. The De Raedt simulation can be modified for the size you like obviously.

It follows that if you altered the window size and got a different result, that would be significant. But with a large time difference between most events, I mean, seriously, what do you expect to see here? ALL THE CLICKS ARE TAGGED! It's not like they were thrown away.

When I finish my analysis of the data (which is a ways off), I will report on anything I think is of interest. In the meantime, I might suggest the following article if you want to learn more from someone who has studied this extensively:

http://arxiv.org/abs/0801.1776

Violation of Bell inequalities through the coincidence-time loophole, Peter Morgan, (11 Jan 2008)

"The coincidence-time loophole was identified by Larsson & Gill (Europhys. Lett. 67, 707 (2004)); a concrete model that exploits this loophole has recently been described by De Raedt et al. (Found. Phys., to appear). It is emphasized here that De Raedt et al.'s model is experimentally testable. De Raedt et al.'s model also introduces contextuality in a novel and classically more natural way than the use of contextual particle properties, by introducing a probabilistic model of a limited set of degrees of freedom of the measurement apparatus, so that it can also be seen as a random field model. Even though De Raedt et al.'s model may well contradict detailed Physics, it nonetheless provides a way to simulate the logical operation of elements of a quantum computer, and may provide a way forward for more detailed random field models."

Peter has been designing theoretical models for a number of years, with an emphasis on those with local random fields. I don't consider him a local realist (although I am not sure how he labels himself) because he respects Bell.
 
  • #501
DevilsAvocado said:
So what does de Raedt do? He implements the 'weakness' of real experiments, and that’s maybe okay. What I find 'peculiar' is how pseudo-random numbers * measurement, has anything to do with real time bins and Coincidence counting... I don’t get it...

I don't think that would be a fair characterization of the de Raedt model. First, it is really a pure simulation. At least, that is how I classify it. I do not consider it a candidate theory. The "physics" (such as the time window stuff) is simply a very loose justification for the model. I accept it at face value as an exercise.

The pseudo-random numbers have no effect at all (at least to my eyes). You could re-seed or not all you want, it should make no difference to the conclusion.

The important thing - to me - is the initial assumptions. If you accept them, you should be able to get the desired results. You do. Unfortunately, you also get undesired results and these are going to be present in ANY simulation model as well. It is as if you say: All men are Texans, and then I show you some men who are not Texans. Clearly, counterexamples invalidate the model.
 
  • #502
DevilsAvocado said:
To me, this looks like "trial & error", but I could be catastrophically wrong...

I would guess that they did a lot of trial and error to come up with their simulations. It had to be reverse engineered. I have said many times that for these ideas to work, there must be a bias function which is + sometimes and - others. So one would start from that. Once I know the shape of the function (which is cyclic), I would work on the periodicity.
 
  • #503
Thanks for the clarification DrC.
 
  • #504
unusualname said:
... No wave function or non-local effects are assumed.

I like the idea that particles might exchange protocols like packets in a wifi network, but it seems a bit unlikely :smile:

Yeah! I also like this approach.
In our simulation approach, we view each photon as a messenger carrying a message. Each messenger has its own internal clock, the hand of which rotates with frequency f. As the messenger travels from one position in space to another, the clock encodes the time-of-flight t modulo the period 1/f. The message, the position of the clock’s hand, is most conveniently represented by a two-dimensional unit vector ...


Through this I found 3 other small simulations (with minimal code) for Mathematica which relate to EPR:

  • Bell's Theorem
  • Generating Entangled Qubits
  • Retrocausality: A Toy Model

(All include a small web preview + code)
 

Last edited by a moderator:
  • #505
And this:

  • Event-by-Event Simulation of Double-Slit Experiments with Single Photons
 

Last edited by a moderator:
  • #506
DrChinese said:
1. As I say, it is a bit complicated. Keep in mind that the relevant issue is whether the delay is MORE for one channel or not. In other words, similar delays on both sides have little effect.

2. I use the setup of Weihs et al as my "golden standard". Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger (Submitted on 26 Oct 1998)
http://arxiv.org/abs/quant-ph/9810080

3. As to the size of the window itself: Weihs uses 6 ns for their experiment. As there are about 10,000 detections per second, the average separation between clicks might be on the order of 25,000 ns. The De Raedt simulation can be modified for the size you like obviously.

It follows that if you altered the window size and got a different result, that would be significant. But with a large time difference between most events, I mean, seriously, what do you expect to see here? ALL THE CLICKS ARE TAGGED! It's not like they were thrown away.

4. When I finish my analysis of the data (which is a ways off), I will report on anything I think is of interest.

1. I think the delay is only important when it depends on the angle of the filter. This relation can be equal on both sides.

2. De Raedt's work is based on the same article/data

3. All clicks are tagged, but not all clicks are used (that's why one uses a time window). It appears from De Raedt's analysis of the data from Weihs et al. that one needs to use a time window on the order of several ns to obtain QM-like results, the optimum being near 4 ns. Either a larger or a smaller time window will yield worse results (and the reason for the latter is not that the dataset gets too small for correct statistics).

4. You were able to obtain the raw data from Weihs et al.? I tried to find them, but I think they are no longer available on their site.
 
Last edited:
  • #507
ajw1 said:
1. I think the delay is only important when it depends on the angle of the filter. This relation can be equal on both sides.

2. De Raedt's work is based on the same article/data

3. All clicks are tagged, but not all clicks are used (that's why one uses a time window). It appears from De Raedt's analysis of the data from Weihs et al. that one needs to use a time window on the order of several ns to obtain QM-like results, the optimum being near 4 ns. Either a larger or a smaller time window will yield worse results (and the reason for the latter is not that the dataset gets too small for correct statistics).

4. You were able to obtain the raw data from Weihs et al.? I tried to find them, but I think they are no longer available on their site.

1. Keep in mind, the idea of some delay dependent on angle is purely hypothetical. There is no actual difference in the positions of the polarizers in the Weihs experiment anyway. It is fixed. To change angle settings:

"Each of the observers switched the direction of local polarization analysis with a transverse electro-optic modulator. It’s optic axes was set at 45° with respect to the subsequent polarizer. Applying a voltage causes a rotation of the polarization of light passing through the modulator by a certain angle proportional to the voltage [13]. For the measurements the modulators were switched fast between a rotation of 0° and 45°."


2. Yup. Makes it nice when we can all agree upon a standard.


3. I think you missed my point. I believe Weihs would call attention to the fact that it agrees with QM for the 6 ns case but not the 12 ns case (or whatever). It would in fact be shocking if any element of QM was experimentally disproved, don't you think? As with any experiment, the team must make decisions on a variety of parameters. If anyone seriously thinks that there is something going on with the detection window, hey, all they have to do is conduct the experiment.


4. I couldn't find it publicly.
 
  • #508
DrChinese said:
... the idea of some delay dependent on angle is purely hypothetical ...

That’s a big relief! :approve:
 
  • #509
DrChinese said:
3. I think you missed my point. I believe Weihs would call attention to the fact that it agrees with QM for the 6 ns case but not the 12 ns case (or whatever). It would in fact be shocking if any element of QM was experimentally disproved, don't you think? As with any experiment, the team must make decisions on a variety of parameters. If anyone seriously thinks that there is something going on with the detection window, hey, all they have to do is conduct the experiment.
I was not suggesting any unfair playing by Weihs (re-reading my post I agree it looks a bit that way) :wink:. Furthermore, as I said, De Raedt has analysed the raw data from Weihs et al. and published the exact relation between the chosen time window and the results: http://rugth30.phys.rug.nl/pdf/shu5.pdf . But surely you must have read this article.
 
Last edited by a moderator:
  • #510
DrChinese said:
1. Pot calling the kettle...

2. You apparently don't follow Mermin closely. He is as far from a local realist as it gets.

Jaynes is a more complicated affair. His Bell conclusions are far off the mark and are not accepted.
Again you've missed the point. I'm guessing that you probably didn't bother to read the papers I referenced.

DrChinese said:
I am through discussing with you at this time. You haven't done your homework on any of the relevant issues and ignore my suggestions. I will continue to point out your flawed comments whenever I think a reader might actually mistake your commentary for standard physics.
You haven't been discussing any of the points I've brought up anyway. :smile: You have a mantra that you repeat.

Here's another question for you. Is it possible that maybe the issue is a little more subtle than your current understanding of it?

If you decide you want to answer the simple questions I've asked you or address the salient points that I've presented (rather than repeating your mantra), then maybe we can have an actual discussion. But when you refuse to even look at a paper, or answer a few simple questions about what it contains, then I find that suspicious to say the least.
 
  • #511
DrChinese said:
That's bull. I am shocked you would assert this. Have you not been listening to anything about Bell? You sound like someone from 1935.
I take it then you do not understand the meaning of "correlation".

EDIT:
There are no global correlations. And on top of my prior post, I would like to mention that a Nobel likely awaits any iota of proof of your statement. Harmonic signals are correlated in some frames, but not in all.
You contradict yourself by finally admitting that in fact all harmonic signals are correlated. The fact that it is possible to screen-off correlations in some frames does not eliminate the fact that there exists a frame in which a correlation exists. In reverse, just because it is possible to find a frame in which a correlation is screened-off does not imply that the correlation does not exist.

In any case, my original point, which I believe still stands, is that two entities can be correlated even if they have never been in the same space-time area. It is trivial to understand that two systems governed by the same physical laws will be correlated whether or not they have been in the same space-time area.

I could go a step further and claim that every photon is correlated with every other photon just due to the fact that they are governed by the same physical laws, but I wouldn't as it is fodder for a different thread. ;-)
 
Last edited:
  • #512
... Houston, we have a problem with the FTL mechanism ...

The EPR paradox seems to be a bigger problem than one might guess at first sight. Bell's theorem has ruled out local hidden variables (LHV), both theoretically and practically by numerous Bell test experiments, all violating Bell inequalities.

To be more precise: Bell inequalities, LHV and Local realism are more or less the same thing, stating – there is an underlying preexisting reality in the microscopic QM world, and no object can influence another object faster than the speed of light (in vacuum).

There are other theories trying to explain the EPR paradox, like the non-local hidden variables theory (NLHV). But as far as I can tell, this has lately also been ruled out experimentally by Anton Zeilinger et al.

Then we have other interpretations of QM, like the Many Worlds Interpretation (MWI), Relational Blockworld (RBW), etc. Many of these interpretations have the 'disadvantage' of introducing mechanisms that, to many, are more 'astonishing' than the EPR paradox itself, and thereby contradict Occam's razor – "entities must not be multiplied beyond necessity".

Even if it seems like "Sci Fi", (the last and) the most 'plausible' solution to the EPR paradox seems to be some 'mechanism' operating faster than the speed of light between Alice & Bob. As DrChinese expresses (my emphasis):
DrChinese said:
... Because I accept Bell, I know the world is either non-local or contextual (or both). If it is non-local, then there can be communication at a distance between Alice and Bob. When Alice is measured, she sends a message to Bob indicating the nature of the measurement, and Bob changes appropriately. Or something like that, the point is if non-local action is possible then we can build a mechanism presumably which explains entanglement results.


If we look closer at this claim, we will see that even the "FTL mechanism" creates another unsolvable paradox.

John Bell used probability theory to prove that the statistical predictions of QM differ from those of LHV theories. Bell's theorem thus proves that true randomness is a fundamental part of nature:
"No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."
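To make the "differ from LHV" claim concrete, here is a standard textbook CHSH check (nothing specific to this thread; the settings and the anti-correlated form E(a,b) = −cos 2(a−b) for polarization-entangled photons are the usual ones):

```python
import math

# QM correlation for an anti-correlated (singlet-like) polarization pair.
def E(a, b):
    return -math.cos(2 * (a - b))

# Standard CHSH settings: Alice at 0 and 45 deg, Bob at 22.5 and 67.5 deg.
a, ap = 0.0, math.pi / 4
b, bp = math.pi / 8, 3 * math.pi / 8

# CHSH combination: any local hidden variable model obeys |S| <= 2.
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(abs(S), 2 * math.sqrt(2))  # both ~2.828: the LHV bound of 2 is violated
```

This is the quantitative content behind the quoted statement: no assignment of local preexisting values can reach |S| = 2√2.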

Now, what happens if we solve the EPR paradox with the "FTL mechanism"? Well, as DrC says, Alice sends a message to Bob to inform him about her angle and result, and what Bob needs to change appropriately.

Does this look like a fundamental and true randomness of the QM nature?

To me it doesn’t. Even if FTL is involved, there is a cause for Alice to send a message to Bob, and Bob will have a cause for his changes!?

This doesn’t make sense. This is a contradiction to the true randomness of QM, which Bell's theorem is proving correct!?

Any thoughts, anyone?
 
Last edited:
  • #513
Since https://www.physicsforums.com/showpost.php?p=2729969&postcount=485.

As I've previously noted, it's not a "probability" being described as negative, it's a possible case instance E(a,b) of a probability P(A,B|a,b). To explain the difference between a "possible case instance" and a "probability", consider a coin toss. The "probability" of a heads or tails is 50% each. A "possible case instance" will be either a heads or a tails, but no "probability" is involved after the coin toss, when we know which side the coin landed on. What is being compared is a large group of deterministically selected case instances.

Thus saying that case instances where the coin did in fact land on tails negatively interfere with heads is true, but makes no sense in terms of "probabilities". It's a case instance, not a probability. By itself this doesn't fully describe the negative [strike]probabilities[/strike] possibilities described on the "negative probabilities" page, because there are still too many negative possibilities to account for in coin tosses.

As is well known, in the derivation of Bell's inequalities, negative possibilities only occur in case 'instances' where detections are more likely in only one of the detectors, rather than neither or both. So far exactly as you would expect in coin tosses. To understand where the extra coin tosses come from we need to look at a definition.
1) Bell realism - An element of reality is defined by a predictable value of a measurement.

We have 2 measuring instruments A and B, which each consist of a polarizer and a photon detector. Each measure is considered an element of reality, per Bell realism, such that a measure by each of our 2 measuring instruments constitutes 2 elements of reality. Now we are going to emit a single photon at our detectors. Only detector A has a particular polarization setting, and detector B is not another detector, but another setting we could have chosen for detector A, i.e., a counterfactual measurement.

Now, by definition we are looking for 2 elements of reality, i.e., predictable measures per Bell realism. Yet if A detects our single photon, we know B can't, and vice versa. But if counterfactually both A and B were in principle capable of separately detecting that one photon, we are allowed to presume that only sometimes did A and B both see the photon (since we can call it both ways counterfactually), and sometimes not. So if that counterfactual measure can sometimes see the same photon, we are required to call that a separate element of reality per Bell realism, even though it's the same photon. Yet that requires us to also call the times A detected the photon but B didn't 2 separate elements of reality also.

If we call it the other way, and call both measurements the same element of reality per photon, it makes sense in those cases where one detector detects the photon, but not the other. But it violates Bell realism in cases where both detectors were capable of detecting that same photon. The negative possibility page presumes each measurement represents its own distinct element of reality, which makes sense in those cases where both A and B could have detected the same photon. Thus, in those cases where our single photon can't counterfactually be detected by both detectors, it appears as if reality itself has been negatively interfered with.

Objections:
But we are talking statistics of a large number of photons, not a single photon. The negative probabilities are of a large number of detections.

True, but by academic definition, the large number of cases were derived from the special cases E(a,b) of the general probability P(A,B|a,b). It's tantamount to flipping a coin many times, taking all the cases where E(a,b)=tails, and calling that a probability because we are dealing with many cases of tails, rather than just one.

This argument is contingent upon a single assumption: that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting. I justify this by the following empirical facts:
1) A polarizer will pass 50% of all 'randomly' polarized light.
2) A polarizer set at a 45 degree angle to a polarized beam of light will pass 50% of the light in that beam.
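Both empirical facts follow from Malus's law (transmitted fraction = cos² of the angle between the beam polarization and the polarizer axis). A quick numerical check, purely standard optics rather than anyone's model in this thread:

```python
import math

# Malus's law: fraction of polarized light passed by a polarizer
# at relative angle theta to the beam's polarization.
def malus(theta):
    return math.cos(theta) ** 2

# Fact 2: a polarizer at 45 degrees to a polarized beam passes 50%.
print(malus(math.pi / 4))  # ~0.5

# Fact 1: averaging cos^2 over all random polarization angles gives 50%.
n = 100000
avg = sum(malus(2 * math.pi * k / n) for k in range(n)) / n
print(round(avg, 3))  # 0.5
```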

Now this is perfectly well described by QM and the HUP, and this uncertainty is a LOCAL property of the individual photon itself. In QM, polarization is also perfectly well described as a quantum bit, where it can have values between 0 and 1. It is these partial values between 0 and 1 that allow the same photon to 'sometimes' be counterfactually detected with multiple polarizer settings. Yet this bit range is still a LOCAL property of the bit/photon.

We only have to accept the reality of HUP as a real property of the LOCAL photon polarization bit to get violations of Bell realism (a distinct issue from correlations). Yet the fact that correlations exist at all, and anti-twins (anti-correlated particles) can repeat the same response to polarizers deterministically, even with offsets in the 0/1 bits, indicates that as real as HUP is, it doesn't appear to be fundamental. So in this conception we have real LOCAL bit value ranges via HUP, legitimizing the QM coincidence predictions, with correlations that indicate HUP is valid, but not fundamental. The LOCAL validity of HUP is enough to break Bell's inequalities. While the breaking of Bell realism itself, due to LOCAL HUP, breaks the negative "possibility" proof.

The one-to-one correspondence between an element of reality (photon) and a detection is broken (Bell realism) when counterfactually a different detector setting can sometimes detect the same photon, and sometimes not. It does not explicitly break realism wrt the reality and locality of the photon itself. Detector and counterfactual detector are, after all, effectively in the same place.
 
Last edited by a moderator:
  • #514
ajw1 said:
I was not suggesting any unfair playing by Weihs (re-reading my post I agree it looks a bit that way) :wink:. Furthermore, as I said, De Raedt has analysed the raw data from Weihs et al. and published the exact relation between the chosen time window and the results: http://rugth30.phys.rug.nl/pdf/shu5.pdf . But surely you must have read this article.

Sure. And I consider it reasonable for them to make the argument that a change in time window causes some degradation of the results, although not enough to bring it into the realistic realm. This is a good justification for their algorithm, then, because theirs does not perfectly model the QM cos^2 relationship. But it does come sort of close, and it does violate a Bell inequality (as it should, for their purposes) while providing a full universe which does not. Again, as a simulation, I think their ideas are OK to that point. My issue comes at a different step.
 
Last edited by a moderator:
  • #515
DevilsAvocado said:
...Now, what happens if we solve the EPR paradox with the "FTL mechanism"? Well, as DrC says, Alice sends a message to Bob to inform him about her angle and result, and what Bob needs to change appropriately.

Does this look like a fundamental and true randomness of the QM nature?

To me it doesn’t. Even if FTL is involved, there is a cause for Alice to send a message to Bob, and Bob will have a cause for his changes!?

This doesn’t make sense. This is a contradiction to the true randomness of QM, which Bell's theorem is proving correct!?

Any thoughts, anyone?

I would tend to agree. FTL seems to fill in the cause. As I understand the Bohmian program, it is ultimately deterministic. Randomness results from stochastic elements.
 
  • #516
billschnieder said:
1. You contradict yourself by finally admitting that in fact all harmonic signals are correlated.

2. I could go a step further and claim that every photon is correlated with every other photon just due to the fact that they are governed by the same physical laws, but I wouldn't as it is fodder for a different thread. ;-)

1. I never said anything of the kind. Some synchronization is possible in some frames. Entangled particles are entangled in all frames as far as I know.

2. Maybe they are. That would be a global parameter. c certainly qualifies in that respect. Beyond that, exactly what are you proposing?
 
  • #517
my_wan said:
As is well known, in the derivation of Bell's inequalties, negative possibilities only occur in case 'instances' where detections are more likely in only one of the detectors, rather than neither or both. ...

We have 2 measuring instruments A and B, which each consist of a polarizer and a photon detector. Each measure is considered an element of reality, per Bell realism, such that a measure by each of our 2 measuring instruments constitutes 2 elements of reality. Now we are going to emit a single photon at our detectors. Only detector A has a particular polarization setting, and detector B is not another detector, but another setting we could have chosen for detector A, i.e., a counterfactual measurement.

Now, by definition we are looking for 2 elements of reality, i.e., predictable measures per Bell realism. Yet if A detects our single photon, we know B can't, and vice versa.

...


If we call it the other way, and call both measurements the same element of reality per photon, it makes sense in those cases where one detector detects the photon but not the other, yet it violates Bell realism in cases where both detectors were capable of detecting that same photon. The negative-probability page presumes each measurement represents its own distinct element of reality, which makes sense in those cases where both A and B could have detected the same photon. Thus, in those cases where our single photon can't counterfactually be detected by both detectors, it appears as if reality itself has been negatively interfered with.

Objections:
But we are talking statistics of a large number of photons, not a single photon. The negative probabilities are of a large number of detections.

True, but by academic definition, the large number of cases were derived from the special cases E(a,b) of the general probability P(A,B|a,b). It's tantamount to flipping a coin many times, taking all the cases where E(a,b) = tails, and calling that a probability because we are dealing with many cases of tails, rather than just one.

This argument is contingent upon a single assumption: that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting.

OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle. Show me a single photon - anywhere anytime - that cannot be detected by a polarizing beam splitter. Your assertion is simply incorrect! (Yes, in an ordinary PBS there is some inefficiency so 100% will not get through, but this is not what you are referring to.)

Further, the Bell program is to look for at least 3 elements of reality, not 2. The EPR program was 2.
 
  • #518
billschnieder said:
In any case, my original point, which I believe still stands, is that two entities can be correlated even if they have never been in the same space-time region. It is trivial to understand that two systems governed by the same physical laws will be correlated whether or not they have ever shared a space-time region.

Oh really? Trivial, eh? You really like to box yourself in. Well cowboy, show me something like this that violates Bell inequalities. I mean, other than entangled particles that have never been in each others light cones. LOL.

You see, it is true that you can correlate some things in simple ways. For example, you could create spatially separated Alice and Bob photons that both have H> polarization. OK, what do you have? Not much. But that really isn't what we are discussing, is it? Those photons are polarization correlated in a single basis only. Not so entangled photons, which are correlated in ALL bases. So sure, we all know about Bertlmann's socks, but this is not what we are discussing in this thread.
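The difference between "correlated in a single basis" and "correlated in ALL bases" is easy to make concrete with a toy Monte Carlo. This is my own illustrative sketch, not a model anyone in this thread published, and the function names are made up:

```python
import math
import random

def classical_pair_correlation(a, b, n=200_000):
    """Toy model: both photons carry the same fixed polarization (H = 0 rad)
    and fire independently with Malus-law probability cos^2(setting).
    Returns the correlation E(a, b) of the +1/-1 outcomes."""
    total = 0
    for _ in range(n):
        alice = 1 if random.random() < math.cos(a) ** 2 else -1
        bob = 1 if random.random() < math.cos(b) ** 2 else -1
        total += alice * bob
    return total / n

def qm_entangled_correlation(a, b):
    """QM prediction for polarization-entangled photon pairs: cos(2(a - b))."""
    return math.cos(2 * (a - b))

# In the shared preparation basis (0, 0) both models give perfect correlation.
theta = math.pi / 4
print(classical_pair_correlation(0.0, 0.0))      # 1.0
# Rotate both analyzers to 45 degrees: the product correlation of the
# merely-polarization-matched pair collapses, the entangled one does not.
print(classical_pair_correlation(theta, theta))  # near 0
print(qm_entangled_correlation(theta, theta))    # 1.0
```

Photons that merely share an H> polarization agree perfectly only in the preparation basis; the entangled prediction cos 2(a - b) stays at 1 whenever a = b, in every basis.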
 
  • #519
DrChinese said:
OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle. Show me a single photon - anywhere anytime - that cannot be detected by a polarizing beam splitter. Your assertion is simply incorrect! (Yes, in an ordinary PBS there is some inefficiency so 100% will not get through, but this is not what you are referring to.)

Further, the Bell program is to look for at least 3 elements of reality, not 2. The EPR program was 2.
Sentence 1): "OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle."

My argument is contingent upon the assumption that single photons can (counterfactually) be detected at more than 1 angle.

Sentence 2): "Show me a single photon - anywhere anytime - that cannot be detected by a polarizing beam splitter."

Simple enough. I'll do it for a whole beam of photons. Simply polarize the beam to a particular polarization, and turn a polarizer to 90 degrees of that beam. Effectively none of the photons will get through the polarizer to a detector. Not sure why you specified a "beam splitter" here, as I'm only talking about how a photon responds to a polarizer at the end of its trip, when final detection takes place for later coincidence comparisons. But it doesn't make a lot of difference.

Just because a quantum bit has effective values between 0 and 1 doesn't entail an equal likelihood of a measurement producing a 0 or a 1 in all cases.

Sentence 3): "Your assertion is simply incorrect!"
Suspect, given that sentence 1) attributes to me the opposite of what I actually claimed, on reasonable empirical grounds.

The parenthetical in sentence 3): "(Yes, in an ordinary PBS there is some inefficiency so 100% will not get through, but this is not what you are referring to.)"
True, that is not what I was referring to. As a matter of fact, I'm quite happy to assume 100% efficiency for practical purposes, even if not strictly valid. Nor does my argument involve the PBS, only the polarizers at the distant detection points (the ones paired with the photon detectors), at the time of final detection but before the coincidence counting takes place.
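The crossed-polarizer example above is just Malus's law and can be checked numerically. A minimal sketch of my own; `malus_transmission` is an illustrative name, not anything from the thread:

```python
import math

def malus_transmission(prep_deg, analyzer_deg):
    """Malus's law: probability that a photon prepared at prep_deg
    passes an analyzer set to analyzer_deg (angles in degrees)."""
    delta = math.radians(analyzer_deg - prep_deg)
    return math.cos(delta) ** 2

print(malus_transmission(0, 90))  # crossed polarizers: effectively 0
print(malus_transmission(0, 0))   # aligned: 1.0
print(malus_transmission(0, 45))  # ~0.5
```

So a beam prepared at H and analyzed at 90 degrees yields essentially no detections, exactly as claimed, while intermediate angles give the fractional rates that the "values between 0 and 1" remark refers to.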
 
  • #520
Oops, I left out sentence 4): "Further, the Bell program is to look for at least 3 elements of reality, not 2. The EPR program was 2."

Yes, and it is this 3rd "element of reality" that I am saying is sometimes a distinct "element of reality", when the photons are unique, and sometimes not, when counterfactually the same photon would have been detected by both the 2nd and the counterfactual 3rd so-called "element of reality" (detector).

What this would mean is that the negative probability you calculated is the percentage of photons that would have been detected by the 2nd and counterfactual 3rd "element of reality" (detectors).
 
  • #521
I'm still a bit shocked at sentence 1):

DrChinese said:
"OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle."

Let's enumerate sentences to the contrary in the specific post you responded to:
(Let's put the granddaddy of them first, even if out of occurrence order)

1) This argument is contingent upon a single assumption, that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting.

2) But if counterfactually both A and B were in principle capable of separately detecting that one photon, we are allowed to presume that only sometimes did A and B both see the photon (since we can call it both ways counterfactually), and sometimes not.

3) So if that counterfactual measure can sometimes see the same photon we are required to call that a separate element of reality per Bell Realism, even though it's the same photon.

4) It is these partial values between 0 and 1 that allows the same photon to 'sometimes' be counterfactually detected with multiple polarizer settings.

5) The one to one correspondence between an element of reality (photon) and a detection is broken (Bell realism), when counterfactually a different detector setting can sometimes detect the same photon, and sometimes not.

These are the sentences that explicitly require the opposite of what you claimed I said; many more are contingent upon it.
 
  • #522
my_wan said:
I'm still a bit shocked at sentence 1):
"

Let's enumerate sentences to the contrary in the specific post you responded to:
(Let's put the granddaddy of them first, even if out of occurrence order)

1) This argument is contingent upon a single assumption, that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting.

I am so confused at what you are asserting. :bugeye:

My version does not require counterfactuality. I can do the experiment all day long. A photon can be observed at any angle at any time. I can do angle A, then B, then C, then C again, etc. And I still have a photon. So again, I am calling you out: please quote a reference which describes the behavior you mention, and point to a spot in Bell where this is referenced. Or alternately say that it is your personal speculation.
 
  • #523
DrC,
This model I am describing is new to me; it only occurred to me during this debate a few days ago. Before then the contextuality issue was purely theoretical, however reasonable it appeared to me as at least possible. Now I'm trying to express it as it's being investigated. I'm quite aware that I haven't been completely lucid in my account of it in all cases, but it seemed to me that the underlying idea should be fairly clear. Maybe that's just my perspective, though.

Regardless, debating you has been far more fruitful and informative than I could possibly have hoped. It's rare for me to have the pleasure of such a worthy debate. The science certainly will not be decided by this debate, and to declare a winner or loser would not be science by any stretch of the imagination.

Beyond the rebuttals I supplied, which I found to be reasonable, I stick to my limited proof claim; this is quickly turning into independent research for me, due to my newfound 'definable' contextuality scheme. So I'll answer questions if anyone is interested, but this debate does not exist to be won, rather for learning, and I have learned more than I could have hoped, thanks to you. If I'm not expressing myself clearly enough to get the quality rebuttals the debate started with, it's time I run with my newfound understanding and put my money where my mouth is. Thank you for such a worthy debate.

P.S. :-p
I understand that your counterfactual C can be run as a separate experiment. But when counterfactually matching it against the previously 'actual' experiment, there is a crossover in certain 'instances' where the photons from B and C should sometimes show up as common events (where B and C are calling the same events distinct), whereas in the individual experiments they were indeed distinct elements of reality. Your calculation, in my interpretation, was a statistical count of the percentage of events common to B and C, assuming C is measured on the B side for purposes of definition.

Hopefully that might help, but it's time for me to do something more real than debate it. The computer modeling sounds interesting. :biggrin: If it works the way I hope, I should be able to emulate an expansion series, and express photons as large base 2 quasi-random numbers in a text file. Kind of a finite way to emulate a single quantum bit, with a probability function built into the random variance of a long 0/1 binary sequence. I'll have to limit the angle setting to half angle increments to keep photon number a reasonable size. The code should be simple enough, but why it works, if it does, may still be confusing just from reading the source. But now I'm just running my mouth.
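For what it's worth, the "long 0/1 binary sequence" encoding mooted above might look something like this minimal sketch. It is purely speculative on my part: `make_photon` and `detect` are hypothetical names, and I have collapsed the quasi-random expansion-series idea to a plain Bernoulli bit string:

```python
import math
import random

def make_photon(pol_deg, analyzer_deg, length=4096):
    """Encode a photon's response to one analyzer setting as a 0/1 string
    whose density of 1s equals the Malus-law probability cos^2(delta)."""
    p = math.cos(math.radians(analyzer_deg - pol_deg)) ** 2
    return [1 if random.random() < p else 0 for _ in range(length)]

def detect(photon_bits):
    """A single detection event samples one bit from the string."""
    return random.choice(photon_bits)

photon = make_photon(0, 30)        # prepared H, analyzer at 30 degrees
rate = sum(photon) / len(photon)   # approx cos^2(30 deg), about 0.75
```

The probability function is then "built into" the statistics of the string, as the post suggests: the density of 1s carries the detection probability, and each sampled bit plays the role of one detection.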

Thanks, it was not only your modeling on your website, but your forcing me to face a perspective other than my own, that gave me a new toy that might even pay off, or at least that I can learn from. I'm off to play.
 
  • #524
DrChinese said:
I would tend to agree. FTL seems to fill in the cause. As I understand the Bohmian program, it is ultimately deterministic. Randomness results from stochastic elements.

Yes, and if FTL brings cause to Bell test experiments, then either Bell's theorem or FTL goes in the paper bin.

And there seem to be additional dark clouds gathering on the "Bell sky"...
(original paper from your site)
http://www.drchinese.com/David/Bell_Compact.pdf
John S. Bell
...

VI. Conclusion
In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.

Of course, the situation is different if the quantum mechanical predictions are of limited validity. Conceivably they might apply only to experiments in which the settings of the instruments are made sufficiently in advance to allow them to reach some mutual rapport by exchange of signals with velocity less than or equal to that of light. In that connection, experiments of the type proposed by Bohm and Aharonov [6], in which the settings are changed during the flight of the particles, are crucial.

Instantaneously!? Not Lorentz invariant! ?:confused:?

Not only has QM non-locality 'problems', here goes Einstein, SR and RoS down the drain??

I have absolutely no idea what to think... we must all have missed something very crucial... because all this is too strange to be true... :eek:
 
  • #525
my_wan said:
...Regardless, debating you has been far more fruitful and informative than I could possibly have hoped. It's rare for me to have the pleasure of such a worthy debate. ... Thanks, it was not only your modeling on your website, but your forcing me to face a perspective other than my own, that give me a new toy that might even pay. At least learn from. I'm off to play.

I am glad if I was a help in any small way. The point is often to address a different perspective, and in that regard I benefit too. :smile:
 
