Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #736
DrChinese said:
A more generally accepted argument is that the GHZ argument renders the Fair Sampling assumption moot.
Can you produce some reference?

I gave a reference for the opposite in my post https://www.physicsforums.com/showthread.php?p=2760591#post2760591 but this paper is not freely accessible, so it's hard to discuss. But if you give your reference, then maybe we will be able to discuss the point.
 
  • #737
ThomasT said:
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.

Again, you might take a look at the lotto card analogy I offered in post #2 here. If Alice and Bob are each sent scratch lotto cards with a choice of one of three boxes to scratch, and we find that on every trial where they choose the same box to scratch they end up seeing the same fruit, a natural theory would be that the source is always creating pairs of cards that have the same set of "hidden fruits" behind each of the three boxes. But this leads to the conclusion that on the trials where they choose different boxes there should be at least a 1/3 probability they'll see the same fruit, so if the actual observed frequency of seeing the same fruit when they scratch different boxes is some smaller number like 1/4, this can be taken as a falsification of the idea that the identical results when identical boxes are chosen can be explained by each card being assigned identical hidden properties by the source.
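The 1/3 bound in the lotto card analogy can be checked by brute force. Below is a minimal sketch (the fruit names and the two-fruit assumption are illustrative choices, matching the analogy) that enumerates every possible "hidden fruit" card type and shows that none can push the different-box match rate below 1/3:

```python
from itertools import product
from fractions import Fraction

fruits = ["cherry", "lemon"]
boxes = [0, 1, 2]
# All ordered pairs of *different* boxes Alice and Bob can scratch.
pairs = [(a, b) for a in boxes for b in boxes if a != b]

# Every local hidden-variable "card type": the same three hidden
# fruits printed on both Alice's and Bob's cards.
worst = Fraction(1)
for hidden in product(fruits, repeat=3):
    # Fraction of different-box choices on which the fruits match.
    match = Fraction(sum(hidden[a] == hidden[b] for a, b in pairs), len(pairs))
    worst = min(worst, match)

print(worst)  # 1/3 -- no card type gets the different-box match rate below 1/3
```

So an observed match rate like 1/4 is impossible for any source that just prints identical hidden fruits on both cards, which is the analogue of the Bell inequality violation.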
ThomasT said:
Do you doubt that this is the view of virtually all physicists?
Virtually all physicists would agree that the violation of Bell inequalities constitutes a falsification of the kind of theory you describe, assuming you're talking about a purely local theory.
 
  • #738
zonde said:
Can you produce some reference?

I gave a reference for the opposite in my post https://www.physicsforums.com/showthread.php?p=2760591#post2760591 but this paper is not freely accessible, so it's hard to discuss. But if you give your reference, then maybe we will be able to discuss the point.

Here are a couple that may help us:

Theory:
http://www.cs.rochester.edu/~cding/Teaching/573Spring2005/ur_only/GHZ-AJP90.pdf

Experiment:
http://arxiv.org/abs/quant-ph/9810035

"It is demonstrated that the premisses of the Einstein-Podolsky-Rosen paper are inconsistent when applied to quantum systems consisting of at least three particles. The demonstration reveals that the EPR program contradicts quantum mechanics even for the cases of perfect correlations. By perfect correlations is meant arrangements by which the result of the measurement on one particle can be predicted with certainty given the outcomes of measurements on the other particles of the system. This incompatibility with quantum mechanics is stronger than the one previously revealed for two-particle systems by Bell's inequality, where no contradiction arises at the level of perfect correlations. Both spin-correlation and multiparticle interferometry examples are given of suitable three- and four-particle arrangements, both at the gedanken and at the real experiment level. "
 
  • #739
ThomasT said:
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.

Do you doubt that this is the view of virtually all physicists?

Do you see anything wrong with this view?

Sorry, I may have missed this post, and I saw JesseM replying so I thought I would chime in...

The EPR conclusion is most certainly not the view which is currently accepted. That is because the EPR view has been theoretically (Bell) and experimentally (Aspect) rejected. But that was not the case in 1935. At that time, the jury was still out.

What is wrong with this view is that it violates the Heisenberg Uncertainty Principle. Nature does not allow that.
 
  • #740
zonde said:
Yes, that is only speculation. Nothing straightforwardly testable.
It is demonstrably consistent with any test. This consistency follows from the fact that if you take a polarized beam and place a polarizer in its path at an offset, defined as the difference between the light's polarization and the polarizer setting, the statistics of what is passed, defined by the light intensity making it through that polarizer, exactly match the assumptions I am making in all cases.

To demonstrate you can use this polarizer applet:
http://www.lon-capa.org/~mmp/kap24/polarizers/Polarizer.htm
Just add a second polarizer and consider the light coming through the first polarizer your polarized beam, which means you double whatever percentage is read, because the 50% lost to the first polarizer doesn't count.
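For readers without the applet handy, the same readings can be reproduced with Malus's law. This is a minimal sketch, not the applet's code; the function name and the angle choices are illustrative:

```python
import math

def transmitted_fraction(beam_angle_deg, polarizer_angle_deg):
    """Malus's law: fraction of a linearly polarized beam's intensity
    passed by a polarizer at the given angle."""
    delta = math.radians(polarizer_angle_deg - beam_angle_deg)
    return math.cos(delta) ** 2

# The first polarizer (at 0 deg) passes 50% of unpolarized light and
# leaves the beam polarized at 0 deg; the second polarizer then obeys
# Malus's law relative to that beam -- which is the "doubled" reading
# described above, since the 50% lost at the first polarizer doesn't count.
for second in (0.0, 22.5, 45.0, 67.5, 90.0):
    frac = transmitted_fraction(0.0, second)
    print(f"second polarizer at {second:4.1f} deg passes {frac:.3f} of the polarized beam")
```

At 22.5° this gives ~0.854 passed (≈15% lost), and at 45° exactly 0.5, matching the percentages discussed below.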

zonde said:
Or lost 35% and gained 35%. Or lost x% and gained x%.
The question is not about lost photon count = gained photon count.
Question is about this number - 15%.
You will keep insisting that it's 15% because it's 15% both ways then we can stop our discussion right there.
The number 15% only results from the 22.5° setting. If we use a 45° setting then it's 50% lost and 50% gained. Any setting theta defines both lost and gained, because sin^2(theta) = |cos^2(90-theta)| in all cases. There is nothing special about 22.5° and 15%.


zonde said:
sin(theta)=cos(90-theta) is trivial trigonometric identity. What you expect to prove with that?
That's why it constitutes a proof at all angles, and not just at the 22.5 degree setting that gives 15% lost and gained in the example used.

zonde said:
Switch routes to ... FROM what?
You have no switching with ONE setting. You have to have switching FROM ... TO ... otherwise there is no switching.
Lost is the photons that would have passed the polarizer but didn't at that setting. Gained is the photons that wouldn't have passed the polarizer but did at that setting. Let's look at it using a PBS, so we can divide things into H, V polarizations and L, R routes through the polarizer.

Consider a PBS, rather than a plain polarizer, placed in front of a polarized beam of light that evenly contains pure H and V polarized photons. We'll label the V polarization as angle 0. So a PBS set at angle 0 will have 100% of the V photons take the L route and 100% of the H photons take the R route. At 22.5 degrees, L is ~85% V photons and ~15% H photons, while the R beam now contains ~15% V photons and ~85% H photons. WARNING: you have to consider that measuring the photons at a new setting changes the photons' polarization to be consistent with that new setting. At a setting of 45 degrees you get 50% H and 50% V going L, and 50% H and 50% V going R. Nothing special about 15% or the 22.5 degree setting.

Now what sin^2(theta) = cos^2(90-theta) represents here is any one (but only one) polarizer setting, such that theta is the same number in both cases: sin^2(theta) is the V photons that switch to the R route, while cos^2(90-theta) is the H photons that switch to the L route.

Now since this is a trig identity in all cases, it is valid for ANY uniform mixture of polarizations, whether two pure H and V beams or a random distribution, which by definition is a uniform mixture of polarizations.

It would even be easy to make non-uniform beam mixtures, where certain ranges of polarizations are missing from the beam, such that sin^2(theta) = cos^2(90-theta) can be used to define the ratios of beam intensities as theta, the polarizer setting, is adjusted. If ANY situation can be defined, from any crafted beam mixture, where sin^2(theta) = cos^2(90-theta) doesn't properly predict the beam intensity ratios, then I'm wrong.
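The claim in this post is straightforwardly checkable numerically. Here is a hedged sketch (the function name and the 50/50 H+V beam are illustrative choices, not from the post) that applies Malus's law photon by photon and confirms both the trig identity and the route fractions described above:

```python
import math

def port_fractions(polarizations_deg, weights, setting_deg):
    """Fractions of total beam intensity exiting the L (transmitted) and
    R (reflected) ports of a PBS at `setting_deg`, applying Malus's law
    P(L) = cos^2(polarization - setting) photon by photon."""
    t = math.radians(setting_deg)
    L = sum(w * math.cos(math.radians(p) - t) ** 2
            for p, w in zip(polarizations_deg, weights))
    total = sum(weights)
    return L / total, (total - L) / total

# The trig identity behind the argument: at any one setting theta, the
# V fraction that switches route, sin^2(theta), equals the H fraction
# that switches the other way, cos^2(90 - theta).
for theta in (0.0, 15.0, 22.5, 45.0, 67.5):
    lost = math.sin(math.radians(theta)) ** 2
    gained = math.cos(math.radians(90.0 - theta)) ** 2
    assert abs(lost - gained) < 1e-12

# A 50/50 mix of V (0 deg) and H (90 deg): the two ports stay 50/50 at
# any setting, as described above for the 45 deg case.
L, R = port_fractions([0.0, 90.0], [1.0, 1.0], 22.5)
print(L, R)  # both ~0.5
```

Non-uniform mixtures can be tested the same way by changing the `weights` list.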

And here's the kicker: by defining properties in terms of the photons' properties, rather than properties defined by the polarizer settings that detect them, and using these polarizer path statistics, BI violation statistics also result as a consequence.
 
  • #741
DrChinese said:
Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
It relates to the arbitrary angle condition placed on the modeling of hv models and nothing else.
Consider:
A hidden variable model successfully models QM coincidence statistics, but requires a coordinate freedom that is objected to. The following properties are noted:
1) One or the other, but not both, detector settings must be defined to have a 0 angle setting. (objection noted)
2) The detector defined as having the zero setting has zero information about the other detector's setting.
3) The zero setting can be arbitrarily changed to any absolute setting, along with the detector angle changes, with or WITHOUT redefining absolute photon polarizations in the process.
4) The default photon polarizations can be rotated with absolute impunity, having no effect whatsoever on the coincidence statistics.
5) The only thing considered for detections/non-detections is the photon polarization relative to the setting of the detector it actually hit.

Thus this proves the 0 coordinate requirement in no way hinges upon physical properties unique to the angles chosen. It is a mathematical artifact, related to non-commuting vectors. It's essentially the equivalent of giving only the path of a pool ball and demanding that the path of the cue ball that hit it be uniquely calculable in order to prove pool balls are real.

I'll get around to attempting to use predefined non-commutative vectors to get around it soon, but I have grave doubts. Disallowing arbitrary 0 coordinates is tantamount to disallowing an inertial observer from defining their own velocity as 0, which would require a universal 0 velocity.

At the very least, I would appreciate it if you quit misrepresenting the 0 angle condition as a statistically unique physical state at that angle.
 
  • #742
JesseM said:
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.

When you say "this is exactly the sort of theory that Bell examined", it does require some presumptive caveats: that the properties supposed to be carried by the photons are uniquely identified by the routes they take at a detector.

If a particle has a perfectly distinct property, in which a detector setting tuned to a nearby setting has some nonlinear odds of defining the property as equal to that offset setting, then BI violations ensue. The problem for models is that vector products are non-commutative, requiring a 0 angle to be defined for one of the detectors.

Consider a hv model that models BI violations, but has the 0 setting condition. You can assign one coordinate system to the emitter, which the detectors know nothing about, and another coordinate system to the detectors, which the emitter knows nothing about, but which rotates in tandem with one or the other detector. Now rotating the emitter has absolutely no effect on the coincidence statistics whatsoever, thus proving that the statistics are not unique to physical states of the particles at a given setting. You can also have any arbitrary offset between the two detectors, and consistency with QM is still maintained. Thus the non-commutativity of vectors is the stumbling block for such models. But the complete insensitivity to arbitrary emitter settings proves it's not a physical stumbling block.

So perhaps you can explain to me the physical significance of requiring a non-physical coordinate choice to give exactly the same answers to vector products, under arbitrary rotations, when you can't even do that on a pool table?
 
  • #743
my_wan said:
Lost is photons that would have passed the polarizer but didn't at that setting. Gained is what wouldn't have passed the polarizer but did at that setting. Let's look at it using a PBS so we can divide things in H, V, and L, R routes through the polarizer.

Consider a PBS rather than a plain polarizer placed in front of a simple polarized beam of light that evenly contains pure H an V polarized photons. We'll label the V polarization as angle 0. So, a PBS set a angle 0 will have 100% of the V photons takes L route, and 100% of the H photons takes R. At 22.5 degrees L is ~85% V photons and ~15% H photons, while R beams now contains ~15% V photons and ~85% H photons. WARNING: You have to consider that by measuring the photons at a new setting, it changes the photons polarization to be consistent with that new setting. At a setting of 45 degree you get 50% H and 50% V going L, and 50% H and 50% V going R. Nothing special about 15% or the 22.5 degree setting.
You compare a measurement at the 22.5° angle with a hypothetical measurement at the 0° angle. When I used similar reasoning, your comment was:

This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any one measurement, not 2 polarizer settings in any one measurement. Instead you have one measurement at one location, and what you are comparing is the statistics of photons that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.
So how is your reasoning so radically different from mine that you are allowed to use reasoning like that but I am not?

But let's say it's fine, and look at a slightly modified version of your case.
Now take a beam of light that consists of H, V, +45° and -45° polarized light. What angle should be taken as the 0 angle in this case? Let's say it's again the V polarization that is the 0 angle. Can you work out the photon rates in the L and R beams for all photons (H, V, +45, -45)?

my_wan said:
Now what the sin^2(theta) = cos^2(90-theta) represents here is anyone (but only one) polarizer setting, such that theta=theta in both cases, and our sin^2(theta) is V photons that switch to the R route, while cos^2(90-theta) is the H photons that switch to the L route.
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition theta1 = theta2 - 90?
Otherwise it's quite unclear what you mean by your statement.
 
  • #744
zonde said:
So how is your reasoning so radically different from mine that you are allowed to use reasoning like that but I am not?

When I give the formula sin^2(theta) = |cos^2(90-theta)|, theta and theta are the same number from the same measurement. Hence:
sin^2(0) = |cos^2(90-0)|
sin^2(22.5) = |cos^2(90-22.5)|
sin^2(45) = |cos^2(90-45)|
etc.

You only make presumptions about the path statistics of individual photons, and wait until after the fact to do any comparing to another measurement.

You previously gave the formula:
zonde said:
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|
Here you put in the 0 from the first measurement as if it's part of what you are now measuring. It's not. The 22.5 is ALL that you are now measuring. The only thing you are comparing, after the fact, is the resulting path effects. You don't include measurements you are not presently performing to calculate the results of the measurement you are now making. This keeps the reasoning separate and avoids the interdependence inherent in the presumed non-local aspect of EPR correlations. It also allows you to compare it to any arbitrary other measurement without redoing the calculation. Keeping the measurements separate is a non-trivial condition of modeling EPR correlations without non-local effects. On these grounds alone, mixing settings from other measurements into the calculation of the present measurement must be rejected. Only the after-the-fact results may be compared, to see whether the local path assumptions remain empirically and universally consistent, with and without EPR correlations.

The primary issue remains whether the path statistics are consistent for both the pure H and V case and the randomized polarization case. This is the point on which I will stake all my pride by stating that this is unequivocally a factual yes. This should also calculate cases in which the intensities of H and V are unequal, giving the variations of intensity at various polarizer settings. Such non-uniform beam mixtures can quite easily be tested experimentally. From a QM perspective this would be equivalent to interference in the wavefunction at certain angles.
 
  • #745
DrChinese said:
Nice, it's exactly the same paper I looked at. I was just unsure whether posting that link would violate forum rules.
As the file is not searchable, I can point out that the text I quoted can be found on p. 1136, in the last full paragraph (end of the page).

DrChinese said:
Experiment:
http://arxiv.org/abs/quant-ph/9810035

"It is demonstrated that the premisses of the Einstein-Podolsky-Rosen paper are inconsistent when applied to quantum systems consisting of at least three particles. The demonstration reveals that the EPR program contradicts quantum mechanics even for the cases of perfect correlations. By perfect correlations is meant arrangements by which the result of the measurement on one particle can be predicted with certainty given the outcomes of measurements on the other particles of the system. This incompatibility with quantum mechanics is stronger than the one previously revealed for two-particle systems by Bell's inequality, where no contradiction arises at the level of perfect correlations. Both spin-correlation and multiparticle interferometry examples are given of suitable three- and four-particle arrangements, both at the gedanken and at the real experiment level. "
I think I caught the point you are making.

Let's see if I will be able to explain my objections from the viewpoint of contextuality.
First about EPR, Bell and non-contextuality.
If we take a photon that has polarization angle 0° and put it through a polarizer at angle 0°, it goes through with certainty. However, if we change the polarizer angle to 45°, it goes through with a 50% chance (that's basically Malus's law).
So when we have entangled photons, we have a prediction that this 50% chance is somehow correlated between the two entangled photons.
Bell's solution to this was non-contextuality, i.e. the photon is predetermined to take its chance one way or the other. I would argue that EPR does not contain any considerations regarding the solution of this particular problem - it was just a statement of the general problem.

So what are the other options, different from Bell's solution? As I see it, the other solution is that photons can be considered as taking this 50% chance (under a 45° measurement basis) dependent on the particular conditions of the polarizer (the context of measurement). But in that case it is obvious that this correlation between two entangled photons taking their chances the same way should be a correlation between the measurement conditions of the two photons, and not only a correlation between the photons themselves. This of course leaves the question of how the measurement conditions get "entangled", and here I speculate that some leading photons from the ensemble transfer their "entanglement" to the equipment at the cost of becoming uncorrelated.
That way we have classical correlation when we measure photons in the same basis they were created in (0° and 90° measurement basis), and quantum (measurement context) correlation when we measure photons using a basis incompatible with the one they were created in (+45° and -45° measurement basis).
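For comparison, the standard QM predictions being discussed here can be written down in a few lines. This sketch assumes the polarization-entangled state (|HH> + |VV>)/sqrt(2), for which the correlation is E(a, b) = cos(2(a - b)); the angle choices are the usual CHSH ones, not anything from this post:

```python
import math

def E(a_deg, b_deg):
    """QM correlation for the entangled state (|HH> + |VV>)/sqrt(2):
    E(a, b) = cos(2(a - b)) for polarizer angles a, b in degrees."""
    return math.cos(2.0 * math.radians(a_deg - b_deg))

# Same basis the pairs were created in: perfect correlation.
same_basis = E(0.0, 0.0)  # 1.0

# CHSH combination at the standard angles: any local hidden-variable
# model is bounded by |S| <= 2, while QM reaches 2*sqrt(2).
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # ~2.828, violating the local bound of 2
```

The perfect same-basis correlation is what any classical "same properties from the source" account reproduces; the violation at the rotated settings is where those accounts fail.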

Now if we go back to GHZ: these inequalities were derived using Bell's non-contextual approach. If we look at them from the perspective of contextuality, then we can see that this measurement context correlation is not strictly tied to photon polarizations, but by varying the experimental setup it could be possible to get quite different correlations than the ones you would expect from pure classical polarization correlations.
And if we isolate conditions so that we measure mostly measurement context correlations, then pure classical polarization correlations will be only indirectly related to the observed results.
 
  • #746
my_wan said:
When I give the formula sin^2(theta) = |cos^2(90-theta)| theta and theta are the same number from the same measurement.
Please tell me what theta represents physically.

As I asked already:
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition theta1 = theta2 - 90?
Or is it something else?
 
  • #747
zonde said:
Now if we go back to GHZ. ...

Imagine that for a Bell inequality, you look at some group of observations. The local realistic expectation differs from the QM expectation by a few percent - perhaps 30% versus 25%, or something like that.

On the other hand, GHZ essentially makes a prediction of Heads for LR, and Tails for QM, every time. You essentially NEVER get a Heads in an actual experiment; every event is Tails. So you don't have to ask whether the sample is fair. There can be no bias - unless Heads events are per se not detectable, but how could that be? There are no Tails events ever predicted according to Realism.

So using a different attack on Local Realism, you get the same result: Local Realism is ruled out. Now again, there is a slight split here, as there are scientists who conclude from GHZ that Realism (non-contextuality) is excluded in all forms, and others who restrict this conclusion to Local Realism only.
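The "Heads vs Tails" point can be made concrete by enumeration. For the three-particle GHZ state (|000> + |111>)/sqrt(2), QM predicts the outcome products XYY, YXY, YYX to be -1 with certainty and XXX to be +1 with certainty (sign conventions for that state); the sketch below checks every possible local pre-assignment of X and Y values against those four certainties:

```python
from itertools import product

# QM certainties for the GHZ state (|000> + |111>)/sqrt(2):
#   <X Y Y> = <Y X Y> = <Y Y X> = -1  and  <X X X> = +1.
# A local hidden-variable model must pre-assign each particle definite
# X and Y outcomes in {+1, -1}. Enumerate all 2^6 assignments:
satisfying = [
    (x1, x2, x3, y1, y2, y3)
    for x1, x2, x3, y1, y2, y3 in product((+1, -1), repeat=6)
    if x1 * y2 * y3 == -1
    and y1 * x2 * y3 == -1
    and y1 * y2 * x3 == -1
    and x1 * x2 * x3 == +1
]

print(len(satisfying))  # 0: every assignment fails at least one certainty
```

The contradiction is algebraic: multiplying the three -1 constraints gives x1*x2*x3 = -1 (the y's appear squared), directly contradicting the +1 prediction for XXX, with no statistics involved.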
 
  • #748
my_wan said:
When you say "this is exactly the sort of theory that Bell examined", it does require some presumptive caveats: that the properties supposed to be carried by the photons are uniquely identified by the routes they take at a detector.

If a particle has a perfectly distinct property, in which a detector setting tuned to a nearby setting has some nonlinear odds of defining the property as equal to that offset setting, then BI violations ensue.
Do you just mean that local properties of the particle are affected by local properties of the detector it comes into contact with? If so, no, this cannot lead to any violations of the Bell inequalities. Suppose the experimenters each have a choice of three detector settings, and they find that on any trial where they both chose the same detector setting they always got the same measurement outcome. Then in a local hidden variables model where you have some variables associated with the particle and some with the detector, the only way to explain this is to suppose the variables associated with the two particles predetermined the result they would give for each of the three detector settings; if there was any probabilistic element to how the variables of the particles interacted with the state of the detector to produce a measurement outcome, then there would be a finite probability that the two experimenters could both choose the same detector setting and get different outcomes. Do you disagree?
my_wan said:
Consider a hv model that models BI violations, but has the 0 setting condition. You can assign one coordinate system to the emitter, which the detectors know nothing about. Another coordinate system to the detectors, which the emitter knows nothing about, but rotates in tandem with one or the other detector.
What do you mean by "assigning" coordinate systems? Coordinate systems are not associated with physical objects, they are just aspects of how we analyze a physical situation by assigning space and time coordinates to different events. Any physical situation can be analyzed using any coordinate system you like, the choice of coordinate system cannot affect your predictions about coordinate-invariant physical facts.

Anyway, your description isn't at all clear, could you come up with a mathematical description of the type of "hv model" you're imagining, rather than a verbal one?
 
  • #749
DrChinese said:
ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.
Yes, the generally accepted view does use language like you do. And the generally accepted view for 30 years was that von Neumann's proof disallowed hidden variable theories, even though that proof had been shown to be unacceptable some 30 years before Bell's paper.

Zeilinger's language in the quote you provided, and the general tone of his continuing program, and your language wrt Bell, indicate to me that neither of you understand the subtleties of the arguments being presented here and in certain papers (which are, evidently, not being as clearly presented as necessary) regarding the interpretation of Bell's theorem (ie., the physical basis of Bell inequalities).

You can provide all the quotes you want. Quotes don't refute arguments. You're going to have to refute some purported LR models that reproduce qm predictions but are not rendered in the form of Bell's LHV model.

However, you refuse to look at them because:

DrChinese said:
I have a requirement that is the same requirement as any other scientist: provide a local realistic theory that can provide data values for 3 simultaneous settings (i.e. fulfilling the realism requirement). The only model that does this that I am aware of is the simulation model of De Raedt et al. There are no others to consider. There are, as you say, a number of other *CLAIMED* models yet none of these fulfill the realism requirement. Therefore, I will not look at them.

Please explain what you mean by "a local realistic model that can provide data values for 3 simultaneous settings". Three simultaneous settings of what? In the archetypal optical Bell test setup there's an emitter, two polarizers, and two detectors. The value of (a-b), the angular difference in the polarizer settings, can't have more than one value associated with any given pair of detection attributes. So, I just don't know what you're talking about wrt your 'requirement'.

My not understanding your 'requirement' might well be just a 'mental block' of some sort on my part. In any case, before we can continue, so that you might actually 'refute' something (which you haven't yet), you're going to have to explain, as clearly as you can, what this "data values for 3 simultaneous settings" means and how it is a 'requirement' that purported LR models of entanglement must conform to.

DrChinese said:
(Again, an exception for the De Raedt model which has a different set of issues entirely.)
My understanding is that a simulation is not, per se, a model. So, a simulation might do what a model can't. If this is incorrect, then please inform me. But if it is incorrect, then what's the point of a simulation -- when a model would suffice?

Here's my thinking about this: suppose we eventually get a simulation of an optical Bell test which reproduces the observed results. And further suppose that this simulation involves only 'locally' produced 'relationships' between counter-propagating optical disturbances. And further suppose that this simulation can only be modeled in a nonseparable (nonfactorizable) way. Then what might that tell us about Bell's ansatz?
 
  • #750
DevilsAvocado said:
I must inform the casual reader: Don’t believe everything you read at PF, especially if the poster defines you as "less sophisticated".
No offense DA, but you are 'the casual reader'.

DevilsAvocado said:
Everything is very simple: If you have one peer reviewed theory (without references or link) stating that 2 + 2 = 5 and a generally accepted and mathematically proven theorem stating 2 + 2 = 4, then one of them must be false.
No. Interpreting Bell's theorem (ie., Bell inequalities) is not that simple. If it was then physicists, and logicians, and mathematicians wouldn't still be at odds about the physical meaning of Bell's theorem. But they are, regardless of the fact that those trying to clarify matters are, apparently, a small minority at the present time.

DevilsAvocado said:
And remember: Bell’s theorem has absolutely nothing to do with "elementary optics" or any other "optics", I repeat – absolutely nothing. Period.
Do you think that optical Bell tests (which comprise almost all Bell tests to date) have nothing to do with optics? Even the 'casual reader' will sense that something is wrong with that assessment.

The point is that if optical Bell tests have to do with optics, then any model of those experimental situations must have to do with optics also.

By the way, the fact that I think you're way off in your thinking on this doesn't diminish my admiration for your obvious desire to learn, and your contributions to this thread. Your zealous investigations and often amusing and informative posts are most welcome. And, I still feel like an idiot for overreacting to what I took at the time to be an unnecessarily slanderous post. (Maybe I was just having a bad day. Or, maybe, it isn't within your purview to make statements about other posters' orientations regarding scientific methodology -- unless they've clearly indicated that orientation. The fact is that the correct application of the scientific method sometimes requires deep logical analysis. My view, and the view of many others, is that Bell's 'logical' analysis didn't go deep enough. And, therefore, the common interpretations of Bell's theorem are flawed.)

So, while it's granted that your, and DrC's, and maybe even most physicists, current opinion and expression regarding the physical meaning of violations of BIs is the 'common' view -- consider the possibility that you just might be missing something. You seem to understand that Bell's theorem has nothing to do with optics. I agree. Is that maybe one way of approaching, and understanding, the question of why Bell's ansatz gives incorrect predictions wrt optical Bell tests?
 
  • #751
my_wan said:
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool ball collision under 2 different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, but to an entirely different answer altogether. For this reason such vector operations are generally avoided, using scalar multiplication instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of vectors, there exists no way of uniquely recovering the particular vectors that went into defining it.

DrChinese said:
Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
I think that my_wan's point might be related to Christian's formulation of his (Christian's) LR model. Christian's point being that you need an algebra that can suitably represent the rotational invariance of the local beables -- which, in his estimation, Bell didn't represent adequately. According to Christian, Bell's ansatz misrepresents the topology of the experimental situation. Christian has produced 5 or so papers that I know of (for anyone interested, go to arxiv.org and search on Joy Christian) trying to explain his idea(s). I don't fully understand what he's saying. That is, presently, I'm having difficulty incorporating what Christian is saying into my own 'intuitive' understanding of what I currently regard as the lack of depth in Bell's 'logical' analysis. Although, intuitively, I see a connection. I've read his papers and the discussions on sci.physics.research that Christian participated in a couple of years ago, and the impression I got was that he became frustrated with the lack of knowledge and preparation of those involved. Since then, I've seen nothing about his stuff and don't know if it's still under consideration for publication or not. Maybe he just abandoned it. Maybe someone should send him an email or something to find out what's what. (No, not me!) After all, the guy is a bona fide mathematical physicist who got his PhD under Shimony -- and he has published some respected peer reviewed stuff. It's very curious to me. If he came to the conclusion that he was wrong, then wouldn't he be obligated, as a scientist, to say so? I assume that there are physicists and mathematicians here at PF qualified to critique his stuff. So, maybe they will contribute their synopses and critiques.

Anyway, I think my_wan's considerations about vectors are related to this. If I'm wrong, then please let me know why.
 
  • #752
JesseM said:
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.
Bell's ansatz depicts the data sets A and B as being statistically independent. And yet we know that separately accumulated data sets produced by a common cause can be statistically dependent -- even when there is no causal dependence between the spacelike separated events that comprise the separate data sets -- precisely because the spacelike separated events have a common cause.

Bell has assumed that statistical dependence implies causal dependence. But we know that it doesn't. So, I ask you, is Bell's purported locality condition, in fact, a locality condition?

JesseM said:
Virtually all physicists would agree that the violation of Bell inequalities constitutes a falsification of the kind of theory you describe, assuming you're talking about a purely local theory.
But that isn't what I asked.

What I asked was:

ThomasT said:
... given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.
Assuming the conservation laws are correct, then are these sorts of deductions allowable?
 
  • #753
DrChinese said:
The EPR conclusion is most certainly not the view which is currently accepted.
The EPR conclusion was that qm is not a complete description of the physical reality underlying instrumental phenomena. Are you telling me that this isn't the view of a majority of physicists? If so, how do you know that? Every single physicist that I've talked to personally about this, whether they're familiar with EPR and Bell etc. or not, has said to me that they regard qm, in the sense of a description of the physical reality underlying instrumental phenomena, to be incomplete. This doesn't speak to why it's incomplete, or whether it must be incomplete, but just that it is incomplete. I conjecture that this is the view of a majority of working physicists. Now you can do a representative survey to prove that that conjecture is incorrect. But in the absence of such a survey, then a conjecture to the contrary is also just a conjecture.

DrChinese said:
That is because the EPR view has been theoretically (Bell) and experimentally (Aspect) rejected. But that was not the case in 1935. At that time, the jury was still out.
The EPR view stands as well today as it did in 1935. Either a real physical disturbance with real physical attributes is being emitted by the emitter or it isn't. If it isn't, then, according to a strict, realistic interpretation of the qm formalism, the reality of B depends on a detection at A, and vice versa. Pretty silly, eh?

DrChinese said:
What is wrong with this view is that it violates the Heisenberg Uncertainty Principle. Nature does not allow that.
There's nothing in the EPR view that violates the uncertainty relations.

EPR says that two particles emitted from a common source are related wrt an applicable conservation law. So that, if the position of particle A is measured, then the position of particle B can be deduced, and if the momentum of particle A is measured, then the momentum of particle B can be deduced.

The uncertainty relations say that, over a large number of similar preparations, the deviation from the statistical mean value of the position measurements is related to the deviation from the statistical mean value of the momentum measurements by the following inequality: (delta q) . (delta p) >= hbar/2 (where hbar = h/2pi is the reduced Planck constant, h being the quantum of action).

As Bohr so eloquently, and yet so, er, cryptically, expressed, the uncertainty relations have no bearing on the relationship between a measurement that WAS made at A and a measurement that WAS NOT made at B.

Bottom line, the EPR argument has nothing to do with the uncertainty relations.

But it has everything to do with the, I conjecture, virtually universal acceptance that qm is an incomplete description of the physical reality underlying instrumental phenomena.

However, just so you don't take what I'm saying the wrong way. I don't see that an unassailably more complete description is possible. Even though there are LR models of entanglement that reproduce the qm predictions, there's absolutely no way to ascertain whether or not they're accurate depictions of an underlying reality. This was Bohr and Heisenberg's, et al., meeting of the minds, so to speak. Qm is, as a probability calculus of instrumental phenomena, as complete as it needs to be, and as complete as it can be without unnecessary and ultimately frustrating speculation regarding what does or doesn't exist or what is or isn't happening in the deep reality underlying instrumental phenomena. The point is that qm, as a mathematical probability calculus, can continue to be progressively developed, and technologically applicable, without any consensus regarding the constituents or behavior of proposed underlying 'elements of reality'.
 
  • #754
JesseM said:
Do you just mean that local properties of the particle are affected by local properties of the detector it comes into contact with? If so, no, this cannot lead to any violations of the Bell inequalities.
I'll go through the computer model (virtual detectors) I used in your question below. I'll also explain the empirical based assumptions used.

JesseM said:
Suppose the experimenters each have a choice of three detector settings, and they find that on any trial where they both chose the same detector setting they always got the same measurement outcome.
Naturally you get consistency between experiments, at least statistically. It really would be weird otherwise. But real experiments are limited to 2 setting choices at a time. The 3rd setting is a counterfactual from previous experiments. I doubt you've read the unfair coin example: an unfair coin with a tiny adjuster, set so that it matches a second coin 85% of the time. By defining a 3rd simultaneous setting you are putting very severe non-random constraints on how it relates to the 2 other settings: completely correlated with one and totally uncorrelated with the other, yet expected to match stochastically with both, based on statistical profiles pulled from previous experiments without such constraints. Neither a classical nor a QM mechanism allows this. Only, QM is not explicitly time dependent, so it's much harder to see the mechanism counterfactually in QM.

JesseM said:
Then in a local hidden variables model where you have some variables associated with the particle and some with the detector, the only way to explain this is to suppose the variables associated with the two particles predetermined the result they would give for each of the three detector settings; if there was any probabilistic element to how the variables of the particles interacted with the state of the detector to produce a measurement outcome, then there would be a finite probability that the two experimenters could both choose the same detector setting and get different outcomes. Do you disagree?
Finite, maybe. Though there's at least some reason to believe nature is not finite. But assuming finite, I can also calculate the odds that all the air in the half of the room you are in spontaneously ends up in the other half of the room. The odds of it happening are indeed finite, but I'm not holding my breath just in case.

JesseM said:
What do you mean by "assigning" coordinate systems? Coordinate systems are not associated with physical objects, they are just aspects of how we analyze a physical situation by assigning space and time coordinates to different events. Any physical situation can be analyzed using any coordinate system you like, the choice of coordinate system cannot affect your predictions about coordinate-invariant physical facts.
Quite simple cases exist where quantities are not coordinate-invariant, and a very important one involves basic vector products. Consider:
http://www.vias.org/physics/bk1_09_05.html
http://www.vias.org/physics/bk1_09_05.html said:
The operation's result depends on what coordinate system we use, and since the two versions of R have different lengths (one being zero and the other nonzero), they don't just represent the same answer expressed in two different coordinate systems. Such an operation will never be useful in physics, because experiments show physics works the same regardless of which way we orient the laboratory building! The useful vector operations, such as addition and scalar multiplication, are rotationally invariant, i.e., come out the same regardless of the orientation of the coordinate system.
It states it "will never be useful in physics", yet both the Born rule and Malus's law involve just such a vector product if you presume there is some underlying mechanism. Given just a single vector magnitude it's not even possible to uniquely identify the vectors it was derived from.
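To make the invariance point concrete, here is a minimal Python sketch (the particular vectors and the 45-degree rotation are arbitrary illustrative choices): the component-wise "product" of two vectors changes length when the coordinate system is rotated, while the dot product does not.

```python
import math

def rotate(v, angle):
    """Rotate a 2-D vector counterclockwise by angle (radians)."""
    x, y = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def componentwise_product(u, v):
    """The naive 'vector multiplication': multiply matching components."""
    return (u[0] * v[0], u[1] * v[1])

def dot(u, v):
    """The rotationally invariant scalar (dot) product."""
    return u[0] * v[0] + u[1] * v[1]

u, v = (1.0, 0.0), (0.0, 1.0)
u2, v2 = rotate(u, math.pi / 4), rotate(v, math.pi / 4)   # rotate the 'lab' 45 degrees

len_before = math.hypot(*componentwise_product(u, v))     # 0.0
len_after = math.hypot(*componentwise_product(u2, v2))    # ~0.707: not invariant!

d_before, d_after = dot(u, v), dot(u2, v2)                # both 0.0: invariant

print(len_before, len_after, d_before, d_after)
```

This is exactly the R-has-two-different-lengths situation the vias.org page describes: one frame gives a zero-length "product", the rotated frame a nonzero one, so the operation cannot represent a physical quantity.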

JesseM said:
Anyway, your description isn't at all clear, could you come up with a mathematical description of the type of "hv model" you're imagining, rather than a verbal one?
My model is based on a computer model, virtual emitters and detectors.

Assumptions (I'll use photons and polarizations for simplicity):
1) A photon has a single unique default polarization, which is only unique in that upon meeting a polarizer at the same polarization it effectively has a 100% chance of passing that polarizer.
2) The odds that a photon will pass through a polarizer that is offset from that photon's default polarization are given by cos^2(theta), i.e., Malus's law.
3) A bit field predefines passage, or not, through a polarizer the photon meets at various settings; the odds of a bit being predefined as 1 (for passage) are determined, at the emitter, by whether a random number generator with a min/max of 0/1 rolls less than cos^2(theta).
4) A random number with a min/max of 0/359.5, rounded to half degree increments, predefines the default polarization at the emitter. These can be rotated with impunity.

For computer modeling, a default polarization and a bit field are set. I used a 180-bit field, which predefines passage or not for each 1/2 degree over 90 degrees, reversed for every other 90 degrees. The odds that the 10-degree bit, for instance, is predefined as 1 are cos^2(10). Anticorrelated photons are simply flipped 180 degrees, with the same bit field. The photons can be randomly generated and written to a text file. I have lots of improvements to try, but haven't gotten to them yet.

The formula, when a photon meets a detector, is simply (polarizer1 - photon1), and (polarizer2 - photon2) at the other end. Then simply count that many bits into the bit field to see if a detection occurs. No Malus's-law calculation is used here because it's built into the statistics of the bit field. Detections are returned before comparisons are made between polarizer1 and polarizer2.

This only works to match QM predictions if one of the polarizer settings is defined to be 0. Yet you can rotate the photons coming from the emitter with impunity, without affecting the coincidence statistics. So there exists no unique physical state at certain rotations. Neither polarizer directly references the setting of the other polarizer. Only the difference between a photon's default polarization and the polarizer setting it actually comes in contact with is used to define detections.

The 0 angle is the biggest issue. You could also add another 719 180-bit fields, one per 1/2-degree increment, to remove the 0-degree requirement on one of the detectors. This would blow up into a huge, possibly infinite, number of variables in real-world conditions, but if quantum computers work as well as expected this shouldn't be an issue.

I'm not happy with this, and have a lot of improvements to try, when I get to it. Including using predefined ranges instead of bit fields, and non-commutative vector rotations in an attempt to remove the coordinate rotations as I change a certain detector setting. I have my doubts about these.
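For anyone who wants to poke at this, here is a rough Python transcription of the recipe above. It is a sketch of my reading of the scheme, not my_wan's actual code: the function names are mine, the half-degree granularity and the "reversed for every other 90 degrees" handling follow the description as I understand it, and I have not verified that it reproduces the claimed coincidence statistics away from the 0 setting.

```python
import math
import random

STEP = 0.5      # half-degree granularity (my_wan's stated choice)
NBITS = 180     # one bit per half degree, covering 90 degrees

def make_photon_pair(rng):
    """Create an anticorrelated pair per the recipe: a default polarization
    in half-degree steps, plus a shared bit field whose bit k (offset k/2
    degrees) is preset to 1 with probability cos^2(k/2 degrees), building
    Malus's-law statistics into the field."""
    pol = rng.randrange(720) * STEP                   # 0 .. 359.5 degrees
    field = [1 if rng.random() < math.cos(math.radians(k * STEP)) ** 2 else 0
             for k in range(NBITS)]
    photon1 = {"pol": pol, "field": field}
    photon2 = {"pol": (pol + 180.0) % 360.0, "field": field}  # flipped twin
    return photon1, photon2

def detect(photon, polarizer):
    """Count (polarizer - photon) into the bit field; the field is
    'reversed for every other 90 degrees'."""
    offset = (polarizer - photon["pol"]) % 180.0
    if offset < 90.0:
        return photon["field"][int(offset / STEP)]
    return 1 - photon["field"][int((offset - 90.0) / STEP)]

def coincidence_rate(angle_a, angle_b, trials=5000, seed=1):
    """Fraction of pairs where both sides register a detection."""
    rng = random.Random(seed)
    both = 0
    for _ in range(trials):
        p1, p2 = make_photon_pair(rng)
        both += detect(p1, angle_a) & detect(p2, angle_b)
    return both / trials

# Equal settings give identical outcomes on every trial, and orthogonal
# settings never coincide, as the recipe intends.
print(coincidence_rate(0.0, 0.0), coincidence_rate(0.0, 90.0))
```

As far as I can tell this reproduces the perfect correlations at equal settings and the zero coincidence rate at 90 degrees; whether the full cos^2 coincidence curve comes out right when neither setting is 0 is exactly the issue my_wan flags above.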
 
Last edited by a moderator:
  • #755
zonde said:
Please tell me what theta represents physically.

As I asked already:
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition that theta1 = theta2 - 90?
Or it's something else?

Theta is simply a polarizer setting relative to any arbitrary coordinate system. However, it only leads to valid counterfactual (after the fact) comparisons to route statistics at a 0 setting, but it makes no difference which coordinate choice you use, so long as the photon polarizations are uniformly distributed across the coordinate system.
 
  • #756
ThomasT said:
Bell's ansatz depicts the data sets A and B as being statistically independent.
Only when conditioned on the appropriate hidden variables represented by the value of λ. When not conditioned on λ, Bell's argument says there can certainly be a statistical dependence between A and B, i.e. P(A|B) may be different than P(A). Do you disagree?
ThomasT said:
And yet we know that separately accumulated data sets produced by a common cause can be statistically dependent -- even when there is no causal dependence between the spacelike separated events that comprise the separate data sets -- precisely because the spacelike separated events have a common cause.
Yes, and this was exactly the possibility that Bell was considering! If you don't see this, then you are misunderstanding something very basic about Bell's reasoning. If A and B have a statistical dependence, so P(A|B) is different than P(A), but this dependence is fully explained by a common cause λ, then that implies that P(A|λ) = P(A|λ,B), i.e. there is no statistical dependence when conditioned on λ. That's the very meaning of equation (2) in Bell's original paper, that the statistical dependence which does exist between A and B is completely determined by the state of the hidden variables λ, and so the statistical dependence disappears when conditioned on λ. Again, please tell me if you disagree with this.
ThomasT said:
Bell has assumed that statistical dependence implies causal dependence.
No, he didn't. He was explicitly considering a case where there is a statistical dependence between A and B but not a causal dependence because the dependence is fully explained by λ. In the simplest type of hidden-variables theory, λ would just represent some set of hidden variables assigned to each particle by the source when the two particles were created, which remained unchanged as they traveled to the detector and which determined their responses to various detector settings.
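This "screening off" is easy to exhibit in a simulation. A toy sketch (the 0/1 hidden value and the 10% local noise are arbitrary illustrative choices, not any specific physical model): A and B are marginally dependent, P(A|B) differs from P(A), yet conditioning on λ makes the dependence vanish, P(A|λ,B) = P(A|λ), which is all that Bell's equation (2) asserts.

```python
import random

def run_trials(n=100000, seed=0):
    """Toy common-cause model: the source draws a hidden value lam and
    stamps it on both particles; each side's result depends only on its
    own particle's lam plus purely local noise."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        lam = rng.randrange(2)                          # hidden common cause
        a = lam if rng.random() < 0.9 else 1 - lam      # Alice's result
        b = lam if rng.random() < 0.9 else 1 - lam      # Bob's result
        trials.append((lam, a, b))
    return trials

def cond(trials, event, given=lambda t: True):
    """Estimate P(event | given) from the trial list."""
    sel = [t for t in trials if given(t)]
    return sum(1 for t in sel if event(t)) / len(sel)

trials = run_trials()
A1 = lambda t: t[1] == 1

p_a          = cond(trials, A1)                                     # ~0.5
p_a_given_b  = cond(trials, A1, lambda t: t[2] == 1)                # ~0.82
p_a_given_l  = cond(trials, A1, lambda t: t[0] == 1)                # ~0.9
p_a_given_lb = cond(trials, A1, lambda t: t[0] == 1 and t[2] == 1)  # ~0.9

# Marginally, B tells you a lot about A; conditioned on lam, it tells
# you nothing more -- the dependence is screened off by the common cause.
print(p_a, p_a_given_b, p_a_given_l, p_a_given_lb)
```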

It would really help if you looked over my lotto card analogy in post #2 here! Your comments suggest you may be confused about the most basic aspects of Bell's proof, so instead of trying to understand the abstract equations in his original paper, I think it would definitely help to look over a concrete model of a situation where we propose a simple hidden-variables theory (involving a common cause, namely the cards being assigned identical 'hidden fruits' by the source) to explain a statistical dependence in observed measurements (the fact that whenever Alice and Bob choose the same box on their respective cards to scratch, they always find the same fruit behind it).
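The lotto-card bound itself can be checked by brute force. A short sketch (two fruit types and three boxes, as in the analogy): enumerating every hidden-fruit assignment the source could stamp on both cards shows that no assignment pushes the different-box match rate below 1/3.

```python
from itertools import product

FRUITS = ("cherry", "lemon")
BOXES = (0, 1, 2)

# Ordered pairs of *different* boxes Alice and Bob might scratch.
diff_pairs = [(i, j) for i in BOXES for j in BOXES if i != j]

# The local hidden-variable hypothesis: the source preloads the SAME
# triple of hidden fruits onto both cards. Enumerate every possibility.
lowest = 1.0
for hidden in product(FRUITS, repeat=3):
    matches = sum(1 for i, j in diff_pairs if hidden[i] == hidden[j])
    lowest = min(lowest, matches / len(diff_pairs))

# No assignment gets the different-box match rate below 1/3, so an
# observed rate of 1/4 falsifies every such hidden-fruit model.
print(lowest)   # 0.333...
```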
ThomasT said:
So, I ask you, is Bell's purported locality condition, in fact, a locality condition?
Properly understood, yes it most certainly is.
ThomasT said:
JesseM said:
Virtually all physicists would agree that the violation of Bell inequalities constitutes a falsification of the kind of theory you describe, assuming you're talking about a purely local theory.
But that isn't what I asked.

What I asked was:
ThomasT said:
given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.
That paragraph is not even a question, so I would say that's not what you asked. My comment above was in response to your question (which I quoted in my post), "Do you doubt that this is the view of virtually all physicists?" And I understood "this is the view" to refer to your earlier comment "The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance", i.e. specifically the view that correlations in quantum physics could be explained by this sort of common cause, in which case this is not the view of virtually all physicists. On the other hand, if you were just asking whether all physicists would agree there are some situations (outside of QM) where correlations between separated measurements can be explained in terms of common causes, then of course the answer is yes.
ThomasT said:
Assuming the conservation laws are correct, then are these sorts of deductions allowable?
Here you seem to be asking a new question, and my answer is "yes, in cases where two disturbances are emitted by a common source, observations of one may allow for deductions about the other". And of course, the whole point of Bell's argument was to consider whether or not the observed correlations between measurements on entangled particles could be explained in terms of this sort of "common cause" explanation in a local realist theory. The conclusion he reached was that any such explanation would imply certain Bell inequalities, which are experimentally observed to be violated in quantum experiments.
 
Last edited:
  • #757
JesseM said:
Only when conditioned on the appropriate hidden variables represented by the value of λ. When not conditioned on λ, Bell's argument says there can certainly be a statistical dependence between A and B, i.e. P(A|B) may be different than P(A). Do you disagree?
Does the form, Bell's (2), denote statistical independence or doesn't it?

JesseM said:
Yes, and this was exactly the possibility that Bell was considering! If you don't see this, then you are misunderstanding something very basic about Bell's reasoning. If A and B have a statistical dependence, so P(A|B) is different than P(A), but this dependence is fully explained by a common cause λ, then that implies that P(A|λ) = P(A|λ,B), i.e. there is no statistical dependence when conditioned on λ. That's the very meaning of equation (2) in Bell's original paper, that the statistical dependence which does exist between A and B is completely determined by the state of the hidden variables λ, and so the statistical dependence disappears when conditioned on λ. Again, please tell me if you disagree with this.
It seems that you're saying that if the disturbances incident on a and b have a common cause, then the results, A and B, can't be statistically dependent. Is that what you're saying?

JesseM said:
... the whole point of Bell's argument was to consider whether or not the observed correlations between measurements on entangled particles could be explained in terms of this sort of "common cause" explanation in a local realist theory. The conclusion he reached was that any such explanation would imply certain Bell inequalities, which are experimentally observed to be violated in quantum experiments.
Yes, well, we disagree then on the depth of Bell's analysis, or simply on the way his analysis and result are communicated. The problem is that viable LR models of entanglement exist. Would you care to look at one and refute it -- either wrt its purported locality or reality, or its agreement with qm predictions?
 
Last edited:
  • #758
my_wan said:
Quiet simple cases exist were quantities are not coordinate-invariant, and a very important one involves basic vector products. Consider:
http://www.vias.org/physics/bk1_09_05.html

We are not discussing whether 2 measurements commute or not. Or two vector operations. We are discussing whether 2 measurements on separated particles have various attributes. So this statement and the related example are completely meaningless in the context of this discussion. I really wish you would stop mentioning it as it leads us nowhere useful.

We understand that your model lacks rotational invariance in that it works with a reference angle of 0 and not at others. And no, it is not OK that you can define any angle as 0 to make it appear to work. Your "trick" works because you are effectively communicating Alice's setting to Bob or vice versa. Whether or not vectors add in all coordinate systems does not change this point in any way.
 
  • #759
JesseM, do you think that most physicists equate EPR's spooky action at a distance with quantum correlations?
 
  • #760
ThomasT said:
Does the form, Bell's (2), denote statistical independence or doesn't it?
You haven't defined what you mean by "statistical independence". I think I made clear already that two variables can be statistically dependent in their marginal probabilities but statistically independent when conditioned on other variables.

Could you please answer the questions I ask you in my posts, like this one?
When not conditioned on λ, Bell's argument says there can certainly be a statistical dependence between A and B, i.e. P(A|B) may be different than P(A). Do you disagree?
ThomasT said:
It seems that you're saying that if the disturbances incident on a and b have a common cause, then the results, A and B, can't be statistically dependent. Is that what you're saying?
If the common cause is the complete explanation for the statistical dependence in the marginal probabilities, then when conditioned on the common cause they wouldn't be statistically dependent (i.e. if you already know precisely what properties were given to system A by the common cause which also gave some related properties to system B, then learning about a later measurement on system B will tell you nothing new about what you are likely to see when you measure system A). Do you disagree with that? If you do disagree, can you think of any classical examples where we have correlations that are completely explained by a common cause, yet where the above would not be true?

Of course you could have a more complicated situation where there were multiple common causes, and perhaps also some direct causal influences between A and B. But then the given common cause wouldn't be the complete explanation for the correlation observed between measurements on A and B.
ThomasT said:
Yes, well, we disagree then on the depth of Bell's analysis.
OK, but do you want to engage in an actual substantive discussion about the details of his analysis and whether his assumptions are justified? If so then I would ask that you please answer my direct questions to you, and also address the examples and arguments I present like the lotto card analogy in post #2 here or the argument I made about conditioning on complete past light cones, and what this would imply in both deterministic and probabilistic theories, in this post. Of course there's no need to respond to all of this immediately, but if you are intellectually serious about exploring the truth and not just trying to engage in rhetorical denunciations, then I'd like some assurances that you do plan to address my questions and arguments seriously if we're going to keep discussing this stuff.
 
  • #761
ThomasT said:
Does the form, Bell's (2), denote statistical independence or doesn't it?

It seems that you're saying that if the disturbances incident on a and b have a common cause, then the results, A and B, can't be statistically dependent. Is that what you're saying?

A common cause is assumed. Statistical correlation of A and B is assumed as well. Even perfect correlations can be explained, all within Bell (2). There is no problem with any of this. This is simply a restatement of what EPR was trying to say.

The problem is getting this to agree with the QM expectation values. There are a variety of constraints on this, as my_wan has discovered. Under scenario a), the Malus relationship does not hold except at privileged angle settings. Under scenario b), an infinite, or at least very large, amount of data must be encoded. And both of these scenarios are BEFORE we come to terms with a Bell Inequality.

So my point is that those of you attacking Bell (2) are coming at it backwards. It is a generic statement, and does not provide any particular insight into the EPR issue at all. Any way you want to express the statement "The result A does not depend on setting b, and vice versa" would work here. Bell calls this requirement essential because it is his version of locality. (Or call it "local causality" if that is a preferable label.) Bell assumed his version would not cause anybody to have a cow, that it would be accepted as a mathematical version of the "...A not dependent on b..." statement. So whether or not there is a statistical connection makes no difference, since that is assumed by everyone. Bell (2) is not an expression of the statistical independence of A and B. It has to do with the independence of A and b, and of B and a. If your model has A dependent on b, then it fails test #1, because it is not local.
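To see concretely what "getting Bell (2) to agree with the QM expectation values" is up against, here is a sketch of a standard textbook toy local model (the sign-of-cosine rule is my illustrative choice, not anything from this thread or from Bell's paper): the factorized expectation E(a,b) = ∫ dλ ρ(λ) A(a,λ) B(b,λ) comes out a sawtooth in (a − b), matching the quantum −cos(a − b) only at the special angles.

```python
import math

def model_E(a, b, nlam=20000):
    """Expectation from a Bell-(2)-factorized toy model:
    E(a,b) = (1/N) * sum over uniform lam of A(a,lam) * B(b,lam),
    with a sign-of-cosine rule for each side. Note that A depends
    only on (a, lam) and B only on (b, lam) -- Bell's locality form."""
    total = 0
    for k in range(nlam):
        lam = 2.0 * math.pi * k / nlam
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1   # anticorrelated partner
        total += A * B
    return total / nlam

def qm_E(a, b):
    """Quantum prediction for the spin-1/2 singlet state."""
    return -math.cos(a - b)

# The toy model reproduces the perfect anticorrelation at equal settings
# but is linear (a sawtooth) in (a - b), so it misses the QM cosine at
# intermediate angles:
for deg in (0.0, 22.5, 45.0, 90.0):
    t = math.radians(deg)
    print(deg, round(model_E(0.0, t), 3), round(qm_E(0.0, t), 3))
```

Perfect correlations and perfect anticorrelations are cheap to reproduce locally; it is the shape of the curve in between, and hence the Bell inequalities, that any factorized model runs into.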
 
  • #762
JesseM said:
You haven't defined what you mean by "statistical independence".
Factorability of the joint probability into the product of the probabilities of A and B. Isn't that the definition of statistical independence?
 
  • #763
ThomasT said:
Factorability of the joint probability. The product of the probabilities of A and B. Isn't that the definition of statistical independence?

a and b are different than A and B. A and B do not need to be independent.
 
  • #764
DrChinese said:
a and b are different than A and B. A and B do not need to be independent.
Exactly. But that's how Bell's model denotes them. In Bell's model, the data sets A and B are independent.
 
  • #765
DrChinese said:
So Bell (2) is not an expression of the independence of statistical correlations A and B.
Are you sure about that? I think it's been demonstrated that Bell's ansatz reduces to the probability definition of statistical independence. If you think otherwise then maybe you should revisit the posts in this and other threads dealing with that.
 
  • #766
DrC, the view of many physicists, as indicated by past discussions I've had here at PF, is that Bell's idea was that if the data sets A and B were statistically dependent, then they must be causally dependent. Of course, we know this is wrong.
 
  • #767
ThomasT said:
Are you sure about that? I think it's been demonstrated that Bell's ansatz reduces to the probability definition of statistical independence. If you think otherwise then maybe you should revisit the posts in this and other threads dealing with that.

I have stated many times: A and b, not A and B. The result A definitely correlates with B. The question is: does A change with b? It shouldn't in a local world. In other words: if Alice's result changes when spacelike-separated Bob moves his measurement dial, then there is spooky action at a distance. I know you will agree with that statement.

From the EPR conclusion: "This makes the reality of P and Q depend upon the process of measurement carried out on the first system, which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this." They are saying the same thing as Bell (2).

And in Bell's words: "The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet for particle 1, nor A on b." Which he then presents in his form (2). There is no restriction on the correlation of A and B in this.

So the data points for A with setting a come out the same regardless of the value of b. Of course the correlation of A and B may change with a change in a or b. Bell (2) is not saying anything about that. If you doubt this, just re-read what Bell said above. Or what EPR said.
 
  • #768
ThomasT said:
DrC, the view of many physicists, as indicated by past discussions I've had here at PF, is that Bell's idea was that if the data sets A and B were statistically dependent, then they must be causally dependent. Of course, we know this is wrong.

There is nothing wrong with causality in this situation. I mean, the entire point is that the pairs are clones (or anti-clones) of each other because they were created at the same time. The question is whether there is observer independence in the outcomes. Whether the reality of P and Q are independent of what goes on elsewhere. Whether the result A is dependent on setting b.

And as far as anyone knows, this is "possible" within constraints when you consider Bell (2) by itself. This has been demonstrated by who knows how many local realistic papers. But of course all this falls apart when you add the realism requirement. EPR said that it was possible to constrain reality to just the number of observables that could be predicted simultaneously (1), but that was too restrictive (in their opinion). So by then applying the less restrictive definition of reality which they claim as reasonable (2 or more), Bell obtains his famous result.
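The "falls apart" step can be seen numerically with the CHSH quantity S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′) (a toy comparison of my own, using the standard CHSH angle choices): any model of the factorized local form stays within |S| ≤ 2, while the quantum singlet prediction E(a,b) = −cos 2(a−b) reaches 2√2.

```python
import math
import random

# CHSH sketch (my own toy comparison): a factorized local model obeys
# |S| <= 2; the quantum singlet correlation E(a,b) = -cos(2(a-b))
# reaches 2*sqrt(2).

def E_local(a, b, n=200_000, seed=1):
    # outcome = sign of cos(2*(setting - lam)); B is Alice's anti-clone
    sgn = lambda x: 1 if math.cos(2 * x) >= 0 else -1
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, math.pi)  # shared hidden variable
        total += sgn(a - lam) * -sgn(b - lam)
    return total / n

def E_qm(a, b):
    return -math.cos(2 * (a - b))

def chsh(E):
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(chsh(E_local)))  # ~2.0, pinned at the local bound
print(abs(chsh(E_qm)))     # 2.828..., i.e. 2*sqrt(2)
```

The local value hovers at 2 only up to Monte Carlo noise; the quantum value exceeds the bound exactly, which is Bell's famous result in CHSH form.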
 
  • #769
ThomasT said:
Exactly. But that's how Bell's model denotes them. In Bell's model, the data sets A and B are independent.

No, that is my point. You have misinterpreted Bell (2). How many times must I repeat Bell:

"The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet for particle 1, nor A on b."

"The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet for particle 1, nor A on b."

"The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet for particle 1, nor A on b."

Yes, I am pretty good with ^V. And there can be a connection between A and B. In fact, there has to be to have an element of reality according to EPR. It was assumed that A and B would be perfectly correlated when a=b. That is how you predict the outcome of one without first disturbing it.
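The lotto-card analogy mentioned earlier in the thread can be checked by brute force (a hypothetical two-fruit, three-box enumeration of my own): if both cards carry identical predetermined "fruits", the match rate when two different boxes are scratched is at least 1/3 for every possible assignment.

```python
from itertools import product

# Brute-force check of the lotto-card argument (hypothetical labels:
# "C" = cherry, "L" = lemon). Both cards carry the same three hidden
# fruits; we count matches over all choices of two *different* boxes.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
worst = min(
    sum(card[i] == card[j] for i, j in pairs) / len(pairs)
    for card in product("CL", repeat=3)
)
print(worst)  # 0.333... -- never below 1/3
```

An observed different-box match rate below 1/3 therefore rules out every "identical hidden fruits" assignment, which is the counting core of the Bell argument.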
 
  • #770
DrChinese said:
I have stated many times: A and b, not A and B. The result A definitely correlates with B. The question is: does A change with b? It shouldn't in a local world. In other words: if Alice's result changes when spacelike separated Bob moves his measurement dial, then there is spooky action at a distance. I know you will agree with that statement.

From the EPR conclusion: "This makes the reality of P and Q depend upon the process of measurement carried out on the first system, which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this." They are saying the same thing as Bell (2).

And in Bell's words: "The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet for particle 1, nor A on b." Which he then presents in his form (2). There is no restriction on the correlation of A and B in this.

So the data points for A with setting a come out the same regardless of the value of b. Of course the correlation of A and B may change with a change in a or b. Bell (2) is not saying anything about that. If you doubt this, just re-read what Bell said above. Or what EPR said.
You miss the point. Bell's ansatz treats the data sets A and B as statistically independent.
 
