Quantum mechanics is not weird, unless presented as such

In summary, quantum mechanics may seem weird because of the way it is often presented to the general public. There is a long history behind this approach, since it sells better, but it can be an obstacle for those trying to truly understand the subject. The paper referenced in the conversation shows that quantum mechanics can actually be derived from reasonable assumptions, making it not as weird as some may think. However, this derivation is only one author's view and may not be the complete truth. There are also other interpretations of quantum mechanics, such as the ensemble interpretation, which may not be fully satisfactory. Overall, a proper derivation of quantum mechanics must account for all aspects, including the treatment of measurement devices and the past before measurements.
  • #456
Isn't it just another way of saying what Peres says? Changing the observable bases changes the experiment, so the anticorrelation arises for one experiment and not another (contextually). If you re-define "reality" to be the probability spectrum for specific non-local experiments, then sure, reality isn't dead, but it's irreducibly setup-dependent (not so "real")...
 
  • #457
stevendaryl said:
I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor, and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
The end result must conserve momentum, so the only detail that matters physically is that. The arrangements do seem to be irrelevant.

Envisage this - when the state is prepared and we consider the detectors as part of that, we can use local hidden variables that decide if the detectors will click regardless of any other details. So it is decided already and no kind of random intervention can change it. But the context, i.e. the detectors, must be part of the probability space.
 
  • #458
Mentz114 said:
The end result must conserve momentum, so the only detail that matters physically is that.

Well, angular momentum in the case that I'm talking about.

Mentz114 said:
The arrangements do seem to be irrelevant.

Envisage this - when the state is prepared and we consider the detectors as part of that, we can use local hidden variables that decide if the detectors will click regardless of any other details. So it is decided already and no kind of random intervention can change it. But the context, i.e. the detectors, must be part of the probability space.

I don't understand this business about being part of the probability space. Let [itex]P_A(\vec{a}, \alpha, \lambda)[/itex] be the probability that Alice will measure spin-up for her particle, given that she measures spin along axis [itex]\vec{a}[/itex], and that [itex]\alpha[/itex] represents other details of Alice's detector (above and beyond orientation), and [itex]\lambda[/itex] represents details about the production of the twin pair. Similarly, let [itex]P_B(\vec{b}, \beta, \lambda)[/itex] be the probability that Bob will measure spin-up for his particle, given that he measures along axis [itex]\vec{b}[/itex], and that [itex]\beta[/itex] represents additional details about Bob's detector. By assuming that the probabilities depend on these particular parameters, where have I made an assumption about the existence of a single joint probability space? What does "contextuality" mean, other than that the outcome might depend both on facts about the particle and facts about the device? The only assumption, it seems to me, is locality, that [itex]P_A[/itex] doesn't depend on [itex]\vec{b}[/itex] and [itex]P_B[/itex] doesn't depend on [itex]\vec{a}[/itex].

But the prediction of QM for EPR is perfect anti-correlation, which means that:

If Alice measures spin-up at angle [itex]\vec{a}[/itex], then Bob will measure spin-down at angle [itex]\vec{a}[/itex]. That seems to me to mean that the probabilities must be 0 or 1:

If [itex]P_A(\vec{a}, \alpha, \lambda)[/itex] is nonzero, then that means that Alice has a chance of measuring spin-up. But if Alice measures spin-up, then Bob has no chance of measuring spin-up at that angle. So Bob's probabilities must be zero whenever Alice's are nonzero, and vice-versa. That's only possible if the probabilities are all zero or one. That means that the outcome is actually deterministic, given [itex]\lambda[/itex], which in turn implies that the details [itex]\alpha[/itex] and [itex]\beta[/itex] don't matter.
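Here is a quick numerical sanity check of that step (my own sketch in Python; the only inputs are the two constraints just stated, namely that at equal angles "both up" and "both down" must each have probability zero for every [itex]\lambda[/itex]):

```python
import numpy as np

# At a shared angle, perfect anti-correlation requires, for every lambda:
#   P(both up)   = pA * pB             = 0
#   P(both down) = (1 - pA) * (1 - pB) = 0
# where pA, pB are the spin-up probabilities of Alice and Bob.
# Scan candidate pairs (pA, pB) and keep those satisfying both constraints.
grid = np.linspace(0.0, 1.0, 101)
solutions = [(pA, pB)
             for pA in grid for pB in grid
             if pA * pB < 1e-12 and (1 - pA) * (1 - pB) < 1e-12]

print(solutions)  # [(0.0, 1.0), (1.0, 0.0)]: outcomes are deterministic
                  # given lambda, so alpha and beta drop out
```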

I don't think that non-contextuality is an assumption; I think it follows from the perfect anti-correlations.
 
  • #459
stevendaryl said:
I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by seeing an explicit hidden-variables model that reproduces the statistics of EPR.
Do you doubt the fact that Bell makes such an assumption?

Bell's proof is not incorrect. His theorem excludes a wide range of hidden variable theories and proves that QM is definitely non-classical, since classical theories are non-contextual. This fact is undisputed. The theorem is just not strong enough to exclude common causes. Of course you can still be of the opinion that QM is non-local. All I'm saying is that this is not backed up by mathematics and therefore remains a belief until you figure out how to prove Bell's theorem without assuming a joint probability space.

I don't need to give you a counterexample, since mathematical statements aren't assumed to be true until they are proven wrong. Nevertheless, I suppose you could take the quantum state to be a contextual hidden variable. If you don't like this idea, it still doesn't free you from the burden of proof.

stevendaryl said:
I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor, and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
What if the coins are magnetized (heads = N, tails = S) and instead of slapping down the coin, Alice and Bob use bar magnets, which they can arrange freely either in the NS or the SN direction. If they compare their results, then they will find that the results are either correlated or anti-correlated, depending on whether they chose the same arrangement or not. (Now of course, one would have to check the inequality in order to find out whether this is really contextual or admits a joint probability space description.)
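For concreteness, here is one way to realize that coin model as a short simulation (a sketch; the encoding of the hidden magnetization and of the bar-magnet readout is my own illustrative choice). Note that, being deterministic given the hidden variable, this particular toy model does admit a joint probability space:

```python
import random

def run_pair(arr_alice, arr_bob):
    # Hidden variable: Alice's coin face (1 = heads = N); Bob's coin is
    # its opposite. A bar magnet in the 'NS' arrangement reads the face
    # directly; the 'SN' arrangement flips the reading.
    lam = random.choice([0, 1])
    a = lam if arr_alice == "NS" else 1 - lam
    b = (1 - lam) if arr_bob == "NS" else lam
    return a, b

def correlation(arr_a, arr_b, n=10_000):
    pairs = [run_pair(arr_a, arr_b) for _ in range(n)]
    return sum(1 if a == b else -1 for a, b in pairs) / n

print(correlation("NS", "NS"))  # -1.0: same arrangement, always anti-correlated
print(correlation("NS", "SN"))  # +1.0: opposite arrangements, always correlated
```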
 
  • #460
stevendaryl said:
I don't understand this business about being part of the probability space. Let [itex]P_A(\vec{a}, \alpha, \lambda)[/itex] be the probability that Alice will measure spin-up for her particle, given that she measures spin along axis [itex]\vec{a}[/itex], and that [itex]\alpha[/itex] represents other details of Alice's detector (above and beyond orientation), and [itex]\lambda[/itex] represents details about the production of the twin pair. Similarly, let [itex]P_B(\vec{b}, \beta, \lambda)[/itex] be the probability that Bob will measure spin-up for his particle, given that he measures along axis [itex]\vec{b}[/itex], and that [itex]\beta[/itex] represents additional details about Bob's detector. By assuming that the probabilities depend on these particular parameters, where have I made an assumption about the existence of a single joint probability space? What does "contextuality" mean, other than that the outcome might depend both on facts about the particle and facts about the device? The only assumption, it seems to me, is locality, that [itex]P_A[/itex] doesn't depend on [itex]\vec{b}[/itex] and [itex]P_B[/itex] doesn't depend on [itex]\vec{a}[/itex].

But the prediction of QM for EPR is perfect anti-correlation, which means that:

If Alice measures spin-up at angle [itex]\vec{a}[/itex], then Bob will measure spin-down at angle [itex]\vec{a}[/itex]. That seems to me to mean that the probabilities must be 0 or 1:

If [itex]P_A(\vec{a}, \alpha, \lambda)[/itex] is nonzero, then that means that Alice has a chance of measuring spin-up. But if Alice measures spin-up, then Bob has no chance of measuring spin-up at that angle. So Bob's probabilities must be zero whenever Alice's are nonzero, and vice-versa. That's only possible if the probabilities are all zero or one. That means that the outcome is actually deterministic, given [itex]\lambda[/itex], which in turn implies that the details [itex]\alpha[/itex] and [itex]\beta[/itex] don't matter.

I don't think that non-contextuality is an assumption; I think it follows from the perfect anti-correlations.
My point is that probabilities are irrelevant after the preparation. Suppose that the correlation has to be 1 or -1 (depending on what is being conserved). Whatever happens, the required correlations (coincidences or anti-coincidences) will become fact. The result has already been set up. Crudely, there is a conspiracy where each detector is instructed to ignore everything else and click/not click as required. Non-locality is not an issue.

(I have to go to work, so I won't be here for some hours now.)
 
  • #461
Hornbein said:
Don't underestimate Izzy Junior.

That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. [4]

— Isaac Newton, Letters to Bentley, 1692/3
In the Principia, he carefully avoids any trace of making things appear weird. It would be interesting to know what he found so greatly absurd about ''action at a distance'', but I suppose the margin of his letter was too small to contain his arguments...

In EPR we have no faster than light communication. Thus the nonlocality there is only ''passion at a distance''. Would this have been as absurd for him? We'll never know.
 
  • #462
rubi said:
mathematical statements aren't assumed to be true until they are proven wrong
?

Mathematical statements are true if proved from the assumptions that are part of the statement (or the underlying theory).
 
  • #463
A. Neumaier said:
?

Mathematical statements are true if proved from the assumptions that are part of the statement (or the underlying theory).
Right, but the Riemann hypothesis isn't true just because nobody has found a counterexample yet. The Riemann hypothesis is true if it can be proved. Until then, we just don't know the truth value. Stevendaryl seems to assume that QM is non-local based on the fact that I haven't given him a convincing counterexample, even though the burden of proof is on him.
 
  • #464
rubi said:
Do you doubt the fact that Bell makes such an assumption?

I doubt that such an assumption is involved. Bell in his derivation of his inequalities makes the assumption that there is a deterministic function [itex]F(\lambda, \vec{a})[/itex] giving [itex]\pm 1[/itex] for every possible spin direction [itex]\vec{a}[/itex]. But that's a short-cut. He could have allowed a more general dependency, but he, like Einstein, did not think it was possible to get perfect anti-correlations without such a deterministic function.

I don't need to give you a counterexample, since mathematical statements aren't assumed to be true until they are proven wrong.

Well, in that case, I'm not interested. To me, the whole point of Bell's theorem is to rule out a class of models. If you want to say that there are models that are not ruled out, fine. I already knew that: superdeterministic models, retrocausal models, nonlocal models. If you want to throw in another model that is not covered, I'd like to know what it is.
 
  • #465
rubi said:
Right, but the Riemann hypothesis isn't true just because nobody has found a counterexample yet. The Riemann hypothesis is true if it can be proved. Until then, we just don't know the truth value. Stevendaryl seems to assume that QM is non-local based on the fact that I haven't given him a convincing counterexample, even though the burden of proof is on him.

Only if I'm looking for proof. I'm not. I'm looking for a local, realistic explanation of quantum correlations. If you have one, I'd like to see it.
 
  • #466
A. Neumaier said:
Yes, but you'd nevertheless replace your utterly wrong statement [it asserts something completely different!] by one that really expresses what you meant.
I think you confused "aren't assumed to be true" with "are assumed to be false". Not assuming X to be true isn't the same as assuming X to be false.

stevendaryl said:
I doubt that such an assumption is involved. Bell in his derivation of his inequalities makes the assumption that there is a deterministic function [itex]F(\lambda, \vec{a})[/itex] giving [itex]\pm 1[/itex] for every possible spin direction [itex]\vec{a}[/itex]. But that's a short-cut. He could have allowed a more general dependency, but he, like Einstein, did not think it was possible to get perfect anti-correlations without such a deterministic function.
I don't see how you can doubt that this assumption is made. Khrennikov has pointed it out clearly. If you are not satisfied with his presentation, you can also check out this paper:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.48.291
It proves that Bell's factorization criterion is exactly equivalent to the existence of a joint probability distribution. If you reject the proof, you should be able to point out a mistake.

stevendaryl said:
Well, in that case, I'm not interested. To me, the whole point of Bell's theorem is to rule out a class of models. If you want to say that there are models that are not ruled out, fine. I already knew that: superdeterministic models, retrocausal models, nonlocal models. If you want to throw in another model that is not covered, I'd like to know what it is.
Well, you keep claiming that the violation of Bell's inequality unambiguously rules out locality. I'm just pointing out that this is not backed up by the mathematics, so you shouldn't be claiming it as if it were a fact, rather than an opinion. I don't want to throw in another model. I'm happy with QM as it is.

stevendaryl said:
Only if I'm looking for proof. I'm not. I'm looking for a local, realistic explanation of quantum correlations. If you have one, I'd like to see it.
There cannot be a local realistic explanation, since local realism is usually defined to mean the Bell factorization criterion. Theories satisfying the factorization criterion are definitely ruled out. But apparently you are claiming that it is a fact that no contextual theory can be local either.
 
  • #467
rubi said:
I don't see how you can doubt that this assumption is made.

Because if you start with a more general formulation, then Bell's formulation seems to follow from the more general formulation, plus the requirement of perfect anti-correlations.

rubi said:
Khrennikov has pointed it out clearly.

I don't agree.

rubi said:
Well, you keep claiming that the violation of Bell's inequality unambiguously rules out locality.

I'm not saying that. I'm saying that Bell's inequality violation implies nonlocality OR superdeterminism OR retrocausality OR something weirder like Many Worlds, OR...

rubi said:
I don't want to throw in another model. I'm happy with QM as it is.

QM clearly works as a recipe for making predictions. If you're happy with that, fine. But the business about the possibility of contextual hidden variables does not in any way help understanding QM. I don't see any point in such papers.
 
  • #468
stevendaryl said:
Because if you start with a more general formulation, then Bell's formulation seems to follow from the more general formulation, plus the requirement of perfect anti-correlations.
What more general formulation doesn't use Bell's factorization criterion?

stevendaryl said:
I don't agree.
Well, what do you say about Fine's paper that I quoted? Do you think his proof is erroneous?

stevendaryl said:
I'm not saying that. I'm saying that Bell's inequality violation implies nonlocality OR superdeterminism OR retrocausality OR something weirder like Many Worlds, OR...
Then I probably misunderstood you. I thought you reject a common cause in the intersection of the past lightcones. If that is not the case, then I'm happy.

stevendaryl said:
QM clearly works as a recipe for making predictions. If you're happy with that, fine. But the business about the possibility of contextual hidden variables does not in any way help understanding QM. I don't see any point in such papers.
I think it improves our understanding quite a bit, since it makes it clearer what exactly the implications of Bell's inequality and its violation are for physics. Knowing that non-contextuality is a crucial assumption in Bell's theorem changes the way we think about the theorem. I think this fact is not widely known in the physics community and should be pointed out more clearly in presentations of Bell's theorem.
 
  • #469
rubi said:
I think it improves our understanding quite a bit, since it makes it clearer what exactly the implications of Bell's inequality and its violation are for physics.

I don't agree with that, at all.
 
  • #470
stevendaryl said:
I don't agree with that, at all.
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly and most physicists don't even understand that it is non-trivial. For me, this definitely improves my understanding of Bell's theorem and its implications a lot. Getting to know something about Bell's theorem that I previously had not known clearly improves my ability to judge its implications. That's what science is about, so I don't understand why you don't acknowledge it.
 
  • #471
rubi said:
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly and most physicists don't even understand that it is non-trivial. For me, this definitely improves my understanding of Bell's theorem and its implications a lot. Getting to know something about Bell's theorem that I previously had not known clearly improves my ability to judge its implications. That's what science is about, so I don't understand why you don't acknowledge it.

I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.
 
  • #472
stevendaryl said:
I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.
It leaves open the possibility that contextual models can be local and admit common causes, which I thought you had rejected initially.
 
  • #473
stevendaryl said:
I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.

I don't understand the point of considering three probability distributions: [itex]dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2}), dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3}), dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})[/itex]. When a twin pair is generated, the settings to be chosen by Alice and Bob haven't been determined yet. The particles have whatever variables they have, independent of what is eventually done with them. So I don't understand the point of the three probability distributions. I would think that there is a set of possible parameters [itex]\lambda[/itex] that can be assigned to the pair, and that they are assigned according to some probability distribution. So the assumption of three different probability distributions, for the three different types of experiments that might be performed in the future, seems very weird to me.
 
  • #474
stevendaryl said:
I don't understand the point of considering three probability distributions: [itex]dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2}), dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3}), dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})[/itex]. When a twin pair is generated, the settings to be chosen by Alice and Bob haven't been determined yet. The particles have whatever variables they have, independent of what is eventually done with them. So I don't understand the point of the three probability distributions. I would think that there is a set of possible parameters [itex]\lambda[/itex] that can be assigned to the pair, and that they are assigned according to some probability distribution. So the assumption of three different probability distributions, for the three different types of experiments that might be performed in the future, seems very weird to me.

To me, rather than talking about different probability distributions for each possible future experiment, I would think that there would be three different processes with associated probabilities (see the sketch after the list):
  1. A twin pair is produced in some state, characterized by a parameter [itex]\lambda[/itex] according to a probability distribution [itex]P(\lambda)[/itex]
  2. A particle with associated parameter [itex]\lambda[/itex] interacts with Alice's device, which is characterized by an orientation [itex]\vec{a}[/itex] and perhaps other variables, [itex]\alpha[/itex]. The probability of Alice getting [itex]+1[/itex] would be given by a probability [itex]P_A(\lambda, \vec{a}, \alpha)[/itex]
  3. A particle with associated parameter [itex]\lambda[/itex] interacts with Bob's device, which is characterized by an orientation [itex]\vec{b}[/itex] and perhaps other variables, [itex]\beta[/itex]. The probability of Bob getting [itex]+1[/itex] would be given by a probability [itex]P_B(\lambda, \vec{b}, \beta)[/itex]
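A sketch of this three-process structure in Python (the specific forms of [itex]P(\lambda)[/itex], [itex]P_A[/itex] and [itex]P_B[/itex] below are illustrative placeholders only, not a model that reproduces the QM statistics; [itex]\alpha[/itex] and [itex]\beta[/itex] are suppressed):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_lambda():
    # Process 1: the source emits a pair with lambda ~ P(lambda);
    # here, a uniformly random angle.
    return rng.uniform(0.0, 2.0 * np.pi)

def p_up(lam, setting):
    # Processes 2 and 3: the chance of a +1 outcome given lambda and the
    # local setting. Placeholder response function.
    return 0.5 * (1.0 + np.cos(lam - setting))

def mean_product(a, b, n=100_000):
    total = 0
    for _ in range(n):
        lam = draw_lambda()
        A = 1 if rng.random() < p_up(lam, a) else -1
        B = 1 if rng.random() < 1.0 - p_up(lam, b) else -1
        total += A * B
    return total / n

print(mean_product(0.0, 0.0))  # about -0.5: a genuinely stochastic local
                               # model falls short of the perfect -1
```

Consistent with the argument in #458 above: to reach the perfect anti-correlation, the response probabilities would have to be pushed all the way to 0 or 1.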
 
  • #475
stevendaryl said:
I don't understand the point of considering three probability distributions: [itex]dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2}), dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3}), dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})[/itex].
Let's say there is a hidden variable ##\lambda## and 3 combinations of detector settings ##i=1,2,3##, for example Alice measures at angle ##\theta_i## and Bob measures at angle ##\theta_{i+1}## (where ##\theta_4:=\theta_1##). Then for each of these combinations, we collect probability distributions ##P_i(a_i,b_i)##. There may be a hidden variable ##\lambda## such that ##P_i(a_i,b_i) = \int_\Lambda p_i(\lambda,a_i,b_i)\mathrm d\lambda##. Now the fact that all the ##p_i## arise from a single joint probability space is equivalent to Bell's factorization criterion, which implies Bell's inequality. Thus a violation of Bell's inequality falsifies Bell's factorization criterion, but at the same time falsifies non-contextuality. You can't falsify the factorization criterion without falsifying non-contextuality.
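To illustrate the direction "a single joint probability space implies Bell's inequality" with a sketch (deterministic assignments only; general distributions over ##\Lambda## follow by mixing, and perfect anti-correlation is built in as ##b_i = -a_i##):

```python
from itertools import product

# One joint probability space: each run carries pre-assigned answers
# (a1, a2, a3) and (b1, b2, b3), all +/-1. Enumerate every deterministic
# assignment and test Bell's original inequality |E12 - E13| <= 1 + E23.
violated = False
for a1, a2, a3 in product([-1, 1], repeat=3):
    b2, b3 = -a2, -a3                          # perfect anti-correlation
    E12, E13, E23 = a1 * b2, a1 * b3, a2 * b3
    if abs(E12 - E13) > 1 + E23 + 1e-9:
        violated = True

print(violated)  # False: every point of the joint space obeys the
                 # inequality, hence so does any mixture over lambda
```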
 
  • #476
rubi said:
Let's say there is a hidden variable ##\lambda## and 3 combinations of detector settings ##i=1,2,3##, for example Alice measures at angle ##\theta_i## and Bob measures at angle ##\theta_{i+1}## (where ##\theta_4:=\theta_1##). Then for each of these combinations, we collect probability distributions ##P_i(a_i,b_i)##. There may be a hidden variable ##\lambda## such that ##P_i(a_i,b_i) = \int_\Lambda p_i(\lambda,a_i,b_i)\mathrm d\lambda##.

But as I said, there are two different processes involved in Alice getting a measurement result: (1) The production of a twin pair with parameter [itex]\lambda[/itex], and (2) Alice measuring the polarization along some axis of her choosing. Why should either process depend on Bob's setting?
 
  • #477
rubi said:
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly and most physicists don't even understand that it is non-trivial.

That isn't an extra assumption. As far as I can tell, "joint probability space" just means that there is an underlying joint probability distribution and all the correlations can be obtained as marginals of this distribution, i.e., $$P(ab \mid xy) = \sum_{\hat{a}_{x}, \hat{b}_{y}} P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc) \,,$$ where, e.g., ##\hat{a}_{x}## means to sum over all combinations ##(a_{1}, \dotsc, a_{x-1}, a_{x+1}, \dotsc)## except the variable ##a_{x}## and similarly for ##\hat{b}_{y}##. (I don't find Khrennikov so clear but this is definitely what Fine was describing.) This construction is mathematically equivalent to the locality condition Bell arrived at. This means that if you can construct a Bell-local model for a given set of probabilities ##P(ab \mid xy)## then you can also construct an underlying joint probability distribution of the type defined just above, and vice versa.
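A small numpy illustration of that marginalization (with two settings per side to keep the array small; the joint distribution is random, just to exercise the sum):

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint distribution P(a1, a2, b1, b2) over pre-assigned outcomes, with
# each +/-1 outcome encoded as an index in {0, 1}.
joint = rng.random((2, 2, 2, 2))
joint /= joint.sum()

def P_ab_given_xy(x, y):
    # Sum out every outcome variable except a_x (axis x) and b_y (axis 2+y).
    keep = (x, 2 + y)
    other = tuple(ax for ax in range(4) if ax not in keep)
    return joint.sum(axis=other)

print(P_ab_given_xy(0, 1))        # a 2x2 table of joint probabilities
print(P_ab_given_xy(0, 0).sum())  # 1.0: each marginal is normalized
```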

This equivalence does not mean that the existence of an underlying joint probability distribution is an extra hidden assumption in Bell's theorem. That's just bad logic. Quite the opposite: it means that one of these assumptions is always redundant for the purpose of deriving Bell inequalities, since it is implied by the other anyway. You can derive exactly the same Bell inequalities from either starting assumption alone. You also don't get to choose which assumption to blame for a Bell violation. If a Bell inequality is violated, then both assumptions are contradicted.
 
  • #478
stevendaryl said:
But as I said, there are two different processes involved in Alice getting a measurement result: (1) The production of a twin pair with parameter [itex]\lambda[/itex], and (2) Alice measuring the polarization along some axis of her choosing. Why should either process depend on Bob's setting?
It doesn't depend on Bob's setting. ##P_i(a_i,b_i)## are just the estimated probability distributions that have been measured. Alice and Bob can certainly perform these 3 experiments, collect the data and then meet and calculate the ##P_i## from their results.

wle said:
That isn't an extra assumption. As far as I can tell, "joint probability space" just means that there is an underlying joint probability distribution and all the correlations can be obtained as marginals of this distribution, i.e., $$P(ab \mid xy) = \sum_{\hat{a}_{x}, \hat{b}_{y}} P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc) \,,$$ where, e.g., ##\hat{a}_{x}## means to sum over all combinations ##(a_{1}, \dotsc, a_{x-1}, a_{x+1}, \dotsc)## except the variable ##a_{x}## and similarly for ##\hat{b}_{y}##. (I don't find Khrennikov so clear but this is definitely what Fine was describing.) This construction is mathematically equivalent to the locality condition Bell arrived at. This means that if you can construct a Bell-local model for a given set of probabilities ##P(ab \mid xy)## then you can also construct an underlying joint probability distribution of the type defined just above, and vice versa.
That's right. Bell's factorization criterion is equivalent to the existence of a joint probability distribution.

wle said:
This equivalence does not mean that the existence of an underlying joint probability distribution is an extra hidden assumption in Bell's theorem. That's just bad logic. Quite the opposite: it means that one of these assumptions is always redundant for the purpose of deriving Bell inequalities, since it is implied by the other anyway. You can derive exactly the same Bell inequalities from either starting assumption alone. You also don't get to choose which assumption to blame for a Bell violation. If a Bell inequality is violated, then both assumptions are contradicted.
If Bell's inequality is violated, we must reject Bell's factorization criterion, but at the same time we must reject non-contextuality (joint probability distributions). Bell's criterion doesn't formalize what locality is supposed to mean in the case of contextual theories, because it can only be applied to non-contextual theories in the first place due to the equivalence to non-contextuality. Thus a violation of Bell's inequality says nothing about locality in the case of contextual theories.
 
  • #479
rubi said:
It doesn't depend on Bob's setting. ##P_i(a_i,b_i)## are just the estimated probability distributions that have been measured. Alice and Bob can certainly perform these 3 experiments, collect the data and then meet and calculate the ##P_i## from their results.

Then I don't really understand the point. What is the point of computing these [itex]P_i[/itex]?

What I assumed is that the phrase "contextual theory" refers to a way of computing probabilities that takes into account the measurement process, as opposed to revealing a pre-existing value. So I would think that that would mean describing the process by which a system to be measured (the particle produced in the twin pair) interacts with the measuring device to produce an outcome. So I don't understand what the relevance of the [itex]P_i[/itex] you're describing is to such a theory.
 
  • #480
stevendaryl said:
Then I don't really understand the point. What is the point of computing these [itex]P_i[/itex]?

What I assumed is that the phrase "contextual theory" refers to a way of computing probabilities that takes into account the measurement process, as opposed to revealing a pre-existing value. So I would think that that would mean describing the process by which a system to be measured (the particle produced in the twin pair) interacts with the measuring device to produce an outcome. So I don't understand what the relevance of the [itex]P_i[/itex] you're describing is to such a theory.
Let's assume we use the angles ##\theta_1=0^\circ##, ##\theta_2=45^\circ## and ##\theta_3=90^\circ##. We can prepare different experiments using these angles, for instance Alice sets her detector to ##0^\circ## and Bob sets his detector to ##45^\circ##. There are 6 possible combinations, but 3 of them will suffice to establish the non-existence of a joint probability space. Each of these situations determines an experimental situation (context). We can perform each of these experiments randomly and in the end collect all the data in the probability distributions ##P_i##. For example if ##i=1## refers to Alice using ##\theta_1## and Bob using ##\theta_3##, then we could ask for the probability ##P_1(\text{Alice measures }\rightarrow,\text{Bob measures }\uparrow)##. Of course, for another ##i##, ##P_i(\uparrow,\rightarrow)## makes no sense, because the experiment might not even have a detector aligned in one of these directions, so we are forced to collect our data in different ##P_i## distributions for each ##i##. After all, you wouldn't collect the data of LIGO in the same probability distribution as the data of ATLAS either. So after we have collected the ##P_i##, we can ask whether all these ##P_i## arise from one joint probability distribution as marginals. And it turns out that this is exactly the case if and only if Bell's inequality holds.
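Plugging in the standard QM singlet statistics at exactly these angles makes the failure explicit (a sketch using the textbook result ##E(a,b) = -\cos(a-b)## for the spin-singlet, which also has perfect anti-correlation at equal angles, as Bell's original inequality requires):

```python
import numpy as np

t1, t2, t3 = np.deg2rad([0.0, 45.0, 90.0])

def E(a, b):
    # Singlet correlation for spin measurements along angles a and b.
    return -np.cos(a - b)

lhs = abs(E(t1, t2) - E(t1, t3))   # |E12 - E13|
rhs = 1 + E(t2, t3)                # 1 + E23
print(lhs, rhs, lhs <= rhs)
# ~0.707 vs ~0.293, False: the inequality is violated, so these three
# distributions cannot all be marginals of one joint distribution
```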
 
  • #481
rubi said:
Let's assume we use the angles ##\theta_1=0^\circ##, ##\theta_2=45^\circ## and ##\theta_3=90^\circ##. We can prepare different experiments using these angles, for instance Alice sets her detector to ##0^\circ## and Bob sets his detector to ##45^\circ##. There are 6 possible combinations, but 3 of them will suffice to establish the non-existence of a joint probability space. Each of these situations determines an experimental situation (context). We can perform each of these experiments randomly and in the end collect all the data in the probability distributions ##P_i##. For example if ##i=1## refers to Alice using ##\theta_1## and Bob using ##\theta_3##, then we could ask for the probability ##P_1(\text{Alice measures }\rightarrow,\text{Bob measures }\uparrow)##. Of course, for another ##i##, ##P_i(\uparrow,\rightarrow)## makes no sense, because the experiment might not even have a detector aligned in one of these directions, so we are forced to collect our data in different ##P_i## distributions for each ##i##. After all, you wouldn't collect the data of LIGO in the same probability distribution as the data of ATLAS either. So after we have collected the ##P_i##, we can ask whether all these ##P_i## arise from one joint probability distribution as marginals. And it turns out that this is exactly the case if and only if Bell's inequality holds.

The issue is whether there is a sensible notion of "local" that violates Bell's factorizability condition. You seem to be saying that there is no proof that there is not. Okay, I'll buy that. Then it takes on the role of a conjecture: that every plausible local theory is factorizable in Bell's sense.
 
  • #482
When you have eliminated every possibility, you have to take what is left quite seriously. The issue as I see it is that the arguments so far seem to be all or nothing: either the direction is determined or it isn't. What about considering that it's a bit of both? Perhaps spin is fixed in one direction but not the other two. Would this lead to the correlations we observe?
 
  • #483
stevendaryl said:
The issue is whether there is a sensible notion of "local" that violates Bell's factorizability condition. You seem to be saying that there is no proof that there is not.
That's right, although I would put it slightly differently: locality means that whenever an event A is the cause of an event B, there must be a future-directed causal curve connecting these events. So the question is really which events are to be considered as causes or effects. In the non-contextual case, this is quite clear and leads to Bell's factorization criterion. In the contextual case, it is not that obvious. At least QM is silent on it.

stevendaryl said:
Then it takes on the role of a conjecture: that every plausible local theory is factorizable in Bell's sense.
Or equivalently: "Every plausible local theory is non-contextual." We will probably disagree here, but at least I find it plausible that contextual theories can also be local, so I would tend to believe that the conjecture is wrong. However, this is only my opinion.
 
  • #484
rubi said:
If Bell's inequality is violated, we must reject Bell's factorization criterion, but at the same time we must reject non-contextuality (joint probability distributions).

This is fine.

rubi said:
Bell's criterion doesn't formalize what locality is supposed to mean in the case of contextual theories, because it can only be applied to non-contextual theories in the first place due to the equivalence to non-contextuality. Thus a violation of Bell's inequality says nothing about locality in the case of contextual theories.

That doesn't follow. I linked to references in another thread where Bell explains where the factorisation condition comes from and how it captures the idea of locality (or at least, the specific idea of locality that EPR and Bell were concerned with). The reasoning is quite general and has nothing to do with contextuality. Now it so happens that the factorisation condition Bell ends up with is mathematically equivalent to having a joint underlying probability distribution which you call noncontextuality, so noncontextuality implies the same Bell inequalities as Bell locality does. That does not mean Bell inadvertently assumes noncontextuality. What it means is that if you assume Bell locality then it makes no difference to the end result if you additionally assume or don't assume noncontextuality. Or put differently: if I give you a model for some correlations that is Bell local but it isn't obviously noncontextual and you like noncontextuality, then you will always be able to change the model so that it is noncontextual and still makes the same predictions.

Something similar happens with determinism in Bell's theorem: if you have a local stochastic model for a set of correlations then it's known that you can always turn it into a local deterministic model just by adding additional hidden variables. This similarly doesn't mean that determinism is a "hidden assumption" in Bell's theorem. It means that determinism is a redundant assumption that does not affect the end result either way.
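That determinization trick can be made concrete in a few lines (a sketch; the stochastic response function is just an illustrative placeholder):

```python
import numpy as np

rng = np.random.default_rng(2)

def p_up(lam, setting):
    # Some local stochastic response P_A(lambda, a); placeholder form.
    return 0.5 * (1.0 + np.cos(lam - setting))

def A_deterministic(lam, mu, setting):
    # Absorb the detector's local randomness into an extra hidden variable
    # mu ~ U[0,1] carried with the pair: deterministic given (lam, mu).
    return 1 if mu < p_up(lam, setting) else -1

lam, a = 1.0, 0.3
estimate = np.mean([A_deterministic(lam, mu, a) for mu in rng.random(200_000)])
print(estimate, 2.0 * p_up(lam, a) - 1.0)  # both ~0.765: same expectation
```

Averaged over the extra variable, the deterministic model reproduces the stochastic model's statistics exactly, which is why determinism is redundant for the derivation.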
 
  • #485
wle said:
I linked to references in another thread where Bell explains where the factorisation condition comes from and how it captures the idea of locality (or at least, the specific idea of locality that EPR and Bell were concerned with). The reasoning is quite general and has nothing to do with contextuality.
That's not right. You need to assume a joint probability space in order to even perform the mathematical manipulations that are needed to justify the factorization criterion. Bell just assumes this implicitly.
 
  • #486
  • #487
Those 2 articles talk about a loophole that is supposed to have been closed already...
 
  • #488
rubi said:
That's not right. You need to assume a joint probability space in order to even perform the mathematical manipulations that are needed to justify the factorization criterion. Bell just assumes this implicitly.

Huh? If you're referring to the finite statistics loophole like atyy says then this only really concerns experiments and it's known not to be a real issue. Considering theory only, quantum physics as a theory predicts joint conditional probability distributions for results (according to the Born rule) and these can be compared directly with the joint conditional probabilities that can be predicted by models respecting Bell locality.
 
  • #489
wle said:
Huh? If you're referring to the finite statistics loophole like atyy says then this only really concerns experiments and it's known not to be a real issue. Considering theory only, quantum physics as a theory predicts joint conditional probability distributions for results (according to the Born rule) and these can be compared directly with the joint conditional probabilities that can be predicted by models respecting Bell locality.
I'm still reading atyy's papers, so I can't comment on them yet. I'm not referring to any loophole or experiment. I'm saying that Bell assumes that ##A_a(\lambda)## and ##B_b(\lambda)## are random variables on one probability space ##(\Lambda,\Sigma)## and thus joint probability distributions exist. QM certainly does not predict joint probability distributions for non-commuting observables. A particle cannot be both spin up and spin left. The spin observables can't be modeled on one probability space.
 
  • #490
rubi said:
I'm still reading atyy's papers, so I can't comment on them yet. I'm not referring to any loophole or experiment. I'm saying that Bell assumes that ##A_a(\lambda)## and ##B_b(\lambda)## are random variables on one probability space ##(\Lambda,\Sigma)## and thus joint probability distributions exist. QM certainly does not predict joint probability distributions for non-commuting observables.

You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs ##\mathcal{M}_{x}## for different ##x## and ##\mathcal{N}_{y}## for different ##y## are incompatible.
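For concreteness, here is the same trace formula written out in plain numpy (a sketch, using projective measurements along angles in the x-z plane and the two-qubit singlet):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def proj(theta, outcome):
    # Projector (I + outcome * sigma.n)/2 onto spin `outcome` (+1 or -1)
    # along the axis at angle theta in the x-z plane.
    return (I2 + outcome * (np.cos(theta) * sz + np.sin(theta) * sx)) / 2

# Singlet |psi> = (|01> - |10>)/sqrt(2), as a density matrix.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def P(a, b, theta_a, theta_b):
    # Born rule: P(ab|xy) = Tr[(M_{a|x} tensor N_{b|y}) rho]
    M = np.kron(proj(theta_a, a), proj(theta_b, b))
    return np.trace(M @ rho).real

# Well defined even though the settings on each side are mutually incompatible:
for ta, tb in [(0.0, 0.0), (0.0, np.pi / 4)]:
    print([round(P(a, b, ta, tb), 4) for a in (1, -1) for b in (1, -1)])
# equal angles give [0.0, 0.5, 0.5, 0.0]: perfect anti-correlation
```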
 
