How "spooky action...." may work?

In summary, as long as the particles are not disturbed by any external influence, the entanglement in all these cases persists due to conservation laws!
  • #71
wle said:
Huh? You can set ##P(s_{\mathrm{x}}, s_{\mathrm{z}}) = P(s_{\mathrm{x}}) P(s_{\mathrm{z}})## to trivially construct the sort of joint probability distribution you describe. For the example from your post #53 this would get you $$\begin{eqnarray}
P(+_{\mathrm{x}}, +_{\mathrm{z}}) &=& 1/2 \,, \qquad P(+_{\mathrm{x}}, -_{\mathrm{z}}) &=& 0 \,, \\
P(-_{\mathrm{x}}, +_{\mathrm{z}}) &=& 1/2 \,, \qquad P(-_{\mathrm{x}}, -_{\mathrm{z}}) &=& 0 \,.
\end{eqnarray}$$ You can easily check that this reproduces the marginals ##P(+_{\mathrm{z}}) = 1##, ##P(-_{\mathrm{z}}) = 0##, and ##P(+_{\mathrm{x}}) = P(-_{\mathrm{x}}) = 1/2## from your post #53.

The problem in your proof seems to be here:

It looks like ##P(+, -)## accidentally got changed to ##P(+, +)## in the second sentence.
You are right, I made a mistake. I simplified too much and it is not that easy to construct a counterexample. However, it is well known that counterexamples exist and even the very book Ilja quoted contains some of these no-go theorems in the appendix. That just means that one needs to put more effort into the construction of a counterexample. It remains true that some of the predictions of quantum theory are incompatible with classical probability theory and hence the rest of my argument is untouched.
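For concreteness, the product construction wle describes above is easy to check numerically. A minimal sketch (the marginals are those of the example from post #53; the dictionaries and names are just for illustration):

```python
# Product joint distribution P(s_x, s_z) = P(s_x) * P(s_z) for the example of
# post #53, and a check that it reproduces the single-observable marginals.
P_x = {'+': 0.5, '-': 0.5}   # marginals for S_x
P_z = {'+': 1.0, '-': 0.0}   # marginals for S_z

P_joint = {(sx, sz): P_x[sx] * P_z[sz] for sx in P_x for sz in P_z}

marg_x = {sx: sum(P_joint[sx, sz] for sz in P_z) for sx in P_x}
marg_z = {sz: sum(P_joint[sx, sz] for sx in P_x) for sz in P_z}

assert marg_x == P_x and marg_z == P_z
print(P_joint)   # {('+', '+'): 0.5, ('+', '-'): 0.0, ('-', '+'): 0.5, ('-', '-'): 0.0}
```

Whether such a product distribution is physically meaningful is, of course, exactly what the rest of the discussion is about.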
 
  • #72
Zafa Pi said:
I'm not sure I get it. To make sure we are on the same page let's refer to my post #39 on the thread https://www.physicsforums.com/account/posts/5494960/ .
It is true that QM predicts the correlations for the entangled state when it's created, but to check their validity we must step into reality and perform experiments. The results (denying local realism, i.e. the inequality of #39) would cause people to refer to "spooky action at a distance". In spite of being used to it, I still find it mysterious/spooky.

What am I missing? And BTW, what is a reference for Einstein's regret about the EPR paper?
In which particular experiment has any "action at a distance" been demonstrated? This would violate the very foundations of QED, the best-tested theory ever, and would clearly be physics beyond the standard model. To my knowledge, however, all that has been observed is in excellent agreement with standard QED, and the violation of Bell's inequality is also as expected (with very high precision in some experiments). So I don't see where I should be forced to the conclusion that there are spooky actions at a distance, contradicting local microcausal QFT.
 
  • #73
rubi said:
The problem is that you can't find a concept of conditional probabilities in a non-simplicial state space. You will always violate some basic axiom of classical probability theory, like probabilities adding up to ##1##.
If there were such a problem (I don't see any), then this is what we have already clarified with Holevo's construction of how to get a simplicial state space.

But it makes no sense. The rules of classical probability theory, as you can read in Jaynes' Probability Theory: The Logic of Science, are nothing but the rules of consistent plausible reasoning. Consistent reasoning is always possible. The only way consistent reasoning leads to contradictions is if you somewhere make wrong assumptions - for example, by assuming that the results of the experiments are measurement results and thus do not depend on the state of the measurement device, but only on the measured system.
rubi said:
For instance in quantum theory, the concept only makes sense for commuting observables, and this is exactly the case, where quantum probabilities are consistent with classical probabilities. If you include non-commuting observables, the concept ceases to make sense.
Some nonsensical applications of the rules may not make sense.
rubi said:
You still misunderstand the proof. I don't want joint probabilities. I get them for free by classical probability theory. You cannot possibly have a classical probability theory without joint probabilities.
If there is no joint reality, why do you think there should be some joint probability distribution?

This is the situation in quantum theory, where "measurement results" are results of complex interactions which depend on both parts, so that if one "measurement" is made, reasoning about others, which have not been made, makes no sense.
rubi said:
It doesn't matter whether BM can recover the predictions of QM. The thing that matters is whether they are compatible with classical probability theory.
dBB is a deterministic theory, and in no way in conflict with classical probability theory.
rubi said:
We don't even need quantum theory at all. It can already be proven from the observed statistics.
No. All your considerations show is that you make some wrong assumptions.
 
  • #74
Markus Hanke said:
Ok, I think I understand your point ( at least I hope so ). However, it seems to me that the knowledge of the particles being entangled is something that has been added into the mix from the "outside". If we assume that the Alice-Bob system ( with their respective particles ) is isolated in space and time, how would Alice by herself know by performing a measurement on her particle whether it is entangled with a distant particle or not ? Only by either having been present during the initial interaction between them ( classical exchange of information across time ), or by subsequently comparing her results with those of Bob - which is a classical information exchange across space. Without either information exchange or prior interaction ( at some point along Alice's world line ), the outcome of both measurements would appear completely random to both Alice and Bob in isolation. In that sense, it is either the initial interaction that caused the correlation, or the act of comparing the measurement outcomes ( which is always a classical channel ). Without either, the concept of entanglement becomes meaningless. Both cases involve some form of non-locality - either non-locality in space, or non-locality in time, so either way Bell's inequalities will be violated, just as we empirically observe.

Or am I seeing this wrong / missing something ? I am still actively learning about this whole subject matter.

Yes, if Alice and Bob do not know that their particles are entangled, then their results will appear random. But I'm not sure what point is supposed to follow from that.
 
  • #75
It's a very good point! If A and B don't know about the entanglement, they just see a stream of unpolarized particles. In fact, that's also true when they do know it: whether A or B or both know about the preparation in entangled pairs does nothing to the particles themselves. Only if they accurately record the arrival times of the measured particles can they later compare their measurement protocols and see the correlations between the outcomes of their measurements, always looking at the entangled pairs, which is possible thanks to the accurate time stamps ("coincidence measurement"). Only then are the correlations revealed. They have to communicate their results afterwards, i.e., there's no way for FTL communication through such entangled particle pairs.
 
  • Like
Likes Markus Hanke and Jilang
  • #76
stevendaryl said:
Yes, if Alice and Bob do not know that their particles are entangled, then their results will appear random. But I'm not sure what point is supposed to follow from that.

I don't really think I have a point to make just yet. The thing is this - the more I learn, the more I find aspects of quantum theory that remind me of the situation in classical relativity, in that it is meaningless to talk about relativistic effects at a single event. Likewise, it seems meaningless to me to talk about entanglement for a single observer. When Alice performs a measurement, the outcome is probabilistic for her; also, with respect to Bob performing an experiment, Alice knows only that he got a definite result, but she doesn't know which one. So where is the entanglement ? It is meaningless to speak of entanglement until such time when they are brought together, and their records are compared ( as vanhees71 has said ), just like it is meaningless in relativity to speak of time dilation without some convention about how to compare clocks. This leads me to wonder whether entanglement could be understood as a relationship between observers ( or events ? ) in spacetime in some way, shape, or form.

Again, I'm not trying to make any specific point, I am merely trying to look at things from a slightly different angle.
 
  • #77
Ilja said:
If there were such a problem (I don't see any), then this is what we have already clarified with Holevo's construction of how to get a simplicial state space
Holevo doesn't construct a classical probability theory. It is proven in almost every quantum mechanics textbook that this is not possible. Holevo only constructs an expectation value functional.

Some nonsensical applications of the rules may not make sense.
If it doesn't make sense to compute conditional probabilities, then the theory can't be a classical probability theory, since you can always compute conditional probabilities in classical probability theory.

If there is no joint reality, why do you think there should be some joint probability distribution?
I don't think there should be one. You think so, but you aren't aware of it. If you claim that quantum mechanics can be described by a classical probability theory, then you must also accept that joint probabilities must exist. Probability theory guarantees their existence.

dBB is a deterministic theory, and in no way in conflict with classical probability theory.
Well, dBB cannot have random variables representing spin. Hence, it cannot have probability distributions for spin and always needs to model a measurement device. Quantum mechanics can compute probability distributions for spin, even without a model of measurement. If dBB were to include probability distributions for spin, as quantum mechanics does, then it would necessarily fail to be a classical probability theory.

No. All your considerations show is that you make some wrong assumptions.
Here is a true theorem: No classical probability theory can reproduce the observed statistics of spin particles.
 
  • #78
rubi said:
You are right, I made a mistake. I simplified too much and it is not that easy to construct a counterexample. However, it is well known that counterexamples exist and even the very book Ilja quoted contains some of these no-go theorems in the appendix.

Which no-go theorems and counterexamples to what? The no-go theorems I know of are about whether or not certain types of physical model can reproduce the statistics of quantum physics, not about what type of probability theory quantum physics uses (which, as far as I'm concerned, falls under what I'd consider "ordinary probability theory").
 
  • #79
wle said:
Which no-go theorems and counterexamples to what? The no-go theorems I know of are about whether or not certain types of physical model can reproduce the statistics of quantum physics, not about what type of probability theory quantum physics uses (which, as far as I'm concerned, falls under what I'd consider "ordinary probability theory").
The most widely known one is the Kochen-Specker theorem. However, in order to see that QM can't be an ordinary probability theory, you just need to notice that it has a probability distribution for both ##S_x## and ##S_z##. If these observables were random variables on a probability space, then you would be able to compute the probability ##P(S_x = +\wedge S_z = +)##. However, QM can't compute this number and hence can't be an ordinary probability theory (at least if what you'd consider an "ordinary probability theory" would satisfy Kolmogorov's axioms, which is the standard definition).
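For concreteness, here is a minimal numpy sketch (added for illustration only) that computes the two Born-rule marginals for the spin-up-along-##z## state and checks that ##S_x## and ##S_z## do not commute, which is why no joint Born-rule distribution for the pair is defined:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1) and the spin-up-along-z state used in post #53.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)

def born_probabilities(op, state):
    """Born-rule distribution {eigenvalue: probability} for measuring op in state."""
    vals, vecs = np.linalg.eigh(op)
    return {round(float(v), 6): float(abs(np.vdot(vecs[:, i], state)) ** 2)
            for i, v in enumerate(vals)}

print(born_probabilities(sz, psi))   # S_z: probabilities 0 and 1
print(born_probabilities(sx, psi))   # S_x: probabilities 1/2 and 1/2

# The two operators do not commute, so QM assigns them no joint Born-rule distribution.
print(np.allclose(sx @ sz, sz @ sx))   # False
```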
 
  • #80
rubi said:
Holevo doesn't construct a classical probability theory.
There is no need for this. The purpose of using Holevo's construction is only to clarify that the non-simplicial character of the state space is irrelevant.

rubi said:
It it doesn't make sense to compute conditional probabilities, then the theory can't be a classical probability theory, since you can always compute conditional probabilities in classical probability theory.
Of course, it does not make sense to compute conditional probabilities for events which are simply incompatible with each other, or to describe probabilities for different things (like measurement results for spin in one direction before and after a measurement in another direction) as if they were the same event. Of course, by making such errors you can "prove" 2+2=5 too, but this does not mean that you have proven 2+2=5.

rubi said:
I don't think there should be one. You think so, but you aren't aware of it. If you claim that quantum mechanics can be described by a classical probability theory, then you must also accept that joint probabilities must exist. Probability theory guarantees their existence.
No. Probability theory is simply the logic of plausible reasoning. Its rules are consistency rules for reasoning about reality when you have incomplete information about it. It is not at all about deriving nontrivial information about this reality. If you are misled by the word "measurement" into thinking that these "measurements" really "measure" some preexisting properties, instead of describing the results of an interaction with "measurement instruments", you can, of course, end up with contradictions. This is what has to be expected once your theory about reality is wrong. And, without doubt, if one derives the contradiction, one will use logic, including the logic of plausible reasoning known as probability theory. This does not mean the error is a contradiction in logic, or that our real world is incompatible with logic.

rubi said:
Well, dBB cannot have random variables representing spin. Hence, it cannot have probability distributions for spin and always needs to model a measurement device.
And it has no need for them. What is the problem with the need to model a device used in an experiment (misleadingly named "measurement device") if one wants to describe an experiment?
rubi said:
Quantum mechanics can compute probability distributions for spin, even without a model of measurement. If dBB were to include probability distributions for spin, as quantum mechanics does, then it would necessarily fail to be a classical probability theory.
Quantum theory does not aim to describe the real world; it is not a realistic interpretation. This can lead to some simplifications, such as the fact that some distributions in quantum equilibrium do not depend on the states of some devices which are part of the experiment. Fine, be happy with this. For a complete description this is no longer true. Not nice, but such is life.
rubi said:
Here is a true theorem: No classical probability theory can reproduce the observed statistics of spin particles.
You have forgotten to add: if it starts with the assumption that the results of spin-related experiments measure some inherent properties of the particle.
 
  • #81
Ilja, the whole point of my argument is to establish the fact that you cannot apply Reichenbach's principle directly to QM, since it doesn't have the necessary probabilistic structure. What you are doing is to take QM and strip off all the features that make it incompatible with the principles of probability theory, so you end up with a new theory. To this new theory, you can apply Reichenbach's principle. I agree. But you still haven't applied it to the original theory, because it is not possible! Hence, Reichenbach's principle can still not make statements about QM itself, but only about different theories derived from it by stripping off some of the information it provides! QM itself is not one of the theories to which Reichenbach's principle can be meaningfully applied.

It is a perfectly valid point of view to assume that spin is an intrinsic property of particles. You then have to give up classical probability theory. The vast majority of physicists prefer this point of view. Hence, they are using a theory, to which Reichenbach's principle cannot be applied! If you are using a different theory (dBB), then you can apply the principle, but it says nothing about the original theory, which everybody is using!
 
  • Like
Likes Markus Hanke
  • #82
And my point is that the problem is that "apply directly" is misguided, based on wrong assumptions about reality.

In some sense, indeed, dBB theory as well as other "hidden variable" interpretations are different theories. In dBB we obtain agreement with QT only for quantum equilibrium. But why do you think this is a problem? What matters is if the predictions agree with observation, and if the theory is logically consistent.

If a particular interpretation does not allow one to talk about reality, ok, it may nonetheless be useful. A religion may have really beautiful music, and a nonrealistic theory can make really good empirical predictions. Does it follow that we have to become religious to have good music, or that we have to reject reality to make accurate predictions about experiments? I don't think so.

I have no problem accepting that applying Reichenbach's common cause principle to Bohr's Holy Scriptures is anathema. But so what? And, no, I see no information QT provides which cannot be provided by realistic interpretations too.

If you think that it is a perfectly valid point of view to assume that spin is an intrinsic property of particles, so be it. But this is your personal theory, which makes nontrivial assumptions about reality. Once this theory is in conflict with the logic of plausible reasoning, you have to give up logic to follow it. Feel free to do so, your choice. But certainly not my choice. If the rules of logic cannot be applied to a theory, this theory is, in my opinion, not even a theory. It still has to be modified to become a theory, because a theory should be reasonable, not something in conflict with logic.

Then, I do not care at all what "everybody is using" - many of the most horrible crimes have happened because everybody was supporting them, and our present time is in no way superior to the past in this respect. About the "original" Holy Scriptures I care even less. Anyway, neither the minimal interpretation nor Bohr's Holy Scriptures make any claims in contradiction with probability theory. Claims that they do are incorrect interpretations, which can be easily traced to particular (wrong) theories about reality.
 
  • #83
Ilja said:
And my point is that the problem is that "apply directly" is misguided, based on wrong assumptions about reality.
Nobody knows what the right assumptions about reality are. It's your personal opinion that QM doesn't describe reality.

In some sense, indeed, dBB theory as well as other "hidden variable" interpretations are different theories. In dBB we obtain agreement with QT only for quantum equilibrium. But why do you think this is a problem? What matters is if the predictions agree with observation, and if the theory is logically consistent.
I don't think it is a problem. I don't even care about dBB theory. I was just countering your claim that Reichenbach's principle can be used to deduce that there is no common cause in QM. It can't, since it can't even be applied to QM. Sure, it makes statements about dBB, but those statements won't automatically hold for QM.

If a particular interpretation does not allow one to talk about reality, ok, it may nonetheless be useful. A religion may have really beautiful music, and a nonrealistic theory can make really good empirical predictions. Does it follow that we have to become religious to have good music, or that we have to reject reality to make accurate predictions about experiments? I don't think so.
You don't have to reject reality. You just have to realize that reality can be different from what one might naively assume. Nature is the ultimate judge. She doesn't care about our philosophical preferences.

I have no problem accepting that applying Reichenbach's common cause principle to Bohr's Holy Scriptures is anathema. But so what? And, no, I see no information QT provides which cannot be provided by realistic interpretations too.
You were claiming that there cannot be a common cause explanation for the Bell correlations in QM. This is wrong. The true statement would be that hidden variable theories are incompatible with a common cause explanation. Theories that reject hidden variables might still allow for a common cause.

If you think that it is a perfectly valid point of view to assume that spin is an intrinsic property of particles, so be it. But this is your personal theory, which makes nontrivial assumptions about reality.
Right, it is my personal theory (and also the personal theory of many others). It is also your personal theory that the world must be described by hidden variables, which is also a non-trivial assumption about reality.

Once this theory is in conflict with the logic of plausible reasoning, you have to give up logic to follow it. Feel free to do so, your choice. But certainly not my choice. If the rules of logic cannot be applied to a theory, this theory is, in my opinion, not even a theory. It still has to be modified to become a theory, because a theory should be reasonable, not something in conflict with logic.
QM is not in conflict with logic. It is built using standard mathematics, which uses nothing but classical logic. Hence, we can use classical logic to talk about QM. Also, reasonability isn't a necessity for a physical theory. A physical theory must describe nature. If nature contradicts our intuition, then we have to adjust our intuition. Millions of physicists have learned quantum theory and have acquired a good intuition for quantum phenomena.
 
  • Like
Likes vanhees71 and AlexCaledin
  • #84
rubi said:
Nobody knows what the right assumptions about reality are. It's your personal opinion that QM doesn't describe reality.
In the minimal interpretation, QM does not pretend to describe reality. If an interpretation claims that no reality exists, I reject it as nonsensical. But the minimal interpretation does not make such claims, it simply does not give a description of reality.
rubi said:
I was just countering your claim that Reichenbach's principle can be used to deduce that there is no common cause in QM.
I never made such a claim. Reichenbach's principle claims the existence of causal explanations, like common causes. It also specifies what a common cause is.

There are rules of reasoning which cannot be proven false by any observation, because deriving any nontrivial prediction - something which could be falsified by observation - has to use them. So, claiming that these rules are wrong would simply be the end of science as we know it. If we took such a solution seriously, we would simply stop doing science, because it would be well known that the methods we use are inconsistent. (Ok, also not a decisive argument - we do a lot of inconsistent things anyway.)

Whatever, there is a hierarchy: we have rules and hypotheses which make science possible, and to reject them would make science meaningless. They are, of course, only human inventions too, but if they are wrong, doing science becomes meaningless. We would probably continue doing science, because humans like to continue doing things even after they have recognized that doing them is meaningless, which is what is named culture. But this culture named science would not really be science as it is today, an endeavor to understand reality, to find explanations, but would be like the atheist going to church as part of his life in a formerly religious culture.

But this has not happened yet; at least for me, doing science still has some of its original meaning and is an endeavor to understand reality, to find explanations which are consistent with the rules of logic, of consistent reasoning. And this requires that some ideas, like the rules of logic, of consistent reasoning, the existence of some external reality, and the existence of explanations, have to be true.

It is not only the point that giving them up would make science meaningless. It is also that there is no imaginable evidence which would motivate it. Because, whatever the conflict with observation, this would be always only an open scientific problem. And giving up science because there are open scientific problems? Sorry, this makes no sense. Science without open scientific problems would be boring.

rubi said:
You don't have to reject reality. You just have to realize that reality can be different from what one might naively assume. Nature is the ultimate judge. She doesn't care about our philosophical preferences.
Of course, one could imagine a Nature so that some beings in this Nature would be unable in principle to invent a theory about it without logical contradictions.

rubi said:
You were claiming that there cannot be a common cause explanation for the Bell correlations in QM. This is wrong. The true statement would be that hidden variable theories are incompatible with a common cause explanation. Theories that reject hidden variables might still allow for a common cause.
Simply wrong. There are causal explanations; they are even quite simple and straightforward, but they violate Einstein causality. This is not really a big problem. Anyway, the other appearances of a similar symmetry (like for acoustic wave equations, where Lorentz transformations with the speed of sound also transform solutions into other solutions of the wave equation) are known not to be fundamental.

rubi said:
Right, it is my personal theory (and also the personal theory of many others). It is also your personal theory that the world must be described by hidden variables, which is also a non-trivial assumption about reality.
Fine.

rubi said:
QM is not in conflict with logic. It is built using standard mathematics, which uses nothing but classical logic. Hence, we can use classical logic to talk about QM. Also, reasonability isn't a necessity for a physical theory. A physical theory must describe nature. If nature contradicts our intuition, then we have to adjust our intuition. Millions of physicists have learned quantum theory and have acquired a good intuition for quantum phenomena.
It is you who claims QM is in conflict with logic, namely with the rules of probability theory, which are, following Jaynes' Probability Theory: The Logic of Science, the rules of consistent plausible reasoning. Consistent reasoning is not at all about intuition.
 
  • #85
rubi said:
The most widely known one is the Kochen-Specker theorem.

The Kochen-Specker theorem is about hidden variable models and uses assumptions beyond only probability theory. (Also, it only applies to Hilbert spaces of dimension three or more, so it says nothing about spin 1/2.) I think Kochen and Specker themselves pointed out that you can always construct a joint probability distribution for different measurements just by taking the probabilities given by the Born rule and multiplying them, like I pointed out in my earlier post. I think their stance was that this sort of thing didn't make a very satisfactory hidden variable model, but if the exercise is just to invent a joint probability distribution in order to express quantum physics in some axiomatic language that requires it then it looks to me like you could do it this way.

However, in order to see that QM can't be an ordinary probability theory

I wouldn't consider QM a probability theory in the first place. It's a physics theory that uses elements from probability theory as well as other areas of mathematics in its formulation. You're acting as if you think all of QM should be seen as a special case of Kolmogorov probability theory. Why would we want to do this, independently of whether it is even possible? We don't try to express all of electromagnetism or Newtonian physics or general relativity in the language of Kolmogorov probability theory, and yet this doesn't prevent us from being able to reason about and discuss and contrast the causal structure of these theories, so why would you insist it should be done for QM?
 
  • #86
wle said:
We don't try to express all of electromagnetism or Newtonian physics or general relativity in the language of Kolmogorov probability theory, and yet this doesn't prevent us from being able to reason about and discuss and contrast the causal structure of these theories, so why would you insist it should be done for QM?

I think it is trivial to express all of electromagnetism, GR etc in the language of Kolmogorov probability theory. For example, for classical mechanics one uses Liouville time evolution.
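(Presumably the standard construction is meant; as a sketch for illustration: take phase space as the sample space ##\Omega##, its Borel sets as the event algebra, and a density ##\rho(q,p,t)## as the probability measure. Every classical observable ##f(q,p)## is then an ordinary random variable with ##\langle f\rangle = \int f\,\rho\,dq\,dp##, and the dynamics is the Liouville equation $$\frac{\partial\rho}{\partial t} = \{H,\rho\} = \sum_i\left(\frac{\partial H}{\partial q_i}\frac{\partial\rho}{\partial p_i} - \frac{\partial H}{\partial p_i}\frac{\partial\rho}{\partial q_i}\right),$$ which preserves total probability, so Kolmogorov's axioms are respected at all times.)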
 
  • Like
Likes Ilja
  • #87
rubi said:
[..]
Here is a true theorem: No classical probability theory can reproduce the observed statistics of spin particles.

Likely untrue. Apparently debunked in this paper, which was linked from a recent post here (I could not find the post again, but did find the paper):

http://arxiv.org/abs/1305.1280 "The Pilot-Wave Perspective on Spin" -Norsen

rubi said:
The most widely known one is the Kochen-Specker theorem. However, in order to see that QM can't be an ordinary probability theory, you just need to notice that it has a probability distribution for both ##S_x## and ##S_z##. If these observables were random variables on a probability space, then you would be able to compute the probability ##P(S_x = +\wedge S_z = +)##. However, QM can't compute this number and hence can't be an ordinary probability theory (at least if what you'd consider an "ordinary probability theory" would satisfy Kolmogorov's axioms, which is the standard definition).

See elaborate discussion in the link here above. :smile:
 
  • Like
Likes Ilja
  • #88
rubi said:
The most widely known one is the Kochen-Specker theorem.
Oh, I haven't seen this. This clarifies the issue. Kochen-Specker presumes non-contextuality, while the known hidden variable theories like dBB are contextual.

This is, translated into layman's language, the point I have made many times: in a contextual theory, what is named "measurement" is something different, an interaction, and its result depends both on the state of the "measured system" and on the state of the "measurement device", so that it is not a property of the system which is "measured".
 
  • #89
Well, a measurement of course always depends on the state of the measured object and the measurement apparatus, and to measure something implies that the measured object must interact with the measurement apparatus independent of the religion you follow in your metaphysical worldview ;-)).
 
  • #90
It's the same problem as asking what hidden variable contained in two particles tells them that they have collided with each other.
The particles contain no property, like temperature, mass, etc., that gives them their location in space. So how do they know when they collide?
 
  • #91
LaserMind said:
It's the same problem as asking what hidden variable contained in two particles tells them that they have collided with each other.
The particles contain no property, like temperature, mass, etc., that gives them their location in space. So how do they know when they collide?

Are you talking about entanglement of observables? Particles do not need to "directly" interact (collide) to become entangled on a basis.
 
  • #92
vanhees71 said:
Well, a measurement of course always depends on the state of the measured object and the measurement apparatus, and to measure something implies that the measured object must interact with the measurement apparatus independent of the religion you follow in your metaphysical worldview ;-)).
Fine. So you accept that Kochen-Specker is a theorem about theories where the result does not depend on the state of the measurement apparatus, but has to be predefined by the measured object, and is therefore not relevant for hidden variable theories at all?
 
  • #93
Ilja said:
In the minimal interpretation, QM does not pretend to describe reality. If an interpretation claims that no reality exists, I reject it as nonsensical. But the minimal interpretation does not make such claims, it simply does not give a description of reality.
Of course, QM describes reality. It's just that our naive picture of reality needs to be modified. Open minded people without philosophical prejudices about the world have no problem with that.

I never made such a claim. Reichenbach's principle claims the existence of causal explanations, like common causes. It also specifies what a common cause is.

There are rules of reasoning which cannot be proven false by any observation, because deriving any nontrivial prediction - something which could be falsified by observation - has to use them. So, claiming that these rules are wrong would simply be the end of science as we know it. If we took such a solution seriously, we would simply stop doing science, because it would be well known that the methods we use are inconsistent. (Ok, also not a decisive argument - we do a lot of inconsistent things anyway.)

Whatever, there is a hierarchy: we have rules and hypotheses which make science possible, and to reject them would make science meaningless. They are, of course, only human inventions too, but if they are wrong, doing science becomes meaningless. We would probably continue doing science, because humans like to continue doing things even after they have recognized that doing them is meaningless, which is what is named culture. But this culture named science would not really be science as it is today, an endeavor to understand reality, to find explanations, but would be like the atheist going to church as part of his life in a formerly religious culture.

But this has not happened yet; at least for me, doing science still has some of its original meaning and is an endeavor to understand reality, to find explanations which are consistent with the rules of logic, of consistent reasoning. And this requires that some ideas, like the rules of logic, of consistent reasoning, the existence of some external reality, and the existence of explanations, have to be true.

It is not only the point that giving them up would make science meaningless. It is also that there is no imaginable evidence which would motivate it. Because, whatever the conflict with observation, this would be always only an open scientific problem. And giving up science because there are open scientific problems? Sorry, this makes no sense. Science without open scientific problems would be boring.
None of this makes sense. Science doesn't depend on any of this. We're making progress almost on a daily basis.

Of course, one could imagine a Nature so that some beings in this Nature would be unable in principle to invent a theory about it without logical contradictions.
There are no logical contradictions in QM. QM is fully consistent with classical logic. If you don't agree, provide a counterexample.

Simply wrong. There are causal explanations; they are even quite simple and straightforward, but they violate Einstein causality. This is not really a big problem. Anyway, the other appearances of a similar symmetry (like for acoustic wave equations, where Lorentz transformations with the speed of sound also transform solutions into other solutions of the wave equation) are known not to be fundamental.
Simply wrong. The absence of common cause explanations cannot be proven for QM.

It is you who claims QM is in conflict with logic, namely with the rules of probability theory, which are, following Jaynes' Probability Theory: The Logic of Science, the rules of consistent plausible reasoning. Consistent reasoning is not at all about intuition.
No, I claim that QM is fully consistent with logic. What you call logic isn't actually logic, but rather a formalization of classical intuition. It is unreasonable to expect nature to work according to classical intuition.

wle said:
The Kochen-Specker theorem is about hidden variable models and uses assumptions beyond only probability theory.
No, the assumptions actually formalize some concepts that must be obeyed by a classical probability theory (non-contextuality). No non-classical probability theory will violate them.

I think Kochen and Specker themselves pointed out that you can always construct a joint probability distribution for different measurements just by taking the probabilities given by the Born rule and multiplying them, like I pointed out in my earlier post. I think their stance was that this sort of thing didn't make a very satisfactory hidden variable model, but if the exercise is just to invent a joint probability distribution in order to express quantum physics in some axiomatic language that requires it then it looks to me like you could do it this way.
The probability distribution you get by taking the product measure will be inconsistent with certain functional relationships between random variables that must hold in a classical probability theory. No probability distributions of random variables can be consistent with certain QM statistics. That is the theorem.

I wouldn't consider QM a probability theory in the first place. It's a physics theory that uses elements from probability theory as well as other areas of mathematics in its formulation.
QM is a theory that assigns probabilities to certain events. It does this in a fashion, which is incompatible with classical probability theory. If you don't want to call it a generalized probability theory, fine. That doesn't change the mathematical content of my statement.

You're acting as if you think all of QM should be seen as a special case of Kolmogorov probability theory. Why would we want to do this, independently of whether it is even possible?
I don't, but Ilja needs to, if he wants to apply Reichenbach's principle to QM. Reichenbach's principle requires a classical probability theory to be applied.

harrylin said:
Likely untrue. Apparently debunked in this paper, which was linked from a recent post here (I could not find the post again, but did find the paper):

http://arxiv.org/abs/1305.1280 "The Pilot-Wave Perspective on Spin" -Norsen
This paper doesn't construct a classical probability theory with spin observables modeled as random variables. (By the way, it is even possible for a single isolated spin 1/2 particle, but that is pretty much the only exception.)

Ilja said:
Oh, I haven't seen this. This clarifies the issue. Kochen-Specker presumes non-contextuality, while the known hidden variable theories like dBB are contextual.
Non-contextuality is exactly the assumption that observables can be modeled by classical random variables on a probability space, hence it proves my point. Of course dBB must be contextual, since Kochen-Specker proved that it cannot be non-contextual if it wants to reproduce QM.

This is, translated into layman's language, the point I have made many times: in a contextual theory, what is named "measurement" is something different, an interaction, and its result depends both on the state of the "measured system" and on the state of the "measurement device", so that it is not a property of the system which is "measured".
I know that hidden variables must be contextual. That is exactly my point. You cannot cook up a classical probability theory that reproduces all statistics that can be computed from quantum mechanics.
 
  • #94
rubi said:
Of course, QM describes reality. It's just that our naive picture of reality needs to be modified. Open minded people without philosophical prejudices about the world have no problem with that.
"Open minded" people without prejudices also have no problem accepting Buddhism as a description of reality. Sorry for being closed-minded on this, but for me a realistic theory has to describe everything that is supposed to exist in reality, and this should include all the things around us which nobody doubts really exist. Including some equations for how they change their states.

rubi said:
None of this makes sense. Science doesn't depend on some any of this. We're making progress almost on a daily basis.
Of course we make progress - because we do not give up the search for realistic causal explanations. Everywhere except in fundamental physics.
rubi said:
Simply wrong. The absence of common cause explanations cannot be proven for QM.
It can be. We observe 100% correlations if A and B measure in the same direction. The common cause explanation would be that some common cause ##\lambda## determines these measurement results. This common cause exists in the past, thus with some probability distribution ##\rho(\lambda)## independent of a and b, and it determines the measurement results A and B. This gives us the functions ##A(a,\lambda)##, ##B(b,\lambda)## we need to prove Bell's inequality. Once Bell's inequality is violated, the common cause explanation is excluded.
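(Written out in the standard way, for reference: the common cause assumption is the factorization $$E(a,b) = \int d\lambda\,\rho(\lambda)\,A(a,\lambda)\,B(b,\lambda), \qquad |A(a,\lambda)| \le 1, \quad |B(b,\lambda)| \le 1,$$ from which the CHSH form of Bell's inequality $$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2$$ follows for any ##\rho(\lambda)##, while quantum mechanics predicts ##2\sqrt{2}## for suitable settings on the singlet state.)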
rubi said:
No, I claim that QM is fully consistent with logic. What you call logic isn't actually logic, but rather a formalization of classical intuition.
No. The essential argument used by Jaynes is consistency, which is a sufficiently precise requirement, and not some diffuse intuition.

rubi said:
No, the assumptions actually formalize some concepts that must be obeyed by a classical probability theory (non-contextuality). No non-classical probability theory will violate them.
dBB violates them, it is contextual, despite the fact that it is a completely classical, consistent, realistic, causal, deterministic theory. So, with non-contextuality you add some philosophical prejudice to probability theory which is not part of it.
rubi said:
QM is a theory that assigns probabilities to certain events. It does this in a fashion, which is incompatible with classical probability theory. If you don't want to call it a generalized probability theory, fine. That doesn't change the mathematical content of my statement.
It is incompatible with non-contextuality. Not with classical probability theory. Don't mingle a particular confusion about what happens (naming interactions "measurements" and results of the interactions "measurement results" even if nothing indicates that the result depends only on one part of the interaction) with the fundamental laws of consistent plausible reasoning known as probability theory.
rubi said:
I don't, but Ilja needs to, if he wants to apply Reichenbach's principle to QM. Reichenbach's principle requires a classical probability theory to be applied.
There is no problem with this. I can always use consistent reasoning. And I know that the laws of consistent plausible reasoning are those of probability theory. Read Jaynes.
rubi said:
This paper doesn't construct a classical probability theory with spin observables modeled as random variables. (By the way, it is even possible for a single isolated spin 1/2 particle, but that is pretty much the only exception.)
It does not construct what you name "classical probability theory", and what other people name a non-contextual model, because nobody needs it and nobody thinks that it correctly describes reality.
rubi said:
Non-contextuality is exactly the assumption that observables can be modeled by classical random variables on a probability space, hence it proves my point.
Maybe I will start to name my ether theory "classical logic"? This would allow me to accuse everybody who rejects the ether of making logical errors, and to prove that ether theory follows from logic alone. That would be similar to your attempt to give non-contextuality (a very strange assumption about results of interactions, which would be appropriate only for a very special subclass of interactions named measurements) the status of an axiom of consistent plausible reasoning.
rubi said:
I know that hidden variables must be contextual. That is exactly my point. You cannot cook up a classical probability theory that reproduces all statistics that can be computed from quantum mechanics.
I can. Take dBB theory in quantum equilibrium. Read Bohm 1952 for the proof.
 
  • #95
Ilja said:
It can be. We observe 100% correlations if A and B measure in the same direction. The common cause explanation would be that some common cause ##\lambda## determines these measurement results. This common cause exists in the past, thus with some probability distribution ##\rho(\lambda)## independent of a and b, and it determines the measurement results A and B. This gives us the functions ##A(a,\lambda)##, ##B(b,\lambda)## we need to prove Bell's inequality. Once Bell's inequality is violated, the common cause explanation is excluded.
No, your probability distribution needn't exist. In a quantum world, the common cause ##\lambda## might not commute with the observables of ##A## and ##B##, hence your joint probability distribution might not exist and so none of the remaining reasoning can be carried out without a common cause principle that works for non-commuting observables. It's that simple. Reichenbach's principle can't be applied in this situation. It's just not general enough.

Ilja said:
I can. Take dBB theory in quantum equilibrium. Read Bohm 1952 for the proof.
dBB theory doesn't reproduce the statistics of spin independent of measurement contexts. QM predicts probability distributions for spin independent of a measurement context.
 
  • #96
rubi said:
No, the assumptions actually formalize some concepts that must be obeyed by a classical probability theory (non-contextuality). No non-classical probability theory will violate them.

How so? The Kochen-Specker theorem includes an assumption that (deterministic) values ##v## associated with quantum observables (Hermitian operators) satisfy conditions like ##v(A + B) = v(A) + v(B)## and ##v(AB) = v(A) v(B)## for all commuting ##A## and ##B##. Quantum observables are a concept specific to quantum physics that doesn't appear at all in probability theory.
No probability distributions of random variables can be consistent with certain QM statistics. That is the theorem.

I don't think you've justified that.

But let's say you're correct, and it's impossible to fully embed quantum physics in the language of Kolmogorov probability theory. In practice you may as well be correct anyway since quantum physics is not normally expressed in that language (whether there is a way to do it or not). So what? However you classify the type of probability theory quantum physics uses, we've been using it in physics since QM was first formulated back in the 1920s and 1930s and for the most part we don't think anything special of it. In particular, when we talk about quantum behaviour and correlations and we contrast this with various types of "classical" behaviour, this is not what we are talking about.
I don't, but Ilja needs to, if he wants to apply Reichenbach's principle to QM. Reichenbach's principle requires a classical probability theory to be applied.

I wouldn't agree with this. The main reason QM doesn't fit neatly into Kolmogorov probability theory is that we treat measurement choice as a free variable. I wouldn't consider this a good reason to shut down a discussion about whether certain types of causal explanation for quantum correlations are possible or not.

An example: if you allow sufficiently fast classical communication then it's possible to simulate arbitrary (even Bell-violating) quantum correlations. If you gave me a "magic" ethernet cable that could transmit data instantaneously then I could program two computers, communicating with each other using this cable and accepting measurement choices as inputs, to generate outputs in accord with correlations predicted by QM. I would in principle consider this sort of thing a valid candidate causal explanation for QM correlations.
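A minimal sketch of the kind of simulation described (for illustration only; the settings are angles in a plane, the "magic cable" is modeled by simply letting Bob's function read Alice's setting and outcome, and all names are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def alice(a):
    """Alice's station: output +1 or -1 uniformly (the correct local marginal)."""
    return rng.choice([+1, -1])

def bob(b, a, A):
    """Bob's station. The extra arguments (a, A) stand for the data received
    instantaneously over the hypothetical 'magic' channel."""
    # Output B = A with probability sin^2((a - b)/2), else B = -A.
    # This reproduces the singlet correlation E(a, b) = -cos(a - b).
    p_same = np.sin((a - b) / 2) ** 2
    return A if rng.random() < p_same else -A

def estimate_E(a, b, n=200_000):
    """Monte Carlo estimate of the correlation E(a, b) = <A*B>."""
    total = 0
    for _ in range(n):
        A = alice(a)
        B = bob(b, a, A)      # the FTL step
        total += A * B
    return total / n

a, b = 0.0, np.pi / 3
print(estimate_E(a, b), -np.cos(a - b))   # both should be close to -0.5
```

The estimated correlation matches the singlet prediction ##E(a,b) = -\cos(a-b)##, while each station's local outcomes remain uniformly random.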
 
  • #97
wle said:
How so? The Kochen-Specker theorem includes an assumption that (deterministic) values ##v## associated with quantum observables (Hermitian operators) satisfy conditions like ##v(A + B) = v(A) + v(B)## and ##v(AB) = v(A) v(B)## for all commuting ##A## and ##B##. Quantum observables are a concept specific to quantum physics that doesn't appear at all in probability theory.
Because random variables satisfy these properties by definition: ##(AB)(x) = A(x)B(x)##, because this is how ##AB## is defined. The same holds for addition. It has nothing to do with quantum observables; it's just the functional relationships between the valuations. Kochen-Specker assumes that the random variables that would represent the quantum observables in a classical probability theory satisfy the usual functional relationships that classical random variables obey by definition. Kochen-Specker essentially says that you cannot embed the quantum probabilities into classical probability theory without having to readjust the very definitions of multiplication and addition of random variables.
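(Spelled out, for reference: classical random variables on a sample space ##\Omega## are combined pointwise, $$(A+B)(\omega) = A(\omega) + B(\omega), \qquad (AB)(\omega) = A(\omega)\,B(\omega),$$ so the valuation ##v(A) := A(\omega_0)## at any fixed sample point ##\omega_0 \in \Omega## satisfies ##v(A+B) = v(A) + v(B)## and ##v(AB) = v(A)\,v(B)## automatically. These are exactly the functional constraints assumed in the Kochen-Specker theorem for commuting observables.)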

I don't think you've justified that.
The Kochen-Specker theorem proves it.

But let's say you're correct, and it's impossible to fully embed quantum physics in the language of Kolmogorov probability theory. In practice you may as well be correct anyway since quantum physics is not normally expressed in that language (whether there is a way to do it or not). So what? However you classify the type of probability theory quantum physics uses, we've been using it in physics since QM was first formulated back in the 1920s and 1930s and for the most part we don't think anything special of it. In particular, when we talk about quantum behaviour and correlations and we contrast this with various types of "classical" behaviour, this is not what we are talking about.
It is not problematic that QM uses a generalized way of computing probabilities. It just means that certain concepts that used to work in classical probability theory cannot be carried over to the quantum framework without modification (such as Reichenbach's principle). This doesn't change how we use quantum theory in any way. Most physicists even understand intuitively how to use quantum mechanics correctly without understanding mathematically what is different about it.

I wouldn't agree with this. The main reason QM doesn't fit neatly into Kolmogorov probability theory is that we treat measurement choice as a free variable
No, the reason why QM doesn't fit into Kolmogorov probability theory is that the event algebra is a certain orthomodular lattice rather than a sigma algebra. The probability "measure" assigns probabilities to elements of these algebras. While in a sigma algebra there always exists a third element ##A\wedge B## (the "meet") for any given events ##A## and ##B## (namely ##A\cap B##), this is no longer true for an orthomodular lattice. However, Kolmogorov's axioms of probability theory depend on the existence of ##A\wedge B##. Hence, all theorems that are derived from Kolmogorov's axioms, and all concepts that depend on them, need to be adjusted to the new situation.

I wouldn't consider this a good reason to shut down a discussion about whether certain types of causal explanation for quantum correlations are possible or not.

An example: if you allow sufficiently fast classical communication then it's possible to simulate arbitrary (even Bell-violating) quantum correlations. If you gave me a "magic" ethernet cable that could transmit data instantaneously then I could program two computers, communicating with each other using this cable and accepting measurement choices as inputs, to generate outputs in accord with correlations predicted by QM. I would in principle consider this sort of thing a valid candidate causal explanation for QM correlations.
This may be a valid causal explanation. However, the point is whether there can be causal explanations that don't violate the speed of light. Ilja wants to exclude those by naively using concepts of classical probability theory, which are known to not even be well-defined in the context of quantum theory. Certainly, this is not valid reasoning.
 
  • #98
rubi said:
This may be a valid causal explanation. However, the point is whether there can be causal explanations that don't violate the speed of light. Ilja wants to exclude those by naively using concepts of classical probability theory, which are known to not even be well-defined in the context of quantum theory. Certainly, this is not valid reasoning.

But that is what Bell's theorem says. As we have discussed, you can escape it by using a more general notion of cause than that used in Bell's theorem, which is fine. But in that case you should simply clarify your terminology.
 
  • #99
atyy said:
But that is what Bell's theorem says.
Bell's theorem is a theorem about theories formulated in the language of classical probability theory. It's not a theorem about quantum theory. We just use it to conclude that quantum theory is different from classical probability by noting that QM can violate an inequality that is not violated by certain classical theories. Bell says that there is a class of theories (local hidden variable theories) that satisfy a certain inequality. QM is not in that class. Bell's theorem can't be used to conclude anything about theories that are not both local and hidden variable theories.

As we have discussed, you can escape it by using a more general notion of cause than that used in Bell's theorem, which is fine. But in that case you should simply clarify your terminology.
No, it's not a more general notion of cause. Cause means the same as always. What changes is how we can tell what constitutes a cause just by looking at statistics. However, if the statistics is no longer compatible with classical probability theory, then it is to be expected that one needs to adjust the method used to tell whether some statistics hints at a causal relationship or not. Specifically, using Reichenbach's principle makes no sense in the context of quantum statistics. It's just not applicable here. That just means that we have no clear criterion that tells us what constitutes a common cause. The notion of cause itself is not modified.
 
  • #100
rubi said:
Nobody knows what the right assumptions about reality are. It's your personal opinion that QM doesn't describe reality.

Indeed.

:smile::smile::smile::smile::smile::smile:

Thanks
Bill
 
  • #101
rubi said:
Because random variables satisfy these properties by definition: ##(AB)(x) = A(x)B(x)##, because this is how the ##AB## is defined.

You've lost me. What are ##A##, ##B##, and ##x## supposed to be in the context of Kolmogorov probability theory and what do they have to do with the assumptions ##v(A + B) = v(A) + v(B)## and ##v(AB) = v(A) v(B)## for commuting quantum observables ##A## and ##B## in the Kochen-Specker theorem?

The Kochen-Specker theorem proves it.

I don't think you've justified that. You haven't proved that the assumptions behind the Kochen-Specker theorem are equivalent to or follow from the axioms of Kolmogorov probability theory, and proofs of the Kochen-Specker theorem don't claim any such thing.
Hence, all theorems that are derived from Kolmogorov's axioms and all concepts that depend on this need to be adjusted to the new situation.

Likewise, theorems that don't use all of Kolmogorov's axioms are not necessarily restricted to Kolmogorov probability theory. Kolmogorov probability theory requires that the joint event ##A \wedge B## exists for all events ##A## and ##B##, like you say. Bell's theorem, for example, does not require that all possible joint events exist.
This may be a valid causal explanation. However, the point is whether there can be causal explanations that don't violate the speed of light.

Why should that make a difference? If you accept that quantum correlations can be simulated by two computers communicating with each other faster than light, then you can certainly ask if two computers could simulate quantum correlations without communicating faster than light.

This is why I say your insistence on Kolmogorov probability theory isn't relevant. It is not difficult to program a computer to output random results in accord with the Born rule, and two computers allowed to communicate FTL could be programmed to simulate arbitrary quantum correlations. If you insist that we can only reason about causality, Reichenbach's principle, etc. within a certain mathematical framework, and that framework doesn't accommodate something I can simulate on a computer, then I'd say it's not a good framework to begin with.

It's the same if you look at the historical origins behind Bell's theorem. Essentially, Bell was aware that nonlocal hidden variable models like the de Broglie-Bohm interpretation could reproduce the predictions of quantum physics, and he was interested in the question of whether a local hidden variable model could achieve the same thing. So, similarly, if your framework for discussing causality doesn't accommodate the de Broglie-Bohm interpretation then it is not relevant to understanding Bell's theorem, at least not in the way Bell thought about it.
 
  • #102
wle said:
You've lost me. What are ##A##, ##B##, and ##x## supposed to be in the context of Kolmogorov probability theory and what do they have to do with the assumptions ##v(A + B) = v(A) + v(B)## and ##v(AB) = v(A) v(B)## for commuting quantum observables ##A## and ##B## in the Kochen-Specker theorem?
Is it really so hard to understand? The assumptions of the Kochen-Specker theorem require that the valuations of quantum observables follow the same rules as the valuations of classical random variables. Since the valuation of a classical random variable is given by ##v(A) = A(x)## and the product of classical random variables is defined by ##(AB)(x) := A(x)B(x)##, the requirements of Kochen-Specker follow (same for addition). Kochen-Specker says that one cannot represent quantum observables on a classical probability space without having to redefine multiplication or addition of random variables.

I don't think you've justified that. You haven't proved that the assumptions behind the Kochen-Specker theorem are equivalent to or follow from the axioms of Kolmogorov probability theory, and proofs of the Kochen-Specker theorem don't claim any such thing.
I (or rather Kochen and Specker themselves) have proven that not all quantum observables can be represented as classical random variables on a classical probability space. This is also not my personal claim, but it is standard knowledge that can be looked up in pretty much every book on quantum mechanics.

Likewise, theorems that don't use all of Kolmogorov's axioms are not necessarily restricted to Kolmogorov probability theory. Kolmogorov probability theory requires that the joint event ##A \wedge B## exists for all events ##A## and ##B##, like you say. Bell's theorem, for example, does not require that all possible joint events exist.
Of course, Bell's theorem requires that, because it wants to make statements about all possible events. Otherwise it can only make statements like: "Among the events that commute with ##A## and ##B##, none can be a common cause" or "No theory of local hidden variables for events commuting with ##A## and ##B## can reproduce all predictions of QM". For a classical probability theory, this is the same as Bell's theorem, since all classical random variables commute. But it would be a weak result for QM, since it doesn't allow one to conclude the non-existence of a common cause or non-locality in QM, given that in QM there are also events not commuting with ##A## and ##B##.

Why should that make a difference? If you accept that quantum correlations can be simulated by two computers communicating with each other faster than light, then you can certainly ask if two computers could simulate quantum correlations without communicating faster than light.
Two computers of course cannot do that, since they are classical objects. You need quantum objects to generate quantum statistics. The analogy with computers makes no sense here.

This is why I say your insistence on Kolmogorov probability theory isn't relevant.
Of course, it is highly relevant. It is really super trivial: Concepts that only work in Kolmogorov probability theory cannot be applied outside of Kolmogorov probability theory. Apparently, you deny this simple fact.

It is not difficult to program a computer to output random results in accord with the Born rule, and two computers allowed to communicate FTL could be programmed to simulate arbitrary quantum correlations. If you insist that we can only reason about causality, Reichenbach's principle, etc. within a certain mathematical framework, and that framework doesn't accommodate something I can simulate on a computer, then I'd say it's not a good framework to begin with.
You can of course simulate quantum physics on a computer, but you cannot have computers behave like quantum objects. There is no logical problem here. My whole point is that quantum theory is not a classical probability theory. This is really standard and well-known and it makes no sense to doubt it. Hence, concepts that require classical probability theory just don't work anymore in the context of quantum mechanics. This is a fact of life. Of course, you can prefer Bohmian mechanics, but then you can only use the old concepts to make statements about Bohmian mechanics and not about quantum theory.

It's the same if you look at the historical origins behind Bell's theorem. Essentially, Bell was aware that nonlocal hidden variable models like the de Broglie-Bohm interpretation could reproduce the predictions of quantum physics, and he was interested in the question of whether a local hidden variable model could achieve the same thing.
Yes and of course he proved that no local hidden variable model can make the same predictions as QM. One cannot prove the theorem without the assumption of hidden variables. Hence, the theorem says nothing about theories without hidden variables, such as QM.

So, similarly, if your framework for discussing causality doesn't accommodate the de Broglie-Bohm interpretation then it is not relevant to understanding Bell's theorem, at least not in the way Bell thought about it.
First of all, it is not my framework, but the generally accepted framework of physics. What I am saying is generally agreed upon by all working physicists. Of course, dBB theory can be formulated within this framework. Then you just can't prove Bell's theorem anymore. However, dBB theory can also be formulated as a classical probability theory and hence, Bell's theorem applies. This is only possible, since dBB theory doesn't have all the quantum observables as random variables, since the KS theorem prohibits it. Not even Bohmians deny this.
 
  • #103
rubi said:
Is it really so hard to understand? The assumptions of the Kochen-Specker theorem require that the valuations of quantum observables follow the same rules as the valuations of classical random variables. Since the valuation of a classical random variable is given by ##v(A) = A(x)## and the product of classical random variables is defined by ##(AB)(x) := A(x)B(x)##, the requirements of Kochen-Specker follow (same for addition). Kochen-Specker says that one cannot represent quantum observables on a classical probability space without having to redefine multiplication or addition of random variables.

You haven't answered my question. In terms of the axioms of Kolmogorov probability theory, what are your ##A##, ##B##, and, especially, ##x## supposed to be?
Of course, Bell's theorem requires that, because it wants to make statements about all possible events.

No it doesn't. The mathematical assumption that Bell inequalities are derived from is that (bipartite) correlations can be expressed in the form $$P(ab \mid xy) = \int \mathrm{d} \lambda \rho(\lambda) P_{\mathrm{A}}(a \mid x; \lambda) P_{\mathrm{B}}(b \mid y; \lambda) \,,$$ where ##a##, ##b##, ##x##, and ##y## are labels associated to measurement outcomes and choices, respectively. This does not require joint events to be defined except where quantum physics already says they exist.
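As a toy illustration of what this factorized form allows (a sketch of my own, with made-up settings and deterministic response functions), here is a local model in Python; any model of this form obeys the CHSH bound of 2, however the response functions are chosen:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_model_chsh(n_trials=200_000):
    """Toy local hidden-variable model in the factorized form above:
    lambda is shared, each outcome depends only on the local setting and lambda."""
    alice_angles = [0.0, np.pi / 2]           # Alice's settings x = 0, 1
    bob_angles = [np.pi / 4, -np.pi / 4]      # Bob's settings y = 0, 1

    E = np.zeros((2, 2))                      # estimated correlators E(x, y)
    for x in range(2):
        for y in range(2):
            lam = rng.uniform(0.0, 2 * np.pi, n_trials)    # shared hidden variable
            a = np.sign(np.cos(lam - alice_angles[x]))     # depends only on (x, lambda)
            b = np.sign(np.cos(lam - bob_angles[y]))       # depends only on (y, lambda)
            E[x, y] = np.mean(a * b)

    return E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]

# Stays at or below 2 (up to sampling noise).
print("CHSH value of the local model:", local_model_chsh())
```

At these particular settings the local model saturates the classical bound at exactly 2, whereas the quantum prediction for a singlet state with the same angles is ##2\sqrt{2}##.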
Two computers of course cannot do that, since they are classical objects. You need quantum objects to generate quantum statistics. The analogy with computers makes no sense here.

Wrong. Computers can calculate the Born rule probability ##P(a \mid \rho) = \mathrm{Tr}[M_{a} \rho]## of obtaining a result ##a## from measuring the POVM ##\{M_{a}\}_{a}## on an initial state ##\rho##. A computer can equally easily generate a random result with this probability given the state and measurement as inputs. The only limitations are technological: finite precision of floating point computations, quality of random number generators, and, for high-dimensional Hilbert spaces, processing speed and available memory.
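To make this concrete, here is a minimal sketch (my own code, with made-up names) of computing Born-rule probabilities for a given POVM and state and sampling an outcome accordingly:

```python
import numpy as np

rng = np.random.default_rng()

def born_probabilities(povm, rho):
    """Born rule: p(a) = Tr[M_a rho] for each POVM element M_a."""
    p = np.array([np.real(np.trace(M @ rho)) for M in povm])
    return p / p.sum()            # guard against small floating-point rounding

def sample_outcome(povm, rho):
    """Draw one outcome label a with probability Tr[M_a rho]."""
    return rng.choice(len(povm), p=born_probabilities(povm, rho))

# Example: measuring {|+><+|, |-><-|} on |0><0| gives each outcome with probability 1/2.
rho = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
plus = np.array([[1, 1], [1, 1]], dtype=complex) / 2     # projector onto |+>
minus = np.array([[1, -1], [-1, 1]], dtype=complex) / 2  # projector onto |->
print(born_probabilities([plus, minus], rho))            # ~[0.5, 0.5]
print(sample_outcome([plus, minus], rho))                # 0 or 1
```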

With FTL communication, simulating a given set of bipartite quantum correlations ##P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]## on two distant computers would not be much more difficult. One way to do it is to express the probabilities as ##P(ab \mid xy) = P_{1}(a \mid b, xy) P_{2}(b \mid y)## where ##P_{2}(b \mid y) = \sum_{a} P(ab \mid xy)##. Then program Bob's computer to accept ##y## as input, generate ##b## with probability ##P_{2}(b \mid y)##, transmit ##b## and ##y## to Alice's computer, and finally output ##b##. Likewise, Alice's computer would be programmed to accept ##x## as input, read ##y## and ##b## from Bob's computer, and output result ##a## with probability ##P_{1}(a \mid b, xy)##.
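Here is a rough single-process sketch of that two-computer scheme (my own code; the "FTL channel" is just a tuple handed from Bob's function to Alice's, and the state and measurements at the end are only an example):

```python
import numpy as np

rng = np.random.default_rng()

def joint_probs(rho_AB, M, N, x, y):
    """P(a, b | x, y) = Tr[(M_{a|x} tensor N_{b|y}) rho_AB], as a matrix over (a, b)."""
    return np.array([[np.real(np.trace(np.kron(Ma, Nb) @ rho_AB))
                      for Nb in N[y]] for Ma in M[x]])

def bob(rho_AB, M, N, y):
    """Bob's 'computer': sample b from the marginal P_2(b|y), then emit the 'FTL message'."""
    p_b = joint_probs(rho_AB, M, N, 0, y).sum(axis=0)   # marginal is independent of x
    p_b /= p_b.sum()
    b = rng.choice(len(p_b), p=p_b)
    return b, (y, b)

def alice(rho_AB, M, N, x, message):
    """Alice's 'computer': sample a from P_1(a | b, x, y) using Bob's transmitted (y, b)."""
    y, b = message
    p_ab = joint_probs(rho_AB, M, N, x, y)
    p_a = p_ab[:, b] / p_ab[:, b].sum()
    return rng.choice(len(p_a), p=p_a)

def projectors(theta):
    """Projectors of the qubit observable cos(theta) Z + sin(theta) X."""
    v0 = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    v1 = np.array([-np.sin(theta / 2), np.cos(theta / 2)], dtype=complex)
    return [np.outer(v, v.conj()) for v in (v0, v1)]

# Example: singlet state with the usual CHSH settings.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_AB = np.outer(singlet, singlet.conj())
M = [projectors(0.0), projectors(np.pi / 2)]         # Alice's settings x = 0, 1
N = [projectors(np.pi / 4), projectors(-np.pi / 4)]  # Bob's settings y = 0, 1

x, y = 1, 0
b, message = bob(rho_AB, M, N, y)      # Bob runs first and "transmits" (y, b)
a = alice(rho_AB, M, N, x, message)    # Alice uses the message to sample her outcome
print("settings:", (x, y), "outcomes:", (a, b))
```

Running many rounds and estimating the correlators reproduces the singlet statistics, which is the point: once the message is allowed, nothing quantum is needed on either machine.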

Similar to what I say about Bell's theorem above, there is no requirement here that e.g. the probabilities ##P_{2}(b \mid y)## should admit a hidden variable model in the sense of Kochen-Specker or that joint events like ##(b_{y}, b_{y'})## for different inputs ##y## should be defined.
This is also not my personal claim, but it is standard knowledge that can be looked up in pretty much every book on quantum mechanics.

This is really standard and well-known and it makes no sense to doubt it.

First of all, it is not my framework, but the generally accepted framework of physics. What I am saying is generally agreed upon by all working physicists.

I am a working physicist and I'd say your interpretation of the Bell and Kochen-Specker theorems looks highly nonstandard and poorly supported to me. Please stop making claims along the lines "all physicists agree with this" or "all textbooks say this". They don't.

Concerning Kolmogorov, I think most researchers in quantum physics probably don't know offhand, or really much care, what the exact definition of Kolmogorov probability theory is, let alone "generally accept" it as a condition for discussing things like correlation or causality or Reichenbach's principle. So I doubt that a theorem stating that quantum physics is not a Kolmogorov probability theory would even have much impact in the physics community (certainly nothing like the impact of Bell's theorem).
 
  • #104
wle said:
You haven't answered my question. In terms of the axioms of Kolmogorov probability theory, what are your ##A##, ##B##, and, especially, ##x## supposed to be?
##A## and ##B## are random variables on the probability space. ##x## is an element of the probability space. If you knew anything about probability theory at all, this should have been a triviality to you.

The valuation of classical random variables ##v(A)=A(x)## satisfies the assumptions of the KS theorem (for example: ##v(AB) = (AB)(x) = A(x)B(x) = v(A)v(B)##), so if KS prove that no valuation function compatible with these assumptions can be defined for quantum observables, it means that the quantum observables can't be represented by classical random variables. This is the whole point of the KS theorem and the very motivation of Kochen and Specker for proving it in the first place.

No it doesn't. The mathematical assumption that Bell inequalities are derived from is that (bipartite) correlations can be expressed in the form $$P(ab \mid xy) = \int \mathrm{d} \lambda \rho(\lambda) P_{\mathrm{A}}(a \mid x; \lambda) P_{\mathrm{B}}(b \mid y; \lambda) \,,$$ where ##a##, ##b##, ##x##, and ##y## are labels associated to measurement outcomes and choices, respectively. This does not require joint events to be defined except where quantum physics already says they exist.
Wrong. If the event algebra is not a sigma algebra, then your ##\lambda## will not encompass all possible events, but only those events that commute with ##A## and ##B##. Hence, statements proved from this formula will only hold for events that commute with ##A## and ##B##.

Wrong. Computers can calculate the Born rule probability ##P(a \mid \rho) = \mathrm{Tr}[M_{a} \rho]## of obtaining a result ##a## from measuring the POVM ##\{M_{a}\}_{a}## on an initial state ##\rho##. A computer can equally easily generate a random result with this probability given the state and measurement as inputs. The only limitations are technological: finite precision of floating point computations, quality of random number generators, and, for high-dimensional Hilbert spaces, processing speed and available memory.

With FTL communication, simulating a given set of bipartite quantum correlations ##P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]## on two distant computers would not be much more difficult. One way to do it is to express the probabilities as ##P(ab \mid xy) = P_{1}(a \mid b, xy) P_{2}(b \mid y)## where ##P_{2}(b \mid y) = \sum_{a} P(ab \mid xy)##. Then program Bob's computer to accept ##y## as input, generate ##b## with probability ##P_{2}(b \mid y)##, transmit ##b## and ##y## to Alice's computer, and finally output ##b##. Program Alice's computer to accept ##x## as input, read ##y## and ##b## from Bob's computer, and output result ##a## with probability ##P_{1}(a \mid b, xy)##.

Similar to what I say about Bell's theorem above, there is no requirement here that e.g. the probabilities ##P_{2}(b \mid y)## should admit a hidden variable model in the sense of Kochen-Specker or that joint events like ##(b_{y}, b_{y'})## for different inputs ##y## should be defined.
This is not what I said. I said that two computers cannot simulate this without FTL communication. Of course you can simulate a quantum system on a computer if you simulate the whole system on one machine or simulate the systems individually on two machines with FTL communication. What two computers cannot do is generate quantum correlations without FTL communication, using only local data. This is the challenge for Bell deniers. Of course, this is not possible (assuming there are no loopholes). The point is that this says nothing about what quantum objects can do. Computers are sufficiently classical and hence aren't a good analogy to quantum objects.

I am a working physicist and I'd say your interpretation of the Bell and Kochen-Specker theorems looks highly nonstandard and poorly supported to me. Please stop making claims along the lines "all physicists agree with this" or "all textbooks say this". They don't.
I highly doubt that you are a working physicist. "My" interpretation is fully standard and evident to everyone who understands basic probability theory. KS says that QT cannot be embedded into a classical probability theory without changing the definition of multiplication and addition of random variables. This is definitely well established science. I have done my best to explain this to you, but you will not understand it if you don't invest at least a little bit of time into the study of probability theory and the KS theorem.

Concerning Kolmogorov, I think most researchers in quantum physics probably don't know offhand, or really much care, what the exact definition of Kolmogorov probability theory is, let alone "generally accept" it as a condition for discussing things like correlation or causality or Reichenbach's principle. So I doubt that a theorem stating that quantum physics is not a Kolmogorov probability theory would even have much impact in the physics community (certainly nothing like the impact of Bell's theorem).
I don't know where you learned physics from, but probability theory is an elementary part of physics education. Certainly, all physicists know probability theory. It is also fully standard that Bell's theorem requires the assumption of hidden variables. The fact that quantum theory is not a classical probability theory is well understood and there is a whole industry of research devoted to this fact. It is also not a new result, but has been known for half a century already. The impact is that almost 100 years after the discovery of QT, we are still discussing interpretations of QT. If QT were just another classical probability theory, we wouldn't have any interpretational problems.
 
  • #105
rubi said:
##A## and ##B## are random variables on the probability space. ##x## is an element of the probability space. The valuation of classical random variables ##v(A)=A(x)## satisfies the assumptions of the KS theorem (for example: ##v(AB) = (AB)(x) = A(x)B(x) = v(A)v(B)##), so if KS prove that no valuation function compatible with these assumptions can be defined for quantum observables, it means that the quantum observables can't be represented by classical random variables. This is the whole point of the KS theorem and the very motivation of Kochen and Specker for proving it in the first place.

That doesn't support you. You are assuming that deterministic values ##v(A)## and ##v(B)## for different operators must be modeled as the same event ##x##. This is effectively how I'd read the Kochen-Specker theorem if I try to translate it into the language of Kolmogorov probability theory. For example, if you consider the measurement bases ##\mathcal{M}_{1} = \{ \lvert 0 \rangle, \lvert 1 \rangle, \lvert 2 \rangle \}## or ##\mathcal{M}_{2} = \bigl\{ \lvert 0 \rangle, \tfrac{\lvert 1 \rangle + \lvert 2 \rangle}{\sqrt{2}}, \tfrac{\lvert 1 \rangle - \lvert 2 \rangle}{\sqrt{2}}\bigr\}##, which share an eigenvector, then in terms of Kolmogorov probability theory the Kochen-Specker theorem assumes that getting the result ##\lvert 0 \rangle## when measuring in the basis ##\mathcal{M}_{1}## and getting the result ##\lvert 0 \rangle## when measuring in the basis ##\mathcal{M}_{2}## should both be modeled as the same event in the probability space.
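As a quick numerical check of the two bases above (my own snippet, just to make the shared-eigenvector point concrete):

```python
import numpy as np

def ket(*amps):
    return np.array(amps, dtype=complex)

# The two measurement bases from the text; they share the ray |0>.
M1 = [ket(1, 0, 0), ket(0, 1, 0), ket(0, 0, 1)]
M2 = [ket(1, 0, 0), ket(0, 1, 1) / np.sqrt(2), ket(0, 1, -1) / np.sqrt(2)]

for basis in (M1, M2):
    gram = np.array([[np.vdot(u, v) for v in basis] for u in basis])
    assert np.allclose(gram, np.eye(3))      # both are orthonormal bases

# The same projector |0><0| appears in both bases, so the KS-style assumption is
# that "outcome |0> in M1" and "outcome |0> in M2" are one and the same event.
assert np.allclose(np.outer(M1[0], M1[0].conj()), np.outer(M2[0], M2[0].conj()))
print("M1 and M2 are orthonormal bases sharing the projector onto |0>.")
```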

I would not consider that the most general possible way to embed quantum physics in Kolmogorov probability theory.

(In fact, I'm having to guess to make sense of your post because what you're claiming doesn't even look well formed. For example in ##v(A) = A(x)## you have ##A## appearing as both a random variable and as a Hermitian operator.)
Wrong. If the event algebra is not a sigma algebra, then your ##\lambda## will not encompass all possible events, but only such events that commute with ##A## and ##B##. Hence, statements proved from this formula will only hold for events that commute with ##A## and ##B##.

I think you're making this up as you go along.

Since you insist on equating Kolmogorov probability theory with the Kochen-Specker theorem, I'll add this: there's a simple special case that shows that the setting considered in Bell's theorem is not a special case of the Kochen-Specker theorem. Specifically, if you take a product state ##\rho_{\mathrm{AB}} = \rho_{\mathrm{A}} \otimes \rho_{\mathrm{B}}##, the quantum correlation reduces to $$P(ab \mid xy) = \mathrm{Tr} \bigl[(M_{a \mid x} \otimes N_{b \mid y}) (\rho_{\mathrm{A}} \otimes \rho_{\mathrm{B}}) \bigr] = \mathrm{Tr} [ M_{a \mid x} \, \rho_{\mathrm{A}} ] \mathrm{Tr} [ N_{b \mid y} \, \rho_{\mathrm{B}} ] = P_{\mathrm{A}}(a \mid x) P_{\mathrm{B}}(b \mid y) \,.$$ This is (trivially) a Bell-local model in the sense I defined, regardless of what ##P_{\mathrm{A}}(a \mid x) = \mathrm{Tr} [ M_{a \mid x} \, \rho_{\mathrm{A}} ]## and ##P_{\mathrm{B}}(b \mid y) = \mathrm{Tr} [ N_{b \mid y} \, \rho_{\mathrm{B}} ]## are. In particular, ##P_{\mathrm{A}}(a \mid x)## and ##P_{\mathrm{B}}(b \mid y)## may not admit noncontextual models satisfying the Kochen-Specker assumptions, and the model would still count as local.
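A small numerical sanity check of this factorization (my own snippet, with random pure states standing in for ##\rho_{\mathrm{A}}##, ##\rho_{\mathrm{B}}## and random rank-1 projectors standing in for ##M_{a \mid x}##, ##N_{b \mid y}##):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure_state(d):
    """Random pure state |v><v| as a d x d density matrix."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

d = 3
rho_A, rho_B = random_pure_state(d), random_pure_state(d)
M, N = random_pure_state(d), random_pure_state(d)   # rank-1 projectors as stand-ins

lhs = np.trace(np.kron(M, N) @ np.kron(rho_A, rho_B))
rhs = np.trace(M @ rho_A) * np.trace(N @ rho_B)
assert np.isclose(lhs, rhs)
print("Tr[(M tensor N)(rho_A tensor rho_B)] factorizes:", np.real(lhs), "=", np.real(rhs))
```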

In general, a given set of quantum correlations could admit only a local model (in the sense of Bell's theorem), or only a noncontextual model (in the sense of the Kochen-Specker theorem), or both, or neither, so neither class of model is a subset of the other.
This is not what I said. I said that two computers cannot simulate this without FTL communication.

I never said you said that two computers cannot simulate this without FTL communication*. I described the FTL model to point out two things, neither of which you have addressed:
  • If you accept that the FTL model is a valid explanation for quantum correlations then it is a perfectly natural question to ask whether a similar model could explain quantum correlations without FTL communication. Of course we already know the answer is "no" thanks to Bell's theorem.
  • This question has nothing to do with the Kochen-Specker theorem or requiring that there is a joint probability space for everything or the like. The FTL simulation model I described need not necessarily admit a KS noncontextual model, for instance, so there is no reason to ask that there is a KS noncontextual model when you take away the FTL.
This is not a made up scenario. It is a fairly common way to think about Bell's theorem in the physics community and, in fact, there's a version of Bell's theorem (the "nonlocal game" approach) that is explicitly formulated this way.

*If you're going to play "I said/you said" then you said "Two computers of course cannot do that", which leaves it ambiguous exactly what "that" is that you are denying, followed by "since they are classical objects. You need quantum objects to generate quantum statistics.", which is still wrong, since a pair of computers using FTL communication is not a quantum object.
I highly doubt that you are a working physicist.

I work as a postdoctoral researcher in quantum information theory.
I have done my best to explain this to you, but you will not understand it if you don't invest at least a little bit of time into the study of probability theory and the KS theorem.

Oh please. Earlier in this thread you insisted, loudly and repeatedly, that there can't be a hidden-variable model for spin-1/2 until I pointed out that even the KS theorem doesn't apply to that. That immediately tells me you never invested your own "little bit of time" studying the theorem. If you actually look at a proof of the KS theorem (like the one on its Wikipedia page), it is actually quite easy to see why the kind of counterexample they construct cannot work for qubits.
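Concretely, to sketch the standard argument: a qubit basis corresponds to an antipodal pair ##\{\vec{n}, -\vec{n}\}## of Bloch vectors, and no ray belongs to two distinct bases, so the only constraint on a value assignment is ##v(P_{\vec{n}}) + v(P_{-\vec{n}}) = 1## separately for each pair. For example, $$v\bigl(P_{\vec{n}}\bigr) = \begin{cases} 1 & \vec{n} \in H \\ 0 & \vec{n} \notin H \end{cases}$$ for any fixed set ##H## containing exactly one vector of each antipodal pair is a consistent noncontextual assignment, which is why the Kochen-Specker construction needs Hilbert space dimension at least three.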
 
Last edited:
