Need Understandable Explanation Of Bell's Theorem

  • Thread starter adrenaline
  • Start date
  • Tags
    Explanation
In summary, a fellow cardiologist has tried to explain Bell's theorem to me but I don't quite understand it. The theorem rules out local hidden variables and solidified quantum physics against the onslaught from the greatest mind of our generation, Einstein.
  • #1
adrenaline
Science Advisor
Every once in a while I try to stretch my feeble mind and pick up layman books on quantum physics as a change of pace from the many clinical/medical journals I have to constantly keep up with. I have a BS in chemistry and understand basic quantum mechanics, but as much as I try, I cannot fully understand Bell's theorem... just when I think I've got it, it escapes my brain. A fellow cardiologist has tried to explain it to me (he used to be a physicist) but I don't quite understand his explanation. Would anyone take a gander at explaining it to a lowly non-physicist? And I won't feel insulted if you baby the language a little.:biggrin:
 
Last edited:
  • #2
adrenaline said:
Would anyone take a gander at explaining it to a lowly non-physicist? And I won't feel insulted if you baby the language a little.:biggrin:

I would recommend my own humble page: Bell's Theorem: Overview with Lotsa Links.

There is both text and plenty of links there, although some of the links are advanced. If you will get things started with a few specific questions, I will take a shot at an answer. :smile: And there are a few others here who will help too.
 
  • #3
Since neither electron "knows" what its spin is until we measure it, then when we measure it, it settles on a spin. But this means that the other electron, no matter how far away it is, must now take on the opposite spin.
This relationship is entanglement, no? I am trying to rephrase what I read of your excellent link. Once again, excuse my ignorance. This entanglement thing is what throws my brain into a tizzy (which is why I am not a quantum physicist.)
 
Last edited:
  • #4
adrenaline said:
This entanglement thing is what throws my brain into a tizzy
If it didn't, you would either be a genius, or a liar.
 
  • #5
So the theorem disproves "hidden variables" and causality? That probability is the complete explanation for the particle universe? In other words, it only solidified and cemented quantum physics, even against the onslaught from the greatest mind of our generation, Einstein?
 
Last edited:
  • #6
adrenaline said:
So the theorem disproves "hidden variables" and causality? That probability is the complete explanation for the particle universe? In other words, it only solidified and cemented quantum physics, even against the onslaught from the greatest mind of our generation, Einstein?

Einstein's attack was pretty good. But he didn't give quantum mechanics quite enough credit. Understandable really, I mean QM was only a few years old when EPR was written. And the ideas of QM seem so outlandish in many ways. But hey, so does relativity when you first think about it!

Yes, the answer is that when one entangled particle takes on a particular spin, the other one immediately takes on an appropriate matching value (parallel or perpendicular, as the case may be).

As to hidden variables and causality... the general thinking is that you sacrifice EITHER local causality (locality) OR hidden variables. It's a matter of choice. You will find proponents here who will argue one side or the other with a great deal of conviction. :smile:
 
  • #7
adrenaline said:
So the theorem disproves "hidden variables" and causality? That probability is the complete explanation for the particle universe?

The theorem actually also eliminates 'simple' probability-based theories because the assumptions that it makes about any local hidden state are so weak.

In other words, it only solidified and cemented quantum physics, even against the onslaught from the greatest mind of our generation, Einstein?

Einstein isn't 'our' generation. He was born in the 19th century, and was well past his prime (as a physicist) by the time of the Manhattan project, 60 years ago.

Bell's theorem does very little in terms of justifying quantum physics, since it's a mathematical proof. Really, if you want to look for things that motivate QM you're going to have to look at things like Bohr predicting hydrogen energy states or the Stern-Gerlach experiment. By the time that Einstein made his criticisms, QM was already well established, and had quite a number of strong experimental results.

Really, that's why QM is so weird. It's built on 'weird' experimental results.
 
  • #8
adrenaline said:
Every once in a while I try to stretch my feeble mind and pick up layman books on quantum physics as a change of pace from the many clinical/medical journals I have to constantly keep up with. I have a BS in chemistry and understand basic quantum mechanics, but as much as I try, I cannot fully understand Bell's theorem... just when I think I've got it, it escapes my brain. A fellow cardiologist has tried to explain it to me (he used to be a physicist) but I don't quite understand his explanation. Would anyone take a gander at explaining it to a lowly non-physicist? And I won't feel insulted if you baby the language a little.:biggrin:
Bell's Theorem is the answer to the question of the completeness of quantum theory, which means: can we complete quantum mechanics with unknown variables, named hidden variables (HV), or not? This question was first asked in the Einstein, Podolsky and Rosen (EPR) paper. There is a lot of philosophy around this question, but J. Bell sensed that the answer must be experimental, i.e. he thought that only experiment could answer it. I suppose he thought it would be possible to add HV to quantum mechanics. But the results of experiments based on his calculation gave another answer: HV are absent from quantum mechanics. Bell calculated a quantity S, later named Bell's observable. If the world is classical, i.e. quantum mechanics + HV = classical theory, then S <= 1 (this bound, in normalized form, is Bell's inequality). Experiments give S ≈ 1.4. So Bell's inequality is violated, which means we do not find HV. From this follows Bell's Theorem: it is impossible to construct quantum mechanics with (local) HV.
But I think that this is not the end! This question will be discussed for a long time!
People will keep looking for HV in order to understand quantum theory in terms of classical quantities, because the world we see is classical and everything we see is classical too. Yes, the calculation is non-classical, but then why don't we understand what it means?
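To put a number on that claim, here is a minimal Python sketch. It assumes the textbook singlet correlation E(a,b) = -cos(a-b) and the standard optimal analyzer angles; neither is derived in this thread, so treat it as an illustration only.

[code]
import numpy as np

# Normalized CHSH quantity for the spin-singlet correlation E(a, b) = -cos(a - b).
# Local hidden variables give S <= 1 in this normalization; quantum mechanics
# allows up to sqrt(2) ~ 1.414, matching the ~1.4 quoted above.

def E(a, b):
    """Quantum prediction for the singlet correlation at analyzer angles a, b."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two analyzer settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyzer settings

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)) / 2
print(f"S = {S:.3f}")  # S = 1.414 > 1
[/code]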
 
Last edited:
  • #10
Thank you all for your help. :smile: I think I understand it a little better than I did before soliciting everybody's help!
 
  • #11
Question: Bell's Theorem

If Electron "A" moves clockwise and Electron "B", millions of light years away, moves counterclockwise, what happens if Electron "A" ALSO begins to move from left to right in a pendulum-like motion while still spinning clockwise? Would Electron "B", still spinning counterclockwise, instantly begin to move in the opposite pendulum-like direction as well, i.e., from right to left, keeping time with Electron "A"?
 
  • #12
Bell's Inequality

I wrote the simplest (possible?) explanation of Bell's here:
http://www.ronsit.co.uk/weird_at_Heart.asp

but having just read it I need to improve it - a bit of geek crept in.
 
Last edited by a moderator:
  • #13
wawens said:
I wrote the simplest (possible?) explanation of Bell's here:
http://www.ronsit.co.uk/weird_at_Heart.asp
Wavens, I think your oversimplified explanation of the delayed choice at
http://www.ronsit.co.uk/SimpleDelayedChoiceExperiment.asp
is actually wrong.

For a correct explanation see e.g.
http://www.bottomlayer.com/bottom/basic_delayed_choice.htm
The point is that for any (of the two) choices, both slits are open all the time. The choice (that takes place after the photon passes the slits) refers to two different possible types of measurements.

I would also like to explain where the solution of the delayed-choice paradox lies. The crucial sentence on the latter link above is:
"Whatever the photon does, it presumably does it now when it passes through the slits."
The point is that this sentence is wrong if read without the word "presumably". Namely, the photon does something important even after passing the slit.
 
Last edited by a moderator:
  • #14
Delayed Choice

Thanks Demystifier, I will have a look at the DCE again with what you said in mind.
Your explanation seems much better on first reading.
 
  • #15
I always use a version I slightly adapted from the one presented in John Preskill's lecture notes (you can google it). Clear, relatively simple to understand intuitively, entertaining, and it needn't involve any math. Great for parties.
 
  • #16
adrenaline said:
So the theorem disproves "hidden variables" and causality? That probability is the complete explanation for the particle universe? In other words, it only solidified and cemented quantum physics, even against the onslaught from the greatest mind of our generation, Einstein?

No, I don't think so.
The theorem disproves "hidden variables with locality", which Einstein preferred.
But Bohm's QM is not "hidden variables with locality", and I think it is OK,
since you can't use EPR to send signals that are faster than the speed of light.
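A quick numerical way to see the no-signaling point (a sketch assuming the standard singlet joint probabilities, which are not spelled out in this thread): Alice's marginal statistics are 50/50 no matter which angle Bob chooses, so Bob's choice carries no message.

[code]
import numpy as np

# Singlet joint probabilities for outcomes s1, s2 in {+1, -1} at analyzer angles a, b:
# P(s1, s2) = (1 - s1*s2*cos(a - b)) / 4.

def joint(s1, s2, a, b):
    return (1 - s1 * s2 * np.cos(a - b)) / 4

a = 0.0
for b in (0.0, np.pi / 4, np.pi / 2):
    # Alice's marginal: sum over Bob's outcomes. It is 0.50 for every b.
    p_alice_up = joint(+1, +1, a, b) + joint(+1, -1, a, b)
    print(f"Bob's angle {b:.2f} rad -> P(Alice sees +1) = {p_alice_up:.2f}")
[/code]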
 
Last edited:
  • #17
adrenaline said:
Since neither electron "knows" what its spin is until we measure it, then when we measure it, it settles on a spin. But this means that the other electron, no matter how far away it is, must now take on the opposite spin.
This relationship is entanglement, no? I am trying to rephrase what I read of your excellent link. Once again, excuse my ignorance. This entanglement thing is what throws my brain into a tizzy (which is why I am not a quantum physicist.)

From special relativity we know that time stands still and distance shrinks to zero for light.
 
  • #18
I looked at the proof, and actually it does not exclude hidden-variable theory. The proof assumes that the probability factorizes, P(A,B) = P(A)P(B), which doesn't have to be true for a *dynamic* hidden-variable theory. Then it seems that entanglement is actually non-existent. I hope someone knows what I mean. Does anyone know a proof which doesn't assume factorization?

Answer to the initial question: Bell's theorem claims to prove that if you mess around with one particle, another distant one might be influenced by it.
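A clarifying aside: the factorization in Bell's derivation is conditional on the hidden state λ, i.e. P(A,B|λ) = P(A|λ)P(B|λ); the observed P(A,B), averaged over λ, can still be correlated. A toy Python sketch with hypothetical numbers:

[code]
# Two equally likely hidden states; outcomes are conditionally independent given lam,
# yet the averaged joint probability is strongly correlated. (Numbers are hypothetical.)
p_lam = {0: 0.5, 1: 0.5}
pA = {0: 0.9, 1: 0.1}  # P(A=1 | lam)
pB = {0: 0.9, 1: 0.1}  # P(B=1 | lam)

P_AB = sum(p_lam[l] * pA[l] * pB[l] for l in p_lam)  # 0.41
P_A = sum(p_lam[l] * pA[l] for l in p_lam)           # 0.50
P_B = sum(p_lam[l] * pB[l] for l in p_lam)           # 0.50
print(P_AB, P_A * P_B)  # 0.41 vs 0.25: correlated despite conditional factorization
[/code]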
 
  • #19
Perhaps an alternative: don't think about particles

Quite a few assumptions are needed to derive Bell inequalities, but broadly speaking Physicists agree that classical particle models they are willing to use satisfy those assumptions. A nice presentation of the assumptions needed is given by A.G. Valdenebro, "Assumptions Underlying Bell's Inequalities", at http://arxiv.org/abs/quant-ph/0208161v2, which appears in journal form in the European Journal of Physics.

As mentioned in another Reply, if someone is willing to use a nonlocal dynamics, de Broglie-Bohm trajectories can be used to model experiments which violate Bell inequalities. Physicists generally don't want to. To me also, the mathematics of de Broglie-Bohm models is quite ugly. That is a criterion.

If, instead of thinking in terms of particle properties, we think in terms of fields, there are possibilities for classical modeling that do not satisfy the assumptions needed to derive Bell inequalities, but that are mathematically not as ugly (whether they look attractive enough to Physicists has yet to be seen, however).

Suppose we see something weird happen in daily life; we would assume that either something weird must have happened in the past, or else that some weird coming together of non-weird things must have happened now. Taking that commonsense approach, a classical model for the weird correlations that violate Bell inequalities in experiments can be constructed, if we're willing to put weird correlations into the past. This is known technically as the conspiracy loophole. It is rarely discussed, because classical particle models that go this way just don't look right to a Physicist. I agree with that assessment. There are well-known field theories that have nonlocal correlations in thermal equilibrium states, however, for which violation of the no-conspiracy assumption is natural, so Bell inequalities can't be derived for them. If you like, this puts the nonlocality into the initial conditions instead of into the dynamics. Again, yet to be seen whether Physicists will like this kind of model.

My web-site, http://pantheon.yale.edu/~pwm22, has links to a number of journal published papers on this (and also to ArXiv-accessible versions).

A further counterpoint to the conventional point of view, which roughly follows Arthur Fine's 1982 approach to the violation of Bell inequalities (which essentially points the finger at measurement incompatibility and contextuality rather than at locality), can be found at
http://pantheon.yale.edu/~pwm22/Bell-inequalities.html.
I hope this paper is relatively accessible.
 
Last edited by a moderator:
  • #20
So here's a layman's view of the case.

At some time somebody in a lab presses a big red button and two entangled electrons are created, moving in opposite directions along the same line.
<--A B-->
Now no matter which model you choose, the particles have opposite spin. The difference is "when" they have opposite spin.
QT says they only have a definite spin when you measure one of them. Until you measure the spin, both electrons are in a superposition of spin up and spin down.
HV says they are born with a certain spin, so e.g. A is created with spin up, and B with spin down.

Measuring the spin of both particles along arbitrary axes, there will be a correlation between the measurements. Combining the measurements in a certain way, you get a value S, which in an HV model can at highest be 50%, but can be up to 75% in QT.
Bell's Theorem is what states this mathematically. A sort of "if you can get more than 50%, you prove that QT is right and HV wrong" statement.

The problem as I see it with the measurements made in the experiments is that measurement errors increase the S-value. So it's more like saying: if you get S higher than 50%, you either proved hidden variables wrong, or your setup isn't precise enough. And since it's easy then to just take a crappy setup and say "I proved quantum mechanics!", you don't get hard proof that HV is wrong. Besides this, the famous measurements actually discarded a large fraction of the detection events.

More educated people are welcome to correct me on this; I haven't looked much into the matter.
 
  • #21
jVincent said:
The problem as I see it with the measurements made in the experiments is that measurement errors increase the S-value. So it's more like saying: if you get S higher than 50%, you either proved hidden variables wrong, or your setup isn't precise enough. And since it's easy then to just take a crappy setup and say "I proved quantum mechanics!", you don't get hard proof that HV is wrong. Besides this, the famous measurements actually discarded a large fraction of the detection events.
jVincent is essentially describing the "detection loophole". It's not worth rehearsing all the details; Wikipedia has two immediately relevant articles, including http://en.wikipedia.org/wiki/Loopholes_in_Bell_test_experiments.

Almost all Physicists confidently expect the detection loophole to be closed, mostly because it is pretty much inconceivable that all the other empirical successes of quantum theory could be put in any serious doubt.
 
Last edited by a moderator:
  • #23
Peter Morgan said:
jVincent is essentially describing the "detection loophole". It's not worth rehearsing all the details; Wikipedia has two immediately relevant articles, including http://en.wikipedia.org/wiki/Loopholes_in_Bell_test_experiments.

Almost all Physicists confidently expect the detection loophole to be closed, mostly because it is pretty much inconceivable that all the other empirical successes of quantum theory could be put in any serious doubt.

The experiments mentioned in the current Wikipedia page mostly go only up to about 1998, but many experiments have been made since then which disprove local hidden-variable theories, especially by A. Zeilinger and colleagues.

Wikipedia seems not to have been updated (or only very little) with later information in several areas concerning these questions.

Interesting, for example, are the experiments with so-called GHZ states, where 3 or more particles are entangled. There, local hidden-variable theories are contradicted by experiment without the need for complicated statistical evaluations, but directly by single runs (or maybe a very small number of runs) of the experiment.
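For the curious, here is a minimal Python/numpy check of the quantum side of the GHZ argument (a sketch of the textbook prediction, not of any particular experiment):

[code]
import numpy as np

# For |GHZ> = (|000> + |111>)/sqrt(2), quantum mechanics predicts the products
# X*Y*Y, Y*X*Y, Y*Y*X each equal -1 with certainty. A local hidden-variable model
# must then assign values with x1*x2*x3 = (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = -1,
# while quantum mechanics predicts X*X*X = +1 in every run: a direct contradiction.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)  # (|000> + |111>)/sqrt(2)

def expectation(A, B, C):
    return np.real(ghz.conj() @ np.kron(np.kron(A, B), C) @ ghz)

print(expectation(X, X, X))                                              # +1.0
print(expectation(X, Y, Y), expectation(Y, X, Y), expectation(Y, Y, X))  # -1.0 each
[/code]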

[Edit: it seems some pages now have a little info from after 1998, but very little and very incomplete.]
 
Last edited by a moderator:
  • #24
Last edited by a moderator:
  • #25
Peter Morgan said:
jVincent is essentially describing the "detection loophole". It's not worth rehearsing all the details; Wikipedia has two immediately relevant articles, including http://en.wikipedia.org/wiki/Loopholes_in_Bell_test_experiments.

Almost all Physicists confidently expect the detection loophole to be closed, mostly because it is pretty much inconceivable that all the other empirical successes of quantum theory could be put in any serious doubt.

Not being an expert in the field I can't say with 100% certainty that this is true, but it seems to me that the "successes" of quantum mechanics relate to its giving the correct predictions in experiments, not to the "nature" of the theory. Now even in entanglement experiments, it seems that the natural limit of what we can do is exactly the limit provided by a hidden-variable theory, where for example the reason the electron needs to be modeled using a wave function (giving rise to a probability density instead of a definite position) isn't that it IS a wave phenomenon, but simply that the electron actually behaves very chaotically on a very short timescale, so that the probability is really an estimate of the exact behavior.

Now using this "model" instead of the wave nature model doesn't change the predictions; it only changes the reason for the predictions being right.

So using only the success of the model as an argument for the model being right is not really a good method.

What one needs is to construct experiments that exhibit behavior that is impossible under a hidden-variable model.
 
Last edited by a moderator:
  • #26
jVincent said:
What one needs is to construct experiments that exhibit behavior that is impossible under a hidden-variable model.

Your wish has been heard; see my messages above.
 
  • #27
colorSpace said:
jVincent said:
What one needs is to construct experiments that exhibit behavior that is impossible under a hidden-variable model.
Your wish has been heard; see my messages above.
In the terms of the debate that are set by the words "hidden-variable model", I agree with colorSpace, up to extremely slight concerns about the detection loophole and up to an acknowledgment that de Broglie-Bohm and Nelson trajectory models are more-or-less viable, but unattractive.

If the terms of the debate are that we are interested in classical models for the observables of an experiment that violates Bell inequalities, not so much. For example, the Copenhagen interpretation insists that there must be a classical record of an experiment, and that a quantum theoretical description must be in correspondence with that classical record. That is, there are classical non-hidden-variable models for experiments, and according to the Copenhagen interpretation (without too much commitment to the Rolls-Royce of interpretations) there must be.

The question put this way can be extended by asking whether there are other variables that are currently not measured, but could be. Clearly there always are: we could measure the position of the leaky oil can that's sitting on top of the experiment casing, and determine whether moving the oil can changes the results of the experiment. (Probably we would rather just move the oil can out of the room; don't people know that opening the door changes the results? Who left this oil can on top of my experiment anyway?) So that's an unmentioned classical observable (a non-hidden variable) that could be measured; we just haven't yet.

Now comes the million dollar question: just how many observables are there in a classical model? A classical Physicist, informed by 20th century Physics, would presumably answer that there is potentially unlimited detail, and that one choice would be to describe the experiment using a classical field theory. Unfortunately for the classical Physicist, we don't have a classically ideal measurement apparatus that can measure arbitrarily small details without changing other results. If we bring in the STM to measure the positions of atoms in the walls of the vacuum chamber that contains the experiment, we pretty much have to dismantle the apparatus to do it; when we put it back together, after we've measured the precise position of every atom, someone will contaminate the surface with the leaky oil can, so we might as well not have brought in the STM at all.

Why doesn't a classical Physicist have a classically ideal experimental apparatus? The answer, it seems, is that we can reduce the temperature (the thermal fluctuations) of an apparatus as much as we like, but we cannot reduce the quantum fluctuations of the apparatus at all, so we can't reduce the interactions of our classical apparatus to as little as we like. In classical modeling, if we cannot for some reason reduce the temperature of the measurement apparatus, and the thermal fluctuations of the measurement apparatus do affect the results of the experiment, we model the thermal interactions of the measurement apparatus with the measured system explicitly.

In experimental terms, although we can reduce the temperature of an experimental apparatus as much as we like, we cannot in fact reduce the temperature to absolute zero -- no thermal fluctuations at all. Nonetheless, we act as if we can, because we can determine the thermal response of the Physics, the way in which the results change as we reduce the temperature. Extending that response to the absolute zero point is problematic, since we have no idea whether the Physics changes drastically as we get closer to absolute zero (the response of He to temperature changes at 10K tells us almost nothing about the response at 1 mK, for example), but we do it anyway. Extending the thermal response to absolute zero from our current best measurements is an idealization that we work happily enough with.

I now want to link this thread to another, [thread]205586[/thread], where I ask "Are there cosmological models in which Planck's constant varies?" A response to that thread clarified my ideas considerably, and has significance here. The general idea is that a change of the metric in a model can be understood to be equivalent to a change of the level of quantum fluctuations. In particular, when we claim that the metric changes as we move in a gravitational field, we invoke a particular coordinatization; in a different coordinatization, we would say that the amplitude of quantum fluctuations changes as we move in a gravitational field, while the metric remains constant. Quantum fluctuations affect everything, just as metric variation affects everything. In the context of this discussion of measurement and Bell inequalities, this means that although we cannot reduce Planck's constant as much as we like, we can determine the quantum-fluctuations response, given a coordinatization in which we consider the metric field to be constant while quantum fluctuations change when we move an experiment in a gravitational field.

I had no idea my argument would go this way when I started this post. If we can determine the quanthal response (I've just coined this word as a quantum equivalent of the word thermal, but it doesn't seem special) of an experiment by moving an apparatus around in a gravitational field, it becomes reasonable to talk about what we would observe if we had an ideal classical apparatus at zero quantum fluctuations.

That's a little wild.
 
  • #28
colorSpace said:
Your wish has been heard; see my messages above.

Yeah, thanks for the post; however I'm not quite sure how this experiment rules out a hidden-variable model. I'm a little stumped by what they define as a "classical" model. Why is it that the classical model _must_ be a product of four different functions of the different angles? And why is it that each of these functions must be continuous? Most likely I am misunderstanding what they are writing; could you clarify? As I understand it, the difference between the two views is:
Classical: spin direction is born at the particle's birth.
QT: spin direction is born at the time of the first measurement.
But this doesn't seem to be what the experiment is about.
 
  • #29
adrenaline said:
Every once in a while I try to stretch my feeble mind and pick up layman books on quantum physics as a change of pace from the many clinical/medical journals I have to constantly keep up with. I have a BS in chemistry and understand basic quantum mechanics, but as much as I try, I cannot fully understand Bell's theorem... just when I think I've got it, it escapes my brain. A fellow cardiologist has tried to explain it to me (he used to be a physicist) but I don't quite understand his explanation. Would anyone take a gander at explaining it to a lowly non-physicist? And I won't feel insulted if you baby the language a little.:biggrin:

Adrenaline,

I'll give you the kernel of what made Bell's theorem sensible to me in my studies of quantum theory.

The heart of Bell's Theorem:
No (classical) physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
is Bell's inequality, an inequality about correlations of probabilities.

It all starts with the assumption that the outcome of a set of possible measurements can be causally predicted by knowing the objective state of the system and measuring devices. Assume there is some set of states out there and we don't (and possibly cannot) know which is the actual state (these are the local hidden variables). We thus must describe the outcomes of measurements probabilistically.

We can however make statistical predictions about measurements made after preparing a physical system in a certain way (e.g. you have a device which emits a pair of particles with correlated spins). You can then assume that, given this method of preparation, there is a probability distribution over the set of states we asserted above. Take the set of states which yield a given measurement outcome, and then integrate over that set using the probability distribution to find the probability of the outcome.

You then must assume classically that this probability distribution is a "measure" in the formal mathematical sense, i.e. that the probabilities of disjoint sets add when you consider their union. In a more general sense you must assume that for any two sets of states the probability of the union is less than or equal to the sum of the probabilities over each set.

[tex] Pr(A\cup B) \le Pr(A) + Pr(B)[/tex]
(probability of either A or B happening can't be more than the probability of A happening plus the probability of B happening).

Then given this property of a probability distribution you can define a "metric" on sets with non-zero probability using the symmetric difference of sets:
[tex] A\oplus B = (A\cap \overline{B}) \cup (B\cap \overline{A})[/tex]
(the set of elements in A xor B, i.e. in A or B but not in both.)
Note that [tex]A \oplus A = \emptyset[/tex].

To get our "metric" we consider the probability of this symmetric difference:
[tex] d(A,B) = Pr(A\oplus B) [/tex]

Now it's easy to get lost in the mathematicalese, but this "metric" simply tells us by how much two sets of events are both likely and mutually exclusive. [tex] d(A,B)[/tex] is the probability that either A will happen or B will happen, but not both together.

Given the measure property of the probability we can derive a form of triangle inequality on this metric:

[tex] d(A,B) + d(B,C) \ge d(A,C)[/tex]
This is Bell's famous inequality.

But quantum mechanics predicts correlations in probabilities of outcomes which in fact violate this inequality. Thus one of our initial assumptions must be wrong.
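To make the violation concrete, here is a minimal Python sketch. It assumes the standard textbook singlet prediction (not derived in this post): if A_a is the event "particle 1 would give spin-up along angle a" (readable, via the perfect anticorrelation, by measuring particle 2 along a), quantum mechanics gives d(A_a, A_b) = sin^2((a - b)/2).

[code]
import numpy as np

# Quantum prediction for the "mismatch metric" between hypothetical outcomes
# along angles a and b of a spin singlet: d(a, b) = sin^2((a - b)/2).
def d(a, b):
    return np.sin((a - b) / 2) ** 2

a, b, c = 0.0, np.pi / 3, 2 * np.pi / 3  # 0, 60, 120 degrees
lhs = d(a, c)             # 0.75
rhs = d(a, b) + d(b, c)   # 0.25 + 0.25 = 0.50
print(lhs <= rhs)         # False: the triangle inequality d(A,C) <= d(A,B) + d(B,C) fails
[/code]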

We may (and I assert we must) interpret this as saying that Quantum Mechanics forbids a description of the outcomes of measurements based on an underlying set of objective states of reality.

I.M.N.S.H.O. Too many people pay too much attention to the locality assumption. It isn't about QM being non-local. Locality of physical interactions (no immediate action at a distance) is just one means of assuring that two acts of measurement are causally independent, i.e. that the outcome of one measurement didn't have an effect on the other measuring device. One executes the pair of measurements sufficiently far apart from each other to be sure of this.

But you can also assert that such must be possible even without invoking locality per se. There must exist some pair of measurements which can be isolated from each other.

Rather, Bell's theorem is about the inappropriateness of a classical objective reality as your mental model of what's happening. The quantum universe is best described as an unfolding process and not as an evolving object. Avoid thinking of the "state vector" as describing a state of the system. It is rather a "state of our knowledge about how the system may behave."
 
  • #30
jambaugh said:
I.M.N.S.H.O. Too many people pay too much attention to the locality assumption. It isn't about QM being non-local. Locality of physical interactions (no immediate action at a distance) is just one means of assuring that two acts of measurement are causally independent, i.e. that the outcome of one measurement didn't have an effect on the other measuring device. One executes the pair of measurements sufficiently far apart from each other to be sure of this.

Agreed. You're also right, IMO, that too few people pay attention to this aspect, but there are significant numbers of papers in the journals that emphasize it. I just had a discussion that was new to me pointed out: Ray Streater, "Classical and quantum probability," J. Math. Phys. 41, 3556 (2000) (p. 3576ff), who cites L. J. Landau, "On the Violation of Bell's inequality in quantum theory," Phys. Lett. A 120, 54-56 (1987). Some of my earlier posts in this thread emphasize this aspect.

The most straightforward mathematical presentation I know of is Willem de Muynck's Phys. Lett. A 114, 65 (1986), "The Bell inequalities and their irrelevance to the problem of locality in quantum mechanics" (no doubt what the author wants you to take away from that title).
Perhaps also of interest is the paper K. Hess and W. Philipp, "Connection of probability models to EPR experiments: Probability spaces and Bell's Theorem", in: Th. M. Nieuwenhuizen, V. Spicka, B. Mehmani, M. Jafar-Aghdami, and A. Yu. Khrennikov (eds.), Proceedings of "Beyond the Quantum", Leiden, 29th May-2nd June 2006, World Scientific, Singapore, 2007, in which Hess and Philipp more-or-less back away from their PNAS apostasy towards the position of the other papers mentioned in this post.

Is there any simple and/or graphic discussion in the popular literature that especially emphasizes the greater significance of measurement incompatibility instead of nonlocality?
 
  • #31
Peter Morgan said:
In the terms of the debate that are set by the words "hidden-variable model", I agree with colorSpace, up to extremely slight concerns about the detection loophole and up to an acknowledgment that de Broglie-Bohm and Nelson trajectory models are more-or-less viable, but unattractive.

Meanwhile there is also an experiment which closes the detection loophole by achieving a very high detection rate. (I can dig out the reference if you want.)

Bohmian mechanics is non-local, so I'm assuming that's usually not what's meant by "hidden variables" (unless otherwise noted), but yes, afaik this interpretation is neither proven nor disproven. I'm not sure why you find it "unattractive".

I don't know anything about "Nelson trajectory".

Also, there are by now experiments which use a random choice of the measurement angles after the particles have left the source, and prove that the correlation persists even when there is no possibility of classical (sub-lightspeed) communication before the results are taken, to the point where they show that any communication would have to be at least 10 million times the speed of light.

Peter Morgan said:
If the terms of the debate are that we are interested in classical models for the observables of an experiment that violates Bell inequalities, not so much. For example, the Copenhagen interpretation insists that there must be a classical record of an experiment, and that a quantum theoretical description must be in correspondence with that classical record. That is, there are classical non-hidden-variable models for experiments, and according to the Copenhagen interpretation (without too much commitment to the Rolls-Royce of interpretations) there must be.

Are you saying there must be a classical description of the experimental setup, or of the whole experiment including the physics of the experiment itself? The latter would sound very wrong, and I don't see the point in the former.

Peter Morgan said:
The question put this way can be extended by asking whether there are other variables that are currently not measured, but could be. Clearly there always are: we could measure the position of the leaky oil can that's sitting on top of the experiment casing, and determine whether moving the oil can changes the results of the experiment. (Probably we would rather just move the oil can out of the room; don't people know that opening the door changes the results? Who left this oil can on top of my experiment anyway?) So that's an unmentioned classical observable (a non-hidden variable) that could be measured; we just haven't yet.

Now comes the million dollar question: just how many observables are there in a classical model? A classical Physicist, informed by 20th century Physics, would presumably answer that there is potentially unlimited detail, and that one choice would be to describe the experiment using a classical field theory. Unfortunately for the classical Physicist, we don't have a classically ideal measurement apparatus that can measure arbitrarily small details without changing other results. If we bring in the STM to measure the positions of atoms in the walls of the vacuum chamber that contains the experiment, we pretty much have to dismantle the apparatus to do it; when we put it back together, after we've measured the precise position of every atom, someone will contaminate the surface with the leaky oil can, so we might as well not have brought in the STM at all.

"Measure the positions of atoms in the walls of the vacuum chamber that contains the experiment" ?

Meanwhile these experiments are carried out over distances of 144 km (90 miles), and soon between satellites and the Earth's surface.

After all, entanglement was predicted by theory (the EPR paper) and only later confirmed by experiment. Especially the GHZ experiments are a rather straightforward confirmation, as they don't involve comparing complicated statistical correlations.
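As a back-of-envelope illustration of the timing argument (the 144 km separation is mentioned above; the coincidence window below is a hypothetical number, not from any specific paper): if the two measurement events complete within a window dt while separated by distance d, any influence between them would need a speed of at least d/dt.

[code]
c = 299_792_458.0  # speed of light, m/s
d = 144e3          # separation mentioned above, m
dt = 50e-9         # hypothetical coincidence window, s

v_min = d / dt                     # minimum speed of any "communication"
print(f"{v_min / c:.0f} times c")  # ~9600 c with these example numbers
[/code]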
 
Last edited:
  • #32
jambaugh said:
We thus must describe the outcomes of measurements probabilistically.

We can however make statistical predictions about measurements made after preparing a physical system in a certain way (e.g. you have a device which emits a pair of particles with correlated spins).

Jambaugh, perhaps you are not aware yet that in the GHZ experiments (with 3 or more entangled particles, rather than just two), the outcome of the last measurement can be predicted definitely, based on the previous measurements, so that it is not a statistical prediction anymore. This is because with multiple entangled particles, a contradiction with local models arises even in those cases where the angle difference of the measurements allows a definite prediction of the outcome. Only for 2 particles are these cases explainable by local models.

(See links above.)

jambaugh said:
I.M.N.S.H.O. Too many people pay too much attention to the locality assumption. It isn't about QM being non-local. Locality of physical interactions (no immediate action at a distance) is just one means of assuring that two acts of measurement are causally independent, i.e. that the outcome of one measurement didn't have an effect on the other measuring device. One executes the pair of measurements sufficiently far apart from each other to be sure of this.

Your description doesn't seem to mention the measurement angles as a variable, which I find noteworthy since it is this variable on which the proof of non-locality is based. Non-locality is certainly worth a lot of attention. :)
 
  • #33
colorSpace said:
Your description doesn't seem to mention the measurement angles as a variable, which I find noteworthy since it is this variable on which the proof of non-locality is based. Non-locality is certainly worth a lot of attention. :)

Bell tests et al. are NOT proofs of non-locality. They are proof that local realistic theories do not have experimental support, while Quantum Theory does have such support.

It is possible that realistic theories are in fact what need to be discarded, rather than local theories. There are a number of papers which pursue this line of thinking. In this view, the Uncertainty Principle is held out as an accurate (and maximum) description of reality - there is no greater description possible.
 
  • #34
DrChinese said:
Bell tests et al. are NOT proofs of non-locality. They are proof that local realistic theories do not have experimental support, while Quantum Theory does have such support.

It is possible that realistic theories are in fact what need to be discarded, rather than local theories. There are a number of papers which pursue this line of thinking. In this view, the Uncertainty Principle is held out as an accurate (and maximum) description of reality - there is no greater description possible.

My (quite limited) understanding is that what you say here is true only for experiments which verify nothing but a violation of Bell's inequality, such as Aspect's original Bell experiment. These experiments are sufficient to confirm or contradict the inequality, but do not test any timing. Without timing, only a contradiction with either locality, or realism, or both, can be concluded.

However, later experiments use random number generators to decide the measurement angles after the particles have left the source, and then limit the time interval in which there could be a classical communication.

Here is a link to one of these experiments, unfortunately only to a half-page description. However, it contains an interesting and related quote from Bell's later works:

http://www.quantum.at/research/quan...anglement/long-distance-bell-experiment.html

As far as I understand, and I'll try to check this tonight, the remaining "loophole", other than the detection loophole (which by itself has been closed by other experiments), is what he (A. Zeilinger) calls "superdeterminism": the idea that the choice of measurement angles based on the random number generators is not truly random, but such that it produces these results. But that would be a more than far-fetched explanation; after all, how should the random number generators know what numbers they have to produce to create the impression of a correlation where there isn't really any? Of course one could imagine we live in a computer simulation, and the simulation is trying to fool us into believing these things... but personally I think there is no point (at least in this context :cool:) in worrying about that. Especially since one could use the latter argument to cast doubt on any theory there is.
 
Last edited by a moderator:
  • #35
Regarding my previous post:

So far I could not find an explicit statement that would be more specific and definite than the description of the experiment above, contained in the link. Finding such a statement is difficult since "quantum non-locality" is often used as a short-hand for the denial of "local realism". However, many texts suggest implicitly that the entangled particles coordinate their behavior (in measurement) non-locally, depending on the measurement angles, meaning: after these angles have been chosen, and after the particles have already separated.

I did find a link to the corresponding pdf, describing the above experiment in more detail:
http://www.univie.ac.at/qfp/publications3/pdffiles/1998-04.pdf

Also I think it is necessary to point out that local-hidden-variable models (at least usually) require both locality and realism, so in both cases (non-locality or non-realism) local-hidden-variable models are ruled out, at least those which could be called "classical", as far as I understand.

A. Zeilinger seems to tend towards a view that is both non-local and non-realist; for example, there is the following comment on one of his latest experiments: "Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned."

So far I haven't read about any model that would explain the correlations in a non-realist yet local way, and I'd be interested to hear what such a model might be like.

DrChinese said:
In this view, the Uncertainty Principle is held out as an accurate (and maximum) description of reality - there is no greater description possible.

This I consider a given in any case: particles are entangled with respect to a property only as long as the specific value of this property is uncertain, AFAIK. But this does not constitute an explanation of the correlations; it seems like saying that these correlations exist only by accident, even though they arise repeatably. Yet if this uncertainty already qualifies a position as non-realist, then the only question in my understanding would be between 'non-local non-realism' and 'local non-realism', and based on what I have heard so far, only a non-local model would make sense in light of the experimental results showing these correlations under the described circumstances.
 
Last edited by a moderator: