Assumptions of the Bell theorem

In summary: The aim of this thread is to make a list of all the additional assumptions that are necessary to prove the Bell theorem. A further aim is to make a list of the assumptions that are used in some but not all versions of the theorem, and so are not really necessary. The lists of necessary and unnecessary assumptions are preliminary, so I invite others to supplement and correct them.
  • #71
Demystifier said:
What do you mean by quantum probability? The Born rule satisfies Kolmogorov axioms of probability.
Only for commuting observables. But in QM, it is possible to have observables in the past not commute with observables in the present. And then conditional quantum probabilities, conditioned on non-commuting past observables, behave differently from conditional classical probabilities (see e.g. Isham's book on quantum theory, sec. 8.3.2). That also led some people to study the possibility of non-commuting common causes in the past, as mentioned earlier.
 
  • #72
Nullstein said:
Only for commuting observables. But in QM, it is possible to have observables in the past not commute with observables in the present. And then conditional quantum probabilities, conditioned on non-commuting past observables, behave differently from conditional classical probabilities (see e.g. Isham's book on quantum theory, sec. 8.3.2). That also led some people to study the possibility of non-commuting common causes in the past, as mentioned earlier.
Perhaps I missed something, but as far as I can see, Isham distinguishes classical from quantum probability only by assuming that the former is not contextual. I think it's misleading. First, probability in classical physics can be contextual too (it's just not so common as in QM and cannot violate locality). Second, the Kolmogorov axioms do not contain anything like a no-contextuality axiom.
 
  • #73
Demystifier said:
Perhaps I missed something, but as far as I can see, Isham distinguishes classical from quantum probability only by assuming that the former is not contextual. I think it's misleading. First, probability in classical physics can be contextual too (it's just not so common as in QM and cannot violate locality). Second, the Kolmogorov axioms do not contain anything like a no-contextuality axiom.
No, that's not the essence of that section. What I was referring to was that the inequality of eqs. (8.18) and (8.20) shows that quantum conditional probability behaves differently than classical conditional probability:

"Thus quantum-mechanical conditional probabilities behave in a very
different way from classical ones. In particular, the classical probabilistic
distribution of the values of B does not depend on the fact that A has some
value, but the quantum-mechanical predictions of results of measuring B
do depend on the fact that an ideal measurement of A was performed
immediately beforehand."
 
  • #74
Nullstein said:
"Thus quantum-mechanical conditional probabilities behave in a very
different way from classical ones. In particular, the classical probabilistic
distribution of the values of B does not depend on the fact that A has some
value, but the quantum-mechanical predictions of results of measuring B
do depend on the fact that an ideal measurement of A was performed
immediately beforehand."
But that's just contextuality, as I said. And it does have classical analogs. For instance, in psychology measurement of intelligence (IQ) is contextual. The result may depend on many factors, e.g. how well the subject slept that night or whether his emotional intelligence (EQ) has been measured immediately before. Similar examples exist in medicine. Even in pure physics, if measurement of position is performed by looking with the help of strong classical light, the momentum transferred by recoil of classical light slightly changes the momentum of the observed classical object.

You may object that those classical contextual measurements are not ideal, while in QM contextuality manifests even for ideal measurements. But what's the definition of "ideal measurement"? I would say that in such an objection one uses different definitions for classical and quantum measurements. With a reasonable single definition applicable to both classical and quantum, one would probably conclude that in quantum physics no measurement is ideal.
 
  • #75
I was going to argue that QM probabilities do violate the Kolmogorov axioms, but then I convinced myself that they don't. Certainly, the "collapse" interpretation of QM, although it is weird for other reasons, doesn't have weird probabilities. For any measurement you might perform, there is a well-defined probability for each possible measurement result. Noncommuting observables don't interfere with this.

So for example, if an electron is prepared to be spin-up in the z-direction, then the following conditional probabilities are well-defined:

  • The probability of getting spin-up given that I choose to measure spin in the z-direction.
  • The probability of getting spin-down given that I choose to measure spin in the x-direction.
  • Etc.

As long as all the basic quantities are of the form "the probability that I will get result R given that I perform measurement M", there is nothing weird about quantum mechanical probabilities.

Where quantum probabilities get weird is if you take the quantities to be of the form "the probability that the particle has spin-up in the x-direction, given that it has spin-up in the z-direction". The collapse interpretation doesn't give a meaning to such statements.
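To make this concrete, here is a minimal sketch (Python/numpy; the helper names are mine, not from any post) of how each measurement context defines an ordinary probability space via the Born rule, for an electron prepared spin-up along z:

```python
import numpy as np

# State prepared spin-up along the z-axis.
psi = np.array([1.0, 0.0], dtype=complex)

def born_probs(state, basis):
    """Born rule: outcome probabilities for the measurement whose
    eigenvectors are the columns of `basis`."""
    return np.abs(basis.conj().T @ state) ** 2

z_basis = np.eye(2, dtype=complex)                                 # |z+>, |z->
x_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # |x+>, |x->

print(born_probs(psi, z_basis))  # [1.  0. ] : P(result R | measure z)
print(born_probs(psi, x_basis))  # [0.5 0.5] : P(result R | measure x)
```

Within each context the probabilities are non-negative and sum to one, i.e. nothing non-Kolmogorovian happens as long as the conditioning event names the measurement performed.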
 
  • #76
There is a certain position on quantum mechanics that amounts to rejecting all the weird possibilities:
  1. Rejecting nonlocality (FTL influences)
  2. Rejecting back-in-time influences.
  3. Rejecting the idea that reality is subjective.
  4. Rejecting the idea that measurement causes wave function collapse (or minds or observation or anything)
  5. Rejecting the idea of many-worlds.
  6. Rejecting the idea of a classical/quantum cut. (Rejecting the idea of laws that apply only to macroscopic or only to microscopic phenomena)
I consider this the "no-nonsense interpretation of quantum mechanics". It's admirable in many ways, but I really believe that it is inconsistent. Which I can summarize as "no-nonsense is nonsense".
 
  • #77
stevendaryl said:
Where quantum probabilities get weird is if you take the quantities to be of the form "the probability that the particle has spin-up in the x-direction, given that it has spin-up in the z-direction". The collapse interpretation doesn't give a meaning to such statements.
But isn't this exactly what the quantum probabilities are about? You have states and observables. The operational definition of a state is a preparation procedure (or an equivalence class of preparation procedures). E.g., you use an accelerator to accelerate a bunch of protons to 7 TeV energy (within some range of accuracy). This determines, within the range of accuracy, the momentum of these protons, and you let them collide with a bunch of other protons prepared in the same way running in the opposite direction (aka the LHC @ CERN). The observables are operationally defined by the measurement devices used to measure them. At CERN these are the detectors which collect data on all the many particles produced in the pp collision. This you repeat very many times, and you get some statistics about the outcomes of measuring observables of the produced particles. So indeed what you do is to measure probabilities for a given preparation of your measured system, and that's what a well-defined probability description should do.

Of course this holds for your Stern-Gerlach experiment (SGE) too: What you describe is that you prepare with one SG magnet a particle beam with spin-up wrt. the ##z##-component of the spin and then measure the probability to get spin up or down wrt. the ##x##-component of the spin. There's indeed no need for a collapse or anything else outside the physical core of the theory. You just prepare particles (defining its pure or mixed state; usually it's the latter) and measure any observables on an ensemble of such prepared particles to be able to compare the predicted probabilities for finding the possible values of the measured observables by the usual statistical means (applying of course standard probability theory a la Kolmogorov).
 
  • #78
I have said this before, but the reason I say that the "no-nonsense interpretation" is inconsistent is really about measurements and probability. On the one hand, according to the no-nonsense interpretation, there is nothing special about measurements. They are just physical interactions that are in principle describable by quantum mechanics. On the other hand, according to the no-nonsense interpretation, the only meaning to the wave function is to give probabilities for measurement results. To me, those two statements are just contradictory. The first denies a special role for measurement, and the second affirms it.
 
  • #79
stevendaryl said:
There is a certain position on quantum mechanics that amounts to rejecting all the weird possibilities:
  1. Rejecting nonlocality (FTL influences)
  2. Rejecting back-in-time influences.
  3. Rejecting the idea that reality is subjective.
  4. Rejecting the idea that measurement causes wave function collapse (or minds or observation or anything)
  5. Rejecting the idea of many-worlds.
  6. Rejecting the idea of a classical/quantum cut. (Rejecting the idea of laws that apply only to macroscopic or only to microscopic phenomena)
I consider this the "no-nonsense interpretation of quantum mechanics". It's admirable in many ways, but I really believe that it is inconsistent. Which I can summarize as "no-nonsense is nonsense".
I agree it's inconsistent, but I think @vanhees71 thinks it isn't. How does he avoid the inconsistency? By adding one more principle to the list:
7. Rejecting the idea that philosophical arguments (which are needed to derive a contradiction) are relevant to science.
 
  • #80
Demystifier said:
I agree it's inconsistent, but I think @vanhees71 thinks it isn't. How does he avoid the inconsistency? By adding one more principle to the list:
7. Rejecting the idea that philosophical arguments (which are needed to derive a contradiction) are relevant to science.
Philosophers treat contradictions as these terrible things: "Oh no! Your system is inconsistent! From an inconsistency, you can derive anything! That makes your system worthless!"

But no-nonsense people are not really bothered by contradictions. That's because they don't just apply what they hold to be true willy-nilly. They have informally demarcated domains of applicability for anything that they claim to be true. The contradictions that philosophers worry about are only relevant when you apply something that is true in one domain to an inappropriate domain.

The big contradiction in the no-nonsense view is the attitude toward measurements. On the one hand, measurements are considered ordinary interactions, described by quantum mechanics. On the other hand, measurements play a special role in quantum mechanics, in that quantum mechanics is taken to only give probabilities for measurement results. I think these two beliefs are contradictory. But they don't cause any problems for the practical-minded physicist. Depending on the problem at hand, you either treat measurements as special, or you treat them as ordinary interaction. You don't do both at the same time.

Bohr understood this, I think.
 
  • #81
Demystifier said:
I agree it's inconsistent, but I think @vanhees71 thinks it isn't. How does he avoid the inconsistency? By adding one more principle to the list:
7. Rejecting the idea that philosophical arguments (which are needed to derive a contradiction) are relevant to science.
What's concretely inconsistent? I don't see anything inconsistent when just ignoring all the metaphysical ballast mentioned in the list. Indeed, science works best when one takes out all the "weirdness" and concentrates on what's observed in the real world/lab (operational definitions of preparation and measurement procedures) and on how this is described within the theory (mathematical formulation), and does not mix up the one with the other side.
 
  • #82
Demystifier said:
But that's just contextuality, as I said. And it does have classical analogs. For instance, in psychology measurement of intelligence (IQ) is contextual. The result may depend on many factors, e.g. how well the subject slept that night or whether his emotional intelligence (EQ) has been measured immediately before. Similar examples exist in medicine. Even in pure physics, if measurement of position is performed by looking with the help of strong classical light, the momentum transferred by recoil of classical light slightly changes the momentum of the observed classical object.

You may object that those classical contextual measurements are not ideal, while in QM contextuality manifests even for ideal measurements. But what's the definition of "ideal measurement"? I would say that in such an objection one uses different definitions for classical and quantum measurements. With a reasonable single definition applicable to both classical and quantum, one would probably conclude that in quantum physics no measurement is ideal.
Well, it's a related, but different thing here. Usually, when you have two non-commuting observables, you can't measure both of them, so the fact that they cannot both have well-defined values due to their non-commutativity, doesn't cause any problems. Think of the SG experiment, where you can align the detector only along one axis, which makes it impossible to measure the spin along another axis at the same time. QM also doesn't assign probabilities to such joint observations of incompatible observables.

However, Isham discusses a situation where you make measurements of incompatible observables at different times. Since they are incompatible, they can't in general both have well-defined values. However, it's perfectly possible to measure incompatible observables one after another and obtain these values. In this case, QM does assign probabilities to this joint observation and this is where a counterexample to the classical behavior of conditional probabilities can be constructed.
 
  • #83
Nullstein said:
Well, it's a related, but different thing here. Usually, when you have two non-commuting observables, you can't measure both of them, so the fact that they cannot both have well-defined values due to their non-commutativity, doesn't cause any problems. Think of the SG experiment, where you can align the detector only along one axis, which makes it impossible to measure the spin along another axis at the same time. QM also doesn't assign probabilities to such joint observations of incompatible observables.

However, Isham discusses a situation where you make measurements of incompatible observables at different times. Since they are incompatible, they can't in general both have well-defined values. However, it's perfectly possible to measure incompatible observables one after another and obtain these values. In this case, QM does assign probabilities to this joint observation and this is where a counterexample to the classical behavior of conditional probabilities can be constructed.
Can you describe Isham's situation? Or post a link?
 
  • #84
stevendaryl said:
Can you describe Isham's situation? Or post a link?
It's in Isham's book "Lectures on Quantum Theory", section 8.3.2. It's a few pages long, but I can try to condense it:

He discusses the situation where you perform a measurement of the observable ##A## and later the measurement of the observable ##B##. He computes the probabilities ##P(B=b_n|A=a_m)##, ##P(A=a_m)## and ##P(B=b_n)## using quantum mechanics. Classically, you would expect that ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)##, but in quantum mechanics, this relation is violated if ##A## and ##B## don't commute.

Normally, it's not a problem if ##A## and ##B## don't commute, because you can't measure them simultaneously anyway, so a probability like ##P(B=b_n|A=a_m)## is meaningless. However, we don't have that excuse if ##A## and ##B## are observables at different times.

As an example, consider some Hamiltonian ##H## with time evolution ##U(t)##. Then take ##\hat A=\hat x## and ##\hat B=\hat x(t)=U^\dagger(t) \hat x U(t)##.
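A minimal numerical check of this (my own spin-##\frac{1}{2}## construction to keep the matrices 2x2, rather than Isham's ##\hat x(t)## example): prepare ##|+_x\rangle##, take ##A=\sigma_z## and ##B=\sigma_x##, compute the quantum probabilities with the Born rule plus the projection postulate, and compare the two sides of the classical identity:

```python
import numpy as np

plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)   # state |+x>

P_A = [np.diag([1, 0]).astype(complex),                  # projector A = +1 (z up)
       np.diag([0, 1]).astype(complex)]                  # projector A = -1 (z down)
P_B_plus = np.outer(plus_x, plus_x.conj())               # projector B = +1 (x up)

def prob(P, state):
    """Born rule probability <state|P|state>."""
    return np.vdot(state, P @ state).real

# Left side: P(B=+1) with no intermediate A measurement.
lhs = prob(P_B_plus, plus_x)                             # 1.0

# Right side: sum_m P(B=+1 | A=a_m) P(A=a_m), with the state
# projected (collapsed) after the A measurement.
rhs = 0.0
for Pm in P_A:
    p_a = prob(Pm, plus_x)
    post = Pm @ plus_x / np.sqrt(p_a)                    # projection postulate
    rhs += prob(P_B_plus, post) * p_a                    # 0.5 in total

print(lhs, rhs)   # 1.0 vs 0.5: the classical identity fails for [A, B] != 0
```

With commuting ##A## and ##B## the two numbers agree, which is the sense in which the Born rule is Kolmogorovian only for compatible observables.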
 
  • #85
Nullstein said:
Classically, you would expect that ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)##, but in quantum mechanics, this relation is violated if ##A## and ##B## don't commute.

There is a subtlety here, which is that the formula ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)## implicitly assumes that it is possible to measure ##A## without disturbing the system. If you include the possibility that measuring ##A## disturbs the system, then you really have to make a distinction between the system having value ##a_m## for observable ##A## and the result of an ##A## measurement producing result ##a_m##.

So we don't really have ##P(A = a_m)## but something like ##P(A = a_m | M_A)##. The probability that the measurement result is ##a_m## given that the experimenter performed measurement ##M_A##. Similarly for observable ##B##. Then we have two different probabilities:

##P(B=b_n | M_B \wedge M_A)## (if there was a previous measurement of ##A##)
##P(B=b_n | M_B \wedge \neg M_A)## (if there was not).

The classical probability rules would tell you that:

##P(B=b_n | M_B \wedge M_A) = \sum_m P(B=b_n | M_B \wedge M_A \wedge A = a_m) P(A = a_m | M_A)##

But you would not necessarily have ##P(B=b_n | M_B \wedge M_A) = P(B=b_n | M_B \wedge \neg M_A)##

Of course, the "disturbance" interpretation of noncommuting observables is itself problematic, because it implies nonlocality, but it doesn't by itself violate the rules of probability.
 
  • #86
stevendaryl said:
There is a subtlety here, which is that the formula ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)## implicitly assumes that it is possible to measure ##A## without disturbing the system. [...]

Of course, the "disturbance" interpretation of noncommuting observables is itself problematic, because it implies nonlocality, but it doesn't by itself violate the rules of probability.
Yes, I agree that disturbances are one way to interpret it. But then you need to keep track of the measurements by hand; it doesn't come out of the formalism or the Born rule. Moreover, one can't cure it by describing the disturbances quantum mechanically, because the more detailed description will suffer from the same problem, just at a different level. Another way to interpret it is that quantum probabilities are generalizations of classical probabilities that don't always respect the classical rules.

Basically, you are faced with the following choice:
  1. Either you accept that measurements disturb the system. But disturbance should be a physical process, so it should be described by some physical theory. This theory cannot be quantum mechanics itself, because a more detailed quantum theory will not fix the problem once and for all. It will reappear at a different level.
  2. Or you want to stick to the idea that quantum theory describes everything. But then you must give up classical probabilities.
 
  • #87
Nullstein said:
Yes, I agree that disturbances are one way to interpret it. But then you need to keep track of the measurements by hand; it doesn't come out of the formalism or the Born rule. Moreover, one can't cure it by describing the disturbances quantum mechanically, because the more detailed description will suffer from the same problem, just at a different level. Another way to interpret it is that quantum probabilities are generalizations of classical probabilities that don't always respect the classical rules.

Basically, you are faced with the following choice:
  1. Either you accept that measurements disturb the system. But disturbance should be a physical process, so it should be described by some physical theory. This theory cannot be quantum mechanics itself, because a more detailed quantum theory will not fix the problem once and for all. It will reappear at a different level.
  2. Or you want to stick to the idea that quantum theory describes everything. But then you must give up classical probabilities.
Then I choose 1. Quantum theory in its minimal form is incomplete; one should add something, like objective collapse (à la GRW) or additional variables (à la Bohm).

But for this thread, option 2. is more interesting.
 
  • #88
stevendaryl said:
Philosophers treat contradictions as these terrible things: "Oh no! Your system is inconsistent! From an inconsistency, you can derive anything! That makes your system worthless!"
Would you then count mathematical logicians as philosophers?
 
  • #89
Demystifier said:
Would you then count mathematical logicians as philosophers?
Sure. My point is that physicists, unlike mathematicians, don’t consider inconsistencies fatal. Probably our best theories are inconsistent, but work well enough in a limited domain.
 
  • #90
stevendaryl said:
The big contradiction in the no-nonsense view is the attitude toward measurements. On the one hand, measurements are considered ordinary interactions, described by quantum mechanics. On the other hand, measurements play a special role in quantum mechanics, in that quantum mechanics is taken to only give probabilities for measurement results. I think these two beliefs are contradictory.
I agree with putting this in focus.

stevendaryl said:
But they don't cause any problems for the practical-minded physicist. Depending on the problem at hand, you either treat measurements as special, or you treat them as ordinary interaction. You don't do both at the same time.

Bohr understood this, I think.
I suspect a lot of theoretical physicists can't be accused of being practical-minded though. So it remains a problem.

I fully agree with the idea that inconsistencies are not fatal.

But I see hope for using this to make progress in the context of ideas about the evolution of law. For any learning agent, an inconsistency can be understood as a challenge to its own inference system: current observations are not consistent with the a priori expectations, and this presents a challenge to the agent. It must revise, or risk destabilisation. One can probably define a measure of this inconsistency, and as long as it's small, progress is made.

This also connects to the issue Popper faced, where he tried to make science clean by making it a deductive process, with no fuzzy induction etc. The inconsistency in that idea is that while falsification is deductive, hypothesis generation is not. Applied to the foundations of QM, I like to ask, for example: how does nature itself resolve "inconsistencies" in the evolutionary perspective? We like to think that inconsistencies are not stable, and inconsistent agents are not likely to dominate, so they are removed by Darwinian selection. The result is an approximately mutually consistent system.

I think the "inconsistencies" we partly agree one, are not something a responsible theorist should let pass. That's not to suggest however, that one can't simultaneously acknowledge that inconsistencies in themselves can not be avoided, and are not terminal failures in any way. They are rather just food for us who are less practically minded.

/Fredrik
 
  • #91
stevendaryl said:
Sure. My point is that physicists, unlike mathematicians, don’t consider inconsistencies fatal. Probably our best theories are inconsistent, but work well enough in a limited domain.

For a mathematician, his subject is defined by the theory. So if that theory is inconsistent, what are they talking about? But for a physicist, the ultimate subject matter is defined by observations. Theories are just tools. If the tools are flawed (inconsistent), then you just learn to work with their flaws.
 
  • #92
stevendaryl said:
Theories are just tools. If the tools are flawed (inconsistent), then you just learn to work with their flaws.
And, I hope, work to improve them?
/Fredrik
 
  • #93
stevendaryl said:
Theories are just tools. If the tools are flawed (inconsistent), then you just learn to work with their flaws.
But tools for what? If they are tools for making predictions, then it's OK. But physicists also use theories for conceptual understanding. So if the theory is inconsistent, then conceptual understanding can also be inconsistent. And if conceptual understanding is inconsistent and the physicist is aware of it, then the tool does not fulfill its purpose.
 
  • #94
stevendaryl said:
I'm not sure I understand the point you're making, but I think I agree that the weirdness of quantum mechanics is not its nonlocality. The only relevance of nonlocality is that it shows that that weirdness can't easily be explained in terms of a nonweird "deeper theory".
I realize I was kind of redundant; I agree with what you write.

I just meant to say that it's the application of Reichenbach's Common Cause Principle that seems out of place. It does not follow that the probability spaces that Alice and Bob, alone and together, can represent via ensembles constitute the common probability space that is presumed in the RCC. Here lies the old pathological expectation of how causation "must work", which apparently is not how it works in nature.

/Fredrik
 
  • #95
lugita15 said:
Anyway, what would you say about counterfactual definiteness?
In the meantime, I have changed my mind on that. I think it is a vague term with at least 3 different meanings: micro realism, macro realism and determinism. Since all 3 are already included in my two lists, counterfactual definiteness should not be added to the list.
 
  • #96
Demystifier said:
Then I choose 1. Quantum theory in its minimal form is incomplete; one should add something, like objective collapse (à la GRW) or additional variables (à la Bohm).

But for this thread, option 2. is more interesting.
So the relevance for this thread would then be that the law of total probability should be among the necessary assumptions.
 
  • #97
The law of total probability assumes that the summation index/integration variable (the hidden-variable parameter space) constitutes a proper partition of the event space. If the "hidden variables" don't form a pairwise disjoint set, then the law of total probability does not hold. As the law of total probability is a key construct in a "sum over paths" approach, assumptions on this partition are equivalent to assumptions about the intermediate interactions, and thus, by conceptual extension, about the Hamiltonian. And it is this assumption/ansatz of Bell's theorem that seems to me hard to defend, even a priori. The only vague defense is that in the case where the hidden variable is not strictly isolated from the environment but rather known, and is merely hidden from the experimenter, this form makes more sense. I.e., nothing prevents a "hidden variable" mechanism that does not constitute a partition and thus doesn't obey the premise of Bell's theorem. This is because transitions in QM seem to make use of "several paths at once", not just one at a time, simply weighted.
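A toy illustration of the partition point (my own classical example, nothing quantum): the law of total probability holds when the conditioning events form a genuine partition, and fails when they overlap.

```python
from fractions import Fraction as F

outcomes = range(1, 7)                     # a fair die
def P(event):
    return F(sum(1 for w in outcomes if event(w)), 6)
def P_given(b, h):                         # classical conditional probability
    return F(sum(1 for w in outcomes if b(w) and h(w)), 6) / P(h)

B = lambda w: w % 2 == 0                   # the event "even"

# A genuine partition of the event space: {1,2,3} and {4,5,6}.
partition = [lambda w: w <= 3, lambda w: w >= 4]
print(sum(P_given(B, h) * P(h) for h in partition))   # 1/2 = P(B), as required

# Overlapping "hidden variable" cells {1..4} and {3..6}: not a partition.
overlap = [lambda w: w <= 4, lambda w: w >= 3]
print(sum(P_given(B, h) * P(h) for h in overlap))     # 2/3 != P(B)
```

Nothing exotic happens in the second case; the decomposition formula simply has no business being applied when the cells are not pairwise disjoint.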

This has been my point in previous posts as well: to distinguish between a really HIDDEN variable and simple ignorance on the part of the experimenter.

I think that the solipsistic HV of Demystifier's is one example of a possibility. As I think of it, the hidden variable is hidden simply because it's subjective to the agent. But it nevertheless rules the action of that agent, in a way that looks "weird" to an outside observer, and this contributes to the total "interaction mechanism" in QM.

/Fredrik
 
  • #98
Fifty years of Bell’s theorem | CERN (home.cern)

Fifty years of Bell’s theorem

A paper by John Bell published on 4 November 1964 laid the foundations for the modern field of quantum-information science

4 NOVEMBER, 2014

By Christine Sutton

On 4 November 1964, a journal called Physics received a paper written by John Bell, a theoretician from CERN. The journal was short-lived, but the paper became famous, laying the foundations for the modern field of quantum-information science.

The background to Bell’s paper goes back to the late 1930s and Albert Einstein’s dislike of quantum mechanics, which he argued required “spooky actions at a distance”. In other words, according to quantum theory, a measurement in one location can influence the state of a system in another location. Einstein believed this appeared to occur only because the quantum description was not complete.

The argument as to whether quantum mechanics is indeed a complete description of reality continued for several decades until Bell, who was on leave from CERN at the time, wrote down what has become known as Bell’s theorem. Importantly, he provided an experimentally testable relationship between measurements that would confirm or refute the disliked “action at a distance”.

In the 1970s, experiments began to test Bell’s theorem, confirming quantum theory and at the same time establishing the basis for what has now become a major area of science and technology, with important applications, for example, in quantum cryptography.

Bell was born in Belfast, where he attended Queen’s University, eventually coming to CERN in 1960. For much of November the university is hosting a series of events in celebration of his work, including an exhibition Action at a Distance: The Life and Legacy of John Stewart Bell with photographs, objects and papers relating to Bell’s work alongside videos exploring his science and legacy. He sadly died suddenly in 1990. Now the Royal Irish Academy is calling for 4 November to be named "John Bell Day" (http://www.ria.ie/john-bell-day.aspx).

For more

The original paper: “On the Einstein Podolsky Rosen Paradox” by J S Bell [PDF]

 
  • #101
RUTA said:
Here is a good paper on Bell: https://arxiv.org/abs/1408.1826
Hmmm. I'll pass on opinions about the historical description itself, but...

"Nowadays, it is sometimes reported as ruling out, or at least calling in question, realism. But these are all mistakes. What Bell’s theorem, together with the experimental results, proves to be impossible (subject to a few caveats we will attend to) is not determinism or hidden variables or realism but locality"

Setting aside the actual historical account, this summary makes no sense to me.

"A physical theory is EPR-‐local iff according to the theory procedures carried out in one region do not immediately disturb the physical state of systems in sufficiently distant regions in any significant way"

According to this definition, I would not say we have EPR-nonlocality. All that is disturbed is the local expectation about the remote physical state, nothing else. Any "actual states" are always by definition shielded by expectations in a measurement theory; there is simply no way to directly "access" any raw truth by bypassing the constraints of the inference and measurement process. I.e., you cannot gain knowledge by bypassing the scientific or inference process.

So is the problem realism? I.e., that reality does not exist? What do they mean by realism or a state of reality?

"If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity."

No, I don't see a problem with that either. I.e., there is no problem with imagining a bare actual state of matter; as far as I can see, it is not incompatible with what we know! The problem, and the fallacy I see, is this:

The implicit assumption that it's the "raw reality" that supplies the correct variables in the causal relations of the laws of nature.

Let's just note that this is NOT the only possibility. The other possibility (which in my view is by far the more obvious and natural one, even though nonstandard) is that the laws of nature rather explain causal relations between expectations only! I.e., the "true state of matter" does NOT need to be (and indeed should not be) explicitly present in the laws.

For those who are unfamiliar with this agent-based model logic, economic interactions are an excellent example. At some point, the TRUE value of a product or currency really does not matter for the GAME itself! It's all about expectations, influencing and revising expectations. If the market has high expectations of a value, then that is what it's worth, even if the real state of matter is just fake.

So even if it is fine to think of, and imagine, actual states of matter, that is not the problem. The key insight could well be that the INTERACTIONS are nevertheless best understood in terms of causal relations between expectations, NOT causal relations between elements of reality.

This is why I think both a kind of realism and locality can be saved, but it's the "nature of causation" that we do not understand, or what the flaw in the Bell premise is => This directly addresses the question of the nature of physical law! I.e., does nature "obey laws", or does nature just behave "law-like", and why?

/Fredrik
 
  • #102
I just say: no reality, nothing to talk about... no existence...
 
  • #103

John Stewart Bell

Quick Info

Born: 28 July 1928, Belfast, Northern Ireland
Died: 1 October 1990, Geneva, Switzerland

Summary: John Stewart Bell was an Irish mathematician who worked in quantum mechanics.


Biography

John Bell's great achievement was that during the 1960s he was able to breathe new and exciting life into the foundations of quantum theory, a topic seemingly exhausted by the outcome of the Bohr-Einstein debate thirty years earlier, and ignored by virtually all those who used quantum theory in the intervening period. Bell was able to show that discussion of such concepts as 'realism', 'determinism' and 'locality' could be sharpened into a rigorous mathematical statement, 'Bell's inequality', which is capable of experimental test. Such tests, steadily increasing in power and precision, have been carried out over the last thirty years.

Indeed, almost wholly due to Bell's pioneering efforts, the subject of quantum foundations, experimental as well as theoretical and conceptual, has become a focus of major interest for scientists from many countries, and has taught us much of fundamental importance, not just about quantum theory, but about the nature of the physical universe.

In addition, and this could scarcely have been predicted even as recently as the mid-1990s, several years after Bell's death, many of the concepts studied by Bell and those who developed his work have formed the basis of the new subject area of quantum information theory, which includes such topics as quantum computing and quantum cryptography. Attention to quantum information theory has increased enormously over the last few years, and the subject seems certain to be one of the most important growth areas of science in the twenty-first century.

John Stewart Bell's parents had both lived in the north of Ireland for several generations. His father was also named John, so John Stewart has always been called Stewart within the family. His mother, Annie, encouraged the children to concentrate on their education, which, she felt, was the key to a fulfilling and dignified life. However, of her four children - John had an elder sister, Ruby, and two younger brothers, David and Robert - only John was able to stay on at school much over fourteen. Their family was not well-off, and at this time there was no universal secondary education, and to move from a background such as that of the Bells to university was exceptionally unusual.

Bell himself was interested in books, and particularly interested in science from an early age. He was extremely successful in his first schools, Ulsterville Avenue and Fane Street, and, at the age of eleven, passed with ease his examination to move to secondary education. Unfortunately the cost of attending one of Belfast's prestigious grammar schools was prohibitive, but enough money was found for Bell to move to the Belfast Technical High School, where a full academic curriculum which qualified him for University entrance was coupled with vocational studies.

Bell then spent a year as a technician in the Physics Department at Queen's University Belfast, where the senior members of staff in the Department, Professor Karl Emeleus and Dr Robert Sloane, were exceptionally helpful, lending Bell books and allowing him to attend the first year lectures. Bell was able to enter the Department as a student in 1945. His progress was extremely successful, and he graduated with First-Class Honours in Experimental Physics in 1948. He was able to spend one more year as a student, in that year achieving a second degree, again with First-Class Honours, this time in Mathematical Physics. In Mathematical Physics, his main teacher was Professor Peter Paul Ewald, famous as one of the founders of X-ray crystallography; Ewald was a refugee from Nazi Germany.

Bell was already thinking deeply about quantum theory, not just how to use it, but its conceptual meaning. In an interview with Jeremy Bernstein, given towards the end of his life and quoted in Bernstein's book [1], Bell reported being perplexed by the usual statement of the Heisenberg uncertainty or indeterminacy principle (##\Delta x \Delta p \geq \hbar##, where ##\Delta x## and ##\Delta p## are the uncertainties or indeterminacies, depending on one's philosophical position, in position and momentum respectively, and ##\hbar## is the (reduced) Planck constant).
It looked as if you could take this size and then the position is well defined, or that size and then the momentum is well defined. It sounded as if you were just free to make it what you wished. It was only slowly that I realized that it's not a question of what you wish. It's really a question of what apparatus has produced this situation. But for me it was a bit of a fight to get through to that. It was not very clearly set out in the books and courses that were available to me. I remember arguing with one of my professors, a Doctor Sloane, about that. I was getting very heated and accusing him, more or less, of dishonesty. He was getting very heated too and said, 'You're going too far'.
At the conclusion of his undergraduate studies Bell would have liked to work for a PhD. He would also have liked to study the conceptual basis of quantum theory more thoroughly. Economic considerations, though, meant that he had to forget about quantum theory, at least for the moment, and get a job, and in 1949 he joined the UK Atomic Research Establishment at Harwell, though he soon moved to the accelerator design group at Malvern.

It was here that he met his future wife, Mary Ross, who came with degrees in mathematics and physics from Scotland. They married in 1954 and had a long and successful marriage. Mary was to stay in accelerator design through her career; towards the end of John's life he returned to problems in accelerator design and he and Mary wrote some papers jointly. Through his career he gained much from discussions with Mary, and when, in 1987, his papers on quantum theory were collected [21], he included the following words:
I here renew very especially my warm thanks to Mary Bell. When I look through these papers again I see her everywhere.
Accelerator design was, of course, a relatively new field, and Bell's work at Malvern consisted of tracing the paths of charged particles through accelerators. In those days before computers, this required a rigorous understanding of electromagnetism, and the insight and judgment to make the necessary mathematical simplifications required to make the problem tractable on a mechanical calculator, while retaining the essential features of the physics. Bell's work was masterly.

In 1951 Bell was offered a year's leave of absence to work with Rudolf Peierls, Professor of Physics at Birmingham University. During his time in Birmingham, Bell did work of great importance, producing his version of the celebrated CPT theorem of quantum field theory. This theorem showed that under the combined action of three operators on a physical event: ##P##, the parity operator, which performs a reflection; ##C##, the charge conjugation operator, which replaces particles by anti-particles; and ##T##, which performs a time reversal, the result would be another possible physical event.
Unfortunately Gerhard Lüders and Wolfgang Pauli proved the same theorem a little ahead of Bell, and they received all the credit.

However, Bell added another piece of work and gained a PhD in 1956. He also gained the highly valuable support of Peierls, and when he returned from Birmingham he went to Harwell to join a new group set up to work on theoretical elementary particle physics. He remained at Harwell till 1960, but he and Mary gradually became concerned that Harwell was moving away from fundamental work to more applied areas of physics, and they both moved to CERN, the Centre for European Nuclear Research in Geneva. Here they spent the remainder of their careers.

Bell published around 80 papers in the area of high-energy physics and quantum field theory. Some were fairly closely related to experimental physics programmes at CERN, but most were in general theoretical areas.

The most important work was that of 1969 leading to the Adler-Bell-Jackiw (ABJ) anomaly in quantum field theory. This resulted from joint work of Bell and Roman Jackiw, which was then clarified by Stephen Adler. They showed that the standard current algebra model contained an ambiguity. Quantisation led to a symmetry breaking of the model. This work solved an outstanding problem in particle physics; theory appeared to predict that the neutral pion could not decay into two photons, but experimentally the decay took place, as explained by ABJ. Over the subsequent thirty years, the study of such anomalies became important in many areas of particle physics. Reinhold Bertlmann, who himself did important work with Bell, has written a book titled Anomalies in Quantum Field Theory [10], and the two surviving members of ABJ, Adler and Jackiw, shared the 1988 Dirac Medal of the International Centre for Theoretical Physics in Trieste for their work.

While particle physics and quantum field theory were the work Bell was paid to do, and he made excellent contributions, his great love was quantum theory, and it is for his work here that he will be remembered. As we have seen, he was concerned about the fundamental meaning of the theory from the time he was an undergraduate, and many of his important arguments had their basis at that time.

The conceptual problems may be outlined using the spin-##\frac{1}{2}## system. We may say that when the state-vector is ##\alpha_{+}## or ##\alpha_{-}## respectively, ##s_{z}## is equal to ##\frac{1}{2}\hbar## and ##-\frac{1}{2}\hbar## respectively, but, if one restricts oneself to the Schrödinger equation, ##s_{x}## and ##s_{y}## just do not have values. All one can say is that if a measurement of ##s_{x}##, for example, is performed, the probabilities of the result obtained being either ##\frac{1}{2}\hbar## or ##-\frac{1}{2}\hbar## are both ##\frac{1}{2}##.

If, on the other hand, the initial state-vector has the general form of ##c_{+}\alpha_{+} + c_{-}\alpha_{-}##, then all we can say is that in a measurement of ##s_{z}##, the probability of obtaining the value of ##\frac{1}{2}\hbar## is ##|c_{+}|^{2}##, and that of obtaining the value of ##-\frac{1}{2}\hbar## is ##|c_{-}|^{2}##. Before any measurement, ##s_{z}## just does not have a value.

These statements contradict two of our basic notions. We are rejecting realism, which tells us that a quantity has a value -- to put things more grandly, that the physical world has an existence independent of the actions of any observer. Einstein was particularly disturbed by this abandonment of realism -- he insisted on the existence of an observer-free realm. We are also rejecting determinism, the belief that, if we have a complete knowledge of the state of the system, we can predict exactly how it will behave. In this case, we know the state-vector of the system, but cannot predict the result of measuring ##s_{z}##.

It is clear that we could try to recover realism and determinism if we allowed the view that the Schrödinger equation, and the wave-function or state-vector, might not contain all the information that is available about the system. There might be other quantities giving extra information -- hidden variables. As a simple example, the state-vector above might apply to an ensemble of many systems, but in addition a hidden variable for each system might say what the actual value of ##s_{z}## might be. Realism and determinism would both be restored; ##s_{z}## would have a value at all times, and, with full knowledge of the state of the system, including the value of the hidden variable, we can predict the result of the measurement of ##s_{z}##.

A complete theory of hidden variables must actually be more complicated than this -- we must remember that we wish to predict the results of measuring not just ##s_{z}##, but also ##s_{x}## and ##s_{y}##, and any other component of ##\mathbf{s}##. Nevertheless it would appear natural that the possibility of supplementing the Schrödinger equation with hidden variables would have been taken seriously. In fact, though, Niels Bohr and Werner Heisenberg were convinced that one should not aim at realism. They were therefore pleased when John von Neumann proved a theorem claiming to show rigorously that it is impossible to add hidden variables to the structure of quantum theory. This was to be very generally accepted for over thirty years.

Bohr put forward his (perhaps rather obscure) framework of complementarity, which attempted to explain why one should not expect to measure ##s_{x}## and ##s_{y}## (or ##x## and ##p##) simultaneously. This was his Copenhagen interpretation of quantum theory. Einstein however rejected this, and aimed to restore realism. Physicists almost unanimously favoured Bohr.

Einstein's strongest argument, though this did not become very generally apparent for several decades, lay in the famous Einstein-Podolsky-Rosen (EPR) argument of 1935, constructed by Einstein with the assistance of his two younger co-workers, Boris Podolsky and Nathan Rosen. Here, as is usually done, we discuss a simpler version of the argument, thought up somewhat later by David Bohm.

Two spin-##\frac{1}{2}## particles are considered; they are formed from the decay of a spin-##0## particle, and they move outwards from this decay in opposite directions. The combined state-vector may be written as ##\frac{1}{\sqrt{2}}(\alpha_{1+}\alpha_{2-} - \alpha_{1-}\alpha_{2+})##, where the ##\alpha_{1}##s and ##\alpha_{2}##s for particles 1 and 2 are related to the ##\alpha##s above. This state-vector has a strange form. The two particles do not appear in it independently; rather either state of particle 1 is correlated with a particular state of particle 2. The state-vector is said to be entangled.

Now imagine measuring ##s_{1z}##. If we get ##+\frac{1}{2}\hbar##, we know that an immediate measurement of ##s_{2z}## is bound to yield ##-\frac{1}{2}\hbar##, and vice-versa, although, at least according to Copenhagen, before any measurement, no component of either spin has a particular value.

The result of this argument is that at least one of three statements must be true:
(1) The particles must be exchanging information instantaneously i.e. faster than light;
(2) There are hidden variables, so the results of the experiments are pre-ordained; or
(3) Quantum theory is not exactly true in these rather special experiments.
The first possibility may be described as the renunciation of the principle of locality, whereby signals cannot be passed from one particle to another faster than the speed of light. This suggestion was anathema to Einstein. He therefore concluded that if quantum theory was correct, so one ruled out possibility (3), then (2) must be true. In Einstein's terms, quantum theory was not complete but needed to be supplemented by hidden variables.

Bell regarded himself as a follower of Einstein. He told Bernstein [1]:
I felt that Einstein's intellectual superiority over Bohr, in this instance, was enormous; a vast gulf between the man who saw clearly what was needed, and the obscurantist.
Bell thus supported realism in the form of hidden variables. He was delighted by the creation in 1952 by David Bohm of a version of quantum theory which included hidden variables, seemingly in defiance of von Neumann's result. Bell wrote [21]:
In 1952 I saw the impossible done.
In 1964, Bell made his own great contributions to quantum theory. First he constructed his own hidden variable account of a measurement of any component of spin. This had the advantage of being much simpler than Bohm's work, and thus much more difficult just to ignore. He then went much further than Bohm by demonstrating quite clearly exactly what was wrong with von Neumann's argument.

Von Neumann had illegitimately extended to his putative hidden variables a result from the variables of quantum theory: that the expectation value of ##A + B## is equal to the sum of the expectation values of ##A## and of ##B##. (The expectation value of a variable is the mean of the possible experimental results weighted by their probability of occurrence.) Once this mistake was realized, it was clear that hidden-variables theories of quantum theory were possible.

However Bell then demonstrated certain unwelcome properties that hidden variable theories must have. Most importantly they must be non-local. He demonstrated this by extending the EPR argument, allowing measurements in each wing of the apparatus of any component of spin, not just ##s_{z}##. He found that, even when hidden variables are allowed, in some cases the result obtained in one wing must depend on which component of spin is measured in the other; this violates locality. The solution to the EPR problem that Einstein would have liked, rejecting (1) but retaining (2), was illegitimate. Even if one retained (2), as long as one ruled out (3) one had also to retain (1).

Bell had shown rigorously that one could not have local realistic versions of quantum theory. Henry Stapp called this result [18]:
the most profound discovery of science.
The other property of hidden variables that Bell demonstrated was that they must be contextual. Except in the simplest cases, the result you obtained when measuring a variable must depend on which other quantities are measured simultaneously. Thus hidden variables cannot be thought of as saying what value a quantity 'has', only what value we will get if we measure it.

Let us return to the locality issue. So far it has been assumed that quantum theory is exactly true, but of course this can never be known. John Clauser, Richard Holt, Michael Horne and Abner Shimony adapted Bell's work to give a direct experimental test of local realism. The result was the famous CHSH-Bell inequality [19], often just called the Bell inequality. In EPR-type experiments, this inequality is obeyed by local hidden variables, but may be violated by other theories, including quantum theory.

Bell had thus reached what has been called experimental philosophy: results of considerable philosophical importance may be obtained from experiment. The Bell inequalities have been tested over nearly thirty years with increasing sophistication, the experimental tests actually using photons with entangled polarisations, which are mathematically equivalent to the entangled spins discussed above. While many scientists have been involved, a selection of the most important would include Clauser, Alain Aspect and Anton Zeilinger.

While at least one loophole still remains to be closed [in August 2002], it seems virtually certain that local realism is violated, and that quantum theory can predict the results of all the experiments.

For the rest of his life, Bell continued to criticize the usual theories of measurement in quantum theory. Gradually it became at least a little more acceptable to question Bohr and von Neumann, and study of the meaning of quantum theory has become a respectable activity.

Bell himself became a Fellow of the Royal Society as early as 1972, but it was much later before he obtained the awards he deserved. In the last few years of his life he was awarded the Hughes Medal of the Royal Society, the Dirac Medal of the Institute of Physics, and the Heineman Prize of the American Physical Society. Within a fortnight in July 1988 he received honorary degrees from both Queen's and Trinity College Dublin. He was nominated for a Nobel Prize; if he had lived ten years longer he would certainly have received it.

This was not to be. John Bell died suddenly from a stroke on 1st October 1990. Since that date, the amount of interest in his work, and in its application to quantum information theory has been steadily increasing.
https://mathshistory.st-andrews.ac.uk/Biographies/Bell_John/

 
  • #104
Fra said:
This is why I think both a kind of realism and locality can be saved, but it's the "nature of causation" that we do not understand, or what the flaw in the Bell premise is => This directly addresses the question of the nature of physical law! I.e., does nature "obey laws", or does nature just behave "law-like", and why?
Sorry, I don't understand exactly what you're suggesting.

The simple intuitive reasoning for why anti-correlated spin-1/2 EPR seems to show nonlocality is this:

Pick an inertial coordinate system in which Alice and Bob are at rest, and Alice's measurement takes place slightly before Bob's. Then let's talk about three different events:
  1. ##e_1##: Immediately before Alice makes her measurement.
  2. ##e_2##: Immediately after Alice makes her measurement.
  3. ##e_3##: When Bob makes his measurement.
Alice is trying, based on information available to her, to figure out what result Bob will get at event ##e_3##. For simplicity, let's assume that Alice and Bob both agree ahead of time to measure spins along the z-axis.

At event ##e_1##, Alice is completely uncertain about Bob's result at ##e_3##. He might get spin-up. He might get spin-down. Her subjective probability for each possibility is 50%.

At event ##e_2##, Alice is completely certain about Bob's result. If she got spin-up, she knows that Bob will get spin-down, and vice-versa.

So Alice's subjective knowledge about Bob's result changed drastically between event ##e_1## and event ##e_2##. The issue, then, is what caused this change in her knowledge?

Classically, your knowledge about future events changes according to a two-step, back and forth process:
  1. You learn something about past (or present) conditions.
  2. Those facts allow you to recompute probabilities for those future events.
You can only learn something about conditions that are in your causal past (events that influence your present), and that can only give you information about the causal future of those conditions. The example I gave earlier in this thread is that Alice finds Bob's wallet, and she predicts that he will be unable to board a flight due to a lack of proper ID. Alice seeing the wallet in the present tells her something about the past: that Bob lost his wallet. That past tells her something about Bob's future: that he will be unable to board the flight.

Without spending too much time on it, I think that it's pretty much true that every case of learning something about future events has this character: You go backwards in time, using deduction, from observation to facts about the past, and then you go forwards in time, again using deduction from past conditions to future events. Einstein's speed of light limitation on influences further restricts things: The causal past of your current observations can only tell you about conditions in your backwards light cone, and the causal future of those conditions can only tell you about events in the future light cone of those conditions.

In contrast, in EPR, Alice's observations now tell her something about Bob's future measurements in a way that is not (apparently) mediated by the back and forth causal influences within a light cone. That's a sense in which Alice's information is nonlocal information about Bob. That doesn't necessarily imply nonlocal influences.
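For concreteness, a short sketch of the singlet statistics behind this (the textbook correlation function, not anything from the post): at equal settings Alice's outcome fixes Bob's with certainty, which is exactly the knowledge update described above, while the same correlation function violates the CHSH bound ##|S| \le 2## obeyed by any model in which that update is mediated by common causes in the overlap of the past light cones.

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation of spin measurements along angles a and b."""
    return -np.cos(a - b)

# Equal settings: perfect anticorrelation, so Alice's result at e2
# determines Bob's result at e3 with certainty.
print(E(0.0, 0.0))                       # -1.0

# CHSH combination at the standard angles.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))            # |S| = 2*sqrt(2) > 2
```

So the correlation itself is compatible with mere information update; it is the full angle dependence, summarized by ##|S| > 2##, that blocks the classical back-and-forth account.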
 
  • #105
stevendaryl said:
In contrast, in EPR, Alice's observations now tell her something about Bob's future measurements in a way that is not (apparently) mediated by the back and forth causal influences within a light cone. That's a sense in which Alice's information is nonlocal information about Bob. That doesn't necessarily imply nonlocal influences.
I follow and I agree with what you write. But if you label that non-local, it is a non-issue, I think.

Your definition does not seem to be the same as the one in that paper?
"A physical theory is EPR-‐local iff according to the theory procedures carried out in one region do not immediately disturb the physical state of systems in sufficiently distant regions in any significant way"

This does not speak about information; it speaks about physical states, which I think the author means to be "elements of reality". It seems that by changing definitions we can just decide whether we want local or nonlocal :)

Edit: the only twist I see on this, which I suspect is nothing like what the author means, is that even expectations are somehow elements of reality, in that they are encoded somewhere. But this is obviously NOT what they mean, as that makes it even more strange (from their perspective).

stevendaryl said:
The issue, then, is what caused this change in her knowledge?
Anyway, this was more the question I considered worth asking. The answer to your question, to me, is simply an information update, combined with the premise implied in pair creation. That Alice has information about Bob's future is not a problem per se at all.

The question I asked is not why we have a correlation; we know we do.

The question I ask is: if we have a pre-correlation, then why does the causal ansatz in Bell's theorem fail?

Or put differently: HOW COME the "physics" and "interactions" as inferred by a physicist in Bob's lab (inferring the laws of physics) are indifferent to whether Alice performed the measurement or not? Does the particle still behave as a superposition, even after Alice popped the bubble? Why? And how can we "make sense of this"? I.e., what is the logic here that can make us all say "aha"?

/Fredrik
 