A question regarding the Copenhagen interpretation.

In summary: The Copenhagen interpretation of quantum mechanics postulates that the collapse of the wave function is what causes a particular reality to be realized. However, there is no evidence that this collapse actually happens. The theory is based on the assumption that reality exists independently of our observations, which is a questionable assumption.
  • #36
Technically the state would be (if we only measure one particle) [itex]|\mathrm{state}\rangle = \frac{1}{\sqrt{2}}\left(|H\rangle|\text{apparatus shows } V\rangle - |V\rangle|\text{apparatus shows } H\rangle\right)[/itex]
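As a sanity check, the Born-rule weights of such a state can be computed numerically. This is a minimal sketch; the [itex]1/\sqrt{2}[/itex] normalization and the basis ordering are my own choices, not anything fixed by the post above:

```python
import numpy as np

# Basis: photon {|H>, |V>} tensor apparatus {|shows H>, |shows V>}.
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
shows_H, shows_V = H.copy(), V.copy()

# |state> = (|H>|shows V> - |V>|shows H>) / sqrt(2)
state = (np.kron(H, shows_V) - np.kron(V, shows_H)) / np.sqrt(2)

# Born-rule probability of each joint outcome, |<basis|state>|^2.
# Component order: (H, shows H), (H, shows V), (V, shows H), (V, shows V).
probs = np.abs(state) ** 2
print(probs)  # probabilities 0, 0.5, 0.5, 0
```

Only the two anticorrelated branches carry weight, each with probability 1/2, as the post's form of the state implies.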
 
  • #37
Well, but this state collapse is then just the updating of Alice's knowledge from her non-demolition measurement. Nothing has instantaneously happened to Bob's photon, at least as long as standard QED is right, according to which the interaction of Alice's photon with her measurement apparatus is local. To let Bob know her result, Alice needs to send the information to him, and that signal takes at least the time [itex]L/c[/itex] when Bob is at a distance [itex]L[/itex] from Alice. So no FTL communication is possible by just performing the local measurement on Alice's photon, and no violation of the causality structure of SRT is implied.
 
  • #38
vanhees71 said:
Well, but this state collapse is then just the updating of Alice's knowledge from her non-demolition measurement. Nothing has instantaneously happened to Bob's photon, at least as long as standard QED is right, according to which the interaction of Alice's photon with her measurement apparatus is local. To let Bob know her result, Alice needs to send the information to him, and that signal takes at least the time [itex]L/c[/itex] when Bob is at a distance [itex]L[/itex] from Alice. So no FTL communication is possible by just performing the local measurement on Alice's photon, and no violation of the causality structure of SRT is implied.

I agree that wave function collapse does not lead to any FTL communication of classical information, and there is no violation of special relativity.

At any rate, I take back my claim in post #16 that in interpretations with a classical/quantum cut, EPR can be described without collapse or non-unitary time evolution of the wave function.
 
  • #39
What I find unsatisfactory about the interpretation of quantum mechanics as a probabilistic theory is that probabilities are probabilities of events. Either it's a probability of something being true, or it's a probability of something happening. In the case of quantum mechanics, it's not clear what the probabilities are probabilities for. Associated with a wave function [itex]\Psi[/itex], an observable [itex]\hat{O}[/itex], and an eigenvalue [itex]o_i[/itex] of [itex]\hat{O}[/itex], there is a corresponding probability [itex]p_i[/itex]. But is this a probability that something is true? Is there a probability of [itex]p_i[/itex] that the observable has value [itex]o_i[/itex]? I would say no, that is not a correct interpretation. Observables don't have definite values in quantum mechanics until they are measured (or prepared to have those values). If you prepare a spin-1/2 particle so that it is spin-up in the z-direction, it has a 50% probability associated with having spin-up in the x-direction. But it doesn't actually have a spin in the x-direction prior to measuring it. (If it did, that would be a hidden variable, which is ruled out by Bell-type inequalities.)
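The spin-1/2 example above is easy to check numerically. A minimal sketch with NumPy; the names `up_z` and `up_x` are just illustrative:

```python
import numpy as np

# Pauli matrix for spin along x; the spin observable is (hbar/2)*sigma_x,
# but for Born-rule probabilities only the eigenvectors matter.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

up_z = np.array([1.0, 0.0])        # prepared: spin-up along z

# Eigenvector of sigma_x with eigenvalue +1, i.e. "spin-up along x".
evals, evecs = np.linalg.eigh(sigma_x)
up_x = evecs[:, np.argmax(evals)]

# Born rule: p(up_x | prepared up_z) = |<up_x|up_z>|^2
p = np.abs(np.vdot(up_x, up_z)) ** 2
print(p)  # 0.5
```

The 50% is a prediction about what a measurement along x would yield, not a statement that the particle already carries an x-spin value.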

So the probabilities for quantum mechanics are not associated with a probability of something being true. Then what are they probabilities for? They are probabilities of measuring something. The probability [itex]p_i[/itex] is the conditional probability that you will measure [itex]\hat{O}[/itex] to have value [itex]o_i[/itex], given that you choose to measure that observable. That's fine as a heuristic, but what does it really mean? The difficulty for me is that measurements themselves are physical interactions, presumably described by quantum mechanics. There is no more reason for the statement "The measurement of [itex]\hat{O}[/itex] produced value [itex]o_i[/itex]" to have a definite truth value than "[itex]\hat{O}[/itex] has value [itex]o_i[/itex]". I feel that a theory is still at the ad hoc stage if it must rely on the distinction between measurements and other interactions.

The closest thing to an objective interpretation of quantum mechanics that does not rely on ad hoc nonlocal interactions or ad hoc distinctions between measurements and other interactions is the "consistent histories" interpretation (which I think of as a variant of the Many-Worlds Interpretation). The wave function of the universe evolves continuously according to Schrodinger's equation (or the quantum-field-theoretic generalization) until decoherence splits it into effectively disjoint parts. At that point, we can choose to interpret the wave function probabilistically, as an ordinary probability distribution on alternative histories.

But I find that not completely satisfying either. For one thing, the notion of "effectively disjoint alternative histories" is fuzzy. It depends on the likelihood of being able to observe interference effects between alternatives. For another thing, it seems weird that we have to wait for decoherence to do its thing before we can interpret what is going on.
 
  • #40
stevendaryl said:
I feel that a theory is still at the ad hoc stage if it must rely on the distinction between measurements and other interactions.

Yes, it is widely agreed that quantum mechanics has a measurement problem, which within Copenhagen is rooted in the need to make a classical/quantum cut, with the measuring apparatus on the classical side.

Also, I agree that attempts to say that, in interpretations with a classical/quantum cut, wave function collapse is exclusively an updating of knowledge and the physical system is not evolving, are so far not convincing. If that were so, then quantum theory in such interpretations would be standard probability theory, and it isn't.
 
  • #41
Why do we refer to a quantum/classical cut rather than a boundary condition? Is that what measurement effectively is?
 
  • #42
Jilang said:
Why do we refer to a quantum/classical cut rather than a boundary condition? Is that what measurement effectively is?

We do have boundary conditions on the wave function, but quantum mechanics is not a theory like classical field theory, which also has boundary conditions. In classical field theory, the field potentially includes everything in the universe, including the measuring apparatus. The field is a model in which there is a single reality that changes with time. In classical field theory, we don't think there is any fundamental problem with including the measuring apparatus or the whole universe in the field, although it may be inconvenient. However, in quantum mechanics, if we try to put the measuring apparatus or the whole universe into the wave function, we get time evolution into superpositions which are not observed. To make the match to observation, we have to introduce the notion of measurement as a distinct postulate, as something that produces definite outcomes.
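The point about superpositions can be made concrete with a minimal sketch: model the apparatus as a single two-state pointer and the measurement interaction as a CNOT-style unitary. This is an idealization (a real apparatus has enormously many degrees of freedom), with the conventional shortcut of taking the "ready" pointer state equal to one of the pointer basis states:

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Unitary measurement interaction on (photon tensor pointer):
# |H>|ready> -> |H>|"H">,  |V>|ready> -> |V>|"V">  (a CNOT with the
# photon as control and the pointer as target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi = (H + V) / np.sqrt(2)   # photon in a superposition
ready = H                    # pointer "ready" state (idealized as |"H">)
out = CNOT @ np.kron(psi, ready)

# Result: (|H,"H"> + |V,"V">)/sqrt(2) -- an entangled superposition of
# pointer readings, not one definite reading. Unitary evolution alone
# never selects a branch; that is what the measurement postulate adds.
print(out)  # components 1/sqrt(2), 0, 0, 1/sqrt(2)
```

The final state assigns amplitude to two macroscopically distinct pointer readings at once, which is exactly the unobserved superposition described above.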

It may be possible to avoid the classical/quantum cut by using an interpretation such as many-worlds. However, at least for non-relativistic quantum mechanics, one can also avoid the classical/quantum cut with de Broglie-Bohm theory and its variants. Since there is not one unique way of avoiding the classical/quantum cut, and since quantum mechanics with the cut is working just fine for the moment, we can use it and remain agnostic about the underlying interpretation.
 
  • #43
Thanks atyy, my question is more along the lines of why we regard the preparation of the state as an initial boundary condition for the wavefunction, but don't talk of the measurement as being a final boundary condition.
 
  • #44
Jilang said:
Thanks atyy, my question is more along the lines of why we regard the preparation of the state as an initial boundary condition for the wavefunction, but don't talk of the measurement as being a final boundary condition.

There is one formulation of quantum mechanics in which something like this is the case, if I understand it correctly: http://arxiv.org/abs/0706.1232. I believe it is more a calculational method, like the path integral, than it is an interpretation. I think Dr Chinese has read this article quite carefully, so he could answer questions on it. But if you would like to discuss this, perhaps it'd be better to start a new thread.

In the more textbookish way of thinking, a measurement causes the wave function to collapse, and collapse is not governed by the Schroedinger equation, so if the particle still exists after the measurement, it isn't enough to set the measurement as a boundary condition, since unitary evolution is violated.
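The non-unitarity of collapse is visible in a minimal sketch of the textbook rule (project onto the observed eigenspace, then renormalize):

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (H + V) / np.sqrt(2)        # pre-measurement superposition

# Textbook collapse after observing outcome "H":
P_H = np.outer(H, H)              # projector |H><H|
post = P_H @ psi
post = post / np.linalg.norm(post)  # renormalization step

# The map psi -> post is nonlinear (because of the renormalization) and
# norm-decreasing before it, so it cannot be any unitary Schroedinger
# evolution.
print(post)  # [1. 0.]
```

This is why, as stated above, a boundary condition on unitary evolution alone cannot reproduce what the collapse postulate does.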
 
  • #45
Thanks atyy. I will read the article and I might start a new post.
 
  • #46
[my bolding]
vanhees71 said:
In this minimal interpretation there is thus no "spooky action at distance" or a "collapse of the state" necessary to explain the 100% correlation of the photons' polarization since this 100% was already prepared when the photons were created by parametric down conversion and not by Alice's and/or Bob's measurement of the polarization state of their single photons.

In the whole description of this EPR experiment nowhere a collapse assumption or action at a distance is necessary and thus there's no EPR paradox.

vanhees71 said:
Further, nobody denies the "nonlocal correlations" known as entanglement. The point is that these are correlations but not nonlocal interactions. To the contrary, the most successful theories, like the Standard Model of elementary particles, are local relativistic quantum field theories. This precisely resolves the EPR paradox as explained in my previous posting. The correlations are already there from the very beginning of the experiment, i.e., due to the preparation of the two photons in the entangled polarization state and it's not caused by the measurement of one of the photon's polarization. So there is no need for an action at a distance of the far-distant photon with the apparatus located where the other photon is registered.

vanhees71 said:
Before Bob measures the polarisation of his photon, he doesn't know it, and the polarizaton is not determined at all. He also doesn't know anything about Alice's photon, and it's indetermined as well. Now at the moment when Bob measures his photon's polarization, he also knows Alice's photon's polarization. The reason for this correlation between Alice's and Bob's photon polarization is, however not Bob's measurement but the preparation in the entangled state by the parametric down conversion in the very beginning.

This is confusing...? Are you talking about the 1935 EPR picture??

What you are saying is obviously not true after 1964, and Bell's groundbreaking paper "On the Einstein Podolsky Rosen paradox". The 1935 picture is only valid for perfect alignments/perfect correlations; i.e. it does not work for any other settings...
 
  • #47
The interesting question is what is responsible for the EPR correlations. I mean how does nature do that 'trick'?
 
  • #48
DevilsAvocado said:
[my bolding]
This is confusing...? Are you talking about the 1935 EPR picture??

What you are saying is obviously not true after 1964, and Bell's groundbreaking paper "On the Einstein Podolsky Rosen paradox". The 1935 picture is only valid for perfect alignments/perfect correlations; i.e. it does not work for any other settings...

There's a confusion about the meaning of "indeterminate". If you say that before Bob detects the photon, its polarization is indeterminate, you could be making an epistemological statement, that Bob doesn't know the polarization. Or you could be making a statement about the photon---that it doesn't have a polarization. After Bob does detect a horizontally-polarized photon, the state of Alice's photon + filter + detector becomes determined: She will definitely not measure a vertically-polarized photon. There are similarly two interpretations to this "becomes determined": (1) Bob learns about Alice's situation, or (2) Alice's situation changes to a situation in which the polarization is definite.

The two interpretations of the words "indeterminate" and "determined" are both unsatisfactory, in my opinion. If you view them as purely epistemological, so it's just a matter of Bob learning facts that are already true, then that would seem to imply a "hidden variables" model, which is ruled out by Bell's inequality. If you view them as not purely epistemological, but as actually about the state of the photons, then it would seem that Bob's measurement has an effect on the distant photon. Which is the "spooky action at a distance". Neither alternative is very attractive.
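The "becomes determined" statement can be made numerically concrete. Assuming the usual correlated pair state [itex](|HH\rangle + |VV\rangle)/\sqrt{2}[/itex] (one common convention for down-converted photon pairs; the post itself doesn't fix a state), the conditional probability that Alice finds V given that Bob found H is zero:

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Assumed correlated pair: (|HH> + |VV>)/sqrt(2), Alice's photon first.
psi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

def joint_prob(a, b):
    """Born probability that Alice finds |a> and Bob finds |b>."""
    return np.abs(np.vdot(np.kron(a, b), psi)) ** 2

p_bob_H = joint_prob(H, H) + joint_prob(V, H)   # marginal: Bob sees H
p_cond = joint_prob(V, H) / p_bob_H             # P(Alice V | Bob H)
print(p_cond)  # 0.0
```

The calculation itself is silent on which reading of "becomes determined" is correct: it gives the same conditional probability whether one regards it as Bob updating his knowledge or as Alice's situation changing.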
 
  • #49
bohm2 said:
The interesting question is what is responsible for the EPR correlations. I mean how does nature do that 'trick'?

I have absolutely no idea, and as a petite 'remedy' – we are in good company of über-smart people like Anton Zeilinger...

I think that Bohmian mechanics has some sort of "real explanation" (do you know?), but not in detail how the non-locality is 'implemented', and anyhow, there is serious trouble with RoS (relativity of simultaneity) as soon as you have "real stuff" out there influencing other distant "real stuff".

As Lee Smolin writes in his latest book:

Lee Smolin – Time Reborn said:
To describe how the correlations are established, a hidden-variables theory must embrace one observer’s definition of simultaneity. This means, in turn, that there is a preferred notion of rest. And that, in turn, implies that motion is absolute. Motion is absolutely meaningful, because you can talk absolutely about who is moving with respect to that one observer—call him Aristotle. Aristotle is at rest. Anything he sees as moving is really moving.
End of story.
In other words, Einstein was wrong. Newton was wrong. Galileo was wrong. There is no relativity of motion.
This is our choice. Either quantum mechanics is the final theory and there is no penetrating its statistical veil to reach a deeper level of description, or Aristotle was right and there is a preferred version of motion and rest.

It's hard and very interesting – I'm glad to live in these interesting times! ;)
 
  • #50
stevendaryl said:
There are similarly two interpretations to this "becomes determined": (1) Bob learns about Alice's situation, or (2) Alice's situation changes to a situation in which the polarization is definite.

The two interpretations of the words "indeterminate" and "determined" are both unsatisfactory, in my opinion.

I agree, there is no satisfactory explanation for EPR-Bell. You could refer to MWI to get rid of the whole problem, but to me this is at the same level as superdeterminism or the anthropic principle.

Basically, you could introduce even more weirdness to get rid of the EPR-Bell weirdness, or you could just "give up" and say - Shut up and calculate! ;)


P.S: My 'objection' to vanhees's posts was that it looks like he's talking about the 1935 picture of "gloves in a box", and of course this picture makes it possible to refer to a common source as an explanation of the correlations, whereas this breaks down as soon as you go one step further to the Bell picture.
 
  • #51
Just a comment on "indeterminism". Quantum theory clearly tells you which observables have a definite value (i.e., are determined) and which do not, given the state of the system, because you can calculate the probability of finding any particular value. If there is one value for which you get probability 1, that is the determined value of the observable; otherwise the observable is indetermined. The determination or non-determination of a certain observable is thus due to the preparation of the system in the (pure or mixed) state.
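This criterion is easy to state algorithmically. A minimal sketch for pure states with non-degenerate eigenvalues (degenerate spectra would need projectors onto eigenspaces); `determined_value` is an illustrative helper, not a library function:

```python
import numpy as np

def determined_value(state, observable, tol=1e-9):
    """Return the eigenvalue found with Born probability 1, or None if
    the observable is indetermined in this pure state."""
    evals, evecs = np.linalg.eigh(observable)
    for lam, v in zip(evals, evecs.T):
        if abs(np.abs(np.vdot(v, state)) ** 2 - 1.0) < tol:
            return lam
    return None

sigma_z = np.diag([1.0, -1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
up_z = np.array([1.0, 0.0])

print(determined_value(up_z, sigma_z))  # 1.0  (determined by preparation)
print(determined_value(up_z, sigma_x))  # None (indetermined: 50/50)
```

So whether an observable is determined is fixed entirely by the prepared state, exactly as described above.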

Further, I have never claimed anything contradicting Bell's achievements. Of course, the correlations due to entanglement are in perfect agreement with Bell, and these correlations are naturally described by the quantum-theoretical formalism and are well established empirically (including the violation of Bell's inequality, which excludes local deterministic hidden-variable models with very high significance).

With the Aspect-Zeilinger setup of polarization-entangled photons detected by far-distant observers "Alice and Bob", which I've described previously, you can also perform high-precision Bell experiments. You only have to rotate one of the polarization foils against the other. At certain relative angles you get maximal violations of Bell's inequality for the photon polarization. This is all encoded in the quantum-theoretical state and thus, according to the minimal statistical interpretation, inherent in the preparation procedure leading to the photon pair in the entangled state, and not due to any kind of collapse of the state caused by one observer's measurement of the polarization of his photon.
 
  • #52
vanhees71 said:
Further, I have never claimed anything contradicting Bell's achievements.

Good, then we are on the same page, since one of Bell's achievements was to rule out the "common source explanation" in EPR, which was the main fuel of the 20 year long Bohr–Einstein debates.
 
  • #53
bhobba said:
The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics.

Where can I learn more about this difficulty in QM?
 
  • #54
EskWIRED said:
Where can I learn more about this difficulty in QM?

The measurement problem - the need for a classical/quantum cut within the Copenhagen interpretation, and the use of two different postulates for dynamics given by the Schroedinger equation and wave function collapse are described in:

Bell, Against 'measurement'
http://www.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf

Laloe, Do we really understand quantum mechanics?
http://arxiv.org/abs/quant-ph/0209123 (see the section "Difficulties, paradoxes", on p13)
 
  • #55
DevilsAvocado said:
Good, then we are on the same page, since one of Bell's achievements was to rule out the "common source explanation" in EPR, which was the main fuel of the 20 year long Bohr–Einstein debates.

What do you mean by "common source explanation"? In some sense the entanglement in the two-photon example we discuss here is due to a common source of the two photons by parametric downconversion.
 
  • #56
vanhees71 said:
What do you mean by "common source explanation"? In some sense the entanglement in the two-photon example we discuss here is due to a common source of the two photons by parametric downconversion.

The entangled state is not a local common source in the sense of Bell.

http://arxiv.org/abs/0901.4255 (Eq 2)
Gisin, Foundations of Physics, 42, 80-85, 2012

Special relativity does forbid signalling "nonlocality", but it does not forbid quantum nonlocality. Thus quantum mechanics is nonlocal in the sense of Bell. However, it is not nonlocal in the sense of violating special relativity. Rohrlich and Popescu discuss these different definitions of locality/nonlocality http://arxiv.org/abs/quant-ph/9508009 (Foundations of Physics, 24, 379-385, 1995).
 
  • #57
vanhees71 said:
What do you mean by "common source explanation"? In some sense the entanglement in the two-photon example we discuss here is due to a common source of the two photons by parametric downconversion.

Yes absolutely, the two photons are entangled due to a "common source" (even though it is nowadays possible to entangle photons that have never met, i.e. entanglement swapping), but this common source can never be used as an explanation of the correlations, because the correlations are ruled by the relative angle between the two space-like separated polarizers (outside each other's light cone), in the form of Malus' law: [itex]\cos^2(a-b)[/itex]

I.e. there is no ("normal") way to know the measurement settings for Alice & Bob in advance; hence the "common source explanation" will get you nowhere.

It "worked" for 1935 EPR, because they only considered perfect correlations, it doesn't work today.
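The [itex]\cos^2(a-b)[/itex] dependence can be checked against the CHSH inequality directly. A minimal sketch, assuming the usual correlated pair state [itex](|HH\rangle + |VV\rangle)/\sqrt{2}[/itex]: the quantum correlation comes out as [itex]E(a,b) = \cos 2(a-b)[/itex], and at the standard angles it gives [itex]S = 2\sqrt{2} > 2[/itex]:

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)  # assumed pair state

def pol_obs(theta):
    """+1/-1 observable for a linear polarizer at angle theta."""
    t = np.array([np.cos(theta), np.sin(theta)])        # |theta>
    t_perp = np.array([-np.sin(theta), np.cos(theta)])  # |theta + 90 deg>
    return np.outer(t, t) - np.outer(t_perp, t_perp)

def E(a, b):
    """Correlation <psi| O(a) x O(b) |psi>; equals cos 2(a-b) here."""
    return psi @ np.kron(pol_obs(a), pol_obs(b)) @ psi

# Standard CHSH angles: 0, 45 deg for Alice; 22.5, 67.5 deg for Bob.
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # about 2.828 = 2*sqrt(2), above the local-realist bound of 2
```

No assignment of pre-existing "card colors" to the two wings can reproduce S > 2, which is exactly why the common-source story fails once arbitrary relative angles are allowed.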
 
  • #58
atyy said:
but it does not forbid quantum nonlocality.

Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error. QM rules out naive reality, i.e. local realism. If you reject realism (i.e. properties do not exist independent of observation) then locality is saved. If you keep it then locality is gone. But SR is still saved, since it can't be used to send information, which is what's required to sync clocks.

Basically all Bell type 'experiments' are doing is observing systems with spatial extent, and because of how it's arranged, if one thing in the system has a property on observation, so does the other thing - but they are spatially separated.

I have two pieces of paper, one black, and one white, and put them in envelopes. I randomly send one to one person, and the other to a different person. If either of those people opens their envelope they immediately know what the other person will get when they open theirs. There is nothing earth-shattering going on. Same with Bell type experiments, with the twist that we can't say it has the property of blackness or whiteness until observation.

Griffiths book - Consistent Quantum Theory discusses it from this interesting perspective:
https://www.amazon.com/dp/0521539293/?tag=pfamazon01-20

Thanks
Bill
 
  • #59
bhobba said:
Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error. QM rules out naive reality, i.e. local realism. If you reject realism (i.e. properties do not exist independent of observation) then locality is saved. If you keep it then locality is gone. But SR is still saved, since it can't be used to send information, which is what's required to sync clocks.

Basically all Bell type 'experiments' are doing is observing systems with spatial extent, and because of how it's arranged, if one thing in the system has a property on observation, so does the other thing - but they are spatially separated.

I have two pieces of paper, one black, and one white, and put them in envelopes. I randomly send one to one person, and the other to a different person. If either of those people opens their envelope they immediately know what the other person will get when they open theirs. There is nothing earth-shattering going on. Same with Bell type experiments, with the twist that we can't say it has the property of blackness or whiteness until observation.

Griffiths book - Consistent Quantum Theory discusses it from this interesting perspective:
https://www.amazon.com/dp/0521539293/?tag=pfamazon01-20

Thanks
Bill

Bell nonlocality is simply the violation of the Bell inequalities, given the existence of the measurement result, the measurement choice, the independence of the measurement choice, and the existence of a variable used to predict the probabilities. Certainly the inequality cannot be violated if any of these quantities doesn't exist, or if a probability distribution cannot be defined over them. However, it does not require that black or white exist before the measurement, only after it. The hidden variable can be the wave function itself. If one by fiat excludes the wave function from entering the inequality (and defines that as the wave function being not real), then perhaps one can escape the conclusion that quantum mechanics violates the inequality. Take a look at Eq 2 in http://arxiv.org/abs/0901.4255 (Foundations of Physics, 42, 80-85, 2012). I agree that one can use nonreality of the measurement results to avoid Bell nonlocality, but I don't think vanhees71 was challenging the violation of the Bell inequalities.
 
  • #60
bhobba said:
Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error.

For one thing, there are thousands of thorough and professional experiments, settling the options left for us to consider; i.e. the experimental results have nothing to do with interpretations as such, and the fact is – Bell's theorem is a mathematical proof – not specifically 'tied' to QM, or any interpretation. There is one task left – to close the loopholes simultaneously – but no one could seriously expect a different outcome (as that would be a bigger surprise than anything else).

(At least) one of these three options has to be abandoned:

  • Locality
  • Realism
  • Free will*
*I.e. give up our freedom to choose (random) settings, which would amount to Superdeterminism.

bhobba said:
If you reject realism (ie properties do not exist independent of observation) then locality is saved.

True, but there is one "little" problem left – unless one pursues the "Shut up and calculate!" methodology – one ought to explain how this works, and this (for sure) is not an easy task, not even for fairly 'vague' interpretations. Some attempts are Holism & Nonseparability, the Two-state vector formalism, Relational Blockworld, etc. These are all very interesting but quite "nasty creatures" that have very little or nothing to do with standard QM...

bhobba said:
If you keep it then locality is gone. But SR is still saved since it can't be used to send information which is what's required to sync clocks.

True, but if you keep realism in favor of locality, you have "real stuff" out there, that must act according to Relativity of Simultaneity, and then you will surely run into trouble with SR, trying to define a preferred version of motion and rest. Catch-22.

bhobba said:
I have two pieces of paper, one black, and one white and put them in envelopes. I randomly send one to one person, and another to a different person. If any of those people open their envelope they immediately know what the other person will get when they open their envelope. Their is nothing Earth shattering going on. Same with Bell type experiments, with the twist we can't say it has the property of blackness or whiteness until observation.

I'm quite baffled by this... you surely know things that I have absolutely no clue on... but this is wrong, entirely wrong... and this is the same "trap" that vanhees seems to have fallen into... weird...

This picture of envelopes or boxes with only two possible values (i.e. black/white [cards] or left/right [gloves]), is the old 1935 EPR picture. It is proven wrong beyond any discussion.

The new Bell picture is more like the complete spectrum of the rainbow, and the final definite colors are correlated *only* by the *relative* angle *between* the settings of the two space-like separated polarizers of Alice & Bob.

... :bugeye: ...
 
  • #61
DevilsAvocado said:
For one thing, there are thousands of thorough and professional experiments, settling the options left for us to consider; i.e. the experimental results have nothing to do with interpretations as such, and the fact is – Bell's theorem is a mathematical proof – not specifically 'tied' to QM, or any interpretation. There is one task left – to close the loopholes simultaneously – but no one could seriously expect a different outcome (as that would be a bigger surprise than anything else).
I haven't read it yet but a paper published today suggests that there may be a reason for this confusion. Then again, it might be just another paper that adds more confusion:
Many of the heated arguments about the meaning of “Bell’s theorem” arise because this phrase can refer to two different theorems that John Bell proved, the first in 1964 and the second in 1976...Although the two Bell’s theorems are logically equivalent, their assumptions are not, and the different versions of the theorem suggest quite different conclusions, which are embraced by different communities...I discuss why the two ‘camps’ are drawn to these different conclusions, and what can be done to increase mutual understanding.
The Two Bell’s Theorems of John Bell
http://arxiv.org/pdf/1402.0351.pdf
 
  • #62
DevilsAvocado said:
The new Bell picture is more like the complete spectrum of the rainbow, and the final definite colors are correlated *only* by the *relative* angle *between* the settings of the two space-like separated polarizers of Alice & Bob.
I actually don't think you are saying something different here than what bhobba said. I believe his analogy means that the envelope will be found to contain either black or white once you have chosen what "colors of the rainbow" you are measuring, but you cannot say it was black or white before you measured it, because that won't get the right correlations with the distant envelope that is being measured to be either red or blue. That's your "colors of the rainbow," as well as his "twist."

To me, the key point here is that the color of the paper in the envelope is not a property that the envelope carries inside it all the time, if you allow that you could have chosen to measure any color axis (which is what you mean by "free will.") If you give up localism, you say either that some kind of magical signal connects the envelopes and fixes their correlations consistently with all the measurement choices, or you say that the envelope is part of a larger system and the color of the paper is a joint property of that entire system, not a local property of that one envelope (that's the alternative that makes the most sense to me).

If you give up realism, you say that the colors are just concepts in the minds of the observers, but frankly I don't really see any difference in that alternative-- you still have to maintain either that a signal connects them, or that they are part of a joint system that maintains correlations because it is irreducible, whether you are talking about two minds or two envelopes. So for me, that's the same thing anyway, so I just maintain that what we mean by reality manifests a property of holding irreducible entanglements even when its parts are causally separated, and I have no real problem with this, because frankly any way that reality manifests itself is just as surprising to me as any other. The key is to not forget that argument from incredulity is really just argument from the ignorance of limited experience, in the disguise of familiarity.
 
  • #63
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.

The nonlocality in the violation of Bell's inequality refers to correlations, which have to be clearly distinguished from interactions. The entanglement between the photon polarizations in our Aspect-Zeilinger-experiment example can persist over very long times (as long as you can prevent one or both photons from being disturbed by perturbations leading to decoherence), and thus the two photons may be detected as far away from each other as you like and still show the entanglement, i.e., a correlation! This is the very reason why I think one has to abandon the naive collapse of Copenhagen: the correlations are not caused by the (local!) measurement of, say, Alice's photon but are present all the time due to the preparation of the photon pair at the very beginning.

Also note that I said the photons can be detected by far distant observers, not that the photons are far apart. This is because for photons there is not even a position observable in the strict sense and thus it doesn't make sense to talk about a photon's position, but that's another point.
 
  • #64
Ken G said:
I actually don't think you are saying something different here than what bhobba said. I believe his analogy means that the envelope will be found to contain either black or white once you have chosen what "colors of the rainbow" you are measuring,

Maybe this is correct, but perhaps it is safest to let Bill explain himself. Anyhow, "colors of the rainbow" was maybe not the most fortunate analogy. Of course, when we do measure, we will always get black/white, 1/0, [spin] up/down, at one single polarizer/measurement apparatus. My "rainbow" was referring to the "wide spectrum" of all the orientations/correlations around the complete circle 0° to 360°, resulting in the famous sinusoidal EPR-Bell test experiment results.

[Figure: sinusoidal coincidence-rate curve from an EPR-Bell test experiment. Credit: Alain Aspect]

My impression was that Bill was talking about [deterministic] fixed settings, always resulting in perfect correlations, and of course, in this case envelopes with black/white cards work like a dream, and hence could fool you into believing in a "common source explanation", as it did with Einstein.

But this is clearly wrong.
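To make this concrete, here is a minimal Python sketch (my own illustration, not from any poster in the thread) comparing a deterministic "envelope"-style local hidden-variable model against the quantum prediction E(a,b) = cos 2(a−b) for polarization-entangled photons. The local model reproduces the perfect correlations at parallel settings, which is exactly what fooled the common-source intuition, but its CHSH combination never exceeds Bell's bound of 2, while the quantum correlation reaches 2√2:

```python
import numpy as np

rng = np.random.default_rng(0)

def E_quantum(a, b):
    # Quantum correlation for polarization-entangled photons:
    # depends only on the relative analyzer angle.
    return np.cos(2 * (a - b))

def E_local(a, b, n=200_000):
    # Deterministic local model: both photons carry the same hidden
    # polarization angle lam; each side answers +1/-1 by a purely
    # local rule that sees only its own setting and lam.
    lam = rng.uniform(0, np.pi, n)
    A = np.sign(np.cos(2 * (lam - a)))
    B = np.sign(np.cos(2 * (lam - b)))
    return np.mean(A * B)

def chsh(E):
    # Standard CHSH combination at the optimal photon angles.
    a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"parallel settings, local model: {E_local(0.0, 0.0):.3f}")  # perfect correlation
print(f"local-model CHSH:   {chsh(E_local):.3f}")    # ~2, the Bell bound
print(f"quantum CHSH:       {chsh(E_quantum):.3f}")  # 2*sqrt(2) ~ 2.828
```

So a black/white-card mechanism matches the data at parallel settings only; the intermediate angles are where it breaks down.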

Ken G said:
but you cannot say it was black or white before you measured it, because that won't get the right correlations with the distant envelope that is being measured to be either red or blue. That's your "colors of the rainbow," as well as his "twist."

Let Bill explain it himself.

Ken G said:
To me, the key point here is that the color of the paper in the envelope is not a property that the envelope carries inside it all the time, if you allow that you could have chosen to measure any color axis (which is what you mean by "free will.") If you give up localism, you say either that some kind of magical signal connects the envelopes and fixes their correlations consistently with all the measurement choices, or you say that the envelope is part of a larger system and the color of the paper is a joint property of that entire system, not a local property of that one envelope (that's the alternative that makes the most sense to me).

I'm just guessing here, but I take it that "envelope is part of a larger system" means some sort of "ensemble interpretation", right? It's of course correct that we need several thousand measurements to get correlations like cos²(22.5°) ≈ 85%, which is naturally impossible for a single spin up/down measurement. However, now I would like to return to Bill's [deterministic] perfect correlations and parallel settings – in this case we only need one measurement to establish the correlation "link" – hence no ensemble.
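For the record, that 85% figure is just cos² of the relative analyzer angle; a quick arithmetic check (my own snippet, nothing beyond the formula quoted above):

```python
import math

theta = math.radians(22.5)       # relative analyzer angle
p_same = math.cos(theta) ** 2    # probability the two outcomes agree
corr = 2 * p_same - 1            # the correlation E = cos(2*theta)

print(f"P(same) = {p_same:.4f}")  # 0.8536, i.e. ~85%
print(f"E       = {corr:.4f}")    # 0.7071 = cos(45 deg)
```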

Ken G said:
If you give up realism, you say that the colors are just concepts in the minds of the observers, but frankly I don't really see any difference in that alternative-- you still have to maintain either that a signal connects them, or that they are part of a joint system that maintains correlations because it is irreducible,

Possible solutions for "sur"realism + locality are quite strange, and I'm afraid I can't talk about them in this forum. All I can say is that PF user RUTA (PhD in physics and involved in the foundations of QM) is working on a model called Relational Blockworld (RBW), where reality is fundamentally relational and non-dynamical.
 
  • #65
vanhees71 said:
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.

Yes, in his 1990 paper, Bell gave a formulation of what he called the "Principle of Local Causality":

"The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light. Thus for events in a space-time region 1 [...] we would look for causes in the backward light cone, and for effects in the future light cone. In a region like 2, space-like separated from 1, we would seek neither causes nor effects of events in 1."

vanhees71 said:
The nonlocality in the violation of Bell's inequality refers to correlations, which have to be clearly distinguished from interactions.

Yes, of course, any "classical interactions" are out of the picture; however, it is quite hard to deny that there must be an influence, i.e. a change in the states, and this gets even more troublesome if we adhere to realism at the expense of locality.

vanhees71 said:
That is precisely why I think one has to abandon the naive collapse of Copenhagen: the correlations are not caused by the (local!) measurement of, say, Alice's photon but are present all the time due to the preparation of the photon pair at the very beginning.

Maybe an extensive discussion regarding collapse, decoherence, etc., would not take us any further. We have to remember that the correlations exist as classical information in the measuring setup; as far as I know, it would make no difference whatsoever if the collapsed/uncollapsed wavefunction continued to the other end of the universe – the classical measurement data is still there for us to ponder... this stuff happens in front of our noses.

vanhees71 said:
Also note that I said the photons can be detected by far distant observers, not that the photons are far apart. This is because for photons there is not even a position observable in the strict sense, and thus it doesn't make sense to talk about a photon's position, but that's another point.

Also note that EPR-Bell experiments with two entangled trapped ions have been performed.
 
  • #66
vanhees71 said:
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.
Sure, interactions are still local, but that is just not what is meant by local in "local realism", and it was not the motivation for EPR. Einstein wanted all the information, including correlations between widely separated yet local interactions, to be carried with each particle; that was the kind of local realism that EPR aspired to. It was naive, we can agree, but that was his goal, and that is what Bell showed cannot be the case. So we agree-- interactions are local, but information is more global, so even though it cannot be propagated between observers faster than c, it also cannot be regarded as being "contained locally in the particles."
 
  • #67
DevilsAvocado said:
I'm just guessing here, but I take it that "envelope is part of a larger system" means some sort of "ensemble interpretation", right?
No, I just mean the "larger system" includes the other envelope, all the entanglements of the one envelope being measured. That's the crux of entanglement, as you know-- it imposes a holistic quality to its subsystems, that can persist even when the subsystems are interacted with outside of each other's light cones. But bhobba is completely aware of that too, so I don't think I'm stretching his words too far by attaching the interpretation I gave.

The main point I was making is that I don't think the key issue is whether we choose to drop realism (drop that the system in some sense "contains" its own attributes and information) or drop localism (drop that the information and attributes of an object move strictly with that object), because it doesn't really matter where we "put" the attributes, what matters is that we have two observers who are going to notice Bell-inequality-breaking correlations when they compare notes. Thus what we really have to decide between is whether we regard the source of those correlations to be some special kind of signal that can propagate superluminally without propagating any information between the points, or whether we regard the system as having a holistic correlation that does not require any kind of propagation whatsoever to maintain. Once you choose between those two issues, then whether you say you are "dropping locality" or "dropping realism" becomes a rather moot issue, because depending on other assumptions I make, I could characterize your choice in either of those terms, but not in any way that matters much-- you've already chosen the key language for speaking about the source of the correlations.
 
  • #68
@Ken G: Another good idea is never to talk about "realism" or even "local realism" in discussions about the interpretation of quantum theory without defining very clearly what you mean by that. I've never yet heard a clear definition, in mathematical terms, of what's meant by the words "realism" or "local realism". Usually they are used by philosophers as a kind of muttering rather than as scientifically, clearly defined terms. So I'm not able to discuss those notions properly.

@DevilsAvocado: Yes, that's another very clear feature of local relativistic QFTs! They all automatically fulfill the linked-cluster principle: i.e., there is no influence of space-like separated events on measurements by means of local interactions.

In our case: No matter what Bob does with his photon, i.e., whether he determines its polarization state before or after Alice does her experiment (or, if Bob and Alice are in relative motion to each other, no matter whether Bob's measurement act is in the future or past light cone of, or space-like separated from, Alice's measurement act), Alice will always simply measure a stream of unpolarized photons (provided Alice and Bob are sent a sequence of independently prepared entangled photon pairs from the parametric-down-conversion source). That's so because the experiment is well described by standard QED, which is a local relativistic QFT.
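This no-signaling property is easy to verify numerically. Here is a small NumPy sketch (my own illustration; the entangled state and Bob's analyzer angle are arbitrary choices): whatever Bob measures, Alice's reduced density matrix stays the maximally mixed I/2, i.e., unpolarized light.

```python
import numpy as np

# Polarization singlet |psi> = (|HV> - |VH>)/sqrt(2)
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())      # two-photon density matrix

def partial_trace_B(rho):
    # Trace out Bob's photon to get Alice's local (reduced) state.
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

rho_A = partial_trace_B(rho)
print(rho_A)                         # I/2: Alice sees unpolarized light

# Let Bob measure polarization at some angle b, keeping both outcomes
# (a non-selective local measurement on his side only):
b = 0.3
e1 = np.array([np.cos(b), np.sin(b)])
e2 = np.array([-np.sin(b), np.cos(b)])
rho_after = sum(
    np.kron(np.eye(2), np.outer(e, e)) @ rho @ np.kron(np.eye(2), np.outer(e, e))
    for e in (e1, e2)
)
print(partial_trace_B(rho_after))    # still I/2: nothing changed for Alice
```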
 
  • #69
vanhees71 said:
@Ken G: Another good idea is never to talk about "realism" or even "local realism" in discussions about the interpretation of quantum theory without defining very clearly what you mean by that. I've never yet heard a clear definition, in mathematical terms, of what's meant by the words "realism" or "local realism". Usually they are used by philosophers as a kind of muttering rather than as scientifically, clearly defined terms. So I'm not able to discuss those notions properly.
Yes, clarity is always essential. For me, I understand what "local realism" is intended to mean well enough by defining it to mean essentially "that which, the absence of which was Einstein's main objection to quantum mechanics, as evidenced by the EPR paper." Or equivalently "that property which neither quantum mechanics has, nor real experiments have, that Einstein felt both should have, as evidenced by his arguments in the EPR paper." You are welcome to translate that into more precise mathematical terms if it helps, that might be a service to many! But in simple terms, it means "that the information required to statistically predict the outcome of any set of experiments on different subsystems must be able to be regarded as contained within and carried along with those individual subsystems, entirely by themselves." This also means the information must be completely collapsed by measurements on the subsystems, because a local collapse must be able to access or define the full array of information carried by that subsystem. Or even more succinctly: "no spooky actions or correlation mediation at a distance." I think it may be said that Einstein's main objection to quantum mechanics was its "top-down", or holistic, approach to the wave function, whereas Einstein believed reality needed to be "bottom-up", i.e., reducible to its local elements. Bell showed that reality must have a top-down character, or else we have to imagine very strange things like we are not allowed to pick whatever observation we want to do.
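For what it's worth, Bell himself gave a mathematical rendering of local realism (sketched here from his standard formulation, with [itex]\lambda[/itex] standing for the hypothetical complete local state; this is my summary, not necessarily what either poster has in mind):

```latex
% Local causality a la Bell: outcomes depend only on the local
% setting and a common variable \lambda fixed at the source:
P(A,B \mid a,b) = \int d\lambda\, \rho(\lambda)\, P(A \mid a,\lambda)\, P(B \mid b,\lambda).
% For deterministic outcomes A(a,\lambda), B(b,\lambda) \in \{-1,+1\}
% the correlation is
E(a,b) = \int d\lambda\, \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda),
% and any such model obeys the CHSH bound
\left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2,
% whereas quantum mechanics reaches 2\sqrt{2} for suitable settings.
```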
 
Last edited:
  • #70
vanhees71 said:
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.

The nonlocality in the violation of Bell's inequality refers to correlations, which have to be clearly distinguished from interactions. The entanglement between the photon polarizations in our Aspect-Zeilinger-experiment example can persist over very long times (as long as you can prevent one or both photons from being disturbed by perturbations, leading to decoherence), and thus the two photons may be detected as far away from each other as you like and still show the entanglement, i.e., a correlation! That is precisely why I think one has to abandon the naive collapse of Copenhagen: the correlations are not caused by the (local!) measurement of, say, Alice's photon but are present all the time due to the preparation of the photon pair at the very beginning.

Regarding the collapse - if we take a frame in which Alice and Bob measure simultaneously, and consider that to be the end of the experiment, then yes, there is no need for non-unitary time evolution, since one just uses the Born rule without collapse to get the joint probability. But what happens in a frame in which their measurements are not simultaneous, and Alice does a non-demolition measurement first and gets a particular result? The two particles must still propagate after the non-demolition measurement, and in Copenhagen there is non-unitary time evolution. In interpretations with a classical/quantum cut (as I understand, both Copenhagen and the Minimal Statistical Interpretation have a classical/quantum cut), is there any description in which the time evolution is unitary in all frames? I thought you had earlier agreed that there was non-unitary time evolution (your post #37)?
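One part of this is at least easy to check directly: because Alice's and Bob's projectors commute, "collapse after Alice, then Born rule for Bob" and "joint Born rule with no collapse step" give identical probabilities, regardless of the frame-dependent ordering. A NumPy sketch (my own illustration, with arbitrary analyzer angles):

```python
import numpy as np

# Polarization singlet |psi> = (|HV> - |VH>)/sqrt(2)
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

def proj(angle):
    # Projector onto linear polarization at `angle`
    e = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(e, e)

a, b = 0.2, 0.9                      # arbitrary analyzer angles (radians)
PA = np.kron(proj(a), np.eye(2))     # acts on Alice's photon only
PB = np.kron(np.eye(2), proj(b))     # acts on Bob's photon only

# (1) Joint Born rule, no collapse step:
p_joint = float(np.vdot(PB @ PA @ psi, PB @ PA @ psi).real)

# (2) Collapse picture: Alice measures first (non-demolition), gets "+",
#     the state collapses, then Bob applies the Born rule:
phi = PA @ psi
p_alice = float(np.vdot(phi, phi).real)
phi = phi / np.sqrt(p_alice)         # renormalized post-collapse state
p_bob = float(np.vdot(PB @ phi, PB @ phi).real)
p_collapse = p_alice * p_bob

print(p_joint, p_collapse)           # identical numbers
```

Of course this only shows the predictions agree; it does not settle the interpretational question of whether the collapse is a physical process.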
 
