Von Neumann QM Rules Equivalent to Bohm?

In summary, the conversation discusses the compatibility between Bohm's deterministic theory and Von Neumann's rules for the evolution of the wave function. It is argued that although there is no true collapse in Bohmian mechanics, there is an effective (illusory) collapse that is indistinguishable from a true collapse. This is due to decoherence, where the wave function splits into non-overlapping branches and the Bohmian particle enters only one of them. However, there is disagreement about the applicability of Von Neumann's second rule for composite systems.
  • #106
rubi said:
Ok, I would have called that Bell's criterion, though. It's of course true that QM and QFT violate Bell's inequalities, but I don't see how that is relevant to the question whether there is a collapse or not. (After all, you can't cure the violation by introduction of a collapse either.)

Yes, my point is that vanhees71's argument against collapse using Einstein causality (considering Bell's criterion to formalize Einstein causality in EPR, which vanhees71 mentioned) is faulty, since the violation cannot be cured in QM, whether one uses collapse or not.

Going back to your construction of making a density matrix into a unit vector, I do see your point that it is not decoherence, and I don't know what it is. However, even at the non-rigorous level, there are simple versions of collapse in which a pure state collapses into a pure state, e.g. a wave function collapses into a delta function. Referring to http://arxiv.org/abs/0810.3536, the collapse is usually either
(1) linear and trace non-preserving (Eq 6.9), or
(2) nonlinear and trace-preserving (Eq 6.12)

Is there any way to make either Eq 6.9 or Eq 6.12 unitary?
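To make the contrast concrete, here is a minimal numerical sketch of the two kinds of collapse map on a single qubit, assuming the standard Lüders forms ##\rho \to P\rho P## and ##\rho \to P\rho P/\mathrm{Tr}(P\rho P)## (the exact equation numbering in the reference may differ):

[CODE]
import numpy as np

# Qubit state |psi> = (|0> + |1>)/sqrt(2), written as a density matrix
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Projector onto the measurement outcome |0>
P = np.array([[1.0, 0.0], [0.0, 0.0]])

# Linear but trace-non-preserving map: rho -> P rho P
rho_linear = P @ rho @ P
print(np.trace(rho_linear))      # 0.5, the trace is no longer 1

# Nonlinear (because of the rho-dependent denominator) but trace-preserving map
rho_nonlinear = rho_linear / np.trace(rho_linear)
print(np.trace(rho_nonlinear))   # 1.0

# Neither map has the form rho -> U rho U^dag for a unitary U: a unitary map
# would preserve both the trace and the eigenvalues of every input state.
[/CODE]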

Of course, there are ways of thinking that there is nothing strange with this non-unitary evolution, since as vanhees71 likes to say, it is just choosing a sub-ensemble. That's fine, except that in quantum mechanics without hidden variables, there are no sub-ensembles until measurement. Without hidden variables, the sub-ensembles appear at the moment of measurement and are labelled by the measurement outcome. It is ok to think that collapse is choosing a sub-ensemble, but then one should admit that one is using hidden variables.
 
  • #107
atyy said:
Yes, that is exactly my point - that vanhees71's argument using Einstein causality (considering Bell's criterion to formalize Einstein causality in EPR, which vanhees71 mentioned) is faulty, since the violation cannot be cured in QM, whether one uses collapse or not.
I think vanhees' motivation to get rid of the collapse isn't to fix Einstein causality, but rather to get rid of the two different ways of time evolution, if one way is enough to describe physics. (See my post #97 for that. It's not Einstein causal, but causal in the QFT sense.)

Going back to your construction of making a density matrix into a unit vector, I do see your point that it is not decoherence, and I don't know what it is. However, at least intuitively, I can feel it is right since there is a simple version of collapse which is just that we have a pure state that collapses into a pure state, e.g. a wave function collapses into a delta function (to use the non-rigorous language). Referring to http://arxiv.org/abs/0810.3536, the collapse is usually either
(1) nonlinear and trace-preserving (Eq 6.9), or
(2) linear and non-trace-preserving (Eq 6.12)

Is there any way to make the collapse unitary?
You can't make a non-linear evolution linear using my construction. The point was to make linear evolutions that take pure states into density matrices unitary. That works, for example, for evolutions given by a Lindblad equation, which can arise from decoherence but need not.
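For concreteness, here is a minimal sketch of the kind of linear, non-unitary density-matrix evolution being referred to, assuming a standard single-qubit pure-dephasing Lindblad equation integrated with a crude Euler step (this only illustrates the Lindblad case, not the unitarization construction itself):

[CODE]
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Pure initial state |+> = (|0> + |1>)/sqrt(2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

gamma, dt, steps = 1.0, 1e-3, 5000

# Pure dephasing Lindblad equation with H = 0 and jump operator L = sigma_z
# (here L^dag L = identity):  d(rho)/dt = gamma * (L rho L^dag - rho)
for _ in range(steps):
    rho = rho + dt * gamma * (sz @ rho @ sz.conj().T - rho)

# The map rho(0) -> rho(t) is linear in rho but not unitary on the qubit's
# Hilbert space: the initially pure state has become mixed.
print(np.trace(rho @ rho).real)   # purity drops from 1 towards 0.5
[/CODE]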

Of course, there are ways of thinking that there is nothing strange with this non-unitary evolution, since as vanhees71 likes to say, it is just choosing a sub-ensemble. That's fine, except that in quantum mechanics without hidden variables, there are no sub-ensembles until measurement. Without hidden variables, the sub-ensembles appear at the moment of measurement and are labelled by the measurement outcome. It is ok to think that it is choosing a sub-ensemble, but then one should admit that one is using hidden variables.
I would say that taking a subensemble is just a mathematical technique that is not related to physics. It's about taking an equivalent description of the physics one is interested in. If we compare it to general relativity for example, it would correspond to looking at a local coordinate patch instead of describing physics globally. The physics stays the same, but the description is different.
 
  • #108
rubi said:
I think vanhees' motivation to get rid of the collapse isn't to fix Einstein causality, but rather to get rid of the two different ways of time evolution, if one way is enough to describe physics. (See my post #97 for that. It's not Einstein causal, but causal in the QFT sense.)

If one gets rid of collapse and has only unitary evolution and no hidden variables, doesn't one end up with Many-Worlds?
 
  • #109
atyy said:
If one gets rid of collapse and has only unitary evolution and no hidden variables, doesn't one end up with Many-Worlds?
I don't think so. MWI includes the observer in the quantum description, but one doesn't need to do that. It's enough to have enough additional degrees of freedom to effectively hide the correlations from the observer. We can be agnostic with respect to the ontology and just regard the quantum state as a mathematical object that encodes the statistics of measurements. We could imagine, for example, that Alice and Bob don't measure their photons directly, but instead route them into their own local quantum eraser apparata. I would be very surprised if the results weren't in agreement with what one would calculate from plain old quantum mechanics, so they would still see 100% correlation in the results after the photons have gone through the quantum erasers. The difference between a quantum eraser and a measurement apparatus that destroys the entanglement is just that the correlation can be restored in the quantum eraser case, while it is lost for all practical purposes with a realistic measurement device, because we can't control all of its degrees of freedom.
 
  • #110
rubi said:
I don't think so. MWI includes the observer in the quantum description, but one doesn't need to do that. It's enough to have enough additional degrees of freedom to effectively hide the correlations from the observer. We can be agnostic with respect to the ontology and just regard the quantum state as a mathematical object that encodes the statistics of measurements. We could imagine, for example, that Alice and Bob don't measure their photons directly, but instead route them into their own local quantum eraser apparata. I would be very surprised if the results weren't in agreement with what one would calculate from plain old quantum mechanics, so they would still see 100% correlation in the results after the photons have gone through the quantum erasers. The difference between a quantum eraser and a measurement apparatus that destroys the entanglement is just that the correlation can be restored in the quantum eraser case, while it is lost for all practical purposes with a realistic measurement device, because we can't control all of its degrees of freedom.

I guess I just don't see how that is going to work within quantum mechanics as long as there are sequential measurements. Is there an actual calculation one can read?

It is true that one doesn't need sequential measurements in quantum mechanics. For example, normally one has Alice and Bob take separate measurements with their own time stamps. However, I could see that one could take Alice, Bob and their clocks all as one big experiment, and then we just open the box at the end and measure their results and regard their time stamps as position measurements of the hands of the clocks. I'm willing to accept that. However, that doesn't solve the problem - or rather, it is an already solved problem, since we do regard the possibility of shifting all measurements to the end as a means of preventing collapse. I think this would be something like a super use of the principle of deferred measurement http://en.wikipedia.org/wiki/Deferred_Measurement_Principle.

However, if we regard measurements and their time stamps to indicate real spacetime events, so that there are real sequential measurements, then I don't think one includes enough of the experimental apparatus to avoid collapse in a minimal interpretation.
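To illustrate the deferred-measurement point mentioned above, here is a minimal sketch for a single control-target pair (a hypothetical setup, just the textbook principle: measuring the control early and acting classically on the outcome gives the same statistics as applying the controlled gate unitarily and measuring everything at the end):

[CODE]
import numpy as np
rng = np.random.default_rng(0)

def measure_then_control(shots=100000):
    """Measure the control first, then flip the target classically on outcome 1."""
    counts = {"00": 0, "11": 0}
    for _ in range(shots):
        c = rng.integers(2)          # control was (|0>+|1>)/sqrt(2): 50/50 outcome
        t = c                        # conditional X on the target
        counts[f"{c}{t}"] += 1
    return counts

def control_then_measure(shots=100000):
    """Apply the CNOT unitarily first, measure both qubits at the end."""
    plus = np.array([1, 1]) / np.sqrt(2)
    zero = np.array([1, 0])
    state = np.kron(plus, zero)                       # |+>|0>
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
    state = cnot @ state                              # (|00>+|11>)/sqrt(2)
    probs = np.abs(state) ** 2
    outcomes = rng.choice(4, size=shots, p=probs)
    labels = ["00", "01", "10", "11"]
    return {labels[k]: int(n) for k, n in zip(*np.unique(outcomes, return_counts=True))}

print(measure_then_control())   # ~50/50 between "00" and "11"
print(control_then_measure())   # same statistics: the measurement was deferred to the end
[/CODE]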
 
  • #111
vanhees71 said:
Neutrino oscillations are an observed fact
Definite outcomes in QM are also an observed fact.

vanhees71 said:
and you have to introduce neutrino masses into the standard model
As far as I know, there is no strict proof that neutrino masses are the only possible way to explain neutrino oscillations.

vanhees71 said:
To the contrary there's no observed fact which would force me to introduce a collapse and,
There is also no direct measurement of neutrino masses.

(Don't get me wrong! I am not against neutrino masses. I think it's a very reasonable assumption which is almost certainly right. I only emphasize that "almost certainly" is not the same as "certainly".)

vanhees71 said:
in addition, the introduction of this idea is very problematic (EPR!).
I think you misunderstood it. First, if you mean the problem of non-locality, then collapse is non-local even without EPR. Second, if you mean the type of non-locality specific to EPR, then QM is non-local in that sense even without collapse, provided that some ontology (known or unknown) exists even when it is not measured. (That's what the Bell theorem says.)

vanhees71 said:
So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!
If you can choose not to bother with collapse or other ontological aspects of QM, and use only those aspects of QM which are directly measurable, then you can also choose not to bother with neutrino masses and describe neutrino oscillations purely phenomenologically, without ever explicitly mentioning their masses.

vanhees71 said:
Since when do you need an "ontology" in physics? Physics is about the description of objective observable facts about nature and not to provide an ontology (although famous people like Einstein opposed this view vehemently).
Who determines what physics is about? Why doesn't the opinion of a large portion of physicists, including Einstein, count?

vanhees71 said:
E.g., it doesn't make sense to ask whether a "particle" (better to say "quantum" here) has a "shape" at all within QT.
Again, who determines which question makes or does not make sense?

vanhees71 said:
You can only describe the probability of the outcome of concrete measurements (observables), which are defined as (an equivalence class of) measurement procedures.
You can describe much more than that, but you do not have direct experimental proof that your description is correct. Just as you can describe neutrino oscillations with neutrino masses, even though you do not have direct experimental proof that the description in terms of masses is correct.
 
  • #112
Well, in this point I'm more with Bohr than with Einstein. It's of course my opinion.

Concerning the neutrinos: That's an interesting statement. I have no clue how you can have neutrino mixing if the neutrinos don't have different masses. Do you have a model where you have neutrino mixing without neutrino masses (i.e., all three neutrino flavors massless)?
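For reference, the standard two-flavour vacuum oscillation formula (in natural units) makes explicit where the masses enter:

##P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)##

so for ##\Delta m^2 = 0## the oscillation probability vanishes identically, however large the mixing angle ##\theta## is.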
 
  • #113
vanhees71 said:
In our example, the local interaction of A's photon with her polarizer doesn't affect instantaneously Bob's photon.
When you say that, do you imagine, in your head, that those photons exist even before they are measured by Alice or Bob? Be very careful with your answer because:

a) If you answer that you do, then I will accuse you of inconsistency. Namely, that means that you imagine, in your head, that photons have some ontology, while in another post you strictly claimed that physics is not about ontology.

b) If you answer that you don't even imagine it (which would be hard to believe), then I will ask you the following: Why, in the sentence above, do you use classical language which seems to suggest that you do imagine that photons exist even before measurement?

And I am not asking you to say whether photons have an ontology. I only want to know whether you imagine that they do, for the sake of easier thinking about them.
 
  • #114
I don't understand what you mean by this. Of course, there are two photons prepared with parametric down conversion. We write down a two-photon state after all. So this state tells me that there are two photons.

Of course these photons have no definite polarization and no position at all (because there's no position operator for photons). The full analysis must of course use both the space-time information and the polarization information of the two-photon correlation function, which gives the probability for detection (!) of one photon at Alice's place (within a certain macroscopically small volume) within a certain macroscopically small time interval, and ditto for the registration probability at Bob's position and time.

Photons only manifest themselves in a space-time picture as "registration events". Whether you claim that this notion of photons is ontological or epistemic is your choice. I don't consider the answer to this question essential for physics, which is about what we can observe, and photons are observable only through (local) interaction events with massive particles in a macroscopic measurement device, which have a position observable. That's a peculiarity of massless particles with spin ##\geq 1##.

For a detailed (effective) model, see the classical paper by Hong and Mandel:

Hong, C. K., Mandel, L.: Theory of parametric frequency down conversion of light, Phys. Rev. A 31, 2409, 1985
http://dx.doi.org/10.1103/PhysRevA.31.2409
 
  • #115
vanhees71 said:
Well, in this point I'm more with Bohr than with Einstein. It's of course my opinion.
Fair enough!

vanhees71 said:
Concerning the neutrinos: That's an interesting statement. I have no clue how you can have neutrino mixing if the neutrinos don't have different masses. Do you have a model where you have neutrino mixing without neutrino masses (i.e., all three neutrino flavors massless)?
I don't have an explicit alternative model, but the point is: Do I need such a model?

What I am doing is accusing you of applying double standards:
- In the case of neutrino oscillations you seem to suggest that we should use some model (neutrino masses or possibly something else), even if there is no direct experimental proof that it is correct.
- In the case of quantum observables taking definite values, you suggest that we should not use any model (e.g. collapse or Bohmian trajectories) if there is no direct experimental proof that it is correct.

Why is it legitimate to propose theories of neutrino oscillations, but not theories of definite outcomes of quantum observables? Because Bohr said so?
 
  • #116
vanhees71 said:
Of course, there are two photons prepared with parametric down conversion.
How do you know that if you haven't yet measured them? That's what I am asking you.
 
  • #117
vanhees71 said:
We write down a two-photon state after all. So this state tells me that there are two photons.
Some of us also write down the two Bohmian trajectories of those two photons. Does this tell us that there are two Bohmian trajectories?
 
  • #118
The neutrino oscillations are an observed fact. So I'm forced to include them in the Standard Model of elementary particle physics. There's no difficulty in principle in doing so. So what do I need to bother about?

On the other hand, in my opinion, there's not a single hint of something like an instantaneous collapse, and to assume one makes only trouble. So why should I bother to include one? It doesn't explain, any better than the minimal interpretation does, why you measure definite values of observables. You can always only give probabilities for measuring a definite value, no matter whether you invoke a collapse or not.

The case of Bohmian mechanics is more delicate. I don't know what to make of it, to be honest. On the one hand it seems to work for the non-relativistic case and gives the same results. Some people feel better about this interpretation, because they claim it's more realistic than the minimal interpretation (or other flavors of Copenhagen). So it gives the same observable facts as QT and thus seems to be a valid interpretation of this successful theory. So I can live with it easily, although I still don't see any merit in the introduction of unobservable trajectories, but that's just a matter of taste and not an objective argument against it.

For the case of relativistic QFT, I'm not sure what to make of it at all. There's still the open question whether there is a consistent formulation of Bohmian mechanics for standard relativistic QFT or not. Scully et al. claim that "Bohmian trajectories for photons" are at best "surreal", others claim that's not true. What to make of the notion of "trajectories" for entities which don't have a position observable is another question I've not seen answered. So I'd say it's still an open question whether a Bohmian approach to standard QFT makes sense or not.
 
  • #119
Demystifier said:
Some of us also write down the two Bohmian trajectories of those two photons. Does this tell us that there are two Bohmian trajectories?
You are quicker than I can answer! It's however in my posting #118.
 
  • #120
vanhees71 said:
The neutrino oscillations are an observed fact. So I'm forced to include them in the Standard Model of elementary particle physics. There's no difficulty in principle in doing so. So what do I need to bother about?
True, but neutrino masses are not an observed fact. Neutrino masses are a possible interpretation (albeit a very convincing one) of neutrino oscillations.

In fact, this is not even a unique interpretation: Are neutrino masses Majorana or Dirac?

vanhees71 said:
On the other hand, in my opinion, there's not a single hint of something like an instantaneous collapse, and to assume one makes only trouble. So why should I bother to include one? It doesn't explain, any better than the minimal interpretation does, why you measure definite values of observables. You can always only give probabilities for measuring a definite value, no matter whether you invoke a collapse or not.

The case of Bohmian mechanics is more delicate. I don't know what to make of it, to be honest. On the one hand it seems to work for the non-relativistic case and gives the same results. Some people feel better about this interpretation, because they claim it's more realistic than the minimal interpretation (or other flavors of Copenhagen). So it gives the same observable facts as QT and thus seems to be a valid interpretation of this successful theory. So I can live with it easily, although I still don't see any merit in the introduction of unobservable trajectories, but that's just a matter of taste and not an objective argument against it.

For the case of relativistic QFT, I'm not sure what to make of it at all. There's still the open question whether there is a consistent formulation of Bohmian mechanics for standard relativistic QFT or not. Scully et al. claim that "Bohmian trajectories for photons" are at best "surreal", others claim that's not true. What to make of the notion of "trajectories" for entities which don't have a position observable is another question I've not seen answered. So I'd say it's still an open question whether a Bohmian approach to standard QFT makes sense or not.
With that I actually agree. The Bohmian approach seems to me much more convincing than collapse. And yes, there are still some unsolved problems with the Bohmian approach, but hey, which physical theory is without any problems?
 
  • #121
Neutrino masses are just very difficult to measure, as is figuring out whether they are Majorana or Dirac particles. Of course, the models for neutrino oscillations are predictions, which may be falsified by experiment. So it's a feature, not a bug, because that is, after all, what defines a scientific theory or model.

Hm, maybe I end up becoming a Bohmian, but for this I'd need a convincing formulation for the standard relativistic QFTs.
 
  • #123
atyy said:
I guess I just don't see how that is going to work within quantum mechanics as long as there are sequential measurements. Is there an actual calculation one can read?
One could for example write down a photon-phonon interaction Hamiltonian that absorbs photons into phonons (of the polarizer material) and adjust the selection rules so that only horizontal photons are absorbed and vertical photons pass through. One can keep phonon number degrees of freedom for both the polarizers of Alice and Bob. So the basis states would look like this: ##|##(Alice photon number horizontal)##>\otimes|##(Alice photon number vertical)##>\otimes|##(Alice phonon number)##>\otimes|##(Bob PNH)##>\otimes|##(Bob PNV)##>\otimes|##(Bob PN)##>##
We would start with ##|1>\otimes|0>\otimes|0>\otimes|0>\otimes|1>\otimes|0>-|0>\otimes|1>\otimes|0>\otimes|1>\otimes|0>\otimes|0>## and end up with ##|0>\otimes|0>\otimes|1>\otimes|0>\otimes|1>\otimes|0>-|0>\otimes|1>\otimes|0>\otimes|0>\otimes|0>\otimes|1>## (if both Alice and Bob had oriented their polarizers so that horizontal photons are absorbed). Now both Alice and Bob either have a photon left or they don't and if they do, it's not correlated with anything that can be measured (assuming they don't have access to the phonon degrees of freedom).
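As a numerical check of this toy model (a sketch under the stated assumptions; the mode ordering and final state are taken directly from the paragraph above), one can trace out the two phonon modes and see that the remaining photon state is an incoherent mixture, i.e. the correlations are no longer accessible to Alice and Bob alone:

[CODE]
import numpy as np

# Mode ordering from the post: (A_PNH, A_PNV, A_phonon, B_PNH, B_PNV, B_phonon)
def basis(*occ):
    v = np.zeros([2] * 6)
    v[occ] = 1.0
    return v

# Final state if both polarizers absorb horizontal photons (normalised here)
psi = (basis(0, 0, 1, 0, 1, 0) - basis(0, 1, 0, 0, 0, 1)) / np.sqrt(2)

# Partial trace over the two phonon modes (axes 2 and 5):
# group the photon axes first, the phonon axes last, and form M M^dagger.
M = np.transpose(psi, (0, 1, 3, 4, 2, 5)).reshape(16, 4)
rho_photons = M @ M.conj().T

print(np.round(np.diag(rho_photons), 3))      # two entries of 0.5, the rest 0
print(np.trace(rho_photons @ rho_photons))    # purity 0.5: the photon state is mixed
# The superposition's coherence now lives in the (inaccessible) phonon modes,
# which is the "correlations spilling over into the apparatus" described above.
[/CODE]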

It is true that one doesn't need sequential measurements in quantum mechanics. For example, normally one has Alice and Bob take separate measurements with their own time stamps. However, I could see that one could take Alice, Bob and their clocks all as one big experiment, and then we just open the box at the end and measure their results and regard their time stamps as position measurements of the hands of the clocks. I'm willing to accept that. However, that doesn't solve the problem - or rather, it is an already solved problem, since we do regard the possibility of shifting all measurements to the end as a means of preventing collapse. I think this would be something like a super use of the principle of deferred measurement http://en.wikipedia.org/wiki/Deferred_Measurement_Principle.
Yes, that's essentially the idea.

However, if we regard measurements and their time stamps to indicate real spacetime events, so that there are real sequential measurements, then I don't think one includes enough of the experimental apparatus to avoid collapse in a minimal interpretation.
I would argue that this is not the right thing to do. QM is a statistical theory and it doesn't make sense to speak about individual measurements within its framework. If you want to compare experimental results to the theory, you first need to translate them into the language of statistics, i.e. you must have performed several measurements and computed the statistical mean and so on. You can of course collect your data in a spacetime diagram, but it tells you nothing about what the quantum state on the corresponding hypersurface was. If you are interested in the quantum state, you basically need to perform something like quantum tomography, where you measure position or momentum a thousand times, but every time start with a newly prepared state, and then find a quantum state that is consistent with the statistics.
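As a concrete (if idealised) illustration of the tomography idea in the simplest case, a single-qubit state ##\rho = \tfrac{1}{2}(I + \vec r\cdot\vec\sigma)## can be reconstructed from the measured expectation values of ##\sigma_x, \sigma_y, \sigma_z##, each estimated from many freshly prepared copies (a sketch only; a real analysis would also enforce positivity, e.g. by maximum likelihood):

[CODE]
import numpy as np
rng = np.random.default_rng(1)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# "Unknown" state to be reconstructed (each run starts from a fresh preparation)
psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])
rho_true = np.outer(psi, psi.conj())

def estimate_expectation(op, shots=10000):
    """Simulate measuring op on 'shots' freshly prepared copies; average the +/-1 outcomes."""
    evals, evecs = np.linalg.eigh(op)
    probs = np.abs(evecs.conj().T @ psi) ** 2
    outcomes = rng.choice(evals, size=shots, p=probs / probs.sum())
    return outcomes.mean()

# Bloch vector from the three measured expectation values
r = [estimate_expectation(op) for op in (sx, sy, sz)]
rho_est = 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)

print(np.round(rho_true, 3))
print(np.round(rho_est, 3))   # agrees with rho_true up to statistical error
[/CODE]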
 
  • #124
vanhees71 said:
But in the quantum case A also knew beforehand that the two photons in the entangled state are in this entangled state. That is as good as in the classical example.

No, it's not analogous.

Shoe case: With left and right shoes: Alice knows that Bob's box either contains a right shoe or a left shoe. She just doesn't know which. Opening her box tells her which is the case.

EPR twin-pair case: With EPR twin-pairs, the analogous claim would be: Alice knows that Bob's box is either polarized horizontally, or it's polarized vertically. She just doesn't know which. Opening her box tells her which is the case.

Bell's theorem (together with the assumption of locality) shows that the analogous claim for EPR twin-pairs is wrong. Alice's measurement tells her with certainty what the state of Bob's photon is (or at least, as much as is possible to know): she knows that it's horizontally polarized. But unlike the classical case, it isn't consistent for her to assume that that was the case all along.

So do you see the difference:
  • Classically, Alice discovers that Bob's box contains a right shoe. She concludes that it was a right shoe all along.
  • Quantum-mechanically, Alice discovers that Bob's photon is horizontally polarized. She CAN'T conclude that it was horizontally polarized all along.
 
  • #125
rubi said:
As I see it, the problem is the following: We have a state ##\left|\Psi\right> = \left|HV\right>-\left|VH\right>##. This state contains all information that is obtained in an EPR experiment, so a collapse is not necessary. The collapse is not needed to explain the results of an EPR experiment. However, we also know that if we measure any of the same photons again, we will not get the same correlations again. Therefore, after the measurement, the state cannot be ##\left|\Psi\right>## anymore, but needs to be something different. This is the real reason for why we usually assume that the system has collapsed into ##\left|HV\right>## or ##\left|VH\right>## and this would indeed be a non-local interaction. However, it doesn't need to be so. There is another option that is only available if we are willing to include the measurement devices into the description: The local interaction with the measurement device could have made the correlations spill over into some atoms of the measurement device, so the correlations are still there, but not easily accessible. One only needs local interactions for this to happen. I'm convinced that if we could ever control all the degrees of freedom of the measurement apparata, we could recover the information about correlations. It's basically analogous to the quantum eraser.

Well, it seems to me that the "spilling over into atoms of the measurement device" is exactly what leads to the Many Worlds Interpretation.

The collapse interpretation is only relevant if we assume that measurements have unique outcomes.
 
  • #126
rubi said:
@atyy: Can you explain what you mean by Einstein causality and how Bell tests violate it?

If Einstein causality says that non-local 100% correlations should not be allowed if there is no common cause, then I would reply that correlation doesn't imply causation and therefore it wouldn't be a good definition of causality in the first place.

Einstein causality doesn't really have to do with "causation", it can be expressed as purely a claim about correlations. Bell actually worked out a more general claim about correlations in his "theory of local beables" (where "beables" is pronounced BEE-able in an analogy with OBSERV-able).

The idea is this: If two events [itex]A[/itex] and [itex]B[/itex] are correlated (whether 100% or otherwise), then there is some fact [itex]C[/itex] in the common backwards lightcones of [itex]A[/itex] and [itex]B[/itex] such that the correlation between [itex]A[/itex] and [itex]B[/itex] is explained by their sharing the common past [itex]C[/itex].

Let's go through some examples. Suppose that Alice and Bob each have a device that produces a sequence of random digits. When they compare notes, they find that their devices produce the SAME sequence. Then this principle would tell you that there is some unknown common explanation.

Maybe the two devices are using the same, deterministic algorithm. That would explain the correlation.

Maybe the two devices are receiving signals from a common source. That would explain the correlation.

I don't think that there is any logical necessity that there be a common explanation. But it's what people usually assume. An apparent correlation can be a coincidence, but a sustained correlation indicates some common explanation.
 
  • #127
stevendaryl said:
No, it's not analogous.

Shoe case: With left and right shoes: Alice knows that Bob's box either contains a right shoe or a left shoe. She just doesn't know which. Opening her box tells her which is the case.

EPR twin-pair case: With EPR twin-pairs, the analogous claim would be: Alice knows that Bob's box is either polarized horizontally, or it's polarized vertically. She just doesn't know which. Opening her box tells her which is the case.

Bell's theorem (together with the assumption of locality) shows that the analogous claim for EPR twin-pairs is wrong. Alice's measurement tells her with certainty what the state of Bob's photon is (or at least, as much as is possible to know): she knows that it's horizontally polarized. But unlike the classical case, it isn't consistent for her to assume that that was the case all along.

So do you see the difference:
  • Classically, Alice discovers that Bob's box contains a right shoe. She concludes that it was a right shoe all along.
  • Quantum-mechanically, Alice discovers that Bob's photon is horizontally polarized. She CAN'T conclude that it was horizontally polarized all along.
But in the EPR-Twin case it's the same. The only qualification is that you have to say "Bob's photon is either H or V polarized, IF he chooses to measure the polarization in that direction." For the classical case you don't need the "IF". That's the only difference. Of course you are right in saying that Bob's photon had no definite polarization before A made her measurement. Caught in a classical worldview, you could then conclude that there must be a mechanism (called "collapse") that makes Bob's photon polarized horizontally through the measurement. But this is not a compulsory conclusion, because you can just as well interpret it in the minimal way, according to which there are simply these non-classical correlations described by the quantum formalism, and that's the reason why, if A measures a V-polarized photon, then B must find an H-polarized photon (supposing B measures the same polarization direction).
 
  • #128
stevendaryl said:
The idea is this: If two events [itex]A[/itex] and [itex]B[/itex] are correlated (whether 100% or otherwise), then there is some fact [itex]C[/itex] in the common backwards lightcones of [itex]A[/itex] and [itex]B[/itex] such that the correlation between [itex]A[/itex] and [itex]B[/itex] is explained by their sharing the common past [itex]C[/itex].

What does it mean for two events to be correlated? Don't you need sequences of events?
 
  • #129
rubi said:
Ok, I would have called that Bell's criterion, though. It's of course true that QM and QFT violate Bell's inequalities, but I don't see how that is relevant to the question whether there is a collapse or not. (After all, you can't cure the violation by introduction of a collapse either.)

The importance of collapse is that Bell's inequality as applied to EPR is proved under the assumption that there are no nonlocal influences affecting the two measurements. If there is a nonlocal collapse, then the assumptions behind Bell's proof are not satisfied, and so you shouldn't expect Bell's inequality to hold, necessarily.

That's really the issue: Does the violation of Bell's inequality imply that something nonlocal is happening? The unitary evolution of the wave function is local (although there's a little confusion about that in my mind, because the wave function is a function on configuration space, not physical space, so it's a little tricky to say what "local evolution" means). But the selection of one alternative out of a set of possible alternatives seems to be a nonlocal event. It seems as if this selection process happens at Alice's device and Bob's device simultaneously.
 
  • #130
martinbn said:
What does it mean for two events to be correlated? Don't you need sequences of events?

I'm sloppily using the word "correlated" to mean dependent probabilities. That is,

[itex]P(A\ \&\ B) \neq P(A) \cdot P(B)[/itex]

Bell's theory of local beables says that if [itex]A[/itex] and [itex]B[/itex] are statements about localized conditions at spacelike separations, then there must be some fact [itex]C[/itex] that is a statement about the common past (the intersection of the backward lightcones) such that:

[itex]P(A\ \&\ B | C) = P(A | C) \cdot P(B | C)[/itex]
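To see how this factorization constrains correlations, here is a small Monte Carlo sketch (an assumed toy local-hidden-variable model; the quantum value uses the standard singlet correlation ##E(a,b) = -\cos(a-b)##): any model satisfying the factorization obeys the CHSH bound of 2, while quantum mechanics reaches ##2\sqrt{2}##.

[CODE]
import numpy as np
rng = np.random.default_rng(0)

# CHSH settings that maximise the quantum violation for the singlet state
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4

def E_quantum(x, y):
    """Singlet-state correlation E(x, y) = -cos(x - y)."""
    return -np.cos(x - y)

def E_lhv(x, y, samples=200000):
    """Toy local hidden-variable model: the common past C is a random angle lam,
    and each side's outcome depends only on its own setting and lam, so
    P(A, B | lam) = P(A | lam) * P(B | lam)."""
    lam = rng.uniform(0, 2 * np.pi, samples)
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))
    return np.mean(A * B)

def chsh(E):
    return abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))

print(chsh(E_lhv))       # ~2 (up to sampling noise): the Bell/CHSH bound
print(chsh(E_quantum))   # 2*sqrt(2) ~ 2.83: the quantum prediction violates it
[/CODE]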
 
  • #131
stevendaryl said:
No, it's not analogous.

Shoe case: With left and right shoes: Alice knows that Bob's box either contains a right shoe or a left shoe. She just doesn't know which. Opening her box tells her which is the case.

EPR twin-pair case: With EPR twin-pairs, the analogous claim would be: Alice knows that Bob's box is either polarized horizontally, or it's polarized vertically. She just doesn't know which. Opening her box tells her which is the case.

Bell's theorem (together with the assumption of locality) shows that the analogous claim for EPR twin-pairs is wrong. Alice's measurement tells her with certainty what the state of Bob's photon is (or at least, as much as is possible to know): she knows that it's horizontally polarized. But unlike the classical case, it isn't consistent for her to assume that that was the case all along.

So do you see the difference:
  • Classically, Alice discovers that Bob's box contains a right shoe. She concludes that it was a right shoe all along.
  • Quantum-mechanically, Alice discovers that Bob's photon is horizontally polarized. She CAN'T conclude that it was horizontally polarized all along.
I think you are applying classical reasoning to the quantum mechanical world. In a classical world, it would be reasonable to assume Bell's locality criterion, since all relativistically covariant classical theories automatically satisfy it. We can however have perfectly relativistically covariant quantum theories that don't satisfy the principle, and therefore we can't assume that it is generally valid. In the quantum world, there are objects like left/right shoes and right/left shoes, and if Alice discovers that she got the box with the left/right shoe, then she automatically knows that Bob must have gotten the box with the right/left shoe. I think we just have to accept the peculiar feature of QM that there can be objects that are in superposition.

stevendaryl said:
Well, it seems to me that the "spilling over into atoms of the measurement device" is exactly what leads to the Many Worlds Interpretation.
I wouldn't say that it leads to many worlds, since we only get many worlds if we include the observer in the picture. It suffices, however, to include enough quantum degrees of freedom to effectively hide the correlation from the observer.

The collapse interpretation is only relevant if we assume that measurements have unique outcomes.
Does the collapse lead to unique outcomes? I would say no, because even if the quantum state collapses to one of the states in a superposition, this state can still be expanded in another basis and looks like a superposition there. Just by looking at a state, we can never tell whether it is the state of a collapsed system and encodes the fact that there is something with a definite outcome or it is an unmeasured state in a superposition of possible outcomes of the conjugate variable. I think we can never interpret the state consistently as a representation of a physical outcome. Instead, it only tells us about the statistical features that we will observe if we conduct an experiment with an identical preparation procedure a thousand times.
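A concrete example of the basis-dependence: a photon that has "collapsed" to horizontal polarization is itself an equal-weight superposition in the diagonal basis, ##\left|H\right> = \tfrac{1}{\sqrt{2}}\left(\left|{+45^\circ}\right> + \left|{-45^\circ}\right>\right)##, so nothing in the state by itself marks it as the outcome of a completed measurement.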
 
  • #132
rubi said:
I would argue that this is not the right thing to do. QM is a statistical theory and it doesn't make sense to speak about individual measurements within its framework. If you want to compare experimental results to the theory, you first need to translate them into the language of statistics, i.e. you must have performed several measurements and computed the statistical mean and so on. You can of course collect your data in a spacetime diagram, but it tells you nothing about what the quantum state on the corresponding hypersurface was. If you are interested in the quantum state, you basically need to perform something like quantum tomography, where you measure position or momentum a thousand times, but every time start with a newly prepared state, and then find a quantum state that is consistent with the statistics.

I agree that one can do without collapse if one is using the philosophy that one should not have sequential measurements, and kith and I have agreed on this in another discussion here some time ago (in fact, I think Feynman says this somewhere). I would argue that it's a matter of taste whether one rejects sequential measurements or not: the predictions of sequential measurements and collapse can reproduce all the predictions of the view with no sequential measurements and no collapse.

However, I don't think there can be any violation of the Bell inequalities at spacelike separation in the no-collapse view. The reason is that spacelike-separated measurements that are simultaneous in one frame will be sequential in another frame. To fix this problem, instead of Bob saying that Alice made a simultaneous measurement at location x and obtained spin up at time t, he has to say that he only measured what Alice reported to him locally. This is fine, and a standard way to get rid of nonlocality in quantum mechanics.
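The frame dependence mentioned above can be made explicit with a short Lorentz-boost check (hypothetical coordinates, units with c = 1):

[CODE]
import numpy as np

# Two spacelike-separated measurement events, simultaneous in the lab frame:
# Alice at (t=0, x=0), Bob at (t=0, x=1)
events = np.array([[0.0, 0.0],
                   [0.0, 1.0]])        # rows: (t, x)

def boost(t, x, v):
    """Standard Lorentz boost along x with velocity v (|v| < 1, c = 1)."""
    gamma = 1.0 / np.sqrt(1 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)

for v in (+0.5, -0.5):
    times = [boost(t, x, v)[0] for t, x in events]
    print(v, times)
# For v = +0.5 Bob's event happens *before* Alice's (negative t'); for v = -0.5
# it happens after: the temporal order of spacelike-separated measurements is
# frame dependent.
[/CODE]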
 
  • #133
vanhees71 said:
But in the EPR-Twin case it's the same. The only qualification is that you have to say "Bob's photon is either H or V polarized, IF he chooses to measure the polarization in that direction."

Let's assume for simplicity that Alice and Bob are using the same filter orientation. That is agreed-upon ahead of time. Then after Alice measures her photon to have polarization H, Alice knows something definite about Bob's future measurement: that he will measure the polarization to be H.

Classically, if Alice learns something definite about a future measurement performed by Bob, she can assume that that means that that result was pre-determined. If Alice learns that Bob will find a right shoe when he opens the box, she assumes that it was a right shoe before he opened the box.

In EPR, Alice learns that Bob will measure polarization H. But she can't assume that it had polarization H before he measured it.

That's a pretty stark difference between the two cases.

For the classical case you don't need the "IF". That's the only difference.

You don't need an "if" in the EPR case, if Bob agrees ahead of time to use a pre-arranged filter orientation.

Of course you are right in saying that Bob's photon had no definite polarization before A made her measurement. Caught in a classical worldview, you could then conclude that there must be a mechanism (called "collapse") that makes Bob's photon polarized horizontally through the measurement.

I don't think it's a matter of assuming a mechanism. Collapse is just a description of the situation, it seems to me. Assuming once again that Bob has agreed ahead of time to use the same pre-arranged filter orientation as Alice, Alice knows before Bob does what his measurement will be. She learns a fact about Bob's photon + filter + detector remotely. Under the assumption that Alice and Bob are using the same orientation, and that Alice observes a horizontally-polarized photon before Bob does his measurement, let [itex]X[/itex] be the claim: "Bob will observe a horizontally-polarized photon".

It seems to me that there are only three possibilities:
  1. X was a fact before Alice observed her photon.
  2. X only became a fact after Alice observed her photon.
  3. X is not really a fact at all (the MWI tactic of rejecting unique outcomes)
By "fact" I mean something that is objectively true, independent of any observer. If you assume definite outcomes, then it seems to me that "Bob will observe a horizontally-polarized photon" is a fact, in this sense.
 
  • #134
stevendaryl said:
Einstein causality doesn't really have to do with "causation", it can be expressed as purely a claim about correlations. Bell actually worked out a more general claim about correlations in his "theory of local beables" (where "beables" is pronounced BEE-able in an analogy with OBSERV-able).

[...]

I don't think that there is any logical necessity that there be a common explanation. But it's what people usually assume. An apparent correlation can be a coincidence, but a sustained correlation indicates some common explanation.
I think the logical reasoning is as follows:
1. All classical relativistically covariant theories satisfy Einstein causality.
2. All theories that satisfy Einstein causality satisfy Bell's inequality.
3. Quantum mechanics doesn't satisfy Bell's inequality.
4. Therefore quantum mechanics can't be a classical relativistically covariant theory.

So the point of Einstein causality is not to be a physical principle that must necessarily be true, but rather to be a tool that allows us to prove that QM can't be a classical relativistically covariant theory. This is not a problem though, since QM can still be a quantum mechanical relativistically covariant theory; it just needs to be interpreted differently.

stevendaryl said:
The importance of collapse is that Bell's inequality as applied to EPR is proved under the assumption that there are no nonlocal influences affecting the two measurements. If there is a nonlocal collapse, then the assumptions behind Bell's proof are not satisfied, and so you shouldn't expect Bell's inequality to hold, necessarily.

That's really the issue: Does the violation of Bell's inequality imply that something nonlocal is happening? The unitary evolution of the wave function is local (although there's a little confusion about that in my mind, because the wave function is a function on configuration space, not physical space, so it's a little tricky to say what "local evolution" means). But the selection of one alternative out of a set of possible alternatives seems to be a nonlocal event. It seems as if this selection process happens at Alice's device and Bob's device simultaneously.
I would just say that Einstein causality is too strong a requirement. It is certainly true for classical theories, but it's not in the spirit of quantum mechanics and shouldn't be applied there. Of course, it's legitimate to look for deeper reasons for the 100% correlations, but in my opinion, nobody has really come up with a convincing theory so far.
 
  • #135
rubi said:
I would just say that Einstein causality is too strong a requirement. It is certainly true for classical theories, but it's not in the spirit of quantum mechanics and shouldn't be applied there. Of course, it's legitimate to look for deeper reasons for the 100% correlations, but in my opinion, nobody has really come up with a convincing theory so far.

As I said, it seems to me that the selection of a single outcome out of a set of possible outcomes is a nonlocal event (or substitute some other word for "event", since Einstein used "event" to mean something local). The fact that quantum field theory is local in a certain sense (spacelike separated field operators commute, or anticommute) doesn't imply that the whole shebang is local, because the equations governing the evolution of the field operators do not include a selection of one outcome out of many possible outcomes. That's an addition needed to apply QFT to experiment. I don't see how there is any possibility of making that step local.

Or here's another way to talk about it. Ultimately, what we observe is not quantum fields. What we observe are macroscopic histories---histories of outcomes of measurements. So quantum mechanics can be thought of simply as a computational tool for computing probabilities of these macroscopic histories.

So what quantum mechanics gives us is a way of computing [itex]P(h)[/itex], the probability for a particular macroscopic history [itex]h[/itex]. Mathematically, you can turn a probability distribution on histories into a stochastic process. We can define for such a process what it means to be "Einstein-separable". Roughly speaking, it means that we can split space up into small regions, and give an independent transition probability for each region that depends only on neighboring regions. A failure of Einstein-separability means that the transition probabilities for distant regions are not independent.

Classically (classically meaning physics including SR but excluding QM), the failure of a macroscopic history to be Einstein-separable indicates microscopic facts that were not taken into account. If you enrich your notion of "history" to include these microscopic facts, then Einstein-separability would be restored.

Quantum-mechanically, this is apparently not the case. There is no way to restore Einstein-separability by giving more details about the histories. It's a semantic quibble whether you consider this failure to be about "locality" or not. It is about locality in the sense that "decisions" about distant regions have to be coordinated.
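A deliberately stripped-down illustration of the factorization involved (toy numbers, two distant binary regions): a separable assignment of outcome probabilities is a product of local pieces, while perfectly coordinated outcomes, like the 100% EPR correlations, are not. Classically one expects that conditioning on a more detailed history restores the product form; the point above is that quantum mechanically it cannot be restored.

[CODE]
import numpy as np

# Joint outcome probabilities P(a, b) for two distant regions, a, b in {0, 1}.
# Separable case: the joint distribution is a product of independent local pieces.
p_a = np.array([0.7, 0.3])
p_b = np.array([0.4, 0.6])
P_separable = np.outer(p_a, p_b)

# Coordinated case: outcomes always agree (schematically like the 100% EPR correlations).
P_coordinated = np.array([[0.5, 0.0],
                          [0.0, 0.5]])

def is_product_of_marginals(P, tol=1e-12):
    """Check whether P(a, b) equals the product of its own marginals."""
    return np.allclose(P, np.outer(P.sum(axis=1), P.sum(axis=0)), atol=tol)

print(is_product_of_marginals(P_separable))    # True
print(is_product_of_marginals(P_coordinated))  # False: distant outcomes are not independent
[/CODE]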
 
  • #136
vanhees71 said:
Hm, maybe I end up becoming a Bohmian, but for this I'd need a convincing formulation for the standard relativistic QFTs.

But do relativistic QFTs exist? Do we have a gauge invariant and Lorentz invariant regulator for the standard model and Einstein gravity?
 
  • #137
rubi said:
I think you are applying classical reasoning to the quantum mechanical world.

I'm saying that you CAN'T say that the quantum situation is analogous to the classical situation, because classically, discovering some fact about a distant situation implies that the fact was already true before you made that discovery. But quantum mechanically, we can't assume that.

In a classical world, it would be reasonable to assume Bell's locality criterion, since all relativistically covariant classical theories automatically satisfy it. We can however have perfectly relativistically covariant quantum theories that don't satisfy the principle, and therefore we can't assume that it is generally valid.

You have to be a little careful about what you mean. As I said in another post, the sense in which QFT is covariant is that the evolution of the field operators in the Heisenberg picture is governed by covariant equations of motion. But the field operators are not the full story. In addition to the field operators, there is the state itself, which is not an entity living in spacetime, but some object in Hilbert space. Then there is the phenomenon of unique outcomes of experiments. That is not described by the equations of motion.

In the quantum world, there are objects like left/right shoes and right/left shoes, and if Alice discovers that she got the box with the left/right shoe, then she automatically knows that Bob must have gotten the box with the right/left shoe. I think we just have to accept the peculiar feature of QM that there can be objects that are in superposition.

That's not the distinction that I was bringing up. Whether or not particles can exist in superpositions of states, it is the case in EPR that if Alice and Bob choose aligned filters, and Alice measures her photon to be horizontally polarized, then she can conclude, with 100% certainty, that Bob will also measure his photon to be horizontally polarized. So the possibility of superpositions doesn't seem relevant; we're in the situation in which Alice knows exactly what Bob will measure.

Does the collapse lead to unique outcomes?

I would say that collapse IS the picking of one outcome out of several alternatives.

I would say no, because even if the quantum state collapses to one of the states in a superposition, this state can still be expanded in another basis and looks like a superposition there.

Collapse is usually talked about in terms of measurement outcomes. Before the measurement, there are a number of outcomes that are possible. After the measurement, one of those possibilities is chosen. Following the measurement, the collapse assumption is that the wave function is now in an eigenstate of whatever observable was measured. So in my understanding of collapse, it also involves selection of one outcome out of a number of possibilities.

You're certainly right, that the post-collapse wave function doesn't imply unique outcomes for future measurements.

Just by looking at a state, we can never tell whether it is the state of a collapsed system and encodes the fact that there is something with a definite outcome or it is an unmeasured state in a superposition of possible outcomes of the conjugate variable.

Right. Collapse is hypothesized to be a particular kind of transition between two states. The states themselves don't encode the fact that they were produced by a measurement.

I think we can never interpret the state consistently as a representation of a physical outcome. Instead, it only tells us about the statistical features that we will observe if we conduct an experiment with an identical preparation procedure a thousand times.

Sure, but if the state is an eigenstate of a particular observable, then we can say with certainty what the measurement of that observable will yield. So the claim "If you measure X, you will get Y" seems to be an objective fact about the world.
 
  • #138
atyy said:
But do relativistic QFTs exist? Do we have a gauge invariant and Lorentz invariant regulator for the standard model and Einstein gravity?

I think most people assume that relativistic QFTs are possible, even if we haven't figured out a consistent one.

It would be very strange (but certainly possible) if it turned out that the only way to make QFT consistent is to chuck out SR and assume that there is an absolute notion of time. My intuition is that the problems in understanding what's going on in EPR-type experiments are orthogonal to the problems of making a consistent QFT. But it would certainly be exciting to find out that they are connected.
 
  • #139
atyy said:
But do relativistic QFTs exist? Do we have a gauge invariant and Lorentz invariant regulator for the standard model and Einstein gravity?
Well, then you'd throw out the baby with the bathwater ;-). There are no mathematically rigorous relativistic QFTs for real-world problems. On the other hand, the Standard Model of Elementary Particles is even too successful nowadays. It's also a different question whether you can solve these purely mathematical problems or which physical interpretation you can give to established theories.

On the other hand, perhaps one day somebody will find a mathematical solution to all these mathematical problems which at the same time spares us the quibbles with interpretation. Meanwhile we have to live with the effective models we have.
 
  • #140
vanhees71 said:
Well, then you'd throw out the baby with the bathwater ;-). There are no mathematically rigorous relativistic QFTs for real-world problems. On the other hand, the Standard Model of Elementary Particles is even too successful nowadays. It's also a different question whether you can solve these purely mathematical problems or which physical interpretation you can give to established theories.

On the other hand, perhaps one day somebody will find a mathematical solution to all these mathematical problems which at the same time spares us the quibbles with interpretation. Meanwhile we have to live with the effective models we have.

Well, the other way of thinking is that all our theories are wrong anyway. But let's at least have some that are well defined. So for QED we take lattice QED with fine enough spacing and large but finite volume, and similarly we hope for a lattice construction of the standard model with finite spacing in a large but finite volume. Then that means our current best theories are not truly Lorentz invariant due to the finite lattice spacing, and special relativity is only a very good approximation at low energy, i.e. at wavelengths long compared to the lattice spacing. So there is no true Lorentz invariance, which means there is nothing wrong with a Bohmian interpretation. This doesn't mean that a Bohmian interpretation is right, since there could be other possibilities like Many-Worlds or something else. However, from the lattice point of view and the Wilsonian point of view, this would at least make Bohmian mechanics seem more natural.
 
