Von Neumann QM Rules Equivalent to Bohm?

In summary, the conversation discusses the compatibility between Bohm's deterministic theory and Von Neumann's rules for the evolution of the wave function. It is argued that although there is no true collapse in Bohmian mechanics, there is an effective (illusory) collapse that is indistinguishable from a true collapse. This is due to decoherence, where the wave function splits into non-overlapping branches and the Bohmian particle enters only one of them. However, there is a disagreement about the applicability of Von Neumann's second rule for composite systems.
  • #71
vanhees71 said:
What is a "non-physical collapse"? Either there is something collapsing in the real world when a measurement is made or not!

I'm not sure who you are responding to, but in classical probability, there is a situation analogous to quantum entanglement, and something analogous to collapse, but it's clearly NOT physical. I have a pair of shoes, and randomly select one to put in a box and send to Alice, and the other to put in a box to send to Bob. Before Alice opens her box, she would describe the situation as "There is a 50/50 chance of my getting a left shoe or a right shoe. There is also a 50/50 chance of Bob getting either shoe." After opening the box and finding a left shoe, she would describe the situation as "I definitely have the left shoe, and Bob definitely has the right shoe". So, the probability distribution "collapses" when she opens the box.

But that's clearly not physical. The box contained a left shoe before she opened it, she just didn't know it. So the probabilities reflect her knowledge, not the state of the world.

In an EPR-type experiment, the analogous explanation would be that the photon was polarized at angle A before Alice detected it, she just didn't know it. But that interpretation of what's going on is contradicted by Bell's theorem. To me, talking about "ensembles" and "filtering a sub-ensemble" is another way of talking about hidden variables, so it seems equally inconsistent with Bell's theorem.
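
To make the contrast concrete, here is a minimal Python sketch of the shoe example (the names are illustrative): the "collapse" is nothing but conditioning on Alice's observation.

```python
import random

# Classical shoe example: the pair is fixed at the source; Alice's
# "collapse" is purely epistemic conditioning on what she observes.
trials = 100_000
alice_left = 0
bob_right_given_alice_left = 0

for _ in range(trials):
    # The sender randomly assigns the left shoe to Alice or to Bob.
    alice_shoe, bob_shoe = random.choice([("L", "R"), ("R", "L")])
    if alice_shoe == "L":
        alice_left += 1
        if bob_shoe == "R":
            bob_right_given_alice_left += 1

print(alice_left / trials)                      # ~0.5: Alice's prior for her box
print(bob_right_given_alice_left / alice_left)  # 1.0: her posterior for Bob's box
```

Nothing about Bob's box changes when Alice opens hers; only the distribution she uses for it does.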
 
  • #72
vanhees71 said:
What is a "non-physical collapse"? Either there is something collapsing in the real world when a measurement is made or not! In my opinion there's not the slightest evidence for anything collapsing when we observe something.
The wave function is what collapses in the usual account, but unless you follow a collapse interpretation that treats wave functions as physical entities, you probably see them as mere mathematical tools, and mathematical tools don't "collapse" in any real-world sense. So you are left with the concept of a non-physical collapse: just non-unitary evolution (you talked about it in #29, remember?) that is fully compatible with microcausality ((anti)commutation of spacelike-separated fields).
 
  • #73
stevendaryl said:
Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.
Yes, and thus it's not a physical process named collapse, but merely A's adaptation of the state due to information gained from the outcome of her measurement on her photon. It's epistemic, not ontological, to put it in this philosophical language (which I personally don't like very much, because it's not very sharply defined).
 
  • #74
Ilja said:
How do you obtain a wave function as the initial state? You make a measurement; it has a value; that means you have obtained a state with the corresponding eigenstate as the wave function. Without collapse there would be no method of state preparation in quantum theory.
I associate the initial state (not a wave function, because there's no sensible description of photons as wave functions) with the system under consideration due to the preparation procedure. I don't need a collapse, but a laser and an appropriate birefringent crystal for parametric down conversion. Of course, there's a filtering involved to filter out the entangled photon pairs.

I don't know of any paper deriving this photon-pair production process from first principles. It is of course the experimental evidence that ensures you prepare these states. For the effective theory describing it, see the classic paper

Hong, C. K., Mandel, L.: Theory of parametric frequency down conversion of light, Phys. Rev. A 31, 2409, 1985
http://dx.doi.org/10.1103/PhysRevA.31.2409
 
  • #75
vanhees71 said:
Yes, and thus it's not a physical process named collapse, but merely A's adaptation of the state due to information gained from the outcome of her measurement on her photon. It's epistemic, not ontological, to put it in this philosophical language (which I personally don't like very much, because it's not very sharply defined).

I would say that it's definitely NOT that. I suppose there are different interpretations possible, but the way I read Bell's theorem is that the purely epistemic interpretation of the wave function is not viable.

Once again, I want to point out the implications of the claim that the updating is purely epistemic. Again, we assume that both Alice and Bob have their filters oriented at the same angle, [itex]A[/itex]. We ask what Alice knows about the state of Bob's photon. Immediately before measuring her photon's polarization, the most that Alice knows is: "There is a 50/50 chance that Bob's photon has polarization [itex]A[/itex]". Immediately afterward, she knows "There is a 100% chance that Bob's photon has polarization [itex]A[/itex]".

It seems to me that if you want to say that the change is purely epistemic, then that means that the state of Bob's photon wasn't changed by Alice's measurement, only Alice's information about it changed. Okay, that's fine. But let's go through the reasoning here:
  1. After Alice's measurement, Bob's photon has definite polarization state [itex]A[/itex].
  2. Alice's measurement did not change the state of Bob's photon.
  3. Therefore, Bob's photon had definite polarization state [itex]A[/itex] BEFORE Alice's measurement.
So it seems to me that assuming that measurements are purely epistemic implies that photons have definite (but unknown) polarizations even before they are measured. But that's a "hidden variables" theory of the type ruled out by Bell's theorem.
 
  • #76
stevendaryl said:
So it seems to me that assuming that measurements are purely epistemic implies that photons have definite (but unknown) polarizations even before they are measured. But that's a "hidden variables" theory of the type ruled out by Bell's theorem.
I disagree. What about hidden nonlocal influences?

Alice's measurement makes a local random choice of the direction; this choice is somehow transferred to Bob's particle, which changes its hidden internal state correspondingly. This would be a non-local interaction in reality, of course, but one not excluded by Bell's theorem. And the wave function could nonetheless be purely epistemic.
 
  • #77
Ilja said:
I disagree. What about hidden nonlocal influences?

Yes, you're right. I meant it under the auxiliary assumption of locality.

Alice's measurement makes a local random choice of the direction; this choice is somehow transferred to Bob's particle, which changes its hidden internal state correspondingly. This would be a non-local interaction in reality, of course, but one not excluded by Bell's theorem. And the wave function could nonetheless be purely epistemic.
 
  • #78
Non-physical collapse, the way I see it, is equivalent to a version of decoherence that, contrary to the usual account, cannot be made reversible even in principle, i.e., there is no possibility of recombining system plus environment in any meaningful way. This is what an intrinsic cut in QM is, whether the cut refers to system/apparatus, system/environment, microscopic/macroscopic degrees of freedom in coarse-graining, or probabilistic/deterministic evolution. This should be common to any interpretation that takes single measurements seriously.
 
  • #79
TrickyDicky said:
Non-physical collapse, the way I see it, is equivalent to a version of decoherence that, contrary to the usual account, cannot be made reversible even in principle, i.e., there is no possibility of recombining system plus environment in any meaningful way. This is what an intrinsic cut in QM is, whether the cut refers to system/apparatus, system/environment, microscopic/macroscopic degrees of freedom in coarse-graining, or probabilistic/deterministic evolution. This should be common to any interpretation that takes single measurements seriously.

I disagree. The most detailed treatment of the measurement process known is that of de Broglie-Bohm theory, so to say that it does not take measurements seriously would be unjust. But it does not have a cut.

It has an effective collapse, obtained by inserting the trajectory of the measurement device into the wave function of device and system, which defines the effective wave function of the system. You can do this at every moment, before, after and during the measurement, and obtain a nice picture of a non-Schroedinger evolution for the collapsing effective wave function. But where to make the cut between device and system remains your free choice.
 
  • #80
stevendaryl said:
I would say that it's definitely NOT that. I suppose there are different interpretations possible, but the way I read Bell's theorem is that the purely epistemic interpretation of the wave function is not viable.

Once again, I want to point out the implications of the claim that the updating is purely epistemic. Again, we assume that both Alice and Bob have their filters oriented at the same angle, [itex]A[/itex]. We ask what Alice knows about the state of Bob's photon. Immediately before measuring her photon's polarization, the most that Alice knows is: "There is a 50/50 chance that Bob's photon has polarization [itex]A[/itex]". Immediately afterward, she knows "There is a 100% chance that Bob's photon has polarization [itex]A[/itex]".

It seems to me that if you want to say that the change is purely epistemic, then that means that the state of Bob's photon wasn't changed by Alice's measurement, only Alice's information about it changed. Okay, that's fine. But let's go through the reasoning here:
  1. After Alice's measurement, Bob's photon has definite polarization state [itex]A[/itex].
  2. Alice's measurement did not change the state of Bob's photon.
  3. Therefore, Bob's photon had definite polarization state [itex]A[/itex] BEFORE Alice's measurement.
So it seems to me that assuming that measurements are purely epistemic implies that photons have definite (but unknown) polarizations even before they are measured. But that's a "hidden variables" theory of the type ruled out by Bell's theorem.
No, that's not what's implied, although the "change of state" due to A's measurement is in my opinion indeed purely epistemic. Before any measurement, both A and B simply have unpolarized photons, which however are known, due to the preparation procedure, to be entangled into a biphoton. Let's write down the math, because that helps here. I simplify (somewhat too much) by noting only the polarization states of the photons, and consider the case where both measure the polarization in the same direction.

So initially we have the two-photon polarization state
$$|\Psi_0 \rangle=\frac{1}{\sqrt{2}} (|H V \rangle-|VH \rangle).$$
The single-photon states are obtained by tracing out the other photon, and both Alice and Bob describe theirs by
$$\hat{\rho}_{\text{Alice}}=\frac{1}{2} \mathbb{1}, \quad \hat{\rho}_{\text{Bob}}=\frac{1}{2} \mathbb{1}.$$
Now we assume that Alice measures the polarization of her photon and finds that it is horizontally polarized, and that nothing happens to Bob's photon, which may be detected very far away from Alice at space-like separation, provided QED microcausality holds (which I think is a very weak assumption given the great success of QED). Then the state after this measurement is described (for Alice!) by the pure two-photon polarization state (which I leave unnormalized to conveniently store the probability for this outcome in the representing state ket; the state itself is of course the ray):
$$|\Psi_1 \rangle = |H \rangle \langle H| \otimes \mathbb{1}|\Psi_0 \rangle=\frac{1}{\sqrt{2}} |H V \rangle.$$
This, of course, happens with the probability
$$\|\Psi_1 \|^2=1/2,$$
which was already clear from the reduced state for A's single photon derived above.

Now, what Bob finds is H with probability 1/2 and V with probability 1/2, because he cannot know (faster than allowed by the speed of light via communication with Alice) what Alice has found. So Bob will still describe his single photon's state as ##\hat{\rho}_{\text{Bob}}=1/2 \mathbb{1}##. Nothing has changed for Bob, and according to the usual understanding of relativistic causality he cannot know more about his photon's polarization before measuring it unless he exchanges information about Alice's result, which he can obtain (again using the standard interpretation of relativistic causality) only via some signal from Alice, arriving at most at the speed of light.

Alice knows after her measurement that Bob must find a vertically polarized photon, i.e., the conditional probability given Alice's result yields 100% V polarization for Bob's photon. That this is true can be verified after Alice and Bob exchange their measurement protocols, given that via precise timing it is possible to know which of the photons A and B measure belong to one biphoton. That's why Bob can "post-select" his photons by considering only the roughly 50% of photons for which Alice found an H-polarized photon, and then finds 100% V-polarized ones.

This clearly shows that the notion of state is an epistemic one in this interpretation, because A and B describe the same situation with different states, depending on their knowledge. Note that there can never be contradictions between these two descriptions: if A didn't know that B's photon is entangled in the way described by ##|\Psi_0 \rangle##, A would never be able to say that B finds V polarization with 100% probability given that she has found H polarization. This is what's stated in the linked-cluster theorem, and it of course holds for any local microcausal relativistic QFT. Whether, on the other hand, a theory obeying the linked-cluster theorem (which is the minimum assumption you must make to stay in accordance with usual relativistic causality) must necessarily be such a local microcausal relativistic QFT is not clear to me, and I've not seen any attempt to prove this (see Weinberg, QT of Fields, vol. 1).

As a "minimal interpreter" I stay silent about the question, whether or not there is an influence of Alice's measurement on Bob's photon or not, as already mentioned by Ilja above. I only say this is the case in standard QED, which is by construction a local microcausal relativistic QFT. If there is an extension to QT where you can describe non-local interactions in the sense of a non-local deterministic hidden-variable theory that is consistent with the relativistic space-time structure, I don't know, at least I've not seen any convincing yet in the published literature. But what's for sure a "naive" instantaneous collapse assumption is for sure at odds with the relativistic space-time description and it is, as the above argument (hopefully convincingly) shows, not necessary to understand the probabilistic outcomes according to QT.
 
  • #81
vanhees71 said:
As a "minimal interpreter" I stay silent about the question, whether or not there is an influence of Alice's measurement on Bob's photon or not, as already mentioned by Ilja above. I only say this is the case in standard QED, which is by construction a local microcausal relativistic QFT. If there is an extension to QT where you can describe non-local interactions in the sense of a non-local deterministic hidden-variable theory that is consistent with the relativistic space-time structure, I don't know, at least I've not seen any convincing yet in the published literature. But what's for sure a "naive" instantaneous collapse assumption is for sure at odds with the relativistic space-time description and it is, as the above argument (hopefully convincingly) shows, not necessary to understand the probabilistic outcomes according to QT.

It has nothing to do with a minimal interpretation. No relativistic QFT is consistent with the classical meaning of "relativistic space-time structure" or "Einstein causality". Relativistic QFT does not allow faster-than-light signalling of classical information, and that is the meaning of "local microcausal relativistic QFT".
 
  • #82
Microcausality means that local observables commute at space-like distances, and this implies that there is no action at a distance on the quantum level. In our example, the local interaction of A's photon with her polarizer doesn't instantaneously affect Bob's photon.
 
  • #83
vanhees71 said:
Microcausality means that local observables commute at space-like distances, and this implies that there is no action at a distance on the quantum level. In our example, the local interaction of A's photon with her polarizer doesn't instantaneously affect Bob's photon.
No, QFT simply does not tell us anything about this question.

QFT in the minimal interpretation is not a realistic theory, thus, does not make claims that the polarizer affects instantaneously Bob's photon, but also no claims that it doesn't.
 
  • #84
According to standard QFT the Hamiltonian density commutes with any other local observable at spacelike-separated arguments. Thus a local interaction doesn't affect any observable instantaneously (or at speeds faster than light).

Whether or not there are non-local deterministic theories (which I think is what's meant by "realistic theories" by philosophers) which are as successful as standard QFT, I don't know.
 
  • #85
vanhees71 said:
Now, what Bob finds is H with probability 1/2 and V with probability 1/2, because he cannot know (faster than allowed by the speed of light via communication with Alice) what Alice has found. So Bob will still describe his single photon's state as ##\hat{\rho}_{\text{Bob}}=1/2 \mathbb{1}##. Nothing has changed for Bob,

Okay, but Alice knows what result Bob will get, with 100% certainty, before Bob makes the measurement. So, from her point of view, Bob's information is incomplete. The more complete story is that he will definitely get the same polarization as Alice (assuming their filters are aligned).

So if there is such a thing as "the objective state of Bob's photon", then that state does NOT give a 50/50 chance of passing Bob's filter.

You could deny that there is such a thing as the state of Bob's photon. But that's pretty weird, too. Alice can certainly reason as if Bob's photon is in a definite state of polarization, and that reasoning gives correct results.

I don't see how it makes sense to say that Alice's updating is purely epistemic.
 
  • #86
But that's very common in probability theory. It's just Bayes' formula for conditional probability. This is no more mysterious in QT than in any "classical" probabilistic description.

This example for me makes it very clear that it's purely epistemic, because Alice's measurement updates her information and thus she changes her description of the state of the system. Bob doesn't have this information and thus stays with the description he assigns to the situation due to his knowledge. Physically, nothing has changed for his photon due to Alice's measurement. So the association of the state is determined by the preparation procedure and can vary between Alice and Bob due to the different information available to them about this system. This for me clearly shows the epistemic character of probabilistic descriptions (not restricted to QT; the difference between QT and classical probabilistic models is the strong correlations described by entangled states, which are stronger than ever possible in classical deterministic local theories, as shown by Bell).
 
  • #87
stevendaryl said:
Okay, but Alice knows what result Bob will get, with 100% certainty, before Bob makes the measurement. So, from her point of view, Bob's information is incomplete. The more complete story is that he will definitely get the same polarization as Alice (assuming their filters are aligned).

So if there is such a thing as "the objective state of Bob's photon", then that state does NOT give a 50/50 chance of passing Bob's filter.

You could deny that there is such a thing as the state of Bob's photon. But that's pretty weird, too. Alice can certainly reason as if Bob's photon is in a definite state of polarization, and that reasoning gives correct results.

I don't see how it makes sense to say that Alice's updating is purely epistemic.

Just a little expansion on this:

After Alice measures her photon to be polarized horizontally, she would describe Bob's photon as being in the PURE state [itex]|H\rangle[/itex]. As you say, Bob would describe his own photon as being in the mixed state [itex]\rho = \frac{1}{2}(|H\rangle\langle H| + |V\rangle\langle V|)[/itex]. But this disagreement is completely explained by saying that Bob's photon is REALLY in state [itex]|H\rangle\langle H|[/itex], he just doesn't know it. Density matrices reflect both quantum superpositions and classical uncertainty (due to lack of information).

You (vanhees71) say that Bob's photon is still in the state [itex]\rho = \frac{1}{2}(|H\rangle\langle H| + |V\rangle\langle V|)[/itex], even after Alice finds her photon to be horizontally-polarized. That doesn't make sense to me. There is zero probability of Bob detecting polarization [itex]V[/itex], while his density matrix would say it's 1/2. His density matrix is wrong (or is less informative than the one Alice is using for Bob's photon).
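
The operational difference between the two assignments is a one-line Born-rule computation; here is a minimal NumPy sketch (note it uses stevendaryl's convention of aligned outcomes, e.g. a ##|HH\rangle+|VV\rangle## source, rather than the singlet of #80):

```python
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

rho_alice_assigns = np.outer(H, H)  # Alice's pure-state assignment for Bob's photon
rho_bob_assigns = 0.5 * np.eye(2)   # Bob's own maximally mixed assignment

def prob_V(rho):
    # Born rule: P(V) = <V| rho |V>
    return float(V @ rho @ V)

print(prob_V(rho_alice_assigns))  # 0.0: zero chance that Bob detects V
print(prob_V(rho_bob_assigns))    # 0.5: the less informative prediction
```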
 
  • #88
Ilja said:
I disagree. The most detailed treatment of the measurement process known is that of de Broglie-Bohm theory, so to say that it does not take measurements seriously would be unjust. But it does not have a cut.

It has an effective collapse, obtained by inserting the trajectory of the measurement device into the wave function of device and system, which defines the effective wave function of the system. You can do this at every moment, before, after and during the measurement, and obtain a nice picture of a non-Schroedinger evolution for the collapsing effective wave function. But where to make the cut between device and system remains your free choice.
It does have a cut in the sense I described above: pilot wave/particle trajectories.
 
  • #89
vanhees71 said:
But that's very common in probability theory. It's just Bayes' formula for conditional probability. This is no more mysterious in QT than in any "classical" probabilistic description.

No, it's not the same. In classical probability, there is a distinction between what is true and what my knowledge of the truth is. Someone randomly puts a left shoe into one box and a right shoe into the other box. One box is sent to Alice, and the other box is sent to Bob. When Alice opens her box, she finds a left shoe. She updates her epistemic probabilities for Bob's box to a 100% chance of a right shoe. There's clearly no nonlocal influence going on. HOWEVER, Alice knows that Bob actually had a right shoe BEFORE she opened the box. She just didn't know it until she opened her box.

In the EPR experiment, Alice finds out that Bob's photon has polarization H. If it's purely epistemic updating of Alice's information, that means that Bob's photon had polarization H BEFORE she measured her photon.

You can't have it both ways. If it's purely epistemic, then the objective state of the photon cannot be changed by Alice's updating. If the objective state after updating is H, then it must have been H beforehand. I don't see how it could be otherwise.

I guess you could say that the state H that Alice deduces for Bob's photon isn't objective, it's subjective, for Alice only. But that's hard to maintain. Would you then say that the polarization state of a photon is NEVER objective?

As Einstein, Podolsky and Rosen said, if you can predict the result of a future measurement with 100% accuracy, it sure seems like it's something objective.
 
  • #90
vanhees71 said:
Microcausality means that local observables commute at space-like distances, and this implies that there is no action at a distance on the quantum level. In our example, the local interaction of A's photon with her polarizer doesn't instantaneously affect Bob's photon.

No, that is wrong (well, this particular quote is ambiguous, but I'm taking it in the context of your earlier remarks on EPR). The commutation of spacelike-separated observables says that there is no faster-than-light transfer of classical information. That is a different issue from Einstein causality, which means that the nonlocal correlations are entirely explained by each event having a cause in its past light cone. Relativistic quantum field theory means that Einstein causality is either empty or false.
 
  • #91
I think vanhees' point is that the statistical information about the 100% correlation is already contained in the quantum state ##\left|\Psi\right>=\left|HV\right>-\left|VH\right>## and one doesn't need to collapse it to extract that information. ##\left<HH|\Psi\right>=\left<VV|\Psi\right>=0## (and so on). We just prepare the state ##\left|\Psi\right>## and repeat the experiment a thousand times, and the statistics will agree with the QM predictions.

Also, I think we should be careful with the words correlation and causation. QM predicts non-local correlation. That's different from non-local causation. Correlation doesn't imply causation, even in the case of 100% correlation. This is just a logical leap that cannot be made. It is also true that "the sun will rise tomorrow" will be 100% correlated with "humans have two legs" for example, but that doesn't mean that one causes the other or that there is a common cause.
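
The quoted inner products are a short check; a minimal sketch, using the normalized singlet for definiteness:

```python
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

# <HH|Psi> = <VV|Psi> = 0: the perfect (anti)correlation statistics are
# already encoded in the prepared state; no collapse is needed to extract them.
print(np.vdot(np.kron(H, H), psi))          # 0.0
print(np.vdot(np.kron(V, V), psi))          # 0.0
print(abs(np.vdot(np.kron(H, V), psi))**2)  # 0.5
print(abs(np.vdot(np.kron(V, H), psi))**2)  # 0.5
```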
 
  • #92
rubi said:
I think vanhees' point is that the statistical information about the 100% correlation is already contained in the quantum state ##\left|\Psi\right>=\left|HV\right>-\left|VH\right>## and one doesn't need to collapse it to extract that information. ##\left<HH|\Psi\right>=\left<VV|\Psi\right>=0## (and so on). We just prepare the state ##\left|\Psi\right>## and repeat the experiment a thousand times, and the statistics will agree with the QM predictions.

Also, I think we should be careful with the words correlation and causation. QM predicts non-local correlation. That's different from non-local causation. Correlation doesn't imply causation, even in the case of 100% correlation. This is just a logical leap that cannot be made. It is also true that "the sun will rise tomorrow" will be 100% correlated with "humans have two legs" for example, but that doesn't mean that one causes the other or that there is a common cause.

The part of vanhees71's point that is wrong is that he is using the EPR objection to collapse. EPR's version of causality is not consistent with quantum field theory.

Furthermore, as long as we use the Schroedinger picture, the collapse is how we extract the information that is contained in the wave function in order to predict the nonlocal correlations.

Of course, one doesn't have to accept the wave function or the collapse as real, so one may say that Einstein causality is empty in quantum field theory. If one accepts the wave function and collapse as real, then Einstein causality is violated. There is no choice of saying that quantum field theory fulfills Einstein causality.
 
  • #93
vanhees71 said:
Whether or not there are non-local deterministic theories (which I think is what's meant by "realistic theories" by philosophers) which are as successful as standard QFT, I don't know.
First, no: classical stochastic theories are also realistic, and Nelsonian stochastics is an example. Then, the first example of a deterministic theory for the EM field was given already in Bohm's original paper. And, given that such theories are, as interpretations of QT in the particular domain, equivalent to QT in this domain, there is no difference in success between a QFT and a QFT in a realistic interpretation.
 
  • #94
atyy said:
1) The part of vanhees71's point that is wrong is that he is using the EPR objection to collapse. EPR's version of causality is not consistent with quantum field theory.

2) Furthermore, as long as we use the Schroedinger picture, the collapse is how we extract the information that is contained in the wave function in order to predict the nonlocal correlations.

3) Of course, one doesn't have to accept the wave function or the collapse as real, so one may say that Einstein causality is empty in quantum field theory. If one accepts the wave function and collapse as real, then Einstein causality is violated. There is no choice of saying that quantum field theory fulfills Einstein causality.

Ad 1) What's wrong?

Ad 2) You cannot argue with a specific picture of time evolution, because all are equivalent (modulo mathematical quibbles a la Haag's theorem ;-)).

Ad 3) This I don't understand. The usual local microcausal QFTs are precisely constructed such that they fulfill Einstein causality (among other things, it makes the S-matrix with its time-ordered products of field operators manifestly covariant wrt. special orthochronous Poincare transformations). Last but not least, if the collapse isn't considered real, it's just a sloppy abbreviation for what the minimal interpretation states more carefully, and there's nothing to argue about anymore. Then all our debates are pretty empty ;-).
 
  • #95
Ilja said:
First, no, classical stochastic theories are also realistic, and Nelsonian stochastics is an example. Then, the first example of a local deterministic theory for the EM field was given already in Bohm's original paper. And, given that such theories are, as interpretations of QT in the particular domain, equivalent to QT in this domain, there is no difference in success betwenn a QFT and a QFT in a realistic interpretation.
Ok, as you well know, I don't understand what philosophers mean by "realistic", particularly as it seems as if there are as many notions of this word as there are philosophers. Then, if everything is "solved" with Bohm's original paper, why is it then always stated, also by followers of the Bohmian interpretation, that there are problems with Bohm and relativistic QFT?
 
  • #96
stevendaryl said:
No, it's not the same. In classical probability, there is a distinction between what is true and what my knowledge of the truth is. Someone randomly puts a left shoe into one box and a right shoe into the other box. One box is sent to Alice, and the other box is sent to Bob. When Alice opens her box, she finds a left shoe. She updates her epistemic probabilities for Bob's box to be 100% chance of a right shoe. There's clearly no nonlocal influence going on. HOWEVER, Alice knows that Bob actually had a right shoe BEFORE she opened the box. She just didn't know it until she opened her box.
But in the quantum case A also knew beforehand that the two photons are in this entangled state. That is as good as in the classical example. The only difference is that in classical physics you can't have such correlations. It's clear that the single-photon polarizations are completely undetermined before A's measurement according to standard QT, while the single-shoe states in the classical example are always definite, but there's no difference concerning a collapse between the two ensembles. In both cases the probabilities describe the knowledge of the observers about the system, and that's updated after new information is gained.
 
  • #97
atyy said:
The part of vanhees71's point that is wrong is that he is using the EPR objection to collapse. EPR's version of causality is not consistent with quantum field theory.

Furthermore, as long as we use the Schroedinger picture, the collapse is how we extract the information that is contained in the wave function in order to predict the nonlocal correlations.

Of course, one doesn't have to accept the wave function or the collapse as real, so one may say that Einstein causality is empty in quantum field theory. If one accepts the wave function and collapse as real, then Einstein causality is violated. There is no choice of saying that quantum field theory fulfills Einstein causality.
As I see it, the problem is the following: we have a state ##\left|\Psi\right> = \left|HV\right>-\left|VH\right>##. This state contains all the information that is obtained in an EPR experiment, so a collapse is not needed to explain the results. However, we also know that if we measure any of the same photons again, we will not get the same correlations again. Therefore, after the measurement, the state cannot be ##\left|\Psi\right>## anymore, but needs to be something different. This is the real reason why we usually assume that the system has collapsed into ##\left|HV\right>## or ##\left|VH\right>##, and this would indeed be a non-local interaction. However, it doesn't need to be so. There is another option that is only available if we are willing to include the measurement devices in the description: the local interaction with the measurement device could have made the correlations spill over into some atoms of the measurement device, so the correlations are still there, but not easily accessible. One only needs local interactions for this to happen. I'm convinced that if we could ever control all the degrees of freedom of the measurement apparatuses, we could recover the information about the correlations. It's basically analogous to the quantum eraser.
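
A toy model of this "spill-over" (an illustration only, with the apparatus reduced to a single qubit and a CNOT standing in for the local measurement interaction):

```python
import numpy as np

# Toy pre-measurement: a CNOT copies the photon's H/V basis state into an
# apparatus qubit via a purely local, unitary (hence reversible) interaction.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # photon in (|H> + |V>)/sqrt(2)
ready = np.array([1.0, 0.0])              # apparatus in its "ready" state

state = CNOT @ np.kron(plus, ready)       # entangled photon + apparatus

# The photon's reduced state is now maximally mixed (decoherence) ...
rho = np.outer(state, state).reshape(2, 2, 2, 2)
rho_photon = np.einsum('ikjk->ij', rho)
print(rho_photon)                         # 0.5 * identity

# ... but the correlations are relocated, not destroyed: the CNOT is its
# own inverse, so the interaction can in principle be undone.
print(CNOT @ state)                       # back to kron(plus, ready)
```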
 
  • #98
vanhees71 said:
The usual local microcausal QFTs are precisely constructed such that they fulfill Einstein causality (among other things it makes the S-matrix with its time-ordered products of field operators manifestly covariant wrt. special orthochronous Poincare transformations).
No, it does not care at all about Einstein causality - this would require caring about the EPR argument - it cares only about correlations.

vanhees71 said:
Last but not least, if the collapse isn't considered real, it's just a sloppy abbreviation for what the minimal interpretation states more carefully, and there's nothing to argue about anymore. Then all our debates are pretty empty ;-).
"The collapse" is, of course, not the point, the point which proves nonlocality is the violation of Bell's inequality. And this violation exists in QFT too, and that means, QFT is not compatible with the EPR criterion of reality, thus, with Einstein's understanding of causality.
 
  • #99
vanhees71 said:
Ok, as you well know, I don't understand what philosophers mean by "realistic", particularly as it seems as if there are as many notions of this word as there are philosophers. Then, if everything is "solved" with Bohm's original paper, why is then always stated, also by followers of the Bohmian interpretation, that there are problems with Bohm and relativistic QFT?
I said EM theory is presented in Bohm's original paper, not that everything is solved in that paper.

Then, the problem is, of course, that dBB theory requires a preferred frame. The interpretation of relativity we are forbidden to talk about does not have a problem with this, but to talk about it is forbidden not only here, so some people indeed think this is a problem.

The first proposal for fermion fields I know about is from Bell; in my paper http://arxiv.org/abs/0908.0591 I obtain equations for a pair of Dirac fermions from those of a scalar field with broken symmetry, which reduces Bohmian versions of fermions (as long as they appear in pairs) to the unproblematic case of scalar fields, which can use the same scheme used by Bohm for the EM field. For gauge fields, one should not use the Gupta-Bleuler approach with an indefinite Hilbert space, but the older Fermi-Dirac one; what remains is unproblematic too. And even if some part of it were problematic, there is a less beautiful but possible variant where one has a dBB trajectory for only part of the degrees of freedom.
 
  • #100
vanhees71 said:
Ad 1) What's wrong?

Ad 2) You cannot argue with a specific picture of time evolution, because all are equivalent (modulo mathematical quibbles a la Haag's theorem ;-)).

Ad 3) This I don't understand. The usual local microcausal QFTs are precisely constructed such that they fulfill Einstein causality (among other things, it makes the S-matrix with its time-ordered products of field operators manifestly covariant wrt. special orthochronous Poincare transformations). Last but not least, if the collapse isn't considered real, it's just a sloppy abbreviation for what the minimal interpretation states more carefully, and there's nothing to argue about anymore. Then all our debates are pretty empty ;-).

Yes, of course one does not privilege a particular picture of time evolution. However, one also cannot disallow it. So if one allows the Schroedinger picture, there is collapse.

The place I think you are wrong is that Einstein causality is not the causality that is fulfilled by explicit construction in quantum field theory. Here by Einstein causality, I mean the causality in EPR and in classical special relativity, in which the cause of an event is in its past light cone - I am using this definition of Einstein causality because I think this is what you are using by bringing up EPR. The "causality" in quantum field theory is a different thing from Einstein causality - it forbids faster-than-light transfer of classical information. So your mistake is that you are confusing two types of causality - signal causality (which is present in relativistic QFT) and Einstein causality (like EPR, which is not present in relativistic QFT).
 
  • #101
@atyy: Can you explain what you mean by Einstein causality and how Bell tests violate it?

If Einstein causality says that non-local 100% correlations should not be allowed if there is no common cause, then I would reply that correlation doesn't imply causation, and therefore it wouldn't be a good definition of causality in the first place.
 
  • #102
rubi said:
@atyy: Can you explain what you mean by Einstein causality and how Bell tests violate it?

If Einstein causality says that non-local 100% correlations should not be allowed if there is no common cause, then I would reply that correlation doesn't imply causation, and therefore it wouldn't be a good definition of causality in the first place.

Einstein causality says that each event in a nonlocal correlation has causes that are entirely in its past light cone. So let's say we have a variable representing the source of the entangled photons ##\lambda##, and we have Alice's measurement setting ##S## and measurement outcome ##A##, and Bob's measurement setting ##T## and measurement outcome ##B##. We also assume that Alice and Bob can choose their settings independently of how the source is prepared, and independently of each other and the measurement results. Assuming Einstein causality, Alice's outcome ##A## depends only on ##\lambda## and ##S##, since these are in the past light cone of A. Similarly, assuming Einstein causality, Bob's outcome ##B## depends only on ##\lambda## and ##T##, since these are in the past light cone of B. The assumption of Einstein causality can be given pictorially (from http://arxiv.org/abs/1208.4119):

[Figure: causal diagram (Fig. 19 of Wood and Spekkens, arXiv:1208.4119) in which Alice's outcome ##A## depends only on ##S## and ##\lambda##, and Bob's outcome ##B## depends only on ##T## and ##\lambda##.]


Hence:

##P(A,B|S,T,\lambda) = P(A|S,\lambda)P(B|T,\lambda)## ["separability"]

However, assuming quantum mechanics, the Bell argument shows that separability cannot be fulfilled. Regarding this not being a good definition of causality, that is fine, which is why I said that in quantum mechanics Einstein causality is either empty or violated.
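
As a numerical footnote, the incompatibility shows up in the CHSH combination; a minimal sketch, assuming the standard singlet correlation ##E(a,b)=-\cos 2(a-b)## and angles chosen to maximize the violation:

```python
import numpy as np

# Quantum prediction for the singlet: E(a, b) = -cos(2(a - b)) for
# polarization analyzers at angles a and b. Any model satisfying the
# separability condition P(A,B|S,T,lambda) = P(A|S,lambda) P(B|T,lambda)
# obeys |CHSH| <= 2.
def E(a, b):
    return -np.cos(2.0 * (a - b))

a1, a2 = 0.0, np.pi / 4            # Alice's two settings
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's two settings

chsh = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(chsh))  # ~2.828 = 2*sqrt(2) > 2: separability cannot hold
```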
 
  • #103
rubi said:
Can you explain what you mean by Einstein causality and how Bell tests violate it?
If Einstein causality says that non-local 100% correlations should not be allowed if there is no common cause, then I would reply that correlation doesn't imply causation, and therefore it wouldn't be a good definition of causality in the first place.
Causality is something we cannot observe - only correlations are observable. Thus, we always need some theoretical principles to draw conclusions about causality.

One such theoretical principle is Reichenbach's principle of common cause: every observable correlation has to have some causal explanation, and for causal explanations we have the following possibilities: either one of the things which are correlated is the cause of the other, or both have some common cause. A principle of this type is necessary because we need some way to conclude that a causal relation exists. If you reject the common cause principle, you can forget about causality completely; it is no longer a meaningful notion.

But Reichenbach's principle is sufficient to prove Bell's inequality - it is, in this sense, strong enough to give EPR, because Reichenbach's principle tells us there is a common cause or one is the cause of the other, and the latter two possibilities (direct causation in either direction) are excluded by Einstein causality (but not by the interpretation of relativity we are forbidden to talk about, SCNR).

So, even if in general correlation does not imply causation, we have a method to infer causation - Reichenbach's principle. And we need it, else causation would be meaningless.
 
  • #105
atyy said:
Einstein causality says that each event in a nonlocal correlation has a cause that is entirely in its past light cone. So let's say we have a variable representing the source of the entangled photons ##\lambda##, and we have Alice's measurement setting ##a## and measurement outcome ##A##, and Bob's measurement setting ##b## and measurement outcome ##B##. We also assume that Alice and Bob can choose their settings independently of how the source is prepared, and independently of each other and the measurement results. Assuming Einstein causality, Alice's outcome ##A## depends only on ##\lambda## and ##a##, since these are in the past light cone of A. Similarly, assuming Einstein causality, Bob's outcome ##B## depends only on ##\lambda## and ##b##, since these are in the past light cone of B. Hence:

##P(A,B|a,b,\lambda) = P(A|a,\lambda)P(B|b,\lambda)## ["separability"]

However, assuming quantum mechanics, the Bell argument shows that separability cannot be fulfilled.
Ok, I would have called that Bell's criterion, though. It's of course true that QM and QFT violate Bell's inequalities, but I don't see how that is relevant to the question whether there is a collapse or not. (After all, you can't cure the violation by introducing a collapse either.)

Ilja said:
Causality is something we cannot observe - only correlations are observable. Thus, we always need some theoretical principles to draw conclusions about causality.

[...]

So, even if in general correlation does not imply causation, we have a method to infer causation - Reichenbach's principle. And we need it, else causation would be meaningless.
I don't agree that we need an all-encompassing criterion for causality. We should admit that there is just no way to infer causation from correlation. That is what an honest scientist should do, in my opinion. Now, on the other hand, we can of course formulate theories about the world in a mathematical framework. And then it makes sense to ask: "Does the theory predict X only if there previously had been Y?" This is a question that we can analyze mathematically, and we could come to a definite answer. So it makes sense to talk about causality only given a particular theory.
 
