Entanglement: spooky action at a distance

In summary, entanglement is often described as "spooky action at a distance" because measurements on entangled particles yield correlations that are inconsistent with the idea that the outcomes are independent and random. The correlation follows a mathematical formula derived from quantum mechanics. While it may seem that this could be used for faster-than-light communication, there is no evidence to support this, and it is likely just a byproduct of the experimental design.
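
For reference, the formula in question: for an ideal spin-singlet (or polarization-entangled) pair, quantum mechanics predicts the correlation E(a,b) = -cos(a-b), which yields a CHSH value of 2*sqrt(2), above the bound of 2 that any local hidden-variable account must satisfy. A minimal sketch of that comparison (Python; the angle choices are the standard textbook ones):

```python
import numpy as np

def quantum_correlation(a, b):
    """QM prediction for the spin-singlet correlation: E(a, b) = -cos(a - b)."""
    return -np.cos(a - b)

def chsh(E, a1, a2, b1, b2):
    """CHSH combination S = E(a1,b1) - E(a1,b2) + E(a2,b1) + E(a2,b2)."""
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Standard angle choices that maximize the quantum value.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = chsh(quantum_correlation, a1, a2, b1, b2)
print(f"|S| = {abs(S):.3f}")  # ~2.828 = 2*sqrt(2); any local hidden-variable model gives |S| <= 2
```
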
  • #71


ThomasT said:
This makes sense to me. Nevertheless, it would be nice to know if experimental violations of Bell inequalities have any physical meaning -- and, if so, what. Might it be that there's no way to ascertain what the physical meaning of an EPR-Bell experiment is? I think this is possible, maybe even likely, and, if so, it would seem to reinforce the Copenhagen approach to interpreting the formalism and application of the quantum theory. (ie., we can't possibly know the truth of a deep quantum reality, so there's no scientific point in talking about it)
Oh absolutely. Don't get me wrong, I'd love a physical explanation as well, or proof that none is possible. But the starting point has to be the math, not the philosophy. A lot of these interpretive endeavors tend to drift a long way from science.

At the risk of opening another flame war, this is the reason I prefer MWI because it throws out assumptions that are necessitated only by our subjective perceptions (wavefunction collapse) rather than by objective evidence. That should be the starting point. Then let's find where it leads.
 
  • #72


Originally Posted by peter0302:

We have to see Bell's inequality for what it is: the consequence of an assumption which Aspect and others have proven wrong. While we all agree on what that assumption is mathematically, we can't agree on what it means physically. But at the very least, we should be focusing on the assumption, and not any author's (including Bell's own) editorial comments or beliefs regarding it.

--------------------------------------------------------

I agree with the above. And we should all recall that EPR started the debate with their terms and definitions, especially that there should be a "more complete" specification of the system possible (or else the reality of one particle would be dependent on the nature of the measurement done on another). It does not appear that a more complete specification of the system is possible regardless of what assumption you end up rejecting.
 
  • #73


Chinese,

It does not appear that a more complete specification of the system is possible regardless of what assumption you end up rejecting.


That's blatantly false (if I understand you correctly), as deBB and GRW and stochastic mechanical theories have proven. You may not like these more complete specifications of the system for various philosophical reasons, but it is dishonest to deny that they exist and are empirically equivalent to the standard formalism.
 
  • #74


DrChinese said:
Originally Posted by peter0302:

We have to see Bell's inequality for what it is: the consequence of an assumption which Aspect and others have proven wrong. While we all agree on what that assumption is mathematically, we can't agree on what it means physically. But at the very least, we should be focusing on the assumption, and not any author's (including Bell's own) editorial comments or beliefs regarding it.

In fairness, the claim that Aspect and others have proven that assumption wrong is not true either. Aspect's original experiments were heavily flawed with various loopholes, and it was quite easy to account for those results with locally causal hidden-variable models. Also, not even Zeilinger or Kwiat would claim the Bell tests are conclusive today, because they acknowledge that no experiment has yet been done that simultaneously closes the detection-efficiency loophole AND the separability loophole AND cannot be equally well explained by LCHV models like Santos-Marshall stochastic optics and the Fine-Maudlin prism models of GHZ correlations.
 
  • #75


peter0302 said:
I'd love a physical explanation as well, or proof that none is possible. But the starting point has to be the math, not the philosophy. A lot of these interpretive endeavors tend to drift a long way from science.
Agreed. But I think it's worth the effort to sort out the semantics of the interpretations.
peter0302 said:
At the risk of opening another flame war, this is the reason I prefer MWI because it throws out assumptions that are necessitated only by our subjective perceptions (wavefunction collapse) rather than by objective evidence. That should be the starting point. Then let's find where it leads.
Wavefunction collapse or reduction is the objective dropping of terms that don't correspond to a recorded experimental result. That is, once a qualitative result is recorded, then the wavefunction that defined the experimental situation prior to that is reduced to the specification of the recorded result.

If one reifies the wavefunction, then one is saddled with all sorts of (in my view) unnecessary baggage -- including, possibly, adherence to MWI, or MMI, or some such interpretation. :smile:
 
  • #76


DrChinese said:
I agree with the above. And we should all recall that EPR started the debate with their terms and definitions, especially that there should be a "more complete" specification of the system possible (or else the reality of one particle would be dependent on the nature of the measurement done on another). It does not appear that a more complete specification of the system is possible regardless of what assumption you end up rejecting.
Exactly. And so instead, what we have are interpretations which are not "more complete" so much as they are "more complicated" - since they (as of yet) make no different or more accurate predictions than the orthodox model EPR criticized.

Not that I don't think the research is worthwhile. I certainly do. I'm still hopeful there is something more complete, but I doubt any of the interpretations we have now (at least in their current forms) are going to wind up winning in the end.

Wavefunction collapse or reduction is the objective dropping of terms that don't correspond to a recorded experimental result. That is, once a qualitative result is recorded, then the wavefunction that defined the experimental situation prior to that is reduced to the specification of the recorded result.
Yes, but what's your physical (objective, real) justification for doing so? Plus, it's not defined objectively in Bohr's QM. It's done differently from experiment to experiment, and no one really agrees whether a cat can do it or not, let alone how.

Were you not the one who wanted a physical explanation? :)
 
  • #77


DrChinese said:
1. We agree on this point, and that was my issue.

2. Thank you for these references, there are a couple I am not familiar with and would like to study.

4. Repeating that I was not trying to advance the cause of "non-realism" other than showing it is one possibility. I agree that non-local solutions should be viable. In a lot of ways, they make more intuitive sense than non-realism anyway.

BTW, my point about GHZ was not that it proved non-realism over non-locality. It is another of the no-go proofs - of which there are several - which focus on the realism assumption. These proofs are taken in different ways by the community. Since we don't disagree on the main point, we can drop this particular sidebar.



1. OK.

2. You're welcome.

4. I have not yet seen any evidence that local non-realism is a viable explanation of Bell inequality violations. I challenge you to come up with a mathematical definition of non-realist locality. And I challenge you to come up with a measurement theory based on solipsism that solves the measurement problem, and allows you to completely derive the quantum-classical limit.

About GHZ, you said it is another no-go proof that focuses on the realism assumption. That's just not true. It focuses just as much on locality and causality as Bell's theorem does.
 
  • #78


ThomasT said:
"The Cosmic Code: quantum physics as the language of nature"


Since the experimental designs seem to have (in my view anyway) common emission cause written all over them (and since the whole topic is open to speculation) I would rank that as more plausible than any of the other more exotic explanations for the correlations.


I take it you didn't like my polariscope analogy? (I really thought I had something there. :smile:)
I read what I could of Maudlin's first chapter at Google books. Nothing new or especially insightful there. I've read Price's book -- didn't like it. But thanks for the references and nice discussion with DrChinese et al.
I don't like the nonlocal explanations. Too easy. I'll continue, for the time being, working under the assumption that something (or things) about the physical meaning of Bell's theorem and Bell inequalities is being misinterpreted or missed.


Thanks for the references.

<< Since the experimental designs seem to have (in my view anyway) common emission cause written all over them (and since the whole topic is open to speculation) I would rank that as more plausible than any of the other more exotic explanations for the correlations. >>

At first I thought you meant the experimental designs lend themselves to detection loopholes and such. But then you say

<< This makes sense to me. Nevertheless, it would be nice to know if experimental violations of Bell inequalities have any physical meaning -- and, if so, what. Might it be that there's no way to ascertain what the physical meaning of an EPR-Bell experiment is? >>

So you clearly believe that Aspect and others have confirmed Bell inequality violations. You also say that

<< I don't like the nonlocal explanations. Too easy. I'll continue, for the time being, working under the assumption that something (or things) about the physical meaning of Bell's theorem and Bell inequalities is being misinterpreted or missed. >>

So, honestly, it just sounds to me like you have refused to understand Bell's theorem for what it is, or have been told what it is, and are just in denial about it. In that case, no one can help you other than to say that you seem to be letting your subjective, intuitive biases prevent you from learning about this subject. And that won't get you anywhere.
 
  • #79


peter0302 said:
Interesting idea Count. I did something like this (I was a computer science guy in a previous life).

It is impossible using a standard - non-quantum - computer to simulate the results of EPRB experiments without utilizing both polarizer settings in calculating the odds of any particular photon passing.

The "Does Photon Pass Polarizer x" function simply cannot be written without reference to the other polarizer while still obtaining the quantum results.

If you try to do something elaborate - say, in the "generate entangled photons" function, you pre-program both of them for every conceivable polarizer angle - you come close to the quantum results, but not perfectly.

In order to reproduce the quantum results, you have to either:
1) allow the two photons to "know" the polarizer settings before they've reached them (some kind of superdeterminism) and agree ahead of time on how they're going to behave; or
2) check to see whether the twin has reached its polarizer yet; if not, just go with 50/50. If it has, behave in a complementary way (non-locality).

The third option would be some kind of many-worlds simulation where we let objects continue to evolve in superposition until someone observes both, but I thought that a little too complicated to code.

Yes, the code when interpreted as describing a classical world will describe it as being non-local. But when you run the computer, the internal state of the computer will evolve in a local deterministic way.
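
For what it's worth, here is a minimal sketch of the two kinds of model peter0302 describes (Python; the function names and the specific Malus-law hidden-variable strategy are illustrative choices, not his actual code): a local model in which each photon decides using only a shared hidden polarization and its own polarizer setting, and a "cheating" model that consults the other polarizer's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_pair(theta_a, theta_b):
    """Local model: both photons carry the same hidden polarization lam, and each
    polarizer decision uses only lam and its OWN setting (Malus-law odds)."""
    lam = rng.uniform(0.0, np.pi)
    pass_a = rng.random() < np.cos(theta_a - lam) ** 2
    pass_b = rng.random() < np.cos(theta_b - lam) ** 2
    return pass_a, pass_b

def cheating_pair(theta_a, theta_b):
    """'Cheating' model: photon B's decision refers to polarizer A's setting,
    which reproduces the quantum matching rate cos^2(theta_a - theta_b)."""
    pass_a = rng.random() < 0.5
    if pass_a:
        pass_b = rng.random() < np.cos(theta_a - theta_b) ** 2
    else:
        pass_b = rng.random() < np.sin(theta_a - theta_b) ** 2
    return pass_a, pass_b

def match_rate(model, theta_a, theta_b, n=50_000):
    """Fraction of pairs where both photons pass or both are blocked."""
    return sum(a == b for a, b in (model(theta_a, theta_b) for _ in range(n))) / n

for d in (0.0, np.pi / 8, np.pi / 4):
    print(f"diff={d:.3f}  local={match_rate(local_pair, 0.0, d):.3f}"
          f"  cheating={match_rate(cheating_pair, 0.0, d):.3f}"
          f"  QM target={np.cos(d) ** 2:.3f}")
```

With these Malus-law odds the local model tops out around 0.75 at zero angle difference instead of 1 - the kind of "close, but not perfectly" result peter0302 mentions - while only the version that consults both settings tracks the cos^2 curve at every angle.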
 
  • #80


Well, sure. Is your point that we could be living in a computer simulation?

Even if so, locality is a supposed rule of the simulation. If the simulation is comparing both polarizer angles, that's cheating. :)
 
  • #81


I saw this rather late, sorry for my late response...

ThomasT said:
vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

What I said was that the correlations as seen in an Aspect-like experiment, and as predicted by quantum theory, cannot be obtained by "looking simply at a common cause". That is, you cannot set up a classical situation where you look at a common property of something, and obtain the same correlations as those from quantum mechanics. This is because classically, we only know of two ways to have statistical correlations C(A,B): one is that A causes B (or B causes A), and the other is that A and B have a common cause C. Example of the first:
- A is "setting of the switch" and B is "the light is on"
clearly, because setting the switch causes the light to be on or off, we will find a correlation between both.
Example of the second:
- you drive a Ferrari and you have a Rolex.
It is not true that driving Ferraris makes you have a Rolex, or that putting on a Rolex makes you drive a Ferrari. So there's no "A causes B" or "B causes A". However, being extremely rich can cause you to buy a Ferrari as well as a Rolex. So there was a common cause: "being rich".

Well, Bell's theorem is a property that holds for the second kind of correlations.

So the violation of that theorem by observed or quantum-mechanically predicted correlations means that it cannot be a correlation that can be explained entirely by "common cause".

And yet, in the hallmark 1984 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

Yes, of course. That's because we have to look for *entangled* photons, which NEED to have a common source. But the violation of Bell's theorem simply means that it is not a "common cause of the normal kind", like in "being rich".

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (ie. FTL transmissions) can't be what is producing the correlations.

That's wrong: Bohmian mechanics explicitly shows how action-at-a-distance can solve the issue. In fact, that's not surprising either. If you have action at a distance (if A can cause B or vice versa), then all possible correlations are allowed, and there's no "Bell theorem" being contradicted. The problem with the EPR kind of setups is that the "causes" (the choices of measurement one makes) are space-like separated events, so action-at-a-distance would screw up relativity. So people (like me) sticking to relativity refuse to consider that option. But it is a genuine option: it is actually by far the "most common sense" one.
 
  • #82


Maaneli said:
Thanks for the references.

<< Since the experimental designs seem to have (in my view anyway) common emission cause written all over them (and since the whole topic is open to speculation) I would rank that as more plausible than any of the other more exotic explanations for the correlations. >>

At first I thought you meant the experimental designs lend themselves to detection loopholes and such.
No, I had the emission preparations and the data matching mechanisms in mind. So, even if all the loopholes were conclusively closed and the inequalities were conclusively violated experimentally, I'd still think that the experimental designs have common emission cause written all over them.

Maaneli said:
But then you say

<< This makes sense to me. Nevertheless, it would be nice to know if experimental violations of Bell inequalities have any physical meaning -- and, if so, what. Might it be that there's no way to ascertain what the physical meaning of an EPR-Bell experiment is? >>

So you clearly believe that Aspect and others have confirmed Bell inequality violations.

Well, I did believe that, but now I suppose I'll have to look a bit closer at the loopholes you've referred to.

But it won't matter if the loopholes are all closed and the violations are conclusive. One can talk all one wants about nonlocality -- say, a seamless, nonparticulate medium that can't possibly be detected -- but what would be the point?

Maaneli said:
You also say that

<< I don't like the nonlocal explanations. Too easy. I'll continue, for the time being, working under the assumption that something (or things) about the physical meaning of Bell's theorem and Bell inequalities is being misinterpreted or missed. >>

So, honestly, it just sounds to me like you have refused to understand Bell's theorem for what it is, or have been told what it is, and are just in denial about it.

I think that those who understand Bell's theorem as leading to the conclusion that there must be FTL physical propagations in nature might have missed some important subtleties regarding its application and ultimate meaning.

The locality assumption is that events at A cannot directly causally affect events at B during any given coincidence interval. Quantitatively at least, we know that this assumption is affirmed experimentally. Since there never will be a way to affirm or deny it qualitatively, I conclude that the assumption of locality is the best bet -- regardless of what anyone thinks Bell has shown.

And, because of the way these experiments must be set up and run, I conclude that the assumption of common cause is also a best bet regarding the deep cause(s) of the quantum experimental phenomena that, collectively, conform to the technical requirements for quantum entanglement.

So, yes, I'm in denial about what I think you (and lots of others) think the meaning of Bell's theorem is. But thanks for the good discussions and references, and I'll continue to read and think and keep an open mind about this (think of my denial as a sort of working assumption), and when I get another flash of insight (like the polariscope analogy), then I'll let you know. :smile:

When is a locality condition not, strictly speaking, a locality condition?
 
  • #83


Maaneli said:
4. I have not yet seen any evidence that local non-realism is a viable explanation of Bell inequality violations. I challenge you to come up with a mathematical definition of non-realist locality.

You are a challenging kind of guy... :)

All you need to do to explain the Bell Inequality violation is to say that particles do NOT have well-defined non-commuting attributes when not being observed. You deny then that there is an A, B and C in Bell's [14], as previously mentioned. (Denying this assumption is completely in keeping with the HUP anyway, even if not strictly required by it.)

And there is experimental evidence as well, but you likely don't think it applies. There is the work of Groeblacher et al which I am sure you know. I accept these experiments as valid evidence but think more work needs to be done before it is considered "iron clad".

So, there is a mathematical apparatus already: QM and relativity. So dumping non-locality as an option does not require anything new. The reason I think non-realism is appealing is because it - in effect - elevates the HUP but does not require new forces, mechanisms, etc. We can also keep c as a speed limit, and don't need to explain why most effects respect c but entanglement does not.

By the way, you might have been a bit harsh on ThomasT. We each have our own (biased?) viewpoints on some of these issues, including you. Clearly, you reject evidence against non-locality and reject non-realism as a viable alternative.
 
  • #84


vanesch said:
What I said was that the correlations as seen in an Aspect-like experiment, and as predicted by quantum theory, cannot be obtained by "looking simply at a common cause". That is, you cannot set up a classical situation where you look at a common property of something, and obtain the same correlations as those from quantum mechanics.
Sorry if I misparaphrased you, because you've helped a lot in elucidating these issues.

There is a classical situation which I think analogizes what is happening in optical Bell tests -- the polariscope. The measurement of intensity by the detector behind the analyzing polarizer in a polariscopic setup is analogous to the measurement of rate of coincidental detection in simple optical Bell setups. Extending between the two polarizers in a polariscopic setup is a singular sort of optical disturbance. That is, the disturbance that is transmitted by the first polarizer is identical to the disturbance that's incident on the analyzing polarizer. In an optical Bell setup, it's assumed that for a given emitted pair the disturbance incident on the polarizer at A is identical to the disturbance that's incident on the polarizer at B. Interestingly enough, both these setups produce a cos^2 functional relationship between changes in the angular difference of the crossed polarizers and changes in the intensity (polariscope) or rate of coincidence (Bell test) of the detected light.
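
For reference, the two cos^2 curves being compared here, side by side (a minimal sketch; the polariscope curve is just Malus's law, and the Bell-test curve is the standard quantum prediction for the coincidence rate of an ideal polarization-entangled pair):

```python
import numpy as np

def polariscope_intensity(delta, i0=1.0):
    """Malus's law: intensity behind the analyzer vs. the angle between the two polarizers."""
    return i0 * np.cos(delta) ** 2

def bell_coincidence_rate(delta):
    """Standard QM prediction for an ideal polarization-entangled pair:
    P(both photons pass) = (1/2) * cos^2(delta)."""
    return 0.5 * np.cos(delta) ** 2

for deg in (0.0, 22.5, 45.0, 67.5, 90.0):
    d = np.radians(deg)
    print(f"{deg:5.1f} deg   Malus intensity: {polariscope_intensity(d):.3f}"
          f"   coincidence rate: {bell_coincidence_rate(d):.3f}")
```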

vanesch said:
This is because classically, we only know of two ways to have statistical correlations C(A,B): one is that A causes B (or B causes A), and the other is that A and B have a common cause C. Example of the first:
- A is "setting of the switch" and B is "the light is on"
clearly, because setting the switch causes the light to be on or off, we will find a correlation between both.
Example of the second:
- you drive a Ferrari and you have a Rolex.
It is not true that driving Ferraris makes you have a Rolex, or that putting on a Rolex makes you drive a Ferrari. So there's no "A causes B" or "B causes A". However, being extremely rich can cause you to buy a Ferrari as well as a Rolex. So there was a common cause: "being rich".

Well, Bell's theorem is a property that holds for the second kind of correlations.

So the violation of that theorem by observed or quantum-mechanically predicted correlations means that it cannot be a correlation that can be explained entirely by "common cause".
And yet it's a "common cause" (certainly not of the ordinary kind though) assumption that underlies the construction and application of the quantum mechanical models that pertain to the Bell tests, as well as the preparation and administration of the actual experiments.


vanesch said:
Yes, of course. That's because we have to look for *entangled* photons, which NEED to have a common source. But the violation of Bell's theorem simply means that it is not a "common cause of the normal kind", like in "being rich".
OK, the correlations are due to unusual sorts of common causes then. This is actually easier to almost visualize in the experiments where they impart a similar torque to relatively large groups of atoms. The entire, separate groups are then entangled with respect to their common zapping. :smile: Or, isn't this the way you'd view these sorts of experiments?


vanesch said:
Bohmian mechanics explicitly shows how action-at-a-distance can solve the issue.
The problem with instantaneous-action-at-a-distance is that it's physically meaningless. An all-powerful invisible elf would solve the problem too. Just like instantaneous-actions-at-a-distance, the existence of all-powerful invisible elves is pretty hard to disprove. :smile:

vanesch said:
In fact, that's not surprising either. If you have action at a distance (if A can cause B or vice versa), then all possible correlations are allowed, and there's no "Bell theorem" being contradicted. The problem with the EPR kind of setups is that the "causes" (the choices of measurement one makes) are space-like separated events, so action-at-a-distance would screw up relativity. So people (like me) sticking to relativity refuse to consider that option. But it is a genuine option: it is actually by far the "most common sense" one.
I disagree. The most common sense option is common cause(s). Call it a working hypothesis -- one that has the advantage of not being at odds with relativity. You've already acknowledged that common cause is an option, just not normal common causes. Well, the submicroscopic behavior of light is a pretty mysterious subject, don't you think? Maybe the classical models of light are (necessarily?) incomplete enough so that a general and conclusive lhv explanation of Bell tests isn't (and maybe will never be) forthcoming.
 
  • #85


A shift of angle on this, maybe, in case folks are getting a little overheated...

If the entangled particles are modeled in the tensor product of two Hilbert spaces, then the result is a combined wave function for two particles that behaves as one wave function.

But then we ask questions about the large Euclidean separation between the particles and how one particle 'knows' about the other's state (e.g. a hidden variable). This seems an inconsistent question, because there is only one wave function (albeit for two particles), and when something happens to the wave it would be instantaneous everywhere at once.

An additional helpful analogy would be to consider a single wave packet for an electron or photon - Young's slits or similar.
We don't ask questions about the 'speed of probabilities' between one end of the wave packet and the other - it's all instant. There is no 'time separation' about where the particle statistically reveals itself. Similarly with entangled particles. Or am I barking up the wrong tree?
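
A small numerical illustration of that point (a sketch assuming spin-1/2 particles; the variable names are illustrative): the singlet is one vector in the tensor-product space, not a pair of separate one-particle states, and the joint measurement statistics fall out of that single object.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Singlet state (|+-> - |-+>) / sqrt(2): a single vector in the 4-dimensional
# tensor-product space C^2 (x) C^2; it is NOT a product of one-particle states.
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def spin_op(theta):
    """Spin observable (in units of hbar/2) along an axis at angle theta in the x-z plane."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def correlation(theta_a, theta_b):
    """<singlet| A (x) B |singlet>: the joint correlation of the two spin measurements."""
    AB = np.kron(spin_op(theta_a), spin_op(theta_b))
    return singlet @ AB @ singlet

for d in (0.0, np.pi / 4, np.pi / 2, np.pi):
    print(f"angle difference {d:.2f}: E = {correlation(0.0, d):+.3f}  (QM: -cos = {-np.cos(d):+.3f})")
```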
 
  • #86


DrChinese said:
You are a challenging kind of guy... :)

All you need to do to explain the Bell Inequality violation is to say that particles do NOT have well-defined non-commuting attributes when not being observed. You deny then that there is an A, B and C in Bell's [14], as previously mentioned. (Denying this assumption is completely in keeping with the HUP anyway, even if not strictly required by it.)

And there is experimental evidence as well, but you likely don't think it applies. There is the work of Groeblacher et al which I am sure you know. I accept these experiments as valid evidence but think more work needs to be done before it is considered "iron clad".

So, there is a mathematical apparatus already: QM and relativity. So dumping non-locality as an option does not require anything new. The reason I think non-realism is appealing is because it - in effect - elevates the HUP but does not require new forces, mechanisms, etc. We can also keep c as a speed limit, and don't need to explain why most effects respect c but entanglement does not.

By the way, you might have been a bit harsh on ThomasT. We each have our own (biased?) viewpoints on some of these issues, including you. Clearly, you reject evidence against non-locality and reject non-realism as a viable alternative.


Haha good one about my being challenging.

About your non-realist locality definition, what do you do about collapse of the measurement settings to definite values? What causes it and when does it happen? What is your mathematical description of that process?

I don't think it was harsh. There is a big difference between rejecting what is unambiguously wrong from the POV of Bell's theorem, and being very skeptical about another possibility (non-realist locality) which you also admit still has to be worked out. Also, as I said, I don't actually think the nonlocality explanation is necessarily the best one. In fact I am much more inclined to think the causality assumption is the more unphysical assumption that must be given up, rather than locality. But that is clearly implied by Bell's theorem.
 
  • #87


LaserMind said:
We don't ask questions about the 'speed of probabilities' between one end of the wave packet and the other - it's all instant. There is no 'time separation' about where the particle statistically reveals itself. Similarly with entangled particles. Or am I barking up the wrong tree?

This is the collapse of the wavefunction, and I think this manifests itself identically whether we are talking about 1 particle or a pair of entangled particles.

Any single photon (say emitted from an electron) has a chance of going anywhere and being absorbed. The odds of it being absorbed at Alice's detector are A, and the odds of it being absorbed at Bob's detector are B. And so on for any number of possible targets, some of which could be light years away. When we observe it at Alice, that means it is NOT at Bob or any of the other targets. Yet clearly there was a wave packet moving through space - that's what experiments like the Double Slit show, because there is interference from the various possible paths. And yet there we are at the end, the photon is detected in one and only one spot. And the odds collapse to zero everywhere else. And that collapse would be instantaneous as best as I can tell.

So this is analogous to the mysterious nature of entanglement, yet I don't think it is really any different. Except that entanglement involves an ensemble of particles.
 
  • #88


Maaneli said:
Haha good one about my being challenging.

About your non-realist locality definition, what do you do about collapse of the measurement settings to definite values? What causes it and when does it happen? What is your mathematical description of that process?

I don't have a physical explanation for instantaneous collapse. I think this is a weak point in QM. But I think the mathematical apparatus is already there in the standard model.

By the way, challenging is good as far as I am concerned. Not sure I am up to too many though. :)
 
  • #89


MWI is a perfect example of locality and non-realism. There is no objective state that other parts of your world are in until you interact with them and go through decoherence.
 
  • #90


DrChinese said:
I don't have a physical explanation for instantaneous collapse. I think this is a weak point in QM. But I think the mathematical apparatus is already there in the standard model.

By the way, challenging is good as far as I am concerned. Not sure I am up to too many though. :)


<< I don't have a physical explanation for instantaneous collapse. I think this is a weak point in QM. >>

Well, let's distinguish textbook QM mathematics and measurement postulates from the interpretation of it all as anti-realist. Certainly the ad-hoc, mathematically and physically vague measurement postulates are a weak point of textbook QM. But if your anti-realist interpretation has the same basic problem, then it cannot be a true physical theory of QM measurement processes, or a potentially fundamental physical interpretation of QM. That's why I still have not yet seen any coherent physical/mathematical definition of "anti-realist locality".


<< But I think the mathematical apparatus is already there in the standard model. >>

Ah but that's the thing. Standard QM has only postulates - no mathematical apparatus for treating measurement processes! Not even adding decoherence theory does the job fully! And anyway decoherence theory implies realism. That's why I think that if you don't want to invoke additional (hidden) variables to QM, and want to keep only with the wavefunction and HUP, and not try to analyze measurement processes, the only self-consistent interpretation is Ballentine's statistical interpretation of QM - but even that can only be a temporary filler to a more complete description of QM and, inevitably, a beable theory of QM.


<< By the way, challenging is good as far as I am concerned. Not sure I am up to too many though. :) >>

Glad you think so!
 
  • #91


BTW, contrary to what some people say, MWI is not an example of locality and nonrealism. That's a bad misconception that MWI supporters like Vanesch (I suspect), or even Tegmark, Wallace, Saunders, Brown, etc., would object to.
 
  • #92


I don't understand the fuss about "instantaneous collapse". If you consider an entangled state like the two-spin singlet state:

|+-> - |-+>

Then when you measure the spin of one particle, your wavefunction gets entangled with the spin. So, if that's what we call collapse and that's when information is transferred to us (in each of our branches), then information about spin 1 was already present in spin 2 and vice versa when the entangled 2-spin state was created.
 
  • #93


Count Iblis said:
I don't understand the fuss about "instantaneous collapse". If you consider an entangled state like the two-spin singlet state:

|+-> - |-+>

Then when you measure the spin of one particle, your wavefunction gets entangled with the spin. So, if that's what we call collapse and that's when information is transferred to us (in each of our branches), then information about spin 1 was already present in spin 2 and vice versa when the entangled 2-spin state was created.



<< Then when you measure the spin of one particle, your wavefunction gets entangled with the spin. >>

This sentence makes no sense. The wavefunctions of the two "particles" (if you're just talking textbook QM) are spinor-valued, and therefore already contain spin, and when they are in the singlet state, they are already entangled in configuration space (by definition!). When you "measure" the spin of one particle, you "collapse" the entangled spin states of the two "particles" to a definite spin outcome, and they are therefore no longer entangled.
 
  • #94


Maaneli said:
When you "measure" the spin of one particle, you "collapse" the entangled spin states of the two "particles" to a definite spin outcome, and they are therefore no longer entangled.
Here's an intuitive view to explain the quantum postulates that we are using (IMHO): when something or someone demands an answer of a state vector (as an observable) by collapsing the wave function and observing it (say on a screen), then the Universe is forced to give an answer whether it has one or not. The Universe cannot reply, 'Sorry, I don't know where the particle is; actually, I haven't got one, but you are demanding it, so I'll have to make a guess for you. I've no other choice, because your clunky apparatus and strange question are forcing me to answer.' It must answer our strange question. The only sensible answer it can give is a statistical one, because any other answer would be wrong.
 
  • #95


ThomasT said:
Sorry if I misparaphrased you, because you've helped a lot in elucidating these issues.

There is a classical situation which I think analogizes what is happening in optical Bell tests -- the polariscope. The measurement of intensity by the detector behind the analyzing polarizer in a polariscopic setup is analogous to the measurement of rate of coincidental detection in simple optical Bell setups. Extending between the two polarizers in a polariscopic setup is a singular sort of optical disturbance. That is, the disturbance that is transmitted by the first polarizer is identical to the disturbance that's incident on the analyzing polarizer. In an optical Bell setup, it's assumed that for a given emitted pair the disturbance incident on the polarizer at A is identical to the disturbance that's incident on the polarizer at B. Interestingly enough, both these setups produce a cos^2 functional relationship between changes in the angular difference of the crossed polarizers and changes in the intensity (polariscope) or rate of coincidence (Bell test) of the detected light.

Yes, but there's a world of difference. The light disturbance that reaches the second polarizer has undergone the measurement process of the first, and in fact has been altered by the first. As such, it is in a way not surprising that the result of the second polarizer is dependent on the *choice of measurement* (and hence on the specific alteration) of the first. The correlation is indeed given by the same formula, cos^2(angular difference), but that shouldn't be surprising in this case. The result of the second polarizer is in fact ONLY dependent on the state of the first polarizer: you can almost see the first polarizer as a SOURCE for the second one. So there is the evident possibility of a causal relation between "choice of angle of first polarizer" and "result of second polarizer".

What is much more surprising - in fact it is the whole mystery - in an EPR setup, is that two different particles (which may or may not have identical or correlated properties) are sent off to two remote experimental sites. As such there can of course be a correlation in the results of the two measurements, but these results shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance. In other words, the two measurements done by the two experimenters "should" be just statistical measurements on a "set of common properties" which are shared by the two particles (because of course they have a common source). And it is THIS kind of correlation which should obey Bell's theorem (statistical correlations of measurements of common properties) and it doesn't.

And yet it's a "common cause" (certainly not of the ordinary kind though) assumption that underlies the construction and application of the quantum mechanical models that pertain to the Bell tests, as well as the preparation and administration of the actual experiments.

Yes, but now it is up to you what you understand by common cause, but not of the ordinary kind. Because the "ordinary kind" includes all kinds of "common properties" (identical copies of datasets). So whatever is not the ordinary kind, it's going to be "very not ordinary".

OK, the correlations are due to unusual sorts of common causes then. This is actually easier to almost visualize in the experiments where they impart a similar torque to relatively large groups of atoms. The entire, separate groups are then entangled with respect to their common zapping. :smile: Or, isn't this the way you'd view these sorts of experiments?

Well, how do you visualize these "non-ordinary" common causes? About every mental picture you can think of falls in the class of "ordinary" common causes, which should respect Bell's theorem.

The problem with instantaneous-action-at-a-distance is that it's physically meaningless. An all-powerful invisible elf would solve the problem too. Just like instantaneous-actions-at-a-distance, the existence of all-powerful invisible elves is pretty hard to disprove. :smile:

I agree. Nevertheless, Newtonian gravity is "action at a distance", but indeed, it opens up the gate for arbitrary explanations, of the astrology kind, of about any phenomenon. It's yet less of a problem than superdeterminism, which means the end of science, though.

Nevertheless, I agree with you, and it is the fundamental difficulty I have with Bohmian mechanics, which would otherwise have been the best explanation for quantum phenomena. But from the moment, indeed, that the motion of an arbitrarily distant particle can induce an arbitrarily large force on a local particle here, "all bets are off".

I disagree. The most common sense option is common cause(s). Call it a working hypothesis -- one that has the advantage of not being at odds with relativity. You've already acknowledged that common cause is an option, just not normal common causes. Well, the submicroscopic behavior of light is a pretty mysterious subject, don't you think? Maybe the classical models of light are (necessarily?) incomplete enough so that a general and conclusive lhv explanation of Bell tests isn't (and maybe will never be) forthcoming.

No, it won't do. All "common sense" common causes are of the "ordinary" kind. So saying that it must be a common sense, but "non-ordinary" common cause is not going to help us.

I will tell you how *I* picture this (but I won't do this for too long, as I have done this at least already a dozen times on this forum). After all, we're not confronted with an *unexpected* phenomenon. We're verifying predictions of quantum theory! So what's the best way of at least *picturing* what happens? Answer: look at quantum theory itself, which predicts this! You can obtain the results of an Aspect-like experiment using quantum theory, and purely local interactions (the ones we use normally, such as electrodynamics). You just let the wave-function evolve! And then you see that you get different observer states, which have seen different things, but *when they come together* they separate in the right branches with the right probabilities - which are nothing else but the observed correlations. That's nothing else but "many worlds". It solves the dilemma of the "correlations-at-a-distance" simply by stating that those correlations didn't happen "at the moment of measurement" which simply created both possible outcomes, but the correlations happened when the observers came together to compare their outcomes. In fact, all different versions of the observers came together to compare all their different possible sets of outcomes, and those that are most probable (those with the largest Hilbert norm) are simply those with the right correlations from QM predictions.

Of course, now you have the weirdness of multiple worlds, but at least, you have a clear picture of how the theory that correctly predicts the "incomprehensible outcomes" comes itself to those outcomes.

I've worked this out several times here, I won't type all that stuff again.
 
  • #96


Maaneli said:
BTW, contrary to what some people say, MWI is not an example of locality and nonrealism. That's a bad misconception that MWI supporters like Vanesch (I suspect), or even Tegmark, Wallace, Saunders, Brown, etc., would object to.

The problem lies in the word "non-realism" and then the right definition of "local". There are some papers out there that show that you can see unitary wavefunction evolution as a local process (as long as the implemented dynamics - the interactions - are local of course), although that's better seen in the Heisenberg picture. I'm too lazy to look up the arxiv articles.
So MWI can be seen as respecting locality in a way. That's not surprising given that unitary evolution respects Lorentz invariance (if the dynamics does so).

As to "realism", instead of calling it "non-realist", I'd rather call it "multi-realist". But that's semantics. The way MWI can get away with Bell is simply that at the moment of "measurement" at each side, there's no "single outcome", but rather both outcomes appear. It is only later, when the correlations are established, and hence when there is a local interaction between the observers that came together, that the actual correlations show up.
 
  • #97


It really is *relative* state, just like the first paper called it. There's no objective state of any particle before observation that everyone will agree on, so there's no one "true" reality.

Are there any other interpretations that preserve locality?
 
  • #98


Maaneli said:
<< Then when you measure the spin of one particle, your wavefunction gets entangled with the spin. >>

This sentence makes no sense. The wavefunctions of the two "particles" (if you're just talking textbook QM) are spinor-valued, and therefore already contain spin, and when they are in the singlet state, they are already entangled in configuration space (by definition!). When you "measure" the spin of one particle, you "collapse" the entangled spin states of the two "particles" to a definite spin outcome, and they are therefore no longer entangled.

In the MWI, there is no collapse, the wavefunction of the observer gets entangled with the two spin state. I think that the "paradox" implied by instantaneous collapse is just an artifact of assuming that the observer collapses the wavefunction, while in reality this is an effective description.
 
  • #99


If wavefunction collapse really happens, then that should be confirmed by experiments testing for violations of unitarity. Unitarity could perhaps be spontaneously broken as has been suggested in some recent publications...
 
  • #100


vanesch said:
Nevertheless, Newtonian gravity is "action at a distance", but indeed, it opens up the gate for arbitrary explanations, of the astrology kind, of about any phenomenon. It's yet less of a problem than superdeterminism, which means the end of science, though.

I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd notion of "free-will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.
 
  • #101


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd notion of "free-will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.

The title of ’t Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

In fact: there is no superdeterministic theory to critique. If there were, it would be an immediate target for falsification, and I know just where I'd start.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either. Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.
 
  • #102


DrChinese said:
I don't have a physical explanation for instantaneous collapse. I think this is a weak point in QM. But I think the mathematical apparatus is already there in the standard model.

(Def: Local; requiring both Locality & Realism)

IMO not having a physical explanation for any of the Non-Local specifications (oQM instantaneous collapse; deBB guide waves; GRW; MWI etc.) is a STRONG point for the QM argument by Bohr that no explanation could be “More Complete” than QM.

That specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn’t change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is “correct”.

One could apply the Law of Parsimony to claim the high ground, but to use Ockham requires a complete physical explanation (not just a mathematical apparatus) that would in effect be Local not Non-Local. And a Local physical explanation not being possible is the one thing all these have in common, which is all Bohr needs to retain the point that they are not “More Complete” than CI.

I agree with you on ’t Hooft's support of superdeterminism – IMO a weak sophist argument not suitable for scientific discussion that belongs in Philosophy not scientific debates.
 
  • #103
RandallB said:
(Def: Local; requiring both Locality & Realism)

IMO not having a physical explanation for any of the Non-Local specifications (oQM instantaneous collapse; deBB guide waves; GRW; MWI etc.) is a STRONG point for the QM argument by Bohr that no explanation could be “More Complete” than QM.

That specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn’t change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is “correct”.

One could apply the Law of Parsimony to claim the high ground, but to use Ockham requires a complete physical explanation (not just a mathematical apparatus) that would in effect be Local not Non-Local. And a Local physical explanation not being possible is the one thing all these have in common, which is all Bohr needs to retain the point that they are not “More Complete” than CI.

I agree with you on ’t Hooft's support of superdeterminism – IMO a weak sophist argument not suitable for scientific discussion that belongs in Philosophy not scientific debates.




<< That specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn’t change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is “correct”. >>

Contrary to common belief, this is actually not true. Many times I have cited the work of leaders in those research areas who have recently shown the possibility of empirically testable differences. I will do so once again:

Generalizations of Quantum Mechanics
Philip Pearle and Antony Valentini
To be published in: Encyclopaedia of Mathematical Physics, eds. J.-P. Francoise, G. Naber and T. S. Tsun (Elsevier, 2006)
http://eprintweb.org/S/authors/quant-ph/va/Valentini/2

The empirical predictions of Bohmian mechanics and GRW theory
This talk was given on October 8, 2007, at the session on "Quantum Reality: Ontology, Probability, Relativity" of the "Shellyfest: A conference in honor of Shelly Goldstein on the occasion of his 60th birthday" at Rutgers University.
http://math.rutgers.edu/~tumulka/shellyfest/tumulka.pdf

The Quantum Formalism and the GRW Formalism
Authors: Sheldon Goldstein, Roderich Tumulka, Nino Zanghi
http://arxiv.org/abs/0710.0885

De Broglie-Bohm Prediction of Quantum Violations for Cosmological Super-Hubble Modes
Antony Valentini
http://eprintweb.org/S/authors/All/va/A_Valentini/2

Inflationary Cosmology as a Probe of Primordial Quantum Mechanics
Antony Valentini
http://eprintweb.org/S/authors/All/va/A_Valentini/1

Subquantum Information and Computation
Antony Valentini
To appear in 'Proceedings of the Second Winter Institute on Foundations of Quantum Theory and Quantum Optics: Quantum Information Processing', ed. R. Ghosh (Indian Academy of Science, Bangalore, 2002). Second version: shortened at editor's request; extra material on outpacing quantum computation (solving NP-complete problems in polynomial time)
Journal-ref. Pramana - J. Phys. 59 (2002) 269-277
http://eprintweb.org/S/authors/All/va/A_Valentini/11

Pilot-wave theory: Everett in denial? - Antony Valentini

" We reply to claims (by Tipler, Deutsch, Zeh, Brown and Wallace) that the pilot-wave theory of de Broglie and Bohm is really a many-worlds theory with a superfluous configuration appended to one of the worlds. Assuming that pilot-wave theory does contain an ontological pilot wave (a complex-valued field in configuration space), we show that such claims arise essentially from not interpreting pilot-wave theory on its own terms. Pilot-wave dynamics is intrinsically nonclassical, with its own (`subquantum') theory of measurement, and it is in general a `nonequilibrium' theory that violates the quantum Born rule. From the point of view of pilot-wave theory itself, an apparent multiplicity of worlds at the microscopic level (envisaged by some many-worlds theorists) stems from the generally mistaken assumption of `eigenvalue realism' (the assumption that eigenvalues have an ontological status), which in turn ultimately derives from the generally mistaken assumption that `quantum measurements' are true and proper measurements. At the macroscopic level, it might be argued that in the presence of quantum experiments the universal (and ontological) pilot wave can develop non-overlapping and localised branches that evolve just like parallel classical (decoherent) worlds, each containing atoms, people, planets, etc. If this occurred, each localised branch would constitute a piece of real `ontological Ψ-stuff' that is executing a classical evolution for a world, and so, it might be argued, our world may as well be regarded as just one of these among many others. This argument fails on two counts: (a) subquantum measurements (allowed in nonequilibrium pilot-wave theory) could track the actual de Broglie-Bohm trajectory without affecting the branching structure of the pilot wave, so that in principle one could distinguish the branch containing the configuration from the empty ones, where the latter would be regarded merely as concentrations of a complex-valued configuration-space field, and (b) such localised configuration-space branches are in any case unrealistic (especially in a world containing chaos). In realistic models of decoherence, the pilot wave is delocalised, and the identification of a set of parallel (approximately) classical worlds does not arise in terms of localised pieces of actual `Ψ-stuff' executing approximately classical motions; instead, such identification amounts to a reification of mathematical trajectories associated with the velocity field of the approximately Hamiltonian flow of the (approximately non-negative) Wigner function --- a move that is fair enough from a many-worlds perspective, but which is unnecessary and unjustified from a pilot-wave perspective because according to pilot-wave theory there is nothing actually moving along any of these trajectories except one (just as in classical mechanics or in the theory of test particles in external fields or a background spacetime geometry). In addition to being unmotivated, such reification begs the question of why the mathematical trajectories should not also be reified outside the classical limit for general wave functions, resulting in a theory of `many de Broglie-Bohm worlds'. Finally, because pilot-wave theory can accommodate violations of the Born rule and many-worlds theory (apparently) cannot, any attempt to argue that the former theory is really the latter theory (`in denial') must in any case fail. 
At best, such arguments can only show that, if approximately classical experimenters are confined to the quantum equilibrium state, they will encounter a phenomenological appearance of many worlds (just as they will encounter a phenomenological appearance of locality, uncertainty, and of quantum physics generally). From the perspective of pilot-wave theory itself, many worlds are an illusion. "
http://users.ox.ac.uk/~everett/abstracts.htm#valentini


So everything you said based on that initial assumption is null.

Also, superdeterminism, if implemented in an empirically adequate way in replacement of nonlocality, would be just as valid as a nonlocal account of EPR, and therefore just as relevant to QM.
 
  • #104


vanesch said:
... but there's a world of difference. The light disturbance that reaches the second polarizer has undergone the measurement process of the first, and in fact has been altered by the first. As such, it is in a way not surprising that the result of the second polarizer is dependent on the *choice of measurement* (and hence on the specific alteration) of the first. The correlation is indeed given by the same formula, cos^2(angular difference), but that shouldn't be surprising in this case. The result of the second polarizer is in fact ONLY dependent on the state of the first polarizer: you can almost see the first polarizer as a SOURCE for the second one. So there is the evident possibility of a causal relation between "choice of angle of first polarizer" and "result of second polarizer".

What is much more surprising - in fact it is the whole mystery - in an EPR setup, is that two different particles (which may or may not have identical or correlated properties) are sent off to two remote experimental sites. As such there can of course be a correlation in the results of the two measurements, but these results shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance. In other words, the two measurements done by the two experimenters "should" be just statistical measurements on a "set of common properties" which are shared by the two particles (because of course they have a common source). And it is THIS kind of correlation which should obey Bell's theorem (statistical correlations of measurements of common properties) and it doesn't.
Look at what the two setups have in common, not how they're different.

I don't understand what you mean when you say that the correlations "shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance."

They don't, do they? They only depend on the angular difference between the crossed polarizers associated with paired incident disturbances. This angular difference changes instantaneously, no matter what the spatial separation, as A or B changes polarizer setting. This isn't action-at-a-distance though.

Anyway, the point of the polariscope analogy is that in both setups there is, in effect, a singular, identical optical disturbance extending from one polarizer to the other -- and that the functional relationship between the angular difference and rate of detection is the same in both. This seems to me to support the assumption that in the quantum experiments the polarizers at A and B are analyzing an identical optical disturbance at each end for each pair. B doesn't need to be influencing A, or vice versa, to produce this functional relationship. They just need to be analyzing the same thing at each end for each pair.
 
  • #105


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd notion of "free-will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.


There is yet another way to relinquish the "free-will" postulate in QM. One can implement backwards causation, as Huw Price and Rod Sutherland have proposed and successfully shown can reproduce the nonlocal correlations, as well as the empirical predictions of QM in general.
 
