Why does quantum entanglement not allow FTL communication?

In summary, the question is asking why quantum entanglement doesn't allow for faster-than-light communication. The answer is that the information encoded in entanglement is only extractable when you look at correlations between measurements on both of the entangled systems. To access that correlation information, you would need ordinary communication anyway, and that communication could not be FTL. If you only look at either system, but not the other, then you need no such communication, but you also can extract no information from the entanglement. This is actually a good thing, because much of science is done by ignoring entanglements, and the reason we get away with that is that the information we are ignoring cannot interfere with our interpretation of the results.
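That summary can be made concrete with a short numpy sketch (the helper names are my own, not from the thread). Whatever Alice chooses to measure on her half of a singlet pair, Bob's local statistics, captured by his reduced density matrix, remain the maximally mixed state I/2; the correlations only show up once the two sides' results are compared, which requires a classical channel.

```python
import numpy as np

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
rho = np.outer(singlet, singlet)                 # joint density matrix

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], float)
X = np.array([[0, 1], [1, 0]], float)

def after_alice_measures(op):
    """Joint state after Alice measures `op`, averaged over her outcomes
    (Bob doesn't know her result, so he sees the average)."""
    _, vecs = np.linalg.eigh(op)
    out = np.zeros((4, 4))
    for k in range(2):
        P = np.kron(np.outer(vecs[:, k], vecs[:, k]), I2)
        out += P @ rho @ P
    return out

def bobs_state(joint):
    """Bob's reduced density matrix: partial trace over Alice's qubit."""
    return np.einsum('abac->bc', joint.reshape(2, 2, 2, 2))

print(bobs_state(rho))                       # I/2: maximally mixed
print(bobs_state(after_alice_measures(Z)))   # I/2: Alice measured Z...
print(bobs_state(after_alice_measures(X)))   # I/2: ...or X; Bob can't tell
```

All three prints are the identical matrix, which is exactly why no signal crosses.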
  • #36
JesseM said:
Only in the MWI, where measurements are themselves just new entanglements, is this really true. In Copenhagen QM, the act of measuring a particle can destroy previous entanglements it may have had up until that measurement (though it won't always, it depends on what measurement you perform)--subsequent measurements on this particle won't show any correlations with other particles it was entangled with prior to the first measurement.
Note I did some editing of my last post, as we're exchanging in real time! But even in the CI, the act of measuring does not destroy previous entanglements (as usual, such entanglements only show up in correlations with other measurements, never on measurements of the same system). The CI is simply more honest about the fact that you have made a choice not to track them, so the CI makes no claims that the wave function is a complete description of the reality-- even for a maximal set of commuting observations. You are right that the MWI does try to retain that "reality" property, but it still fails unless you include the whole universe in the wave function. That's the problem with MWI in the first place: there's no evidence that such a wave function exists, and we certainly know we can never use it for anything. So with MWI, all I'd have to say is "any wavefunction that any physicist could ever actually use for anything cannot be the wavefunction of that system without losing some of the reality of the situation"-- it's still always going to reflect a choice of some kind, whether in MWI, CI, or Bohm.
 
  • #37
Ken G said:
Note I did some editing of my last post, as we're exchanging in real time! But even in the CI, the act of measuring does not destroy previous entanglements (as usual, such entanglements only show up in correlations with other measurements, never on measurements of the same system).
Why do you say it does not destroy previous entanglements? If you measure a particle that's entangled with others, the results of that measurement may be correlated with measurements on the other particles, but then won't subsequent measurements of the same particle give results that are completely uncorrelated with the other particles in the system?
 
  • #38
JesseM said:
Why do you say it does not destroy previous entanglements? If you measure a particle that's entangled with others, the results of that measurement may be correlated with measurements on the other particles, but then won't subsequent measurements of the same particle give results that are completely uncorrelated with the other particles in the system?
I'm not sure I understand the question, measurements you do on one system will not destroy correlations with another system, it will just determine those correlations. If you go back and do the measurement again, you'll get the same correlations again, so the correlations are not destroyed (in any interpretation). If you are using CI you are probably doing measurements on individual particles to generate single-particle eigenfunctions, and once you've done that, all the correlations are already actualized in each trial so they are already embedded in the whole ensemble if you choose to track them. If you are doing MWI, you have a lot more work to do, because you have to include what didn't happen as well as what did. That's pretty much why MWI is not used in practice, it seems to me.
 
  • #39
Ken G said:
I'm not sure I understand the question, measurements you do on one system will not destroy correlations with another system, it will just determine those correlations. If you go back and do the measurement again, you'll get the same correlations again, so the correlations are not destroyed (in any interpretation).
I should have been more specific, I meant measurements using a measurement operator that doesn't commute with the measurement operator(s) that were used in the initial measurement(s) that were found to be correlated with the other, distant particle due to entanglement.
 
  • #40
JesseM said:
I should have been more specific, I meant measurements using a measurement operator that doesn't commute with the measurement operator(s) that were used in the initial measurement(s) that were found to be correlated with the other, distant particle due to entanglement.
Correlations can still be preserved even by measurements like that. And in cases where no correlation appears, my money says there would have been no correlation in the original wavefunction either, so it wasn't "destroyed" by the measurement. It's an interesting question whether measurements (in CI) destroy correlations like they destroy phase coherence. I'm not sure that they do, and if I'm right, that's the fundamental reason that the CI is a complete description.
 
  • #41
Quantum Temporal Paradox

There are indications that despite the Grandfather Paradox and Eberhard's proof to the contrary, quantum nonlocality may in fact support FTL. The lines of evidence are as follows:

1. Teleportation does in fact transmit information, since the state of the particle cannot be reconstructed without both the classical and the nonlocal channel. This is not FTL only because the classical channel is required.
2. Gisin's 2001 experiment in Geneva disproved Scarani and Suarez's conjecture that the correlations between EPR pairs would disappear when, in the local reference frame of each measuring device, each measurement occurs prior to the other. This null result means physicists have no causal explanation of quantum nonlocality. Global timelike causality is eliminated because the measurements are spacelike separated, common timelike cause is eliminated by Bell's theorem, and Gisin's null result eliminates local timelike causality. Unless there is yet some other kind of causality (Gisin argues this should be considered), the only other option is spacelike causality. Thus, this opens the door to considering spacelike causality despite the conceptual hurdles.
3. Conventional quantum mechanics (CQM) suffers from 5 anomalies, fundamental unsolved problems that according to Kuhn should have been solved in due course as the field matured. They are the measurement problem, interpretation problem, collapse problem, supercedence problem, and the nonlocality problem. There is therefore reason to believe that progress has been impeded by a paradigm barrier and that on the other side of this barrier lies new physics waiting to be discovered.
4. The Grandfather Paradox (and its twin sister argument against FTL, the Shakespeare Indeterminacy) are examples of self-reference. Mathematicians and logicians do not have a good track record in dealing with self-reference. A few who have made progress in this area are G. Spencer Brown ("Laws of Form"), who first introduced the idea of imaginary truth values as a way to make sense of logical paradox, and Hellerstein ("Diamond Logic"); Kaufman, Shoup, and Goff have also contributed to our understanding of nonlinear logics. The best known popular account is "Gödel, Escher, Bach" by Hofstadter. These advances suggest that self-reference might be fundamental to quantum mechanics, the measurement process in particular, and to a censor mechanism that would permit spacelike causality while prohibiting temporal paradox.
5. For an example of an abstract quantum system (AQS) where self-reference is central to the measurement process, backwards-in-time causality, and a censor mechanism preventing temporal paradox, see Quantum Tic-Tac-Toe at ParadigmPuzzles.
6. Impossibility proofs, such as Eberhard's, that are eventually overturned, almost always reveal not a technical flaw but a lack of imagination. In the '40s, a respected scientist showed that going to the moon was impossible. He thoroughly understood the astrodynamics and the expected advances in technology including H2/LOX. He showed that a vehicle that could travel to the moon and return to Earth would have to carry 200 times its weight in propellant: clearly impossible. We went to the moon anyway. Why? Because we left bits and pieces of the spacecraft all along the way there, and all along the way back. The lack of imagination was to envision a throw-away design. Eberhard's proof may suffer similarly, for it assumes a linear architecture for the nonlocality with an observer-dependent measurement on each end. A pair of entanglements that extends from sender to receiver in a folded pattern and can be self-collapsed in either of two ways by local actions on only one end, can in principle exceed mere teleportation, achieving true FTL.
7. The theoretical framework that integrates these ideas into a conceptual whole is quantum temporal paradox (QTP). A key piece of this framework is the idea of symmetric spacetime intervals (SSI) along which collapse of the wave function can occur in a relativistically consistent way. A paper that derives symmetric intervals from the concept of world ribbons (generalizations of the world lines of relativity, applicable to the uncertainty of quantum objects) is in review at the journal Foundations of Physics. If this paper is accepted for publication (it is classic speculative physics, so publication hinges on the eccentricities of the reviewers) then we are a step closer to allowing spacelike causality in quantum mechanics and thus discussions of FTL and even time travel become a tad bit more respectable. Symmetric intervals counter the relativity and causality arguments against spacelike causality.
8. Self-reference introduces nonlinearity into QM in a natural way, not in the ad hoc way being explored by adding various nonlinear terms to the Schrödinger equation. Self-reference also shows how to overcome the Grandfather Paradox and Shakespeare Indeterminacy which are the strongest arguments against spacelike causality.
9. An alternative nonlinear operator may be hiding in the normalization process associated with indistinguishable particles. The reduction in the dimensionality of the Hilbert space when indistinguishable particles become entangled cannot be reduced to a linear operator. This disputable fact is hidden by the typically casual way physicists perform the mathematical trick of renormalization.
10. The mathematics of QM may be a red herring, playing the role of extra information not strictly needed for a solution, that by its very presence makes finding the solution much more difficult. The vector which is supposed to represent a state contains more information than is physically significant. The phase of a state is physically irrelevant unless interference is expected, and then only the relative phase is physically significant. There is reason to believe therefore, that an objective measurement system might exist, no pesky observers required, if only the mathematics could be reduced to have a better impedance match with the actual physics.
11. A metaphor might help. In classical physics, the present is envisioned as an infinitely thin dividing line between the past and the future. If QTP is correct, then in quantum physics it is possible to entangle the near future with the recent past so that the "present" has a temporal width. Within this entanglement, the concepts of past, present, and future become ambiguous, the present becomes a window in time. From the quantum perspective, causality is maintained and clear even with the statistical nature of the outcomes, but from the classical perspective, the explanation of cause and effect looks an awful lot like time travel. No real "traveling" occurred, but what this window in time allows is the selection, at the very last moment, of which pair of histories we are going to find ourselves in, versus which histories became contradictory, pruned out of existence because of paradox. The essence of time travel is childlike wish fulfillment; make it didn't happen. One of the surprises of Quantum Tic-Tac-Toe is the recognition that to play it at the highest strategic level requires one to realize that the present move is changing the past. The implications for basic physics and technology are exciting, and potentially troubling.

Time travel is one of those scifi concepts that ought to stay firmly in the genre, and not poke its disturbing head into actual reality. Yet, if we are ever to travel to the stars, the speed of light has to be overcome, and since FTL and time travel are two sides of the same coin, perhaps developments in this area are to be hoped for, looked for, and pursued with due scientific rigor.
 
  • #42
AllanGoff said:
There are indications that despite the Grandfather Paradox and Eberhard's proof to the contrary, quantum nonlocality may in fact support FTL. ...

That certainly clears things up.
 
  • #43
AllanGoff said:
There are indications that despite the Grandfather Paradox and Eberhard's proof to the contrary, quantum nonlocality may in fact support FTL. The lines of evidence are as follows;
No doubt these issues are at the forefront of our understanding, but I don't see any fundamental problems here. This is how I would react to each of these, for what it's worth:
1. Teleportation does in fact transmit information, since the state of the particle cannot be reconstructed without both the classical and the nonlocal channel. This is not FTL only because the classical channel is required.
Eliminating the word "only" makes this a non-problem.
2. ...Unless there is yet some other kind of causality (Gisin argues this should be considered), the only other option is spacelike causality.
Or, we simply haven't yet found a versatile enough meaning for "causality". When a concept reaches the limit of its service to us, need we torture it further?
3. Conventional quantum mechanics (CQM) suffers from 5 anomalies, fundamental unsolved problems that according to Kuhn should have been solved in due course as the field matured. They are the measurement problem, interpretation problem, collapse problem, supercedence problem, and the nonlocality problem.
I don't see any inconsistency in "the measurement problem", I would call it "the science problem" and liken it to how following Polaris is a good way to go north but a lousy way to go to Polaris. There's nothing to "solve" there. The interpretation problem is also not a problem, because relativity already taught us not to expect the existence of unique interpretations. Collapse is not a problem either, it is like the measurement "problem" and simply stems from the way we choose to do science-- there's no need to solve that either. I don't know what the supercedence problem is, but it sounds like something about quantum erasure and the only problem I see there is in our own unwillingness to let go of ideas that reach the limit of their usefulness, like causality. The nonlocality problem is also nothing that needs solving-- physical systems are indeed nonlocal because they are linked by their history to the rest of the universe, and not in a way that is "stored" locally in the elements of the system.
There is therefore reason to believe that progress has been impeded by a paradigm barrier and that on the other side of this barrier lies new physics waiting to be discovered.
I don't see it in that light, to me this is just how reality works, why would we start telling it that it has "problems"? We are like out-of-work psychiatrists trying to convince a perfectly healthy patient that they need our services.
4. ...These advances suggest that self-reference might be fundamental to quantum mechanics, the measurement process in particular, and to a censor mechanism that would permit spacelike causality while prohibiting temporal paradox.
There is no harm in speculating, but the shooting percentage of speculation is even worse than in dealing with self-referential paradoxes.
5. For an example of an abstract quantum system (AQS) where self-reference is central to the measurement process, backwards-in-time causality, and a censor mechanism preventing temporal paradox, see Quantum Tic-Tac-Toe at ParadigmPuzzles.
Can you give a link and a summary? That's always helpful, it sounds interesting.
6. Impossibility proofs, such as Eberhard's, that are eventually overturned, almost always reveal not a technical flaw but a lack of imagination.
Yes, I would say that "impossibility proofs" are a misnomer, for they don't say what result is impossible, they actually point to the hurdles that need to be overcome to make something possible. They should really be called "why you can't get there this way" proofs.

A pair of entanglements that extends from sender to receiver in a folded pattern and can be self-collapsed in either of two ways by local actions on only one end, can in principle exceed mere teleportation, achieving true FTL.
If this is truly a prediction of existing physics, it should be easy enough to set up a gedankenexperiment that shows it. If it requires other physics, it is no different from any other magical means of FTL, because the new physics first has to be demonstrated.
7. ...If this paper is accepted for publication (it is classic speculative physics, so publication hinges on the eccentricities of the reviewers) then we are a step closer to allowing spacelike causality in quantum mechanics and thus discussions of FTL and even time travel become a tad bit more respectable.
Hang on, how does the capriciousness of "eccentric reviewers" bring us closer to allowing spacelike causality? It will take experiment to do that, not reviewers. It's kind of a "pet peeve" of mine when people use theory, and now reviewers of theory, to tell reality what to do. The real goal of this work should be to motivate the right experiment.
8. Self-reference introduces nonlinearity into QM in a natural way, not in the ad hoc way being explored by adding various nonlinear terms to the Schrödinger equation. Self-reference also shows how to overcome the Grandfather Paradox and Shakespeare Indeterminacy which are the strongest arguments against spacelike causality.
Again with the theory telling reality what to do. None of it means a thing until there is experimental justification. That doesn't make it worthless, it makes it worthless unless it is used to motivate experiment.
9. An alternative nonlinear operator may be hiding in the normalization process associated with indistinguishable particles. The reduction in the dimensionality of the Hilbert space when indistinguishable particles become entangled cannot be reduced to a linear operator.
I'm a bit confused what this means, I thought the Hilbert space was a space of linear operators. Note that "nonlinear terms" in an operator do not stop it from being a linear operator-- the operator formalism is itself linear, at least as far as I have seen.
10...The phase of a state is physically irrelevant unless interference is expected, and then only the relative phase is physically significant.
That's not a significant problem, it just means the wave function is not explicitly respecting a symmetry that is present. This redundancy is eliminated in the "Heisenberg picture" as it never appeared in the matrix elements of the wave function anyway.
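The phase point in this exchange is easy to check numerically. In the following minimal numpy sketch (my own construction, not from the thread), a global phase multiplying the whole state vector drops out of every measurement probability, while a relative phase between components changes the interference pattern and is observable.

```python
import numpy as np

# |+> = (|0> + |1>)/sqrt(2), and the same ket with a global phase attached
psi        = np.array([1, 1]) / np.sqrt(2)
psi_global = np.exp(1j * 0.7) * psi           # global phase: unobservable

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # measure in the +/- basis
print(np.abs(H @ psi) ** 2)                   # -> [1. 0.]
print(np.abs(H @ psi_global) ** 2)            # -> [1. 0.]  (identical)

# A *relative* phase between components shifts the interference, and shows up:
psi_rel = np.array([1, -1]) / np.sqrt(2)      # |-> state
print(np.abs(H @ psi_rel) ** 2)               # -> [0. 1.]  (different)
```

So the state vector does carry one redundant piece of information (the overall phase), but only that one; the relative phases are physics, not bookkeeping.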
There is reason to believe therefore, that an objective measurement system might exist, no pesky observers required, if only the mathematics could be reduced to have a better impedance match with the actual physics.
I can't really see how that follows. That sounds like saying that because of some relatively trivial redundancy in the Schrödinger picture, we should do science totally differently.
11. ...If QTP is correct, then in quantum physics it is possible to entangle the near future with the recent past so that the "present" has a temporal width. Within this entanglement, the concepts of past, present, and future become ambiguous, the present becomes a window in time.
This sounds like a perfectly reasonable hypothesis, entirely analogous to the Heisenberg uncertainty principle applied to our knowledge of when events occur. But such is hardly suitable for using as a window into FTL travel of anything but a tiny particle whose relation to time has always been quite a bit different from the irreversible macroscopic version. To me it merely sounds like a nice way to travel femtoseconds into the past, and not even be able to establish that you did.

No real "traveling" occurred, but what this window in time allows is the selection, at the very last moment, of which pair of histories we are going to find ourselves in, versus which histories became contradictory, pruned out of existence because of paradox.
I agree with the start of this-- we generally find that such "selection" ends up being like trying to change the weather by blowing at clouds. You will indeed change the weather that way, but only in the meaningless way that you can change a dice roll by yelling at the person releasing the dice.
Time travel is one of those scifi concepts that ought to stay firmly in the genre, and not poke its disturbing head into actual reality.
It is certainly fascinating to think about, and good fodder for sci fi.
Yet, if we are ever to travel to the stars, the speed of light has to be overcome, and since FTL and time travel are two sides of the same coin, perhaps developments in this area are to be hoped for, looked for, and pursued with due scientific rigor.
But why do we need to overcome the speed of light to go to the stars? It would suffice to be able to reach very close to that speed. A daunting task, I admit, but I don't see much evidence that time travel is any less daunting.
 
  • #44
Ken G said:
Correlations can still be preserved even by measurements like that.
Do you have any specific examples of problems where they would be preserved with these kinds of measurements? I haven't studied such problems in detail, but consider a situation where we find one particle in an eigenstate of some measurement operator: if we keep measuring with the same operator it'll stay in that eigenstate forever (if there's no time dependence), but if we stick a measurement with a noncommuting operator in between, then when we return to the original operator the system may no longer be in the same eigenstate. But there can't be any way this change in eigenstate can be reflected in the other, entangled particle, because if it were, this would allow for the possibility of FTL communication. So this is one intuitive reason for thinking you won't necessarily see correlations preserved after you've made multiple measurements on entangled particles, where the later measurements don't commute with the initial measurement.
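This scenario can be played out in a toy simulation. The sketch below (my own helper names; projective measurements on a two-qubit singlet state) shows both halves of the argument: Alice's first Z outcome is perfectly anticorrelated with Bob's, while a repeat Z measurement taken after an intervening, noncommuting X measurement shows no correlation with Bob at all.

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], float)
X = np.array([[0, 1], [1, 0]], float)

def measure(state, op, qubit):
    """Projective measurement of `op` on one qubit of a 2-qubit state vector.
    Returns (outcome, collapsed state)."""
    vals, vecs = np.linalg.eigh(op)
    projs, probs = [], []
    for k in range(2):
        p1 = np.outer(vecs[:, k], vecs[:, k])
        P = np.kron(p1, I2) if qubit == 0 else np.kron(I2, p1)
        projs.append(P)
        probs.append(max(float(state @ P @ state), 0.0))
    k = rng.choice(2, p=np.array(probs) / sum(probs))
    new = projs[k] @ state
    return vals[k], new / np.linalg.norm(new)

def singlet():
    return np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

N = 2000
first_corr = later_corr = 0.0
for _ in range(N):
    psi = singlet()
    a1, psi = measure(psi, Z, 0)   # Alice measures Z
    b,  psi = measure(psi, Z, 1)   # Bob measures Z: perfectly anticorrelated
    _,  psi = measure(psi, X, 0)   # intervening noncommuting (X) measurement
    a2, psi = measure(psi, Z, 0)   # Alice measures Z again
    first_corr += a1 * b
    later_corr += a2 * b

print(first_corr / N)   # -> -1.0: the original correlation
print(later_corr / N)   # -> near 0: gone after the noncommuting measurement
```

Note the no-FTL point survives either way: Bob's own outcome statistics are 50/50 in every round, regardless of what Alice inserts between her measurements.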
 
  • #45
To Ken G.
Thank you for taking the time to comment. It is a little late here, so I'll respond in full to selected comments tomorrow. This forum prevents posting URLs until at least 15 posts have been made, a rule I presume exists to keep spam to a minimum, but you should have no trouble finding quantum tic-tac-toe with a quick Google search. Today's post was partially intended to capture the "forest"; answering your questions and responding to your points will help me articulate each "tree." Like you, I find this area irresistibly interesting. I'm looking forward to a lively exchange.

P.S. How do you get the quotes before your responses? Thanks.
 
  • #46
JesseM said:
Do you have any specific examples of problems where they would be preserved with these kinds of measurements?
One example would be a spin measurement tilted at some angle other than 90 degrees. That will maintain some correlation, yet not commute. Still, the case of 90 degrees does seem to destroy the correlation, but even that leads to some subtleties-- if you can still tell what the outcome of the previous experiment was even after you do the new one, then the correlation is not destroyed. You have to "erase" the information of the first measurement-- but then it will be as if that measurement never happened and your new one will establish the correlation we are talking about. So I don't think you can ever really "destroy" a correlation.
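The tilted-angle example can be checked directly. A short numpy sketch (my construction): for the singlet state, the correlation between spin measurements along axes separated by angle θ is -cos θ, so any noncommuting setting short of 90° still preserves partial correlation, and only the orthogonal case kills it entirely.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], float)
X = np.array([[0, 1], [1, 0]], float)

def spin_op(theta):
    """Spin measurement along an axis tilted by theta from z, in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def correlation(deg):
    """<A(0) x B(theta)> in the singlet state; equals -cos(theta)."""
    O = np.kron(spin_op(0.0), spin_op(np.radians(deg)))
    return float(singlet @ O @ singlet)

for deg in (0, 45, 90):
    print(deg, round(correlation(deg), 3))
# 0  -> -1.0    perfect anticorrelation
# 45 -> -0.707  noncommuting setting, yet correlation survives
# 90 -> 0.0     orthogonal settings: no correlation
```

So "doesn't commute" and "destroys the correlation" are not the same thing; the correlation fades continuously with the angle rather than vanishing the moment the operators fail to commute.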
 
Last edited:
  • #47
AllanGoff said:
P.S. How do you get the quotes before your responses? Thanks.
Click on the box that says "quote" under this line, and I think it will become clear how to get that.
 
  • #48
QTP - The Anomalies in CQM

Ken G said:
I don't see any inconsistency in "the measurement problem", I would call it "the science problem" and liken it to how following Polaris is a good way to go north but a lousy way to go to Polaris. There's nothing to "solve" there. The interpretation problem is also not a problem, because relativity already taught us not to expect the existence of unique intepretations. Collapse is not a problem either, it is like the measurement "problem" and simply stems from the way we choose to do science-- there's no need to solve that either. I don't know what the supercedence problem is, but it sounds like something about quantum erasure and the only problem I see there is in our own unwillingness to let go of ideas that reach the limit of their usefulness, like causality. The nonlocality problem is also nothing that needs solving-- physical systems are indeed nonlocal because they are linked by their history to the rest of the universe, and not in a way that is "stored" locally in the elements of the system.

In my first post I presented several lines of evidence that spacelike causality may be allowed in physics. Each was presented, briefly, and without justification. Ken G took issue with item number 3, the 5 anomalies of classical quantum mechanics (CQM), so I'll respond just to this item.

The relevant background for the claims in this item is the concept of a paradigm, as articulated by Thomas Kuhn in his seminal work, "The Structure of Scientific Revolutions." Indeed, it is this work that provides the modern meaning of the term "paradigm", derived from the Greek word for pattern. For those with an interest in science, and in particular in how physics might change in the future, this is a must read. Kuhn presents a model of scientific advancement at odds with the model we were all presented with in grade school. In his model, any field is dominated by an existing paradigm: it defines the problems of interest, how they are to be attacked, and what a successful solution will look like in general. In this view, scientific problems are seen as puzzles-- problems with guaranteed but unknown solutions. The incremental advance of science occurs as each puzzle is solved. In the course of this process, however, some problems resist solution, even by the greats in the field. If they remain unsolved even when the field has by other measures matured, then they take on the status of anomalies-- problems which are not puzzles. There is no longer a guarantee that solutions exist. In the history of physics, such problems have been the leading clues for the next paradigm shift. Because they are a professional embarrassment, the typical establishment response is to declare them non-problems by fiat. This has the unfortunate effect of killing research in the area, because astute careerists will select other problems to work on. This is part of the reason that paradigm shifts are often achieved by outsiders.

What I'd like to do in the rest of this post is explain why these five unsolved problems deserve the label anomaly.

1. The Measurement Problem. The concept of a measurement is central to the mathematical and conceptual structure of CQM. It is the process by which the state of a quantum system, in general a superposition of possibilities, is reduced to a single classical value. The only problem is that we have no frigg'n clue what causes a measurement. The problem is so severe, and so unexpected, that Penrose calls it the measurement paradox-- a misuse of the term, but indicative of how serious this gap is for the foundations of quantum mechanics. Physicists find themselves in the uncomfortable position of having to admit that a measurement is like good art: "I know one when I see one." In an effort to solve this, someone (I believe it was von Neumann) showed that one could draw the line of measurement anywhere. If beta decay is to be measured, is it the tracks in the bubble chamber that form the measurement? Or the photo of the bubbles? Or when the tech develops the film? Or when the grad student looks at the film? Or when the professor reviews the grad student's work? The infinite regress is hard to avoid. Von Neumann argued that this process could be continued until encountering a conscious observer, and then we didn't know enough to take the process further. This has led some to conclude that measurements require a conscious observer, a dubious conclusion.

In contrast, in the abstract quantum systems we have studied, such as quantum tic-tac-toe, there is an objective measurement process. An entanglement that becomes cyclic is typically the trigger for a measurement, no outside macro system, much less a conscious observer, needs to be invoked. While such systems are abstractions and do not represent real physical systems, they do show that it is plausible that an objective measurement system is the real case in quantum physics. It becomes reasonable therefore to seek one, and this provides a fresh attack on the measurement problem.

Since this has become a long reply, I'll return to the other anomalies at a later time.
 
  • #49
AllanGoff said:
...If they remain unsolved even when the field has by other measures matured, then they take on the status of anomalies, problems which are not puzzles. There is no longer a guarantee that solutions exist. In the history of physics, such problems have been the leading clues for the next paradigm shift.
I don't think this is by any means a statement of how most scientific advancements occur. Far more often than "long-standing or lingering problems", the advances come from stunning new observations that were entirely unexpected. When the unexpected result is found, everyone knows a new theory is needed, and it is then just a matter of coming up with it-- and that rarely requires more than a few decades to half a century. As long as we recognize our theories are just models, and do not have important philosophical implications, we face no difficulties.

A classic example of what I mean is "action at a distance" in Newtonian mechanics. No one was more philosophically bothered by that than the theory's own creator, but there were no observations that created any difficulties at the time. Some went so far as to read in philosophical implications, such as that all of reality was deterministic by virtue of being described completely by Newton's laws. That was a foolish extrapolation, so we are not surprised when "action at a distance" models are found wanting in later more precise observations. Should we say that the philosophical "problem" of action at a distance was evidence all along that we needed a new theory? It's not very meaningful to take that stance, because the "problem" was not sufficient to motivate a successful new theory, observations were needed for that, and furthermore, it is always silly to think that we need "evidence" that some new theory might be better than the one we have now-- we can just accept that as given, without reference to any specific "problems".

Because they are a professional embarrassment, the typical establishment response is to declare them non problems by fiat.
That very rarely happens, it's basically a complete myth. What significant event in the history of science can you point to that suggests such "dismissal by fiat" of challenging observations?
This has the unfortunate effect of killing research in the area because astute careerists will select other problems to work on. This is part of the reason that paradigm shifts are often achieved by outsiders.
More myths. It is every scientist's dream to replace an old paradigm with a new one. The difficulty is not in finding the motivation to do it, or even the support-- it is figuring out how to do it. Wild speculation and protestations of "suppression" generally don't lead there.

More on the other stuff after I've had a chance to see the quantum tic tac toe.
 
  • #50
The Structure of Scientific Revolutions

Ken G said:
I don't think this is by any means a statement of how most scientific advancements occur. Far more often than "long-standing or lingering problems", the advances come from stunning new observations that were entirely unexpected. When the unexpected result is found, everyone knows a new theory is needed, and it is then just a matter of coming up with it-- and that rarely requires more than a few decades to half a century. As long as we recognize our theories are just models, and do not have important philosophical implications, we face no difficulties.

Read Kuhn.
 
  • #51
I'd rather simply look at the history of physics. I presume that's what Kuhn claims to have done, but I submit he was mostly seeing the inside of his glasses.

Of course I won't make that accusation without an effort to back it up. I'll just look at the introduction to Kuhn's views found at the website http://www.des.emory.edu/mfp/kuhnsyn.html,
annotated by my personal impressions of the value of the content:
A scientific community cannot practice its trade without some set of received beliefs.
Painfully obvious, but I'll grant the latitude to start with a meaningless "motherhood remark" to set the stage.
These beliefs form the foundation of the "educational initiation that prepares and licenses the student for professional practice".
Immediately we find a significant error in Kuhn's impression of what science education is about. Kuhn appears to think that science education is solely about propagating a body of scientific knowledge. That is indeed a big part of it, but by no means all. An extremely important aspect of any good science education, which Kuhn seems to miss, is the teaching of the scientific method and how to do science, i.e., how to add to or change that "educational initiation". Rather major oversight there.
The nature of the "rigorous and rigid" preparation helps ensure that the received beliefs are firmly fixed in the student's mind.
Same comment-- Kuhn just doesn't get it. Indeed, one of the most important advantages that science has over, say, religion, which I convey to my students and I know I'm not alone, is that science is allowed to be wrong-- because it is self-correcting and it evolves. In short, it is not "rigid" at all. How could Kuhn miss one of the most important of all elements of science, and still count himself an authority on it? Even in my own short career in astronomy I have witnessed countless examples of the flexibility of science. Sorry Kuhn, that's a miss.
Scientists take great pains to defend the assumption that scientists know what the world is like...To this end, "normal science" will often suppress novelties which undermine its foundations.
Now we find some significant errors in logic. Yes, scientists do attempt to convey a sense that what they have learned is of value, but partly that stems from demonstrated results (men on the Moon, etc.) and partly that is common to all propagated human pursuits. It's a lousy pedagogical stance to start out with "don't take anything I say seriously, it's all basically baloney. Now, here's the syllabus...". The error in the logic is the implication that scientists' efforts to convince students there is value in a body of scientific knowledge somehow provide the reason that "novelties" are suppressed. That is flat false. Any real scientist is quite well aware of why novelties are suppressed-- they are vastly likely to be of no value at all, and most educators have enough trouble getting across what has been proven to be valuable. Why on Earth would any intelligent person look for any reason other than that? Too obvious?
Research is therefore not about discovering the unknown, but rather "a strenuous and devoted attempt to force nature into the conceptual boxes supplied by professional education".
Now the logic takes another step into fantasy land. I thought that people like Kuhn were supposed to understand logic, even if they don't know much physics. This is obviously the fallacy of the neglected middle, where Kuhn says essentially that since scientists don't give equal time to crackpot theories that would completely derail the progress of science, the only other possibility is that they set out entirely to maintain the status quo in scientific thought. To me that sounds like he knows little of either science or logic. How did he get to be so famous? Tell me this summary is way off base, because I'm not impressed.

In my experience, all scientists revere to the point of deification the people who have broken out of the boxes. We recognize that not only are our models limited by our intelligence, but also our intelligence is limited by our models, so we need geniuses to break through those limitations and we strongly encourage such geniuses to step forward and do just that. Unfortunately, there tends to be a concept that anyone who says something that disagrees with the mainstream must be such a genius, even if what they are saying makes no sense at all and doesn't even agree with existing observations. So what value does Kuhn's point really have?
A shift in professional commitments to shared assumptions takes place when an anomaly undermines the basic tenets of the current scientific practice. These shifts are what Kuhn describes as scientific revolutions - "the tradition-shattering complements to the tradition-bound activity of normal science".
This is probably the idea that made Kuhn famous, and here he is actually on to something. Yes, scientific advancement is not always the gradual and steady progress that it is sometimes portrayed by people who know little about it (again, not by any science educators I know). So that point is worth making, and if Kuhn made it first, good for him. Nowadays it is perfectly standard in any scientific education process, even for nonscientists (just look up "Galileo" or "Darwin" in any general education syllabus).

New assumptions –"paradigms" - require the reconstruction of prior assumptions and the re-evaluation of prior facts. This is difficult and time consuming. It is also strongly resisted by the established community.
Again we have an improper insinuation here. This is like saying "tearing down your house and building a new one would be costly and time-consuming, so is strongly resisted by homeowners". The appropriate response to that observation is "duh".

But I guess I'm getting off topic-- perhaps we need a new thread on Kuhn (if there isn't one).
 
Last edited:
  • #52
AllanGoff said:
4. The Grandfather Paradox (and its twin sister argument against FTL, the Shakespeare Indeterminacy) are examples of self-reference. Mathematicians and logicians do not have a good track record in dealing with self-reference.
:confused:

nonlinear logics. ... self-reference might be fundamental to quantum mechanics
Classical logic is capable of treating other logics -- one never has to adopt a different logic as anything but a syntactic description of a traditional mathematical object.

Furthermore, every major 'interesting' logic of which I'm aware is completely subsumed by an ordinary, classical subject. e.g.

Intuitionistic logic is subsumed by topos theory
Constructivism is subsumed by computability theory (at least, some forms are)
Quantum logic is subsumed by C*-algebra
 
  • #53
Forget my remarks on Kuhn; I was probably a bit unnecessarily harsh, and it makes no real difference in this thread because I'm going to argue that we are simply not seeing any paradigm-shift-driving issues here. The issue is what should count as an "anomaly" in a theory, versus the other possible classifications of something left unspecified by a theory, to wit: a limitation of a theory that is of no value to be concerned with until some specific observation points to a problem (as happened to Newton's laws), or a fundamental limitation of science, more so than the theory (as is likely the case with quantum mechanics seen in the Copenhagen interpretation). So we have (at least) three classifications for sticky philosophically unappealing elements of any theory and the resolutions they suggest:
1) anomaly-- get busy fixing it by considering existing observations
2) unconstrained limitation-- it will probably be fixed in the future, but current observations offer no guide, so there is simply no current "action item"
3) fundamental limitation-- don't bother trying to "fix" this, there's nothing to fix.

As an example of each, (1) is like a car with a nasty noise from its engine, (2) is like a car that you wish got 100 miles per gallon, and (3) is like a car that can't fly to the Moon.

So in light of those possibilities, let's look at the interesting issues you raise, issues that indeed come up often in this context:
AllanGoff said:
1. The Measurement Problem. The concept of a measurement is central to the mathematical and conceptual structure of CQM. It is the process by which the state of quantum systems, in general in a superposition of possibilities, is reduced to a single classical value. The only problem is that we have no frigg'n clue what causes a measurement.
I hear this a lot but to me this exposes a common misconception about measurement in quantum mechanics. In my view, there is very little question about what causes a measurement-- it is the decohering of the projections of a wave function onto a particular set of eigenstates. I know that has a lot of jargon in it, but it's really pretty straightforward-- you can always project a wavefunction onto a complete set of basis states, but the amplitudes that describe that projection retain coherences, which means you cannot simply pretend that one of the basis functions is "correct" while the others simply express your lack of knowing that. However, the first step in a measurement is the intentional destruction of those coherences, done expressly so that we can imagine that one of the basis functions is "correct" even if we don't yet know which one (or never look).

You might then ask, but how does the measurement "know" which set of basis states to perform this decoherence with respect to? The answer to that is, the question is being asked backward-- all we know about the measurement is what basis states it decoheres, indeed we chose that measurement expressly because of that property. How it accomplishes the decoherence is what we don't know, but that's not at all unusual in science-- at least we do know why we don't know: we don't know because we have chosen not to track that information (usually it would involve the coupling to macroscopic noise modes that are quite untrackable anyway, but the principle applies any time we simply choose not to track the information, as can occur for one part of an entangled system). So I really don't see any "measurement problem" at all-- it is category (3) above.
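The "destruction of coherences" in that first step can be pictured numerically. Here is a minimal illustration of my own (numpy assumed): a pure superposition has off-diagonal terms in its density matrix that prevent us from pretending one basis state is "correct", and step 1 of a measurement zeroes exactly those terms.

```python
import numpy as np

# A pure superposition (|0> + |1>)/sqrt(2), projected onto the {|0>, |1>} basis
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)               # density matrix with off-diagonal coherences ~0.5

# Step 1 of a measurement: destroy the coherences (zero the off-diagonal terms)
rho_decohered = np.diag(np.diag(rho))

print(rho)             # all entries ~0.5: cannot pretend one basis state is "correct"
print(rho_decohered)   # diagonal ~0.5, off-diagonal 0: now just classical ignorance of the outcome
```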

In an effort to solve this, someone (I believe it was von Neumann) showed that one could draw the line of measurement anywhere. If beta decay is to be measured, is it the tracks in the bubble chamber that form the measurement? Or the photo of the bubbles? Or when the tech develops the film? Or when the grad student looks at the film? Or when the professor reviews the grad student's work? The infinite regress is hard to avoid. Von Neumann argued that this process could be continued until encountering a conscious observer, and then we didn't know enough to take the process further. This has led some to conclude that measurements require a conscious observer, a dubious conclusion.
This is another very common story, but to me what it does is confuse the first step of measurement, described above (and which is a real connection with physical noise modes of an actual apparatus), with the second step, which is the recording of the result in a conscious mind. The second step is indeed a formal step in "measurement" as the term is used in science, but is in no way central to the quantum mechanics of the problem. The quantum mechanics was over in step 1, the destruction of the coherences. Step 2 is no different at all from classical situations like a person playing a shell game and revealing which shell the pea is under. It's under one of them already, by virtue of the decohering of the amplitudes or the lack of need for amplitudes in the first place, but the player just doesn't know which. Why people think quantum mechanics, once the coherences are destroyed by the classical apparatus doing the measurement, is any different from classical physics, is beyond me-- I don't see any problem there other than we have no idea what a conscious mind is doing.

Thus my answer to von Neumann's chain (if it was indeed him) is that the measurement in the quantum mechanical sense (step 1) occurs as soon as the coherences are destroyed, i.e., the first stage of that chain, but the classical meaning of measurement (step 2) is not resolved until some later and less well determined stage-- but that much was already true for the shell game, and quantum mechanics adds nothing to it. I would call this category (2) from above-- when we have a working model of what consciousness is, we can better address this issue, but until we have a greater body of experimental data on that topic, we are shooting blanks and really shouldn't bother ourselves with it at this juncture.

In contrast, in the abstract quantum systems we have studied, such as quantum tic-tac-toe, there is an objective measurement process. An entanglement that becomes cyclic is typically the trigger for a measurement, no outside macro system, much less a conscious observer, needs to be invoked. While such systems are abstractions and do not represent real physical systems, they do show that it is plausible that an objective measurement system is the real case in quantum physics. It becomes reasonable therefore to seek one, and this provides a fresh attack on the measurement problem.
I agree that quantum tic tac toe is an interesting game (congratulations), with some parallels with quantum mechanics that needn't be taken too literally. But given my answer above, I think you are trying to solve a "problem" of category (3). It is already clear to me that measurement in quantum mechanics (step 1 above) is an objective process, very akin to your quantum tic tac toe, and the Copenhagen interpretation already includes that just fine. I really don't know what all the buzz is about (and I know about non-unitariness and so forth, note that I already addressed that when I mentioned all the information that we have chosen not to track when a step-1 measurement occurs). The coupling to a device we can trust to behave classically, and therefore we know we are not going to track the full information of the reality, is an integral part of objective science, there's no other way to do science and therefore there is nothing to fix. I believe that is true to Bohr's way of looking at things.
 
Last edited:
  • #54
AllanGoff said:
2. ... common timelike cause is eliminated by Bell's theorem.
This is not eliminated by Bell's theorem.

Common timelike cause and common spacelike cause are how quantum entanglements are experimentally produced in the first place. There just isn't a generally accepted expression with a visualizable (classical) analog to explain the correlations. What Bell showed is that orthodox quantum mechanics is incompatible with such an explanation.

AllanGoff said:
... this opens the door to considering spacelike causality despite the conceptual hurdles.
Common spacelike causality is already an experimental fact. This has been done to entangle even somewhat large groups of atoms if I'm not mistaken.

The other sort of spacelike causality -- ie. instantaneous action at a distance -- is physically meaningless.

Of course, something is happening instantaneously in EPR-Bell experiments. When the setting at one end or the other is changed, then the global setting (and the probability of joint detection) instantaneously changes. Of course, this angular difference isn't a local object. It's simply an observational perspective.

There isn't any evidence to suggest that ftl or instantaneous actions or connections have anything to do with quantum entanglement. Thus, the appropriate path to take in considering all the stuff related to EPR, Bell, quantum entanglement, etc. is to assume that nature is local -- at least until something a bit more compellingly suggestive of ftl or instantaneous actions or connections is discovered or invented.
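A quick way to see the incompatibility Bell demonstrated (a minimal numpy sketch of my own; `E` is just shorthand for the singlet correlation): any local common-cause model must obey the CHSH bound |S| <= 2, while the quantum prediction at the standard angle choices reaches 2*sqrt(2).

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Singlet correlation for spin measurements along angles a and b."""
    A = np.cos(a) * sz + np.sin(a) * sx
    B = np.cos(b) * sz + np.sin(b) * sx
    return np.real(singlet.conj() @ np.kron(A, B) @ singlet)

# CHSH combination at the standard angle choices
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), beyond the local-common-cause bound of 2
```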
 
Last edited:
  • #55
ThomasT said:
Thus, the appropriate path to take in considering all the stuff related to EPR, Bell, quantum entanglement, etc. is to assume that nature is local -- at least until something a bit more compellingly suggestive of ftl or instantaneous actions or connections is discovered or invented.
I agree, and that's why I object whenever I hear someone claim that Bell-type experiments exhibit a nonlocal influence when a measurement is made. I see it as entirely local influences, being used to intentionally "unpack" nonlocal information. You only run into trouble when you ask "where is the information stored", and combine local thinking with realism. But these are problems for philosophy, not physics, and really just say that we need to tailor successful philosophies more carefully for them to be informed by physics. Philosophies should not, on the other hand, be used to inform physics-- the history of trying that is pretty clear on that point. (Even the principle of relativity, which is often pointed to as a kind of philosophy-informing-physics, is actually just philosophy-informing-form, i.e., informing pedagogy, not physics itself. In my view, anyway-- there's a relativity thread on this which draws much fire for that position and I'd have to say it's still unresolved).
 
Last edited:
  • #56
I don't understand. Isn't it that there is no SIGNIFICANT information sent? Suppose I wish to receive a signal to turn on a lamp, and I have one of two entangled particles. When the particle has an up spin, I am to turn on the lamp. My partner, a couple of lightyears away, decides to do something to his particle to change its spin to down. My particle instantly reacts with an up spin, meaning that I am to turn on my lamp. Isn't information sent here, as primitive a method as it might be?
 
  • #57
Degeneration said:
When the particle has an up spin, I am to turn on the lamp. My partner, a couple of lightyears away, decides to do something to his particle to change its spin to down. My particle instantly reacts with an up spin, meaning that I am to turn on my lamp. Isn't information sent here, as primitive a method as it might be?
No, because your partner cannot "decide" to make his particle be down, and expect that will make your particle be up. If the partner makes a decision and gets a certain spin by design, that would break the entanglement. The entanglement is only unbroken if the partner makes no such decision and simply measures the spin-- but then he has no way to influence whether or not you turn on the lamp. Nothing is transmitted, nonlocal information is simply being "unpacked" by the experiment.

It only seems like a nonlocal "influence" if you imagine that the information being unpacked is somehow stored in the two particles, such that changing that information represents a physical change in both particles, but I would argue that such is a purely philosophical picture that is clearly problematic and retains no value in quantum mechanics, any more than imagining that any wave function is "stored" in the same region of space as it takes on its values. I would say that the place a wave function "resides" is in the mind of the physicist using it, not in the region of space where it takes on its values, and many people may not even realize they are implicitly assuming the latter instead of the former when they agonize over entanglement and delayed choice.
 
Last edited:
  • #58
hello everyone, I am very new to this discussion and i have just a few questions regarding this topic.

1.) How are these particles affected by speed? Do they gain mass? Can that be measured?
2.) Is the communication of these particles affected by gravity, such as the gravity well around massive objects?
3.) What would you expect from this experiment on an entangled pair? One is left here on Earth and the other is placed aboard the International Space Station; both are observed.

Just some random thoughts and questions from a non student. Thanks for your time and information. :)
 
  • #59
Dar Kthulu said:
1.) How are these particles affected by speed? Do they gain mass? Can that be measured?
They don't change rest mass, and if you consider the change in what is known as "relativistic mass", that is just a frame-of-reference issue, not a physical difference that should affect entanglement.
2.) Is the communication of these particles affected by gravity, such as the gravity well around massive objects?
The GR effects should just affect the background spacetime through which the system moves, but I don't see a direct impact on entanglement except perhaps in strong gravity environments where we would need a combined theory of quantum mechanics and gravity.
3.) What would you expect from this experiment on an entangled pair? One is left here on Earth and the other is placed aboard the International Space Station; both are observed.
I think the normal quantum mechanical expectations, referenced to the system proper times, should work fine there.
 
  • #60
Ken G said:
The GR effects should just affect the background spacetime through which the system moves, but I don't see a direct impact on entanglement except perhaps in strong gravity environments where we would need a combined theory of quantum mechanics and gravity.

This is one of those areas where there are some interesting opportunities to consider QM and GR as a pair. If GR is correct, and there is no graviton, then you would certainly expect that entanglement is not affected by a gravitational field. That might not be true, on the other hand, if the graviton exists. There have been a few papers that have speculated on this point. Of course, without a specific QG candidate to work with, it is hard to say too much. But there might be some limits which could be derived to steer a potential candidate theory.

http://arxiv.org/abs/0910.2322

"We propose a thought experiment to detect low-energy Quantum Gravity phenomena using Quantum Optical Information Technologies. Gravitational field perturbations, such as gravitational waves and quantum gravity fluctuations, decohere the entangled photon pairs, revealing the presence of gravitational field fluctuations including those more speculative sources such as compact extra dimensions and the sub-millimetric hypothetical low-energy quantum gravity phenomena and then set a limit for the decoherence of photon bunches and entangled pairs in space detectable with the current astronomical space technology. "
 
  • #61
That's interesting, it would be somewhat ironic if gravity waves are first detected via their interaction with sublimely constructed entangled quantum states, rather than the more brutely classical application of watching them make masses jiggle!
 
  • #62
Suppose we use a minimum of two sets of qubits in isolation, freeze the spin of the entangled particles, and specify that set 1 is used to indicate the start of a message and set 2 is used to send the message. By influencing the spin of the particles at one site and monitoring the spin changes at the other site, why is this not possible? In this manner, would it not be possible to send data over an infinite distance with no delay, and therefore FTL?
 
  • #63
Because you can't check whether the spin was "influenced" by the sender or by your own attempt to check whether it was influenced. Both give you exactly the same result, and so no information is carried.
 
  • #64
Sec. 3 of
http://xxx.lanl.gov/abs/1006.0338
gives a simple explanation of why entanglement cannot be used for FTL signaling.
It also proposes how this inability (to use it for FTL signaling) could, in principle, be overcome.
 
  • #65
So basically the reason FTL communication is not possible using quantum entanglement is this: currently we cannot control the state of the entangled particles, we can only observe the outcomes that nature produces. If we could figure out a way to control the state of these particles, FTL communication would be possible.
 
  • #66
I've read some papers that aimed to use linearly and circularly polarized light as a communication protocol; however, it seems difficult/impossible to distinguish the two when you have to rely on incident photons (e.g. Physics Letters A, Volume 251, Issue 5, 1 February 1999, pp. 294-296). Anyone with an idea?

I recently saw another idea on arXiv.org. I am not able to discover the flaw in his argument, but I suspect that there will be no interference?

http://arxiv.org/abs/1106.2257
 
  • #67
This is a discussion of the Cornwall paper on superluminal communication from

http://arxiv.org/abs/1106.2257

Quantum theory says that whatever you do on one side does not change what you observe on the other:

Total state |Phi> = (|H>|V> + |V>|H>)/sqrt2.

Not using the polarizing filter (no modulation)

rho = |Phi><Phi|
= (1/2) ( |H>|V><H|<V| + |H>|V><V|<H| + |V>|H><H|<V| + |V>|H><V|<H| ).

In order to see what we observe on the left side we have to "trace out" the right side:

rho_left = Tr_right(rho) = (1/2) (|H><H| + |V><V|),

which is either a photon in the mode H or a photon in the mode V, and this mixture gives no interference.

Am I mistaken?
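The partial trace above is easy to check numerically. A minimal NumPy sketch using the same state (the `einsum` index string implements the trace over the right-hand photon):

```python
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# |Phi> = (|H>|V> + |V>|H>) / sqrt(2)
phi = (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2)
rho = np.outer(phi, phi)          # rho = |Phi><Phi|

# Partial trace over the right subsystem: reshape to indices
# (left, right, left', right') and contract the two right indices.
rho4 = rho.reshape(2, 2, 2, 2)
rho_left = np.einsum('irjr->ij', rho4)

print(rho_left)   # (1/2) * identity: a maximally mixed state
```

The reduced state is (1/2)·I regardless of anything done on the other side, which is exactly why no modulation scheme on the right can produce a visible signal on the left.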
 
  • #68
If I understand correctly, then quantum entanglement is explained by the simple fact that two particles behave the same way after being separated.

Take machines A and B, each computing numbers from 1 to 10 and synchronized with each other. Separate the machines and read the output at a given moment in time. We then know what the other machine reads; is this correct?

It is quite another thing to assume that something is propagating through space...
 
  • #69
N468989 said:
If I understand correctly, then quantum entanglement is explained by the simple fact that two particles behave the same way after being separated.

Take machines A and B, each computing numbers from 1 to 10 and synchronized with each other. Separate the machines and read the output at a given moment in time. We then know what the other machine reads; is this correct?

This is true in a sense, and the description you give works fine for identical measurements on the individual particles. But it does not yield a suitable explanation for Bell tests; i.e., it predicts the wrong results. This fact went unnoticed for many years after the EPR paper appeared, until Bell discovered it around 1964.

The best way to think of it is to imagine polarization measurements on a pair of Type II entangled photons, Alice and Bob, at angles 0, 120 and 240 degrees, i.e. 1/3 of the way around a circle. After a while, you will realize that with your example there must be, on average, at least a 1/3 chance that two adjoining measurements (one on Alice, the other on Bob) yield the same value. However, experiments yield a value of 25%, which is in agreement with the quantum expectation value.
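The 25% vs. 1/3 comparison can be made concrete with a short sketch (the cos² rule here is the standard quantum prediction in the convention where identical settings give perfect correlation):

```python
import numpy as np
from itertools import product

# Quantum prediction: for analyzer settings 120 degrees apart, the
# probability that the two photons give the same result is cos^2(120 deg).
qm_match = np.cos(np.radians(120)) ** 2

# Local hidden-variable picture ("synchronized machines"): each pair
# carries predetermined answers at the three angles 0, 120, 240.
# Average the match rate over the three distinct pairs of settings,
# then find the strategy that minimizes it.
def lhv_min_match():
    best = 1.0
    for answers in product([0, 1], repeat=3):   # one predetermined strategy
        pairs = [(0, 1), (0, 2), (1, 2)]
        rate = np.mean([answers[i] == answers[j] for i, j in pairs])
        best = min(best, rate)
    return best

print(round(qm_match, 2))   # 0.25
print(lhv_min_match())      # 0.333... -- no strategy gets below 1/3
```

No assignment of predetermined answers can push the average match rate below 1/3, yet quantum mechanics (and experiment) gives 1/4, which is the contradiction Bell identified.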
 
  • #70
DrChinese said:
This is true in a sense, and the description you give works fine for identical measurements on the individual particles. But it does not yield a suitable explanation for Bell tests; i.e., it predicts the wrong results. This fact went unnoticed for many years after the EPR paper appeared, until Bell discovered it around 1964.

The best way to think of it is to imagine polarization measurements on a pair of Type II entangled photons, Alice and Bob, at angles 0, 120 and 240 degrees, i.e. 1/3 of the way around a circle. After a while, you will realize that with your example there must be, on average, at least a 1/3 chance that two adjoining measurements (one on Alice, the other on Bob) yield the same value. However, experiments yield a value of 25%, which is in agreement with the quantum expectation value.


Agreed. But this leaves the question of how the entangled particles "know" what to do. If the correlation can't be explained in terms of a past interaction, I don't see how you can ever escape from "what I do over here influences what happens over there". I think that's the whole point of Bell's theorem: it's not just that hidden-variable theories must be non-local, but that any theory explaining this must be non-local.
 
