Murray Gell-Mann on Entanglement

  • Thread starter: Thecla
  • Tags: Entanglement
  • Featured
In summary: most physicists working in this field agree that when you measure one of the photons it does something to the other one, though it's a little more subtle than "non-local means measurement-dependent". Agreeing with Gell-Mann's critics doesn't mean rejecting non-locality.
  • #351
If I may, I think stevendaryl's point is not that it is surprising that a million measurements of a brick's location will average to a number very likely to fall well within the brick, but rather that the individual measurements themselves will yield a distribution that is highly peaked in just this way. My example was meant to show that this is not due to the way we measure the location of bricks, but rather to the way we cull those measurements by correlating them against other information that we generally have access to macroscopically-- information we do not have access to microscopically, and so cannot cull by.
 
  • #352
To continue that point, what it means is that in any situation where measurements on bricks do give a wide standard deviation, we can always attribute that to a lack of complete information about the brick-- we can always imagine having "the lights on" in such a way that we can cull that broad distribution into subsets with much smaller standard deviations. That's just what we cannot do with electrons. So is it that bricks behave differently from electrons, or is the different behavior our own? We analyze the situation differently because we have access to richer information for the bricks, and we use that richer information to correlate the measurements and look at the standard deviations within those correlated subsets. When we have more information, we act differently, and there's the "cut" right there. This doesn't explain why the cut is there, or why we get richer information about bricks than electrons, but it does show where the cut comes from-- it comes from how we think, how we process information, and what we mean by "everything it is possible to know about a system."
 
  • #353
A. Neumaier said:
The second is explained by the law of large numbers and the standard procedures in statistical mechanics.
I don't get where you get large numbers. Do you take all the particles that make up the brick as an ensemble?
 
  • #354
zonde said:
I don't get where you get large numbers. Do you take all the particles that make up the brick as an ensemble?
Not as an ensemble - the ensemble is just a buzzword for the density operator, visualized with a popular picture of many repetitions. In the macroscopic case, where statistical mechanics predicts properties of single systems such as a particular brick, that picture is misleading.

The many particles appear instead in the sums that define the various macroscopic observables!
 
  • #355
If the many particles appear in those sums, then they are certainly not uncorrelated A operators. The brick is a solid object, those measurements have correlations (and consider the significance of that in my cat analog).
 
  • #356
Ken G said:
If the many particles appear in those sums, then they are certainly not uncorrelated A operators. The brick is a solid object, those measurements have correlations (and consider the significance of that in my cat analog).
That's why I included in my statement the phrase ''except that they account (in many important instances) for the typical correlations''. You need to do the real calculations (for the canonical ensemble, say) that accompany the derivation of the thermodynamic limit in every textbook to see that the correlations contribute to a lower order than the sum, so that the simple argument I gave for the uncorrelated case still remains qualitatively valid.

Without the law of large numbers there would be no thermodynamic limit, and thermodynamics would not be valid.
 
  • #357
So what I'm saying is, the "ensemble" concept is also applicable to macro systems, like a bunch of decks of cards that have all been shuffled. The only difference is, we have access to lots of other ways to get information about those various decks of cards, such that we can regard the situation as more than a density matrix if we do access that other information. Ironically, a card player does not have access to that information unless they cheat, so they do in fact treat a single deck exactly as though it were represented by a diagonal density matrix. We only encounter problems when we ask "but what is the deck really doing", or some such thing, but those questions are of no value to the card player-- they are really just errors in failing to track the differences in having information, versus not having information. We should stop thinking that we are talking about the systems, and simply recognize that we are always talking about our information about the system. After all, that is all the scientist ever uses. When you do that, ensembles and density matrices are exactly the same in quantum and classical theory, the latter are just decohered versions. So that's the first type that stevendaryl was talking about-- the second type is just an artifact of the different quality of the information we have access to classically. When we don't have access to that information to cull our results by, then we only get the decoherence type-- and we do get the large standard deviations.
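The card-deck analogy can be made concrete (a sketch; the "density matrix" here is just the classical probability vector over the possible top cards of a shuffled deck):

```python
import numpy as np

rng = np.random.default_rng(3)

# A shuffled 52-card deck: to a non-cheating player, the top card is
# described by a uniform (maximally mixed, diagonal) distribution.
n_cards = 52
rho_diag = np.full(n_cards, 1.0 / n_cards)  # diagonal of the "density matrix"
entropy = -(rho_diag * np.log2(rho_diag)).sum()
print(f"entropy before peeking: {entropy:.2f} bits")

# "Cheating" = correlating with extra information: peek at the top card.
top_card = rng.integers(n_cards)
rho_diag = np.zeros(n_cards)
rho_diag[top_card] = 1.0  # a definite outcome; nothing physical changed
```

Peeking takes the description from maximal ignorance (about 5.7 bits of entropy) to a definite outcome, without any physical change to the deck-- the "collapse" lives entirely in the information we are tracking.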
 
  • #358
A. Neumaier said:
That's why I included in my statement the phrase ''except that they account (in many important instances) for the typical correlations''. You need to do the real calculations (for the canonical ensemble, say) that accompany the derivation of the thermodynamic limit in every textbook to see that the correlations contribute to a lower order than the sum, so that the simple argument I gave for the uncorrelated case still remains valid.
That cannot be true because it doesn't work for the cat analog-- that standard deviation is not small at all.
Without the law of large numbers there would be no thermodynamic limit, and thermodynamics would not be valid.
I certainly agree with that: thermodynamics needs the law of large numbers. But it needs much more: it needs the way we cull by the information we have.
 
  • #359
Ken G said:
we are always talking about our information about the system.
You may be always talking about your information about the system. But physics models the behavior of systems independent of anyone's information. The nuclear processes inside the Sun happen in the way modeled by physics even though nobody has ever looked inside.

From measurements, one can get information about the outside only. But the model is about the inside, and predicts both the inside and what it radiates to the outside.
 
  • #360
Ken G said:
That cannot be true because it doesn't work for the cat analog-- that standard deviation is not small at all.
The single cat is not an ensemble - it remains all the time macroscopic and its measurable aspects therefore have a small standard deviation.
 
  • #361
A. Neumaier said:
You may be always talking about your information about the system. But physics models the behavior of systems independent of anyone's information. The nuclear processes inside the sun happen in the way modeled by physics even though nobody ever looked into this inside.
Ah, but look more carefully at what you are saying here. Does what you mean by the nuclear processes in the Sun include which nucleons have fused and which ones haven't, or do you just mean what you care about the Sun, the total amount of fusion energy that has been released? You have to know what information you care about before you can assert what you mean by the fusion processes, and this contradicts your claim that physics models the behavior independently of our information. Look at an actual model of the core of the Sun, and what you will instantly see is that nowhere in that model does it include which nucleons have fused and which ones haven't, so it's still just a density matrix in that model! We model what we care about, is that not always so? That's why it is always about the information we are choosing to track.
 
  • #362
A. Neumaier said:
The single cat is not an ensemble - it remains all the time macroscopic and its measurable aspects therefore have a small standard deviation.
I'm not talking about our language about the brick, I'm talking about setting up an experiment, and looking at the standard deviation in the outcome. It's purely observational, it's not some picture we are invoking. The experiment is a thousand bricks on trap doors attached to unstable nuclei with a half-life of an hour, followed by measurements of how far the bricks have displaced after 1 hour. That will produce a distribution (in this case bimodal) with a large standard deviation, even though all the systems are prepared identically. The only way to reduce that standard deviation is to cull it based on other information, like turning the lights on and watching which trap doors trigger, and correlating via that new information. Of course that's just what we do, but the problem is-- we forget that that's what we do! We lose track of our own role, the role of how we are culling and correlating the data based on other information we are in some sense taking for granted. But a complete description should never take anything for granted, every step, every correlation being used, must be tabulated and explicitly included. Then there's no difference between classical and quantum systems any more, except the decoherence and the richness of the additional information, both of which are perfectly natural ramifications of macro systems.
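For concreteness, the trap-door experiment is easy to simulate (a quick sketch; the 2 m drop is just an illustrative number):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bricks = 1000

# One-hour half-life, observed after one hour: each brick has
# fallen through its trap door with probability 1/2.
fell = rng.random(n_bricks) < 0.5
drop = 2.0  # displacement in metres of a fallen brick (illustrative)
displacement = np.where(fell, drop, 0.0)

# Pooled over identically prepared systems: bimodal, large spread.
print(f"pooled std: {displacement.std():.2f} m")

# "Lights on": cull by which doors triggered, and the spread vanishes.
print(f"culled stds: {displacement[fell].std()}, {displacement[~fell].std()}")
```

Pooled, the standard deviation is about 1 m, comparable to the drop itself; within each culled bin it is exactly zero-- which is the point about correlating against extra information.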
 
Last edited:
  • #363
Ken G said:
which nucleons have fused and which ones haven't
This is a meaningless statement since nucleons are indistinguishable.
 
  • #364
Ken G said:
The experiment is a thousand bricks
That makes an ensemble of bricks. I was talking about a single brick. Statistical mechanics makes assertions about every single brick.
 
  • #365
secur said:
... the essential peculiarity of QM, compared to classical, is that in order to completely predict results, info beyond the past light cone is required.

A. Neumaier said:
One needs the information on a Cauchy surface, not on the past light cone, to make predictions. More precisely, to predict classically what happens at a point in the future of a given observer, the latter's present defines (at least in sufficiently nice spacetimes) a Cauchy surface where all information must be available to infer the desired information.

First let's get Closed Timelike Curves out of the way. It occurs to me that their presence might vitiate my statement, depending how you look at it. They're not supported by experiment and thoroughly irrelevant to this discussion. So let's ignore such pathological spacetimes.

Then we can, as you say, define a Cauchy Surface for any observer, for instance Alice or Bob in typical Bell experiments. But this contributes nothing to the discussion.

Cauchy Surface is used to formulate an "Initial" Value Problem in GR or SR, to determine a complete solution (both past and future) for an entire space. The spacetime point or event where/when Alice makes her observation is one point on a Cauchy Surface which constitutes her "instant" (loosely speaking) throughout space. The info at that specific point comes only from her past light cone (assuming "forward" time). To predict her result, all the rest of the Cauchy Surface is irrelevant (classically) since by definition it can't causally affect her. If we're interested in solving Einstein Field Equation for the entire block universe we'd need it - but we're not.

So (ignoring Closed Timelike Curves) you're simply wrong. One does NOT need the entire Cauchy Surface (which includes detailed info on the Bullet Cluster, for instance, which won't affect her for 3.7 billion years) to predict what Alice's SG says about her particle today in a lab here on Earth!

A. Neumaier said:
it is no different in quantum mechanics when one makes (probabilistic) predictions.

Sorry, that's irrelevant, since my statement is about exact (implied by the word "completely"), not probabilistic, predictions.

A. Neumaier said:
The apex of the light cone is the point in space-time at which all information needed to do the statistics is available.

Correct. That apex is precisely Alice's measurement's spacetime event. All the info there is determined by her past light cone, and nothing else - classically.

To summarize - arguably, with GR, you can produce a contradiction to my statement. In a pathological spacetime one could argue that classical predictions at a point require info outside the past light cone - maybe. If so, I'll concede the point. The fact one must go to such extremes demonstrates the basic validity of my statement.

Cauchy Surfaces have nothing to do with the discussion of my statement (which, I claim, is quite illuminating), or of Bell-type experiments (excepting GR-related Orch-OR and Joy Christian :-), or of Gell-Mann's video. Let's not muddy the waters with irrelevancies, it's muddy enough already.
 
Last edited:
  • #366
The bottom line of what I'm saying is, the collapse of the wavefunction (the second type stevendaryl was talking about, not decoherence which is mundane) occurs when we lose track of how our minds are processing information, how we are correlating and culling by a lot of information we take for granted as obvious to us. Each interpretation sees what is happening there differently. Many worlds refuses to imagine that our minds are playing a role and making choices about what to track and what to ignore, so then the mind is trapped in a coherent subspace of a much larger but mutually incoherent reality that has no consequences for that subspace. Copenhagen sees the unknowns in what the mind is doing as reality itself, so the collapse is real because the mind just works that way. Bohm sees hidden information that we have no access to that determines all these things. But the scientist only cares about the information processing, so it always comes back to what their mind is doing with the information they have.
 
  • #367
A. Neumaier said:
That makes an ensemble of bricks. I was talking about a single brick. Statistical mechanics makes assertions about every single brick.
Statistical mechanics makes assertions about the location of a brick? How does that work? I could see a claim that it makes assertions about the center of mass of a gas of free particles, but that's the kind of uncorrelated system you were talking about above-- that's not at all a brick. You still need to explain what these A operators are, and how they are uncorrelated in a brick.
A. Neumaier said:
This is a meaningless statement since nucleons are indistinguishable.
But notice that the way we model the core of the Sun does not care if the nucleons are distinguishable or not, which is exactly my point about the information that we choose to track. I wager that nothing you just said about fusion in the Sun would be different if the nuclei were distinguishable, since you never in any way invoked indistinguishability. If that is correct, it follows immediately that your objection is not relevant, it obfuscates the key issue here.
 
Last edited:
  • #368
A. Neumaier said:
If ##A_1,\ldots,A_N## are uncorrelated operators with the same standard deviation ##\sigma## then ##X:=N^{-1}(A_1+\ldots+A_N)## has standard deviation ##N^{-1/2}\sigma##, as a simple calculation reveals. The arguments in statistical mechanics are similar, except that they account (in many important instances) for the typical correlations between the ##A_k##.

Please point out where the argument is faulty. If successful, all books on statistical mechanics must be rewritten to account for your revolutionary insight.

The law of large numbers doesn't imply anything about the possibility or impossibility of superpositions of macroscopically distinguishable states. It doesn't imply anything about whether unitary evolution can result in such a superposition. It doesn't say anything of relevance to this discussion.
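For reference, the uncorrelated-case scaling in the quoted formula itself is not in dispute and is easy to verify numerically (a sketch; note it says nothing about superpositions of macroscopically distinct states, which is the point at issue):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0  # common standard deviation of the individual A_k

# Empirical std of X = (A_1 + ... + A_N)/N for uncorrelated A_k,
# compared with the predicted sigma / sqrt(N).
for n in (100, 10_000):
    x = rng.normal(0.0, sigma, size=(1_000, n)).mean(axis=1)
    print(f"N={n}: empirical {x.std():.4f}, predicted {sigma / np.sqrt(n):.4f}")
```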
 
  • #369
stevendaryl said:
The law of large numbers doesn't imply anything about the possibility or impossibility of superpositions of macroscopically distinguishable states. It doesn't imply anything about whether unitary evolution can result in such a superposition. It doesn't say anything of relevance to this discussion.

The issue is whether there is any reason to believe that the standard deviation of a macroscopic variable such as position must remain small. That seems like a bizarre claim. It's certainly not true classically.

Take the dynamics of some sufficiently complex classical system. In phase space, pick out a small neighborhood. All the relevant physical variables such as position will then have a small standard deviation, if the initial neighborhood is small enough. Now, let the system evolve with time. Typically, for complex systems, the evolution will result in the neighborhood being stretched out and distorted. If the system is ergodic, then that initially compact neighborhood will spread out until it is dense in the subspace of the phase space consisting of all points with the same values for conserved quantities such as energy and angular momentum, etc. There is absolutely no guarantee that the standard deviation will remain small.

I don't know why you [A. Neumaier] think that it will remain small in the case of quantum dynamics.
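The spreading described above is easy to exhibit with a toy chaotic map (a sketch using the logistic map at r = 4, standing in for "some sufficiently complex classical system"):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ensemble of 10,000 points packed into a tiny phase-space neighborhood.
x = 0.4 + 1e-9 * rng.standard_normal(10_000)
spread = [x.std()]
for _ in range(60):
    # Logistic map at r = 4: chaotic; clip guards against
    # floating-point round-up pushing an iterate past 1.0.
    x = np.clip(4.0 * x * (1.0 - x), 0.0, 1.0)
    spread.append(x.std())

print(f"initial std: {spread[0]:.1e}")
print(f"std after 60 steps: {spread[-1]:.3f}")
```

The initial spread of order 1e-9 grows until the ensemble fills the unit interval; the final standard deviation settles at that of the map's invariant (arcsine) density, roughly 0.35, so nothing confines it to remain small.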
 
  • #370
Ken G said:
Statistical mechanics makes assertions about the location of a brick?
about the macroscopic properties of the brick in its rest frame; the motion of the rest frame itself is already given by classical mechanics, which applies to macroscopic objects with good accuracy. If you take a photodetector (with a pointer) in place of a brick, it makes predictions about the pointer location in the detector's rest frame, given the incident current after it has been amplified enough. If you register a microscopic phenomenon, only the little subsystem that magnifies the microscopic event into a macroscopic one needs a more detailed stochastic treatment via decoherence, where the microscopic event is modeled by a true ensemble.
 
  • #371
stevendaryl said:
The law of large numbers doesn't imply anything about the possibility or impossibility of superpositions of macroscopically distinguishable states. It doesn't imply anything about whether unitary evolution can result in such a superposition. It doesn't say anything of relevance to this discussion.
In post #342 you said that ''(2) they have a small standard deviation'' was not explained by decoherence, and I argued that it was explained by statistical mechanics. Why did you bring it up if it wasn't relevant to this discussion?
stevendaryl said:
the standard deviation of a macroscopic variable such as position must remain small. That seems like a bizarre claim. It's certainly not true classically.
Classical statistical mechanics predicts this, too, for the center of mass of a single macroscopic solid body. The trajectory of the center of mass may be a complicated curve, but everyday experience already shows that the uncertainty in predicting the path is tiny, unless one plays billiards or the like, where the motion is ergodic. Even then it holds for short times, almost up to the order of the inverse first Lyapunov exponent. But ergodic motion is not the typical case; if it were, Galilei would never have found the dynamical laws on the basis of which Newton formulated his mechanics. And life would probably be impossible.
 
  • #372
A. Neumaier said:
about the macroscopic properties of the brick in its rest frame; the motion of the rest frame itself is already given by classical mechanics, which applies to macroscopic objects with good accuracy.

The issue is this: Imagine a situation in which the eventual location of the brick is extremely sensitive to initial conditions. You can make up your own example, but maybe the brick is balanced on the end of a pole, and that pole is balanced on its end. The slightest push in any direction will result in the pole falling in that direction. If the setup is sensitive enough, then a random quantum event, such as the decay of a radioactive atom, could be used to influence the final location of the brick.

In such a circumstance, if you try to compute the probability distribution of the final location of the brick, then it will have a sizable standard deviation. So there is definitely no mechanism that confines probability distributions for macroscopic objects to small standard deviations in position.

Now, maybe you want to say that the large standard deviation is due to our ignorance of the details of the initial conditions. That doesn't make any sense to me. What determines the final position is (by hypothesis) whether the atom decays or not. Of course, this is so far talking about a semi-classical notion of "standard deviation", where you treat the brick and pole classically. But I can't see how treating the brick quantum mechanically would make much difference. There is no guarantee that the standard deviation for position of a brick will remain small. It almost certainly will not, in cases where microscopic quantum events are amplified to have macroscopic consequences.
 
  • #373
A. Neumaier said:
In post #342 you said that ''(2) they have a small standard deviation'' was not explained by decoherence, and I argued that it was explained by statistical mechanics. Why did you bring it up if it wasn't relevant to this discussion?

Because what you were saying was false, and I was pointing out that it was false.
 
  • #374
A. Neumaier said:
Classical statistical mechanics predicts this, too, for the center of mass of a single macroscopic solid body.

No, it does not. If you have a chaotic system, and you have an ensemble of systems that initially are confined to a small region of phase space, then as time progresses, that region will spread out. The standard deviation for variables such as position will not remain small.
 
  • #375
A. Neumaier said:
about the macroscopic properties of the brick in its rest frame; the motion of the rest frame itself is already given by classical mechanics, which applies to macroscopic objects with good accuracy.
Exactly, and the way classical mechanics gives that accuracy is by bringing in and correlating with all kinds of extra information from the environment. It won't work at all for the bricks on trap doors unless you correlate the outcomes with that extra information! It's always information processing, even classically.
 
  • #376
stevendaryl said:
No, it does not. If you have a chaotic system, and you have an ensemble of systems that initially are confined to a small region of phase space, then as time progresses, that region will spread out. The standard deviation for variables such as position will not remain small.
Yes, chaos is an excellent alternative to quantum coupling for seeing this effect. In both cases, we always drive down the uncertainty by correlating with additional outside information, just as with the chaos in shuffling a deck of cards. We so automatically say "I don't know what the cards are, but I could if I just gained access to information I am not privy to but which has been determined"-- without even thinking about it or tracking it formally-- that we don't even realize you can always go from a diagonal density matrix to a definite outcome by correlating with additional information: you simply cull the outcomes into bins and, poof, the diagonal density matrix is a bunch of definite outcomes. We don't even seem to realize it is we who have accomplished that "collapse" via information correlation. But we can tell that is true by simply not doing the information correlation-- immediately we are right back to the diagonal density matrix, exactly like actual card players.
 
  • #377
stevendaryl said:
In such a circumstance, if you try to compute the probability distribution of the final location of the brick, then it will have a sizable standard deviation. So there is definitely no mechanism that confines probability distributions for macroscopic objects to small standard deviations in position.
See my answer here.
stevendaryl said:
If you have a chaotic system, and you have an ensemble of systems
Note that I was talking about a single solid body. Statistical mechanics of macroscopic bodies does not make any prediction for probabilities for what happens to a collection of single solid bodies.
 
Last edited:
  • #378
If the single body is a kite in the wind, then classical mechanics does not tell you where the kite will be a minute after its string breaks-- except to within a broad distribution that will have a large standard deviation. If we do a measurement of the kite's location a minute later, the uncertainty we face in predicting that outcome is no more avoidable than the uncertainty in an electron's location in an atom. So it's not about the theories we use, it is about the information we are plugging in as we go along. Decoherence removes the quantum coherences the electron would show, but that's not collapse-- the collapse still happens when we correlate with other information, in either case. Collapse is culling.
 
  • #379
Thecla said:
In this video ... Murray Gell-Mann discusses Quantum Mechanics and at 11:42 he discusses entanglement. At 14:45 he makes the following statement:

"People say loosely, crudely, wrongly that when you measure one of the photons it does something to the other one. It doesn't."

Do most physicists working in this field agree with the above statement ?

It's interesting to note that OP's question is already more-or-less answered by Gell-Mann's quote. Who are these "people" who "loosely, crudely, wrongly" disagree with him? They are, in fact, "physicists working in this field"! As the video mentions, they include other Nobel Prize winners. The truth is, most physicists don't buy his "Consistent Histories", which provides the justification for his stance.

What he should say is something like "Many other highly respected physicists believe this statement, but I don't. Currently there's no experimental evidence one way or the other; it's just a matter of opinion. Time will tell who's right."
 
  • #380
I think the essential disagreement between A. Neumaier vs. stevendaryl and Ken G can be described as follows. I'll use the language of "superposition" and "collapse" - because, when you get right down to it, it's the only interpretation I understand. A. Neumaier doesn't like that language, of course, but I hope he'll agree with the essence of my explanation.

Suppose a random event (radium decay) sends a brick to two very different locations: an unstable pole falls in a random direction, or a trap-door opens / doesn't. Before "collapse" occurs, we compute the average location of the brick (its center-of-mass) over the superposed states represented by ( |radium decays> + |radium doesn't decay> ). Now, stevendaryl figures the standard deviation of this average can become large. A. Neumaier says no, it remains very small by macro-world standards.

The essential difference is that A. Neumaier figures the collapse (a.k.a. "collapse", with scare quotes) will happen as soon as the superposed states decohere sufficiently, which will be very quick. Again, to be clear, he doesn't use that language, but - looking at it the Copenhagen way - I think that's a correct statement of his position. Therefore we'll never be averaging over two brick-positions that are macroscopically far apart.

stevendaryl, OTOH, figures the collapse won't happen automatically or spontaneously, just because of decoherence. Some sort of measurement event is required. Therefore we will be averaging over superposed brick positions separated by arbitrarily large macroscopic distances.

In A. Neumaier's approach we get sensible macro-world answers for brick locations and the standard deviation thereof, but not in the other approach.

Both positions are reasonable, on their own terms, and neither violates current experimental data. To end the fruitless dispute we must simply agree to wait for future experiments to decide. Unfortunately I have no idea how to design such experiments.
 
  • #381
secur said:
What he should say is something like "Many other highly respected physicists believe this statement, but I don't. Currently there's no experimental evidence one way or the other; it's just a matter of opinion. Time will tell who's right."

Yes, it's pretty mysterious how interpretation is the main source of dogmatism among physicists, when it is also the topic with the least chance of being subjected to experimental verification. Not even string theorists are this self-assured.
 
  • #382
secur said:
Both positions are reasonable, on their own terms, and neither violates current experimental data. To end the fruitless dispute we must simply agree to wait for future experiments to decide. Unfortunately I have no idea how to design such experiments.

Stevendaryl's position is more modest though, because he simply says something isn't proven - he doesn't exclude the other possibility.
 
  • #383
secur said:
The essential difference is that A. Neumaier figures the collapse (a.k.a. "collapse", with scare quotes) will happen as soon as the superposed states decohere sufficiently, which will be very quick. Again, to be clear, he doesn't use that language, but - looking at it the Copenhagen way - I think that's a correct statement of his position. Therefore we'll never be averaging over two brick-positions that are macroscopically far apart.
As I understand it, A. Neumaier is arguing about some averaging over all the particles of a macroscopic system ... but I don't see how it is relevant. And it does not seem that anybody else gets it either.
 
  • #384
secur said:
What he should say is something like "Many other highly respected physicists believe this statement, but I don't. Currently there's no experimental evidence one way or the other; it's just a matter of opinion. Time will tell who's right."
You are of course right, though Gell-Mann is not known for diplomacy or humility! Still, I think one can go farther and still be correct-- one can add "in my opinion, the key lesson that quantum entanglement is trying to teach us (and thereby help motivate whatever will replace quantum mechanics, a theory that will of course eventually be modified or replaced-- not because it got the wrong answer for Bell experiments, which it didn't, but because that's what happens in physics) is not that the parts of the system influence other parts in nonlocal ways, but rather that the behavior of the full system is not well characterized in the first place by the concept of influences between its parts."

I don't know if Gell-Mann would agree to this, but it seems to me that the reason entanglement is not well characterized by influences between parts (or at least, it gets awkward in that area) is because the concept of influences between parts is itself a behavior that appears only due to the breakdown of entanglements. So our mission is not to understand how the parts influence each other when entangled, but rather to understand why we get away with imagining that parts influence each other when entanglement is absent. It's like with decoherence, our goal is not to figure out how coherences support superpositions, but rather how interactions diagonalize the density matrix. Only the Bohmians start with the definite outcomes and try to figure out how ignoring the pilot wave produces the illusion of populating coherences across a density matrix-- the rest of us take those off-diagonal coherences for granted and try to figure out how they went away in a measurement!
 
Last edited:
  • #385
Ken G said:
You are of course right, though Gell-Mann is not known for diplomacy or humility!

For me, that isn't really a problem in itself. I haven't read the book Gell-Mann refers to, so I can only go by the posted video, and this is surely a limitation. In the video, Gell-Mann tries to characterize the objection that one can choose, say, the polarizer angle as something "to confuse us". But that is the crux of the whole matter! That's not a confusing objection, it's the problem that an interpretation must answer. It's fine if he has a really strong argument against some position, but first he must acknowledge the position. In fact, he says right away that the explanation is that the choices of the different angles belong to different histories-- he jumps straight to his own interpretation. I think what we're doing here is a little different, because arguing that there's no "influence between parts" doesn't require marriage to a specific interpretation; it can be held on a more general basis, resting on only a few very reasonable assumptions.
 