Murray Gell-Mann on Entanglement

In summary: most physicists working in this field agree that when you measure one of the photons, it does something to the other one. That doesn't mean they reject non-locality, though it's a little more subtle than "non-local means measurement-dependent".
  • #211
Shayan.J said:
They're not assuming that a hidden-variables approach is correct. They're just examining what such an approach implies in their case and comparing it with approaches that assume the quantum state is the objective state. As far as I understand, none of them matches your point of view, because you seem to take the quantum state as subjective but at the same time don't assume any underlying theory that gives an objective state.
There is a conflation between the pairs "ontic vs. epistemic" and "objective vs. subjective". My view is that quantum states are epistemic, i.e., they tell you what we know about the system, but they are also objective, because they are defined by (equivalence classes of) preparation procedures. Any experiment/observation describable by QT leads to a unique state due to the preparation procedure; otherwise the preparation procedure is not well defined.
 
  • Like
Likes maline
  • #212
Demystifier said:
Yes, it is a hidden-variable approach. It is not clear to me what you mean by "deterministic" and "determined". Is it the opposite of truly random? Or the opposite of existing only when measured? The difference is very important, because there are models with stochastic (i.e., truly random) hidden variables, which also need to be non-local by the Bell theorem.
An observable is determined if it has a specific value with certainty. In standard quantum theory the state must then be of the form
$$\hat{\rho}_{A=a}=\sum_{\beta} P_{\beta} |a,\beta \rangle \langle a,\beta|,$$
where ##\sum_{\beta} P_{\beta}=1## with ##P_{\beta} \geq 0##, and the ##|a,\beta \rangle## span the eigenspace of ##\hat{A}## for the eigenvalue ##a##.

A theory is called deterministic if complete knowledge of the state implies that all observables are determined. That's not the case for standard quantum theory. Here complete knowledge about the state means that ##P_{\beta_0}=1## for one ##\beta=\beta_0## and ##P_{\beta}=0## for all other ##\beta##, i.e., ##\hat{\rho}_{A=a}=|a,\beta_0 \rangle \langle a,\beta_0|##, which represents a "pure state". But this doesn't imply that all observables have determined values: an observable incompatible with ##\hat{A}## usually does not have a determined value within standard quantum theory.
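As a minimal numerical sketch of these definitions (my own toy example, not part of the post): for a qubit prepared in a ##\sigma_z## eigenstate, ##\sigma_z## is determined (zero variance), while the incompatible observable ##\sigma_x## is not.
[code=python]
# Toy sketch: "determined" means zero variance in the given state.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x

up = np.array([[1], [0]], dtype=complex)         # |z+>, eigenvector of sz
rho = up @ up.conj().T                           # pure state rho = |z+><z+|

def mean_and_variance(rho, A):
    """Expectation <A> = Tr(rho A) and variance <A^2> - <A>^2."""
    mean = np.trace(rho @ A).real
    return mean, np.trace(rho @ A @ A).real - mean**2

print(mean_and_variance(rho, sz))  # (1.0, 0.0) -> sigma_z is determined
print(mean_and_variance(rho, sx))  # (0.0, 1.0) -> sigma_x is not
[/code]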
 
  • #213
atyy said:
Standard QFT requires a Heisenberg cut. This does not mean that any particular system fails to obey QT. It simply means that QT cannot describe the whole universe (unless you have Bohmian Mechanics or MWI).

If you believe there is a Hamiltonian of the universe, then doesn't that mean that you believe there is a wave function of the universe?
No physical theory can describe the whole universe. So I don't care about this. I also deny the need for a cut, because I think all our measurement devices (particularly their classical behavior) are in fact compatible with QT. There's no other dynamics than that provided by QT. The macroscopic observables appear to obey deterministic classical laws, because they are pretty coarse-grained averages over many microscopic states.
 
  • #214
vanhees71 said:
No physical theory can describe the whole universe. So I don't care about this. I also deny the need for a cut, because I think all our measurement devices (particularly their classical behavior) are in fact compatible with QT. There's no other dynamics than that provided by QT. The macroscopic observables appear to obey deterministic classical laws, because they are pretty coarse-grained averages over many microscopic states.

But aren't you contradicting yourself? If your theory does not describe the whole universe, then your theory must have a cut - the part of the universe that your theory describes, and the part that it does not describe.
 
  • #215
In classical mechanics you can in principle describe the whole universe; it's quantum mechanics that introduces major problems in doing that. Since we are discussing the issues of quantum mechanics, I feel it's a little circular to say it's OK because we can't describe the whole universe anyway.
 
  • #216
vanhees71 said:
An observable is determined if it has a specific value with certainty. In standard quantum theory the state must then be of the form
$$\hat{\rho}_{A=a}=\sum_{\beta} P_{\beta} |a,\beta \rangle \langle a,\beta|,$$
where ##\sum_{\beta} P_{\beta}=1## with ##P_{\beta} \geq 0##, and the ##|a,\beta \rangle## span the eigenspace of ##\hat{A}## for the eigenvalue ##a##.

A theory is called deterministic if complete knowledge of the state implies that all observables are determined. That's not the case for standard quantum theory. Here complete knowledge about the state means that ##P_{\beta_0}=1## for one ##\beta=\beta_0## and ##P_{\beta}=0## for all other ##\beta##, i.e., ##\hat{\rho}_{A=a}=|a,\beta_0 \rangle \langle a,\beta_0|##, which represents a "pure state". But this doesn't imply that all observables have determined values: an observable incompatible with ##\hat{A}## usually does not have a determined value within standard quantum theory.
I am still not sure that I understand you correctly, so I will ask an additional question. Suppose that ##x## is known (measured) and consider the following two statements:
1) ##p## has an undetermined value.
2) ##p## does not have a value at all.
In your opinion, are statements 1) and 2) equivalent? If not, which of them is correct?
 
  • #217
I think it's easy to see that QT can't give the most comprehensive description of a physical situation. When Alice detects a downconverted photon, Bob will detect the other downconverted photon in the respective time window (assuming an idealized setup). I suppose no one doubts that this photon detection time is a classical, not-so-hidden variable that QT says nothing about (and if it isn't a classical variable, then we don't need Bell inequality violations to see the non-locality of the physical situation).
 
  • #218
atyy said:
But aren't you contradicting yourself? If your theory does not describe the whole universe, then your theory must have a cut - the part of the universe that your theory describes, and the part that it does not describe.
Again, you use the words with a different meaning from the one I have learned. In the context of QT the "cut" (due to Heisenberg, von Neumann, and others) marks where the quantum dynamics ends and the classical dynamics starts. Since classical dynamics is emergent and describable by coarse graining of the quantum dynamics, I deny the need for separate quantum and classical dynamics, and hence I also deny the existence of a cut.

Of course, in reality we never observe the universe as a whole but only a tiny part of it. After all, all our observations are local!
 
  • #219
Demystifier said:
I am still not sure that I understand you correctly, so I will ask an additional question. Suppose that ##x## is known (measured) and consider the following two statements:
1) ##p## has an undetermined value.
2) ##p## does not have a value at all.
In your opinion, are statements 1) and 2) equivalent? If not, which of them is correct?
Neither ##x## nor ##p## can have a definite value, since both have a continuous spectrum. Given the state ##\hat{\rho}##, the probability distributions for the two observables are
$$P(x)=\langle x|\hat{\rho}|x \rangle, \quad \tilde{P}(p)=\langle p|\hat{\rho}|p \rangle.$$
There's no other meaning in the quantum mechanical state than these probabilities.
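As a discretized sketch of these formulas (my own toy example; the grid and the Gaussian wavepacket are arbitrary choices): for a pure state, ##P(x)## and ##\tilde{P}(p)## are just the squared moduli of the position- and momentum-space wave functions, and both are normalized.
[code=python]
# Toy sketch: P(x) = <x|rho|x> and P~(p) = <p|rho|p> for a Gaussian
# wavepacket, discretized on a grid (hbar = 1).
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2) / np.pi**0.25        # normalized Gaussian wavepacket
P_x = np.abs(psi)**2                         # <x|rho|x> for the pure state

# Momentum-space wave function via FFT (continuum Fourier convention)
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
P_p = np.abs(phi)**2                         # <p|rho|p>

print(np.sum(P_x) * dx)                      # ~1.0: P(x) is normalized
print(np.sum(P_p) * (p[1] - p[0]))           # ~1.0: so is P~(p)
[/code]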
 
  • #220
vanhees71 said:
Neither ##x## nor ##p## can have a definite value, since both have a continuous spectrum.
Then can I ask the same question for spins in ##x## and ##y## directions? If ##\sigma_x## is known, which of the two statements
1) ##\sigma_y## has an undetermined value.
2) ##\sigma_y## does not have a value at all.

is correct?
 
  • #221
The answer is the same. All you can say is that if ##\sigma_x## is determined, the state is ##\hat{\rho}=|\sigma_x \rangle \langle \sigma_x|##, and the probability for getting a value ##\sigma_y## when measuring the ##y## component is
$$P(\sigma_y)=\langle \sigma_y|\hat{\rho}|\sigma_y \rangle=|\langle \sigma_y|\sigma_x \rangle|^2.$$
There's nothing else known about ##\sigma_{y}##.
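As a quick numerical check of this formula (a sketch assuming the standard Pauli matrices): preparing the ##\sigma_x=+1## eigenstate and computing ##|\langle \sigma_y|\sigma_x \rangle|^2## gives exactly 1/2.
[code=python]
# Toy check: P(sigma_y = +1) = |<y+|x+>|^2 = 1/2 for an x-polarized spin.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# eigh returns eigenvalues in ascending order, so column [:, 1] is the
# +1 eigenvector of each matrix.
_, vx = np.linalg.eigh(sx)
_, vy = np.linalg.eigh(sy)
x_plus, y_plus = vx[:, 1], vy[:, 1]

print(abs(np.vdot(y_plus, x_plus))**2)  # 0.5 -- sigma_y completely undetermined
[/code]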
 
  • #222
vanhees71 said:
The answer is the same. All you can say is that if ##\sigma_x## is determined, the state is ##\hat{\rho}=|\sigma_x \rangle \langle \sigma_x|##, and the probability for getting a value ##\sigma_y## when measuring the ##y## component is
$$P(\sigma_y)=\langle \sigma_y|\hat{\rho}|\sigma_y \rangle=|\langle \sigma_y|\sigma_x \rangle|^2.$$
There's nothing else known about ##\sigma_{y}##.
OK, that seems clear enough, so now I can finally respond to your post #204. There you said
"For me the current status of QT rather suggests that there is no such thing as a deterministic underlying state description but QT tells us what we can possibly know about the system."

I agree that there is no such thing as a deterministic underlying state description in standard quantum theory. But I don't think that standard quantum theory is the end of the story. In some better theory, there may be such a thing as a deterministic underlying state description. The PBR theorem, like the Bell theorem, is a theorem about such theories that go beyond standard quantum theory. Like the Bell theorem, it is a no-go theorem: if one wants to construct a theory beyond standard quantum theory, that's fine, but one should not attempt to impose properties which are forbidden by those theorems.
 
  • #223
Sure, that could be, but I'd not hope for a deterministic theory that's more understandable or simpler than quantum theory in any way. It must be nonlocal (according to Bell) and consistent with the relativistic space-time structure (particularly the causality structure), which seems to be pretty tough to construct. I'm not aware of any working nonlocal relativistic classical model at all, let alone one that reproduces the probabilistic predictions of QFT. Maybe there is such a theory, but if so it seems to be very difficult to find!
 
  • #224
vanhees71 said:
Again, you use the words with a different meaning from the one I have learned. In the context of QT the "cut" (due to Heisenberg, von Neumann, and others) marks where the quantum dynamics ends and the classical dynamics starts. Since classical dynamics is emergent and describable by coarse graining of the quantum dynamics, I deny the need for separate quantum and classical dynamics, and hence I also deny the existence of a cut.

Of course, in reality we never observe the universe as a whole but only a tiny part of it. After all, all our observations are local!

But if you are coarse graining, then in principle you do believe that QT does apply to the whole universe.
 
  • #225
No, to explain the functioning of a photodetector almost all of the universe is completely irrelevant. That's the nice practical feature of interactions being local!
 
  • #226
vanhees71 said:
I'm not aware of any working nonlocal relativistic classical model at all, let alone one that reproduces the probabilistic predictions of QFT. Maybe there is such a theory, but if so it seems to be very difficult to find!

Relational blockworld should be doing that...
 
  • #227
Never heard of this. Is it published in a serious peer-reviewed physics journal?
 
  • #229
Well, of course, I meant a physics paper (not many words but many formulae ;-)).
 
  • #230
Papers on interpretations usually don't have that many formulae, because they just use quantum theory in the end. But I think this paper has the average amount of formulae for a foundations article.
 
  • #231
vanhees71 said:
No, to explain the functioning of a photodetector almost all of the universe is completely irrelevant. That's the nice practical feature of interactions being local!

Well, the photodetector is part of the universe - do you think quantum mechanics doesn't apply to everything?
 
  • #232
vanhees71 said:
Sure, that could be, but I'd not hope for a deterministic theory that's more understandable or simpler than quantum theory in any way. It must be nonlocal (according to Bell) and consistent with the relativistic space-time structure (particularly the causality structure), which seems to be pretty tough to construct.
The catch is that it does not need to be consistent with the relativistic space-time structure. It may have a preferred Lorentz frame, such that its existence cannot be observed at a statistical level. It is in fact very easy to construct models with a preferred Lorentz frame with the same predictions as standard quantum theory.

Conceptually, it is analogous to the Lorentz interpretation of Lorentz transformations, in terms of a Lorentz ether.
https://en.wikipedia.org/wiki/Lorentz_ether_theory

vanhees71 said:
I'm not aware of any working nonlocal relativistic classical model at all, let alone one that reproduces the probabilistic predictions of QFT. Maybe there is such a theory, but if so it seems to be very difficult to find!
If true Lorentz invariance is required (not only at the observable statistical level), then it's more difficult. Nevertheless, see
http://lanl.arxiv.org/abs/1205.1992
for an attempt. There is no doubt that the theory is non-local and Lorentz invariant. However, there are some doubts about whether the predictions are really exactly the same as in standard quantum theory. (The paper contains a "proof" that they are, but it has been pointed out to me that the proof contains a gap.) So in principle, some fine deviations (perhaps even measurable with current technology) are possible.
 
Last edited:
  • Like
Likes secur, OCR and vanhees71
  • #233
vanhees71 said:
Again, you use the words with a different meaning from the one I have learned. In the context of QT the "cut" (due to Heisenberg, von Neumann, and others) marks where the quantum dynamics ends and the classical dynamics starts.

I don't think that's the primary "cut" in quantum mechanics. The most fundamental "cut" is between the system and the measuring device (or observer). The main distinction between the two sides of the cut that comes into play in the quantum formalism is that on the "system" side, variables need not have definite values for physical properties--the system can be in a superposition of states having drastically different values for physical properties--while on the "measurement" side, it is assumed that there is a definite value for macroscopic properties such as the position of a pointer. (It's not that important, but for clarity, I should distinguish between quantities that are definite and quantities that are precise. The location of a brick is definite, in the sense that the brick is either here or there, and not in some quantum superposition of the two locations. But the location is not precise, because it doesn't make sense to talk about the location of a brick to an accuracy greater than maybe a centimeter.)

On the "measurement" side, macroscopic quantities simply have values; you don't say that they are observed to have those values. That would be somewhat of an infinite regress: You measure a property of an electron by using some measuring device. If you need a second measuring device to determine the state of the first measuring device, and a third to determine the state of the second, that's an infinite regress.

The two sides of the cut are treated very differently by the quantum formalism. That doesn't necessarily imply that the two sides aren't both described by quantum mechanics, but as I said earlier, it sure isn't obvious that they are. If everything is described by quantum mechanics, then I don't see the need for a cut at all.
 
  • Like
Likes Jilang
  • #234
stevendaryl said:
The two sides of the cut are treated very differently by the quantum formalism. That doesn't necessarily imply that the two sides aren't both described by quantum mechanics, but as I said earlier, it sure isn't obvious that they are. If everything is described by quantum mechanics, then I don't see the need for a cut at all.
Macroscopic observables are operators that are defined as a huge sum over few-particle operators. Hence they have values that are quite precise, in the sense that their uncertainty, as computed by the standard formula, is tiny. This is the main difference between observing the detector and observing an electron.

Thus indeed, no cut is needed at all; the macroscopic size is the difference that determines how the observables are treated.
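As a back-of-the-envelope sketch of this point (my own toy model, not a full treatment): for ##N## independent, identically prepared spins, the mean observable ##M=\frac{1}{N}\sum_i \sigma_x^{(i)}## has variance ##\mathrm{Var}(\sigma_x)/N##, so its uncertainty shrinks like ##1/\sqrt{N}##.
[code=python]
# Toy scaling estimate: the uncertainty of a coarse-grained mean observable
# built from N independent spins falls off like 1/sqrt(N).
import numpy as np

single_spin_variance = 1.0  # Var(sigma_x) = 1 in a sigma_z eigenstate

for N in (1e2, 1e6, 1e23):  # 1e23 is roughly a macroscopic particle number
    delta = np.sqrt(single_spin_variance / N)  # std. deviation of the mean
    print(f"N = {N:.0e}: uncertainty of the mean ~ {delta:.1e}")
# For N ~ 1e23 the uncertainty is ~ 3e-12: "quite precise" in exactly
# the sense described above.
[/code]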
 
Last edited:
  • Like
Likes Jilang
  • #235
Demystifier said:
The catch is that it does not need to be consistent with the relativistic space-time structure. It may have a preferred Lorentz frame, such that its existence cannot be observed at a statistical level. It is in fact very easy to construct models with a preferred Lorentz frame with the same predictions as standard quantum theory.

If true Lorentz invariance is required (not only at the observable statistical level), then it's more difficult. Nevertheless, see ... There is no doubt that the theory is non-local and Lorentz invariant. However, there are some doubts about whether the predictions are really exactly the same as in standard quantum theory.

Nikolic is a well-known pilot-wave theorist, and I've seen this paper before. Many people have been trying for decades to create a Bohmian QFT. AFAIK no one has succeeded yet. Let's face it, it may not be possible, although it's certainly worth a try.

AFAIK your Lorentzian approach, OTOH, really does work, but I'd like to clarify how a preferred frame (LET) solves the problem of "instantaneous" collapse. It seems incompatible, as we all know, with relativity of simultaneity. If the collapse, which extends some distance in space, happens instantaneously (i.e., simultaneously) in one inertial frame, it's not instantaneous in other frames (in general). But with LET, we can assume the collapse is instantaneous only in the preferred frame. Other frames can still do QM calculations as though the collapse were instantaneous (even though it's not, in those frames), getting the same predictions as usual. Is that right?
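For concreteness, here is a one-line check of the relativity-of-simultaneity point (standard special relativity, units with ##c=1##; the velocity and event coordinates are arbitrary choices):
[code=python]
# Two "collapse" events that are simultaneous in the preferred frame are not
# simultaneous in a boosted frame: t' = gamma * (t - v x).
import numpy as np

v = 0.6                      # relative frame velocity (fraction of c)
gamma = 1 / np.sqrt(1 - v**2)

t, x1, x2 = 0.0, 0.0, 1.0    # two events, simultaneous in the preferred frame
print(gamma * (t - v * x1))  #  0.0
print(gamma * (t - v * x2))  # -0.75 -- no longer simultaneous
[/code]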

Note this approach applies, mutatis mutandis, not only to the Copenhagen "collapse" interpretation but to most others as well.

Is this one way to construct a model "with a preferred Lorentz frame with the same predictions as standard quantum theory"?

Parenthetically, IMHO you don't really need to invoke LET; collapse is (or can be viewed as) perfectly consistent with BU, I think, with no modifications, in spite of the above-mentioned apparent incompatibility.
 
  • Like
Likes Demystifier
  • #236
atyy said:
If you know QM, it's about 5 seconds of study.

Tensor networks are basically a pictorial representation of the entanglement structure of a wave function. The pictorial representation of a wave function is similar in spirit to the Penrose pictorial representation of tensors.

Appropriately, some of the tensor networks look like curved space. An important point going beyond looks is that calculations using tensor networks approximate a formula called the Ryu-Takayanagi formula, with the same form as the Hawking formula, which relates the entropy of entanglement to the entropy of a region of space.
Ok, that begins to fill in the gaps, thanks, but it's already more than 5 seconds! Even without understanding the paper, though, I think I can take away the key message: entanglement is the natural state of things, kind of like superpositions in more mundane quantum mechanics. Locality, and parts of systems, and signals propagating around: those are all concepts that emerge only after something has collapsed the entanglements, much as measurements and decoherent macro interactions collapse superpositions. So the EPR objection to entanglement can be seen as similar to Schroedinger's objection to that blasted cat. To me, they are all the mistake of placing our familiar ways of processing information above the necessity to learn new modes of information processing when nature asks us to. It is ironic that we think we need to explain how the collapse of a superposition happens, but somehow we also think we need to understand what maintains entanglement!
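For what it's worth, here is a tiny concrete instance of the "entanglement structure" idea (my own example, far simpler than the tensor networks in the paper): a Schmidt/SVD split of a two-qubit state is the smallest possible tensor network, and the entanglement entropy falls out of the singular values.
[code=python]
# Toy sketch: SVD of a two-qubit amplitude matrix = a two-tensor network
# joined by one bond; the singular values give the entanglement entropy.
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) written as a 2x2 tensor psi[a, b]
psi = np.array([[1, 0], [0, 1]], dtype=complex) / np.sqrt(2)

U, s, Vh = np.linalg.svd(psi)                   # split into two tensors
schmidt = s**2                                  # Schmidt coefficients
entropy = -np.sum(schmidt * np.log2(schmidt))   # entanglement entropy in bits

print(schmidt, entropy)  # [0.5 0.5] 1.0 -- maximally entangled
[/code]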
 
Last edited:
  • #237
vanhees71 said:
There's no other dynamics than that provided by QT. The macroscopic observables appear to obey deterministic classical laws, because they are pretty coarse-grained averages over many microscopic states.

And that works fine at the level of ensembles, but it gives the wrong answer when applied to the result of a single measurement. The issues with collapse aren't 'ensemble' issues; they're single-measurement issues (which all naturally disappear when we "ensembleize" everything).

So if we want QM to say anything about single measurements (which I believe it does), then I don't see how one can avoid the 'cut'. There's an irreversible change between before and after a measurement which can't be explained away for a single system. Of course we can still use the decoherence approach for a single quantum measurement: measured system, measuring system, and a large number of environmental degrees of freedom (treated quantum mechanically) over which we eventually do some coarse-grained averaging (i.e., "classicize"). But we end up with our measured system in a mixture, which is adequate for describing repeated experiments over an ensemble, but which is not the pure state that QM requires after the measurement process for a single measurement.

There's some classical information recorded somewhere every time we perform a single measurement: for example, a single measurement of spin-x on a particle prepared in a known eigenstate of spin-z yields a single classical bit of information. Although I don't know how to formulate this precisely, I would suggest that a measurement occurs only when some real classical information is recorded.
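Here is a toy version of the decoherence account described above (a minimal sketch, not a model of a real detector): a unitary "pre-measurement" entangles the system with a pointer, and tracing out the pointer leaves an improper mixture rather than the post-measurement pure state.
[code=python]
# Toy sketch: unitary pre-measurement + partial trace = mixture, not pure state.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
sys = np.array([1, 1], dtype=complex) / np.sqrt(2)  # system in a superposition

psi = np.kron(sys, ket0)                 # joint state |sys> (x) |ready pointer>

# CNOT copies the system basis into the pointer: |s>|0> -> |s>|s>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi = CNOT @ psi                         # entangled: (|00> + |11>)/sqrt(2)

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (s, e, s', e')
rho_sys = np.trace(rho, axis1=1, axis2=3)            # trace out the pointer

print(np.round(rho_sys.real, 3))         # diag(0.5, 0.5): a mixture
print(np.trace(rho_sys @ rho_sys).real)  # purity 0.5 < 1: not a pure state
[/code]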
 
  • #238
secur said:
Many people have been trying for decades to create a Bohmian QFT.
The fundamental problem is that in standard Bohmian N-particle mechanics and in the various attempts at a Bohmian quantum field theory, different objects are given an ontologically real status, and there is no way to reconcile the two ontologies. So what is really real in the Bohmian approach?
 
  • #239
A. Neumaier said:
The fundamental problem is that in standard Bohmian N-particle mechanics and in the various attempts at a Bohmian quantum field theory, different objects are given an ontologically real status, and there is no way to reconcile the two ontologies. So what is really real in the Bohmian approach?
In the attempts to reconcile the Bohmian approach with QFT, the answer to your question is not unique. The Bohmian approach is quite flexible, so there are different proposals.
 
  • #240
vanhees71 said:
There's no other dynamics than that provided by QT.
How do you know that? Or perhaps you meant that there is no proof for other dynamics than that provided by QT?
 
  • #241
Demystifier said:
How do you know that? Or perhaps you meant that there is no proof for other dynamics than that provided by QT?
Sure, what I meant is that there is not the slightest evidence for a failure of the purely quantum theoretical description.
 
  • #242
vanhees71 said:
Sure, what I meant is that there is not the slightest evidence for a failure of the purely quantum theoretical description.
That's true. But one of the reasons it hasn't failed so far is because it remained agnostic on many interesting questions.
 
  • #243
The trouble is that you don't know how to proceed in theory/model building if there's no empirical evidence. An example is the present situation in HEP physics, where everybody is eager to find a discrepancy between observations at the LHC and the Standard Model. Unfortunately there are none, so it's not clear what the correct extension or modification of the Standard Model is; most physicists hope for one because of some problems of the Standard Model (the naturalness/hierarchy problem, too-weak CP violation, the nature of the "dark matter").
 
  • Like
Likes Demystifier
  • #244
Demystifier said:
because it remained agnostic on many interesting questions.
on many interesting questions that can be checked experimentally? What would be an example?
 
  • #245
vanhees71 said:
Sure, what I meant is that there is not the slightest evidence for a failure of the purely quantum theoretical description.

But the fact that the two halves of a quantum experiment (the system being measured and the system doing the measuring) have such completely different properties according to the quantum formalism suggests to me that the burden of proof should be on the other side: prove that Hamiltonian dynamics is sufficient to account for all phenomena, including measurement processes.

Many-worlds is an attempt to do that. A. Neumaier claims that it can be done without many-worlds (although I don't understand his argument). But it seems to me that some kind of derivation of measurement from Hamiltonian dynamics is needed before you can say that Hamiltonian dynamics applies to everything.

The problem for me is that the standard way that quantum mechanics is done postulates properties for measurement devices and measurement interactions which it does not postulate for single particles, or for any combination of particles. If you have a single electron that is in the spin state ##\frac{1}{\sqrt{2}} (|U\rangle + |D\rangle)##, then it doesn't make any sense to say that it is 50% likely to be spin-up and 50% likely to be spin-down. It is in the definite state "spin-up in the x-direction". If you have an interaction between two electrons, it doesn't make any sense to say that one electron has a 50% chance of observing the other to be spin-up. With a small number of particles, probability doesn't come into play at all. Definite values for dynamical variables don't come into play at all. But if you scale up one of the interacting systems to be a measurement device designed to measure spin, then it becomes unproblematic to say that the measurement device interacting with the electron has a 50% chance of going into the "observed spin-up" state, and a 50% chance of going into the "observed spin-down" state. How did this probabilistic description arise from microscopic interactions that are non-probabilistic?
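As a numerical restatement of this example (standard QM, my own sketch): the state ##\frac{1}{\sqrt{2}}(|U\rangle + |D\rangle)## is a definite ##\sigma_x## eigenstate, yet the Born rule assigns 50/50 weights to the two ##\sigma_z## outcomes; nothing in the unitary formalism itself makes that choice.
[code=python]
# Toy sketch: a definite sigma_x eigenstate with 50/50 Born weights for sigma_z.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|U> + |D>)/sqrt(2)

print(np.allclose(sx @ psi, psi))      # True: eigenstate, "spin-up in x"

# The 50/50 appears only once the measurement postulate is invoked:
print(abs(psi[0])**2, abs(psi[1])**2)  # 0.5 0.5
[/code]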
 