Murray Gell-Mann on Entanglement

In summary: most physicists working in this field agree that when you measure one of the photons it does something to the other one; that doesn't mean they reject non-locality, though it's a little more subtle than "non-local means measurement-dependent".
  • #386
secur said:
Both positions are reasonable, on their own terms, and neither violates current experimental data. To end the fruitless dispute we must simply agree to wait for future experiments to decide. Unfortunately I have no idea how to design such experiments.
What I'm saying is that we already can do experiments to see what is happening here, but the experiments are on the scientists! All we have to do is watch how the scientist uses information, and we can observe exactly where collapse occurs: it occurs when the elements in the density matrix (which is quickly diagonalized by decoherence) are correlated against other information, such as experimental outcomes. The correlations cull the data into subsets, which are regarded as independent and show small standard deviations within those subsets, but of course it was our choice to look only at the subsets in the first place. We can see exactly when the choice to do that culling occurred: it occurred when we culled the data into bins we call "what happened this time or that time." Quantum theory was never built to be culled that way; that's why it looks like collapse.
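For concreteness, here is a minimal numpy sketch of that diagonalization step (the exponential decay of the off-diagonal terms is a toy assumption, not derived from any particular environment):

```python
import numpy as np

# Density matrix of an equal superposition (|0> + |1>)/sqrt(2).
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

# Toy decoherence model: the off-diagonal "coherence" terms decay
# exponentially at rate gamma; the populations stay put.
gamma = 5.0
for t in (0.0, 0.2, 0.5, 1.0):
    decayed = rho.copy()
    decayed[0, 1] *= np.exp(-gamma * t)
    decayed[1, 0] *= np.exp(-gamma * t)
    print(f"t = {t}:")
    print(np.round(decayed.real, 4))
# As t grows the matrix approaches diag(0.5, 0.5): a classical-looking
# mixture. Nothing here picks which outcome occurred; that "culling"
# happens only when we correlate the data against recorded outcomes.
```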
 
  • #387
ddd123 said:
In fact, he says right away that the explanation is that the choices of the different angles belong to different histories - he jumps straight to his own interpretation. I think what we're doing here is a little different, because to argue that there's no "influence between parts" doesn't require any marriage with a specific interpretation but can be held on a more general basis instead, using fewer, and very reasonable, assumptions.
I agree, I think we can do better than just buy off on one interpretation and discard the rest. We can watch the process of ourselves doing experiments and correlating data, and see which parts of that process quantum mechanics is designed to treat, and which parts are coming from us in a more "manual" kind of way. It is the parts we take for granted that create our confusion, so different interpretations get confused at different places because they take different things for granted. Just as you noticed where Gell-Mann plugged in an interpretation, we need to see all of our modes of experimentation and analysis as examples of interpretation choices.
 
  • #388
Ken G said:
The correlations cull the data into subsets, which are regarded as independent and show small standard deviations within those subsets, but of course it was our choice to look only at the subsets in the first place. We can see exactly when the choice to do that culling occurred: it occurred when we culled the data into bins we call "what happened this time or that time." Quantum theory was never built to be culled that way; that's why it looks like collapse.

What do you think about this other paper that was posted earlier: https://arxiv.org/abs/1412.6987

Specifically the argument in section 9.1: do you think it is more or less what you're saying now? Just curious, because I was unable to form an opinion on this paper.
 
  • Like
Likes Ken G
  • #389
I do see some parallels. I believe the author is making the case that Kolmogorov's approach to probability was just one type of analysis, like choosing Euclid's approach for processing geometric information. Neither can be said to be an "absolute" structure that is axiomatic to reality; instead we use them when they work and discard them when they don't. We can also gain an understanding of the requirements needed for them to be useful, but other types of probability analyses may be needed to account for things like irrationality in players of a game-- you could have weird correlations show up that would not appear in a formal analysis in which all players were rational. The parallel I see is that he seems to be saying that there are no "absolute probabilities"; rather, probabilities are what you make of them based on your assumptions and constraints, and more bizarre probability structures may work better in some contexts, though you cannot always tell in advance without very carefully tracking which assumptions are valid. That seems to jibe with the perspective of Scott Aaronson that bhobba often cites-- that you can understand quantum theory by using a probability structure that allows probabilities to be negative at various places in the calculation, but which never end up negative when you combine them into a final result. That would be anti-axiomatic for a Kolmogorov probability structure, much like a triangle with three right angles would be anti-axiomatic for Euclid.

So I agree that many of the paradoxes we get in QT arise when we try to plug square pegs into round holes, as with a particular version of probability theory, but I suspect the problem traces more specifically to us taking for granted certain steps in the data analysis-- steps that we did not think needed to be included in the formalism because they were just so obviously the way we think about things. Perhaps these steps in the scientific method are as obvious to us as Euclidean geometry, so they don't seem to need inclusion in the axioms of the formal system-- making the formal system incomplete and vulnerable to paradoxes like collapses and nonlocal influences.
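One standard construction that makes the Aaronson point concrete (my choice of example, not necessarily his) is the Kirkwood-Dirac quasiprobability: its entries can go negative mid-calculation while every measurable marginal remains an ordinary non-negative probability.

```python
import numpy as np

# Kirkwood-Dirac quasiprobability q(z, x) = Re( <psi|x><x|z><z|psi> )
# for a single qubit, with Z eigenstates |0>, |1> and X eigenstates |+>, |->.
psi = np.array([0.6, -0.8])                     # a normalized real qubit state
z_states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_states = [np.array([1.0, 1.0]) / np.sqrt(2),
            np.array([1.0, -1.0]) / np.sqrt(2)]

q = np.zeros((2, 2))
for i, z in enumerate(z_states):
    for j, x in enumerate(x_states):
        q[i, j] = np.real(np.vdot(psi, x) * np.vdot(x, z) * np.vdot(z, psi))

print(q)              # q[0, 0] = -0.06: a negative "probability" mid-calculation
print(q.sum(axis=1))  # [0.36, 0.64] -- the ordinary Born probabilities for Z
print(q.sum(axis=0))  # [0.02, 0.98] -- the ordinary Born probabilities for X
```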
 
  • #390
secur said:
The essential difference is that A. Neumaier figures the collapse (a.k.a. "collapse", with scare quotes) will happen as soon as the superposed states decohere sufficiently, which will be very quick. Again, to be clear, he doesn't use that language, but - looking at it the Copenhagen way - I think that's a correct statement of his position. Therefore we'll never be averaging over two brick-positions that are macroscopically far apart.

stevendaryl, OTOH, figures the collapse won't happen automatically or spontaneously, just because of decoherence. Some sort of measurement event is required. Therefore we will be averaging over superposed brick positions separated by arbitrarily large macroscopic distances.

I'm not actually making any wild claim about what happens. I believe that whenever you look at a brick, it will be in a more or less definite location (to within some tiny standard deviation). So I believe the same thing as A. Neumaier about what actually happens. The disagreement is over whether what actually happens is (easily) explained by quantum mechanics without collapse.

To me, there are several alternative explanations for why a brick is in a sort-of definite position at all time, and they are all sort-of plausible to me:
  1. The Bohmian explanation: all particles have definite positions at all times, and so of course a brick does, as well.
  2. The Many-Worlds explanation: the brick doesn't actually have a definite position, but within a single "branch" of the universal wave function, it does have a definite position.
  3. The collapse explanation: as soon as you measure a brick's location (or look at it), the brick's wave function collapses into a state of definite position.
A. Neumaier seems to be denying all three possible explanations, and claiming that ordinary quantum mechanics, without collapse, predicts that the brick is in a (more or less) definite location at all times. That to me is completely implausible, and in my opinion, probably provably wrong. (Not provable by me, but maybe by someone smarter than me.)
 
  • #391
vanhees71 said:
It's definitely not true that I think that all the "founding fathers" of QT are wrong or haven't understood their own theory. Ballentine, in my opinion, also follows just standard QT. He's even emphasizing the bare physics content of it, and there's no contradiction between the "minimal interpretation" and the Copenhagen flavor without collapse. As I said, I never understood Bohr completely, but as far as I can see he had a pretty similar view, taking the quantum states as epistemic.

If you believed the quantum state were epistemic, you would not object to collapse, and you would not object to collapse conceived as nonlocal.
 
  • #392
It requires no "belief" to regard the state as epistemic, that can be observed simply by watching a physicist apply the concept. So it only requires belief that the demonstrably epistemic use of the state concept corresponds to, or represents, something ontic. One chooses to either believe that if it works epistemically, there must be an ontic reason for that, or else one looks at things like Newton's force of gravity, which might not seem so ontic after all, and just says "oh yeah, it's a concept."
 
  • #393
ddd123 said:
Yes, it's pretty mysterious how interpretation is the main source of dogmatism among physicists, when it is also the topic with the least chance of being subjected to experimental verification. Not even string theorists are this self-assured.

Actually I'd say string theorists are that self-assured! Anyway it's not that mysterious. It's precisely when your case is weak that you can't give an inch. Lawyers, politicians, rhetoricians and debaters know this well. If you can't blind them with brilliance, obfuscate. And always remember: ad hominem is your friend.

Ken G said:
You are of course right, though Gell-Mann is not known for diplomacy or humility! Still, I think one can go farther and still be correct-- one can add "in my opinion, the key lesson of quantum entanglement, in a theory that of course will eventually be modified or replaced ... the behavior of the full system is not well characterized in the first place by the concept of influences between its parts."

That would be fine also. Just acknowledge that your position is not proven, with a phrase like "in my opinion"; after that you can be as assertive as you like. But when you insist that anyone who disagrees is "loose, crude and wrong" civilized discussion becomes impossible. Think that, but don't say it.

Brief personal aside: both my parents were diplomats, so it's in my DNA :-)

Your "holistic" idea is attractive - something like that must be right. It's more-or-less compatible with any interpretation, if you look at it the right way (although you may not agree). But I don't accept that the state is entirely epistemic. Can't formulate a clear objection yet, though.

stevendaryl said:
I'm not actually making any wild claim about what happens.

Didn't mean to imply you made a wild claim.

stevendaryl said:
The disagreement is over whether what actually happens is (easily) explained by quantum mechanics without collapse.

I figure A. Neumaier must be postulating some process that corresponds to collapse. Something like GRW, maybe. I wish he'd say "IMO there is no collapse, but there is a process which you mistake for collapse. It happens as follows: (*** insert explanation here ***)".

One of these days I'll study his paper on the subject. I glanced at it and no question, it contains a lot of good stuff. If he'd provide some explanatory comments using the standard language it would be easier to absorb. Use terms like what-we-mistakenly-call-collapse (wwmc-collapse) if you like. Compare and contrast to MWI, GRW or whatever, as applicable.

Your objection - assuming I understand it - is correct. Decoherence can (arguably) explain why off-diagonal terms get close to zero, eliminating interference. So far, only unitary evolution is required. But it doesn't address, at all, why we wind up seeing one particular outcome and not others. That's the vital issue.
 
  • #394
Ken G said:
... Gell-Mann is not known for diplomacy or humility!
Well no, I would say he's not... lol
Murray Gell-Mann said:
If I have seen further than others, it is because I am surrounded by dwarfs.
Some more Murray Gell-Mann classic quotes...
secur said:
Actually I'd say string theorists are that self-assured!
Some certainly are ... or were... ?

Continue! - excellent thread! ... it was not my intention to butt in... :blushing:
 
  • #395
OCR said:
Continue! - excellent thread!

Yes - I'd second that - and also add my thanks to the many contributors. Wonderful stuff :smile:
 
  • #396
...looks like anyone can make mistakes
 
  • #397
  • #398
OCR said:
Those quotes give us a clear look at how Gell-Mann thinks about entanglement:
"If on one branch of history, the plane polarization of one photon is measured and thereby specified with certainty, then on the same branch of history the circular polarization of the other photon is also specified with certainty. On a different branch of history the circular polarization of one of the photons may be measured, in which case the circular polarization of both photons is specified with certainty. On each branch, the situation is like that of Bertlmann's socks"

So he sees classical consistent histories woven together into a whole that exhibits bizarre correlations; the entanglement is not between the contributing parts of the system, but rather between the contributing histories of the whole system. It's an interesting take on "holism"-- the "whole thing" is this entangled history. I'm not sure that's any less bizarre than entangling the parts of the system, but either way, the main lesson of entanglement is that the whole is not a simple amalgamation of parts, and the amalgamation is not well characterized by a simple sum with "influences between parts" enforcing the emergent properties. Instead, an "influence" is merely a decohered version of those more general entanglements of histories.

Of course, if one rejects the idea that "histories" can be different things that come together to support a classical concept of a single decohered history (a "collapse" of histories, if you like), then to those people Gell-Mann's view is just as objectionable as Copenhagen's view of collapse. I'll bet Gell-Mann has the same problem with the question "but how does the history we perceive get culled from all the others" that Bohr had with "but how does the outcome we perceive get culled from all those that could have happened." Either way, I return to my earlier conclusion that collapse is culling.
 
  • #399
Thecla said:
"People say loosely ,crudely,wrongly that when you measure one of the photons it does something to the other one. It doesn't."
I am wondering if the interpretation I present below is of any use to anyone other than myself.

A measurement with respect to one particle does not have any effect on any property of the other particle. However, it does affect the probability distribution of the expected value of a future measurement with respect to the other particle.

Consider the following experimental environment. A large collection of urns each has N balls, some white and some black, put into each of them. For each urn, the number of balls of each color put into it, W and B, W+B=N, is randomly selected from a given probability distribution. A measurement involves randomly choosing an urn and randomly drawing K balls without replacement. A second future measurement involves randomly drawing K more balls from the same urn. Looking at the color of the balls from the first drawing has no effect on the color of any of the remaining balls in the urn. However, it does affect the probability distribution of the expected number of white (or black) balls for the second measurement.
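This is easy to simulate; a minimal sketch, with a uniform prior on W standing in for the "given probability distribution" (all parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, trials = 20, 5, 50_000     # balls per urn, balls per draw (arbitrary)

all_second, second_given_first_white = [], []
for _ in range(trials):
    W = rng.integers(0, N + 1)               # compose an urn: W white, N-W black
    urn = np.array([1] * W + [0] * (N - W))
    rng.shuffle(urn)
    first, second = urn[:K], urn[K:2 * K]    # two draws without replacement
    all_second.append(second.sum())
    if first.sum() == K:                     # condition: first draw all white
        second_given_first_white.append(second.sum())

print(np.mean(all_second))                   # ~K/2 whites unconditionally
print(np.mean(second_given_first_white))     # noticeably higher: the first draw
# changed no ball's color, only our expectation for the second draw.
```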

Regards,
Buzz
 
  • #400
Buzz Bloom said:
A second future measurement involves randomly drawing K more balls from the same urn. Looking at the color of the balls from the first drawing has no effect on the color of any of the remaining balls in the urn. However, it does affect the probability distribution of the expected number of white (or black) balls for the second measurement.
It doesn't sound like the ontology you are picturing to illustrate that interpretation would exhibit correlations that violate the Bell inequality. So that's really the rub here-- it's not that we can't picture an ontology that would allow outcomes of one measurement to alter our expectations for another, it's that we can't picture an ontology that does it in a way that violates the Bell inequality without there being any influences between the parts of the system, or something else strange going on (like the whole system being more than the simple sum of its parts).
 
  • #401
Ken G said:
It doesn't sound like the ontology you are picturing to illustrate that interpretation would exhibit correlations that violate the Bell inequality.
Hi Ken:

I confess I may well be misunderstanding the issue, but my intended interpretation of the metaphorical experiment is that the probability distribution for the first measurement metaphorically represents the Bell distribution. The distribution for the second measurement "violates" whatever inequalities might be a consequence of the first distribution.

Regards,
Buzz
 
  • #402
The problem is, the second probability doesn't violate the Bell inequality; the measurements you describe would show correlations between outcomes that satisfy it. Remember, the Bell inequality is not a constraint on individual probabilities, it is a constraint on joint probabilities-- the probability of this and this happening together. It's subtler than having one event change the probability of another; that's what happens with "Bertlmann's socks," where if one sock of a pair is left in the dryer and the other gets into your drawer, then seeing a left sock in the drawer tells you immediately that there is a right sock in the dryer. Probability changes like that don't violate the Bell inequality, so we can imagine that the socks "know their own handedness," if you will. If you have balls that are black or white, and their composition is set in the urns they are being drawn from, then the joint probabilities you get from a system like that (say, the chance that if I drew white on pick N, I'll draw black on pick N+1) won't violate the Bell inequality.
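To make the distinction concrete, here is a minimal Monte Carlo sketch of a local hidden-variable model in the Bertlmann's-socks spirit (the shared variable and the deterministic response rule are arbitrary toy-model choices). Its CHSH combination of joint correlations stays at or below the local bound of 2, while quantum mechanics predicts 2√2 ≈ 2.83 for the same settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
lam = rng.uniform(0, np.pi, n)          # shared hidden variable, one per pair

def outcome(theta, lam):
    # Deterministic local rule: result +/-1 depends only on the local
    # setting theta and the shared lambda (an arbitrary toy-model choice).
    return np.sign(np.cos(2 * (theta - lam)))

def E(a, b):
    # Joint correlation of the two wings' outcomes.
    return np.mean(outcome(a, lam) * outcome(b, lam))

a, a2 = 0.0, np.pi / 4                  # Alice's two settings
b, b2 = np.pi / 8, 3 * np.pi / 8        # Bob's two settings
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # ~2.0 (the local bound); QM predicts 2*sqrt(2) ~ 2.83 here
```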
 
  • Like
Likes Buzz Bloom
  • #403
It's natural that, since our experiences are of a collapsed and disentangled world, we should be a bit befuddled about why there are superpositions and why there are entanglements. But what I find ironic is that quantum mechanics has long since moved past the question of why there are superpositions, and moved on to how the superpositions "collapse," yet we still seem stuck on "what enforces the entanglements." We get all these notions of nonlocal effects and so on, all to try to answer how systems can remain entangled when separated. Why haven't we also moved on to the question of how the disentanglement occurs? There are not going to be any nonlocal issues in producing disentanglement, so if we just accept that systems start out entangled, we are not going to have anything difficult to explain. We don't need to explain why the strange correlations are there; we need to explain why we don't usually encounter those correlations, and see them as strange as a result.
 
  • #404
Ken G said:
Why haven't we also moved on to the question of how the disentanglement occurs?

Isn't that "the measurement problem"?
 
  • #405
ddd123 said:
Isn't that "the measurement problem"?

Decoherence also addresses how and why disentanglement occurs.

Ken G said:
We don't need to explain why the strange correlations are there; we need to explain why we don't usually encounter those correlations, and see them as strange as a result.

We don't "need" to explain the strange correlation; life will go on without it. Nevertheless there is no accepted explanation for the correlation, yet. Quantum theory only defines it - correctly, of course, as shown by experiment. It doesn't address the how or why at all. It's an issue because it requires nonlocal influence, or something equally strange, to implement those correlations. Until that's resolved curiosity demands further explanation.

The reason it's strange is directly addressed by my post in the other thread, https://www.physicsforums.com/threa...tional-in-physics.885480/page-21#post-5578506, except there I use the word "weird" instead.
 
  • Like
Likes vanhees71
  • #406
atyy said:
In this example, for Murray's statement to be true, he would be talking about the reduced density matrix of an observer who only makes a measurement on the other photon.

However, it would be equally right to say that measuring one photon does affect the other photon, since a measurement collapses the wave function of both photons.
Using quantum-information notation: if a photon is in state √½(|0⟩ + |1⟩) and it is measured using Z, it collapses to either |0⟩ or |1⟩ (with probability ½ each).
If a pair of photons is in state √½(|00⟩ + |11⟩) and Alice (left photon) measures with Z, what does the state collapse to?
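By the textbook projection rule the answer is |00⟩ or |11⟩, each with probability ½: Alice's local Z measurement updates the joint state of both photons. A minimal numpy sketch of that bookkeeping (interpretation-agnostic, just the standard collapse rule):

```python
import numpy as np

rng = np.random.default_rng(2)

# Bell state (|00> + |11>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}.
state = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Alice measures Z on the left qubit: P0 = |0><0| (x) I, P1 = |1><1| (x) I.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))

p0 = np.vdot(state, P0 @ state).real    # Born probability of outcome 0 (= 1/2)
outcome = 0 if rng.random() < p0 else 1
post = (P0 if outcome == 0 else P1) @ state
post /= np.linalg.norm(post)            # renormalized post-measurement state

print(outcome, np.round(post.real, 3))  # outcome 0 -> |00>, outcome 1 -> |11>
```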
 
  • #407
stevendaryl said:
Gell-Mann seems to be saying that on this branch of history, Alice measures the circular polarization of her photon, and Bob's photon has a definite circular polarization state (either left-handed or right-handed). On some other branch (one that doesn't actually occur), Alice measured a different property of her photon, and Bob's photon was in some other definite state all along.

I sort of understand this point of view, but it seems a little mysterious, to me. After all, Alice chooses which branch is actual by choosing which measurement to make. (Actually, I guess her choosing a measurement means picking two possible branches; one in which she has a right-handed photon, and one in which she has a left-handed photon. She can't choose which of those she is in, but she can choose not to be in a possible branch in which her photon is linearly polarized.)

In "coherent histories", we have a Block Universe with a wave function that has only "legal" branches: ones in which A & B make measurements that are consistent with QM. As in MWI, all branches always exist. Alice doesn't "choose" anything. There's one branch where Alice incorrectly believes she "chooses" an angle, and others where she incorrectly thinks she "chooses" some other angle to measure. But in fact it's all pre-determined, there's no choice involved. In each branch Bob is also predestined to incorrectly think he makes a "choice". Of course, their measurements are predestined to agree with QM.

Gell-Mann doesn't need "elements of reality" because the measurements don't need any connection to reality (whatever that is). The only requirement, measurements must agree with QM. When Bob gets "spin up" you can incorrectly imagine there was some "cause" if you wish; like, the photon actually was already right-handed. But in fact there's no need for any so-called "cause". They get compatible measurements because these branches just happen to coincide with QM predictions, like all branches do throughout the universe. Why? Gell-Mann gives this comprehensive, detailed, satisfying explanation: "that's just the way QM works!"

The idea is worthless scientifically - even if it happens to be true! - being immune to the slightest shred of proof or disproof.
 
  • Like
Likes zonde and Collin237
  • #408
secur said:
In "coherent histories", we have a Block Universe with a wave function that has only "legal" branches: ones in which A & B make measurements that are consistent with QM. As in MWI, all branches always exist.
That's not correct. In consistent histories, there is only one branch. The wave function is a probability distribution over the possible branches, and one branch will be physically realized with a certain probability. It is completely analogous to the situation in classical Brownian motion: you also get a probability distribution over all possible paths of a particle, but the particle will take only one path, chosen according to that probability distribution. Consistent histories is a direct generalization of the theory of Brownian motion and classical stochastic processes.
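The analogy can be shown in a few lines (a discretized Wiener process standing in for Brownian motion; step count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# A Wiener process assigns probabilities to entire paths, but any single
# run realizes exactly one of them -- the analogy for CH branches.
n_paths, n_steps, dt = 5, 1000, 0.01
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)       # an ensemble of sample paths

realized = paths[0]                         # "the" path this particle took
print(realized[-1])                         # its endpoint...
print(paths[:, -1])                         # ...vs. the alternatives' endpoints
```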
 
  • #409
rubi said:
That's not correct. In consistent histories, there is only one branch. The wave function is a probability distribution over the possible branches, and one branch will be physically realized with a certain probability. ...

You're right - but I'm righter :-) Wikipedia, and modern writers, seem to agree with you. But in fact consistent histories (or whatever we call it) is a "branch" of MWI, in the sense that ours is only one of many alternative realities, as is very clear from the original paper. I found a poor photocopy of it at http://tuvalu.santafe.edu/~mgm/Site/Publications_files/MGM 102.pdf, which is good enough to at least check my quotes.

Gell-Mann, M. & J.B. Hartle, 1990, “Quantum mechanics in the light of quantum cosmology”, in W. H. Zurek (ed.), Complexity, entropy and the physics of information. Redwood City, Calif.: Addison-Wesley, pp. 425–458

"It is an attempt at extension, clarification, and completion of the Everett interpretation."

"decohering sets of alternative histories give a definite meaning to Everett's "branches".

"Thus we can, at best, deal with quasiclassical maximal sets of alternative decohering histories, with trajectories that split and fan out at (sic) a result of the processes that make the decoherence possible. As we stressed earlier, there are no classical domains, only quasiclassical ones".

"mechanisms for decoherence will operate differently in different alternative histories of the universe".

"The histories in which an observer, as part of the universe, measures p and the histories in which that observer measures x are decohering alternatives."

'Everett and others have described this situation, not incorrectly, but in a way that has confused some, by saying that histories are all "equally real" (meaning only that QM prefers none over another except via probabilities) and by referring to "many worlds" instead of "many histories".'

Read that paper. There's no question about the MWI-ness of (original) decoherent histories.

However more recently proponents have de-emphasized this aspect, saying it "generalizes conventional Copenhagen interpretation" (Wikipedia). Yes, that was partly true in the original paper as well. They get their probabilities from Born, bypassing one of MWI's big problems. Wikipedia also notes that:

'In the opinion of others this still does not make a complete theory as no predictions are possible about which set of consistent histories will actually occur. ... However, Griffiths holds the opinion that asking the question of which set of histories will "actually occur" is a misinterpretation of the theory; histories are a tool for description of reality, not separate alternate realities.'

IOW, in the modern flavor, they want to ignore the fact that only one alternative actually seems to occur. This brings up the question, is the original Gell-Mann Hartle paper still authoritative? I'd say it's still applicable. But a physicist can't answer that question. It needs an English professor with a PhD in weasel wording.

I found this revealing thread on stackexchange http://physics.stackexchange.com/qu...rpretation-of-qm-a-many-worlds-interpretation

Question: Is the “consistent histories” interpretation of QM a “many worlds interpretation” in disguise?

Lubos Motl, a proponent, answered:

"People behind Consistent Histories usually admit that their interpretation - my favorite one - is just a refinement of the probabilistic Copenhagen interpretation. Nothing essential has changed; the predictions are still fundamentally probabilistic. Consistent Histories is the framework that incorporated the explanations of decoherence - the key process that calculates the boundary of the classical and quantum world - as the first one (and maybe still only one). Many-worlds interpretation is just a semi-popular psychological framework to think about quantum mechanics - and it hasn't been useful to do any actual, new calculations. One doesn't really know how to extract the numerical values of the probabilities from the many worlds, at least not in a way that would tell us more than any other interpretations."

He denigrates MWI as "just psychological" and emphasizes the Copenhagen connection. He ascribes opinion to "people behind Consistent Histories usually ...". But he doesn't actually deny MW-like alternate realities. You need to be an expert in weasel-wording (like myself) to understand this. But another poster named "understanding", amplifying Motl's comment, gives the game away:

"No, in the many worlds interpretation, every parallel universe is real, but in consistent histories, once you choose your projection operators, only one possibility is real, and all the others are imaginary. ... Why should one world be more real than the others? There is no reason. To copies of you living in a parallel world, they are more real than you are."

"understanding"'s weasel-wording skills are seriously deficient!
 
  • Like
Likes Collin237
  • #410
secur said:
You're right - but I'm righter :-) Wikipedia, and modern writers, seem to agree with you. But in fact consistent histories (or whatever we call it) is a "branch" of MWI, in the sense that ours is only one of many alternative realities, as is very clear from the original paper.
The same is true for the different possible paths of a particle in Brownian motion. The situation is completely analogous. If you wouldn't say that all alternative branches in Brownian motion are equally real, you shouldn't say the same thing about the histories in CH either. If you restrict to a set of commuting observables, CH is even exactly a classical stochastic process and not just analogous to one. Of course you can always say that the alternative paths of a particle in a classical Brownian motion are realized in a parallel universe, but it would be pointless to do so and it would be a non-standard interpretation. The same is true for consistent histories.

(By the way: "It is not easy to ignore Lubos Motl, but it is always worth the effort" (John Baez))
 
  • Like
Likes eloheim
  • #411
rubi said:
The same is true for the different possible paths of a particle in Brownian motion. The situation is completely analogous. If you wouldn't say that all alternative branches in Brownian motion are equally real, you shouldn't say the same thing about the histories in CH either.
Alternative branches in Brownian motion do not show interference effects. So the analogy does not hold.
 
  • #412
zonde said:
Alternative branches in Brownian motion do not show interference effects. So the analogy does not hold.
Alternative branches in CH don't show interference effects either. The analogy holds perfectly.
 
  • #413
rubi said:
Alternative branches in CH don't show interference effects either.
Hmm, then how does CH work out predictions for interference effects?
 
  • #414
zonde said:
Hmm, then how does CH work out predictions for interference effects?
Just like ordinary quantum mechanics. The formulas are identical. There is no interference between the individual branches, but there is interference in each individual branch. What's going on in one branch is completely independent of what happens in another branch. The branches are mutually exclusive, just like the Brownian paths.
 
  • #415
rubi said:
Just like ordinary quantum mechanics. The formulas are identical. There is no interference between the individual branches, but there is interference in each individual branch. What's going on in one branch is completely independent of what happens in another branch. The branches are mutually exclusive, just like the Brownian paths.

Wouldn't you have to invoke an explicit form of nonlocality, as per Bell's theorem, in order to claim that there's a single history that's actually taken? What you'd be left with is some form of Bohmian mechanics, with all its attendant problems.
 
  • #416
rubi said:
The same is true for the different possible paths of a particle in Brownian motion. The situation is completely analogous. If you wouldn't say that all alternative branches in Brownian motion are equally real, you shouldn't say the same thing about the histories in CH either. If you restrict to a set of commuting observables, CH is even exactly a classical stochastic process and not just analogous to one. Of course you can always say that the alternative paths of a particle in a classical Brownian motion are realized in a parallel universe, but it would be pointless to do so and it would be a non-standard interpretation. The same is true for consistent histories.

(By the way: "It is not easy to ignore Lubos Motl, but it is always worth the effort" (John Baez))

My point was really about history, not "histories" :-) Historically, consistent histories began as "an attempt at extension, clarification, and completion" of MWI. Putting that aside, it does seem to have evolved to your position, viz., denying or at least de-emphasizing the "reality" of the alternate branches. @zonde's point is well taken, though. Surely the original impetus for CH (and, I suppose, all interpretations) comes from the need to explain interference. But you can say that any "branches" that are still capable of interfering, because they're coherent enough, are not yet separated. And once they are, only one (ours) still has real existence. This denatured version seems to lack explanatory power, but that's a matter of taste I suppose.

BTW I was probably too hard on Gell-Mann. No one's perfect; why pick on him? Furthermore, although I find the main idea of CH unconvincing, he and Hartle did some good work with the details in that 1990 paper.

The John Baez quote is spot-on.
 
  • #417
MrRobotoToo said:
Wouldn't you have to invoke an explicit form of nonlocality, as per Bell's theorem, in order to claim that there's a single history that's actually taken? What you'd be left with is some form of Bohmian mechanics, with all its attendant problems.

Not exactly.

There are (at least) two different approaches to describing the rules for how a system behaves:
  1. A state-based approach
    • You specify the set of possible states of the system.
    • You specify the rules for one state leading to another state (usually via differential equations).
  2. A history-based approach
    • You just directly specify the set of possible complete histories, and a probability distribution on that set.
The latter approach is how I understand consistent histories. There is a sense in which this approach throws out any notion of "interaction" or "forces". Those notions only come into play in a state-based approach.

I suppose you could re-interpret a history-based approach as a state-based approach by treating "which history you're in" as a "hidden variable". In general, that would be a superdeterministic model, rather than a nonlocal model.
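A toy illustration of the second, history-based style of description (the histories and their weights below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(4)

# History-based description: enumerate complete coarse-grained histories
# directly and put a probability distribution on the set -- no states, no
# equations of motion.
histories = [("L", "L", "L"),   # e.g. coarse-grained position at three times
             ("L", "R", "R"),
             ("R", "L", "L"),
             ("R", "R", "R")]
probs = [0.4, 0.1, 0.1, 0.4]    # must be non-negative and sum to 1

realized = histories[rng.choice(len(histories), p=probs)]
print(realized)                  # one complete history, sampled whole
```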
 
  • Like
Likes rubi
  • #418
MrRobotoToo said:
Wouldn't you have to invoke an explicit form of nonlocality, as per Bell's theorem, in order to claim that there's a single history that's actually taken? What you'd be left with is some form of Bohmian mechanics, with all its attendant problems.
No, consistent histories is a fully local interpretation of QM. It has nothing to do with Bohmian mechanics, which is a hidden variables interpretation. CH doesn't have hidden variables and is as quantum as Copenhagen. However, CH nicely resolves issues like the measurement problem and apparent quantum non-locality. Unfortunately, one can only truly appreciate it with some background in the theory of stochastic processes. If you want to learn more about it, I suggest the book "Consistent Quantum Theory" by Griffiths as an introduction.

secur said:
But you can say that any "branches" that are still capable of interfering, because they're coherent enough, are not yet separated. And once they are, only one (ours) still has real existence. This denatured version seems to lack explanatory power, but that's a matter of taste I suppose.
In order to apply the CH framework, you must first select a set of mutually exclusive histories that you want to take into consideration. (The physics will not depend on this choice; it's much like choosing a coordinate system.) The physically realized history will be compatible with some conjunction of the set of mutually exclusive histories that you chose. However, of course you can make a poor choice of histories to consider, and you would get a better resolution by choosing a set of histories that is adapted to the physical situation (just like you wouldn't choose spherical coordinates for a non-spherically-symmetric situation). Nevertheless, any choice will be consistent with the physics. Now the great thing is that the set of histories is chosen in such a way that there is no interference between the histories, so this problem will automatically not occur, and the formalism will tell you if you made an incorrect choice. If you choose a set of histories that shows interference between the histories, you will get probabilities that don't add up to one, and there is no reasonable way to interpret this.
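Here is a minimal worked instance of that consistency requirement (my own toy construction, not one from Griffiths' book): a qubit subjected to two Hadamard rotations, with Z-basis "histories" at the two intermediate times. The family interferes, the fine-grained and coarse-grained probabilities disagree, and the nonzero off-diagonal decoherence functional flags the bad choice:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard rotation
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]          # Z-basis projectors
psi = np.array([1.0, 0.0])                               # initial state |0>

def branch(a, b):
    # Branch amplitude for the two-time history "Z outcome a after the
    # first rotation, Z outcome b after the second rotation".
    return P[b] @ H @ P[a] @ H @ psi

p = {(a, b): np.linalg.norm(branch(a, b)) ** 2 for a in (0, 1) for b in (0, 1)}
print(sum(p[(a, 0)] for a in (0, 1)))           # 0.5 from the fine-grained sum...
print(np.linalg.norm(P[0] @ H @ H @ psi) ** 2)  # ...but 1.0 when coarse-grained

# The mismatch is due to a nonzero off-diagonal decoherence functional:
print(np.vdot(branch(1, 0), branch(0, 0)))      # 0.25, not 0
# CH therefore rejects this family of histories as inconsistent -- the
# additivity failure described above.
```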

stevendaryl said:
There is a sense in which this approach throws out any notion of "interaction" or "forces". Those notions only come into play in a state-based approach.
I would say that the effect of forces is incorporated into the probability distribution. If you look at classical statistical mechanics, you have Gibbs distributions ##e^{-\beta(\frac{p^2}{2m}+V)}##, where the distribution will be concentrated in the potential well of ##V##. In the same way, the probability distribution on the set of histories will assign smaller probabilities to histories that would run against a potential hill. It's a probabilistic notion of force, like an entropic force.
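A quick numerical illustration of that probabilistic notion of force (the double-well potential below is an arbitrary choice for the sketch):

```python
import numpy as np

# Boltzmann weights e^{-beta * V(x)} over a toy double-well potential:
# the probability mass piles up in the wells, with no force law in sight.
beta = 2.0
x = np.linspace(-3.0, 3.0, 601)
V = (x ** 2 - 1.0) ** 2
w = np.exp(-beta * V)
p = w / (w.sum() * (x[1] - x[0]))    # normalized position distribution

print(x[np.argmax(p)])                # peaks at a minimum of V (x = +/- 1)
```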
 
  • #419
stevendaryl said:
Not exactly.

There are (at least) two different approaches to describing the rules for how a system behaves:
  1. A state-based approach
    • You specify the set of possible states of the system.
    • You specify the rules for one state leading to another state (usually via differential equations).
  2. A history-based approach
    • You just directly specify the set of possible complete histories, and a probability distribution on that set.
The latter approach is how I understand consistent histories. There is a sense in which this approach throws out any notion of "interaction" or "forces". Those notions only come into play in a state-based approach.

I suppose you could re-interpret a history-based approach as a state-based approach by treating "which history you're in" as a "hidden variable". In general, that would be a superdeterministic model, rather than a nonlocal model.
rubi said:
No, consistent histories is a fully local interpretation of QM. It has nothing to do with Bohmian mechanics, which is a hidden variables interpretation. CH doesn't have hidden variables and is as quantum as Copenhagen. However, CH nicely resolves issues like the measurement problem and apparent quantum non-locality. Unfortunately, one can only truly appreciate it with some background in the theory of stochastic processes. If you want to learn more about it, I suggest the book "Consistent Quantum Theory" by Griffiths as an introduction.

What's nagging at me is the analogy being made between consistent histories and Brownian motion. Perhaps the analogy is an accurate one, and my hesitation in accepting it stems merely from my ignorance of CH's detailed formulation. But if I restrict myself instead to a comparison between standard quantum mechanics and Brownian motion, then I feel my skepticism is justified.

No one who has studied physics at any depth will have failed to notice that the Schrödinger equation is formally almost identical to the diffusion equation (or to a reaction-diffusion equation to be more exact), the only differences being that the wave function is complex and there's a factor of i in front of the time derivative. This immediately leads one to wonder if perhaps the Schrödinger equation is describing some sort of diffusive process. This speculation is further bolstered by the realization that an analysis by Feynman diagrams of, say, a particle traveling through some potential is nearly identical to the analysis one would carry out of a classical particle taking a random walk through said potential. In the case of a classical random walk we know that the particle takes only one of the many possible paths that have to be taken into account in calculating the probability distribution. However, in quantum mechanics Bell's theorem prevents us from drawing the same conclusion, unless we're willing to invoke causal nonlocality. Resorting to superdeterminism to get rid of the nonlocal influences strikes me as highly ad hoc ("There is no nonlocal causation, there is merely the appearance of it"). (I'm not denying superdeterminism, however; I'm simply rejecting its use as a Get Out of Jail Free card for those who don't want to be burdened by the implications of nonlocality.)
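For reference, the formal parallel written out explicitly (with ##R(u)## a generic reaction term):
$$\frac{\partial u}{\partial t} = D\,\nabla^2 u + R(u) \qquad\text{vs.}\qquad \frac{\partial \psi}{\partial t} = \frac{i\hbar}{2m}\,\nabla^2\psi - \frac{i}{\hbar}\,V\psi,$$
so the free Schrödinger equation is a "diffusion" equation with the imaginary coefficient ##i\hbar/2m##, and the substitution ##t \to -i\tau## (a Wick rotation) turns it into a genuine diffusion equation with ##D = \hbar/2m##.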
 
  • Like
Likes secur
  • #420
MrRobotoToo said:
No one who has studied physics at any depth will have failed to notice that the Schrödinger equation is formally almost identical to the diffusion equation (or to a reaction-diffusion equation to be more exact), the only differences being that the wave function is complex and there's a factor of i in front of the time derivative. This immediately leads one to wonder if perhaps the Schrödinger equation is describing some sort of diffusive process.

There is another big difference with diffusion, and that is that diffusion is a matter of some substance spreading out in physical space, while a wave function propagates in configuration space. The difference isn't apparent when you're talking about a single particle, but becomes important when you are talking about multiple particles. For two particles, the wave function is a function of 6 variables: [itex]\psi(x_1, y_1, z_1, x_2, y_2, z_2)[/itex] where [itex](x_1, y_1, z_1)[/itex] refers to the location of the first particle, and [itex](x_2, y_2, z_2)[/itex] refers to the location of the second particle. Because it's a function on configuration space, there is no meaning to "the value of the wave function here". So, in spite of the similarity of form, the Schrödinger equation is nothing like a diffusion equation (at least not diffusion through ordinary 3-space).
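The configuration-space point can be made concrete with a quick count of the amplitudes a discretized wave function needs:

```python
# n grid points per spatial axis. One particle in 3-space needs n**3
# amplitudes; two particles need n**6, because the wave function lives on
# 6-dimensional configuration space, not on ordinary 3-space.
n = 32
print(n ** 3)   # 32768 amplitudes for one particle
print(n ** 6)   # 1073741824 amplitudes for two particles
```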
 
  • Like
Likes MrRobotoToo, Jilang and vanhees71
