When Did Einstein First Encounter Quantum Entanglement?

In summary: To me, the concept of entanglement sounds like an epiphany. I'm sure I can't find one specific moment for it, but I'd like to get closer, and I'd like your help. So far, I see that Einstein had issues with Born's matrix mechanics (1925), among other things, which heated up the Bohr-Einstein debates. A decade passed before the EPR paradox. So somewhere in there, I assume, Einstein noticed that QM does not allow the spin to be set at the emitter. I'm not sure how he recognized that. Even that might not go back far enough for me. Somebody figured out...
  • #36
DrChinese said:
Experiments show that particles can be entangled that have never interacted. ...
harrylin said:
That's really surprising... example please! :cool:

I find it surprising and fascinating too. To avoid derailing this thread, which is mostly about the history of entanglement, I have started a new thread on this question here: https://www.physicsforums.com/showthread.php?t=473822
 
  • #37
DrChinese said:
The problem with this statement is that it is false. Experiments show that particles can be entangled that have never interacted. ... Also, particles can become entangled after they are detected.
This is interesting, but whether it precludes a common cause understanding depends on how the entanglements were produced. Any readily available references you can post?

DrChinese said:
Also, I really think you should drop the "counter-propagating" lingo as it really has no place in the discussion. :smile:
Should I stop referring to the Aspect 1982 experiment(s)?

Edit: I just located your Entangled "Frankenstein" Photons paper, which includes references to the experiments you mentioned, and posted the links to them in yuiop's new thread on this.
 
  • #38
ThomasT said:
Counterpropagating photons are a subset of all the photon pairs that are emitted via the cascade process in Aspect's 1982 experiment. Is it the fact that they're emitted in opposite directions during the same quantum transition that makes the law of conservation of angular momentum applicable, and thus renders them entangled in polarization, or are all emitted pairs, whatever the relative propagation directions of the individual photons that comprise each pair, entangled in polarization as well?
I would imagine the key issue is the conservation laws-- entanglement just means that there is a constraint on the system as a whole that impacts the possible outcomes of its various parts. So the entanglement "comes from" the same place as whatever is imposing the conservation law, and exists whenever the conservation law does. In Newtonian physics we can show where the conservation laws come from (usually some form of action/reaction forces), but in quantum mechanics, the conservation laws come from the algebra of the matrix elements that tell us what things can happen. There's probably an even more fundamental source than that, but I don't know if quantum mechanics identifies it, beyond the usual Noether's theorem (conservation laws come from symmetries). So I guess we could say that entanglements also come from symmetries, and the difficulty in breaking them.
The import, or point, of what I'm saying is that quantum entanglement is ultimately traceable to some sort of common cause -- whether via mutual interaction of, or common origin of, or the application of an identical torque to, the entangled entities.
I wouldn't say that is or is not useful, because it's the kind of thing that works for each person or not. I would just caution that the language of causation is tricky in quantum mechanics-- things that happen tend to be because of constructive interference among all the ways they can happen, but shall we say that the happening is caused by constructive interference, or is the constructive interference just our mathematical test that it will in fact happen? I don't really know what a fundamental cause is at the elementary level.
Ok, but it's a start. If I think in terms of the joint polarizer setting as measuring a relationship between the entangled photons, and that this relationship is produced via the emission process, then the observed correlation between joint settings and joint detections isn't surprising or strange, and I don't need to assume that the entangled photons are communicating with each other via some unknown FTL or nonlocal process.
I agree there, I always rejected language that entanglement involves "communication between the parts"-- instead I would tend to simply say that entanglement is an example of the breakdown of the entire concept that a system is "made of parts." A system is a system, period-- the concept of parts is an approximate notion, largely due to classical experience.
Why does qm correctly model entanglement? I would say that it's because qm takes into account all of the relevant relationships that produce the joint correlations.
I'm fine with that.
 
  • #39
Ken G said:
No, there's nothing wrong with thinking about entanglement as a relationship-- but that's quite a vague characterization. There's nothing weird about relationships writ large-- the point about entanglement is that it is a weird type of relationship, a type that shows up nowhere else in our experience. To say it is weird is not to say that it is unusual or of secondary importance, it just means we never anticipated it from anything we observe in our daily lives. That's because we never observe quanta-- everything we observe is an aggregate property of very many quanta, which loses any sense of correlation between individual events. The guts of entanglement is a very fine-grained level of information about correlations that we never even knew existed when all we saw was averages over ensemble aggregates.

Indeed. If you allow me, I will elaborate a bit on this (and I will come back to my favourite statement that the "weirdness of entanglement" is simply the weirdness of superposition, in a dramatic setting where it is harder to sneak out from).

The weirdness of superposition comes about from the DIFFERENCE between "superposition of states" and "statistical mixture of states". If I say: "All of quantum mechanics' bizarreness comes from this single aspect" I think I'm not exaggerating. It is why I find any "information approach" to quantum mechanics pedagogically dangerous, because it is again hiding the essential part.

There is a fundamental difference between:
our system is in the quantum superposition |A> + |B>
and
our system has 50% chance to be in state A, and 50% chance to be in state B.

Very, very often, the two concepts are confused, sometimes on purpose, sometimes inadvertently, and this is a pity because you are then missing the essential part.

The reason this confusion is made so often is that *IF YOU ARE GOING TO LOOK AT THE SYSTEM* and you are going to try to find out whether it is in state A or in state B, then the behaviour, the outcomes, of the two statements are identical.
*IF* you are limiting yourself to the "measurement basis" containing A and B states, then there is no observable difference between:
"our system is in quantum superposition |A> + |B>" and "our system has 50% chance to be in state A and 50% chance to be in state B".
All observations will be identical... as long as we remain in the basis (A,B...), and quantum mechanics then reduces to a fancy way of dealing with statistical ensembles of systems.
Whether we consider those probabilities to be "physical" or just due to our "ignorance" doesn't matter.

But.

The superposition |A> + |B> behaves dramatically differently from the mixture 50% A and 50% B when we go to another, incompatible, observation basis. There is NO WAY in which a mixture of 50% A and 50% B can explain the statistics of observations on a superposition |A> + |B> in another basis.

And that is what the 2-slit experiment demonstrates: you cannot consider the particles to be a mixture of 2 populations, one that went through slit 1, and one that went through slit 2, when you look at the interference pattern on the screen. When you only measure directly behind the slits, you are still in the "slit basis" and you can still pretend that you have the same results as if we actually had a statistical mixture of 2 populations: 50% "slit 1" and 50% "slit 2". But when you "change basis" and you go looking at the screen, that doesn't work any more.

In other words, as long as we work in one basis, we can still confuse "superposition" with "statistical mixture". From the moment we change basis, we can't any more, and the weird properties of superposition set in. They are weird exactly because they do NOT correspond to what we would have with a statistical mixture.
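To make that distinction concrete, here is a minimal numerical sketch of my own (not part of the original post), comparing the density matrices of the superposition |A> + |B> and the 50/50 mixture, first in the (A, B) basis and then in a rotated basis. The choice of rotated basis is just an illustrative assumption:

```python
import numpy as np

# Basis states of the original ("measurement") basis
A = np.array([1.0, 0.0])          # |A>
B = np.array([0.0, 1.0])          # |B>

# Superposition (|A> + |B>)/sqrt(2) as a density matrix
psi = (A + B) / np.sqrt(2)
rho_super = np.outer(psi, psi)

# Statistical mixture: 50% A, 50% B
rho_mix = 0.5 * np.outer(A, A) + 0.5 * np.outer(B, B)

# Measuring in the (A, B) basis: the diagonal elements are the outcome probabilities
print(np.diag(rho_super))         # [0.5 0.5]
print(np.diag(rho_mix))           # [0.5 0.5]  -> indistinguishable in this basis

# Rotate to the basis |A'> = (|A>+|B>)/sqrt(2), |B'> = (|A>-|B>)/sqrt(2)
# (U is real and orthogonal, so U.T is its inverse)
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(np.diag(U @ rho_super @ U.T))   # [1. 0.]   -> the superposition always gives |A'>
print(np.diag(U @ rho_mix @ U.T))     # [0.5 0.5] -> the mixture stays 50/50
```

In the original basis the two objects are indistinguishable; in the rotated basis they predict completely different statistics, which is the whole point being made above.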

And now we come to entanglement, and the difference with statistical correlations.

The funny thing about entanglement is NOT that there are correlations between particles. There's nothing strange about having correlations between particles. Yes, interaction (classical interaction) CAN provide for correlations. If we have balls of different colours, and we cut them in 2, and send the halves to two different places, we won't be surprised that there is a correlation between the colours: that when there is half a red ball at Alice's place, there is also half a red ball at Bob's place. We are used to statistical correlations of distant events if they have a common origin.

So the fact that the spins are opposite is, in itself, nothing special.

If we consider the entangled state:

|spin z up> | spin z down> - |spin z down> |spin z up>

then there's nothing surprising about the spin at Alice's being opposite to the spin at Bob's.

The above superposition (entanglement because it is a 2-particle system) is indistinguishable from the normal, classical CORRELATED event set:

50% chance to have the couple (up down) and 50% chance to have the couple (down up).

It is only when we are going to CHANGE BASIS, and when we are going to look at the spin correlations with the axes in different directions (relative to each other), that the outcomes are NOT compatible any more with a statistical ensemble (in essence, that's Bell's theorem). Just as in the 2-slit experiment.

We are now again confronted with the fact that a superposition of states is NOT the same as a statistical ensemble of states, but this difference is only revealed when we change the observation basis away from the one in which the superposition was written.

Any process that could classically create a correlation between quantities could eventually also give rise to an entangled state. It is not the correlation of variables by itself that is surprising. We are used to having statistical correlations due to interactions. What is surprising (again) is that we have a superposition of states, which doesn't behave as a statistical ensemble, if we can measure it in a "rotated" basis.
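As a concrete illustration of that last point (again my own sketch, not from the original post), the following snippet compares the spin correlations of the singlet superposition above with those of the 50/50 (up, down)/(down, up) mixture, first along the original z axis and then along rotated axes. The particular angles are just an example:

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet superposition (|up,down> - |down,up>)/sqrt(2), written in the z basis
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho_singlet = np.outer(psi, psi)

# "Classical" correlated mixture: 50% (up, down) and 50% (down, up)
rho_mix = 0.5 * np.outer(np.kron(up, down), np.kron(up, down)) \
        + 0.5 * np.outer(np.kron(down, up), np.kron(down, up))

def spin(theta):
    """Spin observable along an axis tilted by angle theta from z (in the x-z plane)."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(rho, a, b):
    """Correlation <S_a (x) S_b> for the two-particle state rho."""
    return float(np.trace(rho @ np.kron(spin(a), spin(b))).real)

# Same axis as the one the state was written in: both give perfect anticorrelation
print(E(rho_singlet, 0.0, 0.0), E(rho_mix, 0.0, 0.0))   # -1.0 -1.0

# Rotated axes: the singlet gives -cos(a-b), the mixture only -cos(a)*cos(b)
a, b = np.pi / 4, -np.pi / 4
print(E(rho_singlet, a, b))   # ~  0.0  (superposition / QM prediction)
print(E(rho_mix, a, b))       # ~ -0.5  (what the statistical mixture would give)
```

Along the original axis the two descriptions agree; only the rotated-axis correlations separate them, which is exactly what Bell-type experiments exploit.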

And now the point is that the more complicated your system is, the more involved the entanglement, the harder it is to do an observation in a rotated basis. In fact, from a certain amount of complexity onwards, you do not really practically have access any more to a rotated basis. You are forced to work in a compatible basis with the original one. And when that happens, there IS no observational difference any more between a superposition (a complicated entanglement) and a statistical mixture. You can pretend, from that point onward, for all practical purposes, that your system is now in a statistical mixture. It will lead observationally to the correct results. You won't be able, practically, to do an experiment that contradicts thinking of your system as a classical statistical mixture of basis states. That's the essence of decoherence, and the reason why we are macroscopically only observing "genuine statistical mixtures" and no complicated quantum entanglements.

And that is why entanglement experiments that demonstrate genuine entanglement, by SHOWING that the outcomes are different from what can be explained by a statistical mixture, are difficult and usually limited to a very small number of system components.

So again: the weird thing is superposition, and its difference from potential statistical mixtures (the fact that stochastic outcomes of measurements on superpositions cannot be explained by statistical mixtures).

Entanglement is a special kind of superposition, one which involves 2 or more ("distant" for more drama) systems, and entanglement's strangeness comes about because of the difference between its results and normal statistical correlations in a statistical mixture, a difference which can only be shown when we measure in a different basis than the one we set up the entanglement in.
 
  • #40
vanesch said:
[..]
The weirdness of superposition comes about from the DIFFERENCE between "superposition of states" and "statistical mixture of states". If I say: "All of quantum mechanics' bizarreness comes from this single aspect" I think I'm not exaggerating. It is why I find any "information approach" to quantum mechanics pedagogically dangerous, because it is again hiding the essential part.

There is a fundamental difference between:
our system is in the quantum superposition |A> + |B>
and
our system has 50% chance to be in state A, and 50% chance to be in state B.
[..]
And that is what the 2-slit experiment demonstrates: you cannot consider the particles to be a mixture of 2 populations, one that went through slit 1, and one that went through slit 2, when you look at the interference pattern on the screen. When you only measure directly behind the slits, you are still in the "slit basis" and you can still pretend that you have the same results as if we actually had a statistical mixture of 2 populations: 50% "slit 1" and 50% "slit 2". But when you "change basis" and you go looking at the screen, that doesn't work any more. [..]

And now we come to entanglement, and the difference with statistical correlations.

The funny thing about entanglement is NOT that there are correlations between particles. There's nothing strange about having correlations between particles. Yes, interaction (classical interaction) CAN provide for correlations. If we have balls of different colours, and we cut them in 2, and send the halves to two different places, we won't be surprised that there is a correlation between the colours: that when there is half a red ball at Alice's place, there is also half a red ball at Bob's place. We are used to statistical correlations of distant events if they have a common origin.
[..]
It is only when we are going to CHANGE BASIS, and when we are going to look at the spin correlations with the axes in different directions (relative to each other), that the outcomes are NOT compatible any more with a statistical ensemble (in essence, that's Bell's theorem). Just as in the 2-slit experiment.
[..]
Entanglement is a special kind of superposition, one which involves 2 or more ("distant" for more drama) systems, and entanglement's strangeness comes about because of the difference between its results and normal statistical correlations in a statistical mixture, a difference which can only be shown when we measure in a different basis than the one we set up the entanglement in.

Thanks for this clear résumé, I will ponder it! :smile:
 
  • #41
vanesch said:
Any process that could classically create a correlation between quantities could eventually also give rise to an entangled state. It is not the correlation of variables by itself that is surprising. We are used to having statistical correlations due to interactions. What is surprising (again) is that we have a superposition of states, which doesn't behave as a statistical ensemble, if we can measure it in a "rotated" basis.
Yes, I think that is very profoundly correct. It could be summarized with the remark that the weirdness of dealing with individual quanta, not present when dealing with large aggregates of quanta, is the very concept of a "rotated" basis, or a "complementary" observable. Classically, we can observe everything at once, because the aggregate averages we form don't contradict each other. But individual quanta don't contain that much information projected onto each particle-- the whole point of an elementary wave concept is not that it tells us more about the particle, it's that it tells us no more about the particle than the particle seems to know about itself, when it is not being looked at. And when it is being looked at, we are not looking in some "god's eye" sense, we are doing a very particular kind of looking-- we are applying a measurement basis, as you say, and all the rotated bases we could imagine mean nothing at that point. We never dreamed that choosing an observation basis to obtain detailed information precluded a whole other class of detailed information, because information about aggregates doesn't work that way-- the aggregate average has already truncated the information present so drastically that we completely miss this little complementarity limitation.
And now the point is that the more complicated your system is, the more involved the entanglement, the harder it is to do an observation in a rotated basis.
Bang on. In another thread, we are discussing the cat paradox, and I claimed that the way the cat paradox is normally expressed is just wrong quantum mechanics, and now you have given me better words to say why: because it pretends that such a rotation of observation basis makes sense on a cat.
 
  • #42
vanesch said:
The superposition |A> + |B> behaves dramatically differently from the mixture 50% A and 50% B when we go to another, incompatible, observation basis. There is NO WAY in which a mixture of 50% A and 50% B can explain the statistics of observations on a superposition |A> + |B> in another basis.
Well, there is.

You just have to allow for the possibility that a second measurement takes place - an interference measurement.
Say you measure the interference of a single H-polarized photon rotated to a new H' polarization with respect to a subensemble of V-polarized photons rotated to H' polarization. And you filter out only photons with a certain level of constructive interference, discarding the rest. And of course you have to have a phase property for photons in order to speak about interference.
 
  • #43
zonde said:
Well, there is.

You just have to allow for the possibility that a second measurement takes place - an interference measurement.
Say you measure the interference of a single H-polarized photon rotated to a new H' polarization with respect to a subensemble of V-polarized photons rotated to H' polarization. And you filter out only photons with a certain level of constructive interference, discarding the rest. And of course you have to have a phase property for photons in order to speak about interference.

?

You measure photon polarisation twice, is that what you are saying ?
 
  • #44
vanesch said:
?

You measure photon polarisation twice, is that what you are saying ?
No

You create entangled photons in the H/V basis. Then you measure them in the +45°/-45° basis. This polarization measurement is completely undetermined, as you have a 50/50 chance that the photon will take the +45° or the -45° path. But correlations appear in interference measurements between H rotated to +45° and V rotated to +45° at the two sites.
 
  • #45
zonde said:
No

You create entangled photons in the H/V basis. Then you measure them in the +45°/-45° basis. This polarization measurement is completely undetermined, as you have a 50/50 chance that the photon will take the +45° or the -45° path. But correlations appear in interference measurements between H rotated to +45° and V rotated to +45° at the two sites.

Yes, that's correct. In what way is that contradicting my claim that you cannot describe this as a statistical ensemble ?

If you were to consider that your original population of pairs of photons was a statistical ensemble, 50% (H,H) and 50% (V,V), then such a statistical ensemble will NOT give you what you actually measure in the 45/-45 basis, because such a statistical ensemble would give you a totally UNCORRELATED 45/-45 result, while according to QM (as you say), you find perfect correlation in the 45/-45 measurement.

Indeed, the statistical ensemble approach would give you the following:

50% chance that you have a (H,H) pair. The first H impinging on a 45 polarizer has 50% chance to pass, the second H impinging on a -45 polarizer has also 50% chance to pass, independently.
So we get here a 25% chance each of (pass, pass), (pass, no pass), (no pass, pass), and (no pass, no pass).

Same for the 50% chance that you have a (V,V) pair.

So in total you get:

25% (pass, pass), 25% (no pass, pass), 25% (pass, no pass) and 25% (no pass, no pass).


The quantum superposition approach gives you:

50% chance to have (pass pass), and 50% chance to have (no pass, no pass).

So the statistical ensemble approach 50% (H,H) and 50% (V,V) is not explaining the QM result.
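The same arithmetic can be checked numerically. Here is a small sketch of my own (not from the original post), taking the entangled state to be (|HH> + |VV>)/sqrt(2) and putting both polarizers at +45°; those specific choices are assumptions made for the illustration:

```python
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# Entangled pair (|HH> + |VV>)/sqrt(2)
phi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
rho_ent = np.outer(phi, phi)

# Statistical mixture: 50% (H,H) pairs and 50% (V,V) pairs
rho_mix = 0.5 * np.outer(np.kron(H, H), np.kron(H, H)) \
        + 0.5 * np.outer(np.kron(V, V), np.kron(V, V))

# Both polarizers at +45 degrees: "pass" projects onto |+45> = (|H>+|V>)/sqrt(2)
plus = (H + V) / np.sqrt(2)
minus = (H - V) / np.sqrt(2)
P_pass = np.outer(plus, plus)       # projector for "pass"
P_block = np.outer(minus, minus)    # projector for "no pass"

def joint(rho, Pa, Pb):
    """Joint probability of outcomes Pa (Alice) and Pb (Bob) for state rho."""
    return round(float(np.trace(rho @ np.kron(Pa, Pb)).real), 2)

for name, rho in [("entangled", rho_ent), ("mixture  ", rho_mix)]:
    print(name,
          joint(rho, P_pass, P_pass),    # (pass, pass)
          joint(rho, P_pass, P_block),   # (pass, no pass)
          joint(rho, P_block, P_pass),   # (no pass, pass)
          joint(rho, P_block, P_block))  # (no pass, no pass)
# entangled 0.5 0.0 0.0 0.5
# mixture   0.25 0.25 0.25 0.25
```

The output reproduces the two sets of percentages quoted above: perfect correlation for the entangled state, 25% in each cell for the mixture.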
 
  • #46
thenewmans said:
To me, the concept of entanglement sounds like an epiphany. I’m sure I can’t find one specific moment for it but I’d like to get closer. And I’d like your help. So far, I see that Einstein had issues with Born’s matrix mechanics (1925) among other things, which heated up the Bohr Einstein debates. A decade passed before the EPR paradox.


Einstein around 1931.
 
  • #47
vanesch said:
Yes, that's correct. In what way is that contradicting my claim that you cannot describe this as a statistical ensemble?
You are right that I am not talking about statistical ensemble.
I am talking about let's say "physical ensemble".

But in this particular part of your post that I actually quoted in my post:
vanesch said:
The superposition |A> + |B> behaves dramatically differently from the mixture 50% A and 50% B when we go to another, incompatible, observation basis. There is NO WAY in which a mixture of 50% A and 50% B can explain the statistics of observations on a superposition |A> + |B> in another basis.
you do not speak about a statistical ensemble but rather about a mixture of A and B (I suppose we can say a mixture of (H,H) and (V,V) pairs). And so it sounds like you include my described "physical ensemble" case too. You have to admit that the word "mixture" does sound physical rather than statistical.
 
  • #48
zonde said:
You are right that I am not talking about statistical ensemble.
I am talking about let's say "physical ensemble".

I wouldn't know what the difference is. If you have a "statistical" ensemble or mixture of 50% black balls and 50% white balls, how is that different from a "physical ensemble" which contains 5 million well-mixed white balls and 5 million black balls?

you do not speak about a statistical ensemble but rather about a mixture of A and B (I suppose we can say a mixture of (H,H) and (V,V) pairs). And so it sounds like you include my described "physical ensemble" case too. You have to admit that the word "mixture" does sound physical rather than statistical.

Again, what's the difference ?

If we do this experiment with 10 million "events", and we say that they come from about 5 million (H,H) pairs and about 5 million (V,V) pairs (sent out randomly by the source) ; or we say that we have 10 million events drawn from a statistical mixture of (H,H) pairs and (V,V) pairs in 50% - 50% ratio, what's the difference ?
 
  • #49
In this series of Quantum Mechanics videos, the lecturer discusses entanglement and how it is an exclusive relationship. For example, for entangled total-spin-zero particles, one particle must be pointing up and the other particle down. These particles cannot be entangled with any other particles, according to the lecturer.
But what about three-particle entanglement? Or is that different, seeing that the particles in this case are not necessarily required to have total spin zero? And what about particles that have never interacted - aren't they really entangled with every other particle in the world too?

So, can one particle be entangled with every other particle in the universe, or not?
 
  • #50
I think the lecturer must have meant "cannot be entangled with another particle if we are to use this analysis," rather than "it is impossible for further entanglements to exist." In principle, particles are vastly mutually entangled, including the fact that many are indistinguishable in the first place (like all electrons, etc.). But physics is not about what is, it is about how we can treat what is and get the right answers, to within some desired precision. In practice, we can find situations where entanglements are vastly unimportant, or we can find situations where simple entanglements matter but more complicated ones don't. Physics is very much about building up to the complex from the simple, and that it works at all says something about what a tiny fraction of the information the universe encodes is actually "active" in determining the outcomes of our experiments.
 
  • #51
Ken G said:
I think the lecturer must have meant "cannot be entangled with another particle if we are to use this analysis," rather than "it is impossible for further entanglements to exist." In principle, particles are vastly mutually entangled, including the fact that many are indistinguishable in the first place (like all electrons, etc.). But physics is not about what is, it is about how we can treat what is and get the right answers, to within some desired precision. In practice, we can find situations where entanglements are vastly unimportant, or we can find situations where simple entanglements matter but more complicated ones don't. Physics is very much about building up to the complex from the simple, and that it works at all says something about what a tiny fraction of the information the universe encodes is actually "active" in determining the outcomes of our experiments.

Indeed. I think we said this already before in this thread: massive entanglement gives in most cases exactly the same observable result as no entanglement. This is why there are families of interpretations of quantum mechanics which go for the "no entanglement" view (all projection-based interpretations), and those that go for the "massive entanglement" view (all MWI-like interpretations). The link between both is decoherence.

When you look at quantum dynamics, when two subsystems interact, most of the time this results in an entangled state between the two, even if initially we had "pure product states", that is to say, each system had its own independent quantum state, and the overall state was just the juxtaposition of these two sub-system states. As it is very difficult to prevent a system from interacting with its environment (scatter a thermal photon, hit a molecule of air, interact with a phonon in a solid - a vibration - ...), usually a system quickly gets entangled with its environment (if you follow quantum dynamics). It turns out that you can ONLY distinguish entangled states from statistical mixtures of pure product states if you do a correlation measurement on ALL entangled components in a ROTATED measurement basis. If you omit one, the remaining correlation will show up as identical to that of a statistical mixture.
So if your system hit a remaining air molecule, scattered a thermal photon, and created a phonon in a crystal of the metal of your vacuum tube, then in order to see this entanglement, you'd have to measure simultaneously your system, that air molecule, that photon, and that phonon, in a basis incompatible with the original one. FAPP (for all practical purposes), that's impossible. So FAPP, your system is now in a statistical mixture of pure product states, EVEN if it contained entangled components (that is to say, even if your system was deliberately made of 2 or more subsystems).
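The "ignore one component and it looks like a mixture" point shows up in a tiny calculation. The sketch below is my own illustration (not from the original post): a system qubit in a superposition gets entangled with a single environment qubit through a CNOT-like interaction, and once the environment is traced out (i.e. never measured), the system's reduced density matrix is exactly the 50/50 statistical mixture:

```python
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# System starts in the superposition (|0> + |1>)/sqrt(2), environment in |0>
sys0 = (zero + one) / np.sqrt(2)
state = np.kron(sys0, zero)

# A simple system-environment interaction that copies the system's basis state
# into the environment (a CNOT): |0>|0> -> |0>|0>,  |1>|0> -> |1>|1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ state
rho = np.outer(state, state)

# Ignore (trace out) the environment: what is left for the system alone?
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_sys)
# [[0.5 0. ]
#  [0.  0.5]]  -> the off-diagonal terms are gone: FAPP a 50/50 statistical mixture
```

Before the interaction the system alone had off-diagonal ("superposition") terms in its density matrix; after entangling with an environment degree of freedom you never look at, those terms vanish from the reduced state, which is the essence of the decoherence argument above.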

So you can now say that the "measurement" has "projected out" the states of the system (and you have a statistical mixture of measurement outcomes) - that's Copenhagen and Co; or you can say that your system got hopelessly entangled with its environment (including you) - that's MWI and Co. The last approach has the advantage that it follows directly from quantum dynamics, which is why, personally, I like it. But the results are the same: no weird correlations are seen from the moment there is interaction with the environment.

This is why entanglement experiments are hard and never done with cats.
 
  • #52
vanesch said:
So you can now say that the "measurement" has "projected out" the states of the system (and you have a statistical mixture of measurement outcomes) - that's Copenhagen and Co; or you can say that your system got hopelessly entangled with its environment (including you) - that's MWI and Co. The last approach has the advantage that it follows directly from quantum dynamics, which is why, personally, I like it. But the results are the same: no weird correlations are seen from the moment there is interaction with the environment.

This is why entanglement experiments are hard and never done with cats.
Exactly, and indeed I am making a similar point in the current cat paradox thread (no doubt it has been made many times before in that context). I would just add to your extremely insightful description of the role of entanglement in MWI and Copenhagen the comment that personally I'm not too crazy about MWI and the "hopelessly entangled approach" for two reasons:
1) it seems to ignore the role of the observer, just when relativity and other areas of physics (consciousness studies?) are starting to force us to come to terms with our own role in our own science (and at some point in the future I imagine that physics will have to start being framed as how we interact with and process our environment, more so than how our environment acts independently of how we process it), and
2) it seems to assume that the universe as a whole is not already in a mixed state. In other words, MWI proceeds from the idea that a closed system must always evolve from a pure state to a pure state, so as more and more systems come into contact, the entanglements cascade upward into more and more complex "closed" systems, but none of that cascade actually means anything if one cannot assume that the initial states are pure. When we look at how pure states get created in our laboratories for testing, we see that they always appear by coupling to a macro system that essentially "soaks off" all the entanglements of the pure state (or as you put it, so hopelessly decoheres them that they may as well be gone). So it's not that macro systems cascade pure states up a hierarchy, it's the opposite-- the pure state cascades down the hierarchy. Open systems are not the enemy of pure states, they are the cause of pure states. If the universe as a whole, as well as any semi-closed subsystem of it, starts out already in a mixed state, we'll never get larger systems to be in pure states-- only smaller ones.

Taking that second point from a quantum statistical mechanics perspective, we can say that the entropy of a pure state is zero. So if the whole universe is in a pure state, then its entropy is zero, and the idea that entropy can increase is just a kind of illusion of our corner of the "many worlds." But if we say that the universe as a whole is a mixed state, then it can evolve into mixed states of even higher entropy, as we might normally imagine happens with entropy. So Copenhagen gives us a kind of "WYSIWYG" approach to quantum mechanics, which I find to be true to the way we try not to add too much to the science that nature isn't forcing us to add. Though I admit someone else might actually like the finality of a zero-entropy universe. I suppose it's a point that will be debated for a very long time, and just when one approach seems to have won hands down, that pendulum of scientific discovery will swing again.
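For the entropy statement, a quick numerical check (my own sketch, not part of the original post) of the von Neumann entropy S(rho) = -Tr(rho log2 rho) for a pure state versus a maximally mixed one:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 * log 0 is taken as 0
    return float(-np.sum(evals * np.log2(evals)))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus)           # pure state |+><+|
rho_mixed = 0.5 * np.eye(2)               # maximally mixed 50/50 state

print(von_neumann_entropy(rho_pure))      # 0.0  -> a pure state has zero entropy
print(von_neumann_entropy(rho_mixed))     # 1.0  -> one bit of entropy
```

A pure state, however large the system, has zero von Neumann entropy, which is exactly the tension with the second law that the paragraph above is pointing at.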
 
  • #53
Ken G said:
Exactly, and indeed I am making a similar point in the current cat paradox thread (no doubt it has been made many times before in that context). I would just add to your extremely insightful description of the role of entanglement in MWI and Copenhagen the comment that personally I'm not too crazy about MWI and the "hopelessly entangled approach" for two reasons:

Ok, long ago and far away, I was a strong defender of MWI here, but I decided to stop that (after having gone through all the arguments 20 times) simply because at the end of the day, it occurred to me that in fact it doesn't matter. Also, I don't want this thread to be hijacked by a pro/con MWI discussion. But...

1) it seems to ignore the role of the observer, just when relativity and other areas of physics (consciousness studies?) are starting to force us to come to terms with our own role in our own science (and at some point in the future I imagine that physics will have to start being framed as how we interact with and process our environment, more so than how our environment acts independently of how we process it), and

It's a funny statement, because I would say that if there's one thing that the different MWI flavors DO think a lot about, it is exactly what it means to be "an observer". My personal stance on this is that it has to do with the question "what is conscious experience", but I won't elaborate.

2) it seems to assume that the universe as a whole is not already in a mixed state. In other words, MWI proceeds from the idea that a closed system must always evolve from a pure state to a pure state, so as more and more systems come into contact, the entanglements cascade upward into more and more complex "closed" systems, but none of that cascade actually means anything if one cannot assume that the initial states are pure. When we look at how pure states get created in our laboratories for testing, we see that they always appear by coupling to a macro system that essentially "soaks off" all the entanglements of the pure state (or as you put it, so hopelessly decoheres them that they may as well be gone). So it's not that macro systems cascade pure states up a hierarchy, it's the opposite-- the pure state cascades down the hierarchy. Open systems are not the enemy of pure states, they are the cause of pure states. If the universe as a whole, as well as any semi-closed subsystem of it, starts out already in a mixed state, we'll never get larger systems to be in pure states-- only smaller ones.

Indeed, if you consider "the wavefunction of the universe" (which is needed in any MWI setting), then in fact, entropy comes about not from the wavefunction of the universe itself, but from the number of different "worlds" you could be in. In other words, the entropy resides in the "choice" that makes you experience THIS world, and not all those other possible worlds, and that choice is increasing.

Taking that second point from a quantum statistical mechanics perspective, we can say that the entropy of a pure state is zero. So if the whole universe is in a pure state, then its entropy is zero, and the idea that entropy can increase is just a kind of illusion of our corner of the "many worlds." But if we say that the universe as a whole is a mixed state, then it can evolve into mixed states of even higher entropy, as we might normally imagine happens with entropy.

Or you could say "as we might normally imagine happens with OBSERVED entropy".

So Copenhagen gives us a kind of "WYSIWYG" approach to quantum mechanics, which I find to be true to the way we try not to add too much to the science that nature isn't forcing us to add. Though I admit someone else might actually like the finality of a zero-entropy universe. I suppose it's a point that will be debated for a very long time, and just when one approach seems to have won hands down, that pendulum of scientific discovery will swing again.

This is why I stopped discussing it :-)
 
  • #54
vanesch said:
So in total you get:

25% (pass, pass), 25% (no pass, pass), 25% (pass, no pass) and 25% (no pass, no pass).


The quantum superposition approach gives you:

50% chance to have (pass pass), and 50% chance to have (no pass, no pass).

So the statistical ensemble approach 50% (H,H) and 50% (V,V) is not explaining the QM result.
I think it will be more meaningful to respond to this post one more time.

Yes, it's correct that 50% (H,H) and 50% (V,V) gives 25% (pass, pass), 25% (no pass, pass), 25% (pass, no pass) and 25% (no pass, no pass) after the polarization measurement.

And I am saying that this is what you actually get.
And the correlations for the +45/-45 measurement you actually get only after a phase measurement (interference).
So the second measurement gives a high coincidence rate after a +45/+45 polarization measurement (if we have (H,H) and (V,V) pairs) and a low coincidence rate after a +45/-45 polarization measurement.

Speaking about the QM result - there is no such thing. There are experimental results and there are QM predictions.
Experimental results are interpreted using the fair sampling assumption, and so my reasoning is consistent with them when we discard the fair sampling assumption (obviously, because of the second measurement).
 
  • #55
vanesch said:
It's a funny statement, because I would say that if there's one thing that the different MWI flavors DO think a lot about, it is exactly what it means to be "an observer". My personal stance on this is that it has to do with the question "what is conscious experience", but I won't elaborate.
I don't want to hijack the thread either, and I believe I understand what you mean here-- consciousness is an emergent property of each decohered corner of the many worlds, which would neatly explain why we only perceive one such corner. There's no doubt that MWI is the king of the neat explanation-- I just feel that the mind is the source of the neat explanation, not the other way around. Like you say, there's no real difference-- perhaps the reason we cannot find which one is correct is because there's no such thing as the correct interpretation, any more than there is a correct interpretation of a great piece of music.

Indeed, if you consider "the wavefunction of the universe" (which is needed in any MWI setting), then in fact, entropy comes about not from the wavefunction of the universe itself, but from the number of different "worlds" you could be in. In other words, the entropy resides in the "choice" that makes you experience THIS world, and not all those other possible worlds, and that choice is increasing.
Yes, entropy still works in MWI, the issue is whether the reality is something that precedes the choice, or if the reality is the choice. It's realism or idealism, and in my opinion, when one projects both those philosophical stances onto the scientific method, the projections are identical. Much like how observations in two different frames project onto the same invariants-- if science cannot find a difference, then the message might be there isn't one.

Edit to add: In fact, it may even hold that, just as all physics theories are expected to preserve the relativity of the observer, we might require that all physical theories preserve the indistinguishability of MWI and Copenhagen. That might be an interesting angle to look at quantum gravity theories.
 
