What does decoherence have to do with phases?

In summary: no, there is no separate physical process that changes the phases; the problem is that we can't track the overall quantum state, which is the prerequisite for any kind of experimental control.
  • #1
Talisman
Here's the simplest example of decoherence I can think of. (I will drop normalizing factors for ease of typing.)

Start with a state ##|\psi\rangle = |0\rangle + |1\rangle##. Measure it in the basis:

$$|+\rangle = |0\rangle + |1\rangle, |-\rangle = |0\rangle - |1\rangle$$

It will always be measured as ##|+\rangle## and never as ##|-\rangle##. If you split it into its components, the two transition amplitudes will cancel, and this can be thought of as interference. On the other hand, entangle it with another particle: ##|\psi'\rangle = |00\rangle + |11\rangle##. Measurement of the first particle in the +/- basis will now yield 50-50 results. Bam, decoherence (well, at least if we call the second particle the "environment"). This can also be seen in the reduced density matrix, where the off-diagonal elements are now zero.
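Here is a quick numerical check of this, in case it helps (a NumPy sketch; the variable names are just mine, and unlike the text above the states are normalized):

```python
import numpy as np

# Single-qubit basis states and the |+>, |-> basis (normalized here).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# The entangled state (|00> + |11>)/sqrt(2) and its density matrix.
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reduced density matrix of the first particle: trace out the second one.
rho1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
print(np.round(rho1, 12))        # diag(0.5, 0.5): the off-diagonal elements are zero

# Measuring the first particle in the +/- basis: 50-50.
print((plus.conj() @ rho1 @ plus).real, (minus.conj() @ rho1 @ minus).real)
```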

But what does this have to do with changing phases? The only thing I can think of is noticing that for the state ##|\phi\rangle = |0\rangle + e^{i\theta}|1\rangle##, the measurement probabilities will depend on ##\theta##. Averaged over all possible values of ##\theta##, we will get 50-50 results (just like in the entangled case). But this doesn't mean that anything actually changed phases anywhere.
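And the phase-averaging claim, numerically (same sketch style; the number of sampled phases is arbitrary):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Density matrix of |0> + e^{i*theta}|1> (normalized), averaged over theta.
thetas = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
rho_avg = np.zeros((2, 2), dtype=complex)
for theta in thetas:
    phi = (ket0 + np.exp(1j * theta) * ket1) / np.sqrt(2)
    rho_avg += np.outer(phi, phi.conj()) / len(thetas)

# Same diag(0.5, 0.5) as the reduced state of the entangled pair above.
print(np.round(rho_avg, 12))
```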

Here, Sabine Hossenfelder says that decoherence is due to particles getting "bumped" such that each bump changes its phase a tiny bit. What does she mean? Is there some real physical process by which phases change like this?

(Sorry for any typos. For some reason, the preview button does not work.)
 
  • #2
Talisman said:
If you split it into its components
What components?

Talisman said:
entangle it with another particle: ##|\psi'\rangle = |00\rangle + |11\rangle##. Measurement of the first particle in the +/- basis will now yield 50-50 results. Bam, decoherence
There was decoherence in the first case too: the measurement entangled the particle with the detector and its environment.

Talisman said:
(well, at least if we call the second particle the "environment").
You can't, because the second particle is just one degree of freedom whose state is easily tracked. An "environment" is a very large number of degrees of freedom whose states cannot be tracked. That's how entanglement with the environment leads to decoherence.

Talisman said:
But what does this have to do with changing phases?
The spreading of entanglement among a very large number of degrees of freedom whose states cannot be tracked means their phases cannot be tracked either, so they are effectively randomized.
 
  • #3
PeterDonis said:
What components?
I just meant ##|0\rangle## and ##|1\rangle##: ##\langle -|0\rangle = 1## whereas ##\langle -|1\rangle = -1##. Their sum is zero, and this "cancellation" can be understood as interference.

PeterDonis said:
There was decoherence in the first case too: the measurement entangled the particle with the detector and its environment.
Sure, I didn't mean to imply otherwise. I was just giving an example of single-particle interference that is lost when it becomes entangled and we lose its partner.

PeterDonis said:
You can't, because the second particle is just one degree of freedom whose state is easily tracked.
What if it is the polarization state of a photon now zipping off into space? Or if that's a bad example, why can't I just say it's some other particle that's no longer "easily tracked"? I don't see why this isn't effectively decoherence.

PeterDonis said:
The spreading of entanglement among a very large number of degrees of freedom whose states cannot be tracked means their phases cannot be tracked either, so they are effectively randomized.
From my example above, we can see that even one "strong" entanglement (i.e., with orthogonal states) destroys interference. The need for a "very large number" here suggests to me that each entanglement is relatively weak -- i.e., resulting in a state ##|0\rangle|a\rangle + |1\rangle|b\rangle## where ##\langle a|b\rangle \approx 1##?
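To make the question concrete, here is a toy calculation (a NumPy sketch in which I model the "environment" as a single extra qubit, which I realize is exactly the point in dispute):

```python
import numpy as np

def reduced_state(overlap):
    """Qubit 1's reduced density matrix for (|0>|a> + |1>|b>)/sqrt(2),
    where the 'environment' states satisfy <a|b> = overlap (taken real)."""
    a = np.array([1, 0], dtype=complex)
    b = np.array([overlap, np.sqrt(1 - overlap**2)], dtype=complex)
    psi = (np.kron([1, 0], a) + np.kron([0, 1], b)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ikjk->ij', rho)        # trace out the environment qubit

for ov in [1.0, 0.99, 0.5, 0.0]:
    print(ov, np.round(reduced_state(ov)[0, 1], 3))   # off-diagonal = <b|a>/2
```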

If I have understood your answer correctly, the problem is simply that we can no longer track the overall quantum state, right? Or is it that each time our state entangles, the different components acquire new (and unknown) phases?

I suppose part of what confuses me is that even if we knew the phases exactly (e.g., as in the two-qubit example I gave above, where we know the interaction Hamiltonian), this doesn't imply that we have experimental control over the environmental particles, and thus we still lose any "quantumness" in practice.
 
  • #4
Talisman said:
even one "strong" entanglement (i.e., with orthogonal states) destroys interference.
"Destroys interference" is not the same as decoherence. They are related in the sense that making interference terms negligible by randomizing phases is involved in decoherence. But in the scenario you describe, where two particles are entangled and this precludes either of them from showing interference effects in individual measurements, there is no randomization of phases at all.

Talisman said:
The need for a "very large number" here suggests to me that each entanglement is relatively weak -- i.e., resulting in a state ##|0\rangle|a\rangle + |1\rangle|b\rangle## where ##\langle a|b\rangle \approx 1##?
Not at all. No mathematical expression written down anywhere in this thread has anything to do with decoherence.

Talisman said:
If I have understood your answer correctly, the problem is simply that we can no longer track the overall quantum state, right?
The overall quantum state including a large number of degrees of freedom whose phases get randomized.

Talisman said:
Or is it that each time our state entangles, the different components acquire new (and unknown) phases?
Of course not. Entanglement is a unitary process. The phase randomization in decoherence is a result of the time evolution of a large number of degrees of freedom. Roughly speaking, the phase of each degree of freedom will "rotate" in time at a slightly different rate, so as the entanglement spreads out through more and more degrees of freedom, the original phase of the particle that was originally measured gets lost.
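Roughly, a toy version of that picture (a NumPy sketch; the single-mode environment states and the spread of rotation rates are illustrative assumptions, not a model of any real system):

```python
import numpy as np

# Toy model: the system qubit starts in (|0> + |1>)/sqrt(2); each of N environment
# modes starts in (|g> + |e>)/sqrt(2) and picks up a relative phase omega_k * t
# between |g> and |e> only if the system is in |1>.  The off-diagonal element of
# the system's reduced density matrix is then
#     |<E_0(t)|E_1(t)>| / 2 = (1/2) * prod_k |cos(omega_k * t / 2)|,
# which shrinks rapidly as the number of modes grows.
rng = np.random.default_rng(0)

def coherence(n_modes, t):
    omegas = rng.uniform(0.5, 1.5, n_modes)   # slightly different rotation rates
    return np.prod(np.abs(np.cos(omegas * t / 2))) / 2

for n in [1, 10, 100, 1000]:
    print(n, coherence(n, t=1.0))
```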
 
  • #6
@Demystifier Could you comment on the relation between decoherence and the Heisenberg phase-number uncertainty?
 
  • Like
Likes Demystifier
  • #7
Heidi said:
@Demystifier Could you comment on the relation between decoherence and the Heisenberg phase-number uncertainty?
Excellent question! The "phase" in decoherence and the "phase" in phase-number uncertainty are two different things. The former is a purely quantum thing: it is the phase in the abstract Hilbert space of quantum states, not an observable represented by a Hermitian operator on that space. The latter is an observable represented by a Hermitian operator in quantum field theory. Their relation can only be seen through the relation between first and second quantization, because the "phase" in the former sense in first quantization gets promoted to a "phase" in the latter sense in second quantization (aka quantum field theory). So decoherence and phase-number uncertainty are pretty much unrelated.
 
  • Like
Likes Heidi, vanhees71, PeroK and 1 other person
  • #8
Demystifier said:
Decoherence is losing information about phases, because this information gets hidden in the environment. For more details see my http://thphys.irb.hr/wiki/main/images/5/50/QFound3.pdf , especially pages 2, 4, 5, 8.
Thanks, I just read pages 1-11, and I think they are saying exactly what I am saying in my first post: if we start with a coherent state ##|\psi\rangle = \sum_k{a_k |\psi_k\rangle}## (for an orthonormal basis ##|\psi_k\rangle##) and it evolves into the entangled state ##|\psi'\rangle = \sum_k{a_k |\psi_k\rangle|\phi_k\rangle}## (for an orthonormal basis ##|\phi_k\rangle##), then the first subsystem must be treated as a mixed state, whose density matrix is given by the partial trace over the second subsystem. All measurements of the first subsystem will be equivalent to those of ##|\psi''\rangle = \sum_k{e^{i\theta_k} a_k |\psi_k\rangle}## averaged over each ##\theta_k##.

Did I characterize the slide deck correctly?

If so, then my original question remains: we can treat the first system as though it has an unknown phase, but this seems to be a very different thing from what Hossenfelder is saying. Here's a copy-paste from her transcript:
The superposition which we want to measure interacts with many other particles, both along the way to the detector, and in the detector. This is what you want to describe with decoherence. The easiest way to describe these constant bumps that the superposition has to endure is that each bump changes the phase of the state, so the theta, by a tiny little bit.

There are two things that don't make sense to me here:

1. She makes it sound like there's a physical mechanism by which the phase actually changes. This is a very different thing from saying that we get an entanglement whose effect is equivalent to losing information about the phase. (If the phase merely changed, then there would still be a definite phase relationship, i.e., coherence.)

2. If even one of those "bumps" results in a strong entanglement, then as I showed above, it is equivalent to the phase information being lost completely, not "a tiny little bit." Therefore, the kinds of "bumps" she's mentioning must be such that the ##|\phi_k\rangle## mentioned above are not orthogonal. In fact, for it to be a "tiny bit," we must have ##\langle \phi_j|\phi_k\rangle \approx 1## for ##j \neq k## -- which @PeterDonis explicitly disavows. (Edit: I am saying exactly what you are saying on pp12-15, about many "partial decoherence" events, so I don't understand Peter's disagreement.)

I also cannot square this with the explanation @PeterDonis gave above, which is about the time evolution of the joint state and the phase rotation in time of the resulting components.

Are Sabine and Peter perhaps talking about a different definition of decoherence than you and I are? Or am I missing something obvious?
 
  • #9
I think this is a case of "lost in translation".

In the "semi-classical" quantum physics that has been used in e.g. NMR (and MRI), electron spin resonance etc for many decades we typically model an ensemble of two-levels (say spins) as behaving as one system with some "average" properties. Moreover, these systems are typically also modelled using the so-called Bloch equations. These are a set of ODEs that can be visualised using the so called Bloch-sphere, where each state of the system/ensemble can be described as arrow starting at the centre of the sphere whose point is on (or inside) the surface of a sphere. The coordinates of each point is defined by a an amplitude and two phases.

Now, in the Bloch equations there are two phenomenological parameters called T1 (relaxation time, essentially how quickly the system loses energy) and T2 (dephasing time). As time evolves, T1 describes how the "arrow" gets shorter, and T2 how the system loses phase information. You can think of the latter as the "arrow" getting kicked around so that its position gradually becomes randomized.
Now, in an ensemble of e.g. spins, T2 is limited by many things; one example would be random electromagnetic interactions between the spins, since they are all moving around a bit because of thermal fluctuations (this is known as diffusion). This is the "bumping" Hossenfelder is talking about.
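For concreteness, here is a minimal numerical sketch of the drive-free Bloch equations (the T1, T2, and time-step values are made up purely for illustration):

```python
import numpy as np

# Drive-free Bloch equations in the rotating frame (T1, T2, dt made up):
#   dMx/dt = -Mx/T2,   dMy/dt = -My/T2,   dMz/dt = (M0 - Mz)/T1
T1, T2, M0, dt = 10.0, 2.0, 1.0, 0.01
M = np.array([1.0, 0.0, 0.0])               # arrow starts on the equator (a superposition)

for step in range(1, 501):
    dM = np.array([-M[0] / T2, -M[1] / T2, (M0 - M[2]) / T1])
    M = M + dt * dM                          # simple Euler step
    if step % 100 == 0:
        print(f"t={step * dt:3.1f}  Mx={M[0]:.3f}  Mz={M[2]:.3f}")
# Mx (the phase information) dies out on the T2 timescale, while Mz relaxes
# back toward M0 on the slower T1 timescale.
```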

Today, even when working with single two-level systems we still use T1 and T2. Here these parameters should be understood as the average values you get if you repeat the experiment many times.
Note also that there is no entanglement here (which requires more than one system), only superposition of states: the ground state (the "north pole" of the Bloch sphere) and the excited state (the "south pole"); the system is in a superposition of states whenever the arrow points anywhere else.

Note that "decoherence" is a bit ambiguous, it should be the same time as the dephasing for a single two-level system; but sometimes it refers to both T1 and T2.

In e.g. quantum computing, T2 is a very important metric since it tells you how long, on average, each qubit is useful. Even if T1 is very long (in some spin systems it can be hours), T2 is what limits the "usefulness" of a qubit: if T2 is short, the phase is quickly randomized and the system is not very useful, since we can't initialize or control it; we don't know where the "arrow" is pointing.

Not sure if this helps. Regardless, I would suggest getting familiar with the Bloch sphere; for single two-level systems it is an excellent way of getting an intuition for what decoherence does to a system.
 
  • Like
Likes vanhees71 and DrClaude
  • #10
Talisman said:
If even one of those "bumps" results in a strong entanglement, then as I showed above, it is equivalent to the phase information being lost completely,
No, it doesn't. The entanglement you describe in the OP between two particles does not lose any phase information at all. It's still right there in the wave function you wrote down for the two-particle state.

Talisman said:
I am saying exactly what you are saying on pp12-15
The scenario you describe in your OP is not the same as the "partial decoherence" described on those slides. See below.

Talisman said:
am I missing something obvious?
Yes. The key thing you are missing is on p. 8 of the slides that @Demystifier referenced:

"If we don't know the state of the environment, the effect is the same as if we don't know the phase".

In the scenario in your OP, we do know the state of the second particle, what you are calling the "environment". It's right there in the wave function you wrote down. So there is no decoherence.

I suppose that technically, if you could entangle the first particle with one other particle whose state was not known, so that the state of the entangled two-particle system was not known, and then carefully never measured the second particle, you could sort of count that as "decoherence"; formally, you could write down something describing this process that looked like what @Demystifier wrote down in the slides. But in practice, this operation would have no effect whatever on the statistics of measurements on the first particle (because any such effect would be equivalent to knowledge of the state of the second particle), so it might just as well not have happened. To put it another way, in this scenario there are no interference effects that can be eliminated by decoherence, since the only thing you ever knew the quantum state of was a single particle, which can't show any interference. So there's no point in even defining this as decoherence, since the whole point of decoherence is to explain classical behavior, i.e., to explain how quantum interference effects get eliminated.
 
  • Like
Likes vanhees71
  • #11
@PeterDonis I appreciate your help, and I see what you mean about page 8, but I still can't tell the difference between what I wrote and page 11 of the slides. There, @Demystifier shows a well-defined entangled joint state (whose phases are known), and demonstrates that the reduced density matrix has zero off-diagonal terms. This is also how I first learned the concept of decoherence.

@Demystifier, could you please clarify?
 
  • #12
(Some follow-up comments...)

PeterDonis said:
No, it doesn't. The entanglement you describe in the OP between two particles does not lose any phase information at all. It's still right there in the wave function you wrote down for the two-particle state.
Right, that's why I said "is equivalent to the phase information being lost." Losing the phase information of a single qubit has the same effect as entangling it with a (known) second state: both cases result in the same density matrix, as per p11 of the slides (or as he says explicitly on p6, "Destruction of phases equivalent to destruction of interference.")

PeterDonis said:
I suppose that technically, if you could entangle the first particle with one other particle whose state was not known, so that the state of the entangled two-particle system was not known, and then carefully never measured the second particle, you could sort of count that as "decoherence"; formally, you could write down something describing this process that looked like what @Demystifier wrote down in the slides. But in practice, this operation would have no effect whatever on the statistics of measurements on the first particle (because any such effect would be equivalent to knowledge of the state of the second particle), so it might just as well not have happened. To put it another way, in this scenario there are no interference effects that can be eliminated by decoherence, since the only thing you ever knew the quantum state of was a single particle, which can't show any interference. So there's no point in even defining this as decoherence, since the whole point of decoherence is to explain classical behavior, i.e., to explain how quantum interference effects get eliminated.
This also confuses me. Given my single-particle state, we can see interference, right? And when we entangle it with a particle whose state is known, measurement of the first particle alone cannot show interference, right? Why does this not "explain [one mechanism by which] quantum interference effects get eliminated?"

Sorry if I'm being dense here, but this is precisely the takeaway I'm getting from the slides, and it will be helpful if @Demystifier explains that I've completely misunderstood his point.
 
  • #13
Talisman said:
Given my single-particle state, we can see interference, right?
Interference with what? I know you claimed "interference" in your OP by changing the basis you used to describe the state, but that's just a mathematical trick.

Part of the issue here might be that "interference" and "decoherence" are ordinary language words that are used with different meanings by different people, both in this thread and in the literature more generally.
 
  • #14
f95toli said:
I think this is a case of "lost in translation".
Thanks! That sent me down a path of learning about NMR. I think I see what you are saying.
 
  • #15
PeterDonis said:
Interference with what? I know you claimed "interference" in your OP by changing the basis you used to describe the state, but that's just a mathematical trick.

Part of the issue here might be that "interference" and "decoherence" are ordinary language words that are used with different meanings by different people, both in this thread and in the literature more generally.
I am surprised to hear this. All examples of "interference" I've seen ultimately reduce to the same idea, but then again, I'm not (remotely) a physicist! Perhaps the confusion here comes from the fact that I'm choosing an observable with a finite basis, whereas normally we consider one with a continuous spectrum (usually position)?

More explicitly:

Consider a classical 50-50 mixture of ##|0\rangle## and ##|1\rangle##. What are the odds this mixture will be measured in ##|-\rangle##? Well, each possibility has probability ##||\langle 0|-\rangle||^2 = ||\langle 1|-\rangle||^2 = 50\%##, so the overall probability is 50%. Here we are adding regular probabilities (##0.5*0.5 + 0.5*0.5 = 0.5##).

What if our state is instead a superposition of the two (##|\psi\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)##)? Then we follow Feynman's dictum:
When there are two ways for the particle to reach the detector, the resulting probability is not the sum of the two probabilities, but must be written as the absolute square of the sum of two amplitudes.

And find that ##||\frac{1}{\sqrt{2}}\langle 0|-\rangle + \frac{1}{\sqrt{2}}\langle 1|-\rangle ||^2 = ||0.5 - 0.5||^2 = 0## (which by linearity of the inner product is the same as ##||\langle\psi|-\rangle||^2##, saving us a bit of time given that we secretly know that ##|\psi\rangle = |+\rangle## -- which is the "trick" I think you are referring to). Voila, the two "paths" destructively interfered (and would constructively interfere for ##|+\rangle##).

Finally, if all we know is that ##|\psi\rangle = \frac{1}{\sqrt{2}}(|0\rangle + e^{i\theta}|1\rangle)##, then averaged over all possible ##\theta##, we will again find 50-50 results. I.e., when we lose phase information, we recover classical probabilities (i.e., it looks like a mixed state).

Edit: And if we do the same experiment with the entangled state, then as you already know, we will also get 50-50 results, just as we do by losing phase information or having a classical mixture (and not just for this observable, but all observables on this Hilbert space). That was the "punchline" of decoherence as it was taught to me.
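For anyone following along, here is the whole comparison as one quick numerical check (a NumPy sketch; the labels are mine):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
minus = (ket0 - ket1) / np.sqrt(2)

def prob_minus(rho):
    """Probability of measuring |-> given density matrix rho."""
    return (minus.conj() @ rho @ minus).real

# 1) classical 50-50 mixture of |0> and |1>
rho_mix = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())
# 2) coherent superposition (|0> + |1>)/sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
# 3) one half of the entangled state (|00> + |11>)/sqrt(2), second half traced out
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_half = np.einsum('ikjk->ij', np.outer(bell, bell.conj()).reshape(2, 2, 2, 2))

print(prob_minus(rho_mix), prob_minus(rho_pure), prob_minus(rho_half))  # 0.5, 0.0, 0.5
```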

Edit 2: And if we have weak entanglement, we get "weak decoherence," per this answer: https://physics.stackexchange.com/a/35049:
the probability of emitting a "hard" graviton (with short enough wavelength to distinguish the paths) is far, far smaller than for photons, and therefore gravitational decoherence is extremely weak.
...
In practice, loss of visibility in decoherence experiments usually occurs due to more mundane processes that cause "which-way" information to be recorded (e.g. the electron gets scattered by a stray atom, dust grain, or photon). Decoherence due to entanglement of the particle with its field (i.e. the emission of photons or gravitons that are not very soft) is always present at some level, but typically it is a small effect.
 
  • #16
But none of those calculations have anything to do with decoherence.
Firstly, involving entanglement in the discussion is just going to make things confusing; as mentioned above, entanglement and decoherence are not directly related: decoherence will "destroy" entanglement, but that is primarily because the systems that are entangled are separately losing coherence.

Secondly, it is not possible to mathematically describe decoherence using the QM that is typically taught in undergraduate courses (which seems to be what you are using). The reason is that "common QM" only works for closed systems, which by definition do not decohere (at least not in the way the word is normally used); the presence of decoherence implies that you are dealing with an open system.

To describe open quantum systems you need to use other formalisms; in the simplest case the Bloch equations I mentioned above (which are a good starting point). More sophisticated approaches include Lindblad super-operators (which are in a sense also phenomenological but allow you to use more complicated Hamiltonians and different combinations of collapse operators) or the Bloch-Redfield master equation.
As far as I remember there are good wiki pages for both methods.
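As an illustration only, here is a hand-rolled sketch of the simplest Lindblad case, pure dephasing of a single qubit (all parameter values are made up; in practice you would use a proper solver):

```python
import numpy as np

# Hand-rolled Euler integration of the Lindblad equation for pure dephasing of
# one qubit:  drho/dt = gamma * (sz . rho . sz - rho),
# which decays the off-diagonal elements as exp(-2*gamma*t), i.e. T2 = 1/(2*gamma).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma, dt = 0.5, 0.001
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())          # start in |+><+|

for step in range(1, 4001):
    rho = rho + dt * gamma * (sz @ rho @ sz - rho)
    if step % 1000 == 0:
        print(f"t={step * dt:.1f}  |rho01|={abs(rho[0, 1]):.3f}")
# Compare with 0.5*exp(-2*gamma*t): roughly 0.184, 0.068, 0.025, 0.009
```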
 
  • #17
f95toli said:
Firstly, involving entanglement in the discussion is just going to make things confusing; as mentioned above, entanglement and decoherence are not directly related: decoherence will "destroy" entanglement, but that is primarily because the systems that are entangled are separately losing coherence.
Then we are certainly using different definitions of decoherence! From the Wikipedia article on decoherence, for example:
Decoherence happens when different portions of the system's wave function become entangled in different ways with the measuring device.
I agree that undergraduate treatments do not cover this, but early-grad courses that introduce the density operator formalism certainly do.

Again, I invite @Demystifier into this conversation, because I am confident that the explanations given on his slides are the same ones I am using here (but am happy to be proven wrong).
 
  • #18
Talisman said:
@Demystifier, could you please clarify?
I think you understood decoherence correctly. As for Hossenfelder's text, it's a popular-level explanation, so it should not be taken too literally.
 
  • Informative
Likes Talisman
  • #19
@f95toli, @PeterDonis Please also see the answers here, which all explain the same thing that I have been, regarding how entanglement causes decoherence: https://physics.stackexchange.com/q/204100/

E.g.:
So that is how to easily understand entanglement as destroying coherence: the more you're entangled, the more the orthogonality of the other system kills your off-diagonal terms, and the more your substate looks like a classical probability mixture, transferring the cool quantum effects to the system-as-a-whole.
(top answer)

And:
Finally, a very brief answer to question 3: Yes, decoherence understood as loss of coherent superposition involves entanglement and/or a dissipative dynamics in the presence of another system (measurement apparatus, environment, etc). Sometimes though it may mean loss of phase coherence under internal interactions.
(second answer -- which it appears I should now read in more detail!)
 
  • #20
Talisman said:
Then we are certainly using different definitions of decoherence! From Wikipedia, for example:
That is not incorrect; but it is certainly NOT a general definition of decoherence and it is not how the word is commonly used. You don't need to have a measurement device present or even do a measurement to have decoherence*.
In the case of e.g. a real-world spin ensemble (say a bunch of implanted ions in a piece of silicon), thermal fluctuations will cause decoherence whether we measure the system or not, and in most mathematical descriptions of such a system the measurement is not even part of the model (you effectively just use the Born rule at the end). As mentioned above, we quantify this using e.g. the T2 parameter (as well as a bunch of other coherence-related parameters one can measure).

The connection between decoherence and the measurement problem might be interesting, but it is completely irrelevant for how the concept of decoherence is used in mainstream physics today. Most people who use this word daily (like me) have little or no interest in, say, the measurement problem or interpretations of QM.

*) Yes, I do know that one could argue that decoherence is caused by the "environment performing a measurement", but that is in my view a somewhat silly way to think about things, since this is certainly not what most people mean by "the act of measuring".
 
  • #21
Thanks @f95toli, it is certainly interesting to learn about the word's usage in a broader context. My education has been primarily in quantum computing and the foundations of QM, and I appreciate that the usage of words may differ.

For whatever it's worth, Hossenfelder's example does involve a measurement device:
A detector is basically a device that amplifies a signal. This necessarily means that the superposition which we want to measure interacts with many other particles, both along the way to the detector, and in the detector. This is what you want to describe with decoherence.
I am inclined to agree with Demystifier's explanation in this case.
 
  • Like
Likes Demystifier
  • #22
Talisman said:
Perhaps the confusion here comes from the fact that I'm choosing an observable with a finite basis, whereas normally we consider one with a continuous spectrum (usually position)?
No.

Talisman said:
That was the "punchline" of decoherence as it was taught to me.
How, when, and by whom was it taught to you?

Have you read any of the classic treatments of decoherence in the literature? For example, by Zurek or Schlosshauer?

Talisman said:
Edit 2: And if we have weak entanglement, we get "weak decoherence," per this answer: https://physics.stackexchange.com/a/35049:
Any purported "explanation" that talks about gravitons is speculation. We don't have a good theory of quantum gravity and we don't have any experimental evidence for quantum aspects of gravity.

Talisman said:
Please also see the answers here, which all explain the same thing that I have been, regarding how entanglement causes decoherence
Consider a pair of entangled qubits whose spins are measured in a typical experiment that is testing for violations of the Bell inequalities. Would you say the entanglement of those two qubits destroys their coherence? If so, how can the experimental tests that find violations of the Bell inequalities possibly work?
 
  • #23
PeterDonis said:
How, when, and by whom was it taught to you?
It was first taught to me in a quantum computing course in university almost two decades ago, and reinforced in various ways since. You can find the same explanation here (in the section titled "Decoherence") from Scott Aaronson. Demystifier agrees with my explanation, as do the top two results on the physics SE question I shared, so I am surprised that you still find it controversial. Clearly many of us have been taught this way. I do not claim it is the only way.

Yes, I read Zurek's original long ago.

PeterDonis said:
Consider a pair of entangled qubits whose spins are measured in a typical experiment that is testing for violations of the Bell inequalities. Would you say the entanglement of those two qubits destroys their coherence? If so, how can the experimental tests that find violations of the Bell inequalities possibly work?
The way I was taught, we only use the word "decoherent" when the particles playing the role of "environment" are not "easily tracked" (to use your language). Bell experiments are designed specifically to track both particles, so they are not called decoherent.

But anyway, the reduced density matrix of a Bell state is exactly the same as that of a formally decoherent two-qubit system, and hence it does not show interference, which is actually central to the experiment. In particular, the state ##|z_+\rangle + |z_-\rangle## will always be measured as ##|x_+\rangle## and never as ##|x_-\rangle## (for the same reason as my single-particle example above). On the other hand, given the Bell state ##|z_+\rangle|z_+\rangle + |z_-\rangle|z_-\rangle##, the first particle will be measured as ##|x_+\rangle## and ##|x_-\rangle## with equal probability (and the same for y-spin), as is required to test the inequalities.
 
  • Like
Likes Demystifier
  • #24
Talisman said:
The way I was taught, we only use the word "decoherent" when the particles playing the role of "environment" are not "easily tracked" (to use your language). Bell experiments are designed specifically to track both particles, so they are not called decoherent.
Ok, but then consider: suppose I set up exactly the same scenario as in the standard Bell experiments, but I only measure one particle. I let the other one just fly off (for example, it could be a photon that flies away, per a previous post of yours). According to you, that means the particle I do measure is decohered. But the preparation process that produced the two entangled particles is exactly the same in both cases. So your position implies that whether or not entanglement destroys coherence depends on what does or does not happen to one of the particles in the future, after the entanglement is prepared. That makes no sense.

Talisman said:
the reduced density matrix of a Bell state is exactly the same as that of a formally decoherent two-qubit system, and hence it does not show interference
According to your idiosyncratic use of the term "interference", perhaps. I would say that interference is irrelevant to Bell inequality tests since that's not what they're testing for. In any case, we're not talking about interference here, we're talking about decoherence.
 
  • #25
PeterDonis said:
Ok, but then consider: suppose I set up exactly the same scenario as in the standard Bell experiments, but I only measure one particle. I let the other one just fly off (for example, it could be a photon that flies away, per a previous post of yours). According to you, that means the particle I do measure is decohered. But the preparation process that produced the two entangled particles is exactly the same in both cases. So your position implies that whether or not entanglement destroys coherence depends on what does or does not happen to one of the particles in the future, after the entanglement is prepared. That makes no sense.
Indeed, and you are far from the first to notice this! For example, check out this paper of Susskind and Bousso: https://arxiv.org/pdf/1105.3796.pdf. (Incidentally, here they give precisely the same definition of decoherence that I do, and I hope you will agree that Leonard Susskind is a trustworthy source.)
Decoherence has two important limitations: it is subjective, and it is in principle reversible. This is a problem if we rely on decoherence for precise tests of quantum mechanical predictions. ... Because coherence is never lost in the full Hilbert space SAE, the speed, extent, and possible outcomes of decoherence depend on the definition of the environment E. This choice is made implicitly by an observer, based on practical circumstances: the environment consists of degrees of freedom that have become entangled with the system and apparatus but remain unobserved.

PeterDonis said:
According to your idiosyncratic use of the term "interference", perhaps.
Well, mine and Scott Aaronson's and a whole bunch of other people's, apparently.
 
  • #26
Talisman said:
you are far from the first to notice this! For example, check out this paper of Susskind and Bousso: https://arxiv.org/pdf/1105.3796.pdf.
Thanks for the reference, I'll take a look.

Talisman said:
I hope you will agree that Leonard Susskind is a trustworthy source
I don't consider anyone a trustworthy source in the sense that I'll accept what they say as true without reading it and evaluating it for myself.

Even leaving that aside, I'm less inclined to trust Susskind because he's been claiming victory in what he calls the "black hole war" for years now on the basis of string theory, which has no experimental evidence for it.

Talisman said:
mine and Scott Aaronson's and a whole bunch of other people's, apparently.
I haven't had a chance to read the Aaronson reference you gave, I'll take a look.
 
  • #27
Talisman said:
For example, check out this paper of Susskind and Bousso: https://arxiv.org/pdf/1105.3796.pdf.
This paper's claims are interpretation dependent, since they are explicitly using the MWI. Discussion of QM interpretations belongs in the interpretation subforum, not here.
 
  • #28
Talisman said:
The way I was taught, we only use the word "decoherent" when the particles playing the role of "environment" are not "easily tracked" (to use your language).
From the Aaronson article you linked to, I think it's a bit more than that. The particles playing the role of the environment have to not have been prepared by the experimenter. For example, Aaronson asks, talking about a case where we have two entangled qubits and we are only measuring one, "what if the second qubit was a stray photon, which happened to pass through your experiment on its way to the Andromeda galaxy?" In this kind of situation, it's not just that you can't track the second qubit after it gets entangled with the first; it's that whatever entanglement it has with the first is not due to any controlled preparation process that you did, it's just a random unpredictable event that messes up the isolation of your first qubit. (Which means that you also don't know the exact state of the two-qubit system; all you know is that it is some state that gives the reduced density matrix for the first qubit corresponding to a 50-50 chance of each measurement outcome for the measurement you do on it. But there are an infinite number of possible two-qubit states that can do that.)

In this hypothetical case where the "environment" is just one stray photon, yes, you could say that entanglement destroys coherence, because you didn't and can't control the entanglement process. And this does make the scenario different from the Bell inequality experiment, because of course in that experiment you are deliberately preparing a particular two-qubit entangled state.
 
  • #29
Talisman said:
mine and Scott Aaronson's
I'm not so sure. Here is an example of Aaronson's use of the term "interference" (from the article you linked to):

"To see an interference pattern, you'd have to perform a joint measurement on the two qubits together."

This implies that it makes no sense to talk about "interference" if you're only measuring a single qubit. Which is the position I've been taking, not the position you've been taking.
 
  • #30
PeterDonis said:
I'm not so sure. Here is an example of Aaronson's use of the term "interference" (from the article you linked to):

"To see an interference pattern, you'd have to perform a joint measurement on the two qubits together."
Yes, here he's giving an example of decoherence using two entangled qubits.

PeterDonis said:
This implies that it makes no sense to talk about "interference" if you're only measuring a single qubit. Which is the position I've been taking, not the position you've been taking.

https://www.scottaaronson.com/democritus/lec9.html
So given a qubit, we can transform it by applying any 2-by-2 unitary matrix -- and that leads already to the famous effect of quantum interference. For example, consider the unitary matrix ...
He goes on to give exactly the same example of single-qubit interference that I do. I'm afraid it's very common in quantum computing, and you're going to have to evict it from many more heads than mine. Unfortunately, as he goes on to show, it indeed captures the very essence of all "quantum weirdness", so you will have much more work to do still!
So, cancellation between positive and negative amplitudes can be seen as the source of all "quantum weirdness" -- the one thing that makes quantum mechanics different from classical probability theory. How I wish someone had told me that when I first heard the word "quantum"!
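For concreteness, here is that kind of single-qubit interference in miniature (a NumPy sketch using the Hadamard gate; Aaronson's exact matrix may differ):

```python
import numpy as np

# Apply a 2x2 unitary (here the Hadamard gate) twice to |0>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)

once = H @ ket0    # (|0> + |1>)/sqrt(2): 50-50 if measured now
twice = H @ once   # back to |0>: the two amplitudes for |1> cancel

print(np.round(np.abs(once) ** 2, 12))   # [0.5, 0.5]
print(np.round(np.abs(twice) ** 2, 12))  # [1.0, 0.0] -- destructive interference
```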
PeterDonis said:
Which means that you also don't know the exact state of the two-qubit system
Yes you do. In fact, he writes down the state explicitly! But the whole punchline is that even if you know the second state perfectly, tracing over it still makes the first into a mixed state. As he says, "If we now ignore the second qubit..."

I could find you more examples from his site and blog where he harps on the same point repeatedly (as have numerous others, including @Demystifier), but I'm afraid I've run out of steam here.

Anyway, thanks for your help!
 
  • #31
Talisman said:
Yes you do. In fact, he writes down the state explicitly!
If the second qubit is a random photon that happens to be passing through your experiment, you don't know the entangled state. You can't possibly.

I don't think you're actually addressing the issues I'm raising.
 
  • #32
PeterDonis said:
If the second qubit is a random photon that happens to be passing through your experiment, you don't know the entangled state. You can't possibly.

I don't think you're actually addressing the issues I'm raising.
Please see the example again. He writes down the state explicitly: ##\frac{|00\rangle + |11\rangle}{\sqrt{2}}##.

But it doesn't matter, because you can take the partial trace yourself and see that it results in the maximally mixed state that he (and others) equate with decoherence -- provided that you are thereafter unable to measure the "environmental" particle for whatever reason. That does not depend on MWI or any other interpretation.
 
  • #33
Talisman said:
Please see the example again. He writes down the state explicitly: ##|00\rangle + |11\rangle##.
Again, you're not addressing the issue I'm raising. You can't know that the two-qubit system is in this state unless you prepared it that way. If the second qubit is a random photon that happened to pass through your experiment, you didn't prepare the two-qubit state, so you don't know what it is.

I realize Aaronson does not explicitly state in his article what I just stated above. That doesn't mean he disagrees with it. He just didn't say it. What I say is obvious. If you don't agree with what I say, then give me an actual argument. Don't just keep repeating what Aaronson says. I know what's in Aaronson's article. I also know that what's in Aaronson's article does not address the issue I'm raising.
 
  • #34
Talisman said:
That does not depend on MWI or any other interpretation.
The mixed state you obtain by partial tracing is independent of any interpretation, yes. I did not say otherwise. The only thing I have said is interpretation dependent is the claims made by the Bousso & Susskind paper you referenced, which explicitly uses the MWI.
 
  • #35
Talisman said:
you can take the partial trace yourself and see that it results in the maximally mixed state that he (and others) equate with decoherence -- provided that you are thereafter unable to measure the "environmental" particle for whatever reason.
I already stated my objection to this position in post #24. Your only response to that was to say I'm far from the first to notice it and reference the Bousso & Susskind paper. I am unable to find anything relevant to that issue in that paper.
 