How Does Environmentally Induced Decoherence Affect Quantum State Reduction?

  • B
  • Thread starter Feeble Wonk
  • Start date
  • Tags
    Decoherence
In summary: the unitary evolution of the composite (system plus environment) keeps the composite state pure, with exactly zero entropy. The reduced density operator of the system alone is mixed and has higher entropy, and the reduced density operator of the environment alone is likewise mixed with higher entropy; only the composite is exact and pure. In other words, spontaneous quantum state reduction through environmentally induced decoherence involves the interaction between a system and its environment, which causes the system to become "mixed" and increase in entropy while the composite system remains in a pure, zero-entropy state.
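The entropy bookkeeping in that summary is easy to check numerically. Below is a minimal sketch (not taken from any post in the thread), using a maximally entangled two-qubit state as a toy "system + environment" composite: the composite density operator is pure with zero von Neumann entropy, while each reduced density operator is maximally mixed with entropy ln 2.

```python
# Toy check (illustration only): composite pure state vs. mixed reduced states.
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                  # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Maximally entangled two-qubit state (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_total = np.outer(psi, psi)                    # pure composite density operator

rho4 = rho_total.reshape(2, 2, 2, 2)              # indices (s, e, s', e')
rho_system = np.einsum('abcb->ac', rho4)          # trace out the environment
rho_env = np.einsum('abad->bd', rho4)             # trace out the system

print("S(composite)   =", round(von_neumann_entropy(rho_total), 6))   # 0.0
print("S(system)      =", round(von_neumann_entropy(rho_system), 6))  # ln 2 ~ 0.693
print("S(environment) =", round(von_neumann_entropy(rho_env), 6))     # ln 2 ~ 0.693
```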
  • #176
naima said:
I know that we can go on to decompose everything in terms of vectors, to add them, to square them, to multiply each case by a probability, to add them again. It works very well. But...
All the things like POVM's, open quantum systems, and so on, aren't really a generalization of standard QM. They have an equivalent description in standard QM with a larger Hilbert space. So in order to discuss foundational issues, we can just discuss standard QM and then later take partial traces and so on if we want to restrict to subsystems. If we can clarify the interpretational issues in standard QM, we automatically clarify them for open quantum systems as well.

atyy said:
But you still need "brain" or "information" as something special. If there is no brain in the universe, then does the theory predict that anything happens?
No, a brain is just matter like everything else. And information isn't a primitive concept at all. These concepts don't have a special status. If there is no brain, then all the processes still happen. There is just nobody who thinks about them. For instance, there might be a Hamiltonian that describes the whole universe and it just doesn't make matter accumulate into things like brains. A brain is just a certain constellation of matter, just like a chair or a molecule, although a quite complex one. (Note that this has nothing to do with consciousness or anything like that. I just describe all the matter in the universe within the same quantum theory, including the matter that makes up physicists. If this matter is governed by the laws of quantum mechanics as well, then this should certainly be possible.)
 
  • #177
rubi said:
No, a brain is just matter like everything else. And information isn't a primitive concept at all. These concepts don't have a special status. If there is no brain, then all the processes still happen. There is just nobody who thinks about them. For instance, there might be a Hamiltonian that describes the whole universe and it just doesn't make matter accumulate into things like brains. A brain is just a certain constellation of matter, just like a chair or a molecule, although a quite complex one. (Note that this has nothing to do with consciousness or anything like that. I just describe all the matter in the universe within the same quantum theory, including the matter that makes up physicists. If this matter is governed by the laws of quantum mechanics as well, then this should certainly be possible.)

But how does this work? Let's say we have only a wave function of the universe, with deterministic unitary time evolution. What happens? Either nothing is happening since we only have a wave function, or we have many worlds since the wave function is a superposition and all branches happen.
 
  • #178
The Gleason and Gleason/Busch theorems were a major step forward in QM.
They are talking about "events" that sum to Id. They have a scalar product and a norm (the trace norm).
Decoherence is not about vectors in the Hilbert space. Decoherence is about events.
 
  • #179
atyy said:
But how does this work? Let's say we have only a wave function of the universe, with deterministic unitary time evolution. What happens? Either nothing is happening since we only have a wave function, or we have many worlds since the wave function is a superposition and all branches happen.
You don't only have a wave function ##\Psi##, you also have observables ##\hat X\phi^{\hat X}_a = \lambda^{\hat X}_a\phi^{\hat X}_a## for all possible physical questions and you have the Born rule ##P(X\in A)=\int_A |\left<\phi^{\hat X}_a,\Psi\right>|^2\mathrm d a##. The wave function ##\Psi## contains all the data you need in order to obtain the answer to any (probabilistic) question you might ask. If you have a question about the system, you just choose the appropriate observable ##\hat X## and then the Born rule allows you to compute the probability for something happening. In order for something to happen, there need not be a human who observes it. I'm not using the MWI; there is only one world in my interpretation. I'm just saying that the state ##\Psi## allows me to calculate probabilities for all possible physical events. Of course only one of these events will ever happen, but if we accept that nature is intrinsically random, then we can't do better than to have a theory that calculates only probabilities and there is no underlying mechanism that selects one of them.
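For concreteness, here is a minimal sketch of that recipe (state, observable, Born rule) for a spin-1/2 prepared up along z, with the Born rule evaluated for the spin-x observable. It is a toy example, not part of the original post:

```python
# Born-rule sketch (toy example): spin-x probabilities for a state prepared up along z.
import numpy as np

psi = np.array([1.0, 0.0], dtype=complex)              # |+z>, the prepared state Psi
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)    # the chosen observable X-hat

eigvals, eigvecs = np.linalg.eigh(sigma_x)             # eigenvalues lambda_a, eigenvectors phi_a
for lam, phi in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(phi, psi)) ** 2                 # Born rule: P(lambda_a) = |<phi_a, Psi>|^2
    print(f"P(spin-x = {lam:+.0f}) = {prob:.2f}")       # both outcomes come out 0.50
```

The same numbers reappear later in the thread in stevendaryl's spin-up-along-z example.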

naima said:
The Gleason and Gleason/Busch theorems were a major step forward in QM.
They are talking about "events" that sum to Id. They have a scalar product and a norm (the trace norm).
Decoherence is not about vectors in the Hilbert space. Decoherence is about events.
I don't deny that Gleason's theorem is a great theorem, but it has nothing to do with decoherence. Decoherence is the mechanism that ensures that the probability distributions of QM don't show oscillatory behaviour, so we don't usually get interference patterns unless we face a situation where decoherence doesn't play a role. Decoherence is just standard QM of very large, non-isolated systems.
 
  • Like
Likes eloheim and Mentz114
  • #180
rubi said:
You don't only have a wave function ##\Psi##, you also have observables ##\hat X\phi^{\hat X}_a = \lambda^{\hat X}_a\phi^{\hat X}_a## for all possible physical questions and you have the Born rule ##P(X\in A)=\int_A |\left<\phi^{\hat X}_a,\Psi\right>|^2\mathrm d a##. The wave function ##\Psi## contains all the data you need in order to obtain the answer to any (probabilistic) question you might ask.

That's not really true. There are probabilistic questions that don't have answers: "What is the probability that this electron has spin-up in the x-direction and the y-direction?" There are specific questions that you're allowed to ask in QM, and it answers all of those, but that's sort of tautological: It answers the questions that it can answer.
 
  • #181
stevendaryl said:
That's not really true. There are probabilistic questions that don't have answers: "What is the probability that this electron has spin-up in the x-direction and the y-direction?" There are specific questions that you're allowed to ask in QM, and it answers all of those, but that's sort of tautological: It answers the questions that it can answer.
That's right, but it's not a problem of QM. The violation of Bell's inequality shows that it is in principle impossible to improve this situation (unless you want to exploit loopholes). It's not QM that prevents us from asking that question, but nature itself. So in some sense, QM is a theory that already achieves everything that a physical theory can possibly achieve. (Of course it's not the only theory that can achieve everything, but it's one of them.) QM just accepts it as a fact that nature forces these questions to be meaningless.

--
By the way, I understand that this doesn't seem satisfactory and I'm also interested in how to interpret this situation. All I'm saying is that it's not QM's fault that it gives unsatisfactory answers. It has to.
 
  • #182
rubi said:
You don't only have a wave function ##\Psi##, you also have observables ##\hat X\phi^{\hat X}_a = \lambda^{\hat X}_a\phi^{\hat X}_a## for all possible physical questions and you have the Born rule ##P(X\in A)=\int_A |\left<\phi^{\hat X}_a,\Psi\right>|^2\mathrm d a##. The wave function ##\Psi## contains all the data you need in order to obtain the answer to any (probabilistic) question you might ask. If you have a question about the system, you just choose the appropriate observable ##\hat X## and then the Born rule allows you to compute the probability for something happening. In order for something to happen, there need not be a human who observes it. I'm not using the MWI; there is only one world in my interpretation. I'm just saying that the state ##\Psi## allows me to calculate probabilities for all possible physical events. Of course only one of these events will ever happen, but if we accept that nature is intrinsically random, then we can't do better than to have a theory that calculates only probabilities and there is no underlying mechanism that selects one of them.

But if you have the observables too, and you use words like "questions you might ask", then the "you" is still postulated as something extra that you need, something not defined by the wave function alone.
 
  • #183
rubi said:
That's right, but it's not a problem of QM. The violation of Bell's inequality shows that it is in principle impossible to improve this situation (unless you want to exploit loopholes). It's not QM that prevents us from asking that question, but nature itself. So in some sense, QM is a theory that already achieves everything that a physical theory can possibly achieve. (Of course it's not the only theory that can achieve everything, but it's one of them.) QM just accepts it as a fact that nature forces these questions to be meaningless.

It's not at all clear to me how much of the QM formalism is about the way nature is. The Born rule that says that "if you measure observable O you'll get an eigenvalue with such-and-such a probability" is not really about nature. In nature, we don't have observables. Not directly, anyway. You set up an experiment and the result of the experiment is this or that macroscopically distinguishable state of a detector. So what you're observing is not (directly) any property at all of the system under investigation (an electron, for example). You're observing a property of a macroscopic object, the position of a pointer, or the location of a dark spot on a photographic film, etc. So, to me, the whole mathematical apparatus of Hermitian operators and their expectation values seems removed from what's really going on in nature. I'm not exactly sure what I would like in a quantum theory, but I think that there should be a way to formulate it that doesn't mention measurements or observables or a macroscopic/microscopic distinction. Those should be derived concepts, not primitives.
 
  • #184
stevendaryl said:
It's not at all clear to me how much of the QM formalism is about the way nature is. The Born rule that says that "if you measure observable O you'll get an eigenvalue with such-and-such a probability" is not really about nature. In nature, we don't have observables. Not directly, anyway. You set up an experiment and the result of the experiment is this or that macroscopically distinguishable state of a detector. So what you're observing is not (directly) any property at all of the system under investigation (an electron, for example). You're observing a property of a macroscopic object, the position of a pointer, or the location of a dark spot on a photographic film, etc. So, to me, the whole mathematical apparatus of Hermitian operators and their expectation values seems removed from what's really going on in nature. I'm not exactly sure what I would like in a quantum theory, but I think that there should be a way to formulate it that doesn't mention measurements or observables or a macroscopic/microscopic distinction. Those should be derived concepts, not primitives.

Both Many-Worlds and Bohmian interpretations DO formulate QM without observables being primitives. I'm not completely satisfied with either of those, but they are more along the lines of what I would want, I think.
 
  • #185
atyy said:
But if you have the observables too, and you use words like "questions you might ask", then the "you" is still postulated as something extra that you need, something not defined by the wave function alone.
Well, if you like it better, I could have written "questions that the universe might have to decide upon". It's not necessary for some being to ask the questions in order to have the universe decide upon them. But if you are a being, made of matter, governed by the laws of QM, then the evolution of the universe (containing yourself) might make you become aware of the answer that the universe has assigned to these questions. The word "observable" is also not to be taken literally. It's just a name for the mathematical objects that refer to parts of the universe, whether they are observed or not.
 
  • #186
stevendaryl said:
It's not at all clear to me how much of the QM formalism is about the way nature is. The Born rule that says that "if you measure observable O you'll get an eigenvalue with such-and-such a probability" is not really about nature.
That's one way to phrase the Born rule, but you can also phrase it in a way that doesn't use the word measurement: "With such-and-such a probability, the eigenvalue ##\lambda## will be physically realized by nature."

In nature, we don't have observables. Not directly, anyway. You set up an experiment and the result of the experiment is this or that macroscopically distinguishable state of a detector. So what you're observing is not (directly) any property at all of the system under investigation (an electron, for example). You're observing a property of a macroscopic object, the position of a pointer, or the location of a dark spot on a photographic film, etc.
Observables refer to some parts of the universe. Of course, not all of these parts are accessible to humans, so humans can usually only learn about observables corresponding to macroscopic objects. But in the physical theory, observables are just how the correspondence between the theory and the real world is made, independent of whether humans can access them. The word "observable" is probably not very good.

So, to me, the whole mathematical apparatus of Hermitian operators and their expectation values seems removed from what's really going on in nature. I'm not exactly sure what I would like in a quantum theory, but I think that there should be a way to formulate it that doesn't mention measurements or observables or a macroscopic/microscopic distinction. Those should be derived concepts, not primitives.
Well, I think one can formulate the theory without mentioning words like measurement. We just have to choose our words more carefully. We just usually don't do this, because we are used to the physics slang. When I use these words, I don't really have their literal meaning in mind.

stevendaryl said:
Both Many-Worlds and Bohmian interpretations DO formulate QM without observables being primitives. I'm not completely satisfied with either of those, but they are more along the lines of what I would want, I think.
Well, in BM, at least position is a primitive observable, although it can't be accessed directly. In many worlds, observables are avoided by just specifying a basis directly, which essentially corresponds to specifying a preferred set of observables. There always needs to be some correspondence between the physical theory and some parts of the universe, otherwise the theory can't make predictions about those parts. I use the word "observable" for this correspondence.
 
  • #187
rubi said:
That's one way to phrase the Born rule, but you can also phrase it in a way that doesn't use the word measurement: "With such-and-such a probability, the eigenvalue ##\lambda## will be physically realized by nature."

I don't think it makes any sense to phrase it that way. Suppose I put an electron into a state that is spin-up in the z-direction. We can compute a probability of [itex]\frac{1}{2}[/itex] associated with the statement "The electron has spin-up in the x-direction". How long do I have to wait for that statement to be "physically realized by nature"? It's never going to be physically realized. If I don't act on the electron, it'll continue to be spin-up in the z-direction forever.
 
  • #188
rubi said:
Well, if you like it better, I could have written "questions that the universe might have to decide upon". It's not necessary for some being to ask the questions in order to have the universe decide upon them. But if you are a being, made of matter, governed by the laws of QM, then the evolution of the universe (containing yourself) might make you become aware of the answer that the universe has assigned to these questions. The word "observable" is also not to be taken literally. It's just a name for the mathematical objects that refer to parts of the universe, whether they are observed or not.

But if you do that, then the universe will simultaneously decide upon a particle's position and momentum, since the wave function allows you to calculate the distribution of both observables.
 
  • #189
stevendaryl said:
I don't think it makes any sense to phrase it that way. Suppose I put an electron into a state that is spin-up in the z-direction. We can compute a probability of [itex]\frac{1}{2}[/itex] associated with the statement "The electron has spin-up in the x-direction". How long do I have to wait for that statement to be "physically realized by nature"? It's never going to be physically realized. If I don't act on the electron, it'll continue to be spin-up in the z-direction forever.

atyy said:
But if you do that, then the universe will simultaneously decide upon a particle's position and momentum, since the wave function allows you to calculate the distribution of both observables.

The following answer applies to both of you:
This is where the consistency requirement comes in. The universe doesn't decide on all these facts individually; it chooses one history among a set of consistent histories. So if the universe has decided on a history that has a definite spin-z value at ##t=t_1##, then it didn't decide on a history in which spin-x had a value at ##t=t_1##. If the universe has decided on a history that had a well-defined position at ##t=t_1##, then it didn't decide on a history with a well-defined momentum at ##t=t_1##.
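To make the consistency requirement concrete, here is a small numerical sketch (a toy example using the standard decoherence functional ##D(\alpha,\alpha') = \mathrm{Tr}[C_\alpha \rho\, C_{\alpha'}^\dagger]## from the consistent-histories literature, not taken from the post). For a spin prepared up along z with trivial dynamics, the family of two-time histories "spin-x value at ##t_1##, then spin-z value at ##t_2##" has nonzero off-diagonal entries, so it is not a consistent family and no single history can be assigned from it:

```python
# Consistency check (toy example): decoherence functional D(a, a') = Tr[C_a rho C_a'^dagger].
import itertools
import numpy as np

up_z = np.array([1, 0], dtype=complex)
dn_z = np.array([0, 1], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
dn_x = np.array([1, -1], dtype=complex) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

rho = proj(up_z)                              # prepared spin-up along z; no dynamics in between

P_t1 = {'x+': proj(up_x), 'x-': proj(dn_x)}   # spin-x alternatives at t1
P_t2 = {'z+': proj(up_z), 'z-': proj(dn_z)}   # spin-z alternatives at t2
histories = list(itertools.product(P_t1, P_t2))

def chain(h):
    a, b = h
    return P_t2[b] @ P_t1[a]                  # class operator C_h = P(t2) P(t1)

for h1, h2 in itertools.combinations(histories, 2):
    D = np.trace(chain(h1) @ rho @ chain(h2).conj().T)
    print(f"D({h1}, {h2}) = {D.real:+.3f}")   # any nonzero entry means the family is inconsistent
```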
 
  • #190
rubi said:
The following answer applies to both of you:
This is where the consistency requirement comes in. The universe doesn't decide on all these facts individually; it chooses one history among a set of consistent histories. So if the universe has decided on a history that has a definite spin-z value at ##t=t_1##, then it didn't decide on a history in which spin-x had a value at ##t=t_1##. If the universe has decided on a history that had a well-defined position at ##t=t_1##, then it didn't decide on a history with a well-defined momentum at ##t=t_1##.

Okay, I do not know enough about consistent histories to make an intelligent argument, but just for confirmation about what you're saying:

I put an electron into a state of being spin-up in the z-direction. So I have a probability of [itex]\frac{1}{2}[/itex] of it being spin-up in the x-direction. The meaning of that is that for all histories in which the electron has a spin in the x-direction (which might be none), half of them have spin-up and half have spin-down.
 
  • #191
stevendaryl said:
Okay, I do not know enough about consistent histories to make an intelligent argument
If you're interested, there is a very nice book called "Consistent Quantum Theory" by Robert Griffiths.

I put an electron into a state of being spin-up in the z-direction. So I have a probability of [itex]\frac{1}{2}[/itex] of it being spin-up in the x-direction. The meaning of that is that for all histories in which the electron has a spin in the x-direction (which might be none), half of them have spin-up and half have spin-down.
That depends on the time. The electron has spin-z up at ##t=t_1##. Then there are several histories in which the electron has spin-x up at a later time ##t=t_2##, but during the time evolution, the probabilities might have changed. If the time evolution doesn't touch the electron anymore, then you are right.
 
  • #192
rubi said:
The following answer applies to both of you:
This is where the consistency requirement comes in. The universe doesn't decide on all these facts individually; it chooses one history among a set of consistent histories. So if the universe has decided on a history that has a definite spin-z value at ##t=t_1##, then it didn't decide on a history in which spin-x had a value at ##t=t_1##. If the universe has decided on a history that had a well-defined position at ##t=t_1##, then it didn't decide on a history with a well-defined momentum at ##t=t_1##.

If you are using consistent histories, that is probably fine. But the view of reality there is much weaker, and whether the observer is really removed is debatable. Also, in a sense, consistent histories has collapse built into it. In any case, I don't intend to debate consistent histories here - mainly, if you are using consistent histories, I don't have a huge disagreement. I thought your point was that we could retain common sense reality, and remove the observer without introducing hidden variables or MWI - I certainly recognize a weaker sense of reality as a reasonable approach to solving the measurement problem.
 
  • Like
Likes eloheim
  • #193
rubi said:
If you're interested, there is a very nice book called "Consistent Quantum Theory" by Robert Griffiths.

Ok, now I understand - you are using consistent histories. I do acknowledge that as a reasonable approach to the measurement problem. But it would be clearer if you just stated that upfront, e.g. if one is not using the orthodox interpretation, one should say "I am taking an approach which attempts to solve the measurement problem of Copenhagen by doing at least one of the following: (1) hidden variables, (2) many worlds, (3) retrocausation, (4) weaker reality, etc."
 
  • #194
atyy said:
I certainly recognize a weaker sense of reality as a reasonable approach to solving the measurement problem.
I'm not sure if I understand what you mean by "weaker sense of reality" here. Could you expand on that?
 
  • Like
Likes AlexCaledin
  • #195
Feeble Wonk said:
I'm not sure if I understand what you mean by "weaker sense of reality" here. Could you expand on that?

I first heard about it from bhobba:
https://www.physicsforums.com/threa...gen-interpretation.735465/page-6#post-4654211

From my more naive point of view - consistent histories does not admit a single fine grained reality. As we know in regular Copenhagen, collapse changes the evolution of the wave function. In consistent histories, we can get one set of "coarse grained" consistent histories by choosing a certain set of times at which to collapse the wave function. If we collapse the wave function more often, then we have a "fine grained" set of consistent histories. However, the coarse grained set is not obtainable by coarse graining the fine grained set. A way to escape this is to allow probabilities that are negative or greater than one: http://arxiv.org/abs/1106.0767 (I'm not advocating this solution, but this paper has a good explanation of the problem).
 
  • #196
atyy said:
consistent histories does not admit a single fine grained reality

Steven Weinberg says it this way,

"There is nothing absurd or inconsistent about the decoherent histories approach in particular, or about the general idea that the state vector serves only as a predictor of probabilities, not as a complete description of a physical system. Nevertheless, it would be disappointing if we had to give up the “realist” goal of finding complete descriptions of physical systems... it is hard to live with no description of physical states at all, only an algorithm for calculating probabilities."

Steven Weinberg, Lectures on Quantum Mechanics
https://en.wikiquote.org/wiki/Consistent_histories
 
  • Like
Likes eloheim, atyy and vanhees71
  • #197
atyy said:
However, the coarse grained set is not obtainable by coarse graining the fine grained set. A way to escape this is to allow probabilities that are negative or greater than one: http://arxiv.org/abs/1106.0767 (I'm not advocating this solution, but this paper has a good explanation of the problem).
So, if I'm understanding this properly, this theory would suggest that Nature is fundamentally deterministic, but not fully predictable even in principle. Correct?
 
  • #198
Feeble Wonk said:
So, if I'm understanding this properly, this theory would suggest that Nature is fundamentally deterministic, but not fully predictable even in principle. Correct?

I very much appreciate the detailed working out of consistent histories and decoherent histories and their variations by Omnes, Griffiths, Hartle, Gell-Mann etc. Their solid work goes far beyond the empty handwaving of Ballentine or Peres (I should point out that consistent histories does not support Ballentine or Peres, because consistent histories does not have deterministic unitary evolution of the wave function as fundamental). However, I cannot say that I am convinced that it represents a viable solution of the measurement problem - in particular, whether it can really be said to remove the observer from quantum mechanics. So I can't really answer your question.

It might be better for me to point to Griffiths' own work http://plato.stanford.edu/entries/qm-consistent-histories/ and the criticism in the general article by Laloe http://arxiv.org/abs/quant-ph/0209123.
 
  • #199
I don't know if this paper has been mentioned already in this thread, but Weinberg wrote a paper about quantum mechanical measurement:
http://arxiv.org/pdf/1603.06008v1.pdf

He speculates that the evolution of large-scale systems might not be unitary, but that speculation is not assumed in his paper. As mentioned (either here, or in a different, related thread), the treatment of non-isolated systems interacting with an environment is nonunitary, although it's not clear whether unitarity might be restored if you consider the complete system (including the environment).
 
  • #200
bhobba said:
'However, classicality is implicitly contained in 2 and 3 through the partitioning of the universal degrees of freedom into separable, localized substructures interacting via Hamiltonians that do not re-entangle them, so (given U-O) one has to put in classicality to get classicality out'

That's the factorisation issue. It's a legit issue, but as I have said many times, far too much is made of it IMHO. We do the same thing in classical mechanics, for example, but no one jumps up and down about that.

That said, I have read Wallace's book and he uses an approach based on histories that seems to bypass it.

Thanks
Bill
No, Wallace doesn't bypass the problem. He just helps himself to already disjoint Hilbert space descriptions as ostensibly part of the 'bare theory'. This fails to account for the emergence of classical distinguishability, i.e., 'system' as distinct from its 'environment'. And it wrongly describes contingent information (the empirical situation at hand) as part of the pure theory.
This is not an issue in classical physics because classical physics does not have entanglement, indistinguishability, or the measurement problem. One can't get rid of these QM problems by saying that they aren't problems in classical physics. There is a solution to these problems in a non-unitary direct-action approach, so it's not necessary for people to cling to these circular arguments, where the only way to get distinguishability in the unitary-only theory is to assume it from the beginning and then claim that one has demonstrated that it naturally emerges. We can do better than this.
 
  • #201
rkastner said:
No, Wallace doesn't bypass the problem. He just helps himself to already disjoint Hilbert space descriptions as ostensibly part of the 'bare theory'. This fails to account for the emergence of classical distinguishability, i.e., 'system' as distinct from its 'environment'. And it wrongly describes contingent information (the empirical situation at hand) as part of the pure theory.

You raise a lot of points here that I can't really follow. Ok - my background is math, and Wallace uses a very theorem-proof, theorem-proof approach. I really can't fault his math. So let's look at one of his key theorems, the non-contextuality theorem on page 475. Where is his error?

Also, it needs to be said that MW is not the only approach using histories; decoherent/consistent histories does as well. Are your objections the same for that as well?

Thanks
Bill
 
  • #202
Demystifier said:
There are some examples in Sec. 4 of
http://arxiv.org/abs/1210.8447
Dear Myster Demystifier,
I'm writing to you because I think you're one of the clearer advisors, with minimal jargon when it's not needed.
I think this thread is chaotic, full of misunderstandings and arguments among the cognoscenti, and the fellow who started it is likely further in the dark. And these issues are not unique to this thread. I think (you notice I do that a lot) one of the reasons this happens is the level of abstraction that's carried on and the ambiguity of the terminology. A lot of egos want to sound smart.
If someone asked me to explain and prove the Law of Large Numbers I would not start off with Lebesgue measure theory and Kolmogorov's product measures. I would start with the simplest random variable, a fair coin with values of +1 and -1, and I would prove the theorem solely in that context so as to minimize the chance he would get lost. Only then would I ask if he wanted to see further generalizations and recommend texts.
So I ask you why not start with the simplest model that can capture the essence of the problem, e.g. start with a single polarized photon and a polarization analyzer as a measuring device and add as little as possible to make the problem meaningful? Then everyone is on the same page.
 
  • Like
Likes Demystifier
  • #203
Zafa Pi said:
So I ask you why not start with the simplest model that can capture the essence of the problem, e.g. start with a single polarized photon and a polarization analyzer as a measuring device and add as little as possible to make the problem meaningful?
OK, but please specify the problem/question you would like me to explain/answer.
 
  • #204
Demystifier said:
OK, but please specify the problem/question you would like me to explain/answer.
The original statement by the person starting this thread: Despite the best efforts of some of PF's finest, I continue to struggle with the general concept of spontaneous quantum state reduction by means of environmentally induced decoherence.

For example, if we let ##|h_\theta\rangle = [\cos\theta, \sin\theta]## represent the state of a planar polarized photon whose angle of polarization is ##\theta## degrees from horizontal, then the mixed state given by ##|h_0\rangle## and ##|h_{45}\rangle##, each with probability 1/2, would have density matrix ##\rho = \tfrac{1}{2}|h_0\rangle\langle h_0| + \tfrac{1}{2}|h_{45}\rangle\langle h_{45}|##. Would this be enough of a model for the various explanations of what's going on?
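As a numerical companion to that proposal (a sketch only; it models just the photon and an ideal analyzer, with no environment yet), the mixed state ##\rho## can be built directly and the transmission probability through an analyzer at angle ##\varphi## computed as ##\mathrm{Tr}(\rho P_\varphi)##:

```python
# Toy model from the post above: mixture of |h0> and |h45>, ideal analyzer at angle phi.
import numpy as np

def ket(theta_deg):
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t)])   # |h_theta> = [cos(theta), sin(theta)]

rho = 0.5 * np.outer(ket(0), ket(0)) + 0.5 * np.outer(ket(45), ket(45))

def pass_probability(phi_deg):
    """Probability that an analyzer at angle phi transmits the photon: Tr(rho P_phi)."""
    P = np.outer(ket(phi_deg), ket(phi_deg))   # projector onto |h_phi>
    return float(np.trace(rho @ P))

for phi in (0, 45, 90):
    print(f"P(pass | analyzer at {phi} deg) = {pass_probability(phi):.3f}")
# 0 deg: 0.75, 45 deg: 0.75, 90 deg: 0.25
```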
 
  • #205
Zafa Pi said:
The original statement by the person starting this thread: Despite the best efforts of some of PF's finest, I continue to struggle with the general concept of spontaneous quantum state reduction by means of environmentally induced decoherence.

For example, if we let ##|h_\theta\rangle = [\cos\theta, \sin\theta]## represent the state of a planar polarized photon whose angle of polarization is ##\theta## degrees from horizontal, then the mixed state given by ##|h_0\rangle## and ##|h_{45}\rangle##, each with probability 1/2, would have density matrix ##\rho = \tfrac{1}{2}|h_0\rangle\langle h_0| + \tfrac{1}{2}|h_{45}\rangle\langle h_{45}|##. Would this be enough of a model for the various explanations of what's going on?
No, it would not be enough. You didn't explain the role of environment.
 
  • #206
Zafa Pi said:
The original statement by the person starting this thread: Despite the best efforts of some of PF's finest, I continue to struggle with the general concept of spontaneous quantum state reduction by means of environmentally induced decoherence.

For example, if we let ##|h_\theta\rangle = [\cos\theta, \sin\theta]## represent the state of a planar polarized photon whose angle of polarization is ##\theta## degrees from horizontal, then the mixed state given by ##|h_0\rangle## and ##|h_{45}\rangle##, each with probability 1/2, would have density matrix ##\rho = \tfrac{1}{2}|h_0\rangle\langle h_0| + \tfrac{1}{2}|h_{45}\rangle\langle h_{45}|##. Would this be enough of a model for the various explanations of what's going on?
I appreciate your attempt at keeping it simple Zafa. My mathematical deficiencies make that a necessity. Unfortunately (for him anyway), I suspect that Demystifier has become familiar with my particular line of inquiry - typical of the non-professionals dabbling in these issues out of sheer curiosity.

Demystifier said:
No, it would not be enough. You didn't explain the role of environment.
As Demystifier suggests... I believe that my primary cognitive dilemma revolves around the seemingly arbitrary delineation between the environment and the system considered in the decoherence calculations. But, as I've said before, while I lack the mathematical chops to run the numbers, I fully accept that the decoherence process limits observed quantum states such that macroscopic superpositions are suppressed. I'm coming to the gradual conclusion that my confusion comes down to the quantum factorization problem (as I understand it - which is always a major wildcard).

Demystifier has proposed an interesting interpretational perspective (Solipsistic Hidden Variables) that I think might address that problem in an intriguing manner.
http://lanl.arxiv.org/abs/1112.2034
I'm rolling it around in my head at present. I'd be curious about what the other professionals think about the concept though.
 
  • #207
Demystifier said:
No, it would not be enough. You didn't explain the role of environment.
I realize that it is not enough. In my original request (#202) I said: "So I ask you why not start with the simplest model that can capture the essence of the problem, e.g. start with a single polarized photon and a polarization analyzer as a measuring device and add as little as possible to make the problem meaningful?"

A while back I put together a mathematically rigorous treatment of how the measurement of entangled particles could violate Bell's inequality for some high school science enthusiasts. I started with some notions of polarized lenses and light, and ended with how states in the tensor product space are measured (proving my version of Bell's Theorem along the way), all in a dozen comprehensible pages. All that was required was high school algebra, baby Cartesian plane, and a few very elementary probability concepts. It took a lot of effort on my part to get it right.

So I am asking what is the simplest model that could elucidate decoherence? What more is necessary to add to post #204 to make an explanation work? It may well be that this requires more effort than you are willing to devote, and I certainly would not hold that against you, yet I would sure like to see it and I'm sure many others would as well.
 
  • #208
Zafa Pi said:
I realize that it is not enough. In my original request (#202) I said: "So I ask you why not start with the simplest model that can capture the essence of the problem, e.g. start with a single polarized photon and a polarization analyzer as a measuring device and add as little as possible to make the problem meaningful?"

A while back I put together a mathematically rigorous treatment of how the measurement of entangled particles could violate Bell's inequality for some high school science enthusiasts. I started with some notions of polarized lenses and light, and ended with how states in the tensor product space are measured (proving my version of Bell's Theorem along the way), all in a dozen comprehensible pages. All that was required was high school algebra, baby Cartesian plane, and a few very elementary probability concepts. It took a lot of effort on my part to get it right.

So I am asking what is the simplest model that could elucidate decoherence? What more is necessary to add to post #204 to make an explanation work? It may well be that this requires more effort than you are willing to devote, and I certainly would not hold that against you, yet I would sure like to see it and I'm sure many others would as well.
Read Ballentine, page 244, from 'instead of considering the environment ... as an external effect we may include the environment as part of the system.'
Some simple maths shows how the environment can cause a loss of interference by providing which-path information as the random kets ##|e_1\rangle, |e_2\rangle## become orthogonal. This can also be seen as a diminishing of the interference terms in ##\rho## as ##\langle e_1|e_2 \rangle \rightarrow 0##.
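A minimal numerical version of that passage (a sketch along the lines of the quoted treatment, not a transcription of Ballentine's equations): a two-path superposition is entangled with environment kets ##|e_1\rangle, |e_2\rangle## of adjustable overlap, and the interference (off-diagonal) term of the reduced density matrix is proportional to ##\langle e_1|e_2\rangle##:

```python
# Decoherence sketch: interference term of the reduced density matrix vs. <e1|e2>.
import numpy as np

def composite_state(overlap):
    """(|1>|e1> + |2>|e2>)/sqrt(2) with a real environment overlap <e1|e2> = overlap."""
    e1 = np.array([1.0, 0.0])
    e2 = np.array([overlap, np.sqrt(1.0 - overlap ** 2)])
    s1 = np.array([1.0, 0.0])                 # path/state |1>
    s2 = np.array([0.0, 1.0])                 # path/state |2>
    return (np.kron(s1, e1) + np.kron(s2, e2)) / np.sqrt(2)

def reduced_system_rho(psi):
    rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices (s, e, s', e')
    return np.einsum('abcb->ac', rho)              # trace out the environment

for overlap in (1.0, 0.5, 0.0):
    rho_s = reduced_system_rho(composite_state(overlap))
    print(f"<e1|e2> = {overlap:.1f} -> interference term rho_12 = {rho_s[0, 1]:.2f}")
# 1.0 -> 0.50 (full interference), 0.5 -> 0.25, 0.0 -> 0.00 (fully decohered)
```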
 
  • Like
Likes bhobba
  • #209
Just to be absolutely clear: decoherence does not result in the cat being either dead or alive. The environment + apparatus + system are, in principle, quantum mechanical. Therefore, all three remain in superposition. There is an observable of environment + apparatus + system that, if measured, would tell an observer whether all three are in a superposition or not - my guess is QM predicts you'd get the value indicating they are in superposition.
 
  • #210
StevieTNZ said:
Just to be absolutely clear: decoherence does not result in the cat being either dead or alive. The environment + apparatus + system are, in principle, quantum mechanical. Therefore, all three remain in superposition. There is an observable of environment + apparatus + system that, if measured, would tell an observer whether all three are in a superposition or not - my guess is QM predicts you'd get the value indicating they are in superposition.
Ouch. I'm confused again. Could you please define your terms for this explanation? Specifically, what precisely is designated by "environment", "apparatus" and "system", with respect to the cat in the box scenario, and the "observable" that is the sum of those three?
 
