Von Neumann QM Rules Equivalent to Bohm?

In summary, the thread discusses the compatibility between Bohm's deterministic theory and von Neumann's rules for the evolution of the wave function. It is argued that although there is no true collapse in Bohmian mechanics, there is an effective (apparent) collapse that is indistinguishable from a true collapse. This is due to decoherence, where the wave function splits into non-overlapping branches and the Bohmian particle enters only one of them. However, there is disagreement about the applicability of von Neumann's second rule for composite systems.
  • #36
vanhees71 said:
The non-unitarity comes in, because you project to the relevant macroscopic observables (coarse-graining).
Yes, but not a kind of non-unitarity which could pick up only one of the terms in the superposition. Such coarse-graining leads to decoherence, which induces a transition from a coherent to an incoherent superposition. The density matrix evolves from a pure state to a mixed state, i.e. from a non-diagonal matrix to a diagonal one. But the diagonal matrix still has more than one non-vanishing component on the diagonal (e.g. one corresponding to the dead cat and another to the alive cat), so the system still does not pick up only one of the possibilities.
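To make the distinction concrete, here is a minimal numpy sketch (my own two-state toy model, not from the discussion) in which decoherence kills the off-diagonal terms but leaves two non-vanishing diagonal entries, so no single outcome is picked:

```python
import numpy as np

# Coherent superposition |psi> = (|dead> + |alive>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Full decoherence: the off-diagonal (interference) terms vanish,
# but both diagonal entries survive -- no single outcome is selected.
rho_mixed = np.diag(np.diag(rho_pure))

# Purity Tr(rho^2) distinguishes the two: 1 for pure, 1/2 for this mixture
purity_pure = np.trace(rho_pure @ rho_pure).real
purity_mixed = np.trace(rho_mixed @ rho_mixed).real
print(purity_pure, purity_mixed)
```

The diagonal of `rho_mixed` still carries both probabilities (0.5 for "dead", 0.5 for "alive"), which is exactly the point made above.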

To really get only one of the possibilities from this you need to assume something additional (for example a collapse, or some hidden variables, or many worlds), but you, as an adherent of a minimal statistical ensemble interpretation, refuse to make any specific additional assumption. Yes, by accepting such a minimal interpretation you avoid unjustified speculations, but the problem is that such a minimal interpretation leaves some questions unanswered. For me, it's more honest to risk a possibly wrong answer (including collapse) than to pretend that there is no question.
 
Last edited:
  • #37
If a particle hits a photoplate it leaves a spot there. So what else do you need to measure the particle's position? Quantum theory predicts the probability for this to happen, no more and no less. So why would you introduce a collapse to "explain" something which is not explainable within the theory (because within QT, which only states probabilities, there is no cause for the particle to end up at the observed position), at the cost of introducing inconsistencies into the theory? And it's also not clear to me what the collapse explains, because it doesn't provide an explanation either for why the specific particle hits the specific spot on the photoplate. So what exactly does it explain?
 
  • #38
vanhees71 said:
If a particle hits a photoplate it leaves a spot there. So what else do you need to measure the particle's position?
Nothing, that's enough for measurement. But measurement is not an explanation.

vanhees71 said:
Quantum theory predicts the probability for this to happen, not more and not less.
Exactly!

vanhees71 said:
So why would you introduce a collapse to "explain" something which is not explainable within the theory
This is like asking, for instance, why would you introduce neutrino masses to explain observed neutrino oscillations if the oscillations are not explainable by the standard model of massless neutrinos? New ideas in physics are introduced precisely because some phenomena are not explainable within old theories.

vanhees71 said:
but at the cost of introducing inconsistencies of the theory?
New theories often look inconsistent at first (e.g. UV divergences in QFT), but then the job of scientists is to further develop the theory to remove the inconsistencies.

vanhees71 said:
And it's also not clear to me what the collapse explains, because it doesn't provide an explanation either for why the specific particle hits the specific spot on the photoplate. So what exactly does it explain?
That's a much better question. The role of collapse is not so much to explain something as to offer a possible ontology behind the measured phenomena. In the collapse picture, the wave function is not merely a probability, but an actual physical thing that exists at the level of a single object. Suppose I ask you what an electron looks like before I measure it. Before I measure it, is it a wave or a particle? Does it have any shape at all before I measure it? With standard minimal QM you cannot answer such questions. With a collapse picture you can. You may say those are philosophical questions, but sometimes thinking about philosophical questions may eventually lead to new measurable predictions. For example, the GRW theory of collapse leads to new predictions which seem to be ruled out by experiments. There are also other collapse theories which are not (yet) ruled out.
 
Last edited:
  • #39
rubi said:
Would you agree that collapse can be described by a Lindblad equation? If so, then one can take one's algebra of observables and define an algebraic state on it by ##\omega(A)=\mathrm{Tr}(\rho A)##, and the trace-preserving time evolution given by the Lindblad equation defines a stable *-automorphism ##\alpha_t## on the algebra of observables. One can then compute the GNS Hilbert space ##\mathcal{H}_\omega## for ##\omega##, and there is a theorem that lets us represent ##\alpha_t## by unitary operators. So one can represent the collapse by a unitary evolution by sacrificing the irreducibility of the representation.

Is the final equation on http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form what you mean by the Lindblad equation? Does the Lindblad equation only include trace-preserving maps? I think collapse is not trace preserving.
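As a numerical aside, trace preservation is easy to check for a concrete Lindblad evolution. The sketch below is my own choice of a pure-dephasing model with ##H=0## and a single jump operator ##L=\sigma_z## (not necessarily the example on the linked page), integrated with a crude Euler step:

```python
import numpy as np

# Pure dephasing: drho/dt = gamma*(sz rho sz - rho), since sz† sz = I
sz = np.diag([1.0, -1.0])
gamma, dt, steps = 0.5, 0.001, 5000

psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).astype(complex)

for _ in range(steps):
    rho = rho + dt * gamma * (sz @ rho @ sz - rho)

trace = np.trace(rho).real    # stays 1: this Lindblad map preserves the trace
coherence = abs(rho[0, 1])    # decays like exp(-2*gamma*t)
print(trace, coherence)
```

The diagonal entries are exactly stationary here, so the trace stays 1 while the coherences decay; whether "collapse" fits this trace-preserving framework is precisely the question raised above.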
 
  • #40
atyy said:
I think collapse is not trace preserving.
It is, because the collapse is not only picking one term in the superposition, but also includes the appropriate change of the normalization of that term.
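In formulas, this is the Lüders rule ##\rho \to P_k\rho P_k/\mathrm{Tr}(P_k\rho)##; the division by the Born probability is the renormalization that restores unit trace. A minimal numpy sketch (amplitudes chosen arbitrarily for illustration):

```python
import numpy as np

# Superposition with Born probabilities 0.3 and 0.7
psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])
rho = np.outer(psi, psi)

P0 = np.diag([1.0, 0.0])        # projector onto the first outcome
p0 = np.trace(P0 @ rho).real    # Born probability of that outcome: 0.3

rho_collapsed = P0 @ rho @ P0 / p0   # pick one term AND renormalize
print(np.trace(rho_collapsed).real)  # unit trace restored
```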
 
  • #41
Demystifier said:
It is, because the collapse is not only picking one term in the superposition, but also includes the appropriate change of the normalization of that term.

Yes, one can define it that way too, but in that case, it is also true that the evolution of the larger system cannot be unitary and deterministic.

Anyway, for the definition of collapse as trace non-preserving, I was thinking of the language used in Nielsen and Chuang around their Eq 8.28. The relationship between definitions in which collapse is defined as trace non-preserving or trace preserving is given in their exercise 8.8, in which one introduces an additional operator.

So if all the measurement operators sum to 1, which is how it's discussed in http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form, I think it is the case that collapse is not trace-preserving.
 
Last edited:
  • #42
atyy said:
Yes, one can define it that way too, but in that case, it is also true that the evolution of the larger system cannot be unitary and deterministic.

Anyway, for the definition of collapse as trace non-preserving, I was thinking of the language used in Nielsen and Chuang around their Eq 8.28. The relationship between definitions in which collapse is defined as trace non-preserving or trace preserving is given in their exercise 8.8, in which one introduces an additional operator.

So if all the measurement operators sum to 1, which is how it's discussed in http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form, I think it is the case that collapse is not trace-preserving.
Ah, you are talking about the Lindblad equation in a narrower context, in which it is derived from a unitary evolution in a larger Hilbert space. I was talking about the Lindblad equation in a wider context, in which it need not be derived from a unitary evolution in a larger space. In such a wider context there is no Eq. (8.28).
 
  • #43
So maybe rubi and you are talking about different Lindblad equations in posts #32 and #35?
 
  • #44
Maybe. But then the answer to his question would be a "trivial" no, so that's why I assumed that he had a wider point of view.
 
  • #45
Demystifier said:
Maybe. But then the answer to his question would be a "trivial" no, so that's why I assumed that he had a wider point of view.

So perhaps the criticism then is that even from the wider point of view, no matter what tricks one uses to get the whole system to evolve deterministically and unitarily, the Lindblad equation does not include collapse because

(1) collapse is probabilistic time evolution

(2) rubi cannot calculate from the Lindblad equation the joint probability of being at position A at ##t_A## and at position B at ##t_B##, which is what collapse allows one to do.
 
Last edited:
  • Like
Likes Demystifier
  • #46
Demystifier said:
Nothing, that's enough for measurement. But measurement is not an explanation.
This is like asking, for instance, why would you introduce neutrino masses to explain observed neutrino oscillations if the oscillations are not explainable by the standard model of massless neutrinos? New ideas in physics are introduced precisely because some phenomena are not explainable within old theories.
There's a very big difference between the introduction of neutrino masses into the Standard Model and the assumption of a collapse in QT. Neutrino oscillations are an observed fact, and you have to introduce neutrino masses into the standard model (which is possible, btw., without destroying the (perturbative) consistency of this model). On the contrary, there's no observed fact which would force me to introduce a collapse and, in addition, the introduction of this idea is very problematic (EPR!). So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!

New theories often look inconsistent at first (e.g. UV divergences in QFT), but then the job of scientists is to further develop the theory to remove the inconsistencies.

QFT is a pretty successful model (although not strictly consistent one must admit) which can be defined in an approximate sense (perturbative QFT with renormalization, which even has a physical interpretation thanks to Kadanoff and Wilson).

That's a much better question. The role of collapse is not so much to explain something, but to offer a possible ontology behind the measured phenomena. In the collapse picture, the wave function is not merely a probability, but an actual physical thing that exists at the level of a single object. Suppose I ask you how does an electron look like before I measure it? Before I measure it, is it a wave or a particle? Does it have any shape at all before I measure it? With standard minimal QM you cannot answer such questions. With a collapse picture you can. You may say those are philosophical questions, but sometimes thinking about philosophical questions may eventually lead to new measurable predictions. For example, the GRW theory of collapse leads to new predictions which seem to be ruled out by experiments. There are also other collapse theories which are not (yet) ruled out.
Since when do you need an "ontology" in physics? Physics is about the description of objective observable facts about nature, not about providing an ontology (although famous people like Einstein opposed this view vehemently). E.g., it doesn't make sense to ask within QT whether a "particle" (better say "quantum" here) has a "shape" at all. You can only describe the probability of the outcome of concrete measurements (observables), which are defined as (an equivalence class of) measurement procedures.

Yesterday, someone quoted the book

F. Strocci, An introduction to the mathematical structure of Quantum Mechanics, World Scientific (2005)

This is one of the best expositions of QT I've seen in years, although everything is unfortunately hidden behind quite formal mathematics. But that's the essence of QT, without any superfluous additions which cause only trouble!
 
  • #47
vanhees71 said:
On the contrary, there's no observed fact which would force me to introduce a collapse and, in addition, the introduction of this idea is very problematic (EPR!). So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!

vanhees71 said:
Since when do you need an "ontology" in physics? Physics is about the description of objective observable facts about nature and not to provide an ontology (although famous people like Einstein opposed this view vehemently).

The collapse, or an equivalent assumption, is necessary in quantum mechanics, and its predictions have been verified experimentally.

However, one should be clear that the standard collapse is not intended to provide an ontology, unlike the GRW collapse. Throughout most of this thread, including the OP, the collapse is the standard collapse, not the GRW collapse.

Citing EPR as an objection against collapse shows that you believe ontology is important in physics: it means that you believe that in special relativity the cause of an event should be in its past light cone.
 
Last edited:
  • #48
atyy said:
The collapse, or an equivalent assumption, is necessary in quantum mechanics, and its predictions have been verified experimentally.
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse!

However, one should be clear that the standard collapse is not intended to provide an ontology, unlike the GRW collapse. Throughout most of this thread, including the OP, the collapse is the standard collapse, not the GRW collapse.

As far as I can tell, most of your objections to the standard collapse are because you believe there should be an ontology in physics (the cause of an event should be in its past light cone), and you believe that standard collapse causes trouble for your ontology, which is why you reject it. Citing EPR as a reason not to believe in collapse means that you believe that ontology is important in physics.
It causes trouble not for any whatever-logy but for the overwhelming evidence for the correctness of relativistic space-time for all (at least all local) observations made so far. Either you believe in the existence of a collapse or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both: one doesn't need a collapse to derive all its observable predictions, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, where a hint would show where to look for direct evidence of what it is made of).
 
  • #49
vanhees71 said:
Oops. Can you translate this into physics for a poor theoretical physicist?
The idea of the algebraic framework is to extract the relevant part of QM (observable facts) and get rid of the mathematical parts that have no relevance (like the choice of a Hilbert space). In QM, we are interested in the behaviour of certain sets of observables (position, momentum, ...) and these observables form an algebra (they can be multiplied, for example). A state of a system tells us all physical information that can be extracted in principle (like expectation values, probabilities, ...). In QM, we usually have a Hilbert space with operators and a state is determined by a vector ##\Psi##. Expectation values are given by ##\left<A\right>=\left<\Psi,A\Psi\right>##. A state could also be given by a density matrix ##\rho## and the expectation values would be ##\left<A\right>=\mathrm{Tr}(\rho A)##. So the expectation value functional takes an observable and spits out a number (the expectation value). Now there is a mathematical theorem (GNS) that says that when we have a certain algebra (of observables) and know all the expectation values of these observables, then we can reconstruct a Hilbert space ##\mathcal H##, a representation ##\pi## of the algebra and a vector ##\Omega##, such that the expectation values are given by ##\left<A\right> = \left<\Omega,\pi(A)\Omega\right>##. (The expectation value functional is usually denoted by ##\omega(A)## rather than ##\left<A\right>##.) But that also means that even if we have an algebra of observables and a state given by a density matrix, we can construct a new Hilbert space such that the state that was formerly given by a density matrix is now a plain old vector state (##\Omega##): We just use our old algebra as the algebra and the "algebraic state" ##\omega(A)=\mathrm{Tr}(\rho A)## as the expectation value functional and apply the theorem. (It constructs the new Hilbert space and the new representation of the algebra explicitly.)

Now what does that look like concretely? Let's say we have an algebra of observables ##\mathfrak A## on a concrete Hilbert space ##\mathcal H## and a density matrix ##\rho## on ##\mathcal H##. The density matrix can always be written as ##\rho=\sum_n \rho_n b_n \left<b_n,\cdot\right>##, where ##(b_n)_n## is an ONB for ##\mathcal H##. We can now define a new Hilbert space ##\mathcal H' = \bigoplus_n\mathcal H##, a representation ##\pi(A) (\bigoplus_n v_n) = \bigoplus_n A v_n## and a vector ##\Omega_\rho = \bigoplus_n \sqrt{\rho_n} b_n##. We can verify that we get the same expectation value as before: ##\mathrm{Tr}(\rho A) = \left<\Omega_\rho,\pi(A)\Omega_\rho\right>##. Every density matrix on ##\mathcal H## can be represented this way by a normalized vector ##\Omega_\rho## in ##\mathcal H'## and since they are normalized, they are related by unitary transformations. So if one has two density matrices ##\rho(t_1)## and ##\rho(t_2)## in ##\mathcal H##, there is a unitary operator ##U(t_2,t_1)## in ##\mathcal H'## such that ##\Omega_{\rho(t_2)}=U(t_2,t_1)\Omega_{\rho(t_1)}##.

Edit: I should probably add what the inner product on ##\mathcal H'## is: ##\left<\bigoplus_n v_n, \bigoplus_n w_n\right>_{\mathcal H'} = \sum_n\left<v_n,w_n\right>_{\mathcal H}##
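This construction can be checked numerically. In the sketch below (my own illustration, with a random ##3\times 3## density matrix), the direct sum ##\bigoplus_n \mathcal H## is stacked into one long vector, so ##\pi(A)## becomes the block-diagonal matrix ##I\otimes A##:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# A random density matrix on H, diagonalized: rho = sum_n rho_n |b_n><b_n|
M = rng.standard_normal((d, d))
rho = M @ M.T
rho /= np.trace(rho)
rho_n, B = np.linalg.eigh(rho)      # eigenvalues rho_n, ONB columns b_n

# Omega_rho = (+)_n sqrt(rho_n) b_n, stacked as one vector in H' = (+)_n H
Omega = np.concatenate([np.sqrt(rho_n[n]) * B[:, n] for n in range(d)])

# pi(A)((+)_n v_n) = (+)_n A v_n, i.e. the block-diagonal matrix I (x) A
A = rng.standard_normal((d, d))
A = A + A.T                          # a self-adjoint observable on H
piA = np.kron(np.eye(d), A)

# The vector state on H' reproduces the mixed-state expectation on H
print(np.trace(rho @ A))             # Tr(rho A)
print(Omega @ piA @ Omega)           # <Omega_rho, pi(A) Omega_rho>: same number
print(Omega @ Omega)                 # Omega_rho is normalized
```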
 
Last edited:
  • #50
vanhees71 said:
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse!

It causes trouble not for any whatever-logy but for the overwhelming evidence for the correctness of relativistic space-time for all (at least all local) observations made so far. Either you believe in the existence of a collapse or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both: one doesn't need a collapse to derive all its observable predictions, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, where a hint would show where to look for direct evidence of what it is made of).
But I don't understand why you associate collapse (or call it the non-unitary measurement evolution postulate, since you don't like the word collapse; this is how it is called in my QM notes, which never mention the word collapse) with a breaking of QFT microcausality. In the sense it is used here, which doesn't take the wavefunction as something real (as in the ensemble interpretation you subscribe to), there is no FTL or anything like that implied.
 
Last edited:
  • #51
vanhees71 said:
You keep repeating this every time, but I've not seen a single example for such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of this, I need a very convincing experimental evidence for a collapse!

The Bell tests need collapse when Alice's and Bob's observations are timelike separated, or when they are calculated in a frame in which their observations are not simultaneous.

vanhees71 said:
It causes trouble not for any whatever-logy but for the overwhelming evidence for the correctness of relativistic space-time for all (at least all local) observations made so far. Either you believe in the existence of a collapse or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both: one doesn't need a collapse to derive all its observable predictions, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, where a hint would show where to look for direct evidence of what it is made of).

Collapse causes no trouble for relativistic spacetime, unless one believes in a special relativistic ontology. Einstein causality and locality are beliefs about ontology. Special relativity does not require Einstein causality and locality - that is one of the great lessons of collapse.
 
  • Like
Likes TrickyDicky
  • #52
rubi said:
The idea of the algebraic framework is to extract the relevant part of QM (observable facts) and get rid of the mathematical parts that have no relevance (like the choice of a Hilbert space). In QM, we are interested in the behaviour of certain sets of observables (position, momentum, ...) and these observables form an algebra (they can be multiplied, for example). A state of a system tells us all physical information that can be extracted in principle (like expectation values, probabilities, ...). In QM, we usually have a Hilbert space with operators and a state is determined by a vector ##\Psi##. Expectation values are given by ##\left<A\right>=\left<\Psi,A\Psi\right>##. A state could also be given by a density matrix ##\rho## and the expectation values would be ##\left<A\right>=\mathrm{Tr}(\rho A)##. So the expectation value functional takes an observable and spits out a number (the expectation value). Now there is a mathematical theorem (GNS) that says that when we have a certain algebra (of observables) and know all the expectation values of these observables, then we can reconstruct a Hilbert space ##\mathcal H##, a representation ##\pi## of the algebra and a vector ##\Omega##, such that the expectation values are given by ##\left<A\right> = \left<\Omega,\pi(A)\Omega\right>##. (The expectation value functional is usually denoted by ##\omega(A)## rather than ##\left<A\right>##.) But that also means that even if we have an algebra of observables and a state given by a density matrix, we can construct a new Hilbert space such that the state that was formerly given by a density matrix is now a plain old vector state (##\Omega##): We just use our old algebra as the algebra and the "algebraic state" ##\omega(A)=\mathrm{Tr}(\rho A)## as the expectation value functional and apply the theorem. (It constructs the new Hilbert space and the new representation of the algebra explicitly.)

Now what does that look like concretely? Let's say we have an algebra of observables ##\mathfrak A## on a concrete Hilbert space ##\mathcal H## and a density matrix ##\rho## on ##\mathcal H##. The density matrix can always be written as ##\rho=\sum_n \rho_n b_n \left<b_n,\cdot\right>##, where ##(b_n)_n## is an ONB for ##\mathcal H##. We can now define a new Hilbert space ##\mathcal H' = \bigoplus_n\mathcal H##, a representation ##\pi(A) (\bigoplus_n v_n) = \bigoplus_n A v_n## and a vector ##\Omega_\rho = \bigoplus_n \sqrt{\rho_n} b_n##. We can verify that we get the same expectation value as before: ##\mathrm{Tr}(\rho A) = \left<\Omega_\rho,\pi(A)\Omega_\rho\right>##. Every density matrix on ##\mathcal H## can be represented this way by a normalized vector ##\Omega_\rho## in ##\mathcal H'## and since they are normalized, they are related by unitary transformations. So if one has two density matrices ##\rho(t_1)## and ##\rho(t_2)## in ##\mathcal H##, there is a unitary operator ##U(t_2,t_1)## in ##\mathcal H'## such that ##\Omega_{\rho(t_2)}=U(t_2,t_1)\Omega_{\rho(t_1)}##.

Edit: I should probably add what the inner product on ##\mathcal H'## is: ##\left<\bigoplus_n v_n, \bigoplus_n w_n\right>_{\mathcal H'} = \sum_n\left<v_n,w_n\right>_{\mathcal H}##

As far as I can tell, this corresponds to
(1) A proper mixture and an improper mixture (reduced density matrix) are indistinguishable if one only looks at local observables
(2) Every mixture can be interpreted as a reduced density matrix, and purified.

The physical picture behind this is decoherence. However, decoherence does not derive the collapse. No matter what mathematical tricks one plays, deterministic unitary evolution and the Born rule are insufficient, because
(1) There are two types of time evolution: deterministic and probabilistic
(2) The Born rule does not give the joint probabilities for observations carried out at different times

In addition to standard physics texts, rigorous texts like Holevo's deal extensively with collapse, and it is mentioned in the rigorous text by Dimock.
Holevo https://www.amazon.com/dp/3540420827/?tag=pfamazon01-20
Dimock https://www.amazon.com/dp/1107005094/?tag=pfamazon01-20
 
Last edited by a moderator:
  • #53
atyy said:
As far as I can tell, this corresponds to
(1) A proper mixture and an improper mixture (reduced density matrix) are indistinguishable if one only looks at local observables
(2) Every mixture can be interpreted as a reduced density matrix, and purified.
No, I'm not restricting the set of observables anywhere. Every observable ##A## on ##\mathcal H## corresponds to an observable ##\pi(A)## on ##\mathcal H'## and every pure and mixed state (proper or improper) on ##\mathcal H## corresponds to a vector state ##\Omega## in ##\mathcal H'##. (##\sum_n \rho_n b_n \left<b_n,\cdot\right>## is sent to ##\bigoplus_n \sqrt{\rho_n} b_n##.)

The physical picture behind this is decoherence. However, decoherence does not derive the collapse. No matter what mathematical tricks one plays, deterministic unitary evolution and the Born rule are insufficient, because
(1) There are two types of time evolution: deterministic and probabilistic
It's independent of decoherence. Both types of time evolution can be described by a general Lindblad equation in ##\mathcal H## and this induces a family of time evolution operators ##U(t_2,t_1)## on ##\mathcal H'## as described above. In fact, one doesn't even need a Lindblad equation. It's enough to know the density matrices in ##\mathcal H## at all times to get the family.

(2) The Born rule does not give the joint probabilities for observations carried out at different times
I'm sure one can write down joint probabilities also in ##\mathcal H'##, since ##\mathcal H'## contains exactly the same information as the set of density matrices on ##\mathcal H##. It's only encoded differently (as an infinite direct sum instead of a matrix). It just needs a little bit of extra work to get the correct formulas. Very roughly speaking, I just put the (square roots) of the entries of a density matrix in a list, rather than in a matrix, so if one can use that information to calculate joint probabilities on ##\mathcal H##, one should also be able to do that on ##\mathcal H'##.

In addition to standard physics texts, rigorous texts like Holevo's deal extensively with collapse, and it is mentioned in the rigorous text by Dimock.
Holevo https://www.amazon.com/dp/3540420827/?tag=pfamazon01-20
Dimock https://www.amazon.com/dp/1107005094/?tag=pfamazon01-20
Thanks, I will have a look at them.
 
Last edited by a moderator:
  • #54
rubi said:
No, I'm not restricting the set of observables anywhere. Every observable ##A## on ##\mathcal H## corresponds to an observable ##\pi(A)## on ##\mathcal H'## and every pure and mixed state (proper or improper) on ##\mathcal H## corresponds to a vector state ##\Omega## in ##\mathcal H'##. (##\sum_n \rho_n b_n \left<b_n,\cdot\right>## is sent to ##\bigoplus_n \sqrt{\rho_n} b_n##.)

Yes, but is it also the case that every observable in ##\mathcal H'## corresponds uniquely to an observable in ##\mathcal H##? If it doesn't, then that would correspond to what physicists call a local observable, since the space ##\mathcal H## is "smaller" or "local" compared to ##\mathcal H'## which is "larger".

Is the theorem you are thinking about what is called the GNS construction on this page about the church of the larger Hilbert space: http://www.quantiki.org/wiki/The_Church_of_the_larger_Hilbert_space? If it is, then I do think it is equivalent to purifications, and the two "churches" of quantum theory. The other denomination is of course the church of the smaller Hilbert space: http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf.

rubi said:
It's independent of decoherence. Both types of time evolution can be described by a general Lindblad equation in ##\mathcal H## and this induces a family of time evolution operators ##U(t_2,t_1)## on ##\mathcal H'## as described above. In fact, one doesn't even need a Lindblad equation. It's enough to know the density matrices in ##\mathcal H## at all times to get the family.

rubi said:
I'm sure one can write down joint probabilities also in ##\mathcal H'##, since ##\mathcal H'## contains exactly the same information as the set of density matrices on ##\mathcal H##. It's only encoded differently (as an infinite direct sum instead of a matrix). It just needs a little bit of extra work to get the correct formulas. Very roughly speaking, I just put the (square roots) of the entries of a density matrix in a list, rather than in a matrix, so if one can use that information to calculate joint probabilities on ##\mathcal H##, one should also be able to do that on ##\mathcal H'##.

What I mean is that even if I know the density matrices at all times and the Born rule, there is still experimental data that I cannot predict without collapse or some other postulate. For example, if I know the state is ##f(x)## at ##t_1## and ##g(x)## at ##t_2##, I can calculate the probability of the system being at ##x=y## at ##t_1##, and the probability of it being at ##x=z## at ##t_2##. However, I cannot calculate the probability of being at ##z## at ##t_2## given that it was at ##y## at ##t_1##, i.e. I cannot calculate ##p(y,z)## or ##p(z|y)##.
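A two-state toy model (my own, with an arbitrary rotation playing the role of ##U(t_2,t_1)##) makes this concrete: collapsing at ##t_1## and then evolving yields a normalized joint distribution whose ##t_2## marginal generally differs from the no-collapse Born distribution, so the two-time statistics really are extra information:

```python
import numpy as np

psi_t1 = np.array([0.6, 0.8])                 # state at t1 in the basis {|y>}
th = 0.4                                      # arbitrary evolution angle
U = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])     # unitary evolution t1 -> t2

# p(y at t1, z at t2) = |<y|psi(t1)>|^2 * |<z|U|y>|^2   (collapse at t1)
p_joint = np.zeros((2, 2))
for y in range(2):
    p_y = psi_t1[y] ** 2                      # Born rule at t1
    collapsed = np.zeros(2)
    collapsed[y] = 1.0                        # collapse onto |y>
    p_joint[y] = p_y * np.abs(U @ collapsed) ** 2   # Born rule at t2

print(p_joint.sum())              # a properly normalized joint distribution
print(p_joint.sum(axis=0))        # t2 marginal WITH an intermediate collapse
print(np.abs(U @ psi_t1) ** 2)    # t2 distribution WITHOUT collapse: differs
```

The difference between the last two lines is the interference that an intermediate collapse destroys; the joint distribution itself simply cannot be written down from the two single-time states alone.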

rubi said:
Thanks, I will have a look at them.

In Holevo's book, it is in the chapter on repeated and continuous measurements, and the concept that is called "collapse" is dealt with by the concept of an instrument. In Dimock's book it is just mentioned, and there is not extensive discussion about it.
 
  • #55
atyy said:
The bell tests need collapse, when the Alice's and Bob's observations are timelike separated or they are calculated in a frame in which the their observations are not simultaneous.
This I don't understand. In Bell tests you start with an entangled pair of photons (biphotons), created by some local process, e.g. parametric down-conversion in a crystal. They are mostly emitted back to back, and you simply have to wait long enough to be able to detect the photons at large distances (making sure that nothing disturbs them, which would decohere the state). The single-photon polarizations are maximally random (unpolarized photons), but the 100% correlation between polarizations measured in the same direction is inherent in this state. So it's a property of the biphotons, and the correlations are thus "caused" by their production in this state and not by the measurement of one of the single-photon polarization states. It doesn't matter whether the registration events by A and B are time-like or space-like separated. You'll always measure the correlation due to the entanglement, provided there was no disturbance in between to destroy the entanglement by interactions. This shows that no collapse is necessary in the analysis of these experiments (the same holds, of course, when you use non-aligned setups of the two polarizers at A's and B's places, as necessary to demonstrate the violation of Bell's inequality or variations of it).
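For what it's worth, the correlation function can indeed be computed from the entangled state and the Born rule alone. The sketch below (my own, for the state ##(|HH\rangle+|VV\rangle)/\sqrt 2##) reproduces ##E(a,b)=\cos 2(a-b)##, hence perfect correlation at equal angles, with no collapse invoked:

```python
import numpy as np

def correlation(a, b):
    """E(a,b) for the biphoton (|HH>+|VV>)/sqrt(2), from the Born rule alone."""
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)          # basis order: HH, HV, VH, VV
    E = 0.0
    for sa, ang_a in [(+1, a), (-1, a + np.pi / 2)]:   # pass/block at A
        for sb, ang_b in [(+1, b), (-1, b + np.pi / 2)]:
            va = np.array([np.cos(ang_a), np.sin(ang_a)])  # analyzer at A
            vb = np.array([np.cos(ang_b), np.sin(ang_b)])  # analyzer at B
            amp = np.kron(va, vb) @ phi       # Born amplitude for (sa, sb)
            E += sa * sb * amp ** 2
    return E

print(correlation(0.0, 0.0))          # perfect correlation at equal angles
print(correlation(0.0, np.pi / 8))    # cos(pi/4), violating-Bell territory
```

Of course, whether the *joint* statistics across timelike-separated measurements additionally require a collapse rule is exactly what the two sides of this thread dispute.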

Collapse causes no trouble for relativistic spacetime, unless one believes in a special-relativistic ontology. Einstein causality and locality are beliefs about ontology. Special relativity does not require Einstein causality and locality; that is one of the great lessons of collapse.
I don't know what you precisely mean by "special-relativistic ontology". The space-time structure described by Minkowski space-time is only consistent with the principle of causality if there cannot be causal influences over space-like distances, and the collapse assumption introduces precisely such a thing, because it states that the biphoton state instantaneously collapses to a two-photon state, whereby letting one of the entangled photons go through a polarization filter at A's place causes B's photon to have the corresponding complementary polarization state. This happens instantaneously in the usual collapse assumptions and clearly violates Einstein causality, and thus directly contradicts the very foundations of QED, which is a very well tested theory. So it's much simpler and more natural not to make this unnecessary assumption but to take for granted what Born's rule tells you about the correlations of the entangled biphoton state, as detailed above. As I said, I don't see where you need a collapse to describe a (theoretically) pretty simple experiment via quantum theory.
 
  • #56
atyy said:
Yes, but is it also the case that every observable in ##\mathcal H'## corresponds uniquely to an observable in ##\mathcal H##? If it doesn't, then that would correspond to what physicists call a local observable, since the space ##\mathcal H## is "smaller" or "local" compared to ##\mathcal H'## which is "larger".
The space ##\mathcal H'## certainly contains more states than ##\mathcal H##, but is that a bad thing? (Even ##\mathcal H## usually contains pathological states that are never physically realized. Think of ##\sum_n \chi_{[n,n+2^{-n}]}\in L^2(\mathbb R)##, which is normalized and doesn't vanish at ##\infty## or some fancy nowhere continuous function or whatever crazy things mathematicians can come up with.) Every physical situation that can be described in ##\mathcal H## can also be described in ##\mathcal H'## and the results are equivalent. Thus it doesn't matter whether I choose to describe my physics in ##\mathcal H## or ##\mathcal H'##. Of course, one would usually choose ##\mathcal H##, since the description is easier there, but this choice is not physically relevant. The point of the example was not to promote the use of ##\mathcal H'##, but rather to explain that whether time evolution is unitary or not is only a matter of how one organizes the available information and not a physical principle. Physics only cares about probability conservation. Whether that happens unitarily or not depends on the way we choose to encode our states. In other words: We cannot "detect" the Hilbert space. It's analogous to the situation in general relativity, where the choice of coordinate system is not relevant either.

Is the theorem you are thinking about what is called the GNS construction on this page about the church of the larger Hilbert space: http://www.quantiki.org/wiki/The_Church_of_the_larger_Hilbert_space? If it is, then I do think it is equivalent to purifications, and the two "churches" of quantum theory. The other denomination is of course the church of the smaller Hilbert space: http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf.
Yes, I was talking about the GNS construction, but my example was not strictly a GNS construction, rather equivalent to a GNS construction in many situations. The theorem on that page seems to use tensor products instead of direct sums though, so I don't think it's the same thing, and it's not clear whether that also works in infinite dimensions. Contrary to what is said on the page, the GNS construction is a much more general result, and it only yields Hilbert spaces that are equivalent to tensor products in special situations. Stinespring's theorem seems to be something completely different in general. In the ##\mathcal H'## I gave, one cannot reconstruct ##\rho## by taking a partial trace, for example.

What I mean is that if I know the density matrices at all times and the Born rule, there is still experimental data that I cannot predict without collapse or some other postulate. For example, if I know the state is f(x) at t1 and g(x) at t2, I can calculate the probability of the particle being at x=y at t1, and the probability of it being at x=z at t2. However, I cannot calculate the probability of being at z at t2 given that it was at y at t1, i.e., I cannot calculate p(y,z) or p(z|y).
Well, if you can do it on ##\mathcal H##, then you can also do it on ##\mathcal H'##. No information gets lost in that transition.

In Holevo's book, it is in the chapter on repeated and continuous measurements, and the concept that is called "collapse" is dealt with by the concept of an instrument. In Dimock's book it is just mentioned, and there is no extensive discussion of it.
Thanks. I can't get hold of that book before monday, though. :H
 
  • #57
atyy said:
Yes, but is it also the case that every observable in ##\mathcal H'## corresponds uniquely to an observable in ##\mathcal H##? If it doesn't, then that would correspond to what physicists call a local observable, since the space ##\mathcal H## is "smaller" or "local" compared to ##\mathcal H'## which is "larger".
I also want to add the following to my previous post: For every state ##\Psi## in ##\mathcal H'## and every ##\epsilon>0##, there is a density matrix ##\rho## in ##\mathcal H## such that for all observables ##A##, the following holds: ##|\mathrm{Tr}_{\mathcal H}(\rho A) - \left<\Psi,A\Psi\right>_{\mathcal H'}|<\epsilon##. That means one can find a density matrix in ##\mathcal H## that has expectation values that are arbitrarily close to the expectation values in ##\mathcal H'##, so ##\mathcal H## and ##\mathcal H'## can't be distinguished physically, even if one uses a state in ##\mathcal H'## that doesn't directly come from a state in ##\mathcal H##. This is the content of Fell's theorem.

Edit: Oops. I just realized that you were asking about observables, not states. Well, observables are not something that one gets out of the mathematics, but rather something one has to put in. Just because I use a different Hilbert space, it doesn't mean that I can magically build more measurement devices than I could before. So the right order is to specify what I can possibly measure (for example position, momentum, variances, correlations, ...) and then build a theory that describes these measurements. I can do that in both ##\mathcal H## and ##\mathcal H'##, but both Hilbert spaces also contain a huge amount of excess operators that don't correspond to physically realizable measurement apparatuses. And both spaces contain equally many of them (in the sense of the cardinality of the set of such operators).

P.S. I'm now reading the paper you quoted.
 
  • #58
rubi said:
Thanks. I can't get hold of that book before monday, though. :H

I'll read your other comments too, will reply later. But before Monday, one can also try

http://arxiv.org/abs/0706.3526 (the "collapse" is defined by postulating an instrument as in Eq 3)
http://arxiv.org/abs/0810.3536 (the "collapse" is again defined by postulating an instrument in section 6.2, Eq 6.7 to Eq 6.12)

What is interesting about the presentation by Heinosaari and Ziman is that the argument leading up to Eq 6.7 almost seems to derive from the Schroedinger equation and the Born rule. However, I think it requires the assumption that if B is measured a little later than A, the same result is obtained as if B and A were measured at the same time. This seems to be some sort of continuity argument, which is how the projection postulate was argued for by Dirac.
 
  • #59
vanhees71 said:
Where do you need a collapse here? I just measure, e.g., a spin component (to have the simple case of a discrete observable) and take notice of the result. If you have a filter measurement (the usually discussed Stern-Gerlach apparatus is one), I filter out all partial beams I don't want and am left with a polarized beam in the spin state I want. That's all. I don't need a collapse. The absorption of the unwanted partial beams is due to local interactions of the particles with the absorber. There's no collapse!

Well, in an EPR setup, with experimenters Alice and Bob, Alice performs a measurement on one particle. She gets a result. You can explain that in terms of filters. But, immediately after the measurement, she can compute the probabilities for Bob's result. Now, that can't be due to filtering. She isn't filtering Bob's particles.
 
  • #60
No, it's due to the fact that A knows that the photons are entangled and thus what B must measure on his photon. Again: the result for Bob's photon is not caused by Alice's measurement. The preparation happened earlier, when the biphoton was created by some process (parametric down conversion).
 
  • #61
vanhees71 said:
This I don't understand. In Bell tests you start with an entangled pair of photons (biphotons), created by some local process, e.g., parametric down conversion in a crystal. They are mostly emitted back to back, and you simply have to wait long enough to be able to detect the photons at large distances (making sure that nothing disturbs them, to prevent decoherence of the state). The single-photon polarizations are maximally random (unpolarized photons), but the 100% correlation between polarizations measured in the same direction is inherent in this state. So it's a property of the biphotons, and the correlations are thus "caused" by their production in this state and not by the measurement of one of the single-photon polarization states. It doesn't matter whether the registration events by A and B are time-like or space-like separated. You'll always measure the correlation due to the entanglement, provided there was no disturbance in between to destroy the entanglement by interactions with these disturbances. This shows that there's no collapse necessary in the analysis of these experiments (the same holds, of course, when you use non-aligned setups of the two polarizers at A's and B's places, as necessary to demonstrate the violation of Bell's inequality or variations of it).

Well, immediately before the first measurement, the most complete description of the state of the photons possible is:

Description 1:
  • Statement 1.A: The probability of the first photon passing a filter at angle [itex]A[/itex] is 50%
  • Statement 1.B: The probability of the second photon passing a filter at angle [itex]B[/itex] is 50%
  • Statement 1.C: The probability of both events happening is [itex]0.5\cos^2(A-B)[/itex]
Now you perform the first measurement, and the result is that the first photon does pass the filter oriented at angle [itex]A[/itex]. Then immediately after the first measurement, but before the second measurement, the most complete description of the photons possible is:

Description 2:
  1. Statement 2.A: The first photon is polarized at angle [itex]A[/itex]
  2. Statement 2.B: The probability of the second photon passing a filter at angle [itex]B[/itex] is [itex]\cos^2(A-B)[/itex]
The "collapse" is simply a name for the transition from Description 1 to Description 2. So it's definitely there. The only issue is, what is the nature of this transition? Is it simply a change of knowledge in the mind of the experimenters? Or, is there some objective facts about the world that change?

Before either measurement is made, is Description 1 an objective fact about the world, or is it simply a statement about our knowledge? Same question for Description 2.

If you say that Description 1 and Description 2 are objective facts about the world, then it seems to me that collapse is a physical process.
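Whatever one's view of the nature of the transition, the numbers in Descriptions 1 and 2 can be checked directly from the Born rule applied to the polarization-entangled state ##(|HH\rangle+|VV\rangle)/\sqrt 2##. Here is a small numerical sketch (the filter angles are arbitrary values chosen for illustration):

```python
import numpy as np

def pass_vec(angle):
    """Polarization state that passes a linear filter set at `angle`."""
    return np.array([np.cos(angle), np.sin(angle)])

A, B = 0.3, 1.1  # arbitrary illustrative filter angles (radians)

# Entangled state (|HH> + |VV>)/sqrt(2) as a vector in C^2 (x) C^2.
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

PA = np.outer(pass_vec(A), pass_vec(A))   # projector: photon 1 passes at angle A
PB = np.outer(pass_vec(B), pass_vec(B))   # projector: photon 2 passes at angle B
I2 = np.eye(2)

p_A  = psi @ np.kron(PA, I2) @ psi        # Statement 1.A: should be 1/2
p_B  = psi @ np.kron(I2, PB) @ psi        # Statement 1.B: should be 1/2
p_AB = psi @ np.kron(PA, PB) @ psi        # Statement 1.C: 0.5 * cos^2(A - B)

p_B_given_A = p_AB / p_A                  # Statement 2.B: cos^2(A - B)
```

Running this reproduces all four statements: the single-photon pass probabilities are 1/2, the joint probability is 0.5 cos²(A−B), and the conditional probability given the first photon passed is cos²(A−B).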
 
  • #62
Descriptions 1 and 2 are different experiments! I thus relabel "Description" as "Experiment". No wonder that you get different results. Of course, the state of a system depends on the preparation procedure, which is (in my understanding of quantum theory) a tautology, because the state is defined as an equivalence class of preparation procedures (see also this nice book by Strocchi, where this is formalized using the ##C^*## approach to observables).

Experiment 1 considers all biphotons, and Experiment 2 selects only those biphotons where A finds her photon to be polarized at angle A. So the probabilities (relative frequencies!) refer to different ensembles.

Note that Experiment 2 can be achieved even post factum: if you make precise enough timing measurements at both A and B, you have the information about which B photon belongs to which A photon, i.e., you know which photons belonged to the same biphoton. Then you can make the selection necessary for Experiment 2 after everything has long since happened to the photons. This is the famous post-selection, which of course also becomes somewhat "spooky" when one (in my opinion falsely) interprets the measurement at A as the cause of the outcome at B, rather than (in my opinion correctly) interpreting the preparation in an entangled state as the cause of the correlations described by it.
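The sub-ensemble/post-selection picture can be illustrated with a small Monte Carlo sketch (angles, sample size, and seed are arbitrary choices of mine): photon pairs are sampled from the Born-rule joint distribution, and the conditioning on A's outcome is done purely by selecting records afterward; no state is ever collapsed during the run.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = 0.3, 1.1                          # arbitrary illustrative filter angles
c2 = np.cos(A - B) ** 2

# Born-rule joint distribution over (A passes, B passes) for the
# entangled state (|HH> + |VV>)/sqrt(2):
p = {(1, 1): 0.5 * c2,       (1, 0): 0.5 * (1 - c2),
     (0, 1): 0.5 * (1 - c2), (0, 0): 0.5 * c2}

outcomes = list(p.keys())
draws = rng.choice(len(outcomes), size=200_000, p=list(p.values()))
samples = [outcomes[i] for i in draws]

# Post-selection: keep only the pairs where A's photon passed, then
# look at the relative frequency of B-pass in that sub-ensemble.
sub = [b for (a, b) in samples if a == 1]
freq = sum(sub) / len(sub)               # approaches cos^2(A - B)
```

The post-selected frequency converges to cos²(A−B), the same number Description 2 assigns, even though the selection can be made long after both detections, just as described above.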
 
  • #63
vanhees71 said:
No it's due to the fact that A knows that the photons are entangled and thus what B must measure at his photon.

Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.
 
  • #64
atyy said:
What is the status of domain wall fermions and the standard model? Can a lattice standard model be constructed with domain wall fermions, at least in principle, even if it is too inefficient to simulate? Or is the answer still unknown?
I don't think there is a necessity for chiral fermions on the lattice. In the standard model, fermions come in pairs of Dirac particles. These can be put on the lattice.

The point is, of course, that one wants some gauge groups acting only on chiral components. And this seems impossible to do exactly on a simple lattice model. So strange things like domain wall fermions are invented. In fact, there is no need for them: use an approximate gauge symmetry on the lattice; anyway, these gauge fields are massive. Not renormalizable? Who cares; the long-distance limit rules out the non-renormalizable elements anyway. (Of course, one has to start with the old Dirac-Fermi approach to gauge field quantization, because the Gupta-Bleuler approach depends on exact gauge symmetry to get rid of negative probabilities.)
 
  • #65
stevendaryl said:
Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.

I think vanhees71 considers that one can add Description 2 to Description 1. To do so, one would modify Description 2 to be conditional: "If Alice measures the photon to be polarized at angle A ...".

All that is fine. But what he doesn't realize is that, whatever one does, Einstein causality is gone; and Einstein causality is either not meaningful (if the variables of QM are not real) or violated by QED (if the variables of QM are real).
 
  • #66
Again: The result for Bob's photon is not caused by Alice's measurement
Sure, but only a physical collapse contradicts this; a non-physical collapse follows from the violations of Bell's inequalities and respects the ban on FTL causation.
vanhees71 said:
No it's due to the fact that A knows that the photons are entangled and thus what B must measure at his photon.

It seems to me entanglement correlations are the empirical embodiment of non-physical collapse.
 
  • #67
What is a "non-physical collapse"? Either there is something collapsing in the real world when a measurement is made or not! In my opinion there's not the slightest evidence for anything collapsing when we observe something.
 
  • #68
stevendaryl said:
Saying that "the photons are entangled" is just a description of the initial state of the two photons (my Description 1) above. "Collapse" is about the transition from Description 1 to Description 2.
But where is anything "collapsing" here? A measures her photon's polarization. If she finds it to be polarized in the A-direction, she considers what's the probability that B finds his photon polarized in the B-direction; if her photon is absorbed, she doesn't consider anything further. I.e., she uses a sub-ensemble of the complete ensemble originally prepared, which is another preparation procedure, i.e., a different measurement than the one done in Experiment 1, where all photon pairs are considered. Nothing has collapsed; it's just the choice of the ensemble based on Alice's (local!) measurement of her photon's polarization. Everything is calculated by the usual rules of probability theory from the given initial biphoton state. There's no need to assume an instantaneous collapse of B's photon's state by A's measurement on her photon in order to explain all the probabilistic properties of the two experiments under consideration.
 
  • #69
vanhees71 said:
You keep repeating this every time, but I've not seen a single example for such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of this, I need a very convincing experimental evidence for a collapse!

How do you obtain a wave function as the initial state? You make a measurement, it has a value, and that means you have obtained a state with the corresponding eigenstate as the wave function. Without collapse there would be no method of state preparation in quantum theory.

vanhees71 said:
Either you believe in the existence of a collapse or Einstein causality and locality.
Anyway, to believe in Einstein causality is nonsensical (causality with Reichenbach's principle of common cause is not causality; one could name it "correlaity" or so). But if you accept common cause, then you need FTL causal influences to explain the violations of Bell's inequalities. So the collapse is not important for this at all; the violation of Bell's inequality is the point, and this point is quite close to loophole-free experimental validation.

vanhees71 said:
The most successful model ever, the Standard Model of elementary particle physics, obeys both.
No, it obeys only what I have named "correlaity": instead of claims about causality, it contains only claims about correlations.
 
  • #70
vanhees71 said:
But where is anything "collapsing" here?

Well, it has to do with whether you think that my Description 1 and Description 2 are objective facts about the world, or whether they are just states of somebody's knowledge. If they are objective facts about the world, then there is a physical transition from the situation described by Description 1 to the situation described by Description 2.

On the other hand, if Description 1 and Description 2 are not objective facts about the world, then that raises other thorny questions. What IS an objective fact about the world? If you say it's only the results of measurements, then that's a little weird, because a measurement is just a macroscopic interaction. Why should facts about macroscopic states be objective if facts about microscopic states are not?

A measures her photon's polarization. If she finds it to be polarized in the A-direction, she considers what's the probability that B finds his photon polarized in the B-direction; if her photon is absorbed, she doesn't consider anything further, i.e., she uses a sub-ensemble of the complete ensemble originally prepared, which is another preparation procedure

It seems to me that this business of "preparing a sub-ensemble" is equivalent to invoking collapse.
 
