Can a conscious observer collapse the probability wave?

In summary, there is debate about whether a conscious observer is necessary to collapse the wave function in quantum mechanics. However, there is no experimental evidence that a conscious observer plays any special role in collapsing the wave function. It is the recording of information that determines collapse, and human memory is not a reliable recording device, so a conscious observer would in any case be a poor means of collapsing the wave function. In experiments, an interference pattern is expected for particles whose which-path information was never recorded, and no interference pattern for particles whose path can, even in principle, be recovered.
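As a worked illustration of that last claim (the standard which-way textbook reasoning, stated here as a sketch): if each path tags the particle with a marker state $|m_L\rangle$ or $|m_R\rangle$, the intensity on the screen is

$$I(x) \propto |\psi_L(x)|^2 + |\psi_R(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_L^*(x)\,\psi_R(x)\,\langle m_R|m_L\rangle\right],$$

so the fringe visibility scales with $|\langle m_R|m_L\rangle|$: orthogonal marker states (a perfect record of the path) eliminate the interference term entirely, whether or not a conscious observer ever reads the record.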
  • #106
bhobba said:
Obviously since I hold to it I don't agree. But you are not the only one to hold that view - indeed there are those who believe that the ensemble interpretation (with or without decoherence) is simply a restating of the math and should not even be given the title of an actual interpretation.


Except that the laws of logic do not allow you to disagree unless you actually have explanations for the quantum phenomena.

It's not "my view" that the ensemble interpretation with or without decoherence does not solve anything; it is objective reality.
 
  • #107
bhobba said:
Nor do I - my view doesn't solve all the issues - I simply like it because the problems it doesn't solve I find acceptable. As I often say all current interpretations suck - you simply choose the one that sucks the least to you.
Hahaha, that's an answer I love!

kith said:
Why not? What would be an "allowed" assumption for the observables? Why are functions on the phase space "allowed" and self-adjoint operators on the Hilbert space are not?
Simply that you are not allowed to give measurement any special role. Just as in classical physics, you would need to calculate the behavior of the measurement apparatus by applying the equations of motion to it and find that the calculated behavior is consistent with the display (e.g. the calculated amplitude of the needle in a galvanometer corresponds to the labels on its scale). This is required to legitimize that your detector measures exactly what it is said to measure and not something entirely different.

Alternatively, if you want to say that a measurement can be represented by a self-adjoint operator, you must define exactly where this operator arises from and why applying it to the state yields the value you are looking for. Say I give you the blueprint of a detector: you must be able to calculate the corresponding self-adjoint operator it measures and prove that the measurement process using that operator is consistent with the equations of motion of the theory (so that applying it to the state is merely a shortcut for calculating the results). Guessing the observable for a detector is not rigorous enough.

And sure, in any case you need to find an adequate representation of your detector within the theory. That already implies some interpretation of which parts of the apparatus are actually relevant for the measurement and thus must be modeled (though that should be experimentally checkable). In the case of QM, and in the simplest case, one would expect that a detector can be represented by the potential (and other physical fields) it places the measured object in, and that these in turn arise from the components the detector is built from (which in their finest decomposition are themselves molecules, atoms and so on, i.e. objects the theory must describe).

That said, every theory requires a kind of 'interpretation' that translates everything we experience in reality into an adequate representation within the theory. Obviously this translation must be well defined and unique for every object. In classical physics this is mostly obvious, but it becomes difficult in QM because QM focuses on describing microscopic objects. Yet this is the only way to construct a general theory that can in principle be applied to any problem. Otherwise the theory is not a complete description and has interpretation-related degrees of freedom that can be used to bend the results any way needed (i.e. if the theory yields wrong results I could just say: hey, my self-adjoint measurement operator was wrong (it does not represent my new detector), then construct one that gives me the results I want and declare the new operator the adequate representation).
 
  • #108
Quantumental said:
Except that the laws of logic do not allow you to disagree unless you actually have explanations for the quantum phenomena. It's not "my view" that the ensemble interpretation with or without decoherence does not solve anything; it is objective reality.

Yea - I guess guys like Ballentine have got it all wrong then - he uses it in his standard textbook to solve pretty much every issue and even purports to show (see Chapter 9) that any other interpretation leads to problems. Now, even though I hold to that interpretation, I am not saying I necessarily agree with him, but it does show it's not quite the 'objective reality' you seem to think it is. The fact of the matter is that what any interpretation solves, or even whether it needs to be solved, is a matter of opinion - nothing to do with 'objective reality', whatever that is in this connection.

Thanks
Bill
 
  • #109
Killtech said:
Alternatively, if you want to say that a measurement can be represented by a self-adjoint operator, you must define exactly where this operator arises from and why applying it to the state yields the value you are looking for.

No - all you need is to have it as an axiom - which it is. See Chapter 2 of Ballentine.

Of course you may decide to give it a deeper justification - but you don't have to. In this case, though, I believe there is one. Suppose there is a system and an observational apparatus with n outcomes y_i. Write them out as a vector, sum_i y_i |b_i>. The problem is that the y_i are not invariant under a change of basis, and since the basis is an entirely arbitrary man-made choice, the outcomes should be expressed in a way that is basis-invariant. Changing each |b_i> to |b_i><b_i| gives sum_i y_i |b_i><b_i|, a Hermitian operator whose eigenvalues are the possible outcomes of the measurement, independent of basis.
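A minimal numerical check of this construction (a numpy sketch; the outcomes and basis below are arbitrary illustrative choices, not from the post):

```python
import numpy as np

# Outcomes y_i of a hypothetical 3-outcome measurement
y = np.array([1.0, -1.0, 0.5])

# An arbitrary orthonormal basis {|b_i>}: columns of a random unitary
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# A = sum_i y_i |b_i><b_i|
A = sum(y[i] * np.outer(Q[:, i], Q[:, i].conj()) for i in range(3))

assert np.allclose(A, A.conj().T)                    # A is Hermitian
assert np.allclose(np.sort(np.linalg.eigvalsh(A)),   # its eigenvalues are
                   np.sort(y))                       # exactly the outcomes
print("eigenvalues:", np.linalg.eigvalsh(A))
```

Whichever orthonormal basis one starts from, the operator's eigenvalues - and hence the possible outcomes - come out the same.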

Thanks
Bill
 
  • #110
bhobba said:
No - all you need is to have it as an axiom - which it is. See Chapter 2 of Ballentine.
Let's take the idea to the extreme. I give you the theory of everything: black-box-function theory. This theory is very simple and consists of only one axiom: for every possible setup there exists an exact black-box function that always yields the right results. Of course you can measure things, thereby obtain parts of the black-box function, and use it to predict future experiments with the same setup. But because no interpretation is available, you will never be able to derive the black-box function theoretically from the experimental setup. So all you get is a purely empirical theory with no content at all - and of course, by definition, it describes the world perfectly and is always right.

But that's not what we are searching for. We want a theory that, given any blueprint of an experimental setup, can calculate everything correctly without needing measurements beforehand. For that you first need an interpretation that translates the setup into the terms of the theory, so you can do the calculations at all. And the theory should require minimal input: if the electron charge can be derived within the theory, that is preferable to having it as a variable dependent on measurements.
 
  • #111
Killtech said:
But that's not what we are searching for. We want a theory that, given any blueprint of an experimental setup, can calculate everything correctly without needing measurements beforehand. For that you first need an interpretation that translates the setup into the terms of the theory, so you can do the calculations at all.

Why do you believe QM requires measurements beforehand to make predictions?

QM is a theory about measurements but it does not require any beforehand.

Or is your beef that values cannot be assigned independent of measurement - sorry - Bell and Aspect ruled it out.
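For concreteness, a minimal sketch of what "Bell and Aspect ruled it out" refers to (the standard CHSH setup, used here purely for illustration): quantum mechanics predicts the singlet correlation $E(a,b) = -\cos(a-b)$, and at the usual angle choices the CHSH combination exceeds the bound of 2 that any assignment of pre-existing local values must obey.

```python
import numpy as np

def E(a, b):
    """QM correlation for the singlet state at analyzer angles a, b."""
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

# CHSH combination: local hidden-variable (pre-assigned value) models
# satisfy |S| <= 2, but QM gives 2*sqrt(2).
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # 2.828... > 2, so pre-assigned local values are ruled out
```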

Thanks
Bill
 
  • #112
bhobba said:
Why do you believe QM requires measurements beforehand to make predictions?

QM is a theory about measurements but it does not require any beforehand.

Or is your beef that values cannot be assigned independent of measurement - sorry - Bell and Aspect ruled it out.

Thanks
Bill
How do you know, then, that a spin detector actually measures spin?
You cannot check it within the theory, so you must check it experimentally, or rely on a different (classical) theory to justify it (though that theory is known to be wrong for microscopic objects).
If a setup yields wrong results, you cannot rule out that the observable operator you were using was wrong, and if it was, the wrong results do not falsify the theory, because the theory gives no derivation of the operator for a given detector. So the theory is fail-safe in that regard.
 
  • #113
Killtech said:
How do you know, then, that a spin detector actually measures spin?
You cannot check it within the theory, so you must check it experimentally.

The same way an engineer designs anything - a combination of theory and of course testing.

Killtech said:
If a setup yields wrong results, you cannot rule out that the observable operator you were using was wrong, and if it was, the wrong results do not falsify the theory, because the theory gives no derivation of the operator for a given detector. So the theory is fail-safe in that regard.

That's why experimental results are checked independently. If an experiment produces anomalous results all sorts of things are checked - but so far QM has come through unscathed.

Do you have any actual comment about QM rather than this general philosophical waffling?

Thanks
Bill
 
  • #114
Given any self-adjoint operator, derive a blueprint for a detector that measures it.
 
  • #115
Killtech said:
Given any self-adjoint operator, derive a blueprint for a detector that measures it.

Given E=mc², design a cyclotron and detector to measure it.

Sorry mate, this will be my last reply to this off-topic irrelevancy. I suggest you take it to the philosophy forums.

Thanks
Bill
 
  • #116
Darwin123 said:
According to decoherence theory, the isolated system containing the environmental system and the probed system really evolves by the Schroedinger equation. The "randomness" of the measured results corresponds to unknown phases in the environmental system. There is an assumption here that there are far more unknown phases in the environmental system than in the measured system. Thus, the environment is considered complex.
That doesn't cut it. In my view, decoherence theory is actually something completely different than that-- it is something that allows you to treat subsystems via projections. That's it, that's all it does. It never says anything at all about isolated systems, because we never do observations on isolated systems. That is the key statement at the very heart of "the measurement problem", and note that decoherence has nothing whatever to say about it (because decoherence theory is all about how to treat subsystems). Even with decoherence theory, which in my view is just basic quantum mechanics, one still has the unanswered question: does the isolated system evolve by the Schroedinger equation, or doesn't it? Taking a stand on that question invokes an interpretation of quantum mechanics, and decoherence theory simply doesn't help at all.

Let me give an example: the Schroedinger cat. Decoherence theory has no trouble saying why the cat is in a mixed state, so is either dead or alive-- it's because "the cat" is actually a projection from a much larger isolated system. So in "true" physical terms, there is no such thing as "the cat"; it is merely a choice we make to consider only a fraction of what the reality holds. Decoherence theory is no help with this; all it does is recognize that in fact "the cat" does not exist as an independent entity in the theory of quantum mechanics; it is a kind of social construct that involves a projection from that which is treated in the physical theory. The social construct is easily constructed as being either alive or dead, and there is no contradiction with the unitary evolution of the actual physical entities treated by the Schroedinger equation (if one holds that interpretation). Hence, decoherence explains why our social constructs behave as they do (pure states project into mixed states, that's just basic quantum mechanics-- the same would be true for the social construct of "one electron" in what is actually a two-electron system, or writ large, in a white dwarf star). What decoherence does not explain is what the isolated system is doing-- why, when we observe an "alive cat" projection, is there nothing left of the "dead cat" projection, if in fact the entire system was a pure state to begin with? Decoherence has nothing at all to say about that, you still have to choose: either the state was initially pure and evolved into something whose projections became pure substates (Copenhagen), or it was initially pure and evolved into a bunch of entangled projections of which our perceptions are restricted to only one (many worlds), or it was never pure in the first place because wave functions for macro systems don't really exist, macro systems are always mixed states so are always only statistical amalgamations (the ensemble view).
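A minimal numerical version of that projection (an assumed toy model, with the "cat" as one qubit and the whole environment compressed into a second qubit): the whole is a pure state, but tracing out the environment leaves a diagonal, mixed density matrix for the cat.

```python
import numpy as np

# "Alive"/"dead" pointer states of the subsystem (the "cat" projection)
alive = np.array([1.0, 0.0])
dead  = np.array([0.0, 1.0])

# Pure entangled state of the whole: (|alive>|e0> + |dead>|e1>)/sqrt(2),
# with orthogonal environment records e0, e1
psi = (np.kron(alive, alive) + np.kron(dead, dead)) / np.sqrt(2)
rho_whole = np.outer(psi, psi.conj())

# Partial trace over the environment (indices: system, env, system', env')
rho_cat = rho_whole.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_cat)                                # diag(0.5, 0.5): fully mixed
print(np.trace(rho_whole @ rho_whole).real)   # 1.0 (the whole stays pure)
print(np.trace(rho_cat @ rho_cat).real)       # 0.5 (the projection is mixed)
```

The off-diagonal "interference" terms of the cat's density matrix vanish exactly when the environment records are orthogonal, which is the point above: the mixedness belongs to the projection, while the whole (on this interpretation) remains a pure, unitarily evolving state.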

One question that I haven't entirely settled in my own mind is why you can't consider the unknown phases as "hidden variables". The answer, to the degree that I understand it, is that the unknown phases in the decoherence model do not have the properties of the "hidden variables" defined in Bell's theorem. When Bell proved that "hidden variables" do not explain quantum mechanics, he carefully defined "hidden variable" in a mathematically formal way. However, the phases of the waves in decoherence theory are "variables" and they are "hidden" in the broadest meaning of the words.
Yes, I think that's right-- it's like von Neumann's "no-go" theorem about hidden variables: he chose a restricted definition of how they have to behave. I believe that if one wishes to hold that macro systems evolve strictly deterministically, one has gone beyond the ensemble view (which is inherently statistical) and into the Bohmian view (which is deterministic, and involves the kind of generalized hidden variables you are talking about).
1) Why can't the unknown phases in the environment of the probed system be considered "hidden variables"?
They can-- to a Bohmian. To someone using the ensemble interpretation, the unknown phases don't really solve the problem if you think the initial state is a pure state with unknown phases. Such a pure state must still evolve unitarily, even under decoherence, and there is still a dead cat in there just as much as an alive one. There is no way the initial phases can all prefer an alive cat after one half-life of the apparatus; why would they turn out that way?
2) Why isn't "decoherence theory" ever called a "hidden variable" theory?
Because decoherence only explains the behavior of the projection, whereas hidden variable theory is about the whole isolated system.
 
  • #117
bhobba said:
As I often say all current interpretations suck - you simply choose the one that sucks the least to you.
I completely agree with you that choosing an interpretation is very much making a "devil's bargain," and as such is quite subjective. But I would like to offer you an alternative to the thought that all interpretations suck, which is that what we regard as a "sucky" aspect of our devil's bargain might actually end up being a game-changing insight into how physics can move forward.

As an example, I give you the interpretation of classical mechanics that was normally adopted, which was often viewed as "sucky" in Newton's day: it said that what is going to happen is only determined by what has already happened, not by some "first cause" or what "should" happen. To many in Newton's day, this was a complete failure of the theory-- it completely sucked that you had to know what had already happened before you could know what was going to happen, that was like "passing the buck" as far as they were concerned. Some went as far as saying it didn't tell you anything at all, it was completely circular to have to know what had already happened to know what was going to happen! But no one thinks of that as a "sucky" element of classical mechanics now, instead we simply moved the goal posts of what a physical theory is supposed to do.

In other words, instead of making the interpretation fit our preconceptions about physics, we learned to modify our conceptions of physics to fit the workable interpretation of classical mechanics. I submit the only problem with quantum mechanics is that there are still too many allowable interpretations, so we cannot see what the "lesson" is that we should be using to change what we think physics is. The only thing that sucks about the interpretations is that they force us to look in different directions to see the future of physics, placing us in an uncomfortably uncertain place. That's why we still need to find the "best" interpretation, the one that guides future progress and teaches us what physics is supposed to be at this stage.
 
  • #118
Killtech said:
...
Thanks for your post. I have never thought about some of these things before, so my answer will necessarily be half-baked.

The first important question for me is how we can know how to measure a given observable, and which observable a given apparatus measures. The answer to this question is not entirely clear to me even in classical mechanics. Further input is appreciated.

You suggest that we have to construct the Hamiltonian of the apparatus and calculate explicitly that the pointer/needle/whatever points to a label which is the actual value of the observable we want to measure.

This raises a couple of issues for me. First of all, it explicitly assumes that the observable has a well-defined value at all times. This would require our QM theory to have value-definiteness (like dBB) which is a very strong assumption. Why should we assume this?

If we leave it out, decoherence brings us into close analogy with the classical case: we construct a Hamiltonian for the apparatus, the system and their interaction; we use unitary evolution; we trace over the environment; we get decoherence and the interference is gone. Most importantly, the basis we get decoherence in determines which observable is being measured.

The only thing that we don't get is a definite outcome. But once we use a collapse-free interpretation, we have a fully consistent theory.
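A minimal sketch of this recipe (an assumed toy model, not from the post): one system qubit coupled to N environment qubits through a sigma_z (x) sigma_x interaction. Tracing out the environment, the off-diagonal element of the system's reduced density matrix decays as N grows, and it does so in the basis singled out by the interaction Hamiltonian (here the sigma_z basis).

```python
import numpy as np

rng = np.random.default_rng(1)

# System starts in (|0> + |1>)/sqrt(2); each environment qubit starts in |0>.
# Interaction H = sigma_z (x) sum_k g_k sigma_x^(k): each environment qubit
# rotates one way or the other depending on the system state, so the
# environment "records" the system in the sigma_z basis.
def off_diagonal(N, t=1.0):
    g = rng.uniform(0.5, 1.5, size=N)   # random coupling strengths g_k
    # The two conditional environment states differ by e^{2i g_k t sigma_x}
    # per qubit, and <0|e^{2i g t sigma_x}|0> = cos(2 g t), so the overlap
    # <E_0|E_1> factorizes into a product of cosines.
    overlap = np.prod(np.cos(2 * g * t))
    return 0.5 * overlap                # rho_01(t) of the reduced matrix

for N in (1, 4, 16, 64):
    print(N, abs(off_diagonal(N)))      # -> decays toward 0 as N grows
```

The diagonal entries stay at 1/2 throughout; only the interference term dies, which is the decoherence described above, with the pointer basis "einselected" by the form of the interaction.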
 
  • #119
Ken G said:
Decoherence has nothing at all to say about that, you still have to choose: either the state was initially pure and evolved into something whose projections became pure substates (Copenhagen), or it was initially pure and evolved into a bunch of entangled projections of which our perceptions are restricted to only one (many worlds), or it was never pure in the first place because wave functions for macro systems don't really exist, macro systems are always mixed states so are always only statistical amalgamations (the ensemble view).
That's an interesting view, though I'm not sure it is correct. My current view is that the interpretational question really lies in the interpretation of the mixed state of the (sub)system, and not in assumptions about the state of the whole, because decoherence can be derived from the unitary dynamics of the whole. Now in your view, we have already chosen an interpretation by the initial state we use. This seems uncommon, because decoherence is derived from the unitary dynamics of the whole in the theory of open systems, where no interpretational questions are discussed. Also I'm not sure if we can always find an initial state of the whole where the state of the subsystem is led from a pure superposition state to a pure eigenstate as your Copenhagen version would imply.

Independent of this, I'd really like to hear your view on the measurement issues raised by Jazzdude, me and Killtech. ;-)
 
  • #120
kith said:
Thanks for your post. I have never thought about some of these things before, so my answer will necessarily be half-baked.

The first important question for me is how we can know how to measure a given observable, and which observable a given apparatus measures. The answer to this question is not entirely clear to me even in classical mechanics. Further input is appreciated.

You suggest that we have to construct the Hamiltonian of the apparatus and calculate explicitly that the pointer/needle/whatever points to a label which is the actual value of the observable we want to measure.

This raises a couple of issues for me. First of all, it explicitly assumes that the observable has a well-defined value at all times. This would require our QM theory to have value-definiteness (like dBB) which is a very strong assumption. Why should we assume this?

If we leave it out, decoherence brings us into close analogy with the classical case: we construct a Hamiltonian for the apparatus, the system and their interaction; we use unitary evolution; we trace over the environment; we get decoherence and the interference is gone. Most importantly, the basis we get decoherence in determines which observable is being measured.

The only thing that we don't get is a definite outcome. But once we use a collapse-free interpretation, we have a fully consistent theory.
As you correctly write, the basis that arises from an observable determines what is being measured. So you will have to find a relation between the Hamiltonian and your basis and postulate that this relation is responsible for the decoherence (for all systems). Finding this relation would be a real addition to the theory and could complete it, in the sense that it would provide a first vague (but somewhat defined) mechanism determining when the collapse actually happens and what causes it.

For example, in the simple case of a hydrogen atom you could argue that only energy eigenstates have a time-independent charge density. So if you classically couple the EM field to the charge, you find that only those states are no source of EM waves (although they have angular momentum ;)). Superpositions of two energy eigenstates oscillate periodically with a frequency proportional to their energy difference (made explicit in the expression at the end of this post). All such superpositions lose energy this way and thus must be unstable. This distinguishes the energy eigenbasis from all other bases and could be a hint of what the above-mentioned relation might look like.

The other option I see would be to try to construct a justification of measurement from the equations of motion (EOM). But one finds that these yield unphysical results in systems where a wave function interacts with multiple objects that are macroscopically far apart, because the EOM describe quantum objects as pure waves with no particle nature whatsoever. Thus they are wrong in general and rely on the measurement postulate as a supplement for macroscopic interactions. One possibility for generalizing them to yield better results at the macro level is to add non-linear terms that change the macro behavior. It makes sense to go for non-linear dynamics because they are known to produce results astonishingly similar to QM predictions: they provide a source of randomness arising from chaos (sensitivity to initial conditions), collapse-like behavior, and possibly soliton solutions that behave like waves microscopically but as particles macroscopically. Finally, QED is a linearization (second quantization) of a naturally non-linear field theory (Dirac-Maxwell: the EM field classically coupled to the charge density of the wave function).

However, non-linear dynamics are far more complex and much harder to solve. In the case of Dirac-Maxwell, little is known about any solutions, even in the simplest free case. On the other hand, a non-linear QM must provide a derivation of the measurement postulate, because the usual probability interpretation breaks down when the wave function is no longer normalized to 1. Given the current lack of a mechanism deciding when the collapse happens, it is very difficult to guess the form of the non-linear interactions needed to reproduce it.

In any case, you need to extend QM by something to solve the measurement problem.
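For reference, the oscillation in the hydrogen example above can be made explicit (standard textbook reasoning). For a superposition of two energy eigenstates $\phi_1, \phi_2$,

$$\psi(t) = c_1\phi_1 e^{-iE_1 t/\hbar} + c_2\phi_2 e^{-iE_2 t/\hbar},$$

the charge density is

$$|\psi(t)|^2 = |c_1\phi_1|^2 + |c_2\phi_2|^2 + 2\,\mathrm{Re}\!\left[c_1^* c_2\,\phi_1^*\phi_2\, e^{-i(E_2-E_1)t/\hbar}\right],$$

which oscillates at angular frequency $\omega = (E_2 - E_1)/\hbar$, whereas a single eigenstate gives a strictly time-independent $|\phi|^2$.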
 
  • #121
kith said:
Now in your view, we have already chosen an interpretation by the initial state we use.
We never "use" an initial macro state; it would be way too difficult in any interpretation. The interpretation lies in what we imagine the initial state is. None of the interpretations involve usage; everything we actually use is the same in every interpretation, and that's why they all get the same answers.
This seems uncommon, because decoherence is derived from the unitary dynamics of the whole in the theory of open systems, where no interpretational questions are discussed.
Decoherence can be used to show that a closed system evolving unitarily can project onto a density matrix of a subsystem that evolves, over time, to be diagonal. So we get that subsystem density matrices can be diagonal, so projections of pure states can be mixed states. But we never know the quantum state of the measuring device, so we simply don't know if it even has one-- this is purely a choice of our imagination to make. Decoherence allows us to avoid contradiction when we imagine that macro systems have a quantum state, but it is not evidence that they do. And above all, it begs the key issue in the measurement problem-- how does a diagonal density matrix for the substate turn, unitarily or otherwise, into a definite measured outcome?
Also I'm not sure if we can always find an initial state of the whole where the state of the subsystem is led from a pure superposition state to a pure eigenstate as your Copenhagen version would imply.
We can never find any initial state for the whole, if a measurement is involved. There is never any measurement that has a well-defined initial state for the environment; that's why we need an interpretation of the environmental interaction. Copenhagen just says that part of the interaction in the measurement creates a "collapse" which need not involve unitary evolution of the entire system. It is close to the ensemble interpretation, in that neither asserts there is a unitarily evolving quantum state for the whole, but the ensemble interpretation does not take the "collapse" literally because the whole mathematical structure applies only to the ensemble, whereas Copenhagen suggests that something inherently non-deterministic is occurring.
Independent of this, I'd really like to hear your view on the measurement issues raised by Jazzdude, me and Killtech.
I'd say there are two very different "measurement problems" that tend to get confused. One is, how does a unitarily evolving quantum state of the whole project into a diagonal density matrix for a subspace, and the other is, how does a diagonal density matrix turn into a definite outcome. The various interpretations hinge on the answer to the latter question, and I don't see any progress on that issue at all-- I see it as entirely a subjective choice for the philosopher/physicist. Decoherence has made interesting progress on the former question, but in my view that was always the easy question.
 
  • #122
Ken G said:
I'd say there are two very different "measurement problems" that tend to get confused. One is, how does a unitarily evolving quantum state of the whole project into a diagonal density matrix for a subspace, and the other is, how does a diagonal density matrix turn into a definite outcome. The various interpretations hinge on the answer to the latter question, and I don't see any progress on that issue at all-- I see it as entirely a subjective choice for the philosopher/physicist. Decoherence has made interesting progress on the former question, but in my view that was always the easy question.
Excuse my naive approach - I haven't dug into decoherence so far. But whenever you have an operator on a subspace, there exists a basis that diagonalizes it (assuming the axiom of choice). The question for me has always been which basis that will be / which operators get diagonalized, and what determines it. Isn't that the interesting question regarding the first part of the measurement problem?
 
  • #123
Killtech said:
Excuse my naive approach - I haven't dug into decoherence so far. But whenever you have an operator on a subspace, there exists a basis that diagonalizes it (assuming the axiom of choice).
The operator that corresponds to the measurement is diagonalized in the basis corresponding to the eigenstates of the measurement, but that's not what is getting diagonalized in decoherence. It's the density matrix, which does not characterize the measurement; it characterizes the state of the system. The connection to the measurement is that an environment capable of doing a given measurement is an environment that will also diagonalize the density matrix in the eigenbasis of the measurement, but the key point is that diagonalizing the density matrix is not a mathematical operation; it is a physical change.
So diagonalizing a subspace has little direct relevance to doing a measurement on it.

Killtech said:
The question for me has always been which basis that will be / which operators get diagonalized, and what determines it. Isn't that the interesting question regarding the first part of the measurement problem?
I think the problem with that question is that it is framed backwards; framed the right way round, it goes away. We don't wonder why a given operator diagonalizes with respect to some observational basis; we say that the operator corresponds to whatever measurement has the eigenbasis that diagonalizes it. In other words, the fact that we have a given measurement is because we have that diagonalization, not the other way around.
 
  • #124
Ken G said:
I'd say there are two very different "measurement problems" that tend to get confused. One is, how does a unitarily evolving quantum state of the whole project into a diagonal density matrix for a subspace, and the other is, how does a diagonal density matrix turn into a definite outcome. The various interpretations hinge on the answer to the latter question, and I don't see any progress on that issue at all-- I see it as entirely a subjective choice for the philosopher/physicist. Decoherence has made interesting progress on the former question, but in my view that was always the easy question.

From a nonlinear process? To which part? Process 1 or process 2?
 
  • #125
Killtech said:
As you correctly write, the basis that arises from an observable determines what is being measured. So you will have to find a relation between the Hamiltonian and your basis and postulate that this relation is responsible for the decoherence (for all systems).
Well, that's my main point: we don't have to postulate anything here. In principle, we can derive decoherence and the basis it occurs in from the unitary dynamics of the combined system using its full Hamiltonian. I think this is called environment-induced superselection ("einselection").

Killtech said:
The other option I see would be to try to construct a justification of measurement from the equations of motion (EOM). But one finds that these yield unphysical results in systems where a wave function interacts with multiple objects that are macroscopically far apart, because the EOM describe quantum objects as pure waves with no particle nature whatsoever. Thus they are wrong in general and rely on the measurement postulate as a supplement for macroscopic interactions.
The Schrödinger equation is wrong for open systems, but new equations can be derived from the unitary evolution of the larger (isolated) system. So being wrong doesn't necessarily mean that they rely on the measurement postulate.

I'm not familiar with nonlinear QM and the like. But I don't see the necessity for such things.
 
  • #126
Ken G said:
Copenhagen just says that part of the interaction in the measurement creates a "collapse" which need not involve unitary evolution of the entire system.
Yes, this sounds logical. So probably I should refine my view a bit. Copenhagen is special in that the assumption of unitary evolution of the whole system doesn't explain the collapse.

Ken G said:
And above all, it begs the key issue in the measurement problem-- how does a diagonal density matrix for the substate turn, unitarily or otherwise, into a definite measured outcome?
Yes, I agree. At this point, we need an interpretation. But Jazzdude and Killtech think we haven't done enough if we derive the mixed state of the system from unitary evolution of the whole and then explain the question of definite outcomes by an interpretation.
 
  • #127
audioloop said:
From a nonlinear process? To which part? Process 1 or process 2?
There's no need for nonlinearity in process 1, as decoherence accomplishes that linearly. But process 2 is another story. Some might invoke nonlinearity outside quantum mechanics to get the final stage of the "collapse"; others might say it just happens and cannot be described in any theory; still others say it doesn't happen at all, that it is merely an illusion of our perception. We just don't know which one is right at this stage, but I wager that the future of physics will be guided by the answer.
 
  • #128
kith said:
Yes, I agree. At this point, we need an interpretation. But Jazzdude and Killtech think we haven't done enough if we derive the mixed state of the system from unitary evolution of the whole and then explain the question of definite outcomes by an interpretation.
I think it's fair to say that interpretations often have to fill in for missing physics. The problem is, the physics is indeed missing, so the interpretation is the best we can do at present. It might always be missing-- we've been lucky so far that we rarely reach a "dead end" beyond which physics can go no further. Collapse might be too fundamentally wrapped up in the functioning of the observer to be reduced to fundamental physics, in which case it might be that dead end that will always have to be relegated to interpretation. Or, it might be resolved, and set the stage for the next big revolution in physics.
 
  • #129
kith said:
I'm not familiar with nonlinear QM and the like. But I don't see the necessity for such things.
Actually, this is not non-linear QM. As far as I know, Dirac-Maxwell and the like are purely classical field theories that just behave very much like the corresponding quantum field theories, except for measurement, which simply isn't described within such theories.

However, there are non-linear theories that can reproduce the collapse and even derive Born's rule in some form. They are an extension of usual QM and show another solution to the second part of the measurement problem - collapse to a well-defined measured value.
 