Gleason's Theorem and the Measurement Problem

In summary, "Gleason's Theorem and the Measurement Problem" explores the implications of Gleason's Theorem for quantum mechanics, particularly its role in addressing the measurement problem. The theorem provides a mathematical foundation for the probabilistic nature of quantum states and the assignment of probabilities to measurement outcomes based on the geometry of Hilbert space. It explains how quantum mechanics can be consistently interpreted through the framework of density operators, which represent mixed states. The discussion highlights the challenges of defining measurement and the implications of Gleason's Theorem for understanding the collapse of the wave function, ultimately contributing to the ongoing debate about the interpretation of quantum mechanics.
  • #1
gentzen
bob012345 said:
How does QM explain decoherence?
bhobba said:
Some say it solves the measurement problem; others say it solves it for all practical purposes. I think it is a pseudo-problem personally because of Gleason's theorem, but that is another story. If interested, start a new thread about Gleason's Theorem and the Measurement Problem.
 
  • #2
Ok,

For convenience, here is a copy of a post I did a while ago. It is not an interpretation but a heuristic of how the QM formalism can be justified.

Suppose two systems interact, and the result is one of several possible outcomes. We imagine that, at least conceptually, these outcomes can be displayed as a number on a digital readout. That, in essence, is an observation in QM. All I need to know is the number. But I will be more general than this and allow different outcomes to have the same number. To model this, we write the number from the digital readout of the ith outcome in position i of a vector. We can then arrange all the possible outcome values on the diagonal of a square matrix. Those who know some linear algebra recognise this as a linear operator in diagonal matrix form. To be as general as possible, this is logically equivalent to a Hermitian matrix on an assumed complex vector space whose eigenvalues are the possible outcomes.

Why complex? That is a profound mystery of QM - it needs a complex vector space. Those who have calculated eigenvalues and eigenvectors of operators know they often have complex eigenvectors - so from an applied math viewpoint, it is only natural. But just because something is natural mathematically does not mean nature must oblige.

So, we have the first Axiom of Quantum Mechanics:

To every observation there corresponds a Hermitian operator on a complex vector space whose eigenvalues are the possible outcomes of the observation. This operator is called the Observable of the observation.

There is nothing mystical or strange about it; it is just a common-sense way to model observations. The only real assumption is that the vector space is complex.
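As a concrete illustration, here is the "numbers on the diagonal" construction in a few lines of numpy. This is my own toy example (the outcome values and the random change of basis are made up), not part of the original post; it just checks that the eigenvalues of the resulting Hermitian matrix are exactly the readout numbers.

import numpy as np

readout_values = [3.0, -1.0, -1.0]        # digital-readout numbers; a repeated value is allowed
O_diag = np.diag(readout_values)          # the observable written in its eigenbasis

# Any unitary change of basis gives the same physics: a Hermitian matrix
# whose eigenvalues are still exactly the possible outcomes.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)                    # a random unitary matrix
O = U @ O_diag @ U.conj().T

assert np.allclose(O, O.conj().T)         # Hermitian
print(np.linalg.eigvalsh(O))              # approximately [-1, -1, 3] - the readout numbers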

Believe it or not, that is all we need to develop Quantum Mechanics. This is because of Gleason's Theorem:

https://www.arxiv-vanity.com/papers/quant-ph/9909073/
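For readers who don't want to follow the link, here is a rough paraphrase (mine) of what the theorem says. For a separable Hilbert space of dimension at least 3, any assignment of probabilities ##\mu## to projection operators that is additive over mutually orthogonal projections must have the form
$$\mu(P) = \text{Tr}(\rho P)$$
for some positive operator ##\rho## of unit trace. In other words, once observables are modelled as operators on a Hilbert space, the trace rule below is essentially the only consistent way to assign probabilities; see the linked paper for a precise modern treatment.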

This leads to the second axiom of QM:

The expected value of the outcome of any observable ##O##, ##E(O)##, is ##E(O) = \text{Tr}(OS)##, where ##S## is a positive operator of unit trace, called the state of the system.

These are the two axioms of QM from Ballentine - Quantum Mechanics: A Modern Development.
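A minimal numerical check of this trace formula for a qubit (again my own example, not from Ballentine): for a diagonal observable and a pure state written as a density matrix, ##\text{Tr}(OS)## reproduces the probability-weighted average of the outcomes.

import numpy as np

O = np.diag([+1.0, -1.0])                       # observable with outcomes +1 and -1
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])    # a pure state with |amplitudes|^2 = 0.8, 0.2
S = np.outer(psi, psi.conj())                   # the same state as a density matrix

assert np.isclose(np.trace(S), 1.0)             # unit trace
assert np.all(np.linalg.eigvalsh(S) >= -1e-12)  # positive

print(np.trace(O @ S).real)                     # Tr(O S) = 0.6
print(0.8 * (+1) + 0.2 * (-1))                  # probability-weighted outcomes = 0.6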

A state is just a calculational aid implied by modelling observations as an operator.

It is like probabilities themselves. If I put a die in a box and shake it, when does it land on a number? Is it when the box is opened? What is the difference between Schrödinger's cat and the die? In that lies a fundamental difference between QM and classical physics, if not the critical difference.

When an interaction is made (i.e., an observation), the wavefunction aids in calculating the outcome. Of course, it collapses - that is its job in helping to calculate the probability of an outcome. Is collapse itself an issue, or is it the fundamental difference between the die and Schrödinger's cat?

This is similar to the fundamental difference between the statistical correlations in Bell-type experiments and those of a simple pair of dice in a box that is shaken.

Thanks
Bill
 
  • Like
Likes PeroK, vanhees71 and gentzen
  • #3
bhobba said:
A state is just a calculational aid implied by modelling observations as an operator.

It is like probabilities themselves. If I put a die in a box and shake it, when does it land on a number? Is it when the box is opened? What is the difference between Schrödinger's cat and the die? In that lies a fundamental difference between QM and classical physics, if not the critical difference.
So this is the new part compared to your old post. Being careful not to give too much ontological meaning to the state is certainly a good idea. MWI, for example, risks causing misinterpretations if one is not very careful.

However, the state might not be the only reason why there is a measurement problem. For example, A. Neumaier's thermal interpretation uses q-expectations and q-correlations instead of the state. But even here, you don't "automatically" solve the problem of unique results. The thermal interpretation needs an additional assumption for that (the assumption is stated as: there is only a single world).
 
  • #4
What I was getting at is the peculiar, non-classical way two objects can be related, i.e., as a single entangled object. The dice are always separate objects.

Thanks
Bill
 
  • Like
Likes vanhees71
  • #5
bhobba said:
A state is just a calculational aid implied by modelling observations as an operator.

When an interaction is made (i.e., an observation), the wavefunction aids in calculating the outcome. Of course, it collapses - that is its job in helping to calculate the probability of an outcome. Is collapse itself an issue, or is it the fundamental difference between the die and Schrödinger's cat?
Where do you think the collapse is needed in this? I've no clue!
 
  • #6
bhobba said:
Of course, it collapses - that is its job in helping to calculate the probability of an outcome.
I don't think this is correct, or at least it is stated ambiguously. In the 7 Basic Rules of QM, the role "collapse" plays is not to calculate the probability of an outcome of this experiment. It is to update the wave function you use, once you know the result of this experiment, in order to correctly calculate probabilities for the results of future experiments.
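A toy qubit example of this reading of the rule (my own, not from the 7 Basic Rules): prepare ##|+\rangle##, measure ##\sigma_z## and learn that the outcome was +1, then use the updated state to predict a later ##\sigma_x## measurement. Skipping the update gives the wrong prediction for the second measurement, which is exactly the job the update is doing.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)                   # the prepared state |+>

P0 = np.outer(ket0, ket0)                           # projector onto the sigma_z = +1 outcome
p_first = plus.conj() @ P0 @ plus                   # Born rule for the first result: 0.5

updated = P0 @ plus
updated = updated / np.linalg.norm(updated)         # update once the +1 result is known -> |0>

p_x_with_update = abs(plus.conj() @ updated) ** 2   # P(sigma_x = +1 | sigma_z = +1) = 0.5
p_x_without_update = abs(plus.conj() @ plus) ** 2   # = 1.0, wrong once the z result is known
print(p_first, p_x_with_update, p_x_without_update)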
 
  • Like
Likes mattt, bhobba and vanhees71
  • #7
The collapse assumption is unnecessary. All you need is the probability interpretation of the state to predict the probabilities for the outcomes of measurements. The state of the measured system after the measurement depends on the details of the apparatus, i.e., on the interaction between the apparatus and the system. E.g., photons are usually detected using the photoelectric effect, i.e., after the measurement the photon has been absorbed by the detector material and is not in an eigenstate of the self-adjoint operator representing the measured quantity.
 
  • Like
Likes bhobba
  • #8
Agree. I stated it incorrectly. It is indeed to update the wavefunction so the probabilities of future observations can be calculated.

Thanks
Bill
 
  • Like
Likes gentzen and vanhees71
  • #9
Yes, and you have to update it to the correct state. Depending on the interaction of the measured system with the apparatus, this state is usually very different from an eigenstate of the measured observable. Of course, sometimes you can realize good approximations to ideal von Neumann filter measurements, which can then be interpreted as preparation procedures for the system's state after the measurement.
 
  • Like
Likes bhobba
  • #10
vanhees71 said:
The collapse assumption is unnecessary.
vanhees71 said:
you have to update it to the correct state
You're contradicting yourself: "update it to the correct state" once you know the results of a measurement is "the collapse assumption" (at least according to the 7 Basic Rules which I referenced, which is what we have all agreed to as a basis for discussion here).
 
  • Like
Likes dextercioby
  • #11
There is no contradiction. The apodictic collapse assumption is that after a measurement the system is instantaneously projected onto the eigenstate of the corresponding operator belonging to the measured eigenvalue. This is outside of the quantum dynamics. In Nature the "update" is determined, however, by the time evolution given by the complete Hamiltonian of the system + measurement apparatus, and the resulting state is usually not the eigenstate assumed by the collapse postulate. E.g., when detecting a photon by absorbing it at a photoplate you have a transition of the type ##|\psi_{\gamma} \rangle \otimes |\psi_{\text{plate}} \rangle \rightarrow |\psi_{\text{plate}}' \rangle##.
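One way to put this kind of absorbing measurement in formulas is the Kraus-operator description. The following two-level Fock-space model is my own minimal sketch, not part of the post above: after a "click" the conditional state of the field mode is the vacuum, not a one-photon eigenstate of the measured observable.

import numpy as np

vac = np.array([1.0, 0.0])                   # |0 photons>
one = np.array([0.0, 1.0])                   # |1 photon>

K_click = np.outer(vac, one)                 # absorbs the photon: |1> -> |0>
K_none = np.outer(vac, vac)                  # nothing there: |0> -> |0>
# Completeness: K_click^dagger K_click + K_none^dagger K_none = identity
assert np.allclose(K_click.conj().T @ K_click + K_none.conj().T @ K_none, np.eye(2))

psi = (vac + one) / np.sqrt(2)               # superposition of 0 and 1 photons
rho = np.outer(psi, psi.conj())

p_click = np.trace(K_click @ rho @ K_click.conj().T).real
rho_after_click = K_click @ rho @ K_click.conj().T / p_click
print(p_click)                               # 0.5
print(rho_after_click)                       # the vacuum projector, not an N = 1 eigenstate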
 
  • Sad
  • Skeptical
Likes weirdoguy and gentzen
  • #12
vanhees71 said:
The apodictic collapse assumption is that after a measurement the system is instantaneously projected onto the eigenstate of the corresponding operator belonging to the measured eigenvalue. This is outside of the quantum dynamics.
Not according to the 7 Basic Rules that we agreed on as a basis for discussion in the quantum forum. According to those rules, "collapse" is just a mathematical procedure we use to update the state once we know the result of a measurement. No claim is made either way about whether that mathematical process corresponds to an actual physical dynamical process.

There are particular interpretations in which "collapse" means what you describe here. And of course no particular interpretation can claim to be "necessary", so in that sense the collapse assumption is "unnecessary". But that doesn't mean that updating the state once you know the result of a measurement is unnecessary. It is necessary, and, as above, it is "the collapse assumption" according to the 7 Basic Rules.

vanhees71 said:
In Nature the "update" is determined, however, by the time evolution given by the complete Hamiltonian of the system + measurement apparatus
This is also interpretation dependent; in fact, what you are describing here is the Many Worlds Interpretation, in which the unitary dynamics you describe here is the only dynamics.

vanhees71 said:
E.g., when detecting a photon by absorbing it at a photoplate you have a transition of the type ##|\psi_{\gamma} \rangle \otimes |\psi_{\text{plate}} \rangle \rightarrow |\psi_{\text{plate}}' \rangle##.
Now you're contradicting yourself again. The transition you describe is not what "the time evolution given by the complete Hamiltonian of the system + measurement apparatus" gives. What that time evolution gives is a superposition of all the different possible interactions between the photon and the plate. The only way to get to a single physical state of the plate at the end is to invoke a non-unitary collapse process as actual dynamics.
 
  • Like
Likes mattt
  • #13
PeterDonis said:
Not according to the 7 Basic Rules that we agreed on as a basis for discussion in the quantum forum. According to those rules, "collapse" is just a mathematical procedure we use to update the state once we know the result of a measurement. No claim is made either way about whether that mathematical process corresponds to an actual physical dynamical process.
I don't agree with this one point of the postulates. It's simply not true. E.g., if you register a photon after letting it run through a polarization filter, it is absorbed, i.e., the quantum state after the measurement is not a single-photon eigenstate with this polarization direction. It's simply a 0-photon state.

Which state the system is in after the measurement depends on the measurement procedure. In some rare cases you may have an ideal von Neumann filter measurement, which can then be described FAPP by the "collapse of the state", but this is not a fundamental feature of QT, and there are not two types of dynamics in the description of Nature (at least there's not the slightest empirical evidence for that).
PeterDonis said:
There are particular interpretations in which "collapse" means what you describe here. And of course no particular interpretation can claim to be "necessary", so in that sense the collapse assumption is "unnecessary". But that doesn't mean that updating the state once you know the result of a measurement is unnecessary. It is necessary, and, as above, it is "the collapse assumption" according to the 7 Basic Rules.
See above. If you have realized a von Neumann filter measurement, then it's justified to use the collapse assumption to describe the state of the system after the measurement. Otherwise it's not.
PeterDonis said:
This is also interpretation dependent; in fact, what you are describing here is the Many Worlds Interpretation, in which the unitary dynamics you describe here is the only dynamics.
This is not what I describe. For me the Many Worlds interpretation doesn't make any sense at all. We investigate a quantum system, which interacts with a measurement device. The dynamics of the "system" alone is not a unitary time evolution but that of an open quantum system, where you have decoherence through entanglement with the measurement device, and this enables the measurement: after the interaction the measurement device's "pointer state" is entangled with the system in such a way that you have (with more or less accuracy) measured the observable you wanted to measure, and the corresponding macroscopic observable (the "pointer reading") is in general something you can read off and store.
PeterDonis said:
Now you're contradicting yourself again. The transition you describe is not what "the time evolution given by the complete Hamiltonian of the system + measurement apparatus" gives. What that time evolution gives is a superposition of all the different possible interactions between the photon and the plate. The only way to get to a single physical state of the plate at the end is to invoke a non-unitary collapse process as actual dynamics.
As I said, this is well understood by the treatment of the system as an open quantum system. There's no need to assume some "collapse process" outside of the generally valid quantum dynamics. The time evolution of a closed system is unitary, that of open systems isn't!
 
  • Sad
Likes weirdoguy
  • #14
This review of ensemble interpretations by Home and Whitaker discusses the collapse postulate and its relation to the minimal ensemble interpretation in section 5.4. Using their notation, they consider a fictitious infinite ensemble ##|I; \psi\rangle## of a microscopic system ##I## and observable ##R = \sum_r \lambda_r|r\rangle\langle r|## measured by apparatus ##II##. Before measurement, they invoke the ensemble$$|I; \psi\rangle = \sum_r|I;r\rangle\langle r|\psi\rangle$$After measurement, they invoke an infinite ensemble of experimental runs in equation 5.5 $$|I+II;f\rangle = \sum_{r}|I;\phi_r\rangle|II;\alpha_r\rangle\langle r|\psi\rangle$$ (##\phi_r## is used instead of ##r## to accommodate non-projective measurements). They then contrast the orthodox approach with the minimal ensemble approach.
Home and Whitaker said:
The argument continues that an orthodox interpretation must now instigate a collapse [...] However, the argument continues, in an ensemble interpretation, the right-hand side of (5.5) merely tells us that the relative frequency over the ensemble of finding the measuring device in state ##\alpha_r## is ##|\langle r|\psi\rangle|^2##
It might be useful to further contrast the orthodox vs ensemble approaches by introducing a subsequent measurement of observable ##S = \sum_s\lambda_s|s\rangle\langle s|## on the same system with apparatus ##III##. We invoke a third infinite ensemble of two-measurement experimental runs
$$|I+II+III;g\rangle = \sum_{r,s}|I;\chi_s\rangle|II;\alpha_r\rangle|III;\beta_s\rangle\langle r|\psi\rangle\langle s|\phi_r\rangle$$and we can compute any relative frequencies we are interested in. E.g., the relative frequency of outcome ##\beta_s## given the prior outcome ##\alpha_r## is ##|\langle s|\phi_r\rangle|^2##. I.e., Home and Whitaker say explicit invocation of a collapse postulate can be avoided by considering ensembles of post-measurement experimental runs that include the relevant apparatus degrees of freedom, and extracting relative frequencies accordingly. In this sense I think there is a shared spirit between the minimal ensemble interpretation and the thermal interpretation.
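As a quick numerical sanity check of that last relative-frequency claim (my own code, not Home and Whitaker's): with joint weights ##|\langle r|\psi\rangle|^2 |\langle s|\phi_r\rangle|^2## over the two-measurement runs, conditioning on the first outcome ##r## indeed gives relative frequency ##|\langle s|\phi_r\rangle|^2## for the second outcome.

import numpy as np

rng = np.random.default_rng(1)
dim = 3

def random_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi = random_state(dim)                        # components <r|psi> of the initial state
phi = [random_state(dim) for _ in range(dim)]  # components <s|phi_r> of the post-first-measurement states

joint = np.array([[abs(psi[r])**2 * abs(phi[r][s])**2 for s in range(dim)]
                  for r in range(dim)])        # P(r, s) over the two-measurement runs
assert np.isclose(joint.sum(), 1.0)

r = 0
conditional = joint[r] / joint[r].sum()        # relative frequency of s given the outcome r
print(np.allclose(conditional, np.abs(phi[r])**2))  # True: it equals |<s|phi_r>|^2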
 
  • Like
Likes martinbn and vanhees71
  • #15
The state ket ##|I+II;f \rangle## describes an ideal von Neumann filter measurement. In such a case (and that's a very rare case in practice!) you can of course project to one eigenstate by just filtering out all other outcomes. For this you need something else blocking all the unwanted outcomes. E.g., in an SG experiment you need to block the Ag atoms in the partial beam behind the magnet belonging to the unwanted spin projection. The new state behind this additional "blocking element" can then of course be described as in the collapse assumption. This, however, is just an effective description of what in reality is governed by the quantum dynamics of the system + measurement device + "blocker". There's no ominous additional mechanism outside the standard quantum dynamics at work.
 
  • #16
vanhees71 said:
I don't agree with this one point of the postulates.
We had an extensive discussion at the time to clarify what the rules do and don't say. That discussion included the issues you raise.

vanhees71 said:
If you have realized a von Neumann filter measurement, then it's justified to use the collapse assumption to describe the state of the system after the measurement.
Yes, and this point was already addressed in the discussions I referred to above.
 
  • #17
vanhees71 said:
The time evolution of a closed system is unitary
And the "open system" you refer to is part of the closed system whose evolution you say is unitary--and in the unitary evolution of the closed system, the open system that is part of it is entangled with other parts of the overall closed system. And entangled subsystems of a closed system do not have well defined quantum states on their own. Only the overall closed system does.

What your "open system dynamics" does is to trace over the other parts of the closed system in order to assign what is called a "state" (a density operator) to the open system, and then assign a "dynamics" to that state by pretending that it is a well-defined state. But while of course this procedure works just fine in a practical sense, it does not explain why we can get away with this when the open system is entangled, and therefore, as above, should not even have a well-defined state at all.

I understand that the issue I just described isn't even on your radar; you simply don't understand why it's an issue at all. But even if it isn't an issue for you, it is an issue for others. Continuing to point out in threads like this that, to you, there isn't an issue at all, doesn't really contribute anything to the discussion for people who, unlike you, think there is an issue, because you are not actually providing any resolution to the issue; you're just ignoring it.
 
  • Like
Likes weirdoguy
  • #18
PeterDonis said:
And the "open system" you refer to is part of the closed system whose evolution you say is unitary--and in the unitary evolution of the closed system, the open system that is part of it is entangled with other parts of the overall closed system. And entangled subsystems of a closed system do not have well defined quantum states on their own. Only the overall closed system does.
Of course they have well-defined quantum states, i.e., the reduced statistical operators, which follow some kind of master equation rather than a unitary time evolution. The point of course IS the entanglement between the "system" and the "environment/heat bath" it is coupled to, which produces decoherence and classical behavior of the "system" part.
PeterDonis said:
What your "open system dynamics" does is to trace over the other parts of the closed system in order to assign what is called a "state" (a density operator) to the open system, and then assign a "dynamics" to that state by pretending that it is a well-defined state. But while of course this procedure works just fine in a practical sense, it does not explain why we can get away with this when the open system is entangled, and therefore, as above, should not even have a well-defined state at all.
It IS a well-defined state, the reduced state of the "system". Of course it "explains" the emergent classical behavior of such coarse-grained observables.

It's not different from classical kinetic theory: there you also have the exact time evolution of the closed system of ##N## particles (with ##N = \mathcal{O}(10^{24})##), equivalent to the Liouville equation for the ##N##-body phase-space distribution function, which in turn is equivalent to an entire "tower of coupled equations" for the ##n##-body "reduced" phase-space distribution functions (##n \leq N##). In practice you have to cut this so-called BBGKY hierarchy by some assumption like the molecular-chaos assumption (Stoßzahlansatz) à la Boltzmann, and then you get dissipation and irreversibility, aka the H-theorem.
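For readers not steeped in kinetic theory, the molecular-chaos truncation referred to here is, schematically, the factorization of the two-particle distribution into one-particle distributions,
$$f_2(\mathbf{r}_1,\mathbf{p}_1;\mathbf{r}_2,\mathbf{p}_2;t) \approx f_1(\mathbf{r}_1,\mathbf{p}_1;t)\, f_1(\mathbf{r}_2,\mathbf{p}_2;t),$$
which closes the hierarchy at the one-particle level and turns the reversible Liouville dynamics into the irreversible Boltzmann equation (this standard textbook form is added here for context and is not part of the original post).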

The quantum case is very similar. There you have the exact evolution equations for the ##N##-point Schwinger-Keldysh Green's functions, which can be reduced to a Kadanoff-Baym equation for the two-point (one-particle) Green's function, derived from the ##\Phi##-functional variational principle. The first step is to truncate the infinitely many 2PI diagrams defining the exact ##\Phi##-functional by expanding with respect to some parameter (coupling constant(s) or ##\hbar##), and then you coarse grain by, e.g., using the gradient expansion for the corresponding Wigner functions. You then end up with BUU-like equations, "explaining" the classical behavior of the macroscopic observables of large macroscopic systems.
PeterDonis said:
I understand that the issue I just described isn't even on your radar; you simply don't understand why it's an issue at all. But even if it isn't an issue for you, it is an issue for others. Continuing to point out in threads like this that, to you, there isn't an issue at all, doesn't really contribute anything to the discussion for people who, unlike you, think there is an issue, because you are not actually providing any resolution to the issue; you're just ignoring it.
It's of course not me who's providing the resolution of the issue, but the work of a large community of physicists dealing with many-body theory in many branches of physics.
 
  • #19
vanhees71 said:
Of course they have well-defined quantum states, i.e., the reduced statistical operators
In a practical sense, yes, I know that. But we are talking about interpretations and foundations in this thread. As I have already posted, I understand that you don't even see such issues as issues, but other people do. And to those who do see an issue here, your blithe claim that "reduced statistical operators" count as well-defined quantum states is not valid as far as foundations are concerned.

vanhees71 said:
Of course it "explains" the emergent classical behavior of such coarse-grained observables.
Again, your blithe claim here is simply not valid to those who see an issue here that you don't see.

Please stop continuing to assert your personal viewpoint that there is no issue here, as though it were fact. It's not.
 
  • Like
Likes gentzen and vanhees71
  • #20
vanhees71 said:
It's not different from classical kinetic theory
You must be joking. Classical kinetic theory is classical; there is no entanglement. QM has entanglement. I really think you should take a step back and consider the possibility that entanglement is much, much stranger than you think, and presents foundational issues that you are simply not considering.
 
  • Like
Likes gentzen
  • #21
This is not my personal viewpoint but the standard point of view in the literature about open quantum systems. See, e.g.,

E. Joos et al, Decoherence and the Appearance of a Classical World in Quantum Theory, 2nd edition, Springer (2003)

Note the title!
 
  • Sad
  • Skeptical
Likes weirdoguy and gentzen
  • #23
vanhees71 said:
I don't agree with this one point of the postulates. It's simply not true. E.g., if you register a photon after letting it run through a polarization filter, it is absorbed, i.e., the quantum state after the measurement is not a single-photon eigenstate with this polarization direction. It's simply a 0-photon state.
The observable actually measured at the registration device is the presence of a photon, represented by a number operator with the eigenvalues 0 (observed) and 1 (passing). This is a standard von Neumann measurement where the simple collapse rule is not violated.
 
  • #24
vanhees71 said:
E. Joos et al, Decoherence and the Appearance of a Classical World in Quantum Theory, 2nd edition, Springer (2003)

Note the title!
The appearance of a classical world (as a macroscopic average view) is only part of the measurement problem (which also is about what happens in a single measurement).
 
  • Like
Likes vanhees71 and PeterDonis
  • #25
QT explains what happens in a single measurement, according to what's observed. The outcome of the measurement is in general random, and that's what's observed (e.g., in the paradigmatic double-slit experiment with single particles or photons).
 
  • #26
vanhees71 said:
QT explains what happens in a single measurement, according to what's observed.
QT predicts probabilities for what happens in a single measurement.

Not everyone shares your blithe confidence that this counts as an explanation. And you have given no argument for your opinion: you have simply asserted it without argument. That does not give any basis for a productive discussion.
 
  • Like
Likes gentzen
  • #27
My argument is very simple: It's what's observed, i.e., the outcome of a single measurement is random. The predicted probabilities, of course, can only be checked by taking sufficiently large "statistical samples".

BTW, you also do not discuss very productively, because you don't explicitly say what doesn't satisfy you about the minimally interpreted QT.
 
  • #28
vanhees71 said:
My argument is very simple: It's what's observed.
That's not an argument. It's just an assertion that "what's observed" must be sufficient for everyone. It's not.

vanhees71 said:
You also do not discuss very productively, because you don't say what doesn't satisfy you about the minimally interpreted QT.
That's because I have concluded that doing so with you is a waste of my time. You either are unfamiliar with or choose to wilfully ignore the vast published literature that already exists that explains why "minimally interpreted QT" is not foundationally sufficient. We have had multiple previous threads on this. Enough is enough. At this point I am not trying to convince you of anything. I am just asking you to stop hijacking threads about QM interpretations and foundations to assert your opinion when it adds nothing productive to the discussion. If you seriously can't see why there is a measurement problem in QM, or why other people are interested in discussions on QM interpretations and foundations, then it seems to me that the least you can do, as a matter of common courtesy, is to just stop posting in this subforum so others who do see an issue can talk about it without you constantly interfering.
 
  • Like
  • Love
Likes GarberMoisha, weirdoguy and Demystifier
  • #29
Ok, I'll try not to participate in these discussions anymore. They obviously don't lead to anything. Maybe you would be so kind as to point out one review article you have in mind from this "vast published literature" (physics of course, not philosophy!).
 
  • #30
vanhees71 said:
Maybe you would be so kind as to point out one review article you have in mind from this "vast published literature" (physics of course, not philosophy!).
Nope. You should be familiar enough with the literature to know what I am referring to. And your comment about "physics of course, not philosophy" means you might not agree with me anyway about which papers count as "physics" and which you can dismiss as "philosophy". I don't see the point. If you can keep your promise to simply stay out of these discussions in the future, we're good.
 
  • Like
Likes weirdoguy
  • #31
Everywhere else in this forum it's you who pedantically insists on giving concrete references. It's ridiculous!
 
  • Haha
  • Sad
Likes weirdoguy, Lord Jestocost and Demystifier
  • #32
vanhees71 said:
Everywhere else in this forum it's you who pedantically insists on giving concrete references. It's ridiculous!
I have already explained why I am taking this particular position in this particular discussion. Context matters.
 
  • #33
A. Neumaier said:
The observable actually measured at the registration device is the presence of a photon, represented by a number operator with the eigenvalues 0 (observed) and 1 (passing). This is a standard von Neumann measurement where the simple collapse rule is not violated.
But afterwards there's no photon. The usual collapse assumption means that after the measurement the photon is in an eigenstate of the measured observable. However, in this case you have the QED vacuum and no single-photon state.
 
  • #34
vanhees71 said:
But afterwards there's no photon. The usual collapse assumption means that after the measurement the photon is in an eigenstate of the measured observable. However, in this case you have the QED vacuum and no single-photon state.
That's a very narrow and naive vision of collapse. In general, collapse "happens" in the Hilbert space of all physical states, including the photon vacuum, not necessarily in the subspace of one-photon states.
 
  • #35
vanhees71 said:
E.g., when detecting a photon by absorbing it at a photoplate you have a transition of the type ##|\psi_{\gamma} \rangle \otimes |\psi_{\text{plate}} \rangle \rightarrow |\psi_{\text{plate}}' \rangle##.
But this transition is not unitary. In your view of QT, how is it compatible with unitarity? Most physicists refer to such a transition as collapse.
 
  • Like
Likes PeterDonis
