Exploring the Connection Between Quantum Mechanics and Quantum Field Theory

In summary: It applies to the blobs, but as far as I know it is not used later - at least I haven't seen it. One can almost certainly find a use for it - it's just that at my level of QFT I haven't seen it. Some others who know more may be able to comment. BTW the link I gave which proved Gleason's theorem showed it's not really an axiom - but rather a consequence of non-contextuality - but that is also a whole new...
  • #176
vanhees71 said:
This is a bit too short an answer to be convincing. Why is choosing a subsystem of the universe the classical/quantum cut? Matter as we know it cannot be described completely by classical physics at all. So how can just taking a lump of matter as the choice of a subsystem define a classical/quantum cut?

Well, I'm not sure that the cut needs to be classical/quantum, but in order to compare theory with experiment, there needs to be such a thing as "the outcome of an experiment". If the theory predicts that you have a probability of [itex]P[/itex] of getting outcome [itex]O[/itex], then it has to be possible to get a definite outcome in order to compile statistics and compare with the theoretical prediction. But for the subsystem described by quantum mechanics, there are no definite outcomes. The system is described by superpositions such as [itex]\alpha |\psi_1\rangle + \beta |\psi_2\rangle[/itex]. So it seems to me that we distinguish between the system under study, which we treat as evolving continuously according to Schrödinger's equation, and the apparatus/detector/observer, which we treat as having definite (although nondeterministic) outcomes. That's the split that is sometimes referred to as the classical/quantum split, and it seems that something like it is necessary in interpreting quantum mechanics as a probabilistic theory.
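As a minimal illustration of that last point (hypothetical amplitudes, plain NumPy): the comparison with theory only works because each run yields one definite outcome whose frequency can be tallied.

[code=python]
import numpy as np

# Hypothetical amplitudes for the superposition alpha|psi_1> + beta|psi_2>
alpha, beta = 0.6, 0.8j                      # |alpha|^2 + |beta|^2 = 1
p_theory = np.array([abs(alpha)**2, abs(beta)**2])

# Each run of the experiment yields ONE definite outcome (0 or 1);
# only the relative frequencies over many runs can be compared with theory.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=100_000, p=p_theory)
p_observed = np.bincount(outcomes) / outcomes.size

print("Born-rule prediction:", p_theory)     # [0.36 0.64]
print("Observed frequencies:", p_observed)   # approximately [0.36 0.64]
[/code]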
 
  • #177
atyy said:
Well, if you agree that quantum mechanics cannot describe the whole universe, but it can describe subsystems of it, then it seems that at some point quantum mechanics stops working.
Sure. But what does this have to do with the quantum/classical cut? Classical physics doesn't work either!
 
  • #178
stevendaryl said:
The Born interpretation itself to me seems to require a choice of basis before it can be applied. The rule gives the probability for obtaining various values for the results of measurements. I don't see how you can make sense of the Born rule without talking about measurements. How can you possibly compare QM to experiment unless you have a rule saying: If you do such and such, you will get such and such value? (or: if you do such and such many times, the values will be distributed according to such and such probability)
Sure, it requires a choice of basis, but that's the choice of what you measure, because you have to choose the eigenbasis of the operator representing the observable you choose to measure. There's nothing very surprising about that.

QT subscribes only to the 2nd formulation in parentheses: "if you do such and such many times, the values will be distributed according to such and such probability." That's precisely how QT in the minimal formulation works: "doing such and such" is called "preparation" in the formalism and defines what a (pure or mixed) state is, and "the values" refer to an observable you choose to measure. The prediction of QT is that in the given state the probability (distribution) to find a value of this measured observable is given by Born's rule.
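Spelled out in the usual textbook notation (the generic trace form of the rule, added for definiteness), Born's rule reads
$$ P(a\mid\rho)=\mathrm{Tr}\!\left(\rho\,P_a\right), $$
where ##\rho## is the statistical operator fixed by the preparation procedure and ##P_a## is the projector onto the eigenspace of the measured observable with eigenvalue ##a##; for a pure state ##\rho=|\psi\rangle\langle\psi|## this reduces to ##P(a\mid\psi)=\langle\psi|P_a|\psi\rangle##.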
 
  • #179
vanhees71 said:
Sure. But what does this have to do with the quantum/classical cut? Classical physics doesn't work either!

Yes, the classical/quantum cut does not literally mean classical. It just means where we take QM to stop working and where we get definite outcomes.
 
  • #180
stevendaryl said:
Well, I'm not sure that the cut needs to be classical/quantum, but in order to compare theory with experiment, there needs to be such a thing as "the outcome of an experiment". If the theory predicts that you have a probability of [itex]P[/itex] of getting outcome [itex]O[/itex], then it has to be possible to get a definite outcome in order to compile statistics and compare with the theoretical prediction. But for the subsystem described by quantum mechanics, there are no definite outcomes. The system is described by superpositions such as [itex]\alpha |\psi_1\rangle + \beta |\psi_2\rangle[/itex]. So it seems to me that we distinguish between the system under study, which we treat as evolving continuously according to Schrödinger's equation, and the apparatus/detector/observer, which we treat as having definite (although nondeterministic) outcomes. That's the split that is sometimes referred to as the classical/quantum split, and it seems that something like it is necessary in interpreting quantum mechanics as a probabilistic theory.
Sure, but where is there a problem? The very success of very accurate measurements in accordance with the predictions of QT shows that there is no problem. To understand how a measurement apparatus works, ask the experimentalists/engineers who invented it which model of the apparatus they had in mind when constructing it. It's almost always classical, and that the classical approximation works is shown by the very success of the apparatus in measuring what it is supposed to measure.

Another question is, how to understand the classical behavior of macroscopic objects from QT, including that of measurement devices (which are, of course, themselves just macroscopic objects, obeying the same quantum laws of nature as any other). I think that this is quite well understood in terms of quantum statistics and appropriate effective coarse-grained descriptions of macroscopic observables derived from QT.
 
  • #181
atyy said:
Yes, the classical/quantum cut does not literally mean classical. It just means where we take QM to stop working and where we get definite outcomes.
You get definite outcomes and "classical behavior" for coarse-grained macroscopic variables. The microscopic details are only probabilistically described according to QT.
 
  • #182
vanhees71 said:
Sure, but where is there a problem?

The conceptual problem is how to say, rigorously, what it means for a device to measure an observable. Informally, or semi-classically, it means that the device is in a metastable state, and that a small perturbation proportional to the observable being measured will cause it to make a transition into one of a (usually discrete) number of stable pointer states. So there is physics involved in designing a good detector/measurement device, but it doesn't seem that this physics is purely quantum mechanics.
 
  • #183
vanhees71 said:
Another question is, how to understand the classical behavior of macroscopic objects from QT, including that of measurement devices (which are, of course, themselves just macroscopic objects, obeying the same quantum laws of nature as any other). I think that this is quite well understood in terms of quantum statistics and appropriate effective coarse-grained descriptions of macroscopic observables derived from QT.

I don't agree that it is well understood. Coarse graining is not going to get you from a deterministic superposition of possibilities to one possibility selected randomly out of the set.
 
  • #184
vanhees71 said:
Sure, it requires a choice of basis, but that's the choice of what you measure, because you have to choose the eigenbasis of the operator representing the observable you choose to measure. There's nothing very surprising about that.

But you don't choose a basis, you construct a measurement device. In what sense does a measurement device choose a basis? Only in the sense that the measurement device amplifies microscopic differences in one basis so that they become macroscopic differences. The treatment of macroscopic differences is completely unlike the treatment of microscopic differences in standard quantum mechanics. At the microscopic level, an electron can be in a superposition of spin-up and spin-down. But if we have a spin measurement, the result of which is a pointer that points to the word "Up" for spin-up and "Down" for spin-down, then we don't consider superpositions of those possibilities, we get one or the other.
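The amplification step described here is usually written as the standard von Neumann measurement scheme (added for illustration): unitary evolution gives
$$ \big(\alpha\,|{\uparrow}\rangle+\beta\,|{\downarrow}\rangle\big)\otimes|\text{ready}\rangle \;\longrightarrow\; \alpha\,|{\uparrow}\rangle\otimes|\text{Up}\rangle+\beta\,|{\downarrow}\rangle\otimes|\text{Down}\rangle, $$
i.e. linearity alone carries the microscopic superposition into a superposition of macroscopically distinct pointer states; the open question is why only one of the two pointer readings is ever observed.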
 
  • #185
stevendaryl said:
I don't agree that it is well understood. Coarse graining is not going to get you from a deterministic superposition of possibilities to one possibility selected randomly out of the set.
As I had said before, people working in statistical mechanics do not use the eigenvalue-eigenstate link to measurement but the postulates that I had formulated (though they are not explicit about these). This is enough to get a unique macroscopic measurement result (within experimental error).
 
  • #186
vanhees71 said:
You get definite outcomes and "classical behavior" for coarse-grained macroscopic variables. The microscopic details are only probabilistically described according to QT.

No, once you apply the Born rule, you already transition into definite outcomes. Each outcome is definite after you get it, but for identically prepared systems the definite outcomes are distributed according to the Born rule.

So it is not correct to solve the problem by coarse graining after the Born rule is applied, since there is no problem once the Born rule is applied.

The question is: who determines when a measurement is made, i.e., who determines when the Born rule is applied?
 
  • #187
stevendaryl said:
But you don't choose a basis, you construct a measurement device. In what sense does a measurement device choose a basis? Only in the sense that the measurement device amplifies microscopic differences in one basis so that they become macroscopic differences. The treatment of macroscopic differences is completely unlike the treatment of microscopic differences in standard quantum mechanics. At the microscopic level, an electron can be in a superposition of spin-up and spin-down. But if we have a spin measurement, the result of which is a pointer that points to the word "Up" for spin-up and "Down" for spin-down, then we don't consider superpositions of those possibilities, we get one or the other.
A measurement device chooses the basis because it measures the observable it is constructed for. Of course, to explain any real-world measurement device in all microscopic detail with quantum mechanics (or even relativistic quantum field theory) is impossible, and it is obviously not necessary for constructing very accurate measurement devices like the big detectors at the LHC, photon detectors in quantum-optics labs, etc.
 
  • #188
atyy said:
No, once you apply the Born rule, you already transition into definite outcomes. Each outcome is definite after you get it, but for identically prepared systems the definite outcomes are distributed according to the Born rule.

So it is not correct to solve the problem by coarse graining after the Born rule is applied, since there is no problem once the Born rule is applied.

The question is: who determines when a measurement is made, i.e., who determines when the Born rule is applied?
I think I'm still not able to make this very simple argument clear. Let's try it on the paradigmatic example of measuring spin with the Stern-Gerlach experiment (in the non-relativistic approximation).

You shoot (an ensemble of) single particles through an inhomogeneous magnetic field with a large static component in the ##z##-direction. According to the quantum-theoretical calculation with the Pauli equation (the Schrödinger equation for a spin-1/2 particle with a magnetic moment) you get a position-spin entangled state where particles in one region are (almost) 100% in the spin state with ##\sigma_z=+1/2## and those in another, macroscopically well separated region are in the state with ##\sigma_z=-1/2##. Depending on the initial state (let's assume for simplicity an unpolarized source of spin-1/2 particles, as in Stern's and Gerlach's original experiment, where they used a little oven with silver vapour), the particle is deflected with some probability (in our case 1/2) in one or the other direction. So you measure with this probability ##\sigma_z=+1/2## and with the complementary probability ##\sigma_z=-1/2##.

The measurement process itself in this case consists of putting up a scintillator or CCD screen, where the particles leave a macroscopic trace to be analyzed (in the case of the original experiment, sent around the world on a now-famous postcard).

Where is the measurement problem here? Of course, to describe in all microscopic detail the chemistry leading to a coloured grain on the photoplate is very difficult, but it's not needed FAPP to understand the outcome of the experiment and to measure the spin component of your spin-1/2 particle in this setup. So there is FAPP no measurement problem.
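Schematically (assuming, for simplicity, a pure ##x##-polarized spin state entering the magnet rather than the truly unpolarized mixture), the position-spin entangled state behind the beam splitting has the form
$$ |\Psi\rangle \approx \tfrac{1}{\sqrt{2}}\Big(|\sigma_z={+}\tfrac12\rangle\otimes|\chi_+\rangle+|\sigma_z={-}\tfrac12\rangle\otimes|\chi_-\rangle\Big), $$
where ##|\chi_\pm\rangle## are spatial wave packets deflected into the two macroscopically separated regions; Born's rule then gives probability ##1/2## for a detection in either region.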
 
  • #189
vanhees71 said:
A measurement device chooses the basis because it measures the observable it is constructed for. Of course, to explain any real-world measurement device in all microscopic detail with quantum mechanics (or even relativistic quantum field theory) is impossible, and it is obviously not necessary for constructing very accurate measurement devices like the big detectors at the LHC, photon detectors in quantum-optics labs, etc.

Yeah, it's difficult to give a complete quantum mechanical description of a macroscopic object, but we don't need a complete description to know that you're not going to get definite results from a theory that predicts smooth unitary evolution.
 
  • #190
vanhees71 said:
Where is the measurement problem here?

The measurement problem is to explain how we get definite results for a macroscopic system, instead of smooth evolution of probability amplitudes. You can say that it's because of the enormous number of details involved in a realistic measurement, but I don't see how the number of particles involved can make a difference. Whether you have one particle or two or [itex]10^{10^{10}}[/itex], if quantum mechanics applies, then the evolution will be smooth and unitary.
 
  • #191
vanhees71 said:
I think I'm still not able to make this very simple argument clear. Let's try it on the paradigmatic example of measuring spin with the Stern-Gerlach experiment (in the non-relativistic approximation).

You shoot (an ensemble of) single particles through an inhomogeneous magnetic field with a large static component in the ##z##-direction. According to the quantum-theoretical calculation with the Pauli equation (the Schrödinger equation for a spin-1/2 particle with a magnetic moment) you get a position-spin entangled state where particles in one region are (almost) 100% in the spin state with ##\sigma_z=+1/2## and those in another, macroscopically well separated region are in the state with ##\sigma_z=-1/2##. Depending on the initial state (let's assume for simplicity an unpolarized source of spin-1/2 particles, as in Stern's and Gerlach's original experiment, where they used a little oven with silver vapour), the particle is deflected with some probability (in our case 1/2) in one or the other direction. So you measure with this probability ##\sigma_z=+1/2## and with the complementary probability ##\sigma_z=-1/2##.

The measurement process itself in this case consists of putting up a scintillator or CCD screen, where the particles leave a macroscopic trace to be analyzed (in the case of the original experiment, sent around the world on a now-famous postcard).

Where is the measurement problem here? Of course, to describe in all microscopic detail the chemistry leading to a coloured grain on the photoplate is very difficult, but it's not needed FAPP to understand the outcome of the experiment and to measure the spin component of your spin-1/2 particle in this setup. So there is FAPP no measurement problem.

Mathematically, the state space of quantum mechanics is not a simplex. In the Ensemble interpretation, this means that an ensemble does not have a unique division into sub-ensembles. This lack of uniqueness is the lack of a definite reality.

In contrast, the state space of a classical probability theory is a simplex. In the Ensemble interpretation, this means that an ensemble has a unique division into sub-ensembles. This means we can say there is a definite reality of which we are ignorant.

http://arxiv.org/abs/1112.2347
"The simplex is the only convex set which is such that a given point can be written as a mixture of pure states in one and only one way."
 
  • #192
vanhees71 said:
I think I'm still not able to make this very simple argument clear. Let's try it on the paradigmatic example of measuring spin with the Stern-Gerlach experiment (in the non-relativistic approximation).

You shoot (an ensemble of) single particles through an inhomogeneous magnetic field with a large static component in the ##z##-direction. According to the quantum-theoretical calculation with the Pauli equation (the Schrödinger equation for a spin-1/2 particle with a magnetic moment) you get a position-spin entangled state where particles in one region are (almost) 100% in the spin state with ##\sigma_z=+1/2## and those in another, macroscopically well separated region are in the state with ##\sigma_z=-1/2##. Depending on the initial state (let's assume for simplicity an unpolarized source of spin-1/2 particles, as in Stern's and Gerlach's original experiment, where they used a little oven with silver vapour), the particle is deflected with some probability (in our case 1/2) in one or the other direction. So you measure with this probability ##\sigma_z=+1/2## and with the complementary probability ##\sigma_z=-1/2##.

The measurement process itself in this case consists of putting up a scintillator or CCD screen, where the particles leave a macroscopic trace to be analyzed (in the case of the original experiment, sent around the world on a now-famous postcard).

Where is the measurement problem here? Of course, to describe in all microscopic detail the chemistry leading to a coloured grain on the photoplate is very difficult, but it's not needed FAPP to understand the outcome of the experiment and to measure the spin component of your spin-1/2 particle in this setup. So there is FAPP no measurement problem.
What quantum mechanics (without collapse) predicts is that *every time* you get half a silver atom in the first direction and another half of a silver atom in the second direction. That is the measurement problem.
Best.
Jim Graber
 
  • #193
atyy said:
This lack of uniqueness is the lack of a definite reality.
Only if one thinks that the pure state is the definite reality. But this is an untestable assumption.
 
  • #194
A. Neumaier said:
Only if one thinks that the pure state is the definite reality. But this is an untestable assumption.

But nonetheless, there are times when one has to define sub-ensembles, for example when one performs a second measurement conditioned on the result of the first. The conditioning is done on a sub-ensemble.
 
  • #195
atyy said:
But nonetheless, there are times when one has to define sub-ensembles, for example when one performs a second measurement conditioned on the result of the first. The conditioning is done on a sub-ensemble.
That's why it is far more natural to regard the mixed state as the definite reality. Decomposing it into pure states is physically meaningless. This is why I formulated my postulates for the formal core of quantum mechanics without reference to wave functions. It is completely natural, and (as demonstrated there) one can recover pure states as a special case if desired.
 
  • #196
stevendaryl said:
The measurement problem is to explain how we get definite results for a macroscopic system, instead of smooth evolution of probability amplitudes. You can say that it's because of the enormous number of details involved in a realistic measurement, but I don't see how the number of particles involved can make a difference. Whether you have one particle or two or [itex]10^{10^{10}}[/itex], if quantum mechanics applies, then the evolution will be smooth and unitary.
We don't get "definite results" on the microscopic level but on the macroscopic level. The average values of a pointer variable have a small standard deviation relative to the macroscopically relevant accuracy. The measurement of an observable of a quantum system like a particle is due to the interaction of this system with a macroscopic apparatus, leading to entanglement between the measured observable and the pointer variable, which is a coarse-grained quantity, i.e., one averaged over many microscopic states. The art is to "amplify" the quantum observable through this interaction sufficiently that the macroscopic resolution of the pointer reading allows one to infer the value of the measured observable of the quantum system. This works in practice, and thus there is no measurement problem from a physics point of view.
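The scaling behind "small standard deviation" is the generic one (a rough estimate, added for illustration): for a pointer variable built as an average over ##N## microscopic contributions,
$$ X=\frac{1}{N}\sum_{i=1}^{N}x_i, \qquad \Delta X\sim\frac{\sigma}{\sqrt{N}}, $$
so for ##N\sim 10^{20}## or more the quantum fluctuations of the pointer reading lie far below any macroscopically relevant resolution.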
 
  • #197
vanhees71 said:
We don't get "definite results" on the microscopic level but on the macroscopic level. The average values of a pointer variable have a small standard deviation relative to the macroscopically relevant accuracy. The measurement of an observable of a quantum system like a particle is due to the interaction of this system with a macroscopic apparatus, leading to entanglement between the measured observable and the pointer variable, which is a coarse-grained quantity, i.e., one averaged over many microscopic states. The art is to "amplify" the quantum observable through this interaction sufficiently that the macroscopic resolution of the pointer reading allows one to infer the value of the measured observable of the quantum system. This works in practice, and thus there is no measurement problem from a physics point of view.

All you are doing is replacing the "classical/quantum cut" with the "macroscopic/microscopic cut".
 
  • #198
vanhees71 said:
We don't get "definite results" on the microscopic level but on the macroscopic level

Yes, that's what I said was the essence of the measurement problem.

vanhees71 said:
The average values of a pointer variable have a small standard deviation relative to the macroscopically relevant accuracy. The measurement of an observable of a quantum system like a particle is due to the interaction of this system with a macroscopic apparatus, leading to entanglement between the measured observable and the pointer variable, which is a coarse-grained quantity, i.e., one averaged over many microscopic states. The art is to "amplify" the quantum observable through this interaction sufficiently that the macroscopic resolution of the pointer reading allows one to infer the value of the measured observable of the quantum system. This works in practice, and thus there is no measurement problem from a physics point of view.

Hmm. It seems to me that you've said exactly what the measurement problem is. If you have a system that is in a superposition of two states, and you amplify it so that the differences become macroscopic, why doesn't that lead to a macroscopic system in a superposition of two states? Why aren't there macroscopic superpositions?

It seems to me that there are only two possible answers:
  1. There are no macroscopic superpositions. In that case, the problem would be how to explain why not.
  2. There are macroscopic superpositions. In that case, the problem would be to explain why they're unobservable, and what the meaning of Born probabilities is if no choice is made among the possibilities.
People sometimes act as if decoherence is the answer, but it's really not the complete answer. Decoherence is a mechanism by which a superposition involving a small subsystem can quickly spread to "infect" the rest of the universe. It does not solve the problem of why there are definite outcomes.
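For reference, the standard statement of what decoherence does deliver (textbook notation, added for illustration): after the system-apparatus-environment interaction the global state and the reduced state of the subsystem are
$$ |\Psi\rangle=\alpha\,|\Psi_1\rangle|E_1\rangle+\beta\,|\Psi_2\rangle|E_2\rangle, \qquad \rho_S=\mathrm{Tr}_E\,|\Psi\rangle\langle\Psi|\;\longrightarrow\;|\alpha|^2|\Psi_1\rangle\langle\Psi_1|+|\beta|^2|\Psi_2\rangle\langle\Psi_2| \quad\text{as}\quad\langle E_1|E_2\rangle\to 0, $$
i.e. interference between the branches becomes unobservable within the subsystem, while the global state remains a superposition containing both.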
 
  • #199
atyy said:
All you are doing is replacing the "classical/quantum cut" with the "macroscopic/microscopic cut".

It's the same cut. The cut that is important is such that on one side, you have superpositions of possibilities, evolving smoothly according to Schrödinger's equation. On the other side, you have definite properties: cats are either alive or dead, not in superpositions.
 
  • #200
jimgraber said:
What quantum mechanics (without collapse) predicts is that *every time* you get half a silver atom in the first direction and another half of a silver atom in the second direction. That is the measurement problem.
Best.
Jim Graber
No, it predicts that, repeating the experiment very often, I always measure one whole silver atom, which in half of all cases is deflected in the first direction and in the other half in the second.
 
  • #201
atyy said:
All you are doing is replacing the "classical/quantum cut" with the "macroscopic/microscopic cut".
Yep, but contrary to the former the latter makes physical sense!
 
  • #202
stevendaryl said:
It's the same cut.
It's not. In some cases (superfluids, superconductors, laser beams) macroscopic objects can behave quantum mechanically, in the sense of having macroscopic quantum coherence.
 
  • #203
stevendaryl said:
Yes, that's what I said was the essence of the measurement problem.
Hmm. It seems to me that you've said exactly what the measurement problem is. If you have a system that is in a superposition of two states, and you amplify it so that the differences become macroscopic, why doesn't that lead to a macroscopic system in a superposition of two states? Why aren't there macroscopic superpositions?

It seems to me that there are only two possible answers:
  1. There are no macroscopic superpositions. In that case, the problem would be how to explain why not.
  2. There are macroscopic superpositions. In that case, the problem would be to explain why they're unobservable, and what the meaning of Born probabilities is if no choice is made among the possibilities.
People sometimes act as if decoherence is the answer, but it's really not the complete answer. Decoherence is a mechanism by which a superposition involving a small subsystem can quickly spread to "infect" the rest of the universe. It does not solve the problem of why there are definite outcomes.
Sure, coarse-graining and decoherence are the answer. What else do you need to understand why macroscopic objects are well described by classical physics? Note that this is a very different interpretation from the quantum-classical cut (imho erroneously) postulated in Bohr's version of the Copenhagen interpretation.

Note again that there are no exactly definite outcomes but only approximately definite outcomes for the coarse-grained macroscopic quantities.
 
  • #204
vanhees71 said:
Yep, but contrary to the former the latter makes physical sense!
Are you sure? See my post #202 above!
 
  • #205
Demystifier said:
Are you sure? See my post #202 above!
That's an important point, but not one against my interpretation. On the contrary, it shows that there is no general "quantum-classical cut". Superfluidity and superconductivity are nice examples showing that you have to be careful to take all relevant macroscopic observables into account, i.e., you shouldn't somehow "coarse-grain away" relevant quantum effects.
 
  • #206
vanhees71 said:
That's an important point, but not one against my interpretation. On the contrary, it shows that there is no general "quantum-classical cut". Superfluidity and superconductivity are nice examples showing that you have to be careful to take all relevant macroscopic observables into account, i.e., you shouldn't somehow "coarse-grain away" relevant quantum effects.
So how do we know in general where to put the micro/macro cut? The size of the system is obviously not a good criterion. Would you agree that the best criterion is the nonexistence/existence of substantial decoherence? If so, should we rather call it the coherence/decoherence cut?
 
  • #207
That's the art of modeling. Theoretical physics is a very creative endeavor, and for the description of superconductivity and superfluidity some Nobel prizes were rightfully awarded!
 
  • #208
vanhees71 said:
Sure, coarse-graining and decoherence are the answer.

That seems completely wrong. If you have:
  • A microscopic subsystem in state [itex]|\psi_1\rangle[/itex] will lead to macroscopic detector state [itex]|\Psi_1\rangle[/itex]
  • A microscopic subsystem in state [itex]|\psi_2\rangle[/itex] will lead to macroscopic detector state [itex]|\Psi_2\rangle[/itex]
then I would think that it would follow from the Rules of Quantum Mechanics that:
  • A microscopic subsystem in a superposition of [itex]|\psi_1\rangle[/itex] and [itex]|\psi_2\rangle[/itex] would lead to a macroscopic detector in a superposition of [itex]|\Psi_1\rangle[/itex] and [itex]|\Psi_2\rangle[/itex]
Decoherence and coarse-graining do not change this fact. If there is a nonzero amplitude for [itex]|\Psi_2\rangle[/itex], it's not going to go to zero through coarse-graining.

What decoherence and coarse-graining do for you is give a mechanism for converting a pure-state density matrix into an effective mixed-state density matrix. A mixed-state density matrix can be given an "ignorance" interpretation for the probabilities. So some people say that once you've got an effective mixed state, you can act as if you have definite outcomes, but you just don't know which.

But in such a case, you KNOW that the mixed state is not due to ignorance. So acting as if the mixed state arose from ignorance is lying to yourself. So the "decoherence" approach to solving the measurement problem basically amounts to: If we pretend to believe things that we know are false, then our problems go away. Okay, I can see that, from a pragmatic point of view. But from a pragmatic point of view, "measurement collapses the wave function" is perfectly fine. Or "consciousness collapses the wave function". The only reason for not assuming those things is because you suspect they are false. So is "decoherence solves the measurement problem".
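As a minimal numerical sketch of that last point (a single qubit coupled to a two-level stand-in for the environment, plain NumPy): the reduced density matrix of the subsystem loses its off-diagonal terms once the environment states become orthogonal, while the global state remains one pure superposition the whole time.

[code=python]
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Global pure state after the measurement-like interaction:
# alpha |Psi_1>|E_1> + beta |Psi_2>|E_2>, with orthogonal "environment" states.
psi = alpha * np.kron(ket0, ket0) + beta * np.kron(ket1, ket1)
rho_global = np.outer(psi, psi.conj())

# The global state is still pure (one superposition): Tr(rho^2) = 1.
print("purity of global state:", np.trace(rho_global @ rho_global).real)

# Partial trace over the environment gives the reduced state of the subsystem.
r = rho_global.reshape(2, 2, 2, 2)            # indices (s, e, s', e')
rho_S = np.einsum('iaja->ij', r)

# The off-diagonals are gone: rho_S looks like an ignorance mixture,
# even though no branch has been selected in the global state.
print("reduced density matrix:\n", rho_S)     # diag(0.5, 0.5)
print("purity of reduced state:", np.trace(rho_S @ rho_S).real)   # 0.5
[/code]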
 
  • #209
Since when are macroscopic coarse-grained observables described by a state vector or a density matrix? It's an effective classical description of averages.
 
  • #210
Demystifier said:
So how do we know in general where to put the micro/macro cut? The size of the system is obviously not a good criterion. Would you agree that the best criterion is the nonexistence/existence of substantial decoherence? If so, should we rather call it the coherence/decoherence cut?

I think that's right.

For practical purposes, the issue is whether there is a well-defined (pure) state of a subsystem. If there is, then you can treat it quantum-mechanically, and have superpositions, unitary evolution, etc. After decoherence, the subsystem no longer has a well-defined state. Perhaps a larger system still does, but not the subsystem. So for practical purposes, in studying a subsystem, we can treat it quantum-mechanically as long as it has a well-defined state, and afterwards, we can treat it using mixed states, and pretend that the mixed states are due to ignorance.
 
