Exploring the Connection Between Quantum Mechanics and Quantum Field Theory

In summary: it applies to the blobs but, as far as I know, is not used later - at least I haven't seen it. One can almost certainly find a use for it - it's just that at my level of QFT I haven't seen it. Others who know more may be able to comment. BTW, the link I gave that proved Gleason's theorem showed it's not really an axiom but rather a consequence of non-contextuality - but that is also a whole new...
  • #211
vanhees71 said:
Since when are macroscopic coarse-grained observables described by a state vector or a density matrix? It's an effective classical description of averages.

I'm saying that IF you wanted to treat a macroscopic system using quantum mechanics, one would have to use density matrices. You can certainly just pretend that you have a classical system. That's the sense in which the measurement problem is solved: there is a way to pretend that it is solved.
 
  • #212
vanhees71 said:
Since when are macroscopic coarse-grained observables described by a state vector or a density matrix? It's an effective classical description of averages.
stevendaryl said:
I'm saying that IF you wanted to treat a macroscopic system using quantum mechanics, one would have to use density matrices. You can certainly just pretend that you have a classical system. That's the sense in which the measurement problem is solved: there is a way to pretend that it is solved.
A. Neumaier said:
As I had said before, people working in statistical mechanics do not use the eigenvalue-eigenstate link to measurement but the postulates that I had formulated (though they are not explicit about these). This is enough to get a unique macroscopic measurement result (within experimental error).
The latter supports the position of vanhees71 without having to resolve anything about superpositions or ignorance. No pretense is involved.
 
  • #213
A. Neumaier said:
The latter supports the position of vanhees71 without having to resolve anything about superpositions or ignorance. No pretense is involved.

I don't agree. Treating quantum uncertainty as if it were thermal noise is pretense.
 
  • #214
stevendaryl said:
I don't agree. Treating quantum uncertainty as if it were thermal noise is pretense.

As I said in another post: suppose we have a setup such that:
  • An electron with spin up will trigger a detector to go into one "pointer state", called "UP".
  • An electron with spin down will trigger a detector to go into a macroscopically different pointer state, called "DOWN".
Then the standard quantum "recipe" tells us:
  • An electron in the state [itex]\alpha |up\rangle + \beta |down\rangle[/itex] will cause the detector to either go into state "UP" with probability [itex]|\alpha|^2[/itex] or into state "DOWN" with probability [itex]|\beta|^2[/itex]
If you claim that this conclusion follows from pure unitary evolution of the wave function, I think you're fooling yourself. But if it doesn't follow from unitary evolution, then it seems to me that you're proposing an extra process in quantum mechanics, whereby a definite result is selected out of a number of possibilities according to the Born rule. That's fine: there is no reason to assume that there is only one kind of process in nature. But if you're proposing this extra process, then to me, you have a measurement problem. Why does this process apply to large, macroscopic systems, but not to small systems such as single electrons or single atoms?
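To make the tension concrete, here is the standard linearity argument in symbols (an idealized two-state detector with an assumed neutral initial state ##|\text{ready}\rangle##; a sketch, not a realistic model). If the unitary evolution ##U## correctly handles the definite inputs,
$$U\,|\text{up}\rangle\otimes|\text{ready}\rangle = |\text{up}\rangle\otimes|\text{UP}\rangle, \qquad U\,|\text{down}\rangle\otimes|\text{ready}\rangle = |\text{down}\rangle\otimes|\text{DOWN}\rangle,$$
then by linearity the superposed input must evolve into an entangled superposition of outcomes, not into one definite outcome:
$$U\,\big(\alpha|\text{up}\rangle + \beta|\text{down}\rangle\big)\otimes|\text{ready}\rangle = \alpha\,|\text{up}\rangle\otimes|\text{UP}\rangle + \beta\,|\text{down}\rangle\otimes|\text{DOWN}\rangle.$$
Selecting "UP" with probability ##|\alpha|^2## is then an extra, non-unitary step.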
 
  • #215
stevendaryl said:
As I said in another post: suppose we have a setup such that:
  • An electron with spin up will trigger a detector to go into one "pointer state", called "UP".
  • An electron with spin down will trigger a detector to go into a macroscopically different pointer state, called "DOWN".
Then the standard quantum "recipe" tells us:
  • An electron in the state [itex]\alpha |up\rangle + \beta |down\rangle[/itex] will cause the detector to either go into state "UP" with probability [itex]|\alpha|^2[/itex] or into state "DOWN" with probability [itex]|\beta|^2[/itex]
If you claim that this conclusion follows from pure unitary evolution of the wave function, I think you're fooling yourself. But if it doesn't follow from unitary evolution, then it seems to me that you're proposing an extra process in quantum mechanics, whereby a definite result is selected out of a number of possibilities according to the Born rule. That's fine: there is no reason to assume that there is only one kind of process in nature. But if you're proposing this extra process, then to me, you have a measurement problem. Why does this process apply to large, macroscopic systems, but not to small systems such as single electrons or single atoms?
This doesn't follow from anything but is a fundamental postulate, called Born's rule. Weinberg gives quite convincing arguments that it cannot be derived from the other postulates. So it's part of the "axiomatic setup" of the theory. In this sense there is no problem, because in physics the basic postulates are subject to empirical testing anyway and cannot be justified otherwise than by their empirical success!
 
  • #216
stevendaryl said:
As I said in another post: suppose we have a setup such that:
  • An electron with spin up will trigger a detector to go into one "pointer state", called "UP".
  • An electron with spin down will trigger a detector to go into a macroscopically different pointer state, called "DOWN".
There are no pointer states called UP or DOWN. The pointer is a macroscopic object, and the measurement result is that some macroscopic expectation (of the mass density of the pointer) is large in a neighborhood of the mark called UP and zero in a neighborhood of the mark called DOWN, or conversely. To model this by a quantum state UP amounts to blinding oneself to macroscopic reality. In terms of quantum mechanics, there is an astronomical number of microstates (of the size of the minimal uncertainty) that make up either UP or DOWN, and even more that make up neither UP nor DOWN (since the pointer moves continuously and takes time to make the measurement). It is no surprise that reducing this realistic situation to a simple black-and-white situation with only two quantum states leads to interpretation problems. This is due to the oversimplification of the measurement process. To paraphrase Einstein: everything should be modeled as simply as possible, but not simpler.
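To put the point in a formula (notation added here for concreteness, not from the post): on this reading the measurement result is a coarse-grained expectation such as
$$\bar m(\Omega_{\text{UP}}) = \mathrm{Tr}\Big(\rho \int_{\Omega_{\text{UP}}} \hat m(x)\,d^3x\Big),$$
where ##\hat m(x)## is the mass density of the pointer and ##\Omega_{\text{UP}}## a neighborhood of the UP mark; "the pointer shows UP" means this number is large while the corresponding integral over ##\Omega_{\text{DOWN}}## is negligible, and astronomically many microstates realize the same pair of values.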
 
  • #217
A. Neumaier said:
there are no pointer states called UP or DOWN.

Putting it in bold face doesn't make it more true. In a Stern-Gerlach type experiment, an electron is either deflected upward, where it collides with a photographic plate, making a dark spot on the upper plate, or deflected downward, where it collides with a photographic plate, making a dark spot on the lower plate. So I'm using the word "UP" to mean "there is a dark spot on the upper plate" and the word "DOWN" to mean "there is a dark spot on the lower plate".

So I don't know what you mean by saying that there is no such thing as "UP" or "DOWN". We may not be able to give a complete description of these states as mixed or pure states in a Hilbert space, but empirically they are possible states of the detector.
 
  • #218
vanhees71 said:
This doesn't follow from anything but is a fundamental postulate, called Born's rule. Weinberg gives quite convincing arguments that it cannot be derived from the other postulates. So it's part of the "axiomatic setup" of the theory. In this sense there is no problem, because in physics the basic postulates are subject to empirical testing anyway and cannot be justified otherwise than by their empirical success!

That's the measurement problem! You have two different processes: (1) Smooth unitary evolution, and (2) selection of one outcome out of a set of possible outcomes. The latter process only applies to macroscopic systems, not microscopic systems. Why?
 
  • #219
stevendaryl said:
Putting it in bold face doesn't make it more true.
It is not intended to do that, but:
Physics Forums Global Guidelines said:
When replying in an existing topic it is fine to use CAPS or bold to highlight main points.
stevendaryl said:
In a Stern-Gerlach type experiment, an electron is either deflected upward, where it collides with a photographic plate, making a dark spot on the upper plate, or deflected downward, where it collides with a photographic plate, making a dark spot on the lower plate. So I'm using the word "UP" to mean "there is a dark spot on the upper plate" and the word "DOWN" to mean "there is a dark spot on the lower plate".
But this is not a pointer state but the electron state. "there is a dark spot on the upper plate" is a large collection of possible microstates!

Before reaching the screen, the electron is in the superposition you describe, and the system of electron plus detector is in a state described by a tensor product of a pure state and a density matrix for the screen. This system undergoes (because of decoherence through the rest of the universe) a dissipative, stochastic dynamics that results in a new state described by a density matrix of the combined system of electron plus detector, in which the expectation of the integral of some field density over one of the two screen spots at the end of the electron beams changes in a macroscopically visible way. We observe this change of the expectation and say ''the electron collapsed to state 'up' or 'down','' depending on which spot changed macroscopically.
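As a toy numerical illustration of the dissipative loss of coherence described above (a minimal sketch with made-up amplitudes and decoherence time, added for concreteness; in no way a model of an actual screen), the off-diagonal terms of a two-level density matrix are damped exponentially while the populations stay fixed:

[CODE=python]
import numpy as np

# Toy dephasing: the coherences (off-diagonal terms) of a pure superposition's
# density matrix decay exponentially, leaving an effectively classical mixture.
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)    # amplitudes, |alpha|^2 + |beta|^2 = 1
psi = np.array([alpha, beta])
rho0 = np.outer(psi, psi.conj())            # pure-state density matrix

tau = 1e-12                                 # assumed decoherence time (seconds)
for t in [0.0, tau, 10 * tau, 100 * tau]:
    damp = np.exp(-t / tau)                 # damping acts on coherences only
    rho_t = rho0 * np.array([[1.0, damp],
                             [damp, 1.0]])
    print(f"t = {t:.0e} s: coherence = {abs(rho_t[0, 1]):.3e}, "
          f"populations = {rho_t[0, 0]:.2f}, {rho_t[1, 1]:.2f}")
[/CODE]

Note that the populations remain ##|\alpha|^2## and ##|\beta|^2## throughout: decoherence produces the mixture, but by itself it does not select one spot, which is exactly the point under dispute in this thread.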
 
  • #220
The fundamental postulates of a theory are a problem insofar as you have to test them empirically. So you have to invent experiments to do stringent tests on them, which has a theoretical and an experimental (engineering) part. The theorist can invent conceptual experiments which then have to be realized by experimentalists who build up the machinery to do these experiments. Whether a mathematical scheme is relevant for physics is decided by how far it is possible to carry such experiments through to their practical realization. That's it. Once this is achieved for a sufficiently large range of situations, you call the model a physical theory, and I'd say that's been achieved to an amazing degree for QT, which is not only the most comprehensive theory developed so far but also the most stringently tested one.

Now you may have your philosophical and metaphysical headaches, but that's not part of physics anymore. It's philosophy or religion. This is legitimate, but you must not expect satisfactory answers from physics, which is not about explaining the world but describing it. At this point in the development of physics, from a physics point of view we have to live with the observation that quantum theory, including Born's rule of probabilistic interpretation of the quantum state, describes nature quite comprehensively. As long as one doesn't find an even better theory, there won't be solutions for your philosophical quibbles!
 
  • #221
A. Neumaier said:
It is not intended to do that, but:

But this is not a pointer state but the electron state. "there is a dark spot on the upper plate" is a large collection of possible microstates!

Before reaching the screen, the electron is in the superposition you describe, and the system of electron plus detector is in a state described by a tensor product of a pure state and a density matrix for the screen. This system undergoes (because of decoherence through the rest of the universe) a dissipative, stochastic dynamics that results in a new state described by a density matrix of the combined system of electron plus detector, in which the expectation of the integral of some field density over one of the two screen spots at the end of the electron beams changes in a macroscopically visible way. We observe this change of the expectation and say ''the electron collapsed to state 'up' or 'down','' depending on which spot changed macroscopically.
I agree with everything you said, except for the last sentence. Nothing collapsed here. You just get a FAPP (for all practical purposes) irreversible result due to the dissipative process resulting in a macroscopic mark of the electron on the photoplate. Usually, it's impossible to say anything definitive about the fate of the poor electron hitting the plate, because it's absorbed. You cannot say that it is described by the state ##|\text{up} \rangle## when hitting a place in the "up region".
 
  • #222
vanhees71 said:
I agree with everything you said, except for the last sentence. Nothing collapsed here. You just get a FAPP irreversible result due to the dissipative process resulting in a macroscopic mark of the electron on the photoplate. Usually, it's impossible to say anything definitive about the fate of the poor electron hitting the plate, because it's absorbed.
I said we say "collapsed", and with ''we'' I describe current practice - one can find this phrase in many places. Of course, the collapse is not needed on the level of the many-particle description but only in the approximate reduced description. What one can say depends on the nature of the screen. If it is a bubble chamber, one can see a track traced out by the electron. If it is a photographic plate, it will probably be part of a bound state of the detector.
 
  • #223
vanhees71 said:
You cannot say that it is described by the state |up⟩ when hitting a place in the "up region".
Yes, I agree. One shouldn't use this formulation, though it is used a lot.
 
  • #224
vanhees71 said:
Sure, coarse-graining and decoherence is the answer. What else do you need to understand why macroscopic objects are well described by classical physics? Note that this is a very different interpretation from the quantum-classical cut (imho erroneously) postulated in Bohr's version of the Copenhagen interpretation.

Note again that there are no definite outcomes but only approximately definite outcomes for the coarse-grained macroscopic quantities.

But you need to introduce one more postulate to decide what to coarse-grain, i.e., where you put the cut to decide what is macroscopic and must be coarse-grained.
 
  • #225
atyy said:
But you need to introduce one more postulate to decide what to coarse-grain, i.e., where you put the cut to decide what is macroscopic and must be coarse-grained.
This needs no postulates. Coarse-graining means removing precisely those features that oscillate too fast in space or time to be relevant for the macroscopic averages. What this is depends on the problem at hand but is an objective property of the microscopic model. And in many cases it is known. Correct coarse-graining is revealed by the fact that the memory kernel decays exponentially and sufficiently fast, which is the case only if exactly the right macroscopic set of variables is retained.
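For readers who want the technical statement behind "memory kernel" (a standard projection-operator sketch, added here; not specific to this thread): in the Nakajima-Zwanzig formalism, the coarse-grained part ##P\rho## of the state obeys an exact equation of the form
$$\frac{\partial}{\partial t}P\rho(t) = PLP\,\rho(t) + \int_0^t K(t-s)\,P\rho(s)\,ds + I(t),$$
with ##L## the Liouvillian, ##K(t)## the memory kernel, and ##I(t)## an inhomogeneity from the initial condition. The criterion above says the coarse-graining is correct when ##K## decays fast on the macroscopic timescale, so the integral reduces to an effectively memoryless (Markovian) term.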
 
  • #226
A. Neumaier said:
This needs no postulates. Coarse-graining means removing precisely those features that oscillate too fast in space or time to be relevant for the macroscopic averages. What this is depends on the problem at hand but is an objective property of the microscopic model. And in many cases it is known. Correct coarse-graining is revealed by the fact that the memory kernel decays exponentially and sufficiently fast, which is the case only if exactly the right macroscopic set of variables is retained.
I like this answer a lot. But what frequency qualifies as 'oscillating too fast'?
 
  • #227
Mentz114 said:
I like this answer a lot. But what frequency qualifies as 'oscillating too fast'?
This depends on the accuracy and generality you demand of your model.

One has the same problem in classical mechanics. Do you describe a pendulum or a spring by a linear or a nonlinear equation? It depends on how big your deviations from the equilibrium state are, and how accurate your predictions should be. Thus this is not a problem with the foundations but with the use of the theory.
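A quick numerical version of this pendulum example (illustrative values of g and L, added for concreteness): the small-angle model predicts an amplitude-independent period, and the exact nonlinear period tells you at which amplitudes that approximation stops being good enough.

[CODE=python]
import numpy as np
from scipy.special import ellipk  # complete elliptic integral of the first kind, K(m)

g, L = 9.81, 1.0                           # illustrative values (SI units)
T_linear = 2 * np.pi * np.sqrt(L / g)      # small-angle period, amplitude-independent

for amp_deg in [1, 10, 30, 60, 90]:
    theta0 = np.radians(amp_deg)
    # Exact nonlinear period: T = 4 sqrt(L/g) K(m) with m = sin^2(theta0 / 2)
    T_exact = 4 * np.sqrt(L / g) * ellipk(np.sin(theta0 / 2) ** 2)
    rel_err = (T_exact - T_linear) / T_exact * 100
    print(f"amplitude {amp_deg:3d} deg: linear period off by {rel_err:5.2f} %")
[/CODE]

Below a few degrees the linear model is essentially exact; at 90 degrees it is off by roughly 15%. The "cut" is fixed by the accuracy you demand, not by an extra postulate.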
 
  • #228
vanhees71 said:
At this point in the development of physics, from a physics point of view we have to live with the observation that quantum theory, including Born's rule of probabilistic interpretation of the quantum state, describes nature quite comprehensively. As long as one doesn't find an even better theory, there won't be solutions for your philosophical quibbles!

That's fine--as long as we're in agreement that there are no solutions within existing theory. That's my only point.
 
  • #229
vanhees71 said:
I agree with everything you said, except for the last sentence. Nothing collapsed here.

Wow. It seems to me that what you're saying is just contrary to fact. There are two possible outcomes of the experiment: Either the upper plate has a black dot, or the lower plate has a black dot. You do the experiment, and only one of those possibilities becomes actual. That's what collapse means.

When you have a small system, involving a small number of particles, the superposition principle holds: If you have two possible states, then the superposition of the two is another possible state. Is there some maximal size for which the superposition principle holds? Is there some maximum number of particles for which it holds? I understand the point that an actual macroscopic outcome, such as a blackened spot on a photographic plate, involves countless numbers of particles, so it is completely impossible for us to describe such a thing using quantum mechanics. But the honest answer is that the problem of how definite results emerge is just unsolved. You don't know. That's fine. But it seems wrong to me to pretend otherwise.
 
  • #230
stevendaryl said:
I understand the point that an actual macroscopic outcome, such as a blackened spot on a photographic plate, involves countless numbers of particles, so it is completely impossible for us to describe such a thing using quantum mechanics. But the honest answer is that the problem of how definite results emerge is just unsolved. You don't know. That's fine. But it seems wrong to me to pretend otherwise.

Standard statistical mechanics implies dissipative deterministic or stochastic classical dynamics for coarse-grained variables in appropriate models. Even though the Stern-Gerlach experiment may not have been treated in this way, there is no doubt that the deterministic (and dissipative) Navier-Stokes equations of classical hydromechanics follow from quantum statistical mechanics in a suitable approximation. This is done completely independently of observers and without any measurement problem, just with an interpretation according to my post #212 rather than the collapse interpretation. Thus one does not have to solve a selection problem to obtain definite results from quantum mechanics.

Combining this knowledge with what we know about how detectors work, it is easy to guess that the total picture is indeed the one painted by vanhees71 and myself, even though the details are a lot more complex than appropriate for PF. Concerning statistical mechanics and measurement, have you read the following papers? (I had mentioned the first of them in an earlier thread.)

Understanding quantum measurement from the solution of dynamical models
Authors: Armen E. Allahverdyan, Roger Balian, Theo M. Nieuwenhuizen
http://arxiv.org/abs/1107.2138

Lectures on dynamical models for quantum measurements
Authors: Theo M. Nieuwenhuizen, Marti Perarnau-Llobet, Roger Balian
http://arxiv.org/abs/1406.5178

There are many more articles that deal with suitable model settings...
 
  • #231
For a fully quantum theoretical description of the Stern-Gerlach experiment, see

http://arxiv.org/abs/quant-ph/0409206

It also shows that you can only come close to an idealized SG experiment as it is discussed in the introductory chapters of many QT books. I think the SG experiment is a great one to treat at several stages of a QT course, being a not-too-complicated example that can be solved almost exactly (although some numerics is necessary, as indicated in the paper) as soon as the full description by the Pauli equation is available.
 
  • #232
vanhees71 said:
For a fully quantum theoretical description of the Stern-Gerlach experiment, see

http://arxiv.org/abs/quant-ph/0409206

It also shows that you can only come close to an idealized SG experiment as it is discussed in the introductory chapters of many QT books. I think the SG experiment is a great one to treat at several stages of a QT course, being a not-too-complicated example that can be solved almost exactly (although some numerics is necessary, as indicated in the paper) as soon as the full description by the Pauli equation is available.
Thank you! I will read it.
 
  • #233
vanhees71 said:
For a fully quantum theoretical description of the Stern-Gerlach experiment, see

http://arxiv.org/abs/quant-ph/0409206
But this only treats what happens during the flight, not what happens when the spinning particles reach the detector - namely that exactly one spot signals the presence of the particle. Thus it is not directly relevant to the problem discussed here.
 
  • #234
Well, at the detector it gets absorbed and leaves a trace there. That's why we use it as a detector. Nobody asks how to measure a trajectory in Newtonian mechanics. So why are you asking how the atom leaves a trace on a photoplate? I guess one could try to do a complicated quantum mechanical evaluation of the chemical reaction of the atom with the molecules in the photoplate, but what has this to do with the quantum theory of the atom in the inhomogeneous magnetic field of the SG apparatus?
 
  • #235
Did any of you read Karl Popper and Mario Bunge? Your discussion is not too far from philosophy, but these two put a lot of maths into their writings.
 
  • #236
A. Neumaier said:
Standard statistical mechanics implies dissipative deterministic or stochastic classical dynamics for coarse-grained variables in appropriate models. Even though the Stern-Gerlach experiment may not have been treated in this way, there is no doubt that the deterministic (and dissipative) Navier-Stokes equations of classical hydromechanics follow from quantum statistical mechanics in a suitable approximation. This is done completely independently of observers and without any measurement problem, just with an interpretation according to my post #212 rather than the collapse interpretation. Thus one does not have to solve a selection problem to obtain definite results from quantum mechanics.

Combining this knowledge with what we know about how detectors work, it is easy to guess that the total picture is indeed the one painted by vanhees71 and myself, even though the details are a lot more complex than appropriate for PF. Concerning statistical mechanics and measurement, have you read the following papers? (I had mentioned the first of them in an earlier thread.)

Understanding quantum measurement from the solution of dynamical models
Authors: Armen E. Allahverdyan, Roger Balian, Theo M. Nieuwenhuizen
http://arxiv.org/abs/1107.2138

Lectures on dynamical models for quantum measurements
Authors: Theo M. Nieuwenhuizen, Marti Perarnau-Llobet, Roger Balian
http://arxiv.org/abs/1406.5178

There are many more articles that deal with suitable model settings...

I have to think more about the work of Allahverdyan, Balian, and Nieuwenhuizen. I came across it several years ago when someone posted it on PF. I think their approach is very interesting and worth studying. However, I think it also shows how inadequate the terrible book of Ballentine's is, and even how handwavy the wonderful book of Peres's is. Neither book comes close to supplying the non-trivial considerations that Allahverdyan and colleagues present regarding the non-uniqueness of sub-ensemble assignment.
 
  • #237
I suspect the introduction of sub-ensembles by Allahverdyan, Balian, and Nieuwenhuizen is the same (in spirit, even if not technically) as introducing Bohmian hidden variables - since the point of the hidden variables is to pick out a unique set of sub-ensembles.

Then the question is whether the dynamics are correct, and also "robust" or "universal" in some sense, since the problem then becomes analogous to classical statistical mechanics. In Ballentine's famous and grossly erroneous 1970 review, he makes the mistake of introducing hidden variables without realizing it, and then proceeds with the wrong dynamics for the hidden variables.

We do know that there are many different realizations of hidden variables in the Bohmian spirit. It would be interesting if there were some sort of "universality argument" that quantum mechanics is the resulting theory for a wide class of hidden variable dynamics and initial conditions, which seems to be what Allahverdyan and colleagues are talking about.
 
  • #238
stevendaryl said:
Wow. It seems to me that what you're saying is just contrary to fact. There are two possible outcomes of the experiment: Either the upper plate has a black dot, or the lower plate has a black dot. You do the experiment, and only one of those possibilities becomes actual. That's what collapse means.

When you have a small system, involving a small number of particles, the superposition principle holds: If you have two possible states, then the superposition of the two is another possible state. Is there some maximal size for which the superposition principle holds? Is there some maximum number of particles for which it holds? I understand the point that an actual macroscopic outcome, such as a blackened spot on a photographic plate, involves countless numbers of particles, so it is completely impossible for us to describe such a thing using quantum mechanics. But the honest answer is that the problem of how definite results emerge is just unsolved. You don't know. That's fine. But it seems wrong to me to pretend otherwise.
Having had a long look at the Nieuwenhuizen et al. (2014) treatment, I find support for the idea that nature has no cutoff or transition between quantum and classical. Quantum mechanics is always in operation - there is only one set of laws. So why do we not see 'cat' states? At what point can we use classical approximations instead of QM?

With continuous properties like position there is no problem, because a baseball can be in a superposition of two position states if the difference between the positions is very small compared to the size of the baseball. How could one ever detect such a thing?

With discrete states the picture is different. If we have a property (operator) with a few possible outcomes, we can reduce this to (say) a binary state by averaging over a few degrees of freedom. But defining live and dead states for a cat requires averaging over millions of degrees of freedom. Adding random phases reduces and eventually destroys interference and the quantum effects (see the sketch below); mathematically, the equations of motion become trivial when the commutator ##\left[\hat{\mathcal{D}},\hat{\mathcal{H}}\right]## approaches zero. At this point there is no change, which predicts that the cat remains forever in its initial state. Since we can only prepare a cat in either state, that is all we can ever see.

I'm sure this is oversimplified and naive but it works for me.
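A minimal numerical illustration of the random-phase mechanism referenced above (a toy calculation assuming independent uniform phases, added for concreteness; nothing cat-specific):

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the visibility of interference between two branches is taken to be
# the magnitude of the average of N independent random phase factors.
# It shrinks like 1/sqrt(N) as more degrees of freedom are averaged over.
for N in [1, 100, 10_000, 1_000_000]:
    phases = rng.uniform(0.0, 2 * np.pi, size=N)
    visibility = abs(np.exp(1j * phases).mean())
    print(f"N = {N:>9,d}: residual interference ~ {visibility:.2e}")
[/CODE]

With a macroscopic number of degrees of freedom, the residual interference term is unobservably small - the sense in which the cross terms, and with them any practical difference from a classical mixture, disappear.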
 
  • #239
atyy said:
I have to think more about the work of Allahverdyan, Balian, and Nieuwenhuizen. I came across it several years ago when someone posted it on PF. I think their approach is very interesting and worth studying. However, I think it also shows how inadequate the terrible book of Ballentine's is, and even how handwavy the wonderful book of Peres's is. Neither book comes close to supplying the non-trivial considerations that Allahverdyan and colleagues present
All wonderful books - those by Dirac, von Neumann, Messiah, Landau and Lifshitz, Ballentine, Peres, etc. - are inadequate, terrible and handwavy in this respect! Peres is still the best of them all regarding foundations, and presents, carefully avoiding collapse, the ensemble interpretation with measurement in terms of POVMs instead of eigenvalues and eigenstates.

For me, the real message of the Allahverdyan et al. paper - and the fact that it is 160 pages long! - is that foundations should be completely free of measurement issues, since the latter can be treated fully adequately only by fairly complex statistical mechanics. This is why I recommend alternative foundations based upon the postulates (EX) and (SM) that I had formulated. They apply to measuring both macroscopic variables (as expectations with error bars) and pure eigenstates of an operator ##A## with eigenvalue ##\alpha## (where ##\bar A=\alpha## and ##\sigma_A=0##), capture quantum mechanical practice far better, and are much easier to state than Born's rule, especially if one compares them with the complicated form of Born's rule needed in applications. Born's rule is derivable from these postulates in the special cases where it fully applies. See Section 10.5 of http://arxiv.org/abs/0810.1019.
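For reference, a compact paraphrase of the expectation-based reading (wording added here; see the linked Section 10.5 for the actual statements of (EX) and (SM)): measuring a quantity ##A## in the state ##\rho## yields, within the intrinsic uncertainty, the value
$$\bar A = \langle A\rangle = \mathrm{Tr}(\rho A), \qquad \sigma_A = \sqrt{\langle A^2\rangle - \langle A\rangle^2}.$$
For a pure eigenstate ##\rho = |a\rangle\langle a|## with ##A|a\rangle = \alpha|a\rangle##, this gives ##\bar A = \alpha## and ##\sigma_A = 0##, reproducing the sharp eigenvalue case mentioned above.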
 
  • #240
vanhees71 said:
Well, at the detector it gets absorbed and leaves a trace there. That's why we use it as a detector. Nobody asks how to measure a trajectory in Newtonian mechanics. So why are you asking, how the atom leaves a trace on a photoplate? I guess, one could try to do a complicated quantum mechanical evaluation of the chemical reaction of the atom with the molecules in the photoplate, but what has this to do with the quantum theory of the atom in the inhomogeneous magnetic field of the SG apparatus?
The measurement problem appears here in the form that if we place the screen only at the left part of the beam and shoot single electrons from the source, then the right part of the beam (which continues to exist at later times) contains the electron precisely when nothing is measured in the left part. This needs explanation, and is not covered by the analysis of the Stern-Gerlach setting without screen interaction.

Given the superposition ##|\text{left beam, up}\rangle+|\text{right beam, down}\rangle## created by the magnet (as described in the paper you cited), the selection problem is how to ensure that, rather than ending up with the superposition
##|\text{left event}\rangle\otimes|\text{right beam, empty}\rangle+|\text{no left event}\rangle\otimes|\text{right beam, down}\rangle##,
we find exactly one of the two cases: ##|\text{left event}\rangle\otimes|\text{right beam, empty}\rangle## if the electron is recorded on the left, and ##|\text{no left event}\rangle\otimes|\text{right beam, down}\rangle## otherwise.
The collapse achieves that.
 
  • #241
A. Neumaier said:
For me, the real message of the Allahverdyan et al. paper - and the fact that it is 160 pages long! - is that foundations should be completely free of measurement issues, since the latter can be treated fully adequately only by fairly complex statistical mechanics. This is why I recommend alternative foundations based upon the postulates (EX) and (SM) that I had formulated. They apply to measuring both macroscopic variables (as expectations with error bars) and pure eigenstates of an operator ##A## with eigenvalue ##\alpha## (where ##\bar A=\alpha## and ##\sigma_A=0##), capture quantum mechanical practice far better, and are much easier to state than Born's rule, especially if one compares them with the complicated form of Born's rule needed in applications. Born's rule is derivable from these postulates in the special cases where it fully applies. See Section 10.5 of http://arxiv.org/abs/0810.1019.

I have not studied the paper enough to know if it is technically sound, but the big thing in its favour is the extensive discussion they have about sub-ensemble uniqueness. To me, their paper essentially introduces hidden variables. I have no problem with introducing hidden variables as a good approach to try to solve the measurement problem, with a statistical mechanical treatment after that - the problem I have is when hidden variables are introduced without acknowledgment.

One way to see how close hidden variables are to QM without collapse is that Bohmian Mechanics has unitary evolution of the wave function, an explicit choice of sub-ensembles and sub-ensemble dynamics, and it is critical to consider the measurement apparatus and decoherence in BM.
 
  • #242
atyy said:
I suspect the introduction of sub-ensembles by Allahverdyan, Balian, and Nieuwenhuizen is the same (in spirit, even if not technically) as introducing Bohmian hidden variables - since the point of the hidden variables is to pick out a unique set of sub-ensembles.
I don't understand you. Please explain what exactly the hidden variables are in their treatment. Or do you only talk in an "as if" manner - that what they do is analogous to hidden variables?
 
  • #243
In the SG experiment with the right setup (see the paper I cited yesterday) you have entanglement between position and the spin-z component. If you block the partial beam with spin-z down, you are left with a beam with spin-z up. It may be a philosophical problem how you come to sort out one beam. It's like choosing a red marble rather than a blue one just because you like to choose the red one. What's the problem?

Of course the setup of the preparation and measurement leads to the choice of which (sub-)ensemble I measure. I don't know why I should do a very complicated calculation to explain why an atom gets stuck in some material to filter out the unwanted spin state in an SG experiment. Experience tells us how to block particles with matter. For this purpose it's enough. For others it's not, and then you can think deeper. E.g., if you want to use energy loss, dE/dx, for particle ID, you had better have an idea how it works, and you read about the Bethe-Bloch formula and how it is derived, but there really is no problem in principle from the point of view of theoretical and experimental physics.
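In formulas (a standard Lüders-rule sketch, added here; not from the post): blocking the spin-down beam acts as a projection ##P_\uparrow##, turning the state ##\rho## of the full ensemble into the conditional state
$$\rho' = \frac{P_\uparrow \rho P_\uparrow}{\mathrm{Tr}(P_\uparrow \rho P_\uparrow)}$$
of the surviving sub-ensemble - which is exactly the state-change rule that others in the thread call collapse.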
 
  • #244
A. Neumaier said:
I don't understand you. Please explain what exactly the hidden variables are in their treatment. Or do you only talk in an "as if" manner - that what they do is analogous to hidden variables?

Let me read the paper carefully and see. My idea on a quick reading is that anytime a specific sub-ensemble is mentioned, one is either introducing hidden variables or collapse - without that, there is no unique decomposition, as the authors themselves mention.
 
  • #245
vanhees71 said:
In the SG experiment with the right setup (see the paper I cited yesterday) you have entanglement between position and the spin-z component. If you block the partial beam with spin-z down, you are left with a beam with spin-z up. It may be a philosophical problem how you come to sort out one beam. It's like choosing a red marble rather than a blue one just because you like to choose the red one. What's the problem?

Of course the setup of the preparation and measurement leads to the choice of which (sub-)ensemble I measure. I don't know why I should do a very complicated calculation to explain why an atom gets stuck in some material to filter out the unwanted spin state in an SG experiment. Experience tells us how to block particles with matter. For this purpose it's enough. For others it's not, and then you can think deeper. E.g., if you want to use energy loss, dE/dx, for particle ID, you had better have an idea how it works, and you read about the Bethe-Bloch formula and how it is derived, but there really is no problem in principle from the point of view of theoretical and experimental physics.

If you block the beam, you are introducing hidden variables. You cannot block the beam in real space if the beam is only in Hilbert space.
 
