An argument against Bohmian mechanics?

In summary: Simple systems can exhibit very different behavior from more complex systems with a large number of degrees of freedom; this is a well-known fact in physics, so the hydrogen atom is not a representative system for a discussion of ergodic behavior. Neumaier argues that Bohmian mechanics is wrong because it fails to predict all observed experimental results, but this argument ignores the quantum theory of measurements and fails to take into account the effect of the measurement process. It is further argued that the Bohmian theory of quantum measurements is incomplete and cannot fully explain the behavior of the single universe we know of, and the claim that the ergodic theorem is necessary for statistical mechanics is disputed.
  • #351
PeterDonis said:
Therefore, a superposition of spin eigenstates ##\vert \psi \rangle = a \vert z+ \rangle + b \vert z- \rangle##, where ##\vert a \vert^2 + \vert b \vert^2 = 1##, will induce evolution as follows:

$$
\vert \psi \rangle \vert R \rangle \rightarrow a \vert z+ \rangle \vert U \rangle + b \vert z- \rangle \vert D \rangle
$$

This state does not describe "a single outcome"; it describes a superposition of "outcomes". But this state is what unitary evolution predicts. So if in fact the final state is not the above, but either

$$
\vert z+ \rangle \vert U \rangle
$$

or

$$
\vert z- \rangle \vert D \rangle
$$

with probabilities ##\vert a \vert^2## and ##\vert b \vert^2## respectively, then some other process besides unitary evolution must be involved, and this other process is what is referred to by the term "collapse".
I'd like to point out two subtleties which haven't been mentioned in this thread yet. I think both might be relevant to the discussion with vanHees71.

First, proponents of the ensemble interpretation might say that the final state really is the full final state which involves a superposition of macroscopically distinct apparatus states (Ballentine does this in his textbook, see section 9.3, 1st edition). This is possible because in the ensemble interpretation, states do not refer to single systems.

Second, let's look at a very similar situation which prepares a beam of particles in state [itex]|z+ \rangle[/itex]. Suppose that we modify your device such that particles which would hit the "UP"-part of the detector are transmitted and particles which would hit the "DOWN"-part are reflected back, becoming trapped in the device. The final state then is $$a|z+, \text{transmitted} \rangle \otimes |\psi_\text{Device} \rangle \,\,+\,\,\, b|z-, \text{trapped}\rangle \otimes|\phi_\text{Device} \rangle $$ If we use [itex]|z+ \rangle[/itex] as the state for further measurements, it looks like a textbook example of collapse. What actually happens is that we make the choice to use only the first part of the final state because we know that the overlap of the second term with respect to the eigenstates of all subsequent observables is zero. So in this case, "collapse" is simply the redefinition of what the system of interest is.
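To spell out the last step (my own elaboration, assuming orthogonal device states, not part of the original post): for any observable ##A## measured downstream of the device, the expectation value in the full final state reduces to

$$\langle \Psi_\text{final}| A \otimes \mathbb{1}_\text{Device} |\Psi_\text{final}\rangle = |a|^2\,\langle z+, \text{transmitted}| A |z+, \text{transmitted}\rangle + |b|^2\,\langle z-, \text{trapped}| A |z-, \text{trapped}\rangle,$$

and the second term vanishes because the trapped wave packet never reaches the downstream apparatus. Dividing by the transmitted fraction ##|a|^2## then reproduces exactly the predictions of the "collapsed" state ##|z+\rangle##.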
 
  • #352
stevendaryl said:
I'm not sure what a nonweird version of QM would be like, either.
What's your personal expectation: will there be a future theory which removes the weirdness of QM?
 
  • #353
kith said:
First, proponents of the ensemble interpretation might say that the final state really is the full final state which involves a superposition of macroscopically distinct apparatus states (Ballentine does this in his textbook, see section 9.3, 1st edition). This is possible because in the ensemble interpretation, states do not refer to single systems.

In my opinion, saying that the QM state refers to an ensemble, rather than an individual system, doesn't seem to be of much help in the interpretational problems. You might be tempted to say that "an electron has 50% chance of being spin-up in the z-direction" means that in the ensemble, 50% of the electrons of the ensemble are spin-up in the z-direction, and 50% are spin-down. That would imply that for each element of the ensemble, the spin component in the z-direction is fixed. Probabilities arise from not knowing which element is the "actual world". That's the way ensembles work classically.

But we know that QM doesn't work that way. That would be a hidden-variable theory, saying that the spin component is definite, but we don't know what its value is, so we describe it using probability. Bell proved that that is not a viable way to interpret quantum probabilities. So I don't see how introducing ensembles does any good.
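To make the Bell point concrete, here is a minimal numerical sketch (my own illustration, not from the thread). Each member of a classical ensemble carries predetermined answers ##\pm 1## for two measurement settings per side; whatever the distribution of those answers, the CHSH combination is bounded by 2, while the quantum singlet prediction ##E(\alpha,\beta) = -\cos(\alpha-\beta)## reaches ##2\sqrt{2}##.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# A classical ensemble in the sense discussed above: every pair carries
# predetermined answers (+1 or -1) for Alice's settings a, a' and Bob's b, b'.
# The particular joint distribution of these answers is irrelevant.
A, Ap = rng.choice([-1, 1], size=(2, N))
B, Bp = rng.choice([-1, 1], size=(2, N))

# CHSH combination evaluated pair by pair: A(B - B') + A'(B + B') = +-2 always,
# so its ensemble average S satisfies |S| <= 2 for ANY such assignment.
per_pair = A * B - A * Bp + Ap * B + Ap * Bp
assert np.all(np.abs(per_pair) == 2)
print("predetermined-values ensemble: S =", per_pair.mean(), " (|S| <= 2)")

# Quantum prediction for the spin singlet: E(alpha, beta) = -cos(alpha - beta).
# With the standard optimal settings |S| = 2*sqrt(2), which no assignment of
# predetermined values can reproduce.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
E = lambda x, y: -np.cos(x - y)
S_qm = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print("quantum singlet prediction:    |S| =", abs(S_qm))
```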
 
  • #354
kith said:
Second, let's look at a very similar situation which prepares a beam of particles in state [itex]|z+ \rangle[/itex]. Suppose that we modify your device such that particles which would hit the "UP"-part of the detector are transmitted and particles which would hit the "DOWN"-part are reflected back, becoming trapped in the device. The final state then is $$a|z+, \text{transmitted} \rangle \otimes |\psi_\text{Device} \rangle \,\,+\,\,\, b|z-, \text{trapped}\rangle \otimes|\phi_\text{Device} \rangle $$ If we use [itex]|z+ \rangle[/itex] as the state for further measurements, it looks like a textbook example of collapse. What actually happens is that we make the choice to use only the first part of the final state because we know that the overlap of the second term with respect to the eigenstates of all subsequent observables is zero. So in this case, "collapse" is simply the redefinition of what the system of interest is.

I think that that makes sense for most experiments, but not for EPR-type experiments involving distant entangled particles. If Alice measures spin-up along one axis, we know that Bob will measure spin-down along that axis. So it's not just a matter of Alice ignoring those electrons that are spin-down. (Unless she's also somehow ignoring those "Bobs" that measured the wrong value.)
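For reference (the standard textbook form, not spelled out in the post): the EPR-Bohm pair is the spin singlet

$$|\Psi\rangle = \frac{1}{\sqrt{2}}\left(|z+\rangle_A |z-\rangle_B - |z-\rangle_A |z+\rangle_B\right),$$

which takes the same form in any spin basis, so Alice and Bob obtain opposite results along every common axis, not just the z-axis.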
 
  • #355
rubi said:
Well, there are two possibilities:
1. BM makes the same predictions as QM, which is what the Bohmians usually claim. In that case, the analogy is spot on.
2. BM makes different predictions than QM. In that case it either contradicts experiments or the different predictions concern only situations that have not been experimentally tested yet. Then, most physicists would expect the QM predictions to be right and the BM predictions to be wrong. If the QM predictions turned out to be wrong, people would be more likely to just adjust the QM model (e.g. modify the Hamiltonian) than to adopt BM (e.g. see neutrino oscillation).

My preferred analogy is that BM is really more like string theory. Heuristics suggest that we try constructing another theory that matches existing theory over a wide range of phenomena, but that (may) deviate from it in some regime. However, the heuristics do not uniquely indicate what the better theory is.

In QG, the heuristic is the nonrenormalizability of gravity, and the alternative approaches are string theory, LQG, asymptotic safety ...

In QM, the heuristic is the measurement problem, and the alternative approaches are BM, MWI, CH, ...

QM is like stat mech (nonsensical but works), equilibrium BM is like kinetic theory, non-equilibrium BM is like Newtonian mechanics or the Einstein-Maxwell equations.
 
  • #356
stevendaryl said:
You might be tempted to say that "an electron has 50% chance of being spin-up in the z-direction" means that in the ensemble, 50% of the electrons of the ensemble are spin-up in the z-direction, and 50% are spin-down. That would imply that for each element of the ensemble, the spin component in the z-direction is fixed. Probabilities arise from not knowing which element is the "actual world". That's the way ensembles work classically. But we know that QM doesn't work that way.
Yes, the ensemble can't be interpreted this way.

stevendaryl said:
I think that that makes sense for most experiments, but not for EPR-type experiments involving distant entangled particles. If Alice measures spin-up along one axis, we know that Bob will measure spin-down along that axis. So it's not just a matter of Alice ignoring those electrons that are spin-down. (Unless she's also somehow ignoring those "Bobs" that measured the wrong value.)
Yes, I also don't see how what I wrote could be applied to the EPR case.
 
  • #357
kith said:
I'm not sure how this relates to what I wrote.

Your starting point seems to be that out of all possible theories, we got one which is demonstrably nonlocal on a fundamental level but at the same time, everything we as humans can think of to exploit this nonlocality is equally demonstrably impossible. How strange! (Please correct me if I'm wrong)

The starting point of my post #281 was us doing experiments. From this point of view, your first question doesn't make sense. We cannot avoid ignoring part of the universe, because in order to observe something we have to exclude at least the part of ourselves which experiences the observation.

(Also, more fundamental theories must include the older ones as limiting cases. So shouldn't you condition your second question on the fact that classical physics is local? But here I see even less connection to my post)

Right, we do experiments. And we only do experiments on a very small part of the universe. Yet we infer laws that govern the whole universe.

Some (Weinberg, Feynman) have argued that this is due to locality or symmetry (local translation invariance). So in fact we exploit locality.

Can we have a nonlocal theory, given that we only make local observations? Or can our local observations point to nonlocality - eg. Black hole information problem, measurement problem - and can we even devise successful nonlocal theories - eg. Bohmian Mechanics, Holography?
 
  • #358
rubi said:
If the problem were only physical impossibility, then the statement ##S_x=+1\wedge S_y=-1## would be completely unproblematic. We would just assign ##P=0## to it and be happy. However, no possible assignment of probabilities to such a proposition is consistent with QM, so it must be the case that taking this conjunction is an invalid operation.
Such a probability cannot be assigned in standard QM, but it does not imply that it cannot be assigned in any theory, perhaps some more fundamental theory (yet unknown) for which QM is only an approximation. There should exist general logical rules which can be applied to any theory of nature, not only to a particular theory (such as QM) which, after all, may not be the final theory of everything. In such a general logical framework, the statement ##S_x=+1\wedge S_y=-1## must not be forbidden.

What can such a theory look like? Well, the minimal Bohmian mechanics is not such a theory, because minimal BM cannot associate a probability with ##S_x=+1\wedge S_y=-1##. This is because spin is not ontological in minimal BM. However, there is a non-minimal version of BM (see the book by Holland) in which spin is ontological and ##S_x=+1\wedge S_y=-1## makes perfect sense. Nevertheless, this theory makes the same measurable predictions as standard QM. Even though you can have ##S_x=+1\wedge S_y=-1## at the level of spin hidden variables, when you measure the spin you can never measure both ##S_x=+1## and ##S_y=-1##.

I am not saying that this non-minimal version of BM is how nature really works. Personally, I think it isn't. What I am saying is that there is a logical possibility that nature might work that way. Therefore it is not very wise to restrict logical rules to a form which forbids you to even think about such alternative theories.
 
  • #359
stevendaryl said:
Unless I'm misunderstanding you, you seem to be agreeing with me. Decoherence, or irreversibility, is not going to result in a definite outcome.

So if someone says that a measurement results in either the state [itex]|U\rangle[/itex], with such-and-such probability, or the state [itex]|D\rangle[/itex], with such and such a probability, then does that imply that something nonunitary is involved?
Well, the definite outcome is the entanglement between position and spin-z component, i.e., if you find a particle at position U/D you'll have spin up/down. Then you can block one partial beam, and you have prepared particles in a definite spin-z state. The probabilities tell you the intensity of the partial beams when you filter them out (relative to the intensity of the original beam). That's all QT tells you.
 
  • #360
vanhees71 said:
Well, the definite outcome is the entanglement between position and spin-z component, i.e., if you find a particle at position U/D you'll have spin up/down. Then you can block one partial beam, and you have prepared particles in a definite spin-z state. The probabilities tell you the intensity of the partial beams when you filter them out (relative to the intensity of the original beam). That's all QT tells you.

But in EPR, Alice and Bob both measure the x-component of spin of their respective spin-1/2 particles. They always get opposite results. That is not a matter of having one result blocked.
 
  • #361
Demystifier said:
Such a probability cannot be assigned in standard QM, but it does not imply that it cannot be assigned in any theory, perhaps some more fundamental theory (yet unknown) for which QM is only an approximation. There should exist general logical rules which can be applied to any theory of nature, not only to a particular theory (such as QM) which, after all, may not be the final theory of everything. In such a general logical framework, the statement ##S_x=+1\wedge S_y=-1## must not be forbidden.

What can such a theory look like? Well, the minimal Bohmian mechanics is not such a theory, because minimal BM cannot associate a probability with ##S_x=+1\wedge S_y=-1##. This is because spin is not ontological in minimal BM.

I'm a little confused about this. Suppose that we do the following:
  • Prepare an electron in a state that is spin-up in the z-direction.
  • Either measure spin in the x-direction, or spin in the y-direction. To make it more Bohmian, we specify that the spin measurement is done with a Stern Gerlach device, so that spin-up means the electron is deflected in one direction, and spin-down means it is deflected in another direction.
In Bohmian mechanics, the apparent nondeterminism is due to lack of knowledge of the precise location of the electron. So to say that the electron goes one direction or another with 50/50 probability means that for some initial locations, the electron will deflect in one direction, and for some initial locations, it will deflect in the other direction. That implies that the volume of the lab is partitioned into two sets: [itex]V_{xL}[/itex], those points [itex]\vec{r}[/itex] such that an electron initially at that location will veer left when the Stern Gerlach device is oriented in the x-direction, and [itex]V_{xR}[/itex], those points such that the electron initially at that location will veer right. Similarly, there are sets [itex]V_{yL}[/itex] and [itex]V_{yR}[/itex] for the case where the Stern Gerlach device is oriented in the y-direction.

Now, it seems to me that it should make sense to ask about the set [itex]V_{yL} \cap V_{xL}[/itex], the set of points that would veer left, given either choice of measurement orientation. Presumably this set has a measure. So how is this different from the hidden variable that Bell proved did not exist? I understand that Bohmian mechanics, being nonlocal, allows for the sets [itex]V_{xL}, V_{xR}, V_{yL}, V_{yR}[/itex] to depend on details of the detector. But for a given detector (and maybe, for a given method of choosing which orientation to measure), it seems that there would be sets [itex]V_{aL}, V_{aR}[/itex] for all possible axes [itex]a[/itex].
 
  • #362
stevendaryl said:
I'm a little confused about this. Suppose that we do the following:
  • Prepare an electron in a state that is spin-up in the z-direction.
  • Either measure spin in the x-direction, or spin in the y-direction. To make it more Bohmian, we specify that the spin measurement is done with a Stern Gerlach device, so that spin-up means the electron is deflected in one direction, and spin-down means it is deflected in another direction.
In Bohmian mechanics, the apparent nondeterminism is due to lack of knowledge of the precise location of the electron. So to say that the electron goes one direction or another with 50/50 probability means that for some initial locations, the electron will deflect in one direction, and for some initial locations, it will deflect in the other direction. That implies that the volume of the lab is partitioned into two sets: [itex]V_{xL}[/itex], those points [itex]\vec{r}[/itex] such that an electron initially at that location will veer left when the Stern Gerlach device is oriented in the x-direction, and [itex]V_{xR}[/itex], those points such that the electron initially at that location will veer right. Similarly, there are sets [itex]V_{yL}[/itex] and [itex]V_{yR}[/itex] for the case where the Stern Gerlach device is oriented in the y-direction.

Now, it seems to me that it should make sense to ask about the set [itex]V_{yL} \cap V_{xL}[/itex], the set of points that would veer left, given either choice of measurement orientation. Presumably this set has a measure. So how is this different from the hidden variable that Bell proved did not exist? I understand that Bohmian mechanics, being nonlocal, allows for the sets [itex]V_{xL}, V_{xR}, V_{yL}, V_{yR}[/itex] to depend on details of the detector. But for a given detector (and maybe, for a given method of choosing which orientation to measure), it seems that there would be sets [itex]V_{aL}, V_{aR}[/itex] for all possible axes [itex]a[/itex].
From this post, I can tell that your understanding of BM is very good. :smile: So my answer will be rather short.

The set [itex]V_{yL} \cap V_{xL}[/itex] has a measure, but for ideal measurements this measure is zero. For realistic measurements it is not zero, but is negligible FAPP (For All Practical Purposes). Nevertheless, let us consider a case in which this measure is sufficiently large so that we cannot neglect it. Then a pointer can be both inside [itex]V_{xL}[/itex] and inside [itex]V_{yL}[/itex]. Naively, one might say that this means that we have measured both ##S_x## and ##S_y##. But I think a better interpretation is that we measured neither ##S_x## nor ##S_y##. If an apparatus cannot sharply distinguish between measurement of ##S_x## and measurement of ##S_y##, then it is not very meaningful to interpret this apparatus as a device which measures either of those two quantities. To measure two observables we need two sharp pointers. Here we have only one blurred pointer, and one blurred pointer is not a good substitute for two sharp pointers. (As an analogy, consider a mechanical clock with two sharp hands, one for measuring hours and another for measuring minutes. Do you think that you could measure both hours and minutes with only one blurred hand?) And this conclusion does not depend on whether you accept Bohmian or some purely instrumental interpretation of QM.
 
  • #363
Demystifier said:
From this post, I can tell that your understanding of BM is very good. :smile: So my answer will be rather short.

The set [itex]V_{yL} \cap V_{xL}[/itex] has a measure, but for ideal measurements this measure is zero. For realistic measurements it is not zero, but is negligible FAPP (For All Practical Purposes).

Hmm. I don't see how that can work. [itex]V_{yL}[/itex] has to have measure 1/2. [itex]V_{xL}[/itex] similarly has measure 1/2. So if they have negligible overlap, that would imply [itex]V_{xL} = \bar{V_{yL}}[/itex] (the complement). But [itex]\bar{V_{yL}} = V_{yR}[/itex]. So that would seem to mean that [itex]V_{xL} = V_{yR}[/itex]. But that can't be right.
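Spelling out the step with inclusion-exclusion (my paraphrase of the argument): if ##\mu## denotes the measure over the electron's initial position, with ##\mu(V_{xL}) = \mu(V_{yL}) = 1/2##, then

$$\mu(V_{xL}\cup V_{yL}) = \mu(V_{xL}) + \mu(V_{yL}) - \mu(V_{xL}\cap V_{yL}) \approx 1$$

whenever the overlap is negligible, so ##V_{xL}## would have to fill the complement of ##V_{yL}## up to a set of measure zero, i.e. coincide with ##V_{yR}##.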
 
  • #364
stevendaryl said:
Hmm. I don't see how that can work. [itex]V_{yL}[/itex] has to have measure 1/2. [itex]V_{xL}[/itex] similarly has measure 1/2. So if they have negligible overlap, that would imply [itex]V_{xL} = \bar{V_{yL}}[/itex] (the complement). But [itex]\bar{V_{yL}} = V_{yR}[/itex]. So that would seem to mean that [itex]V_{xL} = V_{yR}[/itex]. But that can't be right.
No. In the previous post you assumed that the whole lab is partitioned into two sets: ##V_{xL}## and ##V_{xR}##. But that assumption was wrong. Most parts of the lab do not belong to either of those two sets. For instance, I can have a picture of my wife at the table in my lab, and this picture of my wife does not belong to either of those two sets. The picture of my wife does not play any role in the measurement of spin. The Stern-Gerlach apparatus produces rather narrow beams in the up or down direction, and neither of the beams hits the picture of my wife at the table.
 
  • #365
Demystifier said:
No. In the previous post you assumed that the whole lab is partitioned into two sets: ##V_{xL}## and ##V_{xR}##. But that assumption was wrong. Most parts of the lab do not belong to either of those two sets. For instance, I can have a picture of my wife at the table in my lab, and this picture of my wife does not belong to either of those two sets. The picture of my wife does not play any role in the measurement of spin. The Stern-Gerlach apparatus produces rather narrow beams in the up or down direction, and neither of the beams hits the picture of my wife at the table.

Good point. But I don't see how that changes much. You have a beam of electrons, and you partition that beam into parts that will go left and parts that will go right.
 
  • #366
stevendaryl said:
So to say that the electron goes one direction or another with 50/50 probability means that for some initial locations, the electron will deflect in one direction, and for some initial locations, it will deflect in the other direction.
...
So how is this different from the hidden variable that Bell proved did not exist?
I would say that this is the point of stevendaryl's post.
If all the uncertainty in the electron's trajectory depends only on its initial position then Bell inequalities apply. There has to be some non-local influence to account for entanglement correlations in a realistic model.
 
  • #367
stevendaryl said:
how is this different from the hidden variable that Bell proved did not exist?

Because it's nonlocal; the equation of motion for each individual trajectory, which is what determines which set it is in (which way a particle on that trajectory will go when it passes through the Stern Gerlach device) includes the quantum potential, which samples the wave function everywhere in the universe at a given instant. (Note that this is obviously non-relativistic; AFAIK there are difficulties with trying to make a relativistic theory along these lines that nobody has satisfactorily addressed.)
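For reference, the standard non-relativistic expressions (textbook Bohmian mechanics, writing the ##N##-particle wave function as ##\psi = R\, e^{iS/\hbar}##): the particle positions ##\mathbf{Q}_k## follow the guidance equation

$$\frac{d\mathbf{Q}_k}{dt} = \frac{\nabla_k S(\mathbf{Q}_1,\dots,\mathbf{Q}_N,t)}{m_k},$$

and the quantum potential appearing in the equivalent second-order (Newton-like) formulation is

$$Q = -\sum_{k=1}^{N} \frac{\hbar^2}{2m_k}\,\frac{\nabla_k^2 R}{R}.$$

Since ##S## and ##R## are evaluated at the actual configuration of all ##N## particles at the same instant, each particle's motion depends on where all the others are, which is the nonlocality referred to above.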
 
  • #368
zonde said:
I would say that this is the point of stevendaryl's post.
If all the uncertainty in the electron's trajectory depends only on its initial position then Bell inequalities apply. There has to be some non-local influence to account for entanglement correlations in a realistic model.
This was also my point in #283. The distinction between local and nonlocal has no empirical or operational meaning for classical realistic (i.e. hidden-variable) theories. One can either consider them to carry an implicit nonlocal influence (with no way to observe FTL signaling), as you say, or one can declare the theory local by the independent construction of the hidden variables; but then there is no way to design an experiment to show this, since it would require an instantaneous comparison of spacelike-separated events (i.e. FTL signaling), which is a strange way for a theory to be local. Therefore there is no scientific content in calling hidden variables local or nonlocal, and on this view Bell violations reject all hidden variables.
 
  • #369
PeterDonis said:
Because it's nonlocal; the equation of motion for each individual trajectory, which is what determines which set it is in (which way a particle on that trajectory will go when it passes through the Stern Gerlach device) includes the quantum potential, which samples the wave function everywhere in the universe at a given instant. (Note that this is obviously non-relativistic; AFAIK there are difficulties with trying to make a relativistic theory along these lines that nobody has satisfactorily addressed.)
See my post above; I don't think there is any content in saying that it is nonlocal (I mean, nothing that informs us of anything beyond its being a hidden-variable theory).
 
  • #370
I would like to check my understanding. In BM the quantum potential depends on the particle's position and the wave function, I suppose. And it seems to me that a measurement of the remote particle can't change the wave function of the first particle in any way except for a phase factor. So this phase factor should be the thing that changes the quantum potential.
 
  • #371
stevendaryl said:
Good point. But I don't see how that changes much. You have a beam of electrons, and you partition that beam into parts that will go left and parts that will go right.
When you measure ##S_x##, then you have a left-beam (L) and a right-beam (R). When you measure ##S_y##, then you have an up-beam (U) and a down-beam (D). Altogether you have to think of four (not two) beams: L, R, U, D. Since each of the four beams goes in a different direction, these four beams do not mutually overlap.
 
  • #372
zonde said:
I would like to check my understanding. In BM the quantum potential depends on the particle's position and the wave function, I suppose.
No, the quantum potential depends only on the wave function.
 
  • #373
RockyMarciano said:
This was also my point in #283. The distinction between local and nonlocal has no empirical or operational meaning for classical realistic (i.e. hidden-variable) theories. One can either consider them to carry an implicit nonlocal influence (with no way to observe FTL signaling), as you say, or one can declare the theory local by the independent construction of the hidden variables; but then there is no way to design an experiment to show this, since it would require an instantaneous comparison of spacelike-separated events (i.e. FTL signaling), which is a strange way for a theory to be local. Therefore there is no scientific content in calling hidden variables local or nonlocal, and on this view Bell violations reject all hidden variables.
I don't understand your argument. Say there is a model that uses nonlocal influences and allows violation of the Bell inequalities while not allowing FTL signaling via higher-level phenomena. Why should it be rejected?
 
  • #374
Demystifier said:
When you measure ##S_x##, then you have a left-beam (L) and a right-beam (R). When you measure ##S_y##, then you have an up-beam (U) and a down-beam (D). Altogether you have to think of four (not two) beams: L, R, U, D. Since each of the four beams goes in a different direction, these four beams do not mutually overlap.
It seems like you take or leave determinism whenever it is convenient for your argument. If you stick to determinism you cannot consider four beams there.
 
  • #375
zonde said:
I don't understand your argument. Say there is a model that uses nonlocal influences and allows violation of the Bell inequalities while not allowing FTL signaling via higher-level phenomena. Why should it be rejected?
I don't understand your question very well (what are higher-level phenomena?), but what is the meaning, the actual observational content, of using nonlocal influences if it doesn't allow FTL signaling?
I don't think I'm saying anything much different than you in #366 anyway.
 
  • #376
RockyMarciano said:
It seems like you take or leave determinism whenever it is convenient for your argument. If you stick to determinism you cannot consider four beams there.
Fundamental determinism is compatible with effective probabilistic laws. For instance, the laws of coin flipping are fundamentally deterministic, yet coin flipping behaves probabilistically for practical purposes. That's why you consider both possible outcomes of a single coin flip.
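A toy illustration of the same point (my own sketch, not from the post, with an arbitrarily chosen deterministic flip law): each individual flip is fully determined by its initial conditions, yet a small ignorance of those conditions already yields effectively 50/50 statistics.

```python
import numpy as np

def coin(omega, t):
    """Deterministic flip law (arbitrary toy model): the coin turns through
    omega*t radians in flight and lands heads iff it completes an even
    number of half-turns."""
    return int(omega * t // np.pi) % 2 == 0   # True = heads

rng = np.random.default_rng(1)

# With the initial conditions known exactly, the outcome is fixed.
print("exact initial conditions ->", "heads" if coin(250.0, 2.0) else "tails")

# With a modest ignorance of the spin rate, the same deterministic law
# produces an effectively 50/50 ensemble of outcomes.
omegas = rng.uniform(200.0, 300.0, size=100_000)   # unknown spin rate (rad/s)
heads = np.mean([coin(w, 2.0) for w in omegas])
print("fraction of heads over the ensemble:", round(heads, 3))   # close to 0.5
```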
 
  • #377
Demystifier said:
Fundamental determinism is compatible with effective probabilistic laws. For instance, the laws of coin flipping are fundamentally deterministic, yet coin flipping behaves probabilistically for practical purposes. That's why you consider both possible outcomes of a single coin flip.
I absolutely agree with this (I said something similar pages ago), but that compatibility with classical probabilities is attributed to lack of information: you consider both outcomes of the coin flip because you don't know the initial conditions with enough accuracy, but determinism dictates that if you happened to know the initial conditions, there is only one side of the coin in principle, and the discussion with stevendaryl refers to the "in principle" case. The situation where you actually consider all the options as equally valid in principle is that of QM; of course, once the Born rule is added you only have probabilities, which allows BM to claim it has the same observational results. But if you blur the distinction between probabilities due to lack of knowledge and intrinsic probabilities due to quantum superposition, you are also getting rid of classical realism, which is the very idea you are defending.
 
  • #378
RockyMarciano said:
and the discussion with stevendaryl refers to the "in principle" case
I don't think that you properly understood my discussion with stevendaryl.
 
  • #379
Demystifier said:
I don't think that you properly understood my discussion with stevendaryl.
This might well be. I wouldn't be surprised if it turned out you were the only one that properly understood the discussion according to you.
 
  • #380
RockyMarciano said:
This might well be. I wouldn't be surprised if it turned out you were the only one that properly understood the discussion according to you.
I think that atyy and stevendaryl understand it very well too.
 
  • #381
Demystifier said:
I think that atyy and stevendaryl understand it very well too.
Well, the discussion is not over yet (I think) :cool::wink:
 
  • #382
atyy said:
Well, at least you are consistent. I've always thought the mystery was why silly things like the canonical ensemble actually work :)
Yes, those who consider it a mystery try to resolve it by silly ideas such as ergodicity. Those (including me) who do not consider it a mystery don't need ergodicity.

(EDIT: This is my 7.000th post.)
 
  • #383
Demystifier said:
Yes, those who consider it a mystery try to resolve it by silly ideas such as ergodicity. Those (including me) who do not consider it a mystery don't need ergodicity.

(EDIT: This is my 7.000th post.)
From time to time some discussions on PF give me the impression that there is something about statistical mechanics that I simply don't see. Maybe it's because I'm just a master's student who is still learning it and I need time to see it the way you guys see it, but I'm just curious what's going on. You know what I mean? If yes, can you provide a reference that can shed some light on these things you guys sometimes talk about?

P.S.
Congrats on your 7000th post.:wink:
 
  • #384
Demystifier said:
(EDIT: This is my 7.000th post.)

You make me feel downright lazy. :smile:
 
  • #385
stevendaryl said:
But in EPR, Alice and Bob both measure the x-component of spin of their respective spin-1/2 particles. They always get opposite results. That is not a matter of having one result blocked.

Sure, they just measure the x component of the spin. What I described is what's usually called "collapse", i.e., the preparation of an observable to have a determined value by filtering, which indeed just means blocking all the unwanted states.
 
