Measurement problem in the Ensemble interpretation

In summary: the moon is in a particular momentum eigenstate, but the ensemble interpretation does not say why it shows no interference. More broadly, the ensemble interpretation of QM does not address the measurement problem, since it applies only to ensembles of similarly prepared systems and does not consider single measurements. It may seem to remove the need for wave-function collapse, but it explains neither the outcomes of single measurements nor the quantum-to-classical transition. The inability to address the measurement problem is a problem in itself, and it weakens the explanatory power of physics with respect to classical phenomena.
  • #141
RockyMarciano said:
Are you referring only to classical theory? Because this doesn't seem to be a valid assertion in the quantum realm, at least if we go by its theoretical principles. A solid meter is presumably made up of atoms joined by chemical bonds that act as springs whose ground-state energy fluctuates. The corresponding uncertainty in the length of each spring makes the separation between atoms at each step not well defined, so these separations shouldn't add up to a fixed and stable expected distance between the marks on the meter, and therefore this can't justify a robust measure that remains stable independently of how and when it is used as a measuring tool.

Of course, in practice these shortcomings are overcome by obtaining a measurement that gives a definite distance, which allows one to introduce an idealized meter; the atomic fluctuations only produce a minor blurring of the position of each atom (for instance in x-ray scattering). You would have to show from first principles how the meter is stable, taking into account the ground-state energy fluctuations.
Fluctuations are not necessarily a threat to stability. Stability does not mean that fluctuations do not exist. Stability means that initially small fluctuations remain small. Indeed, if you write down a periodic wave function for a crystal lattice (which can be found in all solid-state textbooks), you will see that the position uncertainties of the atoms are much smaller than the size of the crystal as a whole. This means that the quantum fluctuations are small, which corresponds to stability.
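A rough order-of-magnitude check of this (a sketch of my own with generic assumed numbers): treat each atom as bound in the harmonic well formed by its neighbours, so its ground-state position spread is ##\Delta x = \sqrt{\hbar/2m\omega}##; for typical atomic masses and phonon frequencies this comes out at a few percent of a lattice spacing, and a tiny fraction of the crystal size.

```python
import math

# Ground-state position spread of an atom bound in a harmonic well,
# Delta_x = sqrt(hbar / (2 m omega)). All numbers are generic
# order-of-magnitude assumptions chosen only for illustration.
hbar = 1.055e-34    # J*s
m = 1.0e-25         # kg, roughly a 60-amu atom
omega = 1.0e13      # rad/s, a typical phonon frequency
a = 3.0e-10         # m, a typical lattice spacing

dx = math.sqrt(hbar / (2.0 * m * omega))
print(f"Delta_x ≈ {dx:.1e} m, about {dx/a:.1%} of the lattice spacing")
# -> roughly 7e-12 m, i.e. a few percent of the ~3e-10 m spacing
```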
 
  • Like
Likes RockyMarciano
  • #142
martinbn said:
@Demystifier , I have the feeling that, at least in part if not fully, the problem you have is because of overuse of the term "non-local". Have you tried to explain it by giving the exact same argument but using a different word and never mentioning "non-local"?
The term "non-local" is standard, but sometimes the term "non-separable" is used instead. Indeed, some physicists hold that QM is non-separable but not non-local. That makes sense because non-locality and non-separability are not the same. In fact, non-separability is a purely technical (i.e. mathematical) concept and there is nothing controversial about the fact that QM is non-separable. But if I was talking only about non-separability, that would mean that I only talk about the mathematical structure of QM and not about its meaning. That would not be satisfying because it is precisely the meaning that I want to talk about.

Perhaps one should invent a new term, different from both non-locality and non-separability? Perhaps! But on the other hand, it could create even more confusion.
 
  • #143
Demystifier said:
Fluctuations are not necessarily a threat to stability. Stability does not mean that fluctuations do not exist. Stability means that initially small fluctuations remain small. Indeed, if you write down a periodic wave function for a crystal lattice (which can be found in all solid-state textbooks), you will see that the position uncertainties of the atoms are much smaller than the size of the crystal as a whole. This means that the quantum fluctuations are small, which corresponds to stability.
Yes, I referred to this when I wrote about how this is dealt with in practice. But my question was how these fluctuations are kept small: why don't the errors at each atom add up, and how are they prevented from accumulating dynamically into a big final error in the absence of conservation of any quantity?
Of course, for a periodic wave function for a crystal lattice with a particular pattern this is trivial, but I thought you were claiming that measurements are intrinsically aperiodic and without a conserved pattern, and in any case it should not depend on the size of a particular crystal.
 
Last edited:
  • #144
RockyMarciano said:
why don't the errors at each atom add up
Even when errors do add up, it is typical for most statistical systems that errors add up as ##\sqrt{N}##, which is small relative to the number ##N## of constituents when ##N## is big. Intuitively, the reason why errors do not add up as ##N## is the fact that different errors can also cancel each other.
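As a quick numerical illustration (my own sketch, with an assumed Gaussian error model and arbitrary units): summing ##N## independent zero-mean errors gives a total spread that grows like ##\sqrt{N}##, so the error per constituent shrinks like ##1/\sqrt{N}##.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.01     # spread of each individual error (arbitrary units, assumed)
trials = 1000    # number of independent repetitions of the whole system

for N in (100, 1_000, 10_000):
    # total error = sum of N independent, zero-mean errors per trial
    totals = rng.normal(0.0, sigma, size=(trials, N)).sum(axis=1)
    print(f"N={N:>6}: std of total ≈ {totals.std():.3f}, "
          f"sigma*sqrt(N) = {sigma * np.sqrt(N):.3f}, "
          f"per constituent ≈ {totals.std() / N:.1e}")
```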
 
  • #145
Demystifier said:
Even when errors do add up, it is typical for most statistical systems that errors add up as ##\sqrt{N}##, which is small relative to the number ##N## of constituents when ##N## is big. Intuitively, the reason why errors do not add up as ##N## is the fact that different errors can also cancel each other.

Hmm, but the error remains the same for any ##N## particles and any size of the crystal, while ##\sqrt{N}## grows as ##N## gets larger. So for statistical systems it may be that the error grows more slowly than ##N##, but for quantum objects it doesn't add up at all. Each atom seems to be aware of the location it ought to occupy if it were a classical particle, in the absence of uncertainty, and it only experiences its quantum uncertainty relative to this position. It also seems to be aware of the size of the lattice, so as to adjust its uncertainty to it.
 
  • #146
RockyMarciano said:
Hmm, but the error remains the same for any ##N## particles and any size of the crystal, while ##\sqrt{N}## grows as ##N## gets larger. So for statistical systems it may be that the error grows more slowly than ##N##, but for quantum objects it doesn't add up at all. Each atom seems to be aware of the location it ought to occupy if it were a classical particle, in the absence of uncertainty, and it only experiences its quantum uncertainty relative to this position. It also seems to be aware of the size of the lattice, so as to adjust its uncertainty to it.
Indeed, in this case the errors do not add up at all. That's because the atoms are not mutually independent: they interact with each other through attractive interactions. The only free quantities are the position ##x## and momentum ##p## of the macroscopic body as a whole, and their uncertainties satisfy
$$\Delta x \Delta p \sim \hbar$$
which does not depend on ##N##.
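To see how weak a constraint this is for a macroscopic body, here is a back-of-the-envelope sketch (the mass and localization scale are assumptions chosen only for illustration): a 1 kg body localized to a picometre has a completely negligible velocity uncertainty.

```python
# Order-of-magnitude sketch; all numbers are illustrative assumptions.
hbar = 1.055e-34   # J*s
M = 1.0            # kg, a macroscopic body
dx = 1e-12         # m, centre of mass localized to a picometre

dp = hbar / dx     # minimum momentum uncertainty from dx*dp ~ hbar
dv = dp / M        # corresponding velocity uncertainty
print(f"dp ≈ {dp:.1e} kg*m/s, dv ≈ {dv:.1e} m/s")
# dv ~ 1e-22 m/s: a drift of only a few nanometres per million years.
```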
 
  • #147
Demystifier said:
The term "non-local" is standard, but sometimes the term "non-separable" is used instead. Indeed, some physicists hold that QM is non-separable but not non-local. That makes sense because non-locality and non-separability are not the same. In fact, non-separability is a purely technical (i.e. mathematical) concept and there is nothing controversial about the fact that QM is non-separable. But if I was talking only about non-separability, that would mean that I only talk about the mathematical structure of QM and not about its meaning. That would not be satisfying because it is precisely the meaning that I want to talk about.

Perhaps one should invent a new term, different from both non-locality and non-separability? Perhaps! But on the other hand, it could create even more confusion.
It's very simple: remember that the Standard Model of particle physics is a local relativistic QFT, and it's clear what local means here. It means (a) microcausality and (b) that the Lagrangian, and thus the Hamiltonian, is written as a polynomial in local field operators (i.e., operators transforming under the Poincaré group as the analogous classical fields) and their (first) space-time derivatives at a single spacetime point.

Of course, like any quantum theory, relativistic local QFT admits long-range correlations described by entangled states. Einstein called this "non-separability" in a single-author paper related to the unfortunately more famous EPR paper, which he in fact didn't like precisely because of this point: his quibble was more about non-separability than anything else. But on this quibble he was wrong, since by now it has been demonstrated with high precision that the strong correlations encoded in entangled states are indeed what is observed.
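For reference, condition (a) is the statement that local observables commute at spacelike separation; with the mostly-minus signature (so that ##(x-y)^2<0## means spacelike) it is usually written as
$$[\hat{O}_1(x),\hat{O}_2(y)] = 0 \quad \text{whenever } (x-y)^2 < 0 ,$$
which guarantees that measurements in causally disconnected regions cannot influence each other.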
 
  • #148
vanhees71 said:
It's very simple: remember that the Standard Model of particle physics is a local relativistic QFT, and it's clear what local means here. It means (a) microcausality and (b) that the Lagrangian, and thus the Hamiltonian, is written as a polynomial in local field operators (i.e., operators transforming under the Poincaré group as the analogous classical fields) and their (first) space-time derivatives at a single spacetime point.

Of course, like any quantum theory, relativistic local QFT admits long-range correlations described by entangled states. Einstein called this "non-separability" in a single-author paper related to the unfortunately more famous EPR paper, which he in fact didn't like precisely because of this point: his quibble was more about non-separability than anything else. But on this quibble he was wrong, since by now it has been demonstrated with high precision that the strong correlations encoded in entangled states are indeed what is observed.
Suppose that we are in the 1920s, when theoretical physicists are equipped only with the concepts of classical physics, plus relativity, plus "old" Bohr-Sommerfeld-like QM. They don't have modern quantum mechanics, they don't have quantum field theory, and they don't have wave functions. And suppose that some lucky experimentalist observes "quantum" correlations by accident, but neither he nor anybody else knows about their quantum-theoretical origin. In your opinion, how would physicists of that time interpret such correlations? Do you think they would conclude that there is some non-local mechanism involved? Or do you think that a different interpretation would look more natural? Do you think that some smart guy could reproduce the laws of modern QM just from this experiment (without Heisenberg's and Schrödinger's insights, which are about to appear 5 years later)?
 
Last edited:
  • #149
Demystifier said:
Indeed, in this case the errors do not add up at all. That's because the atoms are not mutually independent: they interact with each other through attractive interactions. The only free quantities are the position ##x## and momentum ##p## of the macroscopic body as a whole, and their uncertainties satisfy
$$\Delta x \Delta p \sim \hbar$$
which does not depend on ##N##.
So your answer to my question is interactions. That much we know; how about elaborating on how quantum interactions (which are themselves subject to the HUP) make macroscopic objects as a whole satisfy ##\Delta x \Delta p \sim \hbar##? This is, after all, the core of the "measurement problem".
 
  • #150
RockyMarciano said:
how about elaborating on how quantum interactions (which are themselves subject to the HUP) make macroscopic objects as a whole satisfy ##\Delta x \Delta p \sim \hbar##? This is, after all, the core of the "measurement problem".
If you ask me why macro objects look classical, probably the best answer is decoherence. See e.g. the book by Schlosshauer
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

If you think that decoherence cannot be the full answer, then try a Bohmian completion:
https://arxiv.org/abs/quant-ph/0112005
 
  • #151
Demystifier said:
Suppose that we are in the 1920s, when theoretical physicists are equipped only with the concepts of classical physics, plus relativity, plus "old" Bohr-Sommerfeld-like QM. They don't have modern quantum mechanics, they don't have quantum field theory, and they don't have wave functions. And suppose that some lucky experimentalist observes "quantum" correlations by accident, but neither he nor anybody else knows about their quantum-theoretical origin. In your opinion, how would physicists of that time interpret such correlations? Do you think they would conclude that there is some non-local mechanism involved? Or do you think that a different interpretation would look more natural? Do you think that some smart guy could reproduce the laws of modern QM just from this experiment (without Heisenberg's and Schrödinger's insights, which are about to appear 5 years later)?

In my opinion they would consider it a big mystery, but I don't think they would think there is a non-local mechanism involved.
 
  • #152
martinbn said:
In my opinion they would consider it a big mystery, but I don't think they would think there is a non-local mechanism involved.
Why not non-local? After all, the good old Newtonian theory of gravity is also non-local. True, in 1920 there is already a better relativistic local theory of gravity, but it is not yet so rigidly encoded in physicists' minds as to prevent thinking in old Newtonian terms.
 
  • #153
Demystifier said:
Why not non-local? After all, the good old Newtonian theory of gravity is also non-local. True, in 1920 there is already a better relativistic local theory of gravity, but it is not yet so rigidly encoded in physicists' minds as to prevent thinking in old Newtonian terms.

It's just my opinion. I don't think they would jump to conclusions, and I don't think they would have a quantitative non-local explanation. I think they would consider it an open problem, probably very important and worth working on.
 
  • #154
Demystifier said:
If you ask me why macro objects look classical, probably the best answer is decoherence. See e.g. the book by Schlosshauer
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

If you think that decoherence cannot be the full answer, then try a Bohmian completion:
https://arxiv.org/abs/quant-ph/0112005
Thanks, but both approaches skip/ignore/bypass (as you have in this thread, not surprisingly, since you are a declared Bohmian) the measurability problem coming from the intrinsic quantum uncertainty that I raised, treating it as trivial or unimportant, or as simply due to interactions, as if that explained anything (it's like saying that the measurement problem is due to measurements: true, but hardly useful).
 
  • #155
RockyMarciano said:
Thanks, but both approaches skip/ignore/bypass (as you have in this thread, not surprisingly, since you are a declared Bohmian) the measurability problem coming from the intrinsic quantum uncertainty that I raised, treating it as trivial or unimportant, or as simply due to interactions, as if that explained anything (it's like saying that the measurement problem is due to measurements: true, but hardly useful).
So, do you have a better explanation? Or do you think it's still an open problem?
 
  • #156
martinbn said:
It's just my opinion. I don't think they would jump to conclusions, and I don't think they would have a quantitative non-local explanation. I think they would consider it an open problem, probably very important and worth working on.
Sure, but they would have various working hypotheses, and some of them would be more popular than others. What would be the most popular ones?
 
  • #157
Demystifier said:
Sure, but they would have various working hypotheses, and some of them would be more popular than others. What would be the most popular ones?

I would guess that the most popular one would be hidden variables (local, of course).
 
  • #158
Demystifier said:
So, do you have a better explanation? Or do you think it's still an open problem?
It's an open problem AFAICS, but I'm intrigued by what I see as a neglected angle of the problem: the stability of measurements despite the intrinsic uncertainty in QM. Curiously, something similar happens in relativity, where perfectly rigid rods are impossible, which in principle prevents the existence of stable measuring rods. I see a parallel with your assertion that dynamics (measurements are considered dynamical) and conservation are incompatible.
 
  • #159
RockyMarciano said:
I'm intrigued by what I see as a neglected angle of the problem: the stability of measurements despite the intrinsic uncertainty in QM.
I don't follow you. Why do you say that the solution provided by BM (a random initial "position/hidden variable" distribution) should be called a "neglected problem"? Actually, the determinism bundled into BM kind of guarantees that it may be testable. Until then, it is just another interpretation.

Besides, what does this have to do with a meter, which is obviously stable given its classical definition (material/temperature)? Are you implying that all the atoms of a meter could tunnel away into another galaxy? Is that the kind of stability that worries you?

RockyMarciano said:
Curiously, something similar happens in relativity, where perfectly rigid rods are impossible, which in principle prevents the existence of stable measuring rods.
Here too, rigid rods aren't forbidden by relativity. Perfectly rigid rods are forbidden by Nature (whatever that word is supposed to mean to our very misguided intuition) and are explained more by acoustic/mechanical/chemical theories than by relativity.
 
  • #160
Demystifier said:
Suppose that we are in the 1920s, when theoretical physicists are equipped only with the concepts of classical physics, plus relativity, plus "old" Bohr-Sommerfeld-like QM. They don't have modern quantum mechanics, they don't have quantum field theory, and they don't have wave functions. And suppose that some lucky experimentalist observes "quantum" correlations by accident, but neither he nor anybody else knows about their quantum-theoretical origin. In your opinion, how would physicists of that time interpret such correlations? Do you think they would conclude that there is some non-local mechanism involved? Or do you think that a different interpretation would look more natural? Do you think that some smart guy could reproduce the laws of modern QM just from this experiment (without Heisenberg's and Schrödinger's insights, which are about to appear 5 years later)?
Well, there's progress in science. All the people dealing with the problems of "quantum phenomena" in the years 1920-1925 were well aware that their semiclassical patchwork models were just that, and they were vigorously looking for a consistent description, leading to modern QT. Why should we still bother with these quibbles today? It's interesting for the history of science, and it's good to know how the modern theories (and classical physics too, by the way) came about, in order to understand the meaning of the modern theory better, but foundational questions should be answered with the most recent theories we have. The point is that the standard QFT solved all these apparent problems (at least for a physicist interested in phenomenology). The true fundamental problems of modern relativistic QFT lie not in these philosophical issues but in the mathematics, which is still not completely solved.
 
  • Like
Likes RockyMarciano
  • #161
vanhees71 said:
The point is that the standard QFT solved all these apparent problems (at least for a physicist interested in phenomenology).
Good point! But some of us are interested in more than phenomenology.
 
  • Like
Likes RockyMarciano
  • #162
martinbn said:
I would guess that the most popular one would be hidden variables (local, of course).
And how would non-local correlations be explained by local hidden variables?
 
  • #163
It can't be explained in this way, since the Bell inequalities (and related theorems) are violated by QT, and experiment shows that QT is right while local HV theories are not.
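For concreteness, in the CHSH form any local hidden-variable model must satisfy
$$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2 ,$$
while quantum mechanics predicts (and experiments confirm) values up to ##2\sqrt{2}## for suitably chosen measurement settings on an entangled pair.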
 
  • Like
Likes morrobay
  • #164
Demystifier said:
Good point! But some of us are interested in more than phenomenology.
Yes, and then you leave the realm of the natural sciences ;-)). At best you do mathematics then, at worst...
 
  • Like
Likes RockyMarciano
  • #165
vanhees71 said:
Yes, and then you leave the realm of the natural sciences ;-)). At best you do mathematics then, at worst...
... I do something I like, publish it in Foundations of Physics, and get paid for that by taxpayers. :-p
 
  • #166
Well, you are still on the good side of mathematical physics, and I think here the tax-payers' money is well spent :biggrin:.
 
  • Like
Likes Demystifier
  • #167
Demystifier said:
there is no law of conservation of length.
I let this one slip by. Mathematically there is, for all our physical models of measurements, actually. It is implied by things like metrics, norms, inner products, unitarity, isometries... and it is involved in the dynamics. Matching this to randomness and uncertainty, which are the opposite of these symmetries, is the puzzle, I guess.
 
  • #168
Demystifier said:
Well, to measure a distance with a meter, the length of the meter should not change. But there is no law of conservation of length. What we need here is stability, not conservation laws.
Yes, and indeed the idea of just defining the metre by a platinum stick in Paris is not stable enough. That's why for more than 50 years the metre has been defined via natural fundamental constants (or at least via what we believe are such quantities according to our contemporary models).

The only unit in the SI that is still defined by a prototype (or, better said, by a set of national copies of the prototype) is the kg, and it's a disaster. That's why pretty soon the kg will be redefined once and for all in terms of fundamental constants as well:

https://en.wikipedia.org/wiki/Kilogram
 
  • Like
Likes RockyMarciano
  • #169
vanhees71 said:
Well, you are still on the good side of mathematical physics, and I think here the tax-payers' money is well spent :biggrin:.
Well, the kind of research I do is neither phenomenology nor mathematical physics. It is foundations of physics. I like to define it as dealing with philosophical questions by using the methodology of the theoretical sciences (theoretical physics, mathematics, and logic).
 
  • #170
Yes, and this is a very important constraint! It's based on what is, imho, the right methodology, namely theoretical physics (which of course implies mathematics and logic, with some relaxation of rigour), and not wild speculations based on unfounded prejudices, as is too often the case when philosophers without an adequate background in theoretical physics try to write about the "foundations of physics".
 
  • Like
Likes Demystifier
  • #171
vanhees71 said:
Yes, and indeed the idea of just defining the metre by a platinum stick in Paris is not stable enough. That's why for more than 50 years the metre has been defined via natural fundamental constants (or at least via what we believe are such quantities according to our contemporary models).
Even according to our current models those constants are not so fundamental in the sense of being stable; they are "running" constants.
 
  • #173
vanhees71 said:
?
I thought you were referring to constants like the fine structure constant, and you surely know about "running constants" in QFT.
 
  • #174
RockyMarciano said:
I thought you were referring to constants like the fine structure constant, and you surely know about "running constants" in QFT.
"Running constants" do not run just because time passes. Running constants are just a convenient way to describe the fact that directly measurable quantities (like scattering cross sections) depend on energy.
 
  • #175
Demystifier said:
"Running constants" do not run just because time passes. Running constants are just a convenient way to describe the fact that directly measurable quantities (like scattering cross sections) depend on energy.
Exactly; therefore their stability upon measurement is not complete and, as you say, depends on energy. This is my point.
 
