Evaluate this paper on the derivation of the Born rule

In summary, the paper discusses the Curie-Weiss model of the quantum measurement process and how it can be used to derive the Born rule.
  • #246
vanhees71 said:
So finally what you get is the standard probabilistic/statistical connection between theory and experiment. So what are we debating after all?
I get easily both the standard probabilistic/statistical connection between theory and experiment in cases where it applies (namely for frequently repeated experiments), and the standard deterministic/nonstatistical connection between theory and experiment in cases where it applies (namely for experiments involving only macroscopic variables).

There is no need to assume a fundamental probabilistic feature of quantum mechanics, and no need to postulate anything probabilistic, since it appears as a natural conclusion rather than as a strange assumption about mysterious probability amplitudes and the like that must be put in by hand. Thus it is a significant conceptual advance in the foundations.
 
  • #247
A. Neumaier said:
So you measure once a single operator.
You have never explained what it means to "measure...an operator".
A. Neumaier said:
But Born's rule only applies to an ensemble of measurements, not to a single one.
I strongly disagree. Born's rule tells us the distribution function for all possible results of a single measurement.
 
  • #248
A. Neumaier said:
There is no need to assume a fundamental probabilistic feature of quantum mechanics, and no need to postulate anything probabilistic, since it appears as a natural conclusion rather than as a strange assumption about mysterious probability amplitudes and the like that must be put in by hand.
I disagree with this too. The magnitude of the "amplitude" has a natural interpretation as a distribution function for the simple reason that it is largest for the smallest changes. The "closer" the detected state is to the prepared state, the more likely it is to be found: ##P(a|\psi) = \text{monotonic function of } |\langle a|\psi\rangle|##.
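For concreteness, the standard formulation of Born's rule takes that monotonic function to be the square:
$$P(a|\psi) = |\langle a|\psi\rangle|^2,$$
normalized so that the probabilities over a complete set of outcomes ##a## sum to 1.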
 
  • #249
mikeyork said:
Born's rule tells us the distribution function for all possible results of a single measurement.
The distribution function means almost nothing for a single measurement.

According to Born's rule, a position measurement gives a real number, and any real number is possible. Thus Born's rule is completely noninformative.
According to Born's rule, a number measurement gives some nonnegative integer, and any such integer is possible. Again, Born's rule is completely noninformative.

For a spin measurement, Born's rule is slightly more informative for a single measurement; it tells you that you get either spin up or spin down, but this is all.
That the probability of spin up is 0.1495, say, is completely irrelevant for the single case; it means nothing.

For a measurement of the total energy of a crystal, Born's rule claims that the measurement result is one of a huge but finite number of values, most of them not representable as finite decimal or binary fractions. However, what is measured is always a result given as a decimal or binary fraction with a small number of digits.
Thus there is even a discrepancy between real measurements and what Born's rule claims.
 
  • Like
Likes dextercioby
  • #250
A. Neumaier said:
The distribution function means almost nothing for a single measurement.
So the distinction between (say) a delta function, a Gaussian or a uniform distribution function means "almost nothing" to you? Tell that to a gambler. Las Vegas loves people who think the games are a lottery. And Wall St loves people who stick pins in a list of stocks.
 
  • #251
vanhees71 said:
It is, e.g., very clear how to determine the mass of astronomical bodies from their motion, making use of (post)Newtonian theory.
Yes. You confirm exactly what I claimed, that the meaning of the observable called mass is determined not by a measurement procedure but by the theory - in your example (post)Newtonian theory. The measurement procedure is designed using this theory, and is known to give results of a certain accuracy only because it matches the theory to this extent.
 
  • #252
mikeyork said:
So the distinction between (say) a delta function, a Gaussian or a uniform distribution function means "almost nothing" to you? Tell that to a gambler. Las Vegas loves people who think the games are a lottery.
It means almost nothing for a single measurement. Gamblers never gamble only once.

At most the support of the distribution has a meaning for the single case, as restricting the typical values; it does not even restrict the possible values!

Note that ''with probability zero'' does not mean ''impossible'' but only that the fraction of realized cases among all tends to zero as the number of measurements goes to infinity. Thus a stochastic process that takes arbitrary values in the first 10^6 cases and zero in all later cases has a distribution function given by a delta function concentrated on zero. In particular, the distribution function is completely misleading for a single measurement of one of the initial cases.

Monte Carlo studies usually need to ignore a long sequence of initial values of a process, before the latter settles to the asymptotic distribution captured by the probabilistic model.
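A minimal Python sketch of such a process (a toy model: a uniform transient phase that later settles to zero, so the asymptotic distribution is a delta at zero):

```python
# Minimal sketch: a toy process whose overall statistics match a delta
# distribution at zero, yet whose early samples are spread over [-1, 1].
import random

def sample_process(n_steps, burn_in=1000):
    """Return n_steps samples: uniform on [-1, 1] during the burn-in,
    exactly 0 afterwards."""
    return [random.uniform(-1.0, 1.0) if t < burn_in else 0.0
            for t in range(n_steps)]

vals = sample_process(100_000)
print("fraction of zeros overall:", vals.count(0.0) / len(vals))    # ~0.99
print("fraction of zeros early  :", vals[:1000].count(0.0) / 1000)  # 0.0
# The overall statistics reproduce the delta distribution, yet that
# distribution is completely misleading for any single early measurement.
```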
 
  • #253
A. Neumaier said:
It means almost nothing for a single measurement. Gamblers never gamble only once.
Have you ever played poker? Every time it is your turn to act you are faced with a different "one-off" situation (a "prepared state" if you like). You decide what to do based on your idea of the probabilities of what will happen (the "detected state"). Poker players who do not re-assess the situation every time lose a great deal of money.
 
  • Like
Likes vanhees71
  • #254
mikeyork said:
Have you ever played poker?
Poker players never play only once.
mikeyork said:
Poker players who do not re-assess the situation every time lose a great deal of money.
That you need to argue with ''every time'' proves that you are not considering the single case but the ensemble.
 
  • Like
Likes PeterDonis
  • #255
A. Neumaier said:
Poker players never play only once.

That you need to argue with ''every time'' proves that you are not considering the single case but the ensemble.
Of course not. Each single case is a different case. Most poker players never encounter the same situation twice. The ensemble does not exist.

Your argument amounts to the claim that there is no such thing as a "probability", only statistics.
 
  • #256
A. Neumaier said:
I get easily both the standard probabilistic/statistical connection between theory and experiment in cases where it applies (namely for frequently repeated experiments), and the standard deterministic/nonstatistical connection between theory and experiment in cases where it applies (namely for experiments involving only macroscopic variables).

There is no need to assume a fundamental probabilistic feature of quantum mechanics, and no need to postulate anything probabilistic, since it appears as a natural conclusion rather than as a strange assumption about mysterious probability amplitudes and the like that must be put in by hand. Thus it is a significant conceptual advance in the foundations.
I understand QT as the present fundamental theory of matter (with the qualification that we don't have a fully satisfactory QT of gravitation), and thus it should explain both extreme cases you cite from one theory. For me the standard minimal interpretation, used to connect the theory with real-world observations/experiments, is very satisfactory, and its key feature is the probabilistic interpretation. It explains both the meaning of observations on microscopic objects and the quasi-deterministic behavior of macroscopic observables on macroscopic systems. In the latter case, the "averaging" (done in the microscopic case by repeating an experiment many times) is "done" by the measurement apparatus itself: it is a spatial and/or temporal average. All this is well described within the statistical interpretation of the state.

You have a very similar way to define such "averages" in classical electrodynamics applied to optics, where you define the apparently time-independent intensity of light in terms of the classical electromagnetic field by a temporal average. If you follow the history of QT, I think it is fair to say that the original thinking on the meaning of the wave function by Schrödinger came via the analogy with this case. In optics you define the intensity of light as the energy density averaged over typical periods of the em. field (determined by the typical frequency of the emitted em. wave), and these are quadratic forms of the field, like the energy density itself,
$$\epsilon=\frac{1}{2} (\vec{E}^2+\vec{B}^2),$$
or the energy flow,
$$\vec{S}=c \vec{E} \times \vec{B}.$$
(em. energy per area and time; both in Heaviside-Lorentz units).
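As a concrete illustration (assuming a monochromatic plane wave in vacuum, where ##|\vec{B}_0| = |\vec{E}_0|##): with ##\vec{E} = \vec{E}_0 \cos(\omega t)##, averaging over a period gives ##\overline{\cos^2(\omega t)} = 1/2##, so
$$\bar{\epsilon} = \frac{1}{2} \vec{E}_0^2, \qquad |\bar{\vec{S}}| = \frac{c}{2} \vec{E}_0^2,$$
and this time-independent average is what one calls the intensity.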

Schrödinger originally thought of the wave function as a kind of "density amplitude" and its modulus squared as a density in a classical-field sense, but this was pretty early considered a wrong interpretation and led to Born's probability interpretation, which is the interpretation considered valid today. I still don't understand why you deny the Born interpretation as a fundamental postulate about the meaning of the quantum state, because it satisfactorily describes both extremes you quote above (i.e., microscopic observations on few quanta and macroscopic systems consisting of very many particles, leading to classical mechanics/field theory as an effective description for the macroscopically relevant observables) and also the "mesoscopic systems" lying somewhere in between (like quantum dots in cavity QED, ultracold rarefied gases in traps including macroscopic quantum phenomena like Bose-Einstein condensation, etc.).
 
  • Like
Likes Auto-Didact
  • #257
A. Neumaier said:
Yes. You confirm exactly what I claimed, that the meaning of the observable called mass is determined not by a measurement procedure but by the theory - in your example (post)Newtonian theory. The measurement procedure is designed using this theory, and is known to give results of a certain accuracy only because it matches the theory to this extent.
Sure, you also need theory to evaluate the masses of the bodies from the observables (like in my example the "pulsar-timing data"). Thus this measurement of masses is clearly among the (amazingly accurate) operational definitions of mass. For other systems you need other measurement procedures (e.g., a mass spectrometer for single particles or nuclei). That's why I carefully talk about "equivalence classes of measurement protocols" that define the corresponding quantitative observables.

Indeed, pulsar timing is a very good example of this within a classical (i.e., non-quantum) realm. To test General Relativity you can determine the orbital parameters of the binary system from some observables and then deduce other post-Newtonian parameters to check whether they match the prediction from GR as one special post-Newtonian model of gravity. This gives some confidence in the correctness of the deduced values for, e.g., the masses of the two orbiting stars, but indeed this is possible only by giving an operational definition of the measured quantities like these masses, to make the connection between theory (GR and post-Newtonian approximations of the two-body system) and observations (pulsar-timing data taken from a real-world radio telescope).
 
  • #258
A. Neumaier said:
The distribution function means almost nothing for a single measurement.

According to Born's rule, a position measurement gives a real number, and any real number is possible. Thus Born's rule is completely noninformative.
According to Born's rule, a number measurement gives some nonnegative integer, and any such integer is possible. Again, Born's rule is completely noninformative.

For a spin measurement, Born's rule is slightly more informative for a single measurement; it tells you that you get either spin up or spin down, but this is all.
That the probability of spin up is 0.1495, say, is completely irrelevant for the single case; it means nothing.

For a measurement of the total energy of a crystal, Born's rule claims that the measurement result is one of a huge but finite number of values, most of them not representable as finite decimal or binary fractions. However, what is measured is always a result given as a decimal or binary fraction with a small number of digits.
Thus there is even a discrepancy between real measurements and what Born's rule claims.
Born's rule is very informative or doesn't tell you much, depending on the position-probability distribution given by the state in the way defined by this rule,
$$P(\vec{x})=\langle \vec{x}|\hat{\rho}|\vec{x} \rangle.$$
If this probability distribution is sharply peaked around some value ##\vec{x}_0##, it tells you that the particle will very likely be found in a small volume around this place, and almost never at other places, if the system is prepared in this state. If the probability distribution is very broad, the position is pretty much undetermined, and Born's rule indeed doesn't tell you much about what to expect for the outcome of a position measurement. Of course, as with any probabilistic information, you can verify this information only on an ensemble. But that's what's implied in the "minimal statistical interpretation".
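A minimal Python sketch of this point (illustrative only; a Gaussian packet of width ##\sigma## stands in for the prepared state):

```python
# Minimal sketch: Born-rule position densities P(x) = |psi(x)|^2 for a narrow
# and a broad Gaussian packet (illustrative widths, not from the post).
import numpy as np

def gaussian_density(x, x0, sigma):
    """Normalized position probability density of a Gaussian packet at x0."""
    return np.exp(-(x - x0)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
for sigma in (0.1, 5.0):  # sharply peaked vs. very broad state
    P = gaussian_density(x, 0.0, sigma)
    print(f"sigma={sigma}: P(|x|<0.5) ~ {P[np.abs(x) < 0.5].sum() * dx:.3f}")
# sigma=0.1 -> ~1.000: the particle is almost certainly found near x0.
# sigma=5.0 -> ~0.080: the position is essentially undetermined.
```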

I don't understand the last paragraph of your quote. Of course you need an apparatus with sufficient accuracy to resolve the individual possible values of an observable with a discrete spectrum like spin. Whether or not you can achieve this is a question of engineering a good enough measurement device but not a fundamental problem within the theory.
 
  • #259
mikeyork said:
The ensemble does not exist.
The ensemble exists as an ensemble of many identically (by shuffling) prepared single cases. Just like identically prepared electrons are 'realized' in the measurement as different results.
 
  • Like
Likes PeterDonis
  • #260
vanhees71 said:
I still don't understand, why you deny the Born interpretation as a fundamental postulate about the meaning of the quantum state
Because, as discussed in the other thread, there are a host of situations where Born's rule (as usually stated) does not apply, unless you interpret it (as you actually do) so liberally that any mention of probability in quantum mechanics counts as application of Born's rule. You yourself agreed that measuring the total energy (relative to the ground state) does not follow the letter of Born's rule.
vanhees71 said:
you can verify this information only on an ensemble.
The information in the statement itself is only about the ensemble, since a given single case (with only one measurement taken) simply happened, whether it is one of the rare cases or one of the frequent ones.
vanhees71 said:
Of course you need an apparatus with sufficient accuracy to resolve the single possible values of an observable with a discrete spectrum like spin. Whether or not you can achieve this is a question of engineering a good enough measurement device but not a fundamental problem within the theory.
So you say that Born's rule is not about real measurements but about fictitious (sufficiently well resolved) idealizations of them! But this is not what the rule says. It claims to be valid for each measurement, not only for idealizations!
 
  • #261
A. Neumaier said:
The ensemble exists as an ensemble of many identically (by shuffling) prepared single cases. Just like identically prepared electrons are 'realized' in the measurement as different results.
Apparently you have never played poker. Apart from all the possible hands of cards there are all the other players at the table, their body language, the position of the dealer, the betting history and the stack sizes. As I said, most poker players never encounter the same situation twice. They merely look for similarities and possibilities and make an assessment based on their limited abilities every single hand they play.

In fact, this problem exists in all physical situations. Even every toss of a coin is a different event. No two events are ever exactly the same except in the limited terms we choose to describe them -- and that even includes your "identically prepared electrons" which, at the very least, differ in terms of the time (and therefore the environmental conditions) at which they are prepared.

My point remains: it is probability which is the fundamentally useful concept. Statistics are derivative and based on a limited description that enables counting of events where the differences are ignored.
 
  • #262
mikeyork said:
... most poker players never encounter the same situation twice.
That just means that poker is a more complex probabilistic system than a quantum spin, which has only 2 possible situations.

The paths of two Brownian particles are also never the same, but still Brownian motion is described by an ensemble. A million games of poker are in essence no different from a million paths of a Brownian particle; only the detailed model is different.
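A minimal Python sketch (assuming a standard Wiener process as the model) of how unique paths still form a well-defined ensemble:

```python
# Minimal sketch: an ensemble of Brownian (Wiener) paths. No two paths agree,
# but the ensemble statistics are perfectly well defined.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 10_000, 1_000, 1e-3   # total time T = 1
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = increments.cumsum(axis=1)            # one row per ensemble member

final = paths[:, -1]                          # W(T) across the ensemble
print("ensemble mean of W(T):    ", round(final.mean(), 4))  # theory: 0
print("ensemble variance of W(T):", round(final.var(), 4))   # theory: T = 1
```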
 
  • #263
A. Neumaier said:
That just means that poker is a more complex probabilistic system than a quantum spin, which has only 2 possible situations.
Yes, it's a probabilistic system.
A. Neumaier said:
The paths of two Brownian particles are also never the same, but still Brownian motion is described by an ensemble.
No. Every Brownian particle is described by a distribution function (i.e. probability). "Never the same" and "ensemble" are mutually contradictory. We make the ensemble approximation by (1) ignoring the differences and (2) invoking the law of large numbers.
 
  • Like
Likes Auto-Didact and RockyMarciano
  • #264
mikeyork said:
it is probability which is the fundamentally useful concept. Statistics are derivative

What is your fundamental definition of "probability" as a concept, if it is not based on statistics from ensembles?
 
  • #265
PeterDonis said:
What is your fundamental definition of "probability" as a concept, if it is not based on statistics from ensembles?
Look at my post #3. Probability is associated with frequency counting, but it doesn't have to be defined that way. Probability is a theoretical quantity that can be mathematically encoded in many ways (QM provides one such encoding in the magnitude of the scalar product ##|\langle a|\psi\rangle|##); we just require that we be able to calculate asymptotic relative frequencies with it.

We can never actually measure probability by statistics because we cannot have an infinite number of events (even if we ignore differences that we think are unimportant).
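A minimal Python sketch of this point (using the spin-up probability 0.1495 quoted earlier in the thread as the theoretical value):

```python
# Minimal sketch: the relative frequency approaches the theoretical
# probability but differs from it at every finite n.
import random

p_theory = 0.1495
random.seed(1)
for n in (10, 1_000, 100_000, 1_000_000):
    hits = sum(random.random() < p_theory for _ in range(n))
    print(f"n={n:>9}: relative frequency = {hits / n:.5f}")
# The estimates hover around 0.1495 without ever equalling it exactly,
# which is the sense in which probability cannot be "measured" outright.
```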
 
  • Like
Likes Auto-Didact and RockyMarciano
  • #266
mikeyork said:
We can never actually measure probability [...]
Which makes it pretty useless, actually. Operational definitions are at least practically relevant.
 
  • Like
Likes PeterDonis
  • #267
Mentz114 said:
Which makes it pretty useless, actually. Operational definitions are at least practically relevant.
So let's give up on theory? All that stuff about Hilbert spaces is useless guff?

Professional poker players should retire?
 
  • #268
A really interesting practical example of the failure of statistics was the 2008 financial crash. Although there were many contributory factors, the single most critical mathematical factor was the assumption that probabilities could be deduced from statistics. The particular model that was faulty was "Geometric Brownian Motion" -- the assumption that log prices were normally distributed, so that one only had to measure the first two moments (mean and variance).

More generally, a finite number of events can only tell you a finite number of moments, yet the higher moments of the underlying distribution function (probability) might be infinite. In 2008, this manifested in the phenomenon of "fat tails".
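A minimal Python sketch of the moment problem (assumption: a Student-t distribution with 3 degrees of freedom as a fat-tailed stand-in for log-returns):

```python
# Minimal sketch: the sample kurtosis never settles, because the underlying
# fourth moment of a t(3) distribution is infinite.
import numpy as np

rng = np.random.default_rng(2008)
for n in (10**3, 10**4, 10**5, 10**6):
    x = rng.standard_t(df=3, size=n)   # finite variance, infinite kurtosis
    m, v = x.mean(), x.var()
    kurt = ((x - m)**4).mean() / v**2
    print(f"n={n:>7}: sample kurtosis = {kurt:8.1f}")
# A Gaussian would give ~3.0 at every n; here the estimates wander upward
# erratically, so mean and variance alone badly understate tail risk.
```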

The same false assumption of a Gaussian distribution function was responsible for the demise of Long Term Capital Management in 1998.
 
  • Like
Likes RockyMarciano
  • #269
You misunderstand me again. Of course you can apply statistics and Born's rule also to inaccurate measurements, but as usually stated it's about precise measurements, and I don't think that it helps to resolve our disagreement by introducing more and more complicating but trivial issues into the discussion before the simple cases are resolved.

You still don't give a clear explanation for your claim that Born's rule doesn't apply. If this were the case, it would imply that you could clearly disprove QT by a reproducible experiment. AFAIK that's not the case!
 
  • #270
mikeyork said:
A really interesting practical example of the failure of statistics was the 2008 financial crash. Although there were many contributory factors, the single most critical mathematical factor was the assumption that probabilities could be deduced from statistics. The particular model that was faulty was "Geometric Brownian Motion" -- the assumption that log prices were normally distributed, so that one only had to measure the first two moments (mean and variance).

More generally, a finite number of events can only tell you a finite number of moments, yet the higher moments of the underlying distribution function (probability) might be infinite. In 2008, this manifested in the phenomenon of "fat tails".

The same false assumption of a Gaussian distribution function was responsible for the demise of Long Term Capital Management in 1998.
Well, here probability theory and statistics as you describe it were failing simply, because the assumptions of a certain model were wrong. It's not a failure of the application of probability theory per se. Hopefully, the economists learned from their mistakes and refine their models to better describe the real world. That's how empirical sciences work! If a model turns out to be wrong, you try to substitute it by a better one.
 
  • #271
vanhees71 said:
Well, here probability theory and statistics as you describe it were failing simply, because the assumptions of a certain model were wrong. It's not a failure of the application of probability theory per se. Hopefully, the economists learned from their mistakes and refine their models to better describe the real world. That's how empirical sciences work! If a model turns out to be wrong, you try to substitute it by a better one.
Not entirely. My point is that although you can predict the moments from a theoretical distribution function, the reverse is not true -- you cannot obtain the distribution function from the empirical moments. That is why probability is fundamental.
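A standard counterexample illustrating this: a standard normal distribution and a Laplace distribution with scale ##b = 1/\sqrt{2}## share the same mean 0 and variance 1, since
$$\operatorname{Var}(\text{Laplace}(b)) = 2b^2 = 1,$$
yet the Laplace tail probability ##P(|X| > 4) = e^{-4\sqrt{2}} \approx 3.5 \times 10^{-3}## is roughly fifty times the Gaussian value ##\approx 6.3 \times 10^{-5}##. Matching the first two empirical moments therefore cannot distinguish the two distribution functions.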
 
  • #272
mikeyork said:
More generally, a finite number of events can only tell you a finite number of moments
In particular, a single measurement is completely unrelated to the distribution.
 
  • Like
Likes vanhees71
  • #273
vanhees71 said:
your claim that Born's rule doesn't apply. If this was the case that would imply that you can clearly disprove QT by a reproducible experiment.
No. Failure of Born's rule is completely unrelated to failure of quantum mechanics. The latter is applied in a much more flexible way than the Born rule demands. It seems that we'll never agree on this.
 
  • #274
A. Neumaier said:
In particular, a single measurement is completely unrelated to the distribution.
As I have repeatedly emphasized, but you have repeatedly evaded, it is not the distribution (statistics) that matters but the distribution function (probability). This enables you to predict which results are more likely. To claim that the distribution function is "completely unrelated" to a single measurement is ridiculous.
 
  • #275
mikeyork said:
Look at my post #3.

Which says:

mikeyork said:
any mathematical encoding that tells us how to compute the relative frequency can serve as a theoretical probability.

In other words, the "fundamental concept" appears to be relative frequency--i.e., statistics. So I still don't understand your statement that probability is a "fundamental concept" while statistics is "derived".
 
  • #276
mikeyork said:
So let's give up on theory? All that stuff about Hilbert spaces is useless guff?

Hilbert spaces don't require any "fundamental concept" of probability. They are just vector spaces with some additional properties.

mikeyork said:
Professional poker players should retire?

Are you claiming that professional poker players routinely compute, say, expectation values for another player bluffing?

Obviously there are known probabilities for various poker hands, but those are based on, um, relative frequencies, i.e., statistics. So to the extent that there are quantitative probabilities in poker, they are based on statistics. Everything else you mention is just on-the-spot subjective judgments that are, at best, qualitative, which means they're irrelevant to this discussion.
 
  • #277
PeterDonis said:
Which says: [...] In other words, the "fundamental concept" appears to be relative frequency--i.e., statistics.
No, no, no! Frequency counting is just a way to test a probability theory -- in the same way that scattering experiments are how you test a theory of interaction.
 
  • #278
mikeyork said:
Frequency counting is just a way to test a probability theory -- in the same way that scattering experiments are how you test a theory of interaction.

In which case you still haven't answered my question: what is the "fundamental concept" of probability? All you've said so far is that, whatever it is, we can test it using statistics. (Your statement in post #3 amounts to the same thing--it's "some thingie I can use to calculate something I can test by statistics".)
 
  • #279
mikeyork said:
in the same way that scattering experiments are how you test a theory of interaction.

Ok, but if I told you that my "theory of interaction" was "I have some thingie I use to compute scattering cross sections which I then test against measured data", would you be satisfied?
 
  • #280
PeterDonis said:
Obviously there are known probabilities for various poker hands, but those are based on, um, relative frequencies, i.e., statistics. So to the extent that there are quantitative probabilities in poker, they are based on statistics. Everything else you mention is just on the spot subjective judgments that are, at best, qualitative, which means they're irrelevant to this discussion.
Actually the relative frequencies are based on a probability assumption -- that each card is equally probable. As regards the rest, it's just your subjective judgment and I've already refuted it several times.
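A minimal Python sketch of this point: the flush probability follows from the equal-likelihood assumption by pure counting, with no frequency data involved.

```python
# Minimal sketch: P(flush) from the assumption that all 5-card hands are
# equally likely -- pure combinatorics, no observed frequencies.
from math import comb

total_hands = comb(52, 5)                   # 2,598,960 equally likely hands
flushes = 4 * comb(13, 5) - 4 * 10          # same-suit hands minus straight flushes
print("P(flush) =", flushes / total_hands)  # ~0.00197
```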
 