Evaluate this paper on the derivation of the Born rule

In summary, the paper discusses the Curie-Weiss model of the quantum measurement process and how it can be used to derive the Born rule.
  • #281
PeterDonis said:
Ok, but if I told you that my "theory of interaction" was "I have some thingie I use to compute scattering cross sections which I then test against measured data", would you be satisfied?
No. Because "a thingie" doesn't allow you to calculate anything. QM gives you a probability theory that does.
 
  • #282
mikeyork said:
the relative frequencies are based on a probability assumption -- that each card is equally probable.

We don't have a common definition of "probability" so I can't accept this statement as it stands. I would state the assumption as: we assume that each hand is generated by choosing at random 5 cards from a deck containing the standard 52 cards and no others.

mikeyork said:
I've already refuted it several times.

Where? Where have you given the quantitative probabilities that, e.g., professional poker players calculate for other players bluffing?
 
  • #283
mikeyork said:
QM gives you a probability theory that does.

QM gives you a mathematical framework that does. But you have not explained why you think this mathematical framework is a "probability theory", except to say that "it lets me calculate stuff I can test against measured statistics". If I told you that my theory of interaction was "I have this mathematical framework that lets me calculate scattering cross sections which I then test against measured data", with no other information at all, would you be satisfied?
 
  • #284
PeterDonis said:
We don't have a common definition of "probability" so I can't accept this statement as it stands. I would state the assumption as: we assume that each hand is generated by choosing at random 5 cards from a deck containing the standard 52 cards and no others.
What does random mean if not equiprobable?
PeterDonis said:
Where? Where have you given the quantitative probabilities that, e.g., professional poker players calculate for other players bluffing?
They are not that numerically precise. In a game of poker, most factors that affect probability depend on judgment and experience. However, their interpretation of their experience is based on the probabilistic concept.
 
  • #285
PeterDonis said:
QM gives you a mathematical framework that does. But you have not explained why you think this mathematical framework is a "probability theory", except to say that "it lets me calculate stuff I can test against measured statistics".
I explained a lot more than that. I have now twice explained in this thread why scalar products offer a probability theory. The Born rule is the icing on the cake.
 
  • #286
mikeyork said:
What does random mean if not equiprobable?

It means a certain procedure for picking the cards: for example, you fan out the cards in front of me, I close my eyes and pick 5 of them. Or we have a computer program that numbers the cards from 1 to 52 and then uses one of the built-in functions in whatever programming language we are using to pick 5 "random" numbers from that list (where "random" here means "using the pseudorandom number generator built into the operating system"). Or...

In other words, "random" here is operationalized. If you ask what justifies a particular operationalization, it will come down to some argument about relative frequencies of objects chosen by that operational method, i.e., statistics. So if we even use the term "equiprobable", we mean it in a way that is ultimately justified by statistics. So still no "fundamental concept" of probability independent of statistics.

mikeyork said:
They are not that numerically precise.

They aren't numerical, period, as far as I can tell.

mikeyork said:
In a game of poker, most factors that affect probability depend on judgment and experience. However, their interpretation of their experience is based on the probabilistic concept.

What "probabilistic concept"? You still haven't told me what it is. All you've done is wave your hands about "factors" and "judgment" and "experience".
 
  • #287
mikeyork said:
I have now twice explained in this thread why scalar products offer a probability theory.

Your "explanation" amounts, as I said before, to saying that "scalar products let me calculate things that I can test against statistics". So, once more, I don't see how this gives a "fundamental concept" of probability that is independent of statistics.
 
  • #288
PeterDonis said:
So if we even use the term "equiprobable", we mean it in a way that is ultimately justified by statistics.
No, your "random" picking verifies the probability theory that all cards are equally probable.
 
  • #289
PeterDonis said:
Your "explanation" amounts, as I said before, to saying that "scalar products let me calculate things that I can test against statistics". So, once more, I don't see how this gives a "fundamental concept" of probability that is independent of statistics.
I wrote a lot more than that. I'm not going to repeat it. I can't force you to read.
 
  • #290
mikeyork said:
your "random" picking verifies the probability theory that all card are equally probable.

I did not use your formulation of "probability" in my scenario. In my scenario, "random" has nothing whatever to do with "probability". It's a reference to a particular kind of experimental procedure, and that's it. I did so precisely to illustrate how the "probabilities" for poker hands could be operationalized in terms of a procedure that makes no reference at all to any "fundamental concept" of probability.

You can't make such a "fundamental concept" appear just by saying so. You have to show me what it is, and why it has to appear in any scenario such as the "probabilities" of poker hands. So far your only answer has been "scalar products", but I didn't calculate any scalar products and my operationalized procedure doesn't require any.
 
  • #291
PeterDonis said:
I did not use your formulation of "probability" in my scenario. In my scenario, "random" has nothing whatever to do with "probability". It's a reference to a particular kind of experimental procedure, and that's it. I did so precisely to illustrate how the "probabilities" for poker hands could be operationalized in terms of a procedure that makes no reference at all to any "fundamental concept" of probability.
It doesn't matter how many cards you pull; you don't know that they are random. You don't even know that they are equally probable until you have pulled an infinite number of them. Equal probability is always a theoretical assumption to be tested (and never proven).
 
  • #292
mikeyork said:
It doesn't matter how many cards you pull; you don't know that they are random.

Sure I do; I defined "random", for my purposes in the scenario, to mean "pulled according to the procedure I gave". If you object to my using the word "random" in this way, I'll change the word, not the procedure.

mikeyork said:
Equal probability is always a theoretical assumption

For you, perhaps; but I made no such assumption at all, so I don't have to care whether it is "theoretical" or "requires an infinite number of cards pulled to verify", or anything like that.
 
  • #293
mikeyork said:
the probabilistic concept.

Let me restate the question I've asked repeatedly in a different way: presumably this "probabilistic concept" you refer to is not something you just made up, but is something that appears in some standard reference on probability theory. What reference?
 
  • #294
PeterDonis said:
Sure I do; I defined "random", for my purposes in the scenario, to mean "pulled according to the procedure I gave". If you object to my using the word "random" in this way, I'll change the word, not the procedure.
For you, perhaps; but I made no such assumption at all, so I don't have to care whether it is "theoretical" or "requires an infinite number of cards pulled to verify", or anything like that.
Then you have no theory with which to predict the frequencies. But I have because equal probability gives me that theory.
 
  • #295
PeterDonis said:
Let me restate the question I've asked repeatedly in a different way: presumably this "probabilistic concept" you refer to is not something you just made up, but is something that appears in some standard reference on probability theory. What reference?
There are masses of textbooks on probability theory. Their objective is to predict frequencies, not count them.

As regards scalar products in QM, like I said, it is a very simple argument and I've already described it twice in this thread. I'm not going to do it again.
 
  • #296
mikeyork said:
Then you have no theory with which to predict the frequencies. But I have because equal probability gives me that theory.

In other words, now you're using "equal probability" to mean an assumption about frequencies? Basically, in the case of the cards, it would be "each of the 52 cards in a standard deck will appear with the same frequency". Calling this a "theory that predicts frequencies" doesn't change the fact that the assumption I just described is logically equivalent to the assumption "each of the 52 cards in a standard deck is equally probable". See below.

mikeyork said:
There are masses of textbooks on probability theory. Their objective is to predict frequencies, not count them.

On the frequentist interpretation of probability, which is AFAIK the one that the majority of the "masses of textbooks" use, probabilities are relative frequencies. Some relative frequencies are predicted (e.g., the relative frequency of four of a kind in poker), but those predictions are based on other relative frequencies of more elementary events (e.g., the relative frequency of each individual card in a standard 52 card deck).

Evidently you are not using this interpretation. The other standard interpretation is Bayesian. Is that the one you're using? Under the Bayesian interpretation, the "equally probable" assumption about, e.g., each card in a standard 52 card deck is just a uniform prior over a finite set with 52 elements. This would be consistent with your saying that probability theory is for predicting frequencies, but I don't see the connection with scalar products.
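
To make the dependence explicit, here is a rough sketch (the trial count is arbitrary) of how the predicted relative frequency of four of a kind reduces to the elementary assumption about the 52 individual cards, and how the prediction gets checked against counted statistics:

```python
import random
from collections import Counter
from math import comb

# Prediction from the elementary assumption: each card appears with the same
# relative frequency, so each of the C(52,5) hands carries the same weight;
# 13 ranks for the quadruple times 48 choices for the odd card.
predicted = 13 * 48 / comb(52, 5)   # about 0.00024

# Check against counted statistics: deal many hands by an operational
# procedure and count how often four of a kind actually turns up.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
trials, hits = 1_000_000, 0
for _ in range(trials):
    hand = random.sample(deck, 5)
    if 4 in Counter(rank for rank, _ in hand).values():
        hits += 1

print(predicted, hits / trials)
```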
 
  • #297
mikeyork said:
There are masses of textbooks on probability theory.

Are there masses of textbooks explaining how QM scalar products are probabilities? How, for example, they obey the Kolmogorov axioms?
 
  • #298
PeterDonis said:
In other words, now you're using "equal probability" to mean an assumption about frequencies?
No.
PeterDonis said:
Basically, in the case of the cards, it would be "each of the 52 cards in a standard deck will appear with the same frequency". Calling this a "theory that predicts frequencies" doesn't change the fact that the assumption I just described is logically equivalent to the assumption "each of the 52 cards in a standard deck is equally probable".
No, it's not. It may be empirically similar, but it will only be equivalent if you happen to get equal frequencies over an infinite number of pulled cards.
PeterDonis said:
On the frequentist interpretation of probability, which is AFAIK the one that the majority of the "masses of textbooks" use, probabilities are relative frequencies.
The distinction, as I have repeatedly said, is between measuring/counting and predicting. Just like everything else in physics. Either you have a theory or you don't.
PeterDonis said:
but I don't see the connection with scalar products.
As I said, you have to go back and read it. It's a really simple argument but I don't care in the least if you don't agree with it and I'm not going to argue about it any more.
 
  • #299
mikeyork said:
you have to go back and read it

I have read your posts in this thread repeatedly and I still don't see it. So I guess we'll have to leave it there.
 
  • #300
PeterDonis said:
Are there masses of textbooks explaining how QM scalar products are probabilities? How, for example, they obey the Kolmogorov axioms?
No. You asked me about the concept of probability theory. QM is a special case and like I said, I don't care if you don't like my argument.
 
  • #301
mikeyork said:
I don't care if you don't like my argument.

Is it just your argument? (If it is, it's off topic here--you should be publishing it as a paper.) Or does it appear in, e.g., some standard reference on QM? If so, what reference?
 
  • #302
PeterDonis said:
Is it just your argument? (If it is, it's off topic here--you should be publishing it as a paper.) Or does it appear in, e.g., some standard reference on QM? If so, what reference?
It's not just my argument. It's a trivially simple logical observation about the nature of the Born rule -- what this thread was originally about until you and Mentz114 derailed it.
 
  • #303
A. Neumaier said:
No. Failure of Born's rule is completely unrelated to failure of quantum mechanics. The latter is applied in a much more flexible way than the Born rule demands. It seems that we'll never agree on this.
No, we'll never agree to this, because to use QT in "a much more flexible way" (whatever you mean by this), you need Born's rule to derive it.

For example, in some posting above you complained about the inapplicability of Born's rule to the case where the resolution of the measurement apparatus is not accurate enough to resolve discrete values of some observable (e.g., spin). This, however, is not true. In this case you of course need more than Born's rule: you need Born's rule to calculate probabilities for precisely measuring the observable, and then on top you need a description of the "detector acceptance and resolution". Usually that's empirically determined using "calibrated probes". Nevertheless, the fundamental connection between the QT formalism and what's observed in experiments is still Born's rule. Of course, the cases where you can apply Born's rule in its fundamental form are rare, because it's usually difficult to build very precise measurement devices, but this doesn't invalidate Born's rule as a fundamental part of the (minimal) interpretation of QT that makes it applicable to real-world experiments.

Also, the often-cited POVM formalism, which generalizes Born's rule to more general "inaccurate measurements", is based on Born's rule.
 
  • #304
PeterDonis said:
Which says:
In other words, the "fundamental concept" appears to be relative frequency--i.e., statistics. So I still don't understand your statement that probability is a "fundamental concept" while statistics is "derived".
Indeed. There is also a debate about the general meaning of probabilities in application to empirical facts (statistics), independent of QT. Some people seem to deny the meaning of probabilities as "frequencies of occurrence" when a random experiment is repeated on an ensemble of equally prepared setups of this experiment. Nobody, particularly not the QBists (adherents of another modern "interpretation" of QT), has ever convincingly explained to me how I should be able to empirically check a hypothesis (i.e., assumed probabilities or probability distributions associated with a random experiment) other than by using the usual "frequentist interpretation" of probabilities. It is also clear that probability theory does not tell you which probability distribution might be a successful description, but you need to "guess" somehow the probabilities for the outcome of random experiments and then verify or falsify them by observation. On the other hand, the frequentist interpretation has a foundation within probability theory itself, in terms of theorems like the law of large numbers, and this is a convincing argument for this interpretation: it is what makes probability theory applicable to concrete real-world problems, by giving a foundation for the empirical investigation of assumed probabilities/probability distributions.

In extension to pure probability theory (as, e.g., formalized by the Kolmogorov axioms) there are also ideas about how to "guess" probabilities. One is the maximum entropy method, which defines a measure for the missing information (classically the Shannon entropy) which has to be maximized under the constraint of given information about the system one aims to describe by a probability function or distribution. Of course, it doesn't tell you which information you should have to get a good guess for these probabilities in a given real-world situation.
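
A standard worked example of the method (nothing QT-specific): maximize ##H=-\sum_i p_i \ln p_i## under the constraints ##\sum_i p_i = 1## and ##\sum_i p_i E_i = \bar{E}##. Introducing Lagrange multipliers and setting the variation to zero gives
$$p_i = \frac{\exp(-\lambda E_i)}{\sum_j \exp(-\lambda E_j)},$$
i.e., the canonical (Boltzmann) distribution, with ##\lambda## fixed by the given value of ##\bar{E}##; with the normalization constraint alone, the same procedure gives the uniform distribution ##p_i = 1/N##.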
 
  • #305
vanhees71 said:
It is also clear that probability theory does not tell you which probability distribution might be a successful description, but you need to "guess" somehow the probabilities for the outcome of random experiments and then verify or falsify them by observation.
Isn't that how all science works?

A probability theory takes a physical idea (e.g., the kinematics of particle collisions), adds in a random principle, and deduces a distribution for some variable (e.g., a normal distribution). Adding in a random principle is just like any other hypothesis in a scientific theory.

The whole point of such a theory, just like any other theory, is to predict, not to count. And its applications extend far beyond just predicting frequencies. For example, financial derivative pricing theory is critically based on the theory of random processes.

vanhees71 said:
On the other hand, the frequentist interpretation has a foundation within probability theory itself, in terms of theorems like the law of large numbers, and this is a convincing argument for this interpretation: it is what makes probability theory applicable to concrete real-world problems, by giving a foundation for the empirical investigation of assumed probabilities/probability distributions.

In extension to pure probability theory (as, e.g., formalized by the Kolmogorov axioms) there are also ideas about how to "guess" probabilities. One is the maximum entropy method, which defines a measure for the missing information (classically the Shannon entropy) which has to be maximized under the constraint of given information about the system one aims to describe by a probability function or distribution. Of course, it doesn't tell you which information you should have to get a good guess for these probabilities in a given real-world situation.
But, of particular relevance to the Born rule, one can encode probabilities in many ways other than directly hypothesizing a distribution. One simply builds a theory of some quantity f(x) and then expresses P(x) as a unique function of f(x). The Born rule says to do that in a specific way via the scalar product. And, as I have tried to explain, this is quite profound because, given the usual Hilbert space picture, a moderately stable universe in which small transitions are more likely than large transitions suggests (though it does not prove) that P(x) should be a monotonically increasing function of the magnitude of the scalar product.
 
  • #306
mikeyork said:
It's a trivially simple logical observation about the nature of the Born rule

Ok, so you're saying that this...

mikeyork said:
one can encode probabilities in many ways other than directly hypothesizing a distribution. One simply builds a theory of some quantity f(x) and then expresses P(x) as a unique function of f(x).

...is a "trivially simple logical observation", and so if I look in any textbook on probability theory, I will see it referred to? And then the addition of this...

mikeyork said:
The Born rule says to do that in a specific way via the scalar product.

...is a "trivially simple logical observation" that I will see in any textbook on QM?

mikeyork said:
what this thread was originally about until you and Mentz114 derailed it.

IIRC you were the one who brought up the idea of a "fundamental concept of probability" independent of statistics. That seems to me to be a thread derail, since Born's rule only claims to relate squared moduli of amplitudes to statistics of ensembles of observations.
 
  • #307
mikeyork said:
But, of particular relevance to the Born rule, one can encode probabilities in many ways other than directly hypothesizing a distribution. One simply builds a theory of some quantity f(x) and then expresses P(x) as a unique function of f(x). The Born rule says to do that in a specific way via the scalar product. And, as I have tried to explain, this is quite profound because, given the usual Hilbert space picture, a moderately stable universe in which small transitions are more likely than large transitions suggests (though it does not prove) that P(x) should be a monotonically increasing function of the magnitude of the scalar product.
I don't understand what you are after with this. Could you give a simple physics example? Formally in QT it's clear

If you have a state, represented by the statistical operator ##\hat{\rho}##, then you can evaluate the probability (distribution) to find a certain value of any observable you want. For an arbitrary observable ##A## on the system, represented by the self-adjoint operator ##\hat{A}## with orthonormalized (generalized) eigenvectors ##|a,\lambda \rangle## (where ##\lambda## is some variable or a finite set of variables labelling the eigenstates of ##\hat{A}## with eigenvalue ##a##), the probability to measure the value ##a## is
$$P(a)=\sum_{\lambda} \langle a,\lambda|\hat{\rho}|a,\lambda \rangle.$$
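
As a minimal numerical illustration of this formula (the state and observable here are just toy choices, not anything specific to the discussion):

```python
import numpy as np

# Toy check of P(a) = sum_lambda <a,lambda| rho |a,lambda> for a qubit,
# where the eigenvalues of A are non-degenerate and the lambda-sum is trivial.
psi = np.array([1.0, 1.0]) / np.sqrt(2)                        # |psi> = (|0> + |1>)/sqrt(2)
rho = 0.9 * np.outer(psi, psi.conj()) + 0.1 * np.eye(2) / 2    # slightly mixed state

# Eigenvectors of A = sigma_z, labelled by its eigenvalues a = +1, -1:
eigvecs = {+1: np.array([1.0, 0.0]), -1: np.array([0.0, 1.0])}

probs = {a: float(np.real(v.conj() @ rho @ v)) for a, v in eigvecs.items()}
print(probs)                   # {1: 0.5, -1: 0.5} for this particular rho
print(sum(probs.values()))     # 1.0, as required of a probability distribution
```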
 
  • #308
PeterDonis said:
...is a "trivially simple logical observation", and so if I look in any textbook on probability theory, I will see it referred to?
Any textbook that discusses a lognormal distribution gives you an explicit example: ##f(x) = \log x##, ##P(x) = G(f(x))## where ##G## is a Gaussian. Almost any book on stochastic processes will explain why the Ito arithmetic Brownian process for ##\log x##, with the solution ##P(x) = G(\log x)##, is more natural (as well as simpler to understand) than trying to express the geometric Brownian process for ##x## directly. (That is because it is scale-independent.)
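
For instance, a bare-bones simulation along those lines (all parameter values arbitrary): evolve the arithmetic Brownian process for ##\log x## directly and observe that ##\log x_T## comes out Gaussian, i.e. ##x_T## is lognormal:

```python
import numpy as np

# Arithmetic Brownian motion for log x, equivalent to geometric Brownian motion for x:
# d(log x) = (mu - sigma^2/2) dt + sigma dW
mu, sigma, T, n_steps, n_paths = 0.05, 0.2, 1.0, 252, 100_000
dt = T / n_steps
rng = np.random.default_rng(0)

log_x = np.zeros(n_paths)     # start every path at x = 1, i.e. log x = 0
for _ in range(n_steps):
    log_x += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# log x_T is Gaussian with mean (mu - sigma^2/2) T and std sigma sqrt(T),
# so x_T = exp(log x_T) is lognormal.
print(log_x.mean(), log_x.std())
```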

PeterDonis said:
...is a "trivially simple logical observation" that I will see in any textbook on QM?
Mostly yes, though some may express it differently: ##f(a) = |\langle a|\psi\rangle|##, ##P(a|\psi) = f(a)^2##. That is Born's rule.
PeterDonis said:
IIRC you were the one who brought up the idea of a "fundamental concept of probability" independent of statistics.
Always in the context of Born's rule until others, such as yourself, interjected with your primitive view of probability.
 
  • #309
vanhees71 said:
I don't understand what you are after with this. Could you give a simple physics example? Formally in QT it's clear

If you have a state, represented by the statistical operator ##\hat{\rho}##, then you can evaluate the probability (distribution) to find a certain value of any observable you want. For an arbitrary observable ##A## on the system, represented by the self-adjoint operator ##\hat{A}## with orthonormalized (generalized) eigenvectors ##|a,\lambda \rangle## (where ##\lambda## is some variable or a finite set of variables labelling the eigenstates of ##\hat{A}## with eigenvalue ##a##), the probability to measure the value ##a## is
$$P(a)=\sum_{\lambda} \langle a,\lambda|\hat{\rho}|a,\lambda \rangle.$$
Prepare a state ##|\psi\rangle##. Project it into the ##A## representation with eigenvalues ##a_i##:

##|\psi\rangle = \sum_i |a_i\rangle\langle a_i|\psi\rangle##

Born's rule tells you that if you try to measure ##A##, then ##P(a_i|\psi) = |\langle a_i|\psi\rangle|^2##.

I really have no idea why anyone should have such difficulty with this.

As regards the relevance of small transitions versus big transitions, first consider the analogy of Cartesian vectors. Two unit vectors that are close to each other will have a large scalar product compared to two vectors that are nearly orthogonal. Might two state vectors that are "near" each other in the same sense of a larger scalar product represent states that are more similar than states that have less overlap in Hilbert space? Now imagine you prepare a state within a narrowly defined momentum band, measure position as lightly as possible, then measure momentum as lightly as possible: would you expect the measured momentum to be nearer its original band or farther away?
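
To put a number on the Cartesian analogy (a throwaway numerical sketch, with arbitrarily chosen vectors):

```python
import numpy as np

def overlap(u, v):
    """Magnitude of the scalar product of two normalized vectors."""
    return abs(np.vdot(u, v))

psi = np.array([1.0, 0.0, 0.0])

near = np.array([0.99, 0.10, 0.10]); near /= np.linalg.norm(near)   # "close" to psi
far  = np.array([0.05, 0.70, 0.70]); far  /= np.linalg.norm(far)    # nearly orthogonal to psi

print(overlap(psi, near))   # ~0.99: a "small" transition, large Born weight
print(overlap(psi, far))    # ~0.05: a "large" transition, small Born weight
```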
 
  • #310
That's identical to what I wrote for the special case of a pure state, where ##|\psi \rangle## is a representative of the ray defining this state. The statistical operator in this case is ##\hat{\rho}=|\psi \rangle \langle \psi|##.

I still don't understand, what you want to say (also in regard to #308).
 
  • #311
vanhees71 said:
I still don't understand, what you want to say (also in regard to #308).
Mathematically, yes, which is why I don't understand why you have had such difficulty with my posts. However, my interpretation is different.

There are two related things that I have tried to bring to this thread in order to make a simple remark about the Born rule being a very natural interpretation of the scalar product. I'll make one last attempt.

1. Historically we tend to be fixated on equating "probability" with a distribution function, which equates to asymptotic relative frequency. But if we think of "probability" as a more amorphous idea which is not necessarily a distribution function but something that enables us to calculate a unique distribution function, then any mathematical encoding that does this would do in principle. In particular, an encoding ##f(a)## for which ##P(a) = G(f(a))##, where ##G(f)## is monotonically increasing in ##f##, gives ##f(a)## an appropriate significance. In QM, the Born rule suggests that the scalar product, ##f(a) = |\langle a|\psi\rangle|##, is one such encoding. There is nothing in this idea except a simple revision of the concept of probability that distinguishes it from, yet enables us to calculate, a distribution function; the scalar product is the fundamental underlying idea. If you don't like this and want to stick with probability as meaning a distribution function, then fine. I'm just pointing out that probability can be a much more general idea, and in QM the scalar product serves this purpose well.

2. The relative stability of the universe -- change is gradual rather than catastrophic -- gives a natural significance to the scalar product in QM as serving as a probability encoding. You just have to interpret this gradual change as meaning that transitions between states that are "close" to each other in the sense of a large scalar product are more likely than transitions between states that are less "close". This clearly suggests that ##P(a|\psi)## should be a monotonically increasing function of ##|\langle a|\psi\rangle|##.

And this is why I say that ##|\langle a|\psi\rangle|## offers a "natural" expression of "probability" in QM. I am not saying it is a proof; just that it is a very reasonable and attractive idea. I also think it suggests that ##|\langle a|\psi\rangle|## offers a deeper, more fundamental, idea of "probability" than a simple distribution function. But this is a secondary (and primarily semantic) issue that you can ignore if you wish.
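
As a simple consistency check on point 2 (a check, not a derivation): by completeness of the eigenbasis,
$$\sum_a |\langle a|\psi\rangle|^2 = \sum_a \langle\psi|a\rangle\langle a|\psi\rangle = \langle\psi|\psi\rangle = 1,$$
so of all the monotone encodings ##P(a|\psi) = G(|\langle a|\psi\rangle|)##, the particular choice ##G(f)=f^2## is automatically normalized for every state, which is part of what singles out the Born rule as the natural candidate.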
 
  • #312
The quote in my last post #311 should have read
vanhees71 said:
That's identical to what I wrote for the special case of a pure state, where ##|\psi \rangle## is a representative of the ray defining this state.
for my remark "Mathematically, yes,..." to make sense. Sorry about that; I don't know how it got screwed up.
 
  • #313
Mike, another thought, along similar lines. The scalar product is a function of the amplitudes of the initial and final states. The final state is an eigenstate of the measuring equipment. If the measuring equipment is sufficiently macroscopic, the final state will be pretty close to the initial state in many scenarios. The Born rule then arises as a neat approximation.
 
  • #314
Jilang said:
Mike, another thought, along similar lines. The scalar product is a function of the amplitudes of the initial and final states. The final state is an eigenstate of the measuring equipment. If the measuring equipment is sufficiently macroscopic, the final state will be pretty close to the initial state in many scenarios. The Born rule then arises as a neat approximation.
I would say that the final state of the apparatus will be close to its initial state, but such a small change in the apparatus could be compensated by what is a relatively significant change in the observed microscopic system. (E.g. the energy/momentum exchanged between apparatus and, say, a particle, might be small compared to the apparatus, but large compared to the particle.)

Also remember that in a scattering experiment, for instance, or particle decay, the apparatus does not take part in the actual transition; it merely prepares the collision and detects the individual resultant particles after they leave the collision location. So we still need a hypothesis that small transitions in the total system are more likely, whether macroscopic or microscopic -- i.e. the Born rule. Then we can view this as the reason for a fairly stable (gradually evolving) universe.
 
  • #315
mikeyork said:
So we still need a hypothesis that small transitions in the total system are more likely, whether macroscopic or microscopic -- i.e. the Born rule. Then we can view this as the reason for a fairly stable (gradually evolving) universe.
I don't understand. What transitions? If these transitions are multifactorial noise, then all you are saying is that their spectrum tends to a Gaussian of mean zero, viz. small changes are more likely than large ones.
 