Quantum Bayesian Interpretation of QM

In summary, the QBism model presented by Fuchs & Schack is an attempt to reformulate quantum mechanics in a way that removes some of the paradoxes and inconsistencies found in the standard formulation. However, Fields argues that the model is flawed because it does not provide a physical distinction between observers and the systems they observe: it treats all quantum systems as autonomous agents that respond to observations by updating their beliefs and that employ quantum mechanics as a “users’ manual” to guide behavior.
  • #36
vanhees71 said:
Perhaps it would help me to understand the Bayesian view if you could explain how to test a probabilistic theoretical statement empirically from this point of view.

Here's a simplified example. Suppose we have two competing theories about a coin: Theory A says that it is a fair coin, giving "heads" 1/2 of the time. Theory B says that it is a trick coin, weighted to give "heads" 2/3 of the time. To start off with, we don't have any reason for preferring one theory over the other, so we write:

[itex]P(A) = P(B) = \dfrac{1}{2}[/itex]

Now flip the coin 4 times, and suppose you get HHTT. Call this event E. We compute probabilities:

[itex]P(E|A) = 0.0625[/itex]

[itex]P(E|B) = 0.0494[/itex]

[itex]P(E) = P(E|A) P(A) + P(E|B) P(B) = 0.0560[/itex]

Now, the Bayesian rules say that we revise our likelihood of the two theories in light of this new information:

[itex]P'(A) = \dfrac{P(A) P(E|A)}{P(E)} = 0.558[/itex]
[itex]P'(B) = \dfrac{P(B) P(E|B)}{P(E)} = 0.441[/itex]

So based on this one experiment, the likelihood of theory A has risen, and the likelihood of B has fallen.
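For anyone who wants to check the numbers, here is a minimal Python sketch of the same update (the variable names are mine, purely illustrative):

[code]
# Bayesian update for the two coin theories after observing E = HHTT.
p_A, p_B = 0.5, 0.5                    # priors: no reason to prefer either theory

# Likelihood of the sequence HHTT under each theory
p_E_given_A = 0.5**2 * 0.5**2          # fair coin: 0.0625
p_E_given_B = (2/3)**2 * (1/3)**2      # trick coin: ~0.0494

# Total probability of the evidence
p_E = p_E_given_A * p_A + p_E_given_B * p_B   # ~0.0560

# Bayes' rule: posterior probabilities of the two theories
post_A = p_E_given_A * p_A / p_E       # ~0.558
post_B = p_E_given_B * p_B / p_E       # ~0.442

print(post_A, post_B)
[/code]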
 
  • #37
bhobba said:
I agree that justifying the frequentist interpretation without the Kolmogorov axioms leads to problems such as circular arguments. But then again, I don't know of any book on probability that does that - every one I have seen starts with the Kolmogorov axioms and shows, with varying degrees of rigor in proving the key theorems, that the frequentist view follows from them.

I think maybe there's some disagreement about what "the frequentist view" is. If you mean that for many trials the relative frequency gives you (with high probability) a good approximation to the probability, that's a conclusion from the axioms of probability, whether frequentist or Bayesian. I thought that "the frequentist view" was that the meaning of probability is given by relative frequencies. That cannot be done in a consistent way.
 
  • #38
stevendaryl said:
Here's a simplified example. Suppose we have two competing theories about a coin: Theory A says that it is a fair coin, giving "heads" 1/2 of the time. Theory B says that it is a trick coin, weighted to give "heads" 2/3 of the time. To start off with, we don't have any reason for preferring one theory over the other, so we write:

[itex]P(A) = P(B) = \dfrac{1}{2}[/itex]

Now flip the coin 4 times, and suppose you get HHTT. Call this event E. We compute probabilities:

[itex]P(E|A) = 0.0625[/itex]

[itex]P(E|B) = 0.0494[/itex]

[itex]P(E) = P(E|A) P(A) + P(E|B) P(B) = 0.0560[/itex]

Now, the Bayesian rules say that we revise our likelihood of the two theories in light of this new information:

[itex]P'(A) = \dfrac{P(A) P(E|A)}{P(E)} = 0.558[/itex]
[itex]P'(B) = \dfrac{P(B) P(E|B)}{P(E)} = 0.441[/itex]

So based on this one experiment, the likelihood of theory A has risen, and the likelihood of B has fallen.

I see, but that's nothing other than what I get with my "frequentist" approach. Here you have a somewhat small ensemble of only 4 realizations of the experiment, but that's how I would do this statistical analysis as a "frequentist".
 
  • #39
vanhees71 said:
I see, but that's nothing other than what I get with my "frequentist" approach. Here you have a somewhat small ensemble of only 4 realizations of the experiment, but that's how I would do this statistical analysis as a "frequentist".

I don't see that. In a frequentist approach, what does it even mean to say that theory A has probability 1/2 of being true and theory B has probability 1/2 of being true? It doesn't mean that half the time A will be true and half the time B will be true.

I don't see how this example is compatible with frequentism at all.
 
  • #40
vanhees71 said:
Particularly the subjectivity makes it highly suspicious for me.

In the natural sciences (and hopefully also in medicine and the social sciences) to the contrary, one has to try to make statements with the "least prejudice", given the (usually incomplete) information.

That can be done very easily in Bayesian statistics: simply take what is called a non-informative prior, or an ignorance prior. With a non-informative prior and a large amount of data, Bayesian and frequentist statistics generally give the same results.
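As a quick illustration of that agreement (a minimal Python sketch, assuming a flat Beta(1,1) prior on the heads probability of a simulated coin; the numbers are purely illustrative):

[code]
import random

random.seed(0)
true_p = 0.6                             # the coin's actual bias, unknown to the analyst
flips = [random.random() < true_p for _ in range(10_000)]
heads = sum(flips)
tails = len(flips) - heads

# Non-informative Beta(1,1) prior -> Beta(1 + heads, 1 + tails) posterior
posterior_mean = (1 + heads) / (2 + heads + tails)

# Frequentist point estimate: the relative frequency
relative_freq = heads / len(flips)

print(posterior_mean, relative_freq)     # nearly identical for large samples
[/code]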

However, Bayesian statistics lets you rationally account for information that you DO have, in the form of an informed prior. Consider the recent FTL neutrino results from CERN, before the glitch was discovered. Most scientists looked at those results and rationally said something like "this new evidence is unlikely under SR, but we have all of this other evidence supporting SR, so we still think that P(SR) is quite high even considering the new evidence; we await further information". That is a very Bayesian approach, and it is the approach that rational people actually take when reasoning under uncertainty: when they have prior knowledge, they integrate it into their evaluation of new evidence.

vanhees71 said:
Any experiment must be able to be reproducible precisely enough such that you can get "high enough statistics" to check a hypothesis quantitatively, i.e., to get the statistical significance of your measurement.

But this is exactly what you cannot do with frequentist statistics. With frequentist methods you never test the hypothesis given the data; you always test the data given the hypothesis. When you do a frequentist statistical test, the p value you obtain is the probability of the data given the hypothesis. When doing science (at least outside of QM), most people think of the hypothesis as the uncertain thing, not the data, but that is simply not what frequentist statistical tests measure.
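To make the distinction concrete with the HHTT coin example from earlier in the thread (a hedged sketch; the one-sided binomial tail probability used on the frequentist side is just one of several possible tests):

[code]
from math import comb

n, heads = 4, 2                          # the HHTT data

# Frequentist: P(data at least this extreme | theory A), here the
# probability of getting 2 or more heads in 4 flips of a fair coin.
p_value = sum(comb(n, k) * 0.5**n for k in range(heads, n + 1))

# Bayesian: P(theory A | data), with equal priors on theories A and B.
like_A = 0.5**heads * 0.5**(n - heads)
like_B = (2/3)**heads * (1/3)**(n - heads)
posterior_A = like_A * 0.5 / (like_A * 0.5 + like_B * 0.5)

print(p_value)      # ~0.688 -- probability of the data, given the hypothesis
print(posterior_A)  # ~0.558 -- probability of the hypothesis, given the data
[/code]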

vanhees71 said:
Perhaps it would help me to understand the Bayesian view if you could explain how to test a probabilistic theoretical statement empirically from this point of view. Is there a good book for physicists to understand the Bayesian point of view better?

I liked this series of video lectures by Trond Reitan:
http://www.youtube.com/playlist?list=PL066F123E80494F77

Be forewarned, it is a very low-budget production. He spends relatively little time on the philosophical aspects of Bayesian probability, but quite a bit of time on Bayesian inference and methods. I found it appealed to my "shut up and calculate" side quite a bit. The Bayesian methods and tests are scientifically more natural, regardless of how you choose to interpret the meaning of probability.
 
  • #41
stevendaryl said:
I think maybe there's some disagreement about what "the frequentist view" is. If you mean that for many trials the relative frequency gives you (with high probability) a good approximation to the probability, that's a conclusion from the axioms of probability, whether frequentist or Bayesian. I thought that "the frequentist view" was that the meaning of probability is given by relative frequencies. That cannot be done in a consistent way.

I think our discussion has been slightly marred by a misunderstanding of what the other meant. I now see where you are coming from and agree. Basing probability purely on a frequentist interpretation has conceptual problems in that it can become circular. I suspect it can be overcome with a suitable amount of care - but why bother? The mathematically 'correct' way is via the Kolmogorov axioms, and starting from those the frequentist interpretation is seen as a perfectly valid realization of the axioms, based rigorously on the law of large numbers. Every book on probability I have read does it that way. Bayesian probability theory fits into exactly the same framework - I personally haven't come across textbooks that do that, but my understanding is they certainly exist, and in some areas of statistical inference it may be the more natural framework. At least the university I went to certainly offers courses on it.

Thanks
Bill
 
  • #42
My take as a mathematician is that a probability space is a type of mathematical structure, just like a group or a vector space or a metric space. One wouldn't spend time arguing about whether this or that particular vector space is the real or more fundamental vector space, so why do it with probability spaces? Frequencies of results of repeatable experiments can be described by a probability space; so can a person's state of knowledge of factors contributing to the outcome of a single non-repeatable event. Probability is just a special case of the more general mathematical concept of a measure - a probability is a measure applied to parts of a whole, indicating the relative extent to which the parts contribute to the whole in the manner under consideration. Saying something has a probability of 1/3 might mean that it came up 1/3 of the time in a repeated experiment, if that is what you are talking about (frequencies of outcomes), or it might mean that you know of 30 scenarios, 10 of which produce the outcome, if that is what you are considering. Neither case is a more right or wrong example of probability, in the same way that neither SO(3) nor GL(4,C) is a more right or wrong use of group theory.
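For reference, the common structure both of these readings instantiate is a probability space in Kolmogorov's sense: a triple [itex](\Omega, \mathcal{F}, P)[/itex] of a sample space, a collection of events, and a measure satisfying

[tex]P(E) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i E_i\Big) = \sum_i P(E_i) \text{ for pairwise disjoint } E_i.[/tex]

A long-run frequency of outcomes and a count of equally weighted scenarios are then just two different choices of [itex](\Omega, \mathcal{F}, P)[/itex], exactly as SO(3) and GL(4,C) are two different realizations of the group axioms.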
 
  • #44
vanhees71 said:
Perhaps it would help me to understand the Bayesian view if you could explain how to test a probabilistic theoretical statement empirically from this point of view. Is there a good book for physicists to understand the Bayesian point of view better?

I like Jaynes, Probability Theory: The Logic of Science.
 
  • #45
Have any of the QBist papers been accepted for publication? (As opposed to merely being uploaded to arxiv?) I'm not sure if I want to spend time reading lots of stuff that might turn out to be half-baked. (Various subtle puns in that sentence intended :) )
 
  • #46
Salman2 said:
Any comments (pro or con) on this Quantum Bayesian interpretation of QM by Fuchs & Schack? http://arxiv.org/pdf/1301.3274.pdf

I propose another variant of a "quantum Bayesian" interpretation; see arxiv.org:1103.3506

It is not completely Bayesian; instead, it is in part realistic, following de Broglie-Bohm regarding the reality of the configuration q(t). But it is Bayesian about the wave function.

Again, with care: what is interpreted as Bayesian is only the wave function of a closed system - that is, the wave function of the whole universe. There is also the wave function we work with in everyday quantum mechanics, but this is only an effective wave function. It is defined, as in dBB theory, from the global wave function and the configuration of the environment - that is, mainly from the macroscopic measurement results of the devices used to prepare the particular quantum state.

Thus, because the configuration of the environment is ontic, the effective wave function is defined by these ontic variables and is therefore essentially ontic itself. Hence there is no contradiction with the PBR theorem.

With this alternative in mind, I criticize QBism as going in the wrong direction: away from realism, away from nontrivial hypotheses about more fundamental theories. But that is, IMHO, the most important reason for scientists to think about interpretations at all. For computations, the minimal interpretation is sufficient; but it will never serve as a guide to finding a more fundamental theory.

This is different for dBB-like interpretations. They make additional hypotheses about real trajectories q(t). OK, we cannot test them now, and, because of the equivalence theorems, we will be unable to test them in the future too. A problem? Not really, because the interpretation has internal problems of its own, and these internal problems are a nice guide. We can try to find solutions for them, and those solutions may contain new, different physics, which then becomes testable.

This is also not empty talk. One internal problem of dBB is the infinities of the velocities [itex]\dot{q}(t)[/itex] near the zeros of the wave function. Another, related one is the Wallstrom objection - the necessity of explaining why the probability density and the probability flow combine into a wave function, a question which appears if one does not consider the wave function as fundamental. To solve these problems, one has to make nontrivial assumptions about a subquantum theory; see arxiv.org:1101.5774. So the interpretation gives strong hints about where we have to look for physics different from quantum physics - in this case, near the zeros of the wave function.
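To see concretely what the divergence of the velocities near a zero means, here is a minimal Python sketch (my own toy example, not taken from the papers cited): the dBB guidance velocity [itex]v = (\hbar/m)\,\mathrm{Im}(\psi'/\psi)[/itex] blows up wherever [itex]\psi[/itex] comes close to vanishing.

[code]
import numpy as np

hbar = m = k = 1.0                       # units chosen purely for illustration

def psi(x):
    # Superposition of two counter-propagating plane waves with unequal weights;
    # |psi| dips close to zero periodically without the state being trivial.
    return np.exp(1j * k * x) + 0.99 * np.exp(-1j * k * x)

def dpsi(x):
    return 1j * k * np.exp(1j * k * x) - 0.99 * 1j * k * np.exp(-1j * k * x)

def v(x):
    # de Broglie-Bohm guidance velocity v = (hbar/m) Im(psi'/psi)
    return (hbar / m) * np.imag(dpsi(x) / psi(x))

for x in [0.5, 1.2, 1.5, 1.55, 1.57]:    # pi/2 ~ 1.5708 is a near-zero of psi
    print(f"x = {x:5.2f}   |psi| = {abs(psi(x)):.4f}   v = {v(x):9.2f}")
[/code]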

QBism, instead, does not lead to such hints about where to look for new subquantum physics. The new mathematics of QBism looks like mathematics in the other, positivistic direction - not more but less restrictive, not less but more general. At least that is my impression.
 
  • #47
I just came across the term QBism for the first time and found this discussion. At first look, the paper by Fuchs and Schack looks horrible. Why pages full of speculation about what Feynman may have meant?
Isn't there some crisp axiomatic paper available?
 
  • #49
Mathematech said:
I've just come across this book http://www.springer.com/physics/the...+computational+physics/book/978-3-540-36581-5 based on Streater's website http://www.mth.kcl.ac.uk/~streater/lostcauses.html . I've only just started reading it; it seems that his views are totally against any notion of non-locality and hold that probability explains all the weirdness in QM. Comments?

The following says it all:
'This page contains some remarks about research topics in physics which seem to me not to be suitable for students. Sometimes I form this view because the topic is too difficult, and sometimes because it has passed its do-by date. Some of the topics, for one reason or another, have not made any convincing progress.'

There are many interpretations of QM - some rather 'backwater', like Nelson stochastics, and some very mainstream and of great value in certain situations, such as the path integral approach.

But as with any of them, it's pure speculation until someone can figure out an experiment to decide between them and have it carried out.

While discussion of interpretations is on topic in this forum, it's kept on a tight leash to stop it degenerating into philosophy, which is off-topic.

So exactly what do you want to discuss? If you have in mind some interpretation, or issues in a specific interpretation, that you want clarification on, then fire away and I or others will see if we can help. Or do you want a general waffle about how interpretations don't tell us about reality (whatever that is - those who harp on it seldom define it, for good reason: it's a philosophical minefield), which would be off topic?

Thanks
Bill
 
  • #50
I want to discuss Streater's take that there is no need to assume non-locality, and that EPR etc. can be understood purely via the correct application of probability.
 
  • #51
Mathematech said:
I want to discuss Streater's take that there is no need to assume non-locality, and that EPR etc. can be understood purely via the correct application of probability.

It's well known that non-locality is not required: simply abandon the assumption that objects have properties independent of measurement context. Bell's theorem proves - and it's pretty watertight - that you can't have both locality and objects having such properties.

That's about all there is to it, really.

Thanks
Bill
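As a numerical illustration of the kind of bound Bell-type arguments place on local models with predetermined properties, here is a minimal Python sketch of the standard CHSH quantity, assuming the textbook singlet correlation [itex]E(a,b) = -\cos(a-b)[/itex] and the usual measurement angles; any local hidden-variable model must keep |S| ≤ 2, while the quantum prediction reaches 2√2:

[code]
from math import cos, pi, sqrt

def E(a, b):
    # Quantum-mechanical prediction for the spin-singlet correlation
    # at analyzer angles a and b.
    return -cos(a - b)

# Standard CHSH angle choices that maximize the violation
a, a_prime = 0.0, pi / 2
b, b_prime = pi / 4, 3 * pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print(abs(S), 2 * sqrt(2))   # ~2.828 in both cases, exceeding the local bound of 2
[/code]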
 
  • #52
Mathematech said:
I want to discuss Streater's take that there is no need to assume non-locality, and that EPR etc. can be understood purely via the correct application of probability.

I have criticized some points of Streater's texts at http://ilja-schmelzer.de/realism/dBBarguments.php.

I think that realism - in the sense of what was used by Bell, besides locality, to prove his inequalities - is a simple minimal standard of explanation. If you are unable to describe some observation using a realistic theory, you have not understood or explained it.

That these assumptions about realism really are such a minimal standard of explanation is, of course, something that can be discussed - in particular, by thinking about what "explanations" become possible if we weaken one or another part of this minimal standard. Roughly speaking, you cannot weaken realism without thereafter accepting "and then a miracle happens" as a valid explanation.
 
  • #53
bhobba said:
It's well known that non-locality is not required: simply abandon the assumption that objects have properties independent of measurement context.

No, that's wrong. It looks like the classic error of identifying the conclusions of the first part of Bell's argument (the EPR part) with assumptions made by Bell.

What has to be assumed is realism, in a very weak sense. The reality λ need not even consist of localized objects; it can be whatever you can imagine.
 
