QBism - Is it an extension of "The Monty Hall Problem"?

In summary: The two main camps in this dispute are those who hold that the probabilities associated with a particular event are a state of knowledge that the observer possesses prior to observation, and those who hold that the probabilities are a result of the observations made during the experiment. The "frequentist" camp contends that the probabilities are a result of the observations made during the experiment, while the "Bayesian" camp contends that the probabilities are a state of knowledge that the observer possesses prior to observation. The frequentist position is usually associated with the Copenhagen interpretation of quantum mechanics, while the Bayesian position is usually associated with the wave-particle duality interpretation. Bayesianism takes account of the subjective knowledge of the observer.
  • #36
bhobba said:
Nobody ever said it was. Again I repeat what Feller said and highlight the key point:
'We shall no more attempt to explain the true meaning of probability than the modern physicist dwells on the real meaning of mass and energy or the geometer discusses the nature of a point. Instead we shall prove theorems and show how they are applied.'

Now exactly what is your issue?

Your rhetoric does not interest me.
Physics is not limited to citations

Once again, with the same mathematical axiomatics we can build different models/semantics.

Patrick
 
  • #37
stevendaryl said:
I once sketched out a Bayesian "theory of everything". Theoretically (not in practice, because it's computationally intractable, or maybe even noncomputable), you would never need any other theory.

Let [itex]T_1, T_2, ...[/itex] be an enumeration of all possible theories. Let [itex]H_1, H_2, ...[/itex] be an enumeration of all possible histories of observations. (It might be necessary to do some coarse-graining to make a discrete set of possibilities.)

Let [itex]P(T_i)[/itex] be the a-priori probability that theory [itex]T_i[/itex] is true.
Let [itex]P(H_j | T_i)[/itex] be the (pretend it's computable) probability of getting history [itex]H_j[/itex] if theory [itex]T_i[/itex] were true. Then we compute the probability of [itex]T_i[/itex] given [itex]H_j[/itex] has been observed via Bayes' rule:

[itex]P(H_j) = \sum_i P(T_i) P(H_j | T_i)[/itex]
[itex]P(T_i | H_j) = P(H_j | T_i) P(T_i)/P(H_j)[/itex]

So this gives us an a posteriori probability that any theory [itex]T_i[/itex] is true.

How can we enumerate all possible theories? Well, we can just think of a theory as an algorithm for computing probabilities of future histories given past histories. Computability theory shows us a way to enumerate all such algorithms.
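
A toy illustration of this scheme, added for concreteness (it is not part of the original sketch): each "theory" below is just a biased-coin model, a "history" is a string of flips, and every name and number is made up.

[code]
# Toy Bayesian model selection: three hypothetical "theories" T1..T3, each
# a biased-coin model, and one observed "history" of flips. All numbers
# are illustrative only.

biases = {"T1": 0.2, "T2": 0.5, "T3": 0.8}   # P(heads) under each theory
prior = {t: 1.0 / 3.0 for t in biases}       # a-priori P(T_i)
history = "HHTHHHTH"                         # observed history H_j

def likelihood(bias, hist):
    """P(H_j | T_i): product of per-flip probabilities."""
    p = 1.0
    for flip in hist:
        p *= bias if flip == "H" else 1.0 - bias
    return p

# P(H_j) = sum_i P(T_i) P(H_j | T_i)
evidence = sum(prior[t] * likelihood(biases[t], history) for t in biases)

# P(T_i | H_j) = P(H_j | T_i) P(T_i) / P(H_j)
posterior = {t: prior[t] * likelihood(biases[t], history) / evidence
             for t in biases}

print(posterior)  # most weight ends up on T3, given 6 heads in 8 flips
[/code]

Enumerating every possible algorithm in place of these three toy theories is what makes the full version computationally intractable.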

But does this account for the fact that the same formal theory can correspond to different semantics (models, both mathematical and physical)?
 
  • #38
atyy said:
But does this account for the fact that the same formal theory can correspond to different semantics (models, both mathematical and physical)?

Well, for the practical purposes of science (trying to build rockets and lasers and computers and so forth), the semantics aren't important. The only thing of importance is how to relate past observations to future observations.

Of course, that viewpoint completely ignores the reason people are drawn to science--not for practical purposes, but to understand. It also ignores the fact that the connection between past observations and future observations is enormously complex, and from a purely computational standpoint, having a semantic understanding of what's going on is tremendously powerful in creating the connection. If you're just tinkering with algorithms without being guided by physical insight, it's hopeless, in practice. But in principle...
 
  • #39
microsansfil said:
Your rhetoric does not interest me. Physics is not limited to citations

No, it's not. But precisely what makes you think that when someone 'shows how they are applied', this does not include a mapping from the abstract things in the axioms to what you apply them to? For example, in probability you map this abstract thing called probability to outcomes. One then applies the law of large numbers, and a few reasonableness assumptions, to show that abstract thing is the proportion of outcomes in a large number of trials. My suspicion is you don't have much experience in applying math. How it's done is usually so obvious it's not even spelt out - simply assumed.
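
A minimal simulation of that mapping (an illustrative sketch assuming a fair coin; not from the original post): the abstract probability 1/2 shows up as the long-run proportion of heads.

[code]
# Law of large numbers, illustrated: the abstract probability p = 0.5
# assigned to "heads" appears as the proportion of heads over many trials.
import random

random.seed(0)
p = 0.5
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < p for _ in range(n))
    print(n, heads / n)  # the proportion approaches 0.5 as n grows
[/code]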

microsansfil said:
Once again, with the same mathematical axiomatics we can build different models/semantics.

Translation - the same axioms can be applied to different situations. Why you want to make a point out of something utterly trivial has me beat.

But it's obvious you come from an entirely different background to me - and I suspect it's philosophy - not applied math or physics.

I have discussed this sort of thing with philosophy types before - we talk past each other.

Thanks
Bill
 
  • #40
microsansfil said:
Your rhetoric does not interest me.
Physics is not limited to citations

Once again, with the same mathematical axiomatics we can build different models/semantics.

Patrick

I think you're barking up the wrong tree in arguing with Bill. There's no disagreement between you two about the fact that the same mathematical theory can have different, non-isomorphic models. If you disagree with Bill, it would be helpful to try to pinpoint what the disagreement really is. I can assure you that it is not about model theory or Godel's theorem.
 
  • #41
bhobba said:
But it's obvious you come from an entirely different background to me - and I suspect it's philosophy - not applied math or physics.

bhobba said:
That's why the truth lies in the axioms.

No comment.

I have seen the full range of your fallacies. You do not interest me; I am moving on.

Patrick
 
  • #42
stevendaryl said:
Well, for the practical purposes of science (trying to build rockets and lasers and computers and so forth), the semantics aren't important. The only thing of importance is how to relate past observations to future observations.

Of course, that viewpoint completely ignores the reason people are drawn to science--not for practical purposes, but to understand. It also ignores the fact that the connection between past observations and future observations is enormously complex, and from a purely computational standpoint, having a semantic understanding of what's going on is tremendously powerful in creating the connection. If you're just tinkering with algorithms without being guided by physical insight, it's hopeless, in practice. But in principle...

That's true, but I don't mean mathematical semantics so much as physical semantics. For example, Euclid's points can model either physical lines or physical points, so the formal object can have more than one valid physical correspondence, and I'm not sure you can list all conceivable physical correspondences to a given formal theory.
 
  • #43
microsansfil said:
No comment.

I have seen the full range of your fallacies. You do not interest me; I am moving on.

Patrick

Sorry, not all French are like that.
 
  • #44
naima said:
Sorry, not all French are like that.

It's not a French thing, I am sure.

I think, for reasons best known to him, he was simply being contrary.

I looked at his background. He is evidently a research engineer and should have understood many of the fundamental issues he brought up and how they are resolved in practice. I thought his background was philosophy because some (fortunately very few) philosophy types can carry on like that - but evidently it isn't - which leads me to believe he was simply being contrary.

Thanks
Bill
 
  • #45
atyy said:
That's true, but I don't mean mathematical semantics so much as physical semantics. For example, Euclid's points can model either physical lines or physical points, so the formal object can have more than one valid physical correspondence, and I'm not sure you can list all conceivable physical correspondences to a given formal theory.

Of course that is true.

But when one invokes axioms in an applied context, it's usually utterly obvious from the context what you are mapping to what.

When it is said that the truth lies in the axioms, and people like Feller say we don't attempt to define what the basic objects are, what is meant is the modern mathematical method. We prove theorems without referencing the meaning of the objects the axioms apply to, so when applied we have all these consequences without any further ado. You can apply the same axioms to many different areas with great economy of thought.

In relation to frequentist vs Bayesian, it's simply a matter of how you interpret that undefined thing called probability in the Kolmogorov axioms. You can interpret it as plausibility, i.e. a degree of belief we as human beings have, and you get a Bayesian view - although that's usually done via the so-called Cox axioms, which are logically equivalent to Kolmogorov's axioms. Or you can leave it undefined and simply show via the law of large numbers (and yes, some other assumptions are required as well, such as taking a very small probability to be zero FAPP - but as is usual in applied math this is not explicitly stated; you glean it with a bit of experience) that the undefined thing is the proportion of outcomes in a large number of trials. You can also assume probability is a propensity and arrive at exactly the same thing. Kolmogorov's axioms guarantee it.
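
As an illustrative sketch of that agreement (under assumptions of my own: a coin of unknown bias and a flat Beta(1,1) prior for the Bayesian), the frequentist relative frequency and the Bayesian posterior mean converge to the same value as trials accumulate:

[code]
# Sketch (hypothetical numbers): with a flat Beta(1,1) prior, the Bayesian
# posterior mean (heads + 1) / (n + 2) and the frequentist relative
# frequency heads / n converge to the same value as trials accumulate.
import random

random.seed(1)
true_p, heads = 0.5, 0
for n in range(1, 100_001):
    heads += random.random() < true_p
    if n in (10, 1_000, 100_000):
        freq = heads / n                 # frequentist estimate
        bayes = (heads + 1) / (n + 2)    # Bayesian posterior mean
        print(n, round(freq, 4), round(bayes, 4))
[/code]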

The only caveat with all of this is if you decide to get really tricky and map the same axioms to the same physical situation in different ways, as mentioned above for Euclidean geometry. Then you are in for a whole world of hurt in saying what's true and what isn't - of course you can do it - but great care would be required in keeping each mapping separate.

It goes without saying that's not what is going on here, so regardless of what view you take of probability you must get the same results.

Thanks
Bill
 
  • #46
naima said:
Sorry, not all French are like that.

This is the only argument you have. It is rather pathetic.

It is not a question of nationality, because it is E. T. Jaynes's point of view. E. T. Jaynes is not French, is he?

In a purely mathematical framework (an axiomatic system), Bayesian and frequentist are interpretations. They don't change the mathematical framework. That is obvious, since interpretation lies outside the mathematical framework.

But mathematics isn't physics, and the inverse is not true either. We cannot reduce QM to a single, purely syntactic axiomatic system.

I can only advise you to read E. T. Jaynes. He provides concrete examples of the different physical results obtained.

I have nothing to sell and no proselytizing to do. I am just an amanuensis.

Patrick
 
  • #47
"For all practical purposes"

E. T. Jaynes:
http://bayes.wustl.edu/etj/articles/confidence.pdf

Confidence Interval (Frequentist) vs. Credible Interval (Bayesian)
AEC Graduate Course - Statistics:
http://www.lhep.unibe.ch/schumann/docs/nirkko_tufanli_intervals.pdf

To express uncertainty in our knowledge after an experiment:

– The frequentist approach uses a "confidence interval"
– The Bayesian approach uses a "credible interval"

Example - Cookie jar

See the "Confidence vs. credible interval" results on slide 20.
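
A rough numerical sketch of the difference (my own assumptions, not the cookie-jar numbers from the slides: 7 successes in 20 trials, a Wald 95% confidence interval, and a flat Beta(1,1) prior for the credible interval):

[code]
# Confidence interval (frequentist) vs. credible interval (Bayesian) for a
# binomial proportion. Assumed data: 7 successes in 20 trials.
from math import sqrt
from scipy.stats import beta

k, n = 7, 20
p_hat = k / n

# Frequentist: Wald (normal-approximation) 95% confidence interval
half = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
print("95% confidence interval:", (p_hat - half, p_hat + half))

# Bayesian: equal-tailed 95% credible interval from the Beta(1+k, 1+n-k) posterior
posterior = beta(1 + k, 1 + n - k)
print("95% credible interval:  ", (posterior.ppf(0.025), posterior.ppf(0.975)))
[/code]

With so little data the two intervals differ noticeably; with more data they tend to agree.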

My position: just an amanuensis.

Patrick
 
  • #48
microsansfil said:
But mathematics isn't physics, and the inverse is not true either. We cannot reduce QM to a single, purely syntactic axiomatic system.

This is the issue that leaves me scratching my head.

I can't find anything in this thread that says otherwise. My quote from Feller states that we determine the meaning of axiomatic systems by seeing how they are applied. When someone says the truth lies in the axioms, it obviously doesn't mean the axioms themselves are the truth - axioms are neither true nor false - what it means is that, when applied, the results that follow from the axioms are so well known that it becomes a testable theory.

Maybe it's because English is not your first language - I simply do not know.

So let's go back to what you said right at the beginning:

microsansfil said:
prob = 1/2 is not a property of the coin.
prob = 1/2 is not a joint property of coin and tossing mechanism.
Any probability assignment starts from a prior probability.

As far as I can see, that is a philosophical position you are taking. There is zero reason you can't map probability, as per the Kolmogorov axioms, to the coin. In fact that's exactly what is done in basic courses on probability.
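
A minimal sketch of that mapping (an illustrative check of my own, not taken from Feller or any other text cited here): take the sample space {H, T}, assign 1/2 to each outcome, and the Kolmogorov axioms hold directly.

[code]
# Mapping the Kolmogorov axioms onto a coin: sample space {H, T}, with
# probability 1/2 assigned to each outcome. The checks below are the
# axioms themselves, written out for this finite case.
events = {
    frozenset(): 0.0,
    frozenset({"H"}): 0.5,
    frozenset({"T"}): 0.5,
    frozenset({"H", "T"}): 1.0,
}

# Axiom 1: non-negativity
assert all(p >= 0 for p in events.values())
# Axiom 2: the whole sample space has probability 1
assert events[frozenset({"H", "T"})] == 1.0
# Axiom 3: additivity over disjoint events, e.g. {H} and {T}
assert events[frozenset({"H"})] + events[frozenset({"T"})] == events[frozenset({"H", "T"})]
print("Kolmogorov axioms hold for the coin assignment")
[/code]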

So can you please explain what's wrong with that? Why have textbooks like Feller got it wrong?

Thanks
Bill
 
  • #49
microsansfil said:
Just an amanuensis

Yes - but the issue is that you are the 'amanuensis' for a controversial position that is far from accepted, e.g.:
http://stats.stackexchange.com/ques...ian-credible-intervals-are-obviously-inferior

'So essentially, it is a matter of correctly specifying the question and properly interpreting the answer. If you want to ask question (a) then use a Bayesian credible interval, if you want to ask question (b) then use a frequentist confidence interval.'

While I haven't read Jaynes's book, from my knowledge of Bayesian inference the above looks a lot closer to the truth of the matter than 'the frequentist view is wrong'.

It's simply a matter of which is the most natural way to view things, so as to make problems easier. That's nothing new, and as I have posted previously in this thread, as far as Bayesian hypothesis testing is concerned the frequentist view is rather unnatural - but wrong? That is another matter.

If you argue a non-standard, controversial position, you really should be able to justify it - not just fall back on 'all I am doing is repeating it'.

Thanks
Bill
 
  • #50
Thread closed for the moment, pending possible moderation.
 
  • #51
Since this thread has drifted from the original topic and has devolved into arguing, it shall remain locked.
 
