fresh_42 said: Well, a bit biased against frequentists if you ask me.
Well, I am a moderate Bayesian, so I do lean towards Bayes in my preferences. But being moderate I also use the frequentist interpretation and frequentist methods whenever convenient or useful.
WWGD said: "There is a 60% chance of rain for (e.g.) Thursday." In frequentist perspective, I believe this means that in previous times with a similar combination of conditions as the ones before Thursday, it rained 60% of the time. I have trouble finding a Bayesian interpretation for this claim.
The Bayesian interpretation is straightforward. It just means that I am not certain that it is going to rain on Thursday, but I think it is likely. More operationally, if I had to bet a dollar either that it would rain on Thursday or that I would get heads on a single flip of a fair coin, then I would rather take the bet on the rain.
WWGD said: You may have a prior, but I can't see what data you would use to update it to a posterior probability.
To update your probability you need to have a model.
anorlunda said: Will this be a 3 part series? 4? Will you give numeric examples? A preview would be nice.
I will have numerical examples for most of them. This one was just philosophical, so it didn’t really lend itself to examples.
Dale said: Now, we need a way to determine the measure ##P(H)##. For frequentist probabilities the way to determine ##P(H)## is to repeat the experiment a large number of times and calculate the frequency that the event ##H## happens. In other words, if you do ##N## trials and get ##n_H## heads then
##P(H) = \lim_{N \rightarrow \infty} \frac{n_H}{N}##
So a frequentist probability is simply the “long run” frequency of some event.
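A quick numerical sketch of that long-run frequency (an illustration in Python; the simulated fair coin and the trial counts are just assumptions for the demonstration):

```python
# Minimal sketch: estimate P(H) for a simulated fair coin as the long-run frequency n_H / N.
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.5  # assumed probability of heads for the simulated coin

for N in [10, 100, 10_000, 1_000_000]:
    flips = rng.random(N) < true_p   # True marks a head
    n_H = int(flips.sum())
    print(f"N = {N:>9,d}   n_H/N = {n_H / N:.4f}")
# The estimate n_H/N wanders for small N and settles near 0.5 as N grows,
# which is the long-run frequency described above.
```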
Stephen Tashi said: "In the long run" observed frequencies of events will approximately be equal to their probability of occurrence. (In applying probability theory to a real life situation, would a Bayesian disagree with that intuitive notion?)
There are theorems demonstrating that in the long run the Bayesian probability converges to the frequentist probability for any suitable prior (e.g. one that is non-zero at the frequentist probability).
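As a rough illustration of that convergence (a sketch only, assuming a Beta prior and a simulated coin whose long-run frequency is 0.6):

```python
# Sketch: a Beta(a, b) prior updated with coin-flip data. For any prior with nonzero
# density at the true frequency, the posterior concentrates there as the data grow.
import numpy as np

rng = np.random.default_rng(1)
true_p = 0.6       # assumed long-run frequency of the simulated coin
a, b = 1.0, 1.0    # flat Beta(1, 1) prior; any non-degenerate choice behaves similarly

for N in [10, 100, 1_000, 100_000]:
    heads = int((rng.random(N) < true_p).sum())
    post_mean = (a + heads) / (a + b + N)   # posterior mean of Beta(a + heads, b + N - heads)
    print(f"N = {N:>7,d}   posterior mean = {post_mean:.4f}")
# The posterior mean approaches the frequentist probability 0.6 as N grows.
```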
Stephen Tashi said: It should be emphasized that the notation ##P(H) = \lim_{N \rightarrow \infty} \frac{n_H}{N}## conveys an intuitive belief, not a statement that has a precise mathematical definition.
What do you mean here?
Dale said: What do you mean here?
Stephen Tashi said: For independent trials, the calculus type of limit that does exist, for a given ##\epsilon > 0##, is ##\lim_{N \rightarrow \infty} \Pr\left( P(H) - \epsilon < S(N) < P(H) + \epsilon \right) = 1##, where ##S## is a deterministic function of ##N##.
Nice.
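As a sanity check of that limit, here is a rough simulation (a sketch only; the fair coin, the tolerance ##\epsilon = 0.01##, and the repetition count are assumptions):

```python
# Sketch of the weak law of large numbers: for fixed epsilon, the probability that the
# sample frequency S(N) lies within epsilon of P(H) tends to 1 as N grows.
import numpy as np

rng = np.random.default_rng(2)
p, eps, reps = 0.5, 0.01, 2_000   # assumed coin probability, tolerance, and repetitions

for N in [100, 1_000, 10_000, 100_000]:
    S = rng.binomial(N, p, size=reps) / N    # reps independent sample frequencies
    inside = np.mean(np.abs(S - p) < eps)    # fraction of repetitions within epsilon of p
    print(f"N = {N:>7,d}   Pr(|S(N) - p| < eps) ~ {inside:.3f}")
# The estimated probability climbs toward 1 as N increases.
```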
Dale said: Is that considered problematic by frequentist purists? It seems to define probability in terms of probability.
Stephen Tashi said: Such a limit is used in technical content of The Law Of Large Numbers and frequentists don't disagree with that theorem.
No, of course not. But I don’t think that you can use the limit you posted above as a definition for frequency-based probability non-circularly.
Stephen Tashi said: To me, the essential distinction between the frequentist approach and the Bayesian approach boils down to whether certain variables are assumed to represent a "definite but unknown" quantity versus a quantity that is the outcome of some stochastic process.
I agree, more or less. I would say that the issue is not exactly whether a quantity is definite but unknown, but rather whether or not to use probability to represent such a quantity.
Dale said: No, of course not. But I don’t think that you can use the limit you posted above as a definition for frequency-based probability non-circularly.
Stephen Tashi said: So any difference in how the two schools formally define probability would have to be based on some method of creating a mathematical system that defines new things that underlie the concept of probability and shows how these new things can be used to define a measure.
I think we are running into a miscommunication here. I agree with the point you are making, but it isn’t what I am asking about.
Dale said: There needs to be operational definitions of frequentist and Bayesian probability. That is what I am talking about.
Stephen Tashi said: People make subjective decisions without having a coherent system of ideas to justify them.
Stephen Tashi said: I can't see a Bayesian (of any sort) defending an estimate of a probability that is contradicted by a big batch of data. So is it correct to say that Bayesians don't accept the intuitive idea that a probability is revealed as a limiting frequency?
If a Frequentist decides to model a population by a particular family of probability distributions, will he claim that he has made an objective decision?
atyy said: I know you mean "coherent" in a different sense, but Bayesian probability is coherent, where "coherent" is a technical term.
Although Bayesians and Frequentists start from different assumptions, Bayesians can use many Frequentist procedures when there is exchangeability and the de Finetti representation theorem applies.
http://www.stats.ox.ac.uk/~steffen/teaching/grad/definetti.pdf
Stephen Tashi said: You can look at what prominent Bayesians say versus prominent Frequentists say. Prominent people usually feel obligated to portray their opinions as clear and systematic. But prominent people can also be individualistic, so you might not find any consensus views.
Aren’t prominent people in a field considered prominent precisely because the consensus in that field is to adopt their view?
Stephen Tashi said: If a Frequentist decides to model a population by a particular family of probability distributions, will he claim that he has made an objective decision?
This is a good point. But they can certainly objectively test whether that decision is supported by the data. (It almost never is for large data sets.)
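For example, here is a toy sketch of that kind of check (an illustration only, not anyone's actual analysis; it tests a fully specified normal model against data that are nearly, but not exactly, normal):

```python
# Sketch: goodness-of-fit test of an assumed N(0, 1) model against data drawn from a
# unit-variance t distribution with 5 degrees of freedom (close to normal, but not normal).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

for N in [100, 1_000, 100_000]:
    data = rng.standard_t(df=5, size=N) / np.sqrt(5 / 3)  # rescaled to unit variance
    result = stats.kstest(data, "norm")                    # Kolmogorov-Smirnov test vs N(0, 1)
    print(f"N = {N:>7,d}   KS p-value = {result.pvalue:.3g}")
# With little data the normal model is not rejected; with a lot of data even this small
# departure from normality is detected and the assumed model fails the test.
```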
Dale said: Aren’t prominent people in a field considered prominent precisely because the consensus in that field is to adopt their view?
de Finetti wrote, as quoted in the paper by Nau (https://faculty.fuqua.duke.edu/~rnau/definettiwasright.pdf): My thesis, paradoxically, and a little provocatively, but nonetheless genuinely, is simply this:
PROBABILITY DOES NOT EXIST
The abandonment of superstitious beliefs about the existence of the Phlogiston, the Cosmic Ether, Absolute Space and Time, ... or Fairies and Witches was an essential step along the road to scientific thinking. Probability, too, if regarded as something endowed with some kind of objective existence, is no less a misleading misconception, an illusory attempt to exteriorize or materialize our true probabilistic beliefs.
Stephen Tashi said: An interpretation of DeFinetti's position is that we cannot implement probability as an (objective) property of a physical system.
Isn’t that essentially what you proved above? I don’t understand your point.
Stephen Tashi said: So we can't (objectively) toss a fair coin or throw a fair dice?
Don’t you mean “So we can’t (objectively) assign a probability to the toss of a fair coin or the throw of a fair dice?”
“I am not sure what point you are trying to make with your posts. Can you clarify?”
Besides being a mere critic of other posts, I'll make the (perhaps self-evident) points:
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable.
BWV said: ISTM Bayes is just more honest about probability being a measure of ignorance.
I think for me that was the big “aha” moment: when I realized that probability and randomness were different things. It doesn’t matter what ##P(A)## represents operationally; if it follows the Kolmogorov axioms then it is a probability. It could represent true randomness, it could represent ignorance, it could represent uncertainty, and I am sure that there are other things it could represent.
Demystifier said: There are many things that satisfy probability axioms and yet seem to have nothing to do with probability. Here is an example: Consider ##N## free classical particles, each with energy ##E_i##, ##i=1,...,N##. Then the quantity
$$p_i=\frac{E_i}{\sum_{j=1}^N E_j}$$
satisfies the probability axioms. @Dale any comments?
That one isn’t particularly exotic. It is a simple “balls in an urn” probability, but weighted by energy rather than being equally weighted.
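To make the energy-weighted “urn” reading concrete, here is a small sketch (made-up energies, for illustration only):

```python
# Sketch: energy fractions p_i = E_i / sum_j E_j form a valid probability measure over
# the particle indices, with nothing random anywhere.
import numpy as np

E = np.array([1.0, 2.5, 0.5, 6.0])   # made-up particle energies (non-negative)
p = E / E.sum()

def P(A):
    """Measure of a set A of particle indices: the total energy fraction in A."""
    return p[list(A)].sum()

print("non-negativity:", np.all(p >= 0))                     # Kolmogorov axiom 1
print("normalization: ", np.isclose(P(range(len(E))), 1.0))  # Kolmogorov axiom 2
A, B = {0, 1}, {2}                                           # disjoint index sets
print("additivity:    ", np.isclose(P(A | B), P(A) + P(B)))  # Kolmogorov axiom 3
```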
Dale said: The thing is to realize that probability is not about randomness. If something satisfies the axioms then it is a probability even if there is no sense of randomness or uncertainty involved.
But what is probability then about? About anything that satisfies the axioms of probability? My view is that, if a set of axioms does not really capture the concept that people originally had in mind before proposing the axioms, then it is the axioms, not the concept, that needs to be changed.
Demystifier said: But what is probability then about? About anything that satisfies the axioms of probability?
Yes. That is what axiomatization does. It abstracts a concept. Then the word “probability” (in that mathematical and axiomatic sense) itself becomes an abstraction representing anything which satisfies the axioms.
Demystifier said: My view is that, if a set of axioms does not really capture the concept that people originally had in mind before proposing the axioms, then it is the axioms, not the concept, that needs to be changed.
I do sympathize with that view, but realistically it is too late in this case. The Kolmogorov axioms are already useful and well accepted, and using the word “probability” to refer to measures which satisfy those axioms is firmly established in the literature.
Dale said: I tend to like the idea of uncertainty more than randomness, because I find randomness a lot harder to pin down. It seems to get jumbled up with determinism and other things that you don’t have to worry about for uncertainty.
atyy said: But if a Bayesian draws samples from a distribution, then wouldn't the Bayesian be using the idea of randomness?
Not necessarily. We are certainly uncertain about random things, but we are also uncertain about some non-random things. Both can be represented as a distribution from which we can draw samples. So the mere act of drawing from a distribution does not imply randomness.
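For instance (a made-up sketch): the mass of a particular object is a definite, non-random number, yet we can encode our uncertainty about it as a distribution and draw samples to propagate that uncertainty into a derived quantity.

```python
# Sketch: the mass is fixed and non-random; the distribution only encodes our uncertainty
# about its value (invented numbers). Sampling propagates that uncertainty into the weight
# without implying that the mass itself is random.
import numpy as np

rng = np.random.default_rng(5)
mass = rng.normal(loc=2.00, scale=0.05, size=100_000)  # belief about the fixed mass (kg)
g = 9.81                                               # local gravitational acceleration (m/s^2)
weight = mass * g                                      # implied belief about the weight (N)

print(f"weight = {weight.mean():.2f} +/- {weight.std():.2f} N")
```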
WWGD said: If I may offer a suggestion, or maybe you can reply here, on the two different interpretations of probabilistic statements such as: "There is a 60% chance of rain for (e.g.) Thursday." In frequentist perspective, I believe this means that in previous times with a similar combination of conditions as the ones before Thursday, it rained 60% of the time. I have trouble finding a Bayesian interpretation for this claim. You may have a prior, but I can't see what data you would use to update it to a posterior probability.
It means that, based on the known distribution parameters and a model of how those parameters affect the weather, there is a 60% chance of rain on Thursday. Those parameters include all the things a meteorologist might use to predict the weather. How the model is determined, I'm not quite sure. The model may itself be encoded by additional distribution parameters, which are updated according to observations. The Expectation-Maximisation method is all about determining unknown distribution parameters.
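One toy version of that Bayesian reading (a sketch with invented numbers, not how any forecasting office actually works): treat “probability of rain under conditions like Thursday’s” as an unknown parameter with a prior, update it with the outcomes of past days judged to have similar conditions, and report the posterior.

```python
# Sketch: a flat Beta(1, 1) prior over the rain probability, updated with an assumed
# record of 12 rainy days out of 20 past days with similar conditions.
a, b = 1.0, 1.0
rainy, similar = 12, 20

a_post = a + rainy                      # posterior is Beta(a + rainy, b + (similar - rainy))
b_post = b + (similar - rainy)
prob_rain = a_post / (a_post + b_post)  # posterior mean, reported as the chance of rain

print(f"P(rain on Thursday) ~ {prob_rain:.2f}")   # about 0.59 with these numbers
```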