Understanding the Uniform Probability Distribution in Statistical Ensembles

In summary: My understanding of probability is that it is a mathematical concept used to quantify the likelihood of an event occurring. It is based on Kolmogorov's axioms and can be interpreted in different ways, such as the frequentist, Bayesian, or decision-theoretic interpretations. In the context of statistical physics, the concept of probability is essential for understanding the behavior of systems at equilibrium. However, the use of ensembles to explain probability can often create more confusion than clarity. Therefore, it is important to have a solid understanding of probability itself before delving into the concept of ensembles in statistical physics.
  • #71
stevendaryl said:
your subjective probability of "heads" is given by:
Why? Only if you subjectively believe that the coin is fair. If your subjective belief is that the coin is forged, the subjective probability can take any value between 0 and 1 depending on your belief, independent of whether this belief is correct or incorrect.
stevendaryl said:
Any time you make a choice to do X or Y, based on probability, you're betting in a sense.
In a scientific discussion you should use words in the common normative way. You are making a decision, not a bet. A bet means wagering money at particular odds.

Moreover, most of the decisions you were discussing earlier were based not on probability but on a not further specified uncertainty. We rarely have perfect information, hence our decisions are also less than perfect, but in general this has nothing to do with probability. Only if the uncertainty is of an aleatoric nature (rather than an epistemic one) is a probabilistic model adequate. To be reliable, aleatoric uncertainty must be quantified by objective statistics, not by subjective assignment of probabilities. And epistemic uncertainty must be treated completely differently, at least if one doesn't want to make more regrettable decisions than unavoidable! (I have published a number of papers on uncertainty modeling in real-life situations, so I know.)

stevendaryl said:
a "definition" of a physical quantity is operational: the quantity describes what would happen if you were to perform a particular operation.
But there you ask Nature, which is objective, rather than a person, which is subjective. Precisely this makes the difference.

You cannot in principle ask Nature how much it bets, since betting and money are social conventions. The only way to ask Nature (i.e., to be objective) is to collect statistics, and this is the frequentist approach. Asking for betting odds, by contrast, means extracting the subjective probabilities of the particular person asked.

Maybe you are motivated to read Chapter 3 of my FAQ before continuing the discussion here...
 
  • #72
Bayesian probability gives strict rules for determining probability when certain knowledge is given. These rules are perfectly objective, in the sense that you can program a computer to follow these rules and give Bayesian probability as the output. If anything is "subjective" about Bayesian probability, then it is knowledge itself. But all science (probabilistic or not) involves knowledge (e.g. knowledge obtained as a result of measurement), so Bayesian probability is not more subjective than science in general. Such a view of probability is defended in much more detail in
Jaynes - Probability Theory: The Logic of Science
https://www.amazon.com/dp/0521592712/?tag=pfamazon01-20
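A minimal sketch of this point in Python: given a prior and likelihoods as input, the posterior follows mechanically from Bayes' theorem. The hypotheses and all numbers below are made up for illustration.

```python
# Bayes' theorem as a mechanical rule: same inputs, same output,
# no judgment calls once prior and likelihood are fixed.

def bayes_update(prior, likelihood):
    """Return the posterior P(H|D) given P(H) and P(D|H)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two made-up hypotheses about a coin: fair vs. biased towards heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihood_heads = {"fair": 0.5, "biased": 0.9}  # P("heads" | H)

print(bayes_update(prior, likelihood_heads))
# {'fair': 0.357..., 'biased': 0.642...}
```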
 
  • #73
Jaynes was not a true Bayesian. He was an objective Bayesian.
 
  • #74
atyy said:
Jaynes was not a true Bayesian. He was an objective Bayesian.
Only an objective Bayesian is a good Bayesian. :smile:
 
  • #76
Demystifier said:
Bayesian probability gives strict rules for determining probability when certain knowledge is given. These rules are perfectly objective, in the sense that you can program a computer to follow these rules and give Bayesian probability as the output. If anything is "subjective" about Bayesian probability, then it is knowledge itself. But all science (probabilistic or not) involves knowledge (e.g. knowledge obtained as a result of measurement), so Bayesian probability is not more subjective than science in general. Such a view of probability is defended in much more detail in
Jaynes - Probability Theory: The Logic of Science
https://www.amazon.com/dp/0521592712/?tag=pfamazon01-20

Well, there is a distinction between (some) Bayesians and (most) frequentists when it comes to what sorts of uncertainty can be described by probability. (Some) Bayesians believe that any time you are uncertain about what is true, it is appropriate to express your degree of uncertainty using probability. Frequentists believe that probability should only be applied to repeatable events (like coin tosses). Applying probability to events that only happen once is perfectly fine if probability is interpreted subjectively, but doesn't make sense if probability is interpreted as relative frequency. (Although, I suppose you could turn any uncertainty into relative frequency if you consider the right type of ensemble.)
 
  • #77
stevendaryl said:
Well, there is a distinction between (some) Bayesians and (most) frequentists when it comes to what sorts of uncertainty can be described by probability. (Some) Bayesians believe that any time you are uncertain about what is true, it is appropriate to express your degree of uncertainty using probability. Frequentists believe that probability should only be applied to repeatable events (like coin tosses). Applying probability to events that only happen once is perfectly fine if probability is interpreted subjectively, but doesn't make sense if probability is interpreted as relative frequency. (Although, I suppose you could turn any uncertainty into relative frequency if you consider the right type of ensemble.)
I agree with this, except with the word "subjectively". Let me give you an example. If I give you one guzilamba with two possible states called gutu and baka, what is the probability that it will be in the state gutu?

Now if you are rational, your reasoning will be something like this:
- I have no idea what a guzilamba is, let alone gutu and baka. But I do know that there are two possible states, one of which is called gutu, and I have no rational reason to prefer one state over the other. Therefore, from what I know, it is rational to assign probability p=1/2 to gutu. Therefore the answer is 1/2.

Here the crucial word is rational. You can even program a computer to arrive at this rational answer, and in this sense it is not subjective.
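A minimal sketch of such a program, using the made-up state names from the post: with nothing to distinguish the states, the indifference rule assigns them equal probability.

```python
# Indifference rule: with no information distinguishing the named
# states, assign each of them the same probability.

def indifference_prior(states):
    return {s: 1.0 / len(states) for s in states}

print(indifference_prior(["gutu", "baka"]))
# {'gutu': 0.5, 'baka': 0.5}
```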
 
  • #78
Demystifier said:
I have no rational reason to prefer one state over the other. Therefore, from what I know, it is rational to assign probability p=1/2 to gutu. Therefore the answer is 1/2.
No. There is no rational reason to treat both states as equally likely unless you know what gutu and baka mean. Thus it is irrational to assign a probability of 1/2.

This is a case of epistemic uncertainty, and it is regarded as a mistake in modern uncertainty modeling to model it by equal probabilities.
 
  • #79
Demystifier said:
I agree with this, except with the word "subjectively". Let me give you an example. If I give you one guzilamba with two possible states called gutu and baka, what is the probability that it will be in the state gutu?

Now if you are rational, your reasoning will be something like this:
- I have no idea what a guzilamba is, let alone gutu and baka. But I do know that there are two possible states, one of which is called gutu, and I have no rational reason to prefer one state over the other. Therefore, from what I know, it is rational to assign probability p=1/2 to gutu. Therefore the answer is 1/2.

Here the crucial word is rational. You can even program a computer to arrive at this rational answer, and in this sense it is not subjective.

I'm not sure that these priors are unique. Suppose I tell you further that there are two types of baka states: baka-A and baka-B. Then would you say that:
  1. There is probability 1/3 of being in state gutu, baka-A, or baka-B.
  2. There is probability 1/2 of being in state gutu or baka, and if you are in state baka, there is probability 1/2 of being in baka-A or baka-B.
One way of modeling gives probabilities [itex]\frac{1}{3}, \frac{1}{3}, \frac{1}{3}[/itex] for (gutu, baka-A, baka-B). The other way of modeling gives probabilities [itex]\frac{1}{2}, \frac{1}{4}, \frac{1}{4}[/itex].

It becomes even more ambiguous if I say "A guzilamba has an associated property, called its butu-value, which can take on any real value between 0 and 1". Now what's the probability that a random guzilamba has a butu-value of less than 1/2?

You could model it using a flat distribution, which might be rational, since you don't know which values are more likely than which others. In that case you would conclude that the answer is "1/2". But alternatively, you could define (for example) [itex]\theta = \sin^{-1}(\beta)[/itex], where [itex]\beta[/itex] is the butu-value. Isn't it just as rational to assume that [itex]\theta[/itex] has a flat distribution on [itex][0, \frac{\pi}{2}][/itex]? But that's a different prior.
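A quick Monte Carlo sketch of this ambiguity (illustrative only): the two "flat" priors give different answers to the same question about the butu-value.

```python
# Two "flat" priors for the butu-value beta give different answers
# to P(beta < 1/2): flat in beta, or flat in theta = arcsin(beta).
import math
import random

N = 1_000_000

# Prior 1: beta uniform on [0, 1].
p1 = sum(random.random() < 0.5 for _ in range(N)) / N

# Prior 2: theta uniform on [0, pi/2], so beta = sin(theta).
p2 = sum(math.sin(random.uniform(0, math.pi / 2)) < 0.5
         for _ in range(N)) / N

print(p1)  # ~0.500
print(p2)  # ~0.333, since arcsin(1/2)/(pi/2) = (pi/6)/(pi/2) = 1/3
```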
 
  • #80
stevendaryl said:
I'm not sure that these priors are unique. Suppose I tell you further that there are two types of baka states: baka-A and baka-B. Then would you say that:
  1. There is probability 1/3 of being in state gutu, baka-A, or baka-B.
  2. There is probability 1/2 of being in state gutu or baka, and if you are in state baka, there is probability 1/2 of being in baka-A or baka-B.
One way of modeling gives probabilities [itex]\frac{1}{3}, \frac{1}{3}, \frac{1}{3}[/itex] for (gutu, baka-A, baka-B). The other way of modeling gives probabilities [itex]\frac{1}{2}, \frac{1}{4}, \frac{1}{4}[/itex].
Well, I am a human and as such I am subjective and not always rational, so I could not decide easily between [itex]\frac{1}{3}, \frac{1}{3}, \frac{1}{3}[/itex] and [itex]\frac{1}{2}, \frac{1}{4}, \frac{1}{4}[/itex]. But if you program your computer to decide, it will decide without any problems. What will its decision be? It depends on the program, but if the programmed algorithm says that things with different names have equal probabilities, then the result is [itex]\frac{1}{3}, \frac{1}{3}, \frac{1}{3}[/itex]. Now I myself have some additional information (from experience I know that things called something-A and something-B are often subtypes of the same thing, so it might or might not mean that...), but the computer does not possess such additional vague information, so for the computer the task is easy. More to the point, whatever information a computer possesses is never vague, so for the computer the task is never ambiguous.
 
  • #81
Demystifier said:
Well, I am a human and as such I am subjective and not always rational, so I could not decide easily between [itex]\frac{1}{3}, \frac{1}{3}, \frac{1}{3}[/itex] and [itex]\frac{1}{2}, \frac{1}{4}, \frac{1}{4}[/itex]. But if you program your computer to decide, it will decide without any problems. What will its decision be? It depends on the program, but if the programmed algorithm says that things with different names have equal probabilities, then the result is [itex]\frac{1}{3}, \frac{1}{3}, \frac{1}{3}[/itex]. Now I myself have some additional information (from experience I know that things called something-A and something-B are often subtypes of the same thing, so it might or might not mean that...), but the computer does not possess such additional vague information, so for the computer the task is easy. More to the point, whatever information a computer possesses is never vague, so for the computer the task is never ambiguous.

Okay, but by that definition, nothing is subjective. For any subjective question, I can write a program that attempts to answer it, and call that answer objective. We can decide once and for all whether the Beatles were better than The Rolling Stones.
 
  • #82
A. Neumaier said:
No. There is no rational reason to treat both states as equally likely unless you know what gutu and baka mean. Thus it is irrational to assign a probability of 1/2.
You are imprisoned by savages who tell you that one of the words means "they will kill you" and the other means "they will release you", but they don't tell you which is which. Now they press you to choose: should they gutu you, or should they baka you? If you don't choose anything, they will kill you with certainty. What will you decide, gutu or baka? What is the rational choice? Is it rational to say "I choose nothing because I don't have sufficient data"?
 
  • #83
Demystifier said:
Only an objective Bayesian is a good Bayesian. :smile:

No Bayesian would say such a thing, unless he had an irrational prior :)
 
  • #84
stevendaryl said:
Okay, but by that definition, nothing is subjective. For any subjective question, I can write a program that attempts to answer it, and call that answer objective. We can decide once and for all whether the Beatles were better than The Rolling Stones.
Of course. The only subjective thing here is the choice of the program itself.
 
  • #85
Demystifier said:
Of course. The only subjective thing here is the choice of the program itself.

Well, that's the sense in which anything is subjective. The choice of how to think about (or model) a problem is subjective, and if any result depends on that choice, then I would call the result subjective.
 
  • #86
stevendaryl said:
Well, that's the sense in which anything is subjective. The choice of how to think about (or model) a problem is subjective, and if any result depends on that choice, then I would call the result subjective.
Yes, but the point is that this is a general feature of scientific modeling, not an exclusive property of Bayesian probability. Bayesian probability is not more subjective than any other method in theoretical science.
 
  • #87
Demystifier said:
What is the rational choice?
Each choice is rational. There is no associated probability.
 
  • #88
A. Neumaier said:
Each choice is rational. There is no associated probability.
But is one of them more rational than the other? No? Then what does it tell us about Bayesian probability? Or do you claim that Bayesian probability is not probability at all?
 
  • #89
Demystifier said:
But is one of them more rational than the other? No?
In the absence of further knowledge both choices are rational. There is no way to compare the quality of the choices except by waiting for the consequences. To make choices, no concept of probability is needed.
 
  • #90
A. Neumaier said:
In the absence of further knowledge both choices are rational. There is no way to compare the quality of the choices except by waiting for the consequences. To make choices, no concept of probability is needed.
OK, then let me try something completely different. I flip a coin twice, and I obtain the result:
heads, heads
What is the probability of getting heads? Can probability be assigned in that case?
 
  • #91
Demystifier said:
OK, then let me try something completely different. I flip a coin twice, and I obtain the result:
heads, heads
What is the probability of getting heads? Can probability be assigned in that case?
It is in [0,1].
 
  • #92
A. Neumaier said:
It is in [0,1].
So is there any case in science where one can assign definite probabilities, without performing an infinite number of experiments?
 
  • #93
A. Neumaier said:
In the absence of further knowledge both choices are rational. There is no way to compare the quality of the choices except by waiting for the consequences. To make choices, no concept of probability is needed.

It's not needed, but probabilities provide a coherent way to reason about uncertainties. That's one of the arguments in favor of the axioms of probability: If you express your uncertainties in terms of probability, then you have a principled way to combine uncertainties. If you don't, then you can become the victim of a "Dutch book" scam:

https://en.wikipedia.org/wiki/Dutch_book
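A minimal numeric sketch of a Dutch book, with made-up numbers: an agent whose prices for an event and its complement sum to more than 1 loses money whatever happens.

```python
# Dutch book: the agent prices a $1 ticket on "rain" at 0.60 and a
# $1 ticket on "no rain" at 0.60. Since the prices sum to 1.20 but
# exactly one ticket ever pays $1, buying both guarantees a loss.

price_rain, price_no_rain = 0.60, 0.60   # incoherent: prices sum > 1
cost = price_rain + price_no_rain        # agent pays for both tickets

for outcome in ("rain", "no rain"):
    payout = 1.0                         # exactly one ticket pays off
    print(f"{outcome}: net = {payout - cost:+.2f}")
# Both lines print net = -0.20: a sure loss, the "Dutch book".
```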
 
  • #94
Demystifier said:
So is there any case in science where one can assign definite probabilities, without performing an infinite number of experiments?
Assuming that probabilities have definite (infinitely accurate) values is as fictitious as assuming the length of a stick to have a definite (infinitely accurate) value. Science is the art of valid approximation, not the magic of assigning definite values.

One uses statistics to assign uncertain probabilities according to the standard rules, and one can turn these probabilities into simple numbers by ignoring the uncertainty. That's the scientific practice, and it is what theory, and standards such as those of NIST, say one should do.
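A minimal sketch of this practice, with made-up counts: report the estimated probability together with its statistical uncertainty, and drop the uncertainty when a single number is wanted.

```python
# Estimate a coin's heads probability from finite statistics and
# attach a standard error; the counts are made up for illustration.
import math

heads, flips = 527, 1000
p_hat = heads / flips
stderr = math.sqrt(p_hat * (1 - p_hat) / flips)

print(f"p = {p_hat:.3f} +/- {stderr:.3f}")  # p = 0.527 +/- 0.016
# Ignoring the "+/-" term turns the uncertain probability into a
# simple number, as described above.
```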
 
  • #95
stevendaryl said:
probabilities provide a coherent way to reason about uncertainties.
Only about aleatoric uncertainty. This is the consensus of modern researchers in uncertainty quantification. See the links given earlier.
 
  • #96
A. Neumaier said:
Assuming that probabilities have definite (infinitely accurate) values is as fictitious as assuming the length of a stick to have a definite (infinitely accurate) value. Science is the art of valid approximation, not the magic of assigning definite values.

One uses statistics to assign uncertain probabilities according to the standard rules, and one can turn these probabilities into simple numbers by ignoring the uncertainty. That's the scientific practice, and it is what theory, and standards such as those of NIST, say one should do.
OK, then please use this scientific practice to determine the probability in my post #90.
 
  • #97
stevendaryl said:
If you don't, then you can become the victim
You don't need to teach me how to reason successfully about uncertainty. Our company http://www.dagopt.com/en/home sells customized software that allows our industrial customers to save lots of money by making best use of the information available. They wouldn't pay us if they weren't satisfied with our service.

It is a big mistake to use probabilities as a substitute for ignorance, simply because with probabilities "you have a principled way to combine uncertainties".
 
  • #98
Demystifier said:
OK, then please use this scientific practice to determine the probability in my post #90.
Respectable scientists are not such fools as to determine probabilities from the information you gave.
 
  • #99
A. Neumaier said:
Respectable scientists are not such fools as to determine probabilities from the information you gave.
OK, what is the minimal amount of information that would trigger you to determine probabilities? How many coin flips is the minimum?
 
  • #100
A. Neumaier said:
Only about aleatoric uncertainty. This is the consensus of modern researchers in uncertainty quantification. See the links given earlier.

I think there are times when the different types of uncertainty have to be combined. For example, if you're taking some action that's never been done before, such as a new particle experiment, or sending someone to Mars, or whatever, some of the uncertainties are statistical, and some of the uncertainties are epistemic: we may not know all the relevant laws of physics or conditions.

So suppose that there are two competing physical theories, theory [itex]A[/itex] and theory [itex]B[/itex]. If [itex]A[/itex] is correct, then our mission has only a 1% chance of disaster, and a 99% chance of success. If [itex]B[/itex] is correct, then our mission has a 99% chance of disaster and 1% chance success. But we don't know whether [itex]A[/itex] or [itex]B[/itex] is correct. What do we do? You could say that we should postpone making any decision until we know which theory is correct, but we may not have that luxury. It seems to me that in making a decision, you have to take into account both types of uncertainty. But how to combine them, if you don't use probability? I guess you could say that you're just screwed in that case, but surely there are extreme cases where we know what to do: If [itex]A[/itex] is an accepted, mainstream, well-tested theory and [itex]B[/itex] is just somebody's hunch, then we would go with [itex]A[/itex].
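A worked sketch of combining the two kinds of uncertainty in this example via the law of total probability; the prior weights on the theories are made-up illustrative numbers.

```python
# Law of total probability: weight each theory's (aleatoric) chance
# of disaster by the (epistemic) credence assigned to that theory.

p_theory = {"A": 0.9, "B": 0.1}        # made-up credences
p_disaster = {"A": 0.01, "B": 0.99}    # per-theory chance of disaster

total = sum(p_theory[t] * p_disaster[t] for t in p_theory)
print(f"P(disaster) = {total:.3f}")    # 0.9*0.01 + 0.1*0.99 = 0.108
```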
 
  • #101
Demystifier said:
OK, what is the minimal amount of information that would trigger you to determine probabilities? How many coin flips is the minimum?
Are you so ignorant about statistical practice? It depends on the accuracy you want.
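A rough sketch of how the required number of flips scales with the desired accuracy, using the worst-case standard error [itex]1/(2\sqrt{n})[/itex] of an estimated proportion:

```python
# The worst-case standard error of an estimated proportion is
# 1/(2*sqrt(n)), so n >= 1/(4*eps^2) flips are needed to reach a
# target accuracy eps.
import math

def flips_needed(eps):
    return math.ceil(1.0 / (4.0 * eps ** 2))

for eps in (0.1, 0.01, 0.001):
    print(eps, flips_needed(eps))  # 25, 2500, 250000
```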
 
  • #102
A. Neumaier said:
Respectable scientists are not such fools as to determine probabilities from the information you gave.

If they have the luxury of performing more tests, then they can put off making any kind of decision until they have more data. But at some point, you have to make a decision based on the data that you have.
 
  • #103
A. Neumaier said:
Are you so ignorant about statistical practice? It depends on the accuracy you want.
You are smart, but I am smart too. :wink:
I want the minimal possible accuracy that will trigger you to assign some definite numbers as probabilities.
 
  • #104
A. Neumaier said:
Are you so ignorant about statistical practice? It depends on the accuracy you want.

There is never a point when you know that your probability estimate is accurate. There is never a point when you can say with certainty: "The probability of heads is between [itex]45\%[/itex] and [itex]55\%[/itex]."
 
  • #105
stevendaryl said:
There is never a point when you know that your probability estimate is accurate. There is never a point when you can say with certainty: "The probability of heads is between [itex]45\%[/itex] and [itex]55\%[/itex]."
In other words, all you have is the probability of probability of probability of probability of ...
 
