Rasalhague
Here, to further test my understanding, is an attempt to apply the measure-theoretic definition of a probability space to the binomial distribution. All comments welcome!
Let (R,D,O) be a probability space:
[tex]R = \left \{ 0,1 \right \}[/tex]
[tex]D = 2^R[/tex]
[tex]O:D\rightarrow[0,1] \; | \; O(\left \{ 1 \right \})=p[/tex]
Let (S,E,P) be another probability space:
[tex]S = \left \{ 0,1 \right \}^n[/tex]
[tex]E = 2^S[/tex]
[tex]P:E\rightarrow[0,1] \; | \; P(\left \{ s \right \})=p^{\sum_{i=1}^{n}s_i}(1-p)^{n-\sum_{i=1}^{n}s_i}[/tex]
Let (T,F,Q) be a third probability space:
[tex]T=\left \{ 0,1,...,n \right \}[/tex]
[tex]F = 2^T[/tex]
[tex]Q:F\rightarrow[0,1] \; | \; Q(\left \{ t \right \})= \binom{n}{t}p^t(1-p)^{n-t}[/tex]
Let X be a random variable:
[tex]X:S\rightarrow T \; | \; X(s)=\sum_{i=1}^{n}s_i[/tex]
Then the probability measure Q belongs to the class of (probability) distributions called binomial distributions. The sample space is S, the events are E, and the probability measure on them is P; T is the observation space, F the "observed events", and Q the distribution of X. We can interpret p as the probability of success on one trial, n as the number of trials, and Q({t}) as the probability of exactly t successes in n trials.
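The construction above can be checked numerically. Assuming each coordinate of s is an independent Bernoulli(p) trial, so that P({s}) = p^(Σ s_i)(1−p)^(n−Σ s_i), the following Python sketch (with n = 4, p = 0.3 chosen arbitrarily) verifies that Q is the pushforward of P under X, i.e. Q({t}) = Σ P({s}) over all s with X(s) = t, and that this agrees with the closed-form binomial pmf:

```python
from itertools import product
from math import comb

n, p = 4, 0.3  # arbitrary illustrative values

# Sample space S = {0,1}^n, with P({s}) = p^(sum s) * (1-p)^(n - sum s)
S = list(product([0, 1], repeat=n))
P = {s: p**sum(s) * (1 - p)**(n - sum(s)) for s in S}

# Push P forward through X(s) = sum_i s_i to get Q on T = {0, 1, ..., n}
Q_pushforward = {t: sum(P[s] for s in S if sum(s) == t) for t in range(n + 1)}

# Closed-form binomial pmf: Q({t}) = C(n,t) p^t (1-p)^(n-t)
Q_formula = {t: comb(n, t) * p**t * (1 - p)**(n - t) for t in range(n + 1)}

for t in range(n + 1):
    assert abs(Q_pushforward[t] - Q_formula[t]) < 1e-12
```

The assertions passing for every t confirms that the binomial pmf really is the image of the product measure P under the counting map X.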
The components of (R,D,O) have no special name, but if we define another random variable, W, as the identity function on R, then the distribution of W is O, the Bernoulli distribution. Equivalently, the Bernoulli distribution is a binomial distribution with n = 1.
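As a quick sanity check of that last equivalence (a minimal Python sketch, with p = 0.3 chosen arbitrarily), the binomial pmf with n = 1 coincides with the Bernoulli pmf:

```python
from math import comb

p = 0.3  # arbitrary illustrative value

# Binomial pmf with n = 1: Q({t}) = C(1,t) p^t (1-p)^(1-t) for t in {0, 1}
binom_n1 = {t: comb(1, t) * p**t * (1 - p)**(1 - t) for t in (0, 1)}

# Bernoulli pmf: O({1}) = p, O({0}) = 1 - p
bernoulli = {0: 1 - p, 1: p}

assert binom_n1 == bernoulli
```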
Footnote: I think there's a more subtle variant of this idea, which I hope to get to eventually, where the observation space is taken to be the real numbers, and F the Borel algebra (the sigma algebra generated by the open sets), allowing one to use general formulas for defining expectation, and so forth, that apply to both continuous and discrete cases.