scinoob
Hello everybody. This is my first post here and I hope I'm not asking a question that's been addressed already (I did try to use the search function, but couldn't find what I'm looking for).
Both Bayes' theorem and the law of large numbers are mathematical theorems derived from Kolmogorov's axioms. I've been thinking about how to relate the two, and I'm not sure how it can be done.
Let's say I have a coin with an unknown bias towards heads, Θ. I start with a uniform prior on Θ (between 0 and 1, obviously) and keep flipping the coin, updating the distribution with Bayes' theorem after each flip. Suppose that after 300 flips the posterior looks like a fairly peaked unimodal distribution centered at 0.3. Then I decide to flip the coin another, say, 10,000 times without updating the posterior any further, and somebody asks: "approximately what percentage of heads do you expect to see over those 10,000 flips?" What I'm likely to do (at least following standard practice) is compute the expected value of a single flip under the posterior I obtained after the 300 flips. If I then go ahead and flip the coin 10,000 times, can I expect the observed percentage of heads to be close to that analytic value? And if I keep flipping, can I expect the percentage of heads to converge to that expected value by appealing to the LLN?
If the answer is 'no', is there another way to relate Bayes' theorem to the LLN, ideally using my coin example?