Tricky Bayesian question using posterior predictive distributions

In summary: Prob(Person A wins|x) = Prob(Z>x) + 0.5*Prob(Z=x); summing the posterior predictive p(z|x) over z > x telescopes to Prob(Z>x) = (2x-1)/(4x-1), giving the final answer Prob(Person A wins|x) = 2(2x-1)^2/[(4x-1)(4x-3)].
  • #1
lauren325
A game is played using a biased coin, with unknown p. Person A and Person B flip this coin until they get a head. The person who tosses a head first wins. If there is a tie, where both people took an equal number of tosses to flip a head, then a fair coin is flipped once to determine the winner.

Person A plays until they get a head, then Person B plays afterwards.

Using the Jeffreys prior for the geometric distribution and the posterior predictive distribution for Person B's score given Person A's score, what is the probability that Person A will win?

My working so far:

X = the number of tosses until Person A gets a head
Z = the number of tosses until Person B gets a head

The Jeffreys prior for the geometric distribution is:

p(p) ∝ p^-1 * (1-p)^-0.5
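
(For reference: the Fisher information of the geometric distribution is I(p) = 1/(p^2 * (1-p)), and the Jeffreys prior is proportional to sqrt(I(p)); note that this prior is improper.)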

The posterior predictive distribution for Person B's score given Person A's score is:

p(z|x) = 2(2x-1)/[(2x+2z-1)(2x+2z-3)]
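
This follows by combining the prior with the geometric likelihood p*(1-p)^(x-1), which gives the posterior p|x ~ Beta(1, x - 1/2), and then integrating the geometric pmf for Z against that posterior:

p(z|x) = ∫ p*(1-p)^(z-1) * Beta(p; 1, x - 1/2) dp = B(2, x+z-3/2)/B(1, x-1/2) = 2(2x-1)/[(2x+2z-1)(2x+2z-3)]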

Finally:

Prob(Person A wins|x) = Prob(Z>x) + Prob(Z=x)*0.5

Now Prob(Z=x) is computed by setting z = x in the formula p(z|x).

But how do I compute Prob(Z>x)?

By the way, the final answer is:

Prob(Person A wins|x) = 2(2x-1)^2/[(4x-1)(4x-3)]
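
As a quick sanity check, x = 1 gives 2(1)^2/(3*1) = 2/3: if Person A got a head on the first toss, Person B can at best tie, and indeed p(z=1|x=1) = 2/3, so Prob(Person A wins|x=1) = 1/3 + 0.5*(2/3) = 2/3.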
 
  • #2


To compute Prob(Z>x), sum the posterior predictive distribution over all z greater than x. Note that the geometric CDF 1 - (1-p)^z cannot be used directly here, because p is unknown and has already been integrated out of p(z|x).

The key observation is that p(z|x) admits a telescoping partial-fraction decomposition:

p(z|x) = 2(2x-1)/[(2x+2z-3)(2x+2z-1)] = (2x-1) * [1/(2x+2z-3) - 1/(2x+2z-1)]

Summing from z = x+1 to infinity, consecutive terms cancel and only the first survives:

Prob(Z>x) = (2x-1)/(4x-1)

Setting z = x in p(z|x) gives the probability of a tie:

Prob(Z=x) = 2(2x-1)/[(4x-1)(4x-3)]

Putting the pieces together:

Prob(Person A wins|x) = Prob(Z>x) + Prob(Z=x)*0.5

= (2x-1)/(4x-1) + (2x-1)/[(4x-1)(4x-3)]

= (2x-1)*(4x-2)/[(4x-1)(4x-3)]

= 2(2x-1)^2/[(4x-1)(4x-3)]

which matches the stated final answer.
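
For anyone who wants to check this numerically, here is a minimal Python sketch (the value x = 3 is just an arbitrary example; everything else comes from the formulas above):

```python
from fractions import Fraction

def pmf(z, x):
    # Posterior predictive p(z|x) derived in the thread
    return Fraction(2 * (2 * x - 1), (2 * x + 2 * z - 1) * (2 * x + 2 * z - 3))

x = 3  # arbitrary example: Person A needed 3 tosses

# Partial sum of the telescoping series for P(Z > x)
tail = sum(pmf(z, x) for z in range(x + 1, x + 10_001))
print(float(tail), float(Fraction(2 * x - 1, 4 * x - 1)))  # ~0.4545 vs 5/11

# P(Person A wins | x) = P(Z > x) + 0.5 * P(Z = x)
win = Fraction(2 * x - 1, 4 * x - 1) + pmf(x, x) / 2
print(win, Fraction(2 * (2 * x - 1) ** 2, (4 * x - 1) * (4 * x - 3)))  # both 50/99
```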
 

FAQ: Tricky Bayesian question using posterior predictive distributions

What is a posterior predictive distribution in Bayesian statistics?

A posterior predictive distribution is the probability distribution of future observations given the data already observed, with the unknown parameters integrated out using the prior and the likelihood. It is used to make predictions about future outcomes based on the available data and prior beliefs.

How is a posterior predictive distribution calculated?

A posterior predictive distribution is calculated in two steps: first, the posterior distribution of the parameters is obtained by multiplying the likelihood of the observed data by the prior and normalizing; then the sampling distribution of new data is averaged (integrated) over that posterior. This can be done analytically in conjugate models or through numerical approximations such as Markov chain Monte Carlo (MCMC) methods.
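
As a concrete illustration using this thread's setup (geometric likelihood with Jeffreys prior, so the posterior is Beta(1, x - 1/2)), the posterior predictive can also be approximated by simulation: draw p from the posterior, then draw a new observation for each draw. A minimal sketch in Python with NumPy, again taking x = 3 as an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(42)
x = 3  # arbitrary example: Person A's observed number of tosses

# Step 1: sample p from the posterior Beta(1, x - 1/2)
p = rng.beta(1.0, x - 0.5, size=1_000_000)

# Step 2: for each posterior draw, sample Person B's score from Geometric(p);
# pooling the draws approximates the posterior predictive of Z
z = rng.geometric(p)

# Monte Carlo estimate of P(Person A wins | x) versus the closed form
est = np.mean(z > x) + 0.5 * np.mean(z == x)
exact = 2 * (2 * x - 1) ** 2 / ((4 * x - 1) * (4 * x - 3))
print(est, exact)  # both close to 50/99 ~ 0.505
```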

What is the difference between prior and posterior predictive distributions?

The prior predictive distribution represents our beliefs about the data before any observations are made, while the posterior predictive distribution represents our updated beliefs after taking into account the observed data. In other words, the prior predictive distribution is based on our assumptions and prior knowledge, while the posterior predictive distribution is based on both our prior beliefs and the new data.

How do posterior predictive distributions help in Bayesian inference?

Posterior predictive distributions play a crucial role in Bayesian inference because they combine prior beliefs and observed data when predicting future outcomes. Since parameter uncertainty is averaged over rather than replaced by a single point estimate, the resulting predictions reflect both the randomness of future data and our remaining uncertainty about the parameters.

Can posterior predictive distributions be used for any type of data?

Yes, posterior predictive distributions can be used for any type of data as long as the data can be represented using a probability distribution. This includes continuous, discrete, and categorical data. However, the choice of prior distribution and the model used to calculate the posterior predictive distribution may vary depending on the type of data being analyzed.
