Question about Bayesian Inference, Posterior Distribution

In summary, the conversation discusses how to find the probability of a specific event $E$, which consists of a combination of outcomes $k_i$ for $i\in I$. A first suggestion is to substitute the values of $k_i$ and $p_i$ into the given formula. However, each $p_i$ is a random variable with a Beta posterior, so the goal is to find the marginal probability $P(E)$ rather than the conditional probability $P(E \mid p_1,\dots,p_{|I|})$.
  • #1
thehairygorilla
I have a posterior distribution for \(\displaystyle p_i\), which is based on a Beta prior and some data from a binomial distribution.

I have another procedure:

$P(E)=\prod_{i \in I} p_i^{k_i}(1-p_i)^{1-k_i}$

which gives me the probability of a specific event, i.e. a particular pattern of successes and failures over the set $I$, in a model. Given the posterior distribution for $p_i$, how do I find \(\displaystyle P(E)\)?
 
  • #2
thehairygorilla said:
I have a posterior distribution for \(\displaystyle p_i\), which is based on a Beta prior and some data from a binomial distribution.

I have another procedure:

$P(E)=\prod_{i \in I} p_i^{k_i}(1-p_i)^{1-k_i}$

which gives me the probability of a specific event, i.e. a particular pattern of successes and failures over the set $I$, in a model. Given the posterior distribution for $p_i$, how do I find \(\displaystyle P(E)\)?

Hi thehairygorilla, welcome to MHB!

The event $E$ consists of a combination of $k_i$ for $i\in I$.
To find the probability $P(E)$ we would fill in those $k_i$ and the given $p_i$ in the formula, wouldn't we?
 
  • #3
I like Serena said:
Hi thehairygorilla, welcome to MHB!

The event $E$ consists of a combination of $k_i$ for $i\in I$.
To find the probability $P(E)$ we would fill in those $k_i$ and the given $p_i$ in the formula, wouldn't we?

Not really: $p_i$ is a random variable. Better notation would be $P(E \mid p_1,\dots,p_{|I|})=\prod_{i \in I} p_i^{k_i}(1-p_i)^{1-k_i}$, and I am trying to find the marginal probability $P(E)$. Plugging the $p_i$ into the formula would leave $P(E \mid p_1,\dots,p_{|I|})$ expressed in terms of those random variables rather than give a number.
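
Concretely, the marginal probability is the posterior expectation

$P(E)=\mathbb{E}\left[\prod_{i \in I} p_i^{k_i}(1-p_i)^{1-k_i}\right]$

taken over the posterior of the $p_i$. Below is a minimal Monte Carlo sketch of that expectation, assuming independent $\mathrm{Beta}(a_i, b_i)$ posteriors; all parameter values are placeholders, not values from the thread:

```python
import numpy as np

# Hypothetical posterior parameters: p_i ~ Beta(a_i, b_i), assumed independent.
# These numbers are placeholders, not values from the thread.
a = np.array([3.0, 5.0, 2.0])
b = np.array([4.0, 1.0, 6.0])
k = np.array([1, 0, 1])  # the event E: one success/failure indicator k_i per i

rng = np.random.default_rng(0)
n_samples = 100_000

# Draw joint posterior samples, shape (n_samples, |I|).
p = rng.beta(a, b, size=(n_samples, len(a)))

# Evaluate the conditional probability P(E | p_1, ..., p_|I|) per sample,
# then average to estimate P(E) = E[ prod_i p_i^{k_i} (1 - p_i)^{1 - k_i} ].
cond = np.prod(p**k * (1 - p)**(1 - k), axis=1)
p_event_mc = cond.mean()

# Closed form under independence: E[p_i] = a_i / (a_i + b_i).
m = a / (a + b)
p_event_exact = np.prod(np.where(k == 1, m, 1 - m))

print(f"Monte Carlo: {p_event_mc:.4f}  exact: {p_event_exact:.4f}")
```

With independent Beta posteriors the expectation also factorizes exactly, since $\mathbb{E}[p_i]=a_i/(a_i+b_i)$ and $\mathbb{E}[1-p_i]=b_i/(a_i+b_i)$; the script compares the Monte Carlo estimate against that closed form.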
 

FAQ: Question about Bayesian Inference, Posterior Distribution

What is Bayesian Inference?

Bayesian Inference is a statistical method used to update our beliefs or knowledge about a particular event or hypothesis based on new evidence or data. It is based on Bayes' theorem, which calculates the probability of an event occurring given prior knowledge and new evidence.
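
In symbols, Bayes' theorem reads

$P(H \mid D) = \dfrac{P(D \mid H)\,P(H)}{P(D)}$

where $P(H)$ is the prior, $P(D \mid H)$ is the likelihood, $P(H \mid D)$ is the posterior, and $P(D)$ is the normalizing constant (the evidence).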

How is Bayesian Inference different from other statistical methods?

Unlike methods that rely solely on the observed data and sample size, Bayesian Inference also incorporates prior beliefs or knowledge about the event or hypothesis. This lets us update our beliefs as more evidence is gathered, making it a more flexible and dynamic approach to data analysis.

What is the posterior distribution in Bayesian Inference?

The posterior distribution in Bayesian Inference is the final probability distribution that represents our updated beliefs about the event or hypothesis after incorporating the prior knowledge and new evidence. It is calculated by multiplying the prior distribution and the likelihood function and then normalizing the result.
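
For example, in the conjugate setting from the question above, a $\mathrm{Beta}(\alpha, \beta)$ prior on a success probability $p$ combined with a binomial likelihood of $k$ successes in $n$ trials gives the posterior

$p \mid \text{data} \sim \mathrm{Beta}(\alpha + k,\; \beta + n - k)$

so the normalization never has to be computed explicitly.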

How is the prior distribution chosen in Bayesian Inference?

The choice of the prior distribution depends on the available prior knowledge or beliefs about the event or hypothesis. It can be chosen based on expert opinions, previous studies, or data from a related event. However, it should be chosen carefully to avoid bias in the final results.
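
For instance, a $\mathrm{Beta}(1,1)$ prior, which is uniform on $[0,1]$, is a common choice for a probability parameter when little is known in advance, while a prior such as $\mathrm{Beta}(10,10)$ encodes a belief that the parameter is likely to be near $0.5$.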

What are the advantages of using Bayesian Inference?

Some of the advantages of using Bayesian Inference include its ability to incorporate prior knowledge, its flexibility in handling small sample sizes, and its ability to update beliefs as more evidence is gathered. It also provides a way to quantify uncertainty in our beliefs and can handle complex data and models.
