Maximum A Posteriori: Countable Hypotheses

In summary, when the hypotheses $H_1, H_2, \dots, H_N$ are equally likely a priori, the MAP rule for recovering the unobserved parameter $\theta_m$ reduces to the maximum-likelihood decision rule $m_0 = \operatorname{arg\,max}_m p(x \mid H_m)$; the same principle extends to a countably infinite set of hypotheses once a proper prior is placed on them.
  • #1
OhMyMarkov
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2, \dots, H_N$, all equally likely a priori, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \operatorname{arg\,max}_m p(x \mid H_m)$.

What if there are infinitely many hypotheses? (The set is countable but infinite.)
 
  • #2
OhMyMarkov said:
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2, \dots, H_N$, all equally likely a priori, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \operatorname{arg\,max}_m p(x \mid H_m)$.

What if there are infinitely many hypotheses? (The set is countable but infinite.)

In principle there is no difference; if you want to know more, you will need to be more specific.

CB
 
  • #3
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior, with normal noise):

$H_0: X = \theta_0 + N$
$H_1: X = \theta_1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$).

Then the solution is $m_0 = \operatorname{arg\,max}_m p(x \mid H_m)$.

But what if there were infinitely many hypotheses, i.e., $\theta$ is a real variable? How do we estimate $\theta$?
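
To make the two-hypothesis rule above concrete, here is a minimal sketch in Python. The values $\theta_0 = 0$, $\theta_1 = 1$ and the noise standard deviation are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical values for illustration: two candidate means and a
# known noise standard deviation, small relative to |theta_1 - theta_0|.
thetas = np.array([0.0, 1.0])
sigma = 0.1

def ml_decision(x):
    """Return m0 = argmax_m p(x | H_m) for X = theta_m + N, N ~ Normal(0, sigma^2).

    With equal priors, this maximum-likelihood rule is also the MAP rule.
    """
    likelihoods = norm.pdf(x, loc=thetas, scale=sigma)
    return int(np.argmax(likelihoods))

print(ml_decision(0.93))  # -> 1: the observation is nearer theta_1
```

With equal variances this reduces to picking the $\theta_m$ nearest to $x$; and in the continuous limit, where $\theta$ ranges over the reals, the maximizer of $p(x \mid \theta)$ for Gaussian noise is simply $\hat{\theta} = x$.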
 
  • #4
OhMyMarkov said:
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior, with normal noise):

$H_0: X = \theta_0 + N$
$H_1: X = \theta_1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$).

Then the solution is $m_0 = \operatorname{arg\,max}_m p(x \mid H_m)$.

But what if there were infinitely many hypotheses, i.e., $\theta$ is a real variable? How do we estimate $\theta$?

In principle I see no difference between a finite and a countably infinite number of hypotheses, other than that you cannot simply pick the required hypothesis out of a finite list of likelihoods.

But you cannot have a completely disordered collection of hypotheses; there must be some logic to their ordering, so there will be some logic to the ordering of the likelihoods, and it is that logic that will allow you to find the hypothesis with the maximum likelihood.
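
As a toy illustration of that point, assume the hypothetical family $\theta_m = m$ for $m = 0, 1, 2, \dots$ with Gaussian noise (an example of ours, not prescribed by the thread). The likelihood is then unimodal in $m$, and that ordering structure lets us find the maximum without examining an infinite list:

```python
import math

SIGMA = 1.0  # assumed known noise standard deviation

def log_likelihood(x, m):
    """log p(x | H_m), up to an additive constant, for X = m + N with
    N ~ Normal(0, SIGMA^2) and the hypothetical family theta_m = m."""
    return -((x - m) ** 2) / (2 * SIGMA ** 2)

def ml_over_countable(x):
    """Walk m = 0, 1, 2, ... and stop at the first decrease.

    The 'logic in the ordering' here is that the log-likelihood is
    concave in m, so the first decrease marks the global maximum.
    """
    m = 0
    best = log_likelihood(x, 0)
    while log_likelihood(x, m + 1) > best:
        m += 1
        best = log_likelihood(x, m)
    return m

print(ml_over_countable(7.3))  # -> 7
```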

CB
 
  • #5


In this case, we can use the Maximum A Posteriori (MAP) principle to select the most probable hypothesis. The MAP principle takes into account both the likelihood of the data and the prior probability of each hypothesis, which allows us to incorporate prior knowledge or beliefs about the hypotheses into the decision-making process.

If there are countably infinitely many hypotheses, we can still apply the MAP principle by assigning a prior probability to each hypothesis, i.e., by placing a probability mass function over the hypothesis set whose probabilities sum to one. Then we select the hypothesis with the highest posterior probability, as sketched below.
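
As a sketch of how this can work, assume a geometric prior $p(H_m) = (1-Q)Q^m$ over $m = 0, 1, 2, \dots$ and a Gaussian likelihood with the hypothetical family $\theta_m = m$; both choices are illustrative, not prescribed by the thread:

```python
import math

Q = 0.5      # assumed geometric-prior parameter: p(H_m) = (1 - Q) * Q**m
SIGMA = 1.0  # assumed known noise standard deviation

def log_posterior(x, m):
    """log p(H_m | x) up to the normalising constant, for theta_m = m."""
    log_prior = math.log(1 - Q) + m * math.log(Q)
    log_lik = -((x - m) ** 2) / (2 * SIGMA ** 2)
    return log_prior + log_lik

def map_estimate(x, m_max=1000):
    """MAP index over m = 0 .. m_max; the quadratic log-likelihood
    eventually dominates the linear log-prior, so a finite scan suffices."""
    return max(range(m_max + 1), key=lambda m: log_posterior(x, m))

print(map_estimate(7.6))  # -> 7, whereas plain ML would pick m = 8
```

Because the normalising constant of Bayes' rule is common to all hypotheses, it is enough to maximize the unnormalized log-posterior, i.e., log-prior plus log-likelihood.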

However, it is important to note that the choice of prior probability distribution can greatly influence the final outcome. Therefore, it is crucial to carefully consider and justify the choice of prior distribution in order to make an informed decision.

In summary, while the MAP principle can still be applied to select the most probable hypothesis in the case of countably infinitely many hypotheses, it is important to carefully consider the prior probabilities assigned to each hypothesis in order to make a robust decision.
 

FAQ: Maximum A Posteriori: Countable Hypotheses

1. What is Maximum A Posteriori (MAP)?

Maximum A Posteriori (MAP) is a statistical method used to estimate the most probable value of a parameter or hypothesis by combining observed data with prior knowledge or beliefs. In other words, it is a way to find the most probable explanation for a given set of data.
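
In symbols, for observed data $x$:

$\hat{\theta}_{\text{MAP}} = \operatorname{arg\,max}_{\theta}\, p(\theta \mid x) = \operatorname{arg\,max}_{\theta}\, p(x \mid \theta)\, p(\theta),$

since the evidence $p(x)$ does not depend on $\theta$. With a flat prior, this reduces to the maximum-likelihood estimate.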

2. How does MAP differ from Maximum Likelihood (ML)?

MAP takes into account prior knowledge or beliefs about the parameter or hypothesis, while ML considers only the data. This makes MAP more robust in situations where data is limited or noisy, since the prior regularizes the estimate; with a flat (uniform) prior, MAP and ML coincide.
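
A small numerical illustration of the difference, assuming a conjugate Gaussian model with hypothetical values (likelihood $x_i \sim \mathcal{N}(\theta, \sigma^2)$, prior $\theta \sim \mathcal{N}(\mu_0, \tau^2)$):

```python
import numpy as np

# Hypothetical conjugate-Gaussian setup:
# likelihood x_i ~ Normal(theta, sigma^2), prior theta ~ Normal(mu0, tau^2).
sigma, mu0, tau = 1.0, 0.0, 0.5
x = np.array([2.1, 1.7, 2.4])  # a small, noisy sample

theta_ml = x.mean()  # maximum likelihood ignores the prior

# Closed-form MAP for this conjugate pair: a precision-weighted average
# of the sample mean and the prior mean.
n = len(x)
theta_map = (n / sigma**2 * x.mean() + mu0 / tau**2) / (n / sigma**2 + 1 / tau**2)

print(round(theta_ml, 3))   # 2.067
print(round(theta_map, 3))  # 0.886: pulled toward mu0 because data are few
```

With only three observations the prior dominates and pulls the MAP estimate well below the sample mean; as more data arrive, the two estimates converge.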

3. What is the role of countable hypotheses in MAP?

Countable hypotheses refer to a finite or countably infinite set of possible explanations for a given set of data. In MAP, these hypotheses are assigned prior probabilities based on prior knowledge or beliefs. The goal is to find the hypothesis with the highest posterior probability, which represents the most likely explanation for the data.
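
Concretely, by Bayes' rule the posterior of each hypothesis in a countable family is

$p(H_m \mid x) = \dfrac{p(x \mid H_m)\, p(H_m)}{\sum_{k} p(x \mid H_k)\, p(H_k)},$

where the sum runs over the whole family and the priors must sum to one. Since the denominator is the same for every $m$, maximizing the posterior amounts to maximizing the numerator $p(x \mid H_m)\, p(H_m)$.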

4. How do you choose the prior probabilities for countable hypotheses in MAP?

The choice of prior probabilities is based on the scientist's prior knowledge or beliefs about the hypotheses. In some cases, these probabilities can be informed by previous studies or expert opinions. It is important to note that the choice of prior probabilities can greatly influence the results of the MAP estimation.

5. What are some applications of MAP in scientific research?

MAP has various applications in different fields of science, including machine learning, image processing, and bioinformatics. It is commonly used for parameter estimation, classification, and prediction tasks. For example, in bioinformatics, MAP can be used to identify the most likely gene expression patterns from a large dataset.
