MHB Maximize A Posteriori: Countable Hypotheses

  • Thread starter: OhMyMarkov
  • Tags: Maximum
AI Thread Summary
The discussion centers on the decision rule for selecting an unobserved parameter $\theta_m$ from multiple hypotheses $H_1, H_2, \dots, H_N$ of equal likelihood, specifically using the criterion $m_0 = \arg \max_m p(x|H_m)$. Participants explore the implications of having infinitely many hypotheses, questioning how to estimate $\theta$ in such cases. One contributor argues that there is no fundamental difference between finite and countably infinite hypotheses, emphasizing the need for a logical structure to the hypotheses to determine maximum likelihood. The conversation highlights the importance of order and logic in the likelihoods associated with hypotheses. Ultimately, the discussion underscores the complexities of hypothesis selection in statistical inference.
OhMyMarkov
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2,\dots ,H_N$, of equal prior probability, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \operatorname{arg\,max}_m p(x|H_m)$.

What if there are infinitely many hypotheses? (the case is countable but infinite)
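For the finite case, the decision rule above is easy to sketch in code. The following is a minimal sketch, assuming (as the thread later does) that $X = \theta_m + N$ with $N$ a zero-mean Gaussian of known variance $\sigma^2$; the value $\sigma = 1$ is an illustrative choice, not from the thread:

```python
import numpy as np

def ml_decision(x, thetas, sigma=1.0):
    """Pick m0 = argmax_m p(x | H_m) for the model X = theta_m + N.

    With equal priors, the MAP rule reduces to maximum likelihood, and
    for Gaussian noise this means choosing the theta_m closest to x.
    """
    thetas = np.asarray(thetas, dtype=float)
    # log p(x | H_m) = -(x - theta_m)^2 / (2 sigma^2) + const
    log_lik = -(x - thetas) ** 2 / (2.0 * sigma ** 2)
    return int(np.argmax(log_lik))

# Example: two hypotheses theta_0 = 0, theta_1 = 5; observation near 5.
print(ml_decision(4.2, [0.0, 5.0]))  # -> 1
```

Because the priors are equal, they drop out of the maximization, which is why the rule reads $\operatorname{arg\,max}_m p(x|H_m)$ rather than $\operatorname{arg\,max}_m p(x|H_m)\,P(H_m)$.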
 
OhMyMarkov said:
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2,\dots ,H_N$, of equal prior probability, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \operatorname{arg\,max}_m p(x|H_m)$.

What if there are infinitely many hypotheses? (the case is countable but infinite)

In principle there is no difference; if you want to know more, you will need to be more specific.

CB
 
Hello CaptainBlack,

Let's start with two equally likely hypotheses (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$),

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there are infinitely many hypotheses, i.e. $\theta$ is a real variable? How do we estimate $\theta$?
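When $\theta$ ranges over the reals, the maximization over $m$ becomes a maximization over $\theta$, i.e. ordinary maximum-likelihood estimation. A minimal sketch, again assuming Gaussian noise (the grid and its range are illustrative assumptions, not from the thread):

```python
import numpy as np

def mle_theta(x, grid):
    """Grid approximation to argmax_theta p(x | theta) for X = theta + N,
    with N zero-mean Gaussian. Constants in the log-likelihood are dropped
    since they do not affect the argmax."""
    grid = np.asarray(grid, dtype=float)
    log_lik = -(x - grid) ** 2  # Gaussian log-likelihood up to constants
    return float(grid[np.argmax(log_lik)])

x = 1.37
grid = np.linspace(-10.0, 10.0, 20001)  # spacing 0.001
print(mle_theta(x, grid))  # close to 1.37
```

For this particular model the grid search is only a check: the continuous maximizer of $-(x-\theta)^2$ is $\hat{\theta}_{\mathrm{ML}} = x$ in closed form.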
 
OhMyMarkov said:
Hello CaptainBlack,

Let's start with two equally likely hypotheses (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$),

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there are infinitely many hypotheses, i.e. $\theta$ is a real variable? How do we estimate $\theta$?

In principle I see no difference between a finite and a countably infinite number of hypotheses, other than that you can no longer simply pick the required hypothesis out of a finite list of likelihoods.

But you cannot have a completely disordered collection of hypotheses: there must be some logic to their order, and so there will be some logic to the order of the likelihoods. It is that logic that will allow you to find the hypothesis with the maximum likelihood.

CB
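The point about exploiting the "logic" of the ordering can be illustrated with a hypothetical example not from the thread: suppose $H_m$ says $X \sim \mathrm{Poisson}(m)$ for $m = 1, 2, \dots$. The likelihoods are unimodal in $m$, so a search over the countably infinite list can stop as soon as they start decreasing:

```python
from math import exp, factorial

def ml_over_countable(x, lik, max_m=10**6):
    """Search H_1, H_2, ... for argmax_m p(x | H_m), exploiting the
    assumed structure that lik(x, m) is unimodal in m: stop as soon
    as the likelihoods start decreasing."""
    best_m, best_l = 1, lik(x, 1)
    for m in range(2, max_m + 1):
        l = lik(x, m)
        if l < best_l:
            break  # past the peak of a unimodal sequence
        best_m, best_l = m, l
    return best_m

# Hypothetical model: H_m says X ~ Poisson(m); likelihood is unimodal in m.
poisson = lambda x, m: exp(-m) * m ** x / factorial(x)
print(ml_over_countable(7, poisson))  # -> 7
```

Without some such structure (monotone tails, unimodality, a summable bound), no finite procedure can certify the maximizer over an infinite list, which is exactly the "logic to their order" being asked for above.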
 