I'm working my way through Pattern Recognition and Machine Learning, using http://www.cs.pitt.edu/~milos/courses/cs2750/ as a guide.
We have to prove that for a binomial random variable x with a beta prior distribution over [itex]\mu[/itex], the posterior mean of [itex]\mu[/itex] lies between the prior mean and the maximum likelihood estimate of [itex]\mu[/itex]:
[itex]\underbrace{\frac{a}{a+b}}_{\text{prior mean}} < \underbrace{\frac{m+a}{m+a+l+b}}_{\text{posterior mean}} < \underbrace{\frac{m}{m+l}}_{\text{ML estimate of }\mu}[/itex] (eq. 1)
A hint in the book states that this is equivalent to solving:
[itex]
\frac{m+a}{m+a+l+b} = \lambda \cdot \frac{a}{a+b} + (1-\lambda) \cdot \frac{m}{m+l}, \quad 0 \le \lambda \le 1
[/itex] (eq. 2)
Here m and l are the numbers of observed values where x = 1 and x = 0, respectively, and a and b specify our prior belief via the beta distribution.
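(For completeness, and as far as I understand it, the posterior mean in eq. 1 comes from beta-binomial conjugacy: a [itex]\text{Beta}(a,b)[/itex] prior updated with m ones and l zeros gives
[itex]
p(\mu \mid m, l) = \text{Beta}(\mu \mid m+a,\; l+b), \qquad \mathbb{E}[\mu \mid m, l] = \frac{m+a}{m+a+l+b},
[/itex]
which is the middle term above.)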
My question is about the hint: how do I get from eq. 1 to eq. 2? Is it always "legal" to solve eq. 2 instead of eq. 1? (I'm not looking for a solution to the original problem. :))
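For reference, here is my current reading of the hint (my own guess, not something stated in the book): a number lies strictly between two others exactly when it is a convex combination of them with a coefficient strictly between 0 and 1. That is, for [itex]p < q[/itex],
[itex]
p < c < q \iff c = \lambda p + (1-\lambda) q \text{ with } \lambda = \frac{q-c}{q-p} \in (0,1),
[/itex]
since [itex]p < c < q[/itex] gives [itex]0 < q - c < q - p[/itex]. If that reading is right, then solving eq. 2 for [itex]\lambda[/itex] and checking where it lands would be the same as proving eq. 1.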