Homework Statement
Let [itex]Y_n[/itex] be the nth order statistic of a random sample of size n from a distribution with pdf [itex]f(x|\theta)=1/\theta[/itex] from [itex]0[/itex] to [itex]\theta[/itex], zero elsewhere. Take the loss function to be [itex]L(\theta, \delta(y))=[\theta-\delta(y_n)]^2[/itex]. Let [itex]\theta[/itex] be an observed value of the random variable [itex]\Theta[/itex], which has the prior pdf [itex]h(\theta)=\frac{\beta \alpha^\beta} {\theta^{\beta + 1}}, \alpha < \theta < \infty[/itex], zero elsewhere, with [itex]\alpha > 0, \beta > 0[/itex]. Find the Bayes solution [itex]\delta(y_n)[/itex] for a point estimate of [itex]\theta[/itex].
The attempt at a solution
I've found that the conditional pdf of [itex]Y_n[/itex] given [itex]\theta[/itex] is:
[tex]\frac{n y_n^{n-1}}{\theta^n}[/tex]
which allows us to find the posterior [itex]k(\theta|y_n)[/itex] by finding what it's proportional to:
[tex]k(\theta|y_n) \propto \frac{n y_n^{n-1}}{\theta^n}\frac{\beta \alpha^\beta}{\theta^{\beta + 1}}[/tex]
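As a sanity check on that conditional pdf (a quick script of my own, not part of the assignment): the maximum of n draws from Uniform(0, θ) should have CDF [itex](y/\theta)^n[/itex], which is exactly what differentiating to [itex]n y^{n-1}/\theta^n[/itex] requires. A Monte Carlo comparison with arbitrary values of n and θ:

```python
# Monte Carlo check (my own sketch): the max of n Uniform(0, theta) draws
# should have CDF (y/theta)**n, i.e. pdf n*y**(n-1)/theta**n.
import random

random.seed(0)
n, theta, trials = 5, 2.0, 100_000  # arbitrary illustrative values
maxima = [max(random.uniform(0, theta) for _ in range(n)) for _ in range(trials)]

for y in (0.5, 1.0, 1.5):
    empirical = sum(m <= y for m in maxima) / trials  # empirical CDF at y
    exact = (y / theta) ** n                          # claimed closed form
    assert abs(empirical - exact) < 0.01, (y, empirical, exact)
```

The empirical CDF agrees with [itex](y/\theta)^n[/itex] at every test point, so I'm fairly confident in the conditional pdf itself.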
The step I'm unsure about is that apparently we can just drop every factor not involving [itex]\theta[/itex], then find a normalizing constant that makes the distribution integrate to 1 over its support, and call it good. Keeping only the [itex]\theta[/itex] factors, I end up with:
[tex]\frac{1}{\theta^{n+\beta+1}}[/tex]
When I integrate from [itex]\alpha[/itex] to [itex]\infty[/itex] and solve for the normalizing constant, I get [itex](n+\beta)\alpha^{n+\beta}[/itex], so for my posterior I get:
[tex](n+\beta)\alpha^{n+\beta}\frac{1}{\theta^{n+\beta+1}}[/tex]
Which doesn't even have a [itex]y_n[/itex] term in it. Weird.
When I compute the expected value of [itex]\theta[/itex] under this distribution, I get 1, which isn't a very compelling point estimate. So I think I lost a [itex]y_n[/itex] somewhere, but I can't see where. Any thoughts? Thanks in advance.
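For what it's worth, here's my numerical double-check of the normalizing constant for a kernel [itex]\theta^{-(n+\beta+1)}[/itex] on [itex](\alpha, \infty)[/itex] (my own sketch; the values of n, β, α are arbitrary). It confirms the constant [itex](n+\beta)\alpha^{n+\beta}[/itex], so whatever I'm missing isn't in the integration step:

```python
# Numerical double-check (my own sketch; n, beta, alpha are arbitrary):
# the tail integral of theta**-(n+beta+1) over (alpha, oo) is
# alpha**-(n+beta)/(n+beta), so the normalizing constant should be
# (n+beta)*alpha**(n+beta).
n, beta, alpha = 5, 2.0, 1.5

# midpoint-rule integral of theta**-(n+beta+1) from alpha to a large cutoff
# (the tail beyond the cutoff is negligible for these parameter values)
cutoff, steps = 100.0, 1_000_000
h = (cutoff - alpha) / steps
integral = h * sum((alpha + (i + 0.5) * h) ** -(n + beta + 1) for i in range(steps))

closed_form = alpha ** -(n + beta) / (n + beta)
assert abs(integral - closed_form) < 1e-6
```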