- #1
tronter
Let [tex] \bold{X} [/tex] be a discrete random variable whose set of possible values is [tex] \bold{x}_j, \ j \geq 1 [/tex]. Let the probability mass function of [tex] \bold{X} [/tex] be given by [tex] P \{\bold{X} = \bold{x}_j \}, \ j \geq 1 [/tex], and suppose we are interested in calculating [tex] \theta = E[h(\bold{X})] = \sum_{j=1}^{\infty} h(\bold{x}_j) P \{\bold{X} = \bold{x}_j \} [/tex].
In some cases, why is a Markov chain approach better for estimating [tex] \theta [/tex] than ordinary Monte-Carlo simulation? And if we only wanted to calculate [tex] E[\bold{X}] [/tex], there would be no need to use simulation at all, right?
Also, is it true that [itex] \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} h(\bold{X}_i) = \theta [/itex], where the [itex] \bold{X}_i [/itex] are i.i.d. with the same pmf as [itex] \bold{X} [/itex]?
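To make the question concrete, here is a minimal Python sketch of the plain Monte-Carlo estimate I have in mind (the pmf and the function h below are just made-up examples, not anything from the text):

[code]
import random

# Made-up example: X takes values 1..5 with these probabilities,
# and h(x) = x**2, so theta = E[h(X)] = sum_j h(x_j) * P{X = x_j}.
values = [1, 2, 3, 4, 5]
probs = [0.10, 0.20, 0.30, 0.25, 0.15]

def h(x):
    return x ** 2

# Exact value from the definition of theta.
exact_theta = sum(h(x) * p for x, p in zip(values, probs))

# Plain Monte-Carlo: draw n i.i.d. copies of X from its pmf and
# average h over the sample; by the law of large numbers
# (1/n) * sum_i h(X_i) -> theta as n -> infinity.
n = 100_000
sample = random.choices(values, weights=probs, k=n)
mc_estimate = sum(h(x) for x in sample) / n

print("exact theta      :", exact_theta)
print("Monte-Carlo est. :", mc_estimate)
[/code]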