Exercise on law of total expectation

In summary, the "law of total expectation" states that the expected value of a random variable can be computed by first conditioning on another variable and then averaging the resulting conditional expectations, i.e. E(Y) = E(E(Y | X)). This principle is often applied in probability theory to simplify complex problems by breaking them into more manageable parts. Exercises typically involve calculating expectations by conditioning on different scenarios or events, illustrating the law's practical applications in contexts such as decision-making and risk assessment.
  • #1
psie
Homework Statement
[First, see the relevant equations.] The object of this exercise is to show that if we do not assume that ##E|Y|<\infty## in theorem 2.1, then the conclusion does not necessarily hold. Namely, suppose that ##X\in\Gamma(1/2,2)(=\chi^2(1))## and that $$f_{Y\mid X=x}\left(y\right)=\frac{1}{\sqrt{2\pi }}x^{\frac{1}{2}}e^{-\frac{1}{2}xy^2},\quad -\infty<y<\infty.$$

a) Compute ##E(Y\mid X=x), E(Y\mid X)## and, finally ##E(E(Y\mid X))##.
b) Show that ##Y\in C(0,1)##, i.e. is standard Cauchy.
c) What about ##E(Y)##?
Relevant Equations
Theorem 2.1: Suppose that ##E|Y|<\infty##. Then ##E(E(Y\mid X))=E(Y)##.
I feel like I'm doing something wrong. I have computed $$E(Y\mid X=x)=\int_\mathbb{R}y f_{Y\mid X=x}(y)\, dy,$$with pen and paper, and I get the same as WolframAlpha, namely ##0##. Can this be right? If this is indeed true, then is ##E(Y\mid X)=E(E(Y\mid X))=0## too?

How do I go about showing ##Y\in C(0,1)##?
 
  • #2
The distribution of ##Y## conditioned on ##X = x## is a normal distribution centered at zero (with variance ##1/x##), so yes, it will have zero expectation value.

Regarding (b), you have ##f_{Y|X=x}## and you are given ##f_X## (the ##\Gamma(1/2,2)=\chi^2(1)## density). Form the joint density ##f_{X,Y}(x,y)=f_{Y\mid X=x}(y)f_X(x)## and show that marginalising it, i.e. integrating out ##x##, gives the density of C(0,1).
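As a sanity check (not a proof), one can verify this numerically: sample ##X\sim\chi^2(1)## as the square of a standard normal, then draw ##Y\mid X=x\sim N(0,1/x)##, and compare against a known standard Cauchy probability. For C(0,1), ##P(|Y|<1)=2\arctan(1)/\pi=0.5## exactly. The sample size below is an arbitrary choice; this is a minimal Monte Carlo sketch.

```python
import math
import random

random.seed(0)

# Sample X ~ chi^2(1) as Z^2 with Z standard normal,
# then Y | X = x ~ N(0, 1/x), as in the exercise.
n = 200_000
ys = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    x = z * z                                   # X ~ chi^2(1) = Gamma(1/2, 2)
    y = random.gauss(0.0, 1.0 / math.sqrt(x))   # conditional variance 1/x
    ys.append(y)

# For a standard Cauchy, P(|Y| < 1) = 2*arctan(1)/pi = 0.5 exactly.
frac = sum(abs(y) < 1.0 for y in ys) / n
print(frac)
```

The empirical fraction should land close to 0.5, consistent with ##Y## being standard Cauchy; note that (c) then follows because a Cauchy variable has no finite mean, even though every conditional expectation above is zero.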
 

FAQ: Exercise on law of total expectation

What is the law of total expectation?

The law of total expectation states that the expected value of a random variable can be calculated by taking the expected value of that variable conditioned on another variable, and then averaging these conditional expectations. Mathematically, it is expressed as E[X] = E[E[X | Y]], where X is the random variable of interest and Y is another random variable.

How do you apply the law of total expectation in practice?

To apply the law of total expectation, first identify the random variable X whose expectation you want to compute and another random variable Y that can provide useful conditioning information. Calculate the conditional expectation E[X | Y] for each possible value of Y, then take the expectation of these conditional expectations weighted by the probabilities of each value of Y.

Can you give an example of the law of total expectation?

Sure! Suppose we have a random variable X representing the score of a student on a test, and another random variable Y representing whether the student studied (Y=1) or did not study (Y=0). If we know the conditional expectations E[X | Y=1] = 80 and E[X | Y=0] = 50, and the probabilities P(Y=1) = 0.6 and P(Y=0) = 0.4, we can calculate E[X] as follows: E[X] = E[E[X | Y]] = P(Y=1) * E[X | Y=1] + P(Y=0) * E[X | Y=0] = 0.6 * 80 + 0.4 * 50 = 68.
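The arithmetic in this example can be written as a short sketch, using the numbers given above (the values 80, 50 and probabilities 0.6, 0.4 are the hypothetical ones from the FAQ answer):

```python
# Conditional expectations E[X | Y=y] and probabilities P(Y=y)
# from the study/no-study example above.
cond_exp = {1: 80.0, 0: 50.0}
probs    = {1: 0.6,  0: 0.4}

# Law of total expectation: E[X] = sum_y E[X | Y=y] * P(Y=y)
total_exp = sum(cond_exp[y] * probs[y] for y in probs)
print(total_exp)  # 68.0
```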

What are common mistakes when using the law of total expectation?

Common mistakes include failing to correctly identify the conditioning variable Y, miscalculating the conditional expectations E[X | Y], or incorrectly applying the probabilities associated with Y. It's also important to ensure that the probabilities sum to 1, as this is a requirement for proper conditioning.

How does the law of total expectation relate to other concepts in probability theory?

The law of total expectation is closely related to the law of total probability and the concept of conditional expectation. It provides a framework for breaking down complex expectations into simpler, manageable parts. Additionally, it is often used in conjunction with other statistical methods, such as Bayesian inference and Markov chains, to derive more complex results in probability and statistics.
