How Is Conditional Expectation Derived in Normal Distributions?

  • Thread starter: ares_97
  • Tags: Expectation
In summary, the author of the article states that the conditional expectation of v_{i} given y_{i} is E(v_{i}|y_{i}) = (t*y_{i})/(s+t). This can be derived with a least squares argument, and it makes sense because t/(s+t) is the share of the total variance in y_{i} that is attributable to v_{i}.
  • #1
ares_97
Help with conditional expectation

Hi all,
I read an article a couple of days ago, but there are some equations in it that I could not understand.

Let's assume that y = u + v,

where u is normally distributed with mean = 0 and variance = s -> u ~ N (0, s)

and v is normally distributed with mean = 0 and variance = t -> v ~ N (0, t)

The author then writes:
E(v_{i}|y_{i}) = (t*y_{i})/(s+t)

I tried to find how he derived this conditional expectation.

E(v_{i}|y_{i}) = integral of x*f(x|y_{i}) dx, where f(x|y_{i}) is the conditional density of v_{i} given y_{i}.

Then I calculated f(v_{i}|y_{i}) using Bayes' rule.

However, I could not get the same answer as the one given by the author in that article, E(v_{i}|y_{i}) = (t*y_{i})/(s+t).
Could someone please help me with this, or show me how to get the same result as the author?

Thank you..:)

ps : 1. v_{i} means v subscript i.
2. I tried to write the equations with the math symbols (using the LaTeX reference), but the results did not look good. That's why I used the current style in this question. (I am very sorry for this.)
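
For reference, here is one way to carry the Bayes-rule route through (a sketch, assuming u and v are independent, so that y|v ~ N(v, s)):

Pr(v|y) is proportional to Pr(y|v)*Pr(v), i.e. to exp(-(y - v)^2/(2s)) * exp(-v^2/(2t)).

Completing the square in v in the exponent,

(y - v)^2/s + v^2/t = ((s + t)/(st)) * (v - t*y/(s + t))^2 + (terms not involving v),

so v|y ~ N(t*y/(s + t), st/(s + t)), and hence E(v|y) = t*y/(s + t).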
 
  • #2
The most direct answer I can come up with is the following.

E[y|y] = E[v|y] + E[u|y]

Clearly E[y|y] = y so y = E[v|y] + E[u|y].

I hypothesize that each of E[v|y] and E[u|y] is a linear function of y: E[v|y] = ay and E[u|y] = (1-a)y for some a in [0,1].

The question now becomes, why should anyone expect a = t/(s+t)?

If I were estimating a from a sample using least squares, t/(s+t) is exactly the formula I would have used. To see this, let e be the least squares estimator of a in the following equation: v = ay + bu. Then, e = Cov[v, y]/Var[y]. I know Var[y] = s+t. But I need to derive Cov[v, y].

To derive it, write out y as v+u and use the covariance formula for sums of random variables (which is very similar to an inner product): Cov[v, v+u] = Cov[v,v] + Cov[v,u] = Var[v] + 0 = t.

Therefore e = t/(s+t). Since least squares is unbiased, E[e] = a. When t and s are assumed known (as in the problem), a = E[e|s, t] = t/(s+t).

Next, I can guess (and verify) that b must be zero. So I can conclude that E[v|y, s, t] = E[e|s, t]y = t/(s+t) y.

The final question is, why does this make sense? I have two answers:

1. The least squares estimator is the most informationally efficient estimator within the class of linear unbiased estimators. Any other estimator would have wasted some of the information used by the least squares estimator.

2. t/(s+t) is the part of total variance in y attributable to v. Given any y, the expected value of v is the part of y that v is (on average) responsible for, which is t/(s+t) times y.
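
A quick simulation (a sketch, assuming NumPy; the variances 2 and 3 are just illustrative) confirms that the least squares slope of v on y comes out close to t/(s+t):

Code:
import numpy as np

rng = np.random.default_rng(0)
s, t, n = 2.0, 3.0, 1_000_000        # illustrative variances and sample size

u = rng.normal(0.0, np.sqrt(s), n)   # u ~ N(0, s)
v = rng.normal(0.0, np.sqrt(t), n)   # v ~ N(0, t), independent of u
y = u + v

slope = np.cov(v, y)[0, 1] / np.var(y)   # least squares slope of v on y
print(slope)                             # close to 0.6
print(t / (s + t))                       # 0.6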
 
  • #3
I think the easiest method is to use the fact that the covariance of y and sv-tu is zero, which implies that they are independent (by a general result for joint normal rvs). Write v=(sv-tu)/(s+t) + ty/(s+t) and the result should become clear.
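
Spelling that out (a sketch, using Var[u] = s, Var[v] = t and Cov[u, v] = 0):

Cov(y, sv - tu) = Cov(u + v, sv - tu) = s*Cov(u, v) - t*Var(u) + s*Var(v) - t*Cov(v, u) = -ts + st = 0.

Since y and sv - tu are jointly normal with zero covariance, they are independent. Taking conditional expectations of v = (sv - tu)/(s + t) + t*y/(s + t) given y, the first term has conditional expectation E[(sv - tu)/(s + t)] = 0, so E[v|y] = t*y/(s + t).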
 

FAQ: How Is Conditional Expectation Derived in Normal Distributions?

What is the concept of conditional expectation?

Conditional expectation is a statistical concept that refers to the expected value of a random variable given certain conditions or information. It is used to predict the outcome of a variable based on other known variables.

How is conditional expectation calculated?

The basic formula is E[X|Y = y] = ∫ x f(x|y) dx, where f(x|y) is the conditional probability density function of X given Y = y, and the integral runs over the possible values of X.
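
As a concrete illustration of that integral (a sketch in Python using the setup from this thread; the values of s, t, and y are just illustrative), the conditional density can be evaluated on a grid via Bayes' rule and the integral approximated numerically:

Code:
import numpy as np

s, t, y = 2.0, 3.0, 1.5                       # illustrative variances and observed y

def npdf(x, var):
    # density of N(0, var) at x
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x = np.linspace(-20.0, 20.0, 400001)          # grid of candidate values for v
w = npdf(y - x, s) * npdf(x, t)               # proportional to f(x|y) by Bayes' rule
print((x * w).sum() / w.sum())                # numerical E[v|y], about 0.9
print(t * y / (s + t))                        # closed-form value: 0.9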

What is the difference between conditional expectation and unconditional expectation?

Unconditional expectation is the expected value of a random variable without any conditions or additional information. Conditional expectation takes into account the conditions or information about other variables to predict the outcome of a specific variable.

How is conditional expectation used in real life?

Conditional expectation is used in various fields such as finance, economics, and engineering to make predictions or forecasts based on available information. It is also commonly used in data analysis and machine learning to make accurate predictions.

What are some common applications of conditional expectation in scientific research?

In scientific research, conditional expectation is often used to analyze data and make predictions about future outcomes. It is also used in statistical modeling to estimate parameters and evaluate the effectiveness of different variables in a given system.
