logarithmic
I've been looking at the measure theoretic definition of a conditional expectation and it doesn't make too much sense to me.
Consider the definition given here: https://en.wikipedia.org/wiki/Conditional_expectation#Formal_definition
It says that for a probability space [itex](\Omega,\mathcal{A},P)[/itex] and a sub-sigma-field [itex]\mathcal{B}\subset\mathcal{A}[/itex], a random variable [itex]Y=E(X|\mathcal{B})[/itex] is the conditional expectation of [itex]X[/itex] given [itex]\mathcal{B}[/itex] if it satisfies
[itex]\int_{B}YdP = \int_{B} X dP[/itex] for all [itex]B\in\mathcal{B}[/itex] (*).
But clearly setting [itex]Y=X[/itex] satisfies (*). And it goes on to say that conditional expectations are almost surely unique. So this means that [itex]E(X|\mathcal{B})=Y=X[/itex] almost surely?
If we consider the following example: [itex]\Omega=\{1,2,3\}[/itex], [itex]\mathcal{A}[/itex] is the power set of [itex]\Omega[/itex], [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex], [itex]X(\omega)=\omega[/itex], and [itex]P(1) = .25, P(2)=.65, P(3)=.1[/itex], then writing out (*) for all the elements of [itex]\mathcal{B}[/itex] gives [itex]E(X|\mathcal{B})=X[/itex]. But clearly this isn't correct: given {3}, the conditional expectation should be 3, and given {1,2} it should be [itex]1\cdot\frac{.25}{.9} + 2\cdot\frac{.65}{.9}[/itex].
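To make the example above concrete, here is a quick numerical sketch (assuming the probabilities and the partition [itex]\{\{1,2\},\{3\}\}[/itex] stated above) checking that both [itex]Y=X[/itex] and the partition-averaged candidate satisfy (*) on every element of [itex]\mathcal{B}[/itex] — so (*) alone does not distinguish them in this example:

```python
# Example from the post: Omega = {1,2,3}, X(w) = w,
# P(1)=.25, P(2)=.65, P(3)=.1,
# B generated by the partition {1,2}, {3}.

P = {1: 0.25, 2: 0.65, 3: 0.10}
X = {w: w for w in P}

def integral(f, B):
    """Integral of f over the event B with respect to P."""
    return sum(f[w] * P[w] for w in B)

# Candidate 1: Y = X itself.
Y1 = dict(X)

# Candidate 2: constant on each cell of the partition,
# equal to the P-weighted average of X over that cell.
m12 = integral(X, {1, 2}) / (P[1] + P[2])  # = 1*(.25/.9) + 2*(.65/.9)
Y2 = {1: m12, 2: m12, 3: X[3]}

# Both candidates satisfy (*) on every element of B.
for B in [{1, 2}, {3}, set(), {1, 2, 3}]:
    assert abs(integral(Y1, B) - integral(X, B)) < 1e-12
    assert abs(integral(Y2, B) - integral(X, B)) < 1e-12
```

Note that the two candidates differ on the set {1,2}, which has positive probability, so they are not almost surely equal.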
It's usually said that sigma fields model information. I also don't see what sort of information [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex] gives.
Can someone explain where my understanding is wrong, and how this relates to the more intuitive definition of the conditional expectation of a random variable given another, via the conditional density:
[itex]E(X|Y=y)=\int_{\mathbb{R}}x\,f_{X|Y}(x|y)\,dx[/itex].