I'm reading through a proof in a book on probability (the full theorem statement is at the bottom of this post), and I'm having trouble following one line of it. The line reads as follows:
[tex] \int_{0}^{\infty} \int_{x:g(x)>y} f(x) dx dy = \int_{x:g(x)>0} \int_{0}^{g(x)} dy f(x) dx [/tex]
Here g(x) is a nonnegative function and f(x) is the probability density function of a continuous random variable X. All that's happened in this line is that the order of integration was switched; the part I'm having trouble with is how the limits of integration on the left-hand side become the new limits on the right-hand side. The only way I know to find the new limits is to sketch the region of integration in the x-y plane and read off the endpoints, but I keep coming up with something different.
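To show where I'm getting stuck, here is my attempt at describing the region (this is just my own sketch, so it may be exactly where I'm going wrong). As I read the left-hand side, f(x) is being integrated over the set of pairs (x, y) with y > 0 and g(x) > y, and I'd expect
[tex] \{(x,y) : y > 0,\ g(x) > y\} = \{(x,y) : g(x) > 0,\ 0 < y < g(x)\} [/tex]
so that fixing an x with g(x) > 0 and letting y run from 0 up to g(x) recovers the right-hand side. Is that description of the region correct?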
In case anyone is wondering, the proof comes from the book "A First Course in Probability" by Sheldon Ross (5th edition), and it establishes the following fact about the expectation of a function of a continuous random variable X with probability density function f(x):
[tex] E[g(X)] = \int_{-\infty}^{\infty}g(x)f(x)dx [/tex]
The book only proves the result in the case where g(x) is nonnegative.
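(I assume the general case would then follow by splitting g into its positive and negative parts and applying the nonnegative result to each, i.e.
[tex] g(x) = g^{+}(x) - g^{-}(x), \qquad g^{+}(x) = \max(g(x),0), \quad g^{-}(x) = \max(-g(x),0), [/tex]
but the book doesn't spell that step out.)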
Thanks much in advance!
PS - Apologies if this is in the wrong forum, I felt this was more of a calculus question than a probability question, but please feel free to move it if you think it's better off somewhere else.