Solving Probability Integrals with Monotone Convergence Theorem

In summary, the conversation discusses applying the monotone convergence theorem to show that the integral of X over An goes to zero when P(An) approaches zero. It also considers the example of a random variable with a Cauchy distribution, for which the claim fails without a finite-expectation assumption, and ends with a discussion of the exact definition of the integral in this context.
  • #1
shoeburg
I'm having trouble working out a few details from my probability book. It says that if P(An) goes to zero, then the integral of X over An goes to zero as well. My book says it's because of the monotone convergence theorem, but this confuses me, because I thought that theorem is about a sequence Xn converging to X. Here is my attempt anyway:

We can write the integration as
[tex] \lim_{n \to \infty} E[X \, I(A_n)] = E\Big[\lim_{n \to \infty} X \, I(A_n)\Big] = E[X \cdot 0] = E[0] = 0. [/tex]
 
  • #2
It would be helpful if you defined the symbols. An, X, Xn, etc.?
 
  • #3
Let An be a sequence of sets indexed by n, with P(An) -> 0 as n goes to infinity. Let X be a random variable. Prove that the limit as n goes to infinity of the integral of X over An, with respect to the probability measure P, equals zero. I thought that the monotone convergence theorem applies when a sequence of random variables Xn converges to X, letting you interchange the limit and the integral (expectation). How does it apply here?

Thanks.
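For what it's worth, here is a numerical sketch of the claim for an integrable X. The exponential distribution is just an assumed example (not from the book); with X ~ Exponential(1) and An = {X > n}, both the tail probability and the tail integral have closed forms:

```python
import numpy as np

# Assumed example: X ~ Exponential(1), A_n = {X > n}.
# Then P(A_n) = e^{-n} -> 0, and the integral of X over A_n is
# int_n^inf x e^{-x} dx = (n + 1) e^{-n}  (by parts), which also -> 0.

def tail_prob(n):
    return np.exp(-n)            # P(A_n) = P(X > n)

def tail_integral(n):
    return (n + 1) * np.exp(-n)  # E[X; A_n] = int_{A_n} X dP

for n in [1, 5, 10, 20]:
    print(n, tail_prob(n), tail_integral(n))
```

Both columns shrink to zero together, which is the behavior the book asserts (at least for this integrable example).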
 
  • #4
This may be totally wrong, but if the probability of any ##A_i## is greater than 0, then surely the integral is greater than 0, being the probability of a superset? I suppose this must be wrong but it is the most obvious way to interpret the question.
 
  • #5
I do agree that the integral of anything over a set of measure zero is zero. However, if X takes the value 0 over a set B with P(B) > 0, then the expectation (integral) of X over that set B is also zero. The idea that an integral over a set with probability zero is zero is certainly intuitive; it is the details of the proof that I am wondering about, namely, when you can apply the limit and let n go to infinity. Surely we cannot just apply the limit immediately, but must calculate the integral somehow, perhaps as I tried above.
 
  • #6
The "theorem" is false. Example:

Let X be a random variable with a Cauchy distribution. An = set of points where X > n. P(An) -> 0. However the integral of X over An is infinite for all n.
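A numerical sketch of this counterexample (the closed forms below are standard for the Cauchy density, not from the thread): P(An) shrinks, yet the tail integral of x p(x) grows without bound as the upper cutoff M increases.

```python
import numpy as np

# X ~ standard Cauchy, A_n = {X > n}.
# P(A_n) = 1/2 - arctan(n)/pi -> 0, but
# int_n^M x / (pi (1 + x^2)) dx = ln((1+M^2)/(1+n^2)) / (2 pi)
# grows without bound as the cutoff M -> infinity.

def tail_prob(n):
    return 0.5 - np.arctan(n) / np.pi

def truncated_tail_integral(n, M):
    return np.log((1 + M**2) / (1 + n**2)) / (2 * np.pi)

n = 10
print(tail_prob(n))                           # small
for M in [1e2, 1e4, 1e8]:
    print(M, truncated_tail_integral(n, M))   # keeps growing with M
```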
 
  • #7
You are right. Supposing I said, assume E(X) is finite. Does that take care of this counterexample and issue?
 
  • #8
When you say the integral of X over An, do you mean
[tex] \int_{A_n} x p(x) dx [/tex]
or
[tex] \int_{A_n} p(x) dx [/tex]

where p(x) is the distribution?
 
  • #9
The first one, I believe. My book usually writes it as X dP, but what you have written in the first one is equivalent, correct? And then the limit as n -> infinity is taken outside the integral.




mathman said:
An = set of points where X > n. P(An) -> 0.
How do you know P({X>n}) --> 0? Is this true for all random variables with a finite expectation?
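For integrable X the answer is yes: Markov's inequality gives P(|X| > n) <= E|X| / n -> 0. A quick Monte Carlo sketch, using an exponential sample purely as an assumed illustration:

```python
import numpy as np

# Markov's inequality: P(|X| > n) <= E|X| / n, so the tail
# probability vanishes whenever E|X| is finite.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # nonnegative, E|X| = 1

for n in [1, 2, 5, 10]:
    empirical = np.mean(x > n)   # Monte Carlo estimate of P(X > n)
    bound = np.mean(x) / n       # Markov bound E|X| / n
    print(n, empirical, bound)
```

The empirical tail probabilities sit below the Markov bound and both go to zero as n grows.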
 

FAQ: Solving Probability Integrals with Monotone Convergence Theorem

What is the Monotone Convergence Theorem?

The Monotone Convergence Theorem states that if a sequence of nonnegative measurable functions increases pointwise to a limit function, then the integrals of the sequence converge to the integral of the limit (which may be infinite). In simpler terms, for an increasing sequence of nonnegative functions, taking the limit and integrating can be interchanged.
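The theorem can be illustrated with a concrete increasing sequence; the truncations f_n(x) = min(x^{-1/2}, n) on (0, 1] are a standard textbook example (not specific to this thread), increasing to f(x) = x^{-1/2} with integral 2:

```python
# f_n(x) = min(x^{-1/2}, n) on (0, 1] increases pointwise to x^{-1/2}.
# In closed form, int_0^1 f_n dx = n * (1/n^2) + int_{1/n^2}^1 x^{-1/2} dx
#                                = 1/n + (2 - 2/n) = 2 - 1/n,
# which increases to int_0^1 x^{-1/2} dx = 2, as MCT guarantees.

def integral_fn(n):
    return 2.0 - 1.0 / n

for n in [1, 10, 100, 1000]:
    print(n, integral_fn(n))   # approaches 2 from below
```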

How is the Monotone Convergence Theorem used in solving probability integrals?

The Monotone Convergence Theorem is used in solving probability integrals to justify interchanging a limit and an integral (or expectation). For an increasing sequence of nonnegative functions, it lets us compute the limit of the integrals as the integral of the pointwise limit, which is often much easier to evaluate.

Can the Monotone Convergence Theorem be applied to any sequence of functions?

No, the Monotone Convergence Theorem applies only to sequences of functions that are nonnegative and monotone increasing. If a sequence does not meet these criteria, the theorem cannot be used directly, although related results such as the dominated convergence theorem may apply instead.

What are the advantages of using the Monotone Convergence Theorem in solving probability integrals?

Using the Monotone Convergence Theorem can make solving probability integrals more efficient and straightforward. Instead of computing each integral in a sequence and then taking the limit, we can integrate the limit function directly, since the theorem guarantees that the limit of the integrals equals the integral of the limit.

Are there any limitations to using the Monotone Convergence Theorem in solving probability integrals?

One limitation of the Monotone Convergence Theorem is that it applies only to sequences of functions that are nonnegative and monotone increasing. When these hypotheses fail, alternative tools such as the dominated convergence theorem or Fatou's lemma may be needed.
