On the pdf of a sum of two r.v.s and differentiating under the integral

In summary, the pdf of the sum of two random variables (r.v.s) with a joint density is obtained by integrating that joint density along the line ##x_1+x_2=u##; when the variables are independent, this reduces to the convolution of their marginal pdfs. The derivation differentiates the cdf under the integral sign, and the thread below discusses why that step is justified.
  • #1
psie
TL;DR Summary
I'm stuck at a derivation in my book on the pdf of the sum of two continuous random variables ##Y=X_1+X_2##. The formula I'm after is $$f_Y(u)=\int_{\mathbb R} f(x_1,u-x_1)\,dx_1=\int_{\mathbb R} f(u-x_2,x_2)\,dx_2,$$ where ##f## is the joint density of ##(X_1,X_2)##.
I'm reading in my book about the pdf of the sum of two continuous random variables ##X_1,X_2##. First, I'm a bit confused by the fact that the sum of two continuous random variables need not be continuous (e.g. if ##X_2=-X_1##, then ##Y=X_1+X_2=0## is degenerate). Does this fact invalidate the derivation below, or is there some key assumption I'm missing that makes it valid?

Regarding the derivation in my book, I will omit some details, but assume ##X_1,X_2## are both real-valued and ##P## is the probability measure on some probability space. Recall that ##\int 1_A \, dP=P(A)## and that, for a measurable function ##g## with ##E[|g(X)|]<\infty##, $$E[g(X)]=\int_\Omega g(X(\omega))\, P(d\omega)=\int_\mathbb{R} g(x) \, P_X(dx)=\int_{\mathbb R}g(x) f(x)\,dx,$$ where ##P_X## is the distribution of ##X## (the pushforward measure of ##P## under ##X##) and the last equality holds when ##X## has density ##f##. The distribution function of ##Y## is then simply $$\begin{align}F_{Y}(u)&=P(X_1+X_2\leq u) \nonumber \\ &=E[1_{X_1+X_2\leq u} ] \nonumber \\ &=\int_{\mathbb R^2}1_{x_1+x_2\leq u}f(x_1,x_2)\,dx_1dx_2 \nonumber \\ &=\int_{\mathbb R}\int_{\mathbb R} 1_{x_1\leq u-x_2}f(x_1,x_2)\, dx_1dx_2 \nonumber \\ &=\int_{-\infty}^\infty\int_{-\infty}^{u-x_2}f(x_1,x_2)\,dx_1dx_2. \nonumber \end{align}$$ Here we used the definition of expectation and the Fubini–Tonelli theorem. Then the author says: we differentiate with respect to ##u##, move ##\frac{d}{du}## inside the outer integral, and use the fundamental theorem of calculus. However, not much motivation is given for this maneuver. Why can we do this? I'm familiar with the Leibniz rule, but I'm unsure whether it applies here.
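As a sanity check of the double-integral formula for ##F_Y(u)##, here is a minimal numerical sketch (my own, not from the book), assuming ##X_1,X_2## are independent standard normals so that ##Y\sim N(0,2)## and ##F_Y(u)=\Phi(u/\sqrt 2)## is known in closed form:

```python
# Sanity check of F_Y(u) = ∫∫_{x1 <= u - x2} f(x1, x2) dx1 dx2 for two
# independent standard normals, where the exact answer is the N(0, 2) cdf.
import numpy as np
from scipy import integrate, stats

def joint_density(x1, x2):
    """Joint pdf of two independent standard normal variables."""
    return stats.norm.pdf(x1) * stats.norm.pdf(x2)

u = 0.7

# Outer integral over x2 in (-inf, inf), inner over x1 in (-inf, u - x2).
# dblquad calls joint_density(x1, x2) with x1 as the inner variable.
numeric, _ = integrate.dblquad(
    joint_density,
    -np.inf, np.inf,        # x2 limits
    lambda x2: -np.inf,     # lower x1 limit
    lambda x2: u - x2,      # upper x1 limit
)

exact = stats.norm.cdf(u, scale=np.sqrt(2))  # Y = X1 + X2 ~ N(0, 2)
print(f"numeric: {numeric:.6f}, exact: {exact:.6f}")  # agree to quad tolerance
```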
 
  • #2
I think I found an answer to my question. In the last integral, make the change of variables ##z=x_1+x_2## (for fixed ##x_2##, so ##dz=dx_1##) and rename ##x_2=x## (for aesthetics), giving $$F_Y(u)=\int_{-\infty}^\infty\int_{-\infty}^{u}f(z-x,x)\,dzdx.$$ Since ##f\geq 0##, Tonelli's theorem lets us change the order of integration, and then the fundamental theorem of calculus applies: $$f_Y(u)=\frac{d}{du} F_Y(u)= \frac{d}{du}\int_{-\infty}^u\int_{-\infty}^{\infty}f(z-x,x)\,dxdz= \int_{\mathbb R} f(u-x,x)\,dx.$$ (Strictly, the last equality holds at every ##u## where ##z\mapsto\int f(z-x,x)\,dx## is continuous, and by the Lebesgue differentiation theorem it holds for almost every ##u##, which is all one needs to identify a density.)
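The resulting formula ##f_Y(u)=\int f(u-x,x)\,dx## can also be checked numerically. A minimal sketch (again my own, not from the book), assuming ##X_1,X_2## are independent ##\mathrm{Exp}(1)## variables, so ##f(x_1,x_2)=e^{-x_1-x_2}## on the positive quadrant and ##Y\sim\Gamma(2,1)## with known density ##f_Y(u)=ue^{-u}##:

```python
# Check f_Y(u) = ∫ f(u - x, x) dx against the closed-form Gamma(2, 1) density
# for the sum of two independent Exp(1) random variables.
import numpy as np
from scipy import integrate, stats

def joint_density(x1, x2):
    """Joint pdf of two independent Exp(1) variables (zero off the quadrant)."""
    return np.exp(-x1 - x2) if (x1 >= 0 and x2 >= 0) else 0.0

def f_Y(u):
    # The integrand vanishes unless 0 <= x <= u, so integrate over [0, u].
    val, _ = integrate.quad(lambda x: joint_density(u - x, x), 0.0, u)
    return val

for u in (0.5, 1.0, 3.0):
    exact = stats.gamma.pdf(u, a=2)  # Gamma(2, 1) density: u * exp(-u)
    print(f"u={u}: convolution {f_Y(u):.6f}, exact {exact:.6f}")
```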
 
  • #3
I guess regarding my first question, the whole derivation assumes that the pair ##(X_1,X_2)## has a joint density ##f##, and that assumption is what makes ##Y## a continuous random variable: the computation above exhibits a density for it. Pathological cases such as ##X_2=-X_1##, where ##Y=0## is degenerate, are excluded, since there the pair concentrates on a line in ##\mathbb R^2## and has no joint density with respect to Lebesgue measure, as illustrated below.
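A tiny illustration of why marginal continuity alone is not enough (my own sketch): take ##X_1\sim U(0,1)## and ##X_2=-X_1##. Both marginals are continuous, yet every simulated value of the sum is exactly zero, so ##Y## has no density.

```python
# Both marginals are continuous, but the sum is a point mass at 0:
# (X1, X2) lives on the line x2 = -x1 and has no joint density on R^2.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, size=100_000)  # X1 ~ U(0, 1), a continuous r.v.
x2 = -x1                                  # X2 = -X1, also continuous marginally
y = x1 + x2

print(np.all(y == 0.0))  # True: every sample of Y is exactly 0
```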
 
