On pdf of a sum of two r.v.s and differentiating under the integral

In summary, the pdf of the sum of two continuous random variables is obtained by integrating their joint density along the set where the two variables sum to the given value; when the variables are independent, this reduces to the convolution of their individual pdfs. Deriving the formula amounts to differentiating the distribution function under the integral sign, which can be justified by a change of variables and the fundamental theorem of calculus, or by Leibniz's rule under suitable conditions. The resulting density is what one uses when analyzing sums of random variables in statistical inference and modeling.
  • #1
psie
TL;DR Summary
I'm stuck at a derivation in my book on the pdf of the sum of two continuous random variables ##Y=X_1+X_2##. The formula I'm after is $$f_Y(u)=\int_{\mathbb R} f(x_1,u-x_1)\,dx_1=\int_{\mathbb R} f(u-x_2,x_2)\,dx_2,$$ where ##f## is the joint density of ##(X_1,X_2)##.
I'm reading in my book about the pdf of the sum of two continuous random variables ##X_1,X_2##. First, I'm a bit confused by the fact that the sum of two continuous random variables need not be continuous. Is the derivation below still valid despite this, or is there some key assumption that I'm missing for it to be valid?

Regarding the derivation in my book, I will omit some details, but assume ##X_1,X_2## are both real-valued and ##P## is a probability measure on some probability space. Recall ##\int 1_A \, dP=P(A)##, and for a measurable function ##g## such that ##E[|g(X)|]<\infty##, we have $$E[g(X)]=\int_\Omega g(X(\omega))\, P(d\omega)=\int_\mathbb{R} g(x) \, P_X(dx)=\int_{\mathbb R}g(x) f(x)\,dx,$$ where ##P_X## is the probability induced by ##X## (the pushforward measure of ##P## under ##X##) and ##f## is its density. The distribution function is then given by $$\begin{align}F_{Y}(u)&=P(X_1+X_2\leq u) \nonumber \\ &=E[1_{X_1+X_2\leq u}] \nonumber \\ &=\int_{\mathbb R^2}1_{x_1+x_2\leq u}f(x_1,x_2)\,dx_1dx_2 \nonumber \\ &=\int_{\mathbb R}\int_{\mathbb R} 1_{x_1\leq u-x_2}f(x_1,x_2)\, dx_1dx_2 \nonumber \\ &=\int_{-\infty}^\infty\int_{-\infty}^{u-x_2}f(x_1,x_2)\,dx_1dx_2, \nonumber \end{align}$$ where we used the definition of the expectation and the Fubini–Tonelli theorem (here ##f## denotes the joint density of ##(X_1,X_2)##). The author then says: we differentiate with respect to ##u##, move ##\frac{d}{du}## inside the outer integral, and use the fundamental theorem of calculus. However, not much motivation is given for this maneuver. Why can we do this? I'm familiar with Leibniz's rule, but I'm unsure whether it applies here.
 
  • #2
I think I found an answer to my question. In the last integral, we make a change of variables in the inner integral (with ##x_2## fixed): ##z=x_1+x_2##, and rename ##x_2=x## (for aesthetics). Then $$F_Y(u)=\int_{-\infty}^\infty\int_{-\infty}^{u}f(z-x,x)\,dzdx.$$ We change the order of integration and then just use the fundamental theorem of calculus: $$f_Y(u)=\frac{d}{du} F_Y(u)= \frac{d}{du}\int_{-\infty}^u\int_{-\infty}^{\infty}f(z-x,x)\,dxdz= \int_{\mathbb R} f(u-x,x)\,dx.$$ (Changing the order of integration is justified by Tonelli's theorem, since the integrand is nonnegative. And since ##F_Y(u)=\int_{-\infty}^u h(z)\,dz## with ##h(z)=\int_{\mathbb R}f(z-x,x)\,dx## integrable, ##F_Y## is absolutely continuous, so ##F_Y'(u)=h(u)## for almost every ##u##; no appeal to Leibniz's rule is needed.)
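As a sanity check of the formula (this example is not from the book): if ##X_1,X_2## are independent ##\mathrm{Exp}(1)## random variables, so ##f(x_1,x_2)=e^{-x_1}e^{-x_2}## for ##x_1,x_2>0##, then for ##u>0## $$f_Y(u)=\int_{\mathbb R} f(u-x,x)\,dx=\int_0^u e^{-(u-x)}e^{-x}\,dx=\int_0^u e^{-u}\,dx=ue^{-u},$$ which is the ##\Gamma(2,1)## density, as expected for the sum of two independent ##\mathrm{Exp}(1)## variables.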
 
  • #3
I guess regarding my first question: the key assumption in the derivation is that ##(X_1,X_2)## has a joint density ##f##. Under that assumption the computation itself shows that ##Y## is (absolutely) continuous with the stated density. The cases where the sum of two continuous random variables fails to be continuous are those where the pair has no joint density, e.g. ##X_2=-X_1##, so they fall outside the scope of the derivation.
 

FAQ: On pdf of a sum of two r.v.s and differentiating under the integral

What is the probability density function (pdf) of the sum of two random variables?

The probability density function of the sum of two independent random variables can be found using the convolution of their individual pdfs. If X and Y are two independent random variables with pdfs f_X(x) and f_Y(y), respectively, then the pdf of the sum Z = X + Y is given by:

f_Z(z) = ∫ f_X(x) f_Y(z - x) dx

The integrand f_X(x) f_Y(z − x) accounts for the ways in which the two variables can take values x and z − x that sum to z; integrating over all x gives the convolution (f_X * f_Y)(z).
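As a quick numerical sanity check, here is a minimal sketch (not from the thread, and assuming NumPy and SciPy are available) that evaluates the convolution integral for two independent standard normal variables, whose sum is known to be N(0, 2):

```python
# Minimal numerical check of f_Z(z) = ∫ f_X(x) f_Y(z - x) dx for two
# independent standard normals; their sum should have the N(0, 2) density.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def f_Z(z):
    """Convolution integral evaluated numerically."""
    val, _ = quad(lambda x: norm.pdf(x) * norm.pdf(z - x), -np.inf, np.inf)
    return val

for z in (-1.0, 0.0, 0.5, 2.0):
    exact = norm.pdf(z, scale=np.sqrt(2))  # N(0, 2) density
    print(f"z={z:+.1f}  convolution={f_Z(z):.6f}  exact={exact:.6f}")
```

The two printed columns should agree to several decimal places.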

Can the pdf of the sum of two dependent random variables be derived similarly?

For dependent random variables, the approach differs because the joint distribution must be considered. The pdf of the sum Z = X + Y can be computed by integrating the joint pdf f_{X,Y}(x,y) over the appropriate region:

f_Z(z) = ∫ f_{X,Y}(x, z - x) dx

This requires knowledge of the joint distribution of X and Y, which captures their dependence.
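As a simple illustration (this example is not from the thread), take the joint density f_{X,Y}(x, y) = x + y on the unit square 0 ≤ x, y ≤ 1, which makes X and Y dependent. The integrand is f_{X,Y}(x, z − x) = z, and integrating over the x for which both x and z − x lie in [0, 1] gives

f_Z(z) = z^2 for 0 ≤ z ≤ 1, and f_Z(z) = z(2 − z) for 1 ≤ z ≤ 2,

which integrates to 1 over [0, 2], as a density should.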

What is the significance of differentiating under the integral sign in this context?

Differentiating under the integral sign is a useful technique in probability and statistics, particularly when dealing with parameterized distributions. It allows one to compute how an integral quantity, such as a distribution function or an expectation, changes with respect to a parameter such as a mean or variance. If F(θ) = ∫ f(x, θ) dx depends on a parameter θ, then under suitable conditions:

F'(θ) = ∫ (∂/∂θ) f(x, θ) dx

This can simplify calculations when determining how a parameter affects the distribution of a sum of random variables. In the thread above, the role of the parameter is played by the argument u of the distribution function F_Y(u), and differentiating under the integral recovers the density f_Y(u).
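For instance (a standard illustration, not specific to this thread), take f(x, θ) = e^(−θx) on (0, ∞) with θ > 0. Then

F(θ) = ∫_0^∞ e^(−θx) dx = 1/θ, and F'(θ) = ∫_0^∞ (−x) e^(−θx) dx = −1/θ^2,

which agrees with differentiating 1/θ directly.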

Are there any conditions for differentiating under the integral sign?

Yes, there are conditions that must be satisfied to differentiate under the integral sign. A standard set of sufficient conditions (a dominated-convergence form of Leibniz's rule) is:

  • f(x, θ) is integrable in x for each value of the parameter θ.
  • The partial derivative ∂f/∂θ(x, θ) exists for (almost) every x.
  • There is an integrable function g(x), not depending on θ, such that |∂f/∂θ(x, θ)| ≤ g(x) for all θ in the range of interest.

If these conditions are met, one can safely differentiate under the integral sign.
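As a concrete check, here is a minimal numerical sketch (not from the FAQ, assuming NumPy and SciPy) for the example F(θ) = ∫_0^∞ e^(−θx) dx used above. The conditions hold for θ ≥ θ_0 > 0 because |∂/∂θ e^(−θx)| = x e^(−θx) ≤ x e^(−θ_0 x), which is integrable:

```python
# Check numerically that F'(theta) equals ∫ (∂/∂θ) f(x, θ) dx
# for f(x, θ) = exp(-θ x) on (0, ∞), where F(θ) = 1/θ.
import numpy as np
from scipy.integrate import quad

def F(theta):
    val, _ = quad(lambda x: np.exp(-theta * x), 0, np.inf)
    return val

theta, h = 2.0, 1e-5
lhs = (F(theta + h) - F(theta - h)) / (2 * h)                # numerical F'(θ)
rhs, _ = quad(lambda x: -x * np.exp(-theta * x), 0, np.inf)  # ∫ ∂f/∂θ dx
print(lhs, rhs, -1 / theta**2)                               # all three ≈ -0.25
```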

How can the pdf of the sum of two random variables be used in practical applications?

The pdf of the sum of two random variables is crucial in various fields such as finance, engineering, and the natural sciences. For example, in finance, it can be used to model the total return of a portfolio consisting of two assets. In engineering, it can help assess the combined effects of different sources of uncertainty.
