Confusion about determining distribution of sum of two random variables

In summary, determining the distribution of the sum of two random variables can be confusing because the approach depends on whether the variables are independent or dependent. For independent variables, the convolution of their individual densities gives the density of the sum; for dependent variables, the joint distribution must be used, which complicates the analysis. Understanding this distinction is essential for accurate statistical modeling.
  • #1
psie
Homework Statement
Let ##X## and ##Y## be independent r.v. such that ##X\in U(0,1)## and ##Y\in U(0,\alpha)##. Find the density function of ##Z=X+Y##. Remark: Note that there are two cases: ##\alpha\geq 1## and ##\alpha <1##.
Relevant Equations
The relevant equation is that the pdf of the sum of two continuous random variables is a convolution.
Let's recall the densities of ##X## and ##Y##:
\begin{align}
f_X(x)=\mathbf{1}_{(0,1)}(x), \quad f_Y(y)=\frac{1}{\alpha}\mathbf{1}_{(0,\alpha)}(y)
\end{align}
Let ##z\in (0,1+\alpha)##. Then ##f_Z(z)## is given by:
\begin{align}
f_Z(z)=\int_\mathbb{R} f_X(t)f_Y(z-t)\,dt
\end{align}
Both ##f_X## and ##f_Y## vanish outside bounded intervals, so we check each factor in turn. First, ##f_X(t)## is nonzero when ##0<t<1##. Next, ##f_Y(z-t)## is nonzero when ##0<z-t<\alpha##, i.e. when ##z-\alpha<t<z##. We want to satisfy all these inequalities at once, which means ##\max\{z-\alpha,0\}<t<\min\{1,z\}##. Hence:
\begin{align*}
f_Z(z)=\int_{\mathbb{R}}f_X(t)f_Y(z-t)\,dt=\int^{\min\{1,z\}}_{\max\{z-\alpha,0\}}\frac{1}{\alpha}\,dt=\frac{\min\{1,z\}- \max\{z-\alpha,0\}}{\alpha}
\end{align*}
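As a sanity check, here is a minimal Monte Carlo sketch (assuming NumPy; the value ##\alpha=0.5## is only an illustrative choice) comparing the min/max expression against a simulated histogram on ##(0,1+\alpha)##:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5        # illustrative; try values above and below 1
n = 1_000_000

# Simulate Z = X + Y with X ~ U(0,1), Y ~ U(0,alpha)
z = rng.uniform(0, 1, n) + rng.uniform(0, alpha, n)

# Candidate density from the convolution, for z in (0, 1 + alpha)
def f_Z(z, alpha):
    return (np.minimum(1, z) - np.maximum(z - alpha, 0)) / alpha

# Histogram estimate of the density, compared at a few interior points
hist, edges = np.histogram(z, bins=200, range=(0, 1 + alpha), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for p in np.linspace(0.1, 1 + alpha - 0.1, 5):
    est = hist[np.argmin(np.abs(centers - p))]
    print(f"z={p:.2f}  formula={f_Z(p, alpha):.3f}  simulated={est:.3f}")
```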

Now, what troubles me is the remark in the problem statement. I don't see why there are two cases to consider. To me, the density is simply the one given in the last equation, isn't it?
 
  • #2
Consider the case ##\alpha = 1## and ##z = -1##. Your function would be
$$
f_Z(-1) = \min(1,-1) - \max(-1-1,0) = -1 - 0 = -1.
$$
Is this reasonable?
 
  • #3
psie said:
Relevant Equations: The relevant equation is that the pdf of the sum of two continuous random variables is a convolution.
Nitpick: convolution of _independent_ random variables.
 
  • #4
Orodruin said:
Consider the case ##\alpha = 1## and ##z = -1##. Your function would be
$$
f_Z(-1) = \min(1,-1) - \max(-1-1,0) = -1 - 0 = -1.
$$
Is this reasonable?
How can ##z=-1##? If ##\alpha=1##, then ##z\in (0,2)##, no?
 
  • #5
psie said:
How can ##z=-1##? If ##\alpha=1##, then ##z\in (0,2)##, no?
Exactly. The distribution should be zero there. Your expression is not only non-zero, but negative.
 
  • #6
Orodruin said:
Exactly. The distribution should be zero there. Your expression is not only non-zero, but negative.
Ok. But couldn’t we simply say that the expression for ##f_Z(z)## I gave is the distribution for ##z\in (0,1+\alpha)## and ##0## otherwise?
 
  • #7
psie said:
Ok. But couldn’t we simply say that the expression for ##f_Z(z)## I gave is the distribution for ##z\in (0,1+\alpha)## and ##0## otherwise?
Sure, but that is still kind of breaking it up into cases. So is using the min/max functions.
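For reference, expanding the min/max expression on ##(0,1+\alpha)## gives the two advertised cases explicitly (a direct unpacking of the formula above, not part of the original thread). For ##\alpha\geq 1##:
$$
f_Z(z)=\begin{cases} z/\alpha, & 0<z<1,\\ 1/\alpha, & 1\leq z<\alpha,\\ (1+\alpha-z)/\alpha, & \alpha\leq z<1+\alpha,\\ 0, & \text{otherwise,}\end{cases}
$$
and for ##\alpha<1##:
$$
f_Z(z)=\begin{cases} z/\alpha, & 0<z<\alpha,\\ 1, & \alpha\leq z<1,\\ (1+\alpha-z)/\alpha, & 1\leq z<1+\alpha,\\ 0, & \text{otherwise.}\end{cases}
$$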
 

FAQ: Confusion about determining distribution of sum of two random variables

1. What is the distribution of the sum of two independent random variables?

The distribution of the sum of two independent random variables can be determined using the convolution of their individual probability distributions. If X and Y are independent random variables with probability density functions (PDFs) f_X(x) and f_Y(y), respectively, the PDF of the sum Z = X + Y is given by the convolution: f_Z(z) = ∫ f_X(x) f_Y(z - x) dx.
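As an illustration, here is a minimal numerical sketch (assuming NumPy; the grid step and the choice of U(0,1) and U(0,2) are arbitrary) that approximates this convolution on a grid:

```python
import numpy as np

dx = 0.001
x = np.arange(0, 3, dx)   # grid covering the support of the sum

# Example densities: X ~ U(0,1), Y ~ U(0,2)
f_X = np.where((x > 0) & (x < 1), 1.0, 0.0)
f_Y = np.where((x > 0) & (x < 2), 0.5, 0.0)

# Discrete approximation of f_Z(z) = ∫ f_X(x) f_Y(z - x) dx
f_Z = np.convolve(f_X, f_Y) * dx
z = np.arange(len(f_Z)) * dx

print("total mass ≈", f_Z.sum() * dx)       # should be close to 1
print("f_Z(0.5) ≈", f_Z[int(0.5 / dx)])     # rising part: about 0.25
```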

2. How do I find the mean and variance of the sum of two random variables?

The mean of the sum of two random variables is simply the sum of their means. If X and Y are random variables with means E[X] and E[Y], then E[Z] = E[X] + E[Y]. The variance of the sum depends on whether the variables are independent. If X and Y are independent, then Var(Z) = Var(X) + Var(Y). If they are not independent, you must also consider the covariance: Var(Z) = Var(X) + Var(Y) + 2Cov(X, Y).
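A short simulation sketch (assuming NumPy; the normal distributions and the way dependence is constructed are illustrative choices) that checks these formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Independent case: E[Z] = E[X] + E[Y] and Var(Z) = Var(X) + Var(Y)
x = rng.normal(0, 2, n)                  # Var(X) = 4
y = rng.normal(1, 3, n)                  # Var(Y) = 9
print(np.mean(x + y), np.mean(x) + np.mean(y))   # both ≈ 1
print(np.var(x + y), np.var(x) + np.var(y))      # both ≈ 13

# Dependent case: Var(Z) = Var(X) + Var(Y) + 2 Cov(X, Y)
y_dep = 0.5 * x + rng.normal(0, 1, n)    # correlated with x
cov = np.cov(x, y_dep)[0, 1]
print(np.var(x + y_dep), np.var(x) + np.var(y_dep) + 2 * cov)  # both ≈ 10
```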

3. What if the random variables are not independent?

If the random variables are not independent, you cannot simply add their variances to find the variance of their sum. Instead, you need to include the covariance between the two variables. The formula for the variance of the sum becomes: Var(Z) = Var(X) + Var(Y) + 2Cov(X, Y), where Cov(X, Y) is the covariance between X and Y.

4. Can the sum of two non-identically distributed random variables still have a well-defined distribution?

Yes, the sum of two non-identically distributed random variables can still have a well-defined distribution. The convolution of their probability distributions will yield the distribution of the sum, regardless of whether the individual distributions are identical. The resulting distribution will depend on the specific shapes and parameters of the individual distributions.

5. How do I compute the distribution of the sum using moment-generating functions?

The moment-generating function (MGF) can be used to find the distribution of the sum of two random variables. If X and Y are independent random variables with MGFs M_X(t) and M_Y(t), respectively, then the MGF of the sum Z = X + Y is M_Z(t) = M_X(t) M_Y(t). You can then determine the distribution of Z by recognizing M_Z as the MGF of a known distribution (or, more generally, by inverting it).
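As a standard worked example (not specific to this thread): if X and Y are independent Exp(λ) variables, each with MGF λ/(λ − t) for t < λ, then
$$
M_Z(t) = M_X(t)\,M_Y(t) = \left(\frac{\lambda}{\lambda - t}\right)^2,
$$
which is the MGF of a Gamma(2, λ) distribution, so Z ~ Gamma(2, λ) with no convolution integral needed.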
