Determine marginal densities and distributions from joint density

In summary, determining marginal densities and distributions from a joint density function involves integrating the joint density over the remaining variables. For a joint density \( f(x, y) \), the marginal density of \( X \) is found by integrating \( f(x, y) \) with respect to \( y \) over its support, while the marginal density of \( Y \) is obtained by integrating with respect to \( x \). The resulting marginal densities describe each variable on its own, irrespective of the other variable's value, which is essential for understanding each variable's behavior within the joint relationship.
  • #1
psie
Homework Statement
Let ##X## and ##Y## have joint density $$f(x,y)=\begin{cases} 1& \text{for }0\leq x\leq 2,\max(0,x-1)\leq y\leq\min(1,x) \\ 0 &\text{otherwise}.\end{cases}$$ Find the marginal density functions and the joint and marginal distribution functions.
Relevant Equations
The marginal distribution of ##X## given the joint density ##f_{X,Y}## is given by ##f_X(x)=\int_\mathbb{R} f_{X,Y}(x,y) \,dy## and similar for ##Y##.
This is a follow-up to my previous problem.

"Integrating out" the ##y##-variable and ##x##-variable separately, we see that ##f_Y(y)=2## and ##f_X(x)=\min(1,x)-\max(0,x-1)##. From my previous post, we see that ##X## is the sum of two independent ##U(0,1)##-distributed r.v.s. What is the distribution of ##Y## though? It looks to me that if ##0\leq x\leq 2##, i.e. if ##0\leq x\leq 1## or ##1<x\leq 2##, then ##0\leq y\leq x## or ##x-1\leq y\leq 1## respectively. So ##y## ranges from ##0## to ##1##, which doesn't make sense since then the pdf ##f_Y(y)=2## does not integrate to ##1##. However, currently I don't see the error.
 
  • #2
What makes you think that this
psie said:
$$f(x,y)=\begin{cases} 1& \text{for }0\leq x\leq 2,\max(0,x-1)\leq y\leq\min(1,x) \\ 0 &\text{otherwise}.\end{cases}$$
is the density function of the independent RVs ##X## and ##Y## from your previous post?

If ##X## and ##Y## are uniform independent RVs then the density function is a constant on the product of the supports of each variable.
 
  • #3
Additionally, even if the problem is not supposed to be the independent RVs from your previous post, the correct computation of ##f_Y(y)## has the result ##f_Y(y) = 1##, not 2.
 
  • #4
Orodruin said:
What makes you think that this is the density function of the independent RVs ##X## and ##Y## from your previous post?
I am not saying that ##f## given here is the joint density of two ##U(0,1)##-distributed, independent RVs. What I'm claiming is that ##X## in this post is the sum of two independent RVs, both of which are ##U(0,1)##, since in my previous post, we found that the density of the sum of two independent RVs which are ##U(0,1)## is ##f_X## as specified here. Am I making sense?
Orodruin said:
Additionally, even if the problem is not supposed to be the independent RVs from your previous post, the correct computation of ##f_Y(y)## has the result ##f_Y(y) = 1##, not 2.
How did you obtain ##f_Y(y) = 1##? I just don't see it.
 
  • #5
psie said:
How did you obtain ##f_Y(y) = 1##? I just don't see it.
Did you try drawing the support of the joint distribution function? If you do it should be pretty clear.
 
  • #6
Also note that writing down the support region is significantly simpler if you change the order. In other words, start by writing down the range of ##y## and then the restriction on ##x## given ##y##.
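Not part of the thread, but the reordering suggested here can be sketched explicitly from the support given in post #1 (consistent with the ##f_Y(y)=1## stated in post #3):

```latex
% Support from post #1: 0 <= x <= 2 and max(0, x-1) <= y <= min(1, x).
% The y-inequalities say x - 1 <= y and y <= x, i.e. y <= x <= y + 1,
% while 0 <= y <= 1 follows from y >= max(0, x-1) >= 0 and y <= min(1, x) <= 1:
\[
\{(x,y) : 0 \le x \le 2,\ \max(0,x-1) \le y \le \min(1,x)\}
= \{(x,y) : 0 \le y \le 1,\ y \le x \le y+1\},
\]
% so the marginal of Y comes from an inner integral of length one:
\[
f_Y(y) = \int_y^{y+1} 1 \, dx = 1, \qquad 0 \le y \le 1.
\]
```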
 
  • #7
Orodruin said:
Did you try drawing the support of the joint distribution function? If you do it should be pretty clear.
Ok, I will try. What is wrong about calculating the marginal density by integrating out the ##x##-variable though? In other words, $$f_Y(y)=\int_\mathbb{R} f(x,y) \,dx=\int_0^2 1\, dx=2?$$
 
  • #8
psie said:
Ok, I will try. What is wrong about calculating the marginal density by integrating out the ##x##-variable though? In other words, $$f_Y(y)=\int_\mathbb{R} f(x,y) \,dx=\int_0^2 1\, dx=2?$$
$$f_Y(y)=\int_\mathbb{R} f(x,y) \,dx \neq \int_0^2 1\, dx=2.$$

Draw the support region for ##f## and you will see it.
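Not part of the thread, but a quick numerical sketch of the point being made: for fixed ##y##, the indicator in the joint density is nonzero only for ##x\in[y,y+1]##, so integrating over the full strip ##0\leq x\leq 2## overcounts. The grid resolution and the test values of ##y## are arbitrary choices:

```python
import numpy as np

# Joint density from post #1: 1 on 0 <= x <= 2, max(0, x-1) <= y <= min(1, x).
def f(x, y):
    return 1.0 if (0.0 <= x <= 2.0 and max(0.0, x - 1.0) <= y <= min(1.0, x)) else 0.0

# The flawed integral treats the whole strip 0 <= x <= 2 as the support;
# honoring the indicator shows the support for fixed y is [y, y + 1].
xs = np.linspace(-0.5, 2.5, 30001)
dx = xs[1] - xs[0]

vals = {}
for y in (0.25, 0.5, 0.75):
    vals[y] = sum(f(x, y) for x in xs) * dx   # Riemann sum over x
    print(y, round(vals[y], 3))               # each ≈ 1.0, not 2
```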
 

FAQ: Determine marginal densities and distributions from joint density

What is a joint density function?

A joint density function describes the probability distribution of two or more random variables simultaneously. It provides a way to understand how the variables interact with each other and is defined over a multi-dimensional space. The joint density function must be non-negative and integrate to one over the entire space of the random variables.

How do you determine marginal densities from a joint density function?

To determine the marginal density of a random variable from a joint density function, you need to integrate the joint density function over the other variables. For example, if you have a joint density function f(x, y) for random variables X and Y, the marginal density of X, denoted as f_X(x), can be found by calculating the integral: f_X(x) = ∫ f(x, y) dy, where the integration is performed over the entire range of Y.

What is the difference between marginal density and joint density?

The joint density function represents the probability distribution of two or more random variables together, while the marginal density function represents the probability distribution of a single random variable irrespective of the others. Marginal densities provide insights into the behavior of individual variables, while joint densities illustrate the relationship between them.

Can you give an example of calculating a marginal density?

Sure! Suppose we have a joint density function f(x, y) = 6x for 0 < x < 1 and 0 < y < 1 - x. To find the marginal density of X, we integrate f(x, y) with respect to y: f_X(x) = ∫ f(x, y) dy from 0 to 1 - x, which gives f_X(x) = 6x(1 - x) for 0 < x < 1. As a check, this marginal integrates to one over (0, 1).
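The FAQ example above can be verified numerically. This sketch (the function names and grid size are illustrative, not from the FAQ) integrates out y on a grid and compares against the closed form 6x(1 - x):

```python
import numpy as np

# FAQ example: f(x, y) = 6x on 0 < x < 1, 0 < y < 1 - x, else 0.
def f(x, y):
    return 6.0 * x if (0.0 < x < 1.0 and 0.0 < y < 1.0 - x) else 0.0

ys = np.linspace(0.0, 1.0, 20001)
dy = ys[1] - ys[0]

# Marginal of X at a few points: Riemann sum over y vs. closed form 6x(1 - x).
results = {x: sum(f(x, y) for y in ys) * dy for x in (0.25, 0.5)}
for x, numeric in results.items():
    print(x, numeric, 6 * x * (1 - x))   # numeric ≈ closed form
```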

Why are marginal distributions important?

Marginal distributions are important because they allow us to analyze the behavior of individual random variables without considering the influence of other variables. They simplify complex multi-dimensional problems and help in understanding the basic characteristics and properties of each variable, which is essential for statistical inference and decision-making.
