Finding a conditional probability from joint p.d.f

In summary, finding a conditional probability from a joint probability density function (p.d.f.) relies on the relationship between joint and marginal distributions. The conditional density of one variable given another is the ratio of the joint p.d.f. of the two variables to the marginal p.d.f. of the conditioning variable. Mathematically, \( f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} \), where \( f_{X,Y}(x,y) \) is the joint p.d.f. and \( f_Y(y) \) is the marginal p.d.f. of \( Y \). This lets you quantify how likely one event is given that another event has occurred.
  • #1
Hamiltonian
Homework Statement
If the following joint p.d.f. can be considered for the random variables X, Y, and Z:
$$f(x,y,z) = \begin{cases} 2 & \text{for } 0<x<y<1 \text{ and } 0<z<1 \\ 0 & \text{otherwise}\end{cases}$$

Evaluate ##\mathbb{P}(2X > Y |1 < 4Z < 3).##
Relevant Equations
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)} {f_{Y}(y)}$$
Using the equation mentioned under Relevant Equations I can get $$\mathbb{P}(2X > Y \mid 1 < 4Z < 3) = \frac{\mathbb{P}(2X>Y,\ 1<4Z<3)}{\mathbb{P}(1<4Z<3)}.$$ I can find the denominator by finding the marginal probability density ##f_{Z}(z)## and then integrating it from 1/4 to 3/4 (since ##1<4Z<3## means ##1/4<Z<3/4##). But I am a little confused about the limits of integration I need to use to find ##f_{Z}(z)##, and then there is still the question of what I need to do to find the numerator.
$$f_{Z}(z) = \int_{?}^{?}\int_{?}^{?} f(x,y,z) dx dy$$

Additionally, I wonder if this approach is completely flawed and whether there is a better way to approach this problem.
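Whatever the integration yields, a quick Monte Carlo simulation (not part of the original post, just a numerical sanity check) can estimate the conditional probability directly. Sampling two independent uniforms and taking their minimum and maximum produces exactly the density 2 on the triangle 0 < x < y < 1, and Z is drawn uniformly on (0, 1):

```python
import random

random.seed(0)

# Sample from f(x,y,z) = 2 on 0<x<y<1, 0<z<1:
# (x, y) = (min, max) of two independent U(0,1) draws has density 2
# on the triangle 0 < x < y < 1; z is an independent U(0,1) draw.
n = 200_000
hits_cond = 0    # count of {1 < 4Z < 3}, i.e. 0.25 < z < 0.75
hits_joint = 0   # count of {2X > Y and 1 < 4Z < 3}
for _ in range(n):
    a, b = random.random(), random.random()
    x, y = min(a, b), max(a, b)
    z = random.random()
    if 0.25 < z < 0.75:
        hits_cond += 1
        if 2 * x > y:
            hits_joint += 1

estimate = hits_joint / hits_cond
print(estimate)  # should be close to 1/2
```

The estimate hovers around 0.5, which agrees with the analytic answer ##\mathbb{P}(2X>Y) = \int_0^1\int_{y/2}^{y} 2\,dx\,dy = 1/2##.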
 
  • #2
The approach is not flawed, but there is an easier way.
Are X and Y independent of Z?
If so how can we simplify the target expression ##\mathbb{P}(2X > Y |1 < 4Z < 3)##?

Regarding limits of integration, the starting point is ##-\infty## to ##+\infty##, but you can usually narrow that down by identifying the region over which the integrand is nonzero. If the region is rectangular, with sides aligned with the coordinate axes, your limits will be simple constants. Otherwise the limits of the inner integral will depend on the value of the integration variable of the outer integral.
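As a concrete illustration of that advice applied to the density in post #1 (a sketch assuming SymPy is available): the support is the triangle ##0 < x < y < 1##, so the inner x-limits depend on the outer variable y:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Integer(2)  # value of the joint density on 0 < x < y < 1, 0 < z < 1

# Inner limits depend on the outer variable: x ranges over (0, y),
# then y ranges over (0, 1).
f_z = sp.integrate(f, (x, 0, y), (y, 0, 1))
print(f_z)  # 1, i.e. Z is uniform on (0, 1)
```

Since ##f_Z(z) = 1## on (0, 1), the denominator ##\mathbb{P}(1<4Z<3)## is just the length of the interval (1/4, 3/4), namely 1/2.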
 
  • #3
andrewkirk said:
The approach is not flawed, but there is an easier way.
Are X and Y independent of Z?
If so how can we simplify the target expression ##\mathbb{P}(2X > Y |1 < 4Z < 3)##?
X and Y are independent of Z but are dependent on each other. So is ##\mathbb{P}(2X > Y \mid 1 < 4Z < 3) = \mathbb{P}(2X>Y)##?
 
  • #5
Verify the equality ##f_{X,Y}(x,y)f_Z(z) = f(x,y,z)## to determine independence of ##(X,Y)## and ##Z## if necessary.
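That factorization can be checked symbolically for the density in post #1 (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Integer(2)  # joint density on 0 < x < y < 1, 0 < z < 1

f_xy = sp.integrate(f, (z, 0, 1))            # marginal of (X, Y): equals 2 on the triangle
f_z = sp.integrate(f, (x, 0, y), (y, 0, 1))  # marginal of Z: equals 1 on (0, 1)
print(f_xy * f_z == f)  # True on the support, so (X, Y) is independent of Z
```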
 

FAQ: Finding a conditional probability from joint p.d.f

What is a joint probability density function (joint p.d.f)?

A joint probability density function (joint p.d.f.) is a function used in probability theory to describe the likelihood of two continuous random variables occurring simultaneously. It is denoted as f(x, y) for random variables X and Y, and it must satisfy the properties that f(x, y) ≥ 0 for all x and y, and the integral of f(x, y) over the entire range of X and Y equals 1.

How do you find the marginal probability density functions from a joint p.d.f?

To find the marginal probability density functions from a joint p.d.f., you integrate the joint p.d.f. over the range of the other variable. For example, the marginal p.d.f. of X, denoted as f_X(x), is found by integrating the joint p.d.f. f(x, y) over all values of y: f_X(x) = ∫ f(x, y) dy. Similarly, the marginal p.d.f. of Y, denoted as f_Y(y), is found by integrating f(x, y) over all values of x: f_Y(y) = ∫ f(x, y) dx.
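For example, with the triangle density from this thread, ##f(x,y) = 2## on ##0 < x < y < 1##, both marginals can be computed symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.Integer(2)  # joint density on the triangle 0 < x < y < 1

f_x = sp.integrate(f, (y, x, 1))  # integrate out y over (x, 1): f_X(x) = 2*(1 - x)
f_y = sp.integrate(f, (x, 0, y))  # integrate out x over (0, y): f_Y(y) = 2*y
print(f_x, f_y)
```

Note that the limits of each integral come from the support of the joint density, not from (-∞, ∞).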

What is conditional probability density function?

The conditional probability density function of a random variable X given that another random variable Y is equal to a specific value y is a function that describes the probability distribution of X under the condition that Y = y. It is denoted as f_{X|Y}(x|y) and is calculated using the formula f_{X|Y}(x|y) = f(x, y) / f_Y(y), where f(x, y) is the joint p.d.f. and f_Y(y) is the marginal p.d.f. of Y.

How do you derive the conditional p.d.f. from a joint p.d.f.?

To derive the conditional p.d.f. from a joint p.d.f., you need to divide the joint p.d.f. by the marginal p.d.f. of the conditioning variable. For example, to find the conditional p.d.f. of X given Y = y, you use the formula f_{X|Y}(x|y) = f(x, y) / f_Y(y). First, calculate the marginal p.d.f. f_Y(y) by integrating the joint p.d.f. f(x, y) over all values of x. Then, divide the joint p.d.f. by this marginal p.d.f.
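Applied to this thread's density, the recipe gives f_{X|Y}(x|y) = 2/(2y) = 1/y, i.e. X given Y = y is uniform on (0, y). A short SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.Integer(2)  # joint density on 0 < x < y < 1

f_y = sp.integrate(f, (x, 0, y))  # marginal of Y: 2*y
f_x_given_y = f / f_y             # conditional density: 1/y on 0 < x < y
print(f_x_given_y)
# Sanity check: the conditional density integrates to 1 over 0 < x < y.
print(sp.integrate(f_x_given_y, (x, 0, y)))  # 1
```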

What are the properties of a conditional p.d.f.?

The properties of a conditional p.d.f. include that it must be non-negative, f_{X|Y}(x|y) ≥ 0 for all x, and that it must integrate to 1 over x for each fixed y: ∫ f_{X|Y}(x|y) dx = 1. In other words, for every fixed y with f_Y(y) > 0, the conditional p.d.f. f_{X|Y}(x|y) is itself a valid probability density function in x.
