Probability: Nested Uniform Distributions

In summary: for Problem A, the probability Pr(U > 1/2) is found by integrating the joint density f_{T,U}(t,u) = 1/t over the region where u > 1/2, keeping in mind that the bounds on t depend on u. For Problem B, the probability that exactly two parts are defective is found by integrating the conditional mass function p_{X|P=p}(x) = (3!/(x!(3-x)!)) p^x (1-p)^(3-x), evaluated at x = 2, over p from 0 to 1; the result is the marginal probability mass of X.
  • #1
ObliviousSage

Homework Statement



Problem A:
A random variable T is selected from a uniform distribution over (0,1]. Then a second random variable U is selected from a uniform distribution over (0,T]. Determine the probability Pr(U>1/2).

Problem B:
Suppose 3 identical parts are chosen for inspection. Each part is defective with probability p, independently of the other parts. The parameter p is, in turn, a uniform random variable over the interval (0,1]. What is the probability that exactly two parts are defective?

Homework Equations



for a uniform distribution over (a,b), the density is f(x) = 1/(b-a)

for a binomial distribution, n trials each with probability p of success, Pr(k successes) = (n!/(k!(n-k)!)) p^k (1-p)^(n-k)
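For reference, the specific case needed below (n = 3 trials, k = 2 successes) reads:

[tex] \Pr(2 \text{ successes}) = \frac{3!}{2!\,1!}\, p^2 (1-p) = 3p^2(1-p). [/tex]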

The Attempt at a Solution



Problem A:
It seems like we want the marginal distribution for U.

The density of T is f_T(t) = 1. The conditional density of U given T = t is f_{U|T=t}(u) = 1/t.

The joint density is the conditional of U times the marginal of T, so f_{T,U}(t,u) = 1/t.

The marginal density for U is the joint integrated for T over its bounds, or f_U(u) = integral over t from 0 to 1 of (1/t) dt. That integral is [ln t] evaluated from 0 to 1, except that resolves to 0 minus negative infinity.

The setup and integration seem pretty simple; did I do something wrong? Or is my entire approach wrong?

Problem B:
P has density 1. X is binomially distributed, with "success" representing a defective part, so 3 trials with probability p of success. Thus the conditional probability mass for X is p_{X|P=p}(x) = (3!/(x!(3-x)!)) p^x (1-p)^(3-x). I want p_X(2).

To get the marginal probability for X, I need to integrate, over p, the conditional mass for X times the marginal density for P; since P's marginal density is 1, the integrand is just the conditional mass for X.

Thus, I want to solve p_X(x) = (3!/(x!(3-x)!)) * integral over p from 0 to 1 of p^x (1-p)^(3-x) dp. I have no idea how to actually integrate that; is there a formula for reducing it to a sum in terms of x? Integration by parts doesn't look like it'll get me anywhere, and Wolfram Alpha's fancy integrator gave me a huge string of stuff that involved a lot of something called the hypergeometric function...
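Aside (not from the original post): the integral above is a standard Beta-function integral. For nonnegative integers a and b,

[tex] \int_0^1 p^a (1-p)^b \, dp = \frac{a!\, b!}{(a+b+1)!}. [/tex]

For x = 2 this gives 2!·1!/4! = 1/12.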
 
  • #2
The marginal density for U is the joint integrated for T over its bounds, or f_U(u) = integral over t from 0 to 1 of (1/t) dt.

The bounds for integration with respect to U vary with T. They aren't 0 to 1 for all values of T. Visualize the joint density as f(x,y). It is non-zero on half the unit square. So if you integrate with respect to U, the bounds of integration must be a function of T. If someone told you to integrate the function f(x,y) = 1/y over a triangular region, you would recognize the problem.

An interesting shorthand sometimes used in probability theory is to state all theorems involving integrals of distributions with the limits of integration from minus infinity to plus infinity. This works with the understanding that the distributions involved will be defined to be zero everywhere they need to be. When it comes time to actually do problems, the limits of integration must be worked out, because that is the only convenient way to apply calculus to functions that are given by certain formulas on restricted regions and are "zero everywhere else".
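For concreteness (a worked line added here, following the reply above): since the joint density is non-zero only where 0 < u ≤ t ≤ 1, integrating out T for a fixed u runs from t = u to t = 1:

[tex] f_U(u) = \int_u^1 \frac{1}{t}\, dt = -\ln u, \qquad 0 < u \le 1. [/tex]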
 
  • #3
integral over p from 0 to 1 of p^x (1-p)^(3-x) dp. I have no idea how to actually integrate that;

Can you do the integral [tex] \int_0^1 p^2 (1-p) dp [/tex] ? You only have to get an answer for 2 defective parts so don't try to do the integration with the symbolic value "x" in the integrand.
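Carrying the suggestion through (a worked check, not part of the original reply):

[tex] \int_0^1 p^2(1-p)\, dp = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}, \qquad p_X(2) = \frac{3!}{2!\,1!}\cdot\frac{1}{12} = \frac{1}{4}. [/tex]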
 
  • #4
From what you've described, you are dealing with two random variables: U, which is your standard uniform on (0,1], and V, which is uniform on (0,T], where T is the realization of U.

In this case we know that V <= U. Based on that, you need to work out a distribution that reflects this.

Since they are both uniform distributions (both with a potential (0,1] parameter/density/configuration/whatever you want to call it), you have to use this information to set up your integral expression for finding a probability density function, and then sub in your limits to obtain the appropriate probability.
 
  • #5
Stephen Tashi said:
Can you do the integral [tex] \int_0^1 p^2 (1-p) dp [/tex] ? You only have to get an answer for 2 defective parts so don't try to do the integration with the symbolic value "x" in the integrand.

Oh hey, yeah, that makes that one easy! Thanks Stephen! I totally forgot I didn't need the full probability mass function, just that one particular value.
 
  • #6
Stephen Tashi said:
The bounds for integration with respect to U vary with T. They aren't 0 to 1 for all values of T. Visualize the joint density as f(x,y). It is non-zero on half the unit square. So if you integrate with respect to U, the bounds of integration must be a function of T. If someone told you to integrate the function f(x,y) = 1/y over a triangular region, you would recognize the problem.

An interesting shorthand sometimes used in probability theory is to state all theorems involving integrals of distributions with the limits of integration from minus infinity to plus infinity. This works with the understanding that the distributions involved will be defined to be zero everywhere they need to be. When it comes time to actually do problems, the limits of integration must be worked out, because that is the only convenient way to apply calculus to functions that are given by certain formulas on restricted regions and are "zero everywhere else".

I'm not sure I understand; don't I need to integrate with respect to T to get the marginal density for U? And the bounds for integration with respect to T should be 0 to 1, right? I want things just in terms of U, with no T.

I'm familiar with what you said in the second paragraph; I just thought I'd done that correctly. I guess not. :confused:

Edit: Thought about this some more. I think I was approaching it wrong, trying to get the marginal density of U so I could integrate that for the marginal distribution of U. It sounds like you're saying I should do both integrals in the same equation, and integrate in the opposite order.

I did a quick sketch, and it looks like I want integral over t from 1/2 to 1 of ( integral over u from 1/2 to t of f_{T,U}(t,u) du ) dt. That resolves to (1/2)(1+ln(1/2)), about 0.15, which sounds about right.
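Written out (a worked check added here, not part of the original post):

[tex] \Pr\!\left(U > \tfrac{1}{2}\right) = \int_{1/2}^{1} \int_{1/2}^{t} \frac{1}{t}\, du\, dt = \int_{1/2}^{1} \left(1 - \frac{1}{2t}\right) dt = \frac{1}{2} + \frac{1}{2}\ln\tfrac{1}{2} \approx 0.153. [/tex]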
 

FAQ: Probability: Nested Uniform Distributions

What is a nested uniform distribution?

A nested uniform distribution, in the sense used in this thread, is a hierarchy of uniform distributions: one random variable is uniform over a fixed interval, and a second random variable is uniform over an interval whose endpoint depends on the realized value of the first. In Problem A above, T is uniform on (0,1] and, given T = t, U is uniform on (0,t].

How is a nested uniform distribution different from a regular uniform distribution?

A regular uniform distribution assigns equal density to every value in a single fixed interval. In a nested setup, the interval of the inner variable depends on the outer variable, so while each conditional distribution is uniform, the resulting marginal distribution of the inner variable is generally not uniform. This allows for more variability and structure than a single uniform distribution.

What are some real-world applications of nested uniform distributions?

Nested uniform distributions are commonly used in modeling natural phenomena such as species distribution and population growth. They are also used in market research to understand consumer behavior and in finance to model stock prices.

How are probabilities calculated in a nested uniform distribution?

In a nested uniform distribution, probabilities are calculated by conditioning: multiply the conditional probability (or density) of the inner variable, given the outer variable, by the density of the outer variable, and then integrate over the outer variable (the law of total probability). For example, if an outer event has probability 0.5 and an inner event has conditional probability 0.3 given the outer event, the probability of both occurring is 0.5 x 0.3 = 0.15.
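As an illustration using Problem A from the thread above (added here as a sketch):

[tex] \Pr\!\left(U > \tfrac{1}{2}\right) = \int_0^1 \Pr\!\left(U > \tfrac{1}{2} \mid T = t\right) f_T(t)\, dt = \int_{1/2}^{1} \frac{t - \tfrac{1}{2}}{t}\, dt \approx 0.153. [/tex]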

Can nested uniform distributions be used to model continuous data?

Yes. In fact, both variables in Problem A above are continuous. The same conditioning approach also works when the inner variable is discrete, as in Problem B, where the number of defective parts is binomial given p, while p itself is a continuous uniform random variable on (0,1].
