Proving the Uniform Distribution of Y from Independent Random Variables X

In summary, Y is uniformly distributed on the interval $[0,1]$ if $p=\frac{1}{2}$, and its distribution is continuous but singular with respect to Lebesgue measure if $p\neq\frac{1}{2}$.
  • #1
bennyzadir
Let $X_1, X_2, \dots, X_n, \dots$ be independent, identically distributed random variables with common distribution $ \mathbb{P}\{X_i=0\}=1-\mathbb{P}\{X_i=1\}=p $, and define $ Y:= \sum_{n=1}^{\infty}2^{-n}X_n$.
a) Prove that if $p=\frac{1}{2}$, then Y is uniformly distributed on the interval [0,1].
b) Show that if $p \neq \frac{1}{2}$, then the distribution function of Y is continuous but not absolutely continuous, and in fact singular (i.e. singular with respect to Lebesgue measure, that is, with respect to the uniform distribution).

I would really appreciate it if you could help me!
Thank you in advance!
 
  • #2
zadir said:
Let $X_1, X_2, \dots, X_n, \dots$ be independent, identically distributed random variables with common distribution $ \mathbb{P}\{X_i=0\}=1-\mathbb{P}\{X_i=1\}=p $, and define $ Y:= \sum_{n=1}^{\infty}2^{-n}X_n$.
a) Prove that if $p=\frac{1}{2}$, then Y is uniformly distributed on the interval [0,1].
b) Show that if $p \neq \frac{1}{2}$, then the distribution function of Y is continuous but not absolutely continuous, and in fact singular (i.e. singular with respect to Lebesgue measure, that is, with respect to the uniform distribution).

If you let $\varphi_{n}(x)$ be the p.d.f. of the term $2^{-n} X_{n}$ and set $\Phi_{n}(\omega)=\mathcal {F} \{\varphi_{n}(x)\}$, then, because the terms are independent, the Fourier transform of the p.d.f. of $\displaystyle Y=\sum_{n=1}^{\infty} 2^{-n}\ X_{n}$ is...

$\displaystyle \Phi(\omega)= \prod_{n=1}^{\infty} \Phi_{n}(\omega)$ (1)

If $p=\frac{1}{2}$, then...

$\displaystyle \varphi_{n} (x)= \frac{1}{2}\ \delta(x) + \frac{1}{2}\ \delta\left(x-\frac{1}{2^{n}}\right) \implies \Phi_{n}(\omega)= \frac{1}{2}\left(1+e^{-i \frac{\omega}{2^{n}}}\right) = e^{- i \frac{\omega}{2^{n+1}}}\ \cos \frac {\omega}{2^{n+1}}$ (2)

Now you have to remember the identity...

$\displaystyle \frac{\sin \omega}{\omega}= \prod_{n=1}^{\infty} \cos \frac{\omega}{2^{n}}$ (3)
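As a quick numerical sanity check of (3) (a minimal Python sketch, not part of the proof; the name cos_product is made up for illustration):

    import math

    def cos_product(omega, terms=60):
        # Partial product of cos(omega / 2^n) for n = 1..terms;
        # the factors tend to 1 so rapidly that 60 terms suffice.
        prod = 1.0
        for n in range(1, terms + 1):
            prod *= math.cos(omega / 2 ** n)
        return prod

    omega = 2.7  # arbitrary test point
    print(cos_product(omega), math.sin(omega) / omega)  # both ~0.158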

Applying (3) with $\frac{\omega}{2}$ in place of $\omega$, from (1) and (2) you obtain...

$\displaystyle \Phi(\omega)= e^{-i\ \frac{\omega}{2}}\ \frac{\sin \frac{\omega}{2}}{\frac{\omega}{2}}$ (4)

... which is precisely the Fourier transform of the uniform density on $[0,1]$, so that Y is uniformly distributed between 0 and 1...
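The result can also be checked by simulation (a sketch only: the truncation at 53 terms and the helper name sample_Y are choices made here for illustration); the empirical CDF of Y should be close to $F(t)=t$:

    import random

    def sample_Y(p=0.5, terms=53):
        # Truncate Y = sum 2^{-n} X_n after 'terms' bits; the tail is < 2^{-terms}.
        # Per the problem, P{X_n = 0} = p, so X_n = 1 with probability 1 - p.
        return sum(2.0 ** -n * (random.random() < 1 - p) for n in range(1, terms + 1))

    samples = [sample_Y(0.5) for _ in range(100_000)]
    for t in (0.1, 0.25, 0.5, 0.9):
        F_t = sum(y <= t for y in samples) / len(samples)
        print(t, round(F_t, 3))  # for Uniform[0,1], F(t) = t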

Kind regards

$\chi$ $\sigma$
 
  • #3
Thank you for your answer. Do you have any idea for part b)?
 
  • #4
If $p \ne \frac{1}{2}$ the task becomes a little more complex. In that case you have...

$\displaystyle \varphi_{n}(x)= p\ \delta(x) + (1-p)\ \delta (x-\frac{1}{2^{n}}) \implies \Phi_{n} (\omega)= (1-p)\ e^{- i \frac{\omega}{2^{n}}}\ (1+ \frac{p}{1-p}\ e^{i \frac{\omega}{2^{n}}})$ (1)

... and now you have to evaluate the infinite product...

$\displaystyle \Phi(\omega)= \prod_{n=1}^{\infty} \Phi_{n}(\omega)$ (2)

What you can show is that the infinite product (2) converges for every $\omega$. Writing each factor as...

$\displaystyle \Phi_{n}(\omega)= 1 + (1-p)\ (e^{-i \frac{\omega}{2^{n}}}-1)$ (3)

... the product converges because the series...

$\displaystyle \sum_{n=1}^{\infty} (e^{-i \frac{\omega}{2^{n}}}-1)$ (4)

... converges absolutely, since $|e^{-i \frac{\omega}{2^{n}}}-1| \le \frac{|\omega|}{2^{n}}$.

The effective computation of (2) is a different task that requires a little effort. The singularity itself, however, can be seen directly: by the strong law of large numbers the fraction of 1's among $X_1,\dots,X_N$ tends to $1-p$ almost surely, so Y is concentrated on the set of points of [0,1] whose binary digits contain 1's with asymptotic frequency $1-p$; for $p \neq \frac{1}{2}$ this set has Lebesgue measure zero, because Lebesgue-almost every point has digit frequency $\frac{1}{2}$ (Borel's normal number theorem). Continuity of the distribution function follows because each single point has probability at most $2\ \max(p,1-p)^{m}$ for every $m$, hence zero.
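To see concretely how non-uniform Y is when $p \neq \frac{1}{2}$, one can tabulate the exact mass of the eight dyadic intervals of length $\frac{1}{8}$, which depends only on the first three binary digits (a Python sketch; the value p = 0.9 is an arbitrary choice):

    from itertools import product

    p = 0.9  # P{X_n = 0}; any p != 1/2 shows the effect

    # P(Y in [k/8, (k+1)/8)) is determined by the first three digits b1 b2 b3:
    # each digit 0 contributes a factor p, each digit 1 a factor 1 - p.
    for b1, b2, b3 in product((0, 1), repeat=3):
        k = 4 * b1 + 2 * b2 + b3
        mass = (p if b1 == 0 else 1 - p) * (p if b2 == 0 else 1 - p) * (p if b3 == 0 else 1 - p)
        print(f"[{k}/8, {k + 1}/8): {mass:.4f}  (uniform: 0.1250)")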

Kind regards

$\chi$ $\sigma$
 
  • #5


a) To prove that Y is uniformly distributed on the interval [0,1], it is enough to show that for every dyadic interval $I=[\frac{k}{2^m},\frac{k+1}{2^m}]$ with $0 \le k < 2^m$, the probability that Y falls within $I$ equals the length of $I$, namely $2^{-m}$; these intervals generate the Borel σ-algebra of [0,1].

First, note that Y can only take values in the interval [0,1], since it is a sum of terms that are either 0 or $2^{-n}$, and $\sum_{n=1}^{\infty}2^{-n}=1$.

Second, every single point has probability zero: the event $\{Y=y\}$ forces the digits $X_1,\dots,X_m$ to match the first $m$ digits of one of the (at most two) binary expansions of $y$, so

$P(Y=y) \le 2 \cdot 2^{-m}$ for every $m$,

hence $P(Y=y)=0$. In particular, the distribution function of Y is continuous, and the endpoints of $I$ contribute nothing.

Now write $\frac{k}{2^m}$ in binary as $0.b_1 b_2 \dots b_m$. Apart from the endpoints, Y lies in $I$ exactly when its first $m$ digits match, i.e. when $X_1=b_1,\dots,X_m=b_m$, because in that case $Y=\frac{k}{2^m}+\sum_{n>m}2^{-n}X_n \in I$. Since the $X_n$ are independent and, for $p=\frac{1}{2}$, each digit takes the value $b_n$ with probability $\frac{1}{2}$,

$\displaystyle P(Y \in I)=P(X_1=b_1,\dots,X_m=b_m)=\left(\frac{1}{2}\right)^m = \text{length}(I)$

Since the dyadic intervals generate the Borel sets of [0,1], the distribution of Y agrees with Lebesgue measure on [0,1]; in particular $P(a \le Y \le b)=b-a$ for every $0 \le a < b \le 1$, which is exactly the statement that Y is uniformly distributed on [0,1].
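A concrete instance of the dyadic-interval computation: take $m=2$ and $k=1$, so that $I=[\frac{1}{4},\frac{1}{2}]$ and $\frac{1}{4}=0.01$ in binary; then

$\displaystyle P\left(\tfrac{1}{4} \le Y \le \tfrac{1}{2}\right)=P(X_1=0,\ X_2=1)=\tfrac{1}{2}\cdot\tfrac{1}{2}=\tfrac{1}{4}=\text{length}(I)$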
 

FAQ: Proving the Uniform Distribution of Y from Independent Random Variables X

What is the concept of "sum of random variables"?

The sum of random variables is a mathematical operation that involves adding together multiple random variables. This can be thought of as a way to combine different sources of randomness, resulting in a new random variable with its own unique characteristics.

How is the sum of random variables calculated?

The sum of random variables is itself a random variable: for each outcome, one adds the values that the individual variables take. For example, for two random variables X and Y the sum is written X + Y. More generally, one may form weighted sums such as $\sum_n a_n X_n$; the variable Y in this thread, with weights $a_n = 2^{-n}$, is exactly such a weighted sum. For independent variables, the distribution of the sum is the convolution of the individual distributions, as shown in the sketch below.
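For instance, the exact distribution of the sum of two independent random variables with finitely many values can be computed by convolving their probability mass functions (a minimal Python sketch; pmf_sum is an illustrative name, not a library function):

    from collections import Counter

    def pmf_sum(pmf_a, pmf_b):
        # Convolution of two pmfs given as {value: probability} dicts.
        out = Counter()
        for a, pa in pmf_a.items():
            for b, pb in pmf_b.items():
                out[a + b] += pa * pb
        return dict(out)

    die = {k: 1 / 6 for k in range(1, 7)}
    two_dice = pmf_sum(die, die)
    print(two_dice[7])  # 6/36 ~ 0.1667, the most likely total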

What is the significance of the sum of random variables in statistics?

The sum of random variables is a fundamental concept in statistics and probability theory. It is used to model and analyze a wide range of real-world phenomena, from the behavior of financial markets to the outcomes of medical treatments. By understanding the sum of random variables, we can better understand and make predictions about complex systems.

Can the sum of random variables be used to predict outcomes?

Yes, the sum of random variables can be used to make predictions about the possible outcomes of a system. By analyzing the properties of the individual random variables and their combined sum, we can estimate the likelihood of different outcomes and make informed decisions.

What are some real-world examples of the sum of random variables?

There are many real-world examples where the concept of sum of random variables is applicable. Some common examples include adding the scores of multiple dice rolls, combining the results of multiple medical tests to diagnose a patient, and predicting the stock market using a combination of different economic indicators. Essentially, any situation where multiple sources of randomness are involved can be represented and analyzed using the sum of random variables.
