Help with independent random variables and correlation

In summary: ##X## and ##Y## are dependent. b.) and d.) Since ##Z## and ##X## are independent, by the definition of independence we can write the joint density of ##Z## and ##X## as ##f_{X,Z}(x,z)=f_{X}(x)f_{Z}(z)##. Then using the transformation formula to get the joint density of ##Y## and ##Z## we have \begin{align*}f_{Y,Z}(y,z) &= f_{X}(y/z)\cdot f_{Z}(z)\\&= f_{Y}(y)\cdot f_{Z}(z)\end{align*} since ##f_{Y}(y)=f_{X}(y)=f_{X}(y/z)## for ##z=\pm 1##, the standard normal density being symmetric.
  • #1
ychu066
Let X be a normal random variable with mean 0 and variance 1. Let Y = ZX,
where Z and X are independent and Pr(Z = +1) = Pr(Z = -1) = 1/2.

a) Show that Y and Z are independent.
b) Show that Y is also normal with mean 0 and variance 1.
c) Show that X and Y are uncorrelated but dependent.
d) Can you write down the joint density of X and Y? Explain your answer.

Note that this example exhibits two random variables which are uncorrelated, normally distributed, but not independent (and necessarily not jointly normally distributed).

Please help me with these questions...
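In case a numerical picture helps, here is a minimal simulation of the setup in NumPy (the seed and sample size are arbitrary choices); it previews parts b) and c):

```python
import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed
n = 1_000_000                              # arbitrary sample size
x = rng.standard_normal(n)                 # X ~ N(0, 1)
z = rng.choice([-1.0, 1.0], size=n)        # Z = +/-1, each with probability 1/2
y = z * x                                  # Y = Z X

print(y.mean(), y.var())                   # ~0 and ~1: Y looks standard normal (part b)
print(np.corrcoef(x, y)[0, 1])             # ~0: X and Y are uncorrelated (part c)
print(np.all(np.abs(y) == np.abs(x)))      # True: |Y| = |X|, so Y is tied to X (part c)
```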
 
  • #2
a.) Since ##X## is continuous, we compare events of the form ##\{Y\leq y\}## rather than point values:
\begin{align*}
Pr(Z=z \wedge Y\leq y) &= Pr(Z=z \wedge zX\leq y) = Pr(Z=z)\cdot Pr(zX\leq y)\\
&= Pr(Z=z)\cdot Pr(X\leq y) = Pr(Z=z)\cdot Pr(Y\leq y)
\end{align*}
We used that ##X## and ##Z## are independent, and that ##zX## has the same distribution as ##X## for any fixed ##z\in\{\,-1,1\,\}##, by the symmetry of the standard normal; summing over ##z=\pm 1## also gives ##Pr(Y\leq y)=Pr(X\leq y)##.
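As a sanity check of a.), one can compare the empirical distribution of ##Y## on the two slices ##\{Z=1\}## and ##\{Z=-1\}##; independence says they should agree. A minimal NumPy sketch (seed, sample size, and quantile grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)             # arbitrary seed
n = 1_000_000                              # arbitrary sample size
x = rng.standard_normal(n)                 # X ~ N(0, 1)
z = rng.choice([-1.0, 1.0], size=n)        # Z = +/-1, each with probability 1/2
y = z * x                                  # Y = Z X

q = [0.1, 0.25, 0.5, 0.75, 0.9]            # arbitrary probe quantiles
print(np.quantile(y[z == 1.0], q))         # empirical quantiles of Y given Z = +1
print(np.quantile(y[z == -1.0], q))        # ~the same values given Z = -1
```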

c.) Since ##Z## and ##X## are independent and ##E(Z)=\dfrac{1}{2}(+1)+\dfrac{1}{2}(-1)=0##,
\begin{align*}
E(Y) = E(ZX) = E(Z)\cdot E(X) = 0 = E(X).
\end{align*}
And
\begin{align*}
E(XY) = E(ZX^2) = E(Z)\cdot E(X^2) = -\dfrac{1}{2}E(X^2) + \dfrac{1}{2}E(X^2) = 0,
\end{align*}
so ##\operatorname{Cov}(X,Y)=E(XY)-E(X)E(Y)=0## and ##X## and ##Y## are uncorrelated.
To see that ##X## and ##Y## are nevertheless dependent, note that ##|Y|=|Z|\,|X|=|X|##, so
\begin{align*}Pr(|X|\leq 1 \wedge |Y|\leq 1)=Pr(|X|\leq 1),\end{align*}
while ##Pr(|X|\leq 1)\cdot Pr(|Y|\leq 1)=Pr(|X|\leq 1)^2 \neq Pr(|X|\leq 1)## because ##0<Pr(|X|\leq 1)<1##. The joint probability does not factor, so ##X## and ##Y## are dependent.
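Both halves of c.) are easy to check numerically. In this minimal NumPy sketch (seed and sample size arbitrary), the sample correlation comes out near 0, yet the joint probability of ##\{|X|\leq 1,\ |Y|\leq 1\}## clearly differs from the product of the marginal probabilities:

```python
import numpy as np

rng = np.random.default_rng(2)             # arbitrary seed
n = 1_000_000                              # arbitrary sample size
x = rng.standard_normal(n)                 # X ~ N(0, 1)
y = rng.choice([-1.0, 1.0], size=n) * x    # Y = Z X

print((x * y).mean())                      # ~0: estimates E(XY) = E(Z)E(X^2) = 0
print(np.corrcoef(x, y)[0, 1])             # ~0: X and Y uncorrelated

a = np.abs(x) <= 1.0
b = np.abs(y) <= 1.0
print((a & b).mean())                      # ~0.683 = Pr(|X|<=1), since |Y| = |X|
print(a.mean() * b.mean())                 # ~0.466: the product of the marginals differs
```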
 

FAQ: Help with independent random variables and correlation

What is an independent random variable?

Strictly speaking, independence is a property of two or more random variables rather than of a single one. Random variables X and Y are independent if the outcome of one carries no information about the outcome of the other; formally, their joint distribution factors into the product of the marginals, ##Pr(X\leq s \wedge Y\leq t)=Pr(X\leq s)\cdot Pr(Y\leq t)## for all s and t.

What is the importance of independence in random variables?

Independence matters because it makes probabilistic reasoning tractable: probabilities of joint events factor into products, expectations of products factor as ##E(XY)=E(X)E(Y)##, and variances of sums add. Many statistical procedures also assume independent observations, so whether the assumption holds directly affects how reliable the resulting analysis is.

How can independence be determined in a set of random variables?

Not from the correlation coefficient alone. A correlation of 0 does not establish independence; the thread above is a counterexample, with X and Y = ZX uncorrelated but dependent. Independence must be checked against the definition: the joint distribution has to factor into the product of the marginals, ##Pr(X\leq s \wedge Y\leq t)=Pr(X\leq s)\cdot Pr(Y\leq t)## for all s and t. A nonzero correlation does, however, rule independence out.
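For instance, one can probe the factorization of the joint distribution empirically. A minimal NumPy sketch using this thread's pair X, Y = ZX (seed, sample size, and probe points are arbitrary choices); disagreement at even one point (s, t) proves dependence:

```python
import numpy as np

rng = np.random.default_rng(3)             # arbitrary seed
n = 1_000_000                              # arbitrary sample size
x = rng.standard_normal(n)                 # X ~ N(0, 1)
y = rng.choice([-1.0, 1.0], size=n) * x    # Y = Z X: uncorrelated with X, yet dependent

# Independence would require Pr(X<=s, Y<=t) = Pr(X<=s) Pr(Y<=t) for all s, t.
for s, t in [(1.0, -1.0), (1.0, 1.0)]:     # arbitrary probe points
    joint = ((x <= s) & (y <= t)).mean()
    product = (x <= s).mean() * (y <= t).mean()
    print(s, t, joint, product)            # the two columns disagree -> not independent
```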

What is correlation in relation to independent random variables?

Correlation measures the strength and direction of the linear relationship between two variables: ##\rho_{X,Y}=\operatorname{Cov}(X,Y)/(\sigma_X \sigma_Y)##. A correlation of 0 indicates no linear relationship, while a correlation of 1 or -1 indicates a perfect positive or negative linear relationship, respectively. Independent random variables always have correlation 0 (when the correlation is defined), but the converse does not hold.

Can two independent random variables have a correlation of 0?

Yes. In fact, any two independent random variables with finite, nonzero variances must have a correlation of 0, because independence forces ##E(XY)=E(X)E(Y)##. The converse is false, however: a correlation of 0 does not imply independence. The variables X and Y = ZX discussed in this thread are exactly such a pair, uncorrelated yet dependent.
