Find unconditional distribution using transforms

In summary, finding an unconditional distribution using transforms means applying a transform (such as the characteristic function, moment generating function, or Laplace transform) to the conditional distribution and then averaging over the distribution of the conditioning variable, by integration or summation. This approach is widely used in probability theory and statistics to simplify mixture distributions and identify the law of a random variable.
  • #1
psie
Homework Statement
If ##X\mid \Sigma^2=\lambda\in N(0,1/\lambda)## with ##\Sigma ^2\in \Gamma \left(\frac{n}{2}{,}\frac{2}{n}\right)##, show that ##X\in t(n)## using transforms.
Relevant Equations
##t(n)## is the t-distribution with parameter ##n##.
I am asked to solve the challenging problem above (though I don't quite see the purpose of this exercise, since transforms only seem to make it harder).

Here's my attempt; denote by ##\varphi_X## the characteristic function (cf) of ##X##. Then $$\varphi_X(t)=Ee^{itX}=E(E(e^{itX}\mid\Sigma^2))=Eh(\Sigma^2),$$ where ##h(\lambda)=\varphi_{X\mid \Sigma^2=\lambda}(t)=e^{-t^2/(2\lambda)}##, recalling that the cf of ##N(0,\sigma^2)## is ##e^{-t^2\sigma^2/2}##. Now, $$\varphi_X(t)=Ee^{-t^2/(2\Sigma^2)}=\ldots,$$ and I don't know how to proceed further. I'd prefer not to take this route, since the cf of ##t(n)## is quite intricate, but I guess the way to go is to compare cfs, or is there another approach?
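As a sanity check of the statement itself (independent of the transform route), one can simulate the mixture and compare it with ##t(n)##. A minimal sketch in Python, assuming NumPy/SciPy and an arbitrary choice ##n=5##:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5            # degrees of freedom (arbitrary choice for the check)
N = 200_000      # number of simulated draws

# Sigma^2 ~ Gamma(shape = n/2, scale = 2/n), then X | Sigma^2 = lam ~ N(0, 1/lam)
lam = rng.gamma(shape=n / 2, scale=2 / n, size=N)
x = rng.normal(loc=0.0, scale=1.0 / np.sqrt(lam))

# Kolmogorov-Smirnov test against the t(n) distribution:
# a non-tiny p-value means the samples are consistent with t(n)
ks = stats.kstest(x, stats.t(df=n).cdf)
print(ks.pvalue)
```

This only confirms the result numerically, of course; it says nothing about how to push the cf computation through.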
 
  • #2
It's possible to solve this by comparing cfs, but it is a bit of work. What will come in handy are equations 10.32.10 and 10.32.11 of DLMF §10.32. We have $$\varphi_X(t)=E\left[e^{-t^2/(2\Sigma^2)}\right]=\frac{\left(n/2\right)^{n/2}}{\Gamma(n/2)}\int_0^\infty \exp\left(-\frac{n\lambda}{2}-\frac{t^2}{2\lambda}\right)\lambda^{n/2-1}\,d\lambda.$$ Now make a substitution (##s=n\lambda/2## does the trick, giving ##z=\sqrt{n}\,|t|##) so as to identify the integral representation in equation 10.32.10. To recover the cf of the Student's ##t##-distribution with parameter ##n##, you'll need equation 10.32.11. Equation 10.32.9 there shows that ##K_\nu(z)=K_{-\nu}(z)## (the integrand is even in ##\nu##), which is also useful.
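The algebra can be double-checked numerically: evaluating the integral directly should reproduce the closed form ##\varphi_X(t)=\frac{(\sqrt{n}\,|t|)^{n/2}K_{n/2}(\sqrt{n}\,|t|)}{2^{n/2-1}\Gamma(n/2)}## that the substitution yields. A minimal sketch, assuming SciPy and arbitrary test values ##n=3##, ##t=1.5##:

```python
import numpy as np
from scipy import integrate, special

n, t = 3.0, 1.5   # arbitrary test values

# Left-hand side: the mixture integral for the cf of X
pref = (n / 2) ** (n / 2) / special.gamma(n / 2)
integrand = lambda lam: np.exp(-n * lam / 2 - t**2 / (2 * lam)) * lam ** (n / 2 - 1)
lhs = pref * integrate.quad(integrand, 0, np.inf)[0]

# Right-hand side: the claimed closed form via the modified Bessel function K
z = np.sqrt(n) * abs(t)
rhs = z ** (n / 2) * special.kv(n / 2, z) / (2 ** (n / 2 - 1) * special.gamma(n / 2))

print(lhs, rhs)   # the two values agree
```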
 

FAQ: Find unconditional distribution using transforms

What is an unconditional distribution?

An unconditional distribution (also called a marginal distribution) refers to the probability distribution of a random variable without conditioning on any other variable. It gives the probabilities of all possible outcomes of the variable in question, regardless of the values of other variables in the system.

What are transforms in probability theory?

Transforms in probability theory are mathematical techniques used to manipulate probability distributions. Common transforms include the moment generating function (MGF), characteristic function, and Laplace transform. These tools help in deriving properties of distributions, such as moments, and can also be used to find unconditional distributions from joint distributions.

How do I find an unconditional distribution using the moment generating function (MGF)?

To find an unconditional distribution using the MGF, apply the law of total expectation: the unconditional MGF of X is M_X(t) = E[e^(tX)] = E[M_{X|Y}(t)], that is, the conditional MGF averaged over the distribution of the conditioning variable Y. If the resulting function matches the MGF of a known distribution, the uniqueness of MGFs identifies the unconditional distribution of X.
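For instance, if K | Λ = λ is Poisson(λ) and Λ is Gamma-distributed, averaging the conditional MGF exp(λ(e^t − 1)) over Λ gives (1 − θ(e^t − 1))^(−r), the MGF of a negative binomial distribution. A simulation sketch of this mixture, with hypothetical parameters and assuming NumPy/SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
r, p = 4, 0.3          # hypothetical parameters for the illustration
N = 200_000

# Lambda ~ Gamma(shape=r, scale=(1-p)/p), then K | Lambda ~ Poisson(Lambda)
lam = rng.gamma(shape=r, scale=(1 - p) / p, size=N)
k = rng.poisson(lam)

# The mixture MGF matches that of the negative binomial NB(r, p),
# so the simulated counts should track scipy's nbinom pmf.
emp = np.bincount(k, minlength=30)[:30] / N
theo = stats.nbinom(r, p).pmf(np.arange(30))
print(np.abs(emp - theo).max())   # small
```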

What is the role of the characteristic function in finding unconditional distributions?

The characteristic function, which is essentially the Fourier transform of the probability distribution, determines the distribution uniquely. For a joint distribution, the characteristic function of the marginal of one variable is obtained by setting the arguments of the other variables to zero (for example, φ_X(t) = φ_{X,Y}(t, 0)); in a conditional setup, one instead averages the conditional characteristic function over the conditioning variable. Either way, once the characteristic function is identified, so is the unconditional distribution.

Are there any specific examples of using transforms to find unconditional distributions?

Yes, a common example involves two random variables, say X and Y, where you want the unconditional distribution of X. Compute the conditional MGF or characteristic function of X given Y and average it over the distribution of Y (integrating or summing out Y); this yields the unconditional transform of X, which can then be matched to a known distribution or inverted.
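A simple instance of this recipe is a normal mean mixture: if X | M = m is N(m, 1) and M is N(0, τ²), then φ_X(t) = E[e^(itM − t²/2)] = e^(−t²/2) φ_M(t) = exp(−t²(1 + τ²)/2), the cf of N(0, 1 + τ²). A minimal numerical check, assuming NumPy and a hypothetical τ = 2:

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 2.0
N = 100_000

# M ~ N(0, tau^2), then X | M = m ~ N(m, 1)
m = rng.normal(0.0, tau, size=N)
x = rng.normal(m, 1.0)

# Empirical cf of X at a few points vs. the cf of N(0, 1 + tau^2)
ts = np.array([0.2, 0.5, 1.0])
emp = np.array([np.mean(np.exp(1j * t * x)) for t in ts])
theo = np.exp(-ts**2 * (1 + tau**2) / 2)
print(np.abs(emp - theo).max())   # small
```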
