How to show these random variables are independent?

In summary, to show that random variables are independent, one must demonstrate that their joint probability distribution equals the product of their marginal distributions for every combination of values. This can be verified by direct calculation, or by arguing that knowing the value of one variable provides no information about the other. Covariance and correlation can only rule independence out: a nonzero covariance implies dependence, but zero correlation does not by itself confirm independence.
  • #1
psie
TL;DR Summary
I am studying order statistics in An Intermediate Course in Probability by Gut. First the author treats only continuous distributions. In a section on the joint distribution of the extreme order variables ##X_{(n)}=\max\{X_1,\ldots,X_n\}## and ##X_{(1)}=\min\{X_1,\ldots,X_n\}##, the author derives the density of the range, that is, ##R_n=X_{(n)}-X_{(1)}##. Then there is an exercise whose placement in that section I simply do not understand.
The exercise that appears in the text is:

Exercise 2.5 The geometric distribution is a discrete analog of the exponential distribution in the sense of lack of memory. More precisely, show that if ##X_1## and ##X_2## are independent ##\text{Ge}(p)##-distributed random variables, then ##X_{(1)}## and ##X_{(2)}-X_{(1)}## are independent.

What I find confusing about this exercise is that the author has, up to this point, not derived any results for order statistics of discrete distributions. I know the formulas for the densities of ##X_{(1)}## and the range when the underlying distribution is continuous, but these do not apply to discrete distributions. I considered going back to an earlier chapter where the author derives distributions of transformations of random variables: I could assume ##X_2## to be greater than ##X_1## and then compute the pmf of their difference. But that does not feel like a sensible assumption, since ##\max\{X_1,\ldots,X_n\}## is understood pointwise.

How would you go about solving this exercise?
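Before attempting a proof, the claim can be sanity-checked numerically. The script below is a sketch, assuming the convention that ##\text{Ge}(p)## is supported on ##\{0,1,2,\ldots\}## with pmf ##P(X=k)=pq^k##, ##q=1-p## (if the book's support starts at 1 instead, the check is unaffected up to a shift). It tabulates the joint pmf of ##X_{(1)}## and ##X_{(2)}-X_{(1)}## on a truncated support and verifies that it factorizes into the product of the marginals:

```python
from fractions import Fraction

# Numerical sanity check of Exercise 2.5 (not a proof).
# Assumption: Ge(p) has pmf P(X = k) = p * q**k for k = 0, 1, 2, ...
# The support is truncated at N; the neglected mass is of order q**N.

p = Fraction(1, 3)
q = 1 - p
N = 100  # truncation point

pmf = [p * q**k for k in range(N)]

# Joint pmf of (U, D) = (min, max - min) for two i.i.d. Ge(p) variables,
# built by enumerating all pairs (x1, x2) on the truncated support.
joint = {}
for x1 in range(N):
    for x2 in range(N):
        u, d = min(x1, x2), abs(x1 - x2)
        joint[(u, d)] = joint.get((u, d), Fraction(0)) + pmf[x1] * pmf[x2]

# Marginals of U = X_(1) and D = X_(2) - X_(1).
pu, pd = {}, {}
for (u, d), pr in joint.items():
    pu[u] = pu.get(u, Fraction(0)) + pr
    pd[d] = pd.get(d, Fraction(0)) + pr

# Factorization check on a window well inside the truncation:
# joint pmf should equal the product of the marginals up to truncation error.
tol = Fraction(1, 10**9)
for u in range(10):
    for d in range(10):
        assert abs(joint[(u, d)] - pu[u] * pd[d]) < tol
print("joint pmf factorizes on the checked window")
```

Exact rational arithmetic via `Fraction` avoids floating-point noise, so the only error in the check is the truncation error of order ##q^N##, far below the tolerance used.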
 
  • #2
This has been solved. It is a bit of work to write it all down, but essentially you want to compute the probabilities ##P\left(X_{(1)}=u\right)##, ##P\left(X_{(1)}=u,\ X_{(2)}=u+d \right)## and ##P\left(X_{(2)}-X_{(1)}=d \right)##. The first two are fairly straightforward, splitting up the probability using indicator functions. The third probability is a bit trickier; conditional expectation comes in handy (in addition to indicator functions), in particular the identity $$P(A)=E[\mathbf1_A]=E[E[\mathbf1_A\mid X]]=E[P(A\mid X)].$$
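For anyone attempting the exercise, here is where those computations lead. This is a sketch assuming the convention ##P(X=k)=pq^k## for ##k=0,1,2,\ldots## with ##q=1-p## (if the support starts at 1 instead, the formulas shift accordingly). The joint pmf is
$$P\left(X_{(1)}=u,\ X_{(2)}-X_{(1)}=d\right)=\begin{cases}p^2q^{2u}, & d=0,\\ 2p^2q^{2u+d}, & d\geq 1,\end{cases}$$
and summing out one variable at a time gives the marginals
$$P\left(X_{(1)}=u\right)=(1-q^2)\,q^{2u},\qquad P\left(X_{(2)}-X_{(1)}=d\right)=\begin{cases}\dfrac{p}{1+q}, & d=0,\\[1ex] \dfrac{2pq^d}{1+q}, & d\geq 1.\end{cases}$$
Multiplying the marginals reproduces the joint pmf in both cases, which establishes independence; in particular, ##X_{(1)}## is ##\text{Ge}(1-q^2)##-distributed.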
 