What does this mean for the relationship between X and Y?

  • #1
kingwinner
Let X and Y be random variables, and let F be a function.

I have seen in a variety of contexts that they write "E(X) < ∞". What does it mean? My guess is that it means that E(X) is finite, but if this is the case, shouldn't they say -∞ < E(X) < ∞ instead?


Also, I have seen the notation of a letter "d" above the equal sign. What does it mean? e.g.

[tex]
X \overset{d}{=} F(Y)
[/tex]

Does anyone have any idea, or has anyone seen the above notation?


Thanks for any help!
 
  • #2
Probably the r.v. [tex]X[/tex] is known to be nonnegative; then writing [tex]E(X) < \infty[/tex] means what you thought. Otherwise, it is probably an error on the part of the writer. Rarely, the writer may wish to allow [tex]E(X) = -\infty[/tex] but not [tex]+\infty[/tex], I suppose.
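For a standard example of how the condition can fail for a nonnegative r.v.: if [tex]X[/tex] takes the value [tex]2^k[/tex] with probability [tex]2^{-k}[/tex] for [tex]k = 1, 2, 3, \ldots[/tex] (the St. Petersburg payoff), then

[tex]
E(X) = \sum_{k=1}^{\infty} 2^k \cdot 2^{-k} = \sum_{k=1}^{\infty} 1 = \infty,
[/tex]

so [tex]E(X) < \infty[/tex] genuinely rules something out.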
 
  • #3
kingwinner said:
Let X and Y be random variables, and let F be a function.

I have seen in a variety of contexts that they write "E(X) < ∞". What does it mean? My guess is that it means that E(X) is finite, but if this is the case, shouldn't they say -∞ < E(X) < ∞ instead?

Yes, this means the expectation is finite (real). It is usually left unspoken that it is also greater than negative infinity (I don't know the historical reasons for this).
Also, I have seen the notation of a letter "d" above the equal sign. What does it mean? e.g.

[tex]
X \overset{d}{=} F(Y)
[/tex]

Does anyone have any idea, or has anyone seen the above notation?


Thanks for any help!

The 'd', or 'D', means "has the same distribution as", and is used to show that two random quantities have the same probability distribution. Examples:

[tex]
X \mathop{=}^d F(Y)
[/tex]

tells you that the r.v. X has the same distribution as F(Y).

[tex]
\sqrt n \left(\hat{\beta} - \beta_0 \right) \xrightarrow{D} N\left(0, \Sigma^{-1}\right)
[/tex]

means that as [tex] n \to \infty [/tex] the distribution of the LHS goes to the normal distribution given on the right.

As an alert: some writers will use [tex] \mathcal{L} [/tex] rather than d. This

[tex]
X \mathop{=}^\mathcal{L} F(Y)
[/tex]

means the same as my previous example. The [tex] \mathcal{L} [/tex] represents probability Law.
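As an illustration (a small sketch, assuming for concreteness that F(y) = -y and Y is standard normal, so F(Y) and Y share a distribution without being equal):

[code]
# Sketch: with F(y) = -y and Y ~ N(0,1), F(Y) has the same distribution
# as Y (the normal is symmetric about 0), yet F(Y) != Y on almost every outcome.
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)   # draws of Y
fy = -y                            # the corresponding draws of F(Y)

print(y.mean(), fy.mean())         # both near 0
print(y.std(), fy.std())           # both near 1
print(np.mean(y == fy))            # 0.0: they agree only when Y = 0, an event of probability 0
[/code]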
 
  • #4
1) So for positive random variables, E(X) < ∞ means E(X) is finite. That makes sense.
But what about a general random variable X that may take on negative values? What is the standard notation for saying that E(X) is finite?
 
  • #5
statdad said:
The 'd', or 'D', means "has the same distribution as", and is used to show that two random quantities have the same probability distribution. Examples:

[tex]
X \mathop{=}^d F(Y)
[/tex]

tells you that the r.v. X has the same distribution as F(Y).

[tex]
\sqrt n \left(\hat{\beta} - \beta_0 \right) \xrightarrow{D} N\left(0, \Sigma^{-1}\right)
[/tex]

means that as [tex] n \to \infty [/tex] the distribution of the LHS goes to the normal distribution given on the right.
2) Then why can't we simply write X=F(Y) [without the "d" above the equal sign] to mean that the random variables X and F(Y) have the same distribution? I don't understand this.

Thanks for explaining!
 
  • #6
kingwinner said:
2) Then why can't we simply write X=F(Y) [without the "d" above the equal sign] to mean that the random variables X and F(Y) have the same distribution? I don't understand this.

Two things can have the same distribution without being equal.
 
  • #7
CRGreathouse said:
Two things can have the same distribution without being equal.
This doesn't seem obvious to me (the subject is still pretty new to me).

e.g. If X and Y are independent standard normal random variables, does that mean that X(ω) = Y(ω) for ALL ω ∈ Ω, and so X = Y? Why or why not?

How can two random variables have the same distribution without being equal? Can someone please provide a specific example?

Thank you!
 
  • #8
kingwinner said:
1) So for positive random variables, E(X) < ∞ means E(X) is finite. That makes sense.
But what about a general random variable X that may take on negative values? What is the standard notation for saying that E(X) is finite?

Standard notation: [tex]E(|X|) < \infty[/tex]; in an abstract probability space there is no notion of conditional convergence.
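Spelling this out (a standard decomposition, not specific to this thread): writing [tex]X = X^+ - X^-[/tex] with [tex]X^+ = \max(X, 0)[/tex] and [tex]X^- = \max(-X, 0)[/tex],

[tex]
E(X) = E(X^+) - E(X^-), \qquad E(|X|) = E(X^+) + E(X^-),
[/tex]

so [tex]E(|X|) < \infty[/tex] holds exactly when both [tex]E(X^+)[/tex] and [tex]E(X^-)[/tex] are finite, which is precisely what is needed for [tex]E(X)[/tex] to be a well-defined finite number.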
 
  • #9
g_edgar said:
Standard notation: [tex]E(|X|) < \infty[/tex]; in an abstract probability space there is no notion of conditional convergence.
|E(X)| ≤ E(|X|) < ∞
i.e. -∞ < E(X) < ∞
So I think it makes sense. Thanks!
 
  • #10
kingwinner said:
This doesn't seem obvious to me (the subject is still pretty new to me).

e.g. If X and Y are independent standard normal random variables, does that mean that X(ω) = Y(ω) for ALL ω ∈ Ω, and so X = Y? Why or why not?

How can two random variables have the same distribution without being equal? Can someone please provide a specific example?

Thank you!
Can someone please help me with this? I would really appreciate it!
 
  • #11
kingwinner said:
This doesn't seem obvious to me (the subject is still pretty new to me).

e.g. If X and Y are independent standard normal random variables, does that mean that X(ω) = Y(ω) for ALL ω ∈ Ω, and so X = Y? Why or why not?

How can two random variables have the same distribution without being equal? Can someone please provide a specific example?

Thank you!
If that was the case, they wouldn't be independent. If you want an example, take two quarters, give one to a friend, and start flipping them. Define a random variable for you and your friend (say, Y and F). Their distribution will be the same, but they're unlikely to be equal.
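A minimal sketch of this, coding one flip each as 0/1:

[code]
# One flip each: Y and F are identically distributed Bernoulli(1/2)
# variables, but they agree on only half of the equally likely outcomes.
import itertools

outcomes = list(itertools.product([0, 1], repeat=2))        # all (Y, F) pairs
p_equal = sum(y == f for y, f in outcomes) / len(outcomes)
print(p_equal)                                              # 0.5
[/code]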
 
  • #12
Tibarn said:
If that was the case, they wouldn't be independent. If you want an example, take two quarters, give one to a friend, and start flipping them. Define a random variable for you and your friend (say, Y and F). Their distribution will be the same, but they're unlikely to be equal.
um...I don't quite understand how your example would demonstrate that X and Y would have the same distribution, but not X=Y.
For your example, the sample space would be {H,T}.

Let ω1=H (for head), ω2=T (for tail)
X(ω1)=0 [the sample outcome of head would be mapped to the real number 0]
X(ω2)=1 [the sample outcome of tail would be mapped to the real number 1]
Y(ω1)=0
Y(ω2)=1

Then X(ω) = Y(ω) for ALL ω ∈ Ω, right?

Can you please explain a little bit more about why it isn't the case that X=Y?

Thank you!
 
  • #13
Probability distributions describe the behavior of a random variable, but they do not guarantee that the values will always be the same. To expand on Tibarn's example:

You and a friend each have a coin; each of you will flip your coin and keep track of the total number of heads. Suppose you each agree to perform 100 flips. Your random variable (the number of heads) has a Binomial(100, 0.5) distribution, and the same can be said for your friend's r.v. This means, as a particular instance, that for both of you the probability of seeing at least 57 heads is the same, but it does not mean that you are guaranteed to obtain exactly the same number of heads.

Similar comments, a little more technical due to continuity, hold for cases where both random variables are continuous.
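Here is a quick simulation of that setup (a sketch, assuming fair coins and 100 flips each, repeated many times):

[code]
# Two friends' head counts share the Binomial(100, 0.5) law: their tail
# probabilities match, but the realized counts are usually different.
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(100, 0.5, size=100_000)   # your head counts
y = rng.binomial(100, 0.5, size=100_000)   # your friend's head counts

print(np.mean(x >= 57), np.mean(y >= 57))  # both near P(X >= 57), about 0.097
print(np.mean(x == y))                     # only about 0.056: usually unequal
[/code]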
 
  • #14
statdad said:
Probability distributions describe the behavior of a random variable, but they do not guarantee that the values will always be the same. To expand on Tibarn's example:

You and a friend each have a coin; each of you will flip your coin and keep track of the total number of heads. Suppose you each agree to perform 100 flips. Your random variable (the number of heads) has a Binomial(100, 0.5) distribution, and the same can be said for your friend's r.v. This means, as a particular instance, that for both of you the probability of seeing at least 57 heads is the same, but it does not mean that you are guaranteed to obtain exactly the same number of heads.

Similar comments, a little more technical due to continuity, hold for cases where both random variables are continuous.
I'm beginning to see intuitively, from your example, why X and Y can have the same distribution with X≠Y, but how can this be justified in terms of ω and Ω?
X=Y means X(ω) = Y(ω) for ALL ω ∈ Ω. Now I am a little bit confused about what the ω and Ω actually are...
For your example, what is the sample space Ω? And what is an example of a sample point ω?

Thanks for your help!
 
  • #15
From first principles, an outcome of the experiment is a sequence of H (for Heads) and T (for Tails) of length 100, so [tex] \Omega [/tex] would be the set of all [tex] 2^{100} [/tex] such sequences, from all Ts through all Hs.
One particular [tex] \omega [/tex] would be this one:

[tex]
\omega = \underbrace{HH \cdots H}_{\text{length 50}} \overbrace{TT \cdots T}^{\text{length 50}}
[/tex]

for the r.vs I defined, and for this [tex] \omega [/tex],

[tex]
X(\omega) = Y(\omega) = 50
[/tex]


Notice the incredible amount of savings we have in the move from the original sample space [tex] \Omega [/tex], which has [tex] 2^{100} [/tex] elements, to the set of values of [tex] X [/tex] (and of course [tex] Y [/tex]) - there are only 101 different values to "keep track of".
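In code form, the formalism looks like this (a sketch, representing [tex] \omega [/tex] as a length-100 string of H's and T's):

[code]
# A sample point omega is one particular length-100 flip sequence;
# the random variable X maps it to a single number (the head count).
def X(omega: str) -> int:
    """Number of heads in the outcome sequence omega."""
    return omega.count("H")

omega = "H" * 50 + "T" * 50   # the particular omega displayed above
print(X(omega))               # 50
[/code]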

I hope this helps.
 
  • #16
Well now kingwinner has every right to be confused, since if we define [tex]X, Y, \Omega[/tex] the way statdad is suggesting, then indeed [tex]X = Y[/tex]. An example where [tex]X[/tex] and [tex]Y[/tex] are distributed the same, but are not necessarily the same would be, for example, if [tex]X[/tex] counts the number of tails in a sequence of 100 flips, and [tex]Y[/tex] counts the number of heads. The distributions of [tex]X[/tex] and [tex]Y[/tex] are the same, but the random variables themselves are different.
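Concretely, with these definitions [tex]X(\omega) + Y(\omega) = 100[/tex] for every [tex]\omega[/tex], so

[tex]
X = 100 - Y, \qquad X \overset{d}{=} Y \sim \text{Binomial}(100, 1/2),
[/tex]

yet [tex]X(\omega) = Y(\omega)[/tex] only on the outcomes with exactly 50 heads (and 50 tails), so [tex]X \neq Y[/tex] as functions.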
 
  • #17
Moo, the random variables are defined the same way, but I took kingwinner's question to be how variables could have the same distribution but not always be equal in observed value. My example works for that.
 
  • #18
statdad said:
From first principles, an outcome of the experiment is a sequence of H (for Heads) and T (for Tails) of length 100, so [tex] \Omega [/tex] would be the set of all [tex] 2^{100} [/tex] such sequences, from all Ts through all Hs.
One particular [tex] \omega [/tex] would be this one:

[tex]
\omega = \underbrace{HH \cdots H}_{\text{length 50}} \overbrace{TT \cdots T}^{\text{length 50}}
[/tex]

for the r.vs I defined, and for this [tex] \omega [/tex],

[tex]
X(\omega) = Y(\omega) = 50
[/tex]


Notice the incredible amount of savings we have in the move from the original sample space [tex] \Omega [/tex], which has [tex] 2^{100} [/tex] elements, to the set of values of [tex] X [/tex] (and of course [tex] Y [/tex]) - there are only 101 different values to "keep track of".

I hope this helps.
Thanks, that helps, and now I understand the meaning of ω and Ω better.

But by definition, X=Y means X(ω) = Y(ω) for ALL ω ∈ Ω.
And for your coin example, X and Y have the same distribution and also X(ω) = Y(ω) for ALL ω ∈ Ω, i.e. X=Y.


"Two things[random variables] can have the same distribution without being equal."
But still, I can't think of an example of this happening...how can two random variables have the same distribution without being equal??

EDIT: I actually missed Moo Of Doom's post before...now I actually see an example of this happening. Thank you!
 
  • #19
Try to think of random variables as functions from the sample space to another space (usually the n-dimensional reals). So [itex] X:\Omega \rightarrow \mathbb{R}^n [/itex], and the same for Y. When you say [itex]X=Y[/itex], you mean equality as functions. A probability measure on a probability space is also a map [itex] \mathbb{P}:F \rightarrow [0,1] [/itex], where F is a sigma algebra on [itex]\Omega[/itex] (basically a collection of subsets). A condition of being a random variable is that the inverse image of each open set lands in F, though usually people impose the stronger condition that the inverse image of each Borel set should land in F. The law (or distribution) of a random variable is [itex]\mathbb{P} \circ X^{-1}[/itex]. So the law maps sets (usually Borel sets) to [0,1]. Equality in distribution means that they have the same law. Can you, from that, deduce that they are indeed the same function?
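In symbols, restating the above:

[tex]
P_X(B) := \mathbb{P}\left(X^{-1}(B)\right), \qquad
X \overset{d}{=} Y \iff P_X(B) = P_Y(B) \ \text{for every Borel set } B.
[/tex]

Pointwise equality [itex]X(\omega) = Y(\omega)[/itex] for all [itex]\omega[/itex] implies equality of laws, but not conversely: the law only records how much probability mass lands in each set [itex]B[/itex], not which [itex]\omega[/itex]'s are sent there.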
 
  • #20
Toss a fair coin 10 times. The random variable X = "number of heads" and the random variable Y = "number of tails" are certainly not equal, but have the same distribution.
That is ... P(X=k) = P(Y=k) for all k.
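This one is small enough to verify by brute force (a sketch enumerating all [tex]2^{10}[/tex] outcomes):

[code]
# Enumerate every length-10 flip sequence: X = #heads and Y = #tails
# have identical distributions, yet they are equal only when both are 5.
from itertools import product
from collections import Counter

dist_x, dist_y, n_equal = Counter(), Counter(), 0
for omega in product("HT", repeat=10):
    x, y = omega.count("H"), omega.count("T")
    dist_x[x] += 1
    dist_y[y] += 1
    n_equal += (x == y)

print(dist_x == dist_y)   # True: P(X = k) = P(Y = k) for all k
print(n_equal / 2**10)    # 252/1024, about 0.246, so P(X = Y) is far from 1
[/code]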
 

FAQ: What does this mean for the relationship between X and Y?

What is the meaning of "E(X) < ∞" in scientific terms?

"E(X) < ∞" refers to the expected value of a random variable X being less than infinity. It is a mathematical concept used in probability and statistics to describe the average or long-term behavior of a random phenomenon. It indicates that the possible values of X are finite and that the sum of all these values is also finite.

How is "E(X) < ∞" related to the concept of probability?

"E(X) < ∞" is related to probability in that it represents the average value of a random variable X over a large number of trials. In other words, it is the expected outcome of an experiment or event. It provides a way to quantify the likelihood of a particular outcome occurring.

Can "E(X) < ∞" ever be equal to infinity?

No, "E(X) < ∞" can never be equal to infinity. It is a mathematical statement that indicates the expected value of X is finite, meaning it is not infinite. This is because infinity is not a number and cannot be included in calculations of expected values.

How is "E(X) < ∞" calculated?

Calculating "E(X) < ∞" involves multiplying each possible value of X by its corresponding probability and then summing all of these products. This can also be written as the integral of X multiplied by its probability density function over the range of possible values. The resulting value is the expected value of X.
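For example, for one roll of a fair six-sided die:

[tex]
E(X) = \sum_{x=1}^{6} x \cdot \frac{1}{6} = \frac{21}{6} = 3.5 < \infty.
[/tex]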

What does "E(X) < ∞" tell us about the behavior of a random variable X?

"E(X) < ∞" tells us that the values of X are not infinite and that the sum of all these values is also finite. This means that the random variable X has a defined and bounded behavior, and its average value is not infinitely large. It allows us to make predictions about the expected outcome of X over a large number of trials or experiments.
