Sum of independent normal variables

In summary: the thread discusses the proof that a finite sum of independent normal random variables is normal. If the variables are jointly normal, then the distribution of each component together with the covariance determines the joint distribution, and the sum is normal even when the components are correlated. If you only have the distribution of each component, without the covariance or joint normality, you cannot determine the joint distribution or prove that the sum of the variables is normal.
  • #36
mathman said:
Covariance: since both means are ##0##, ##\operatorname{Cov}(\xi,\eta)=E(\xi\eta)=E(\sigma\xi^2)=E(\sigma)E(\xi^2)=0##, where the factorization uses the independence of ##\sigma## and ##\xi## and the last step uses ##E(\sigma)=0##.
So this is a good example to show that even uncorrelated is not enough to guarantee that the sum is normal. It requires independence.
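A quick simulation sketch of that counterexample (variable names follow mathman's post; NumPy is used purely for illustration): take ##\xi \sim N(0,1)##, let ##\sigma = \pm 1## with equal probability, independent of ##\xi##, and set ##\eta = \sigma\xi##. Then ##\xi## and ##\eta## are each standard normal and uncorrelated, yet ##\xi + \eta## puts probability ##1/2## on the single point ##0##, so it cannot be normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

xi = rng.standard_normal(n)                # xi ~ N(0, 1)
sigma = rng.choice([-1.0, 1.0], size=n)    # sigma = +/- 1, independent of xi
eta = sigma * xi                           # eta is also N(0, 1) by symmetry

# Uncorrelated: the sample covariance of xi and eta is close to zero.
print("cov(xi, eta) ~", np.cov(xi, eta)[0, 1])

# But the sum is not normal: xi + eta equals 0 whenever sigma = -1,
# so it has an atom of probability 1/2 at zero.
s = xi + eta
print("P(xi + eta = 0) ~", np.mean(s == 0.0))
```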
 
  • #37
FactChecker said:
So this is a good example to show that even uncorrelated is not enough to guarantee that the sum is normal. It requires independence.
Independence is not necessary. If the variables are jointly normal, even if correlated, the sum will be normal.
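A minimal sketch of that fact (the parameters below are arbitrary, chosen only for illustration): for a bivariate normal pair ##(X, Y)##, every linear combination is normal, and in particular ##X + Y \sim N(\mu_X + \mu_Y,\ \sigma_X^2 + \sigma_Y^2 + 2\operatorname{Cov}(X, Y))##, with no independence needed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Arbitrary illustrative parameters for a correlated bivariate normal pair.
mu = np.array([1.0, -2.0])
cov = np.array([[4.0, 2.4],
                [2.4, 9.0]])               # Cov(X, Y) = 2.4, so X and Y are correlated

xy = rng.multivariate_normal(mu, cov, size=n)
s = xy[:, 0] + xy[:, 1]                    # X + Y: a linear combination of a jointly
                                           # normal vector, hence itself normal

print("sample mean:", s.mean(), " theory:", mu.sum())
print("sample var :", s.var(),  " theory:", cov[0, 0] + cov[1, 1] + 2 * cov[0, 1])
```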
 
  • #38
mathman said:
Independence is not necessary. If the variables are jointly normal, even if correlated, the sum will be normal.
I stand corrected. I should have just said that uncorrelated is not enough.
 
  • #39
Stephen Tashi said:
Are there theorems that deal with the general question? Given a set of random variables ##\{X_1, X_2, \ldots, X_M\}##, when does there exist a subspace ##S## of the vector space of random variables such that ##S## contains each ##X_i## and ##S## has a basis of mutually independent random variables?
"Linearly independent" presumably implies we are considering a set of random variables to be a vector space under the operations of multiplication by scalars and addition of the random variables.
Suppose we take "transformed" to mean transformed by a linear transformation in a vector space. If the vector space containing the random variables ##\{X_1, X_2, \ldots, X_M\}## has a finite basis ##B_1, B_2, \ldots, B_n## consisting of mutually independent random variables, then (trivially) for each ##X_k## there exists a possibly non-invertible linear transformation ##T## that transforms some linear combination of the ##B_i## into ##X_k##.

If the smallest vector space containing the ##X_i## is infinite-dimensional (e.g. the vector space of all measurable functions on the real number line), I don't know what happens.

I don't recall any texts that focus on vector spaces of random variables. Since the product of random variables is also a random variable, the topic for textbooks seems to be the algebra of random variables. But that approach downplays the concept of probability distributions.
It would be interesting to see whether there is some way to "decouple" a basis for such a space, so that any two ##X_i, X_j## are independent but the span is preserved.
 
  • #40
WWGD said:
It would be interesting to see whether there is some way to "decouple" a basis for such a space, so that any two ##X_i, X_j## are independent but the span is preserved.
That would require a change of the independent variables to a different set.
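For jointly normal variables, at least, such a change of variables is explicit: whitening with the Cholesky factor of the covariance matrix turns correlated jointly normal ##X_i## into uncorrelated, hence independent, normal variables that span the same space, since the transformation is invertible. A minimal numerical sketch (the covariance matrix below is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Made-up covariance for three correlated, jointly normal variables X_1, X_2, X_3.
cov = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.5],
                [0.3, 0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(3), cov, size=n)

# Cholesky "decoupling": with cov = L L^T, the variables B = L^{-1} X have
# identity covariance.  Being jointly normal and uncorrelated, the B_i are
# independent, and because L is invertible they span the same space as the X_i.
L = np.linalg.cholesky(cov)
B = np.linalg.solve(L, X.T).T              # each row is L^{-1} x for a sample x

print(np.round(np.cov(B, rowvar=False), 2))   # approximately the identity matrix
```

This mirrors mathman's point: the independent basis is obtained by a linear change of the original variables, not by keeping the ##X_i## themselves.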
 
