- TL;DR Summary
- The sum of normally distributed random variables is normal
(I know how to prove it). Prove that a finite sum of independent normal random variables is normal. I suspect that independence may not be necessary.
Dale said: Independence is necessary. Suppose, for example, that ##X_1 \sim \mathcal{N}(\mu=0,\sigma=1)## and ##X_2 = -X_1##; then ##X_2 \sim \mathcal{N}(\mu=0,\sigma=1)##, but ##X_1+X_2=0##, which is not normally distributed.
I was wondering about random variables which are correlated, but with ##|\rho| \lt 1##, or in general as long as they are linearly independent. In that case they could be transformed into a set of independent random variables.
WWGD said: Iirc, you can use generating functions for the proof.
A proof with MGFs would have the same form as the characteristic function proof in the link.
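For reference, a minimal sketch of that MGF argument for two independent normals ##X \sim \mathcal{N}(\mu_1,\sigma_1^2)## and ##Y \sim \mathcal{N}(\mu_2,\sigma_2^2)## (independence is used exactly once, to factor the expectation):
$$M_{X+Y}(t) = E\big[e^{t(X+Y)}\big] = E\big[e^{tX}\big]\,E\big[e^{tY}\big] = e^{\mu_1 t + \frac{1}{2}\sigma_1^2 t^2}\, e^{\mu_2 t + \frac{1}{2}\sigma_2^2 t^2} = e^{(\mu_1+\mu_2)t + \frac{1}{2}(\sigma_1^2+\sigma_2^2)t^2},$$
which is the MGF of ##\mathcal{N}(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2)##. Uniqueness of MGFs gives the result, and induction extends it to any finite sum of independent normals.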
mathman said: I was wondering about random variables which are correlated, but with ##|\rho| \lt 1##, or in general as long as they are linearly independent. In that case they could be transformed into a set of independent random variables.
BWV said: By the CLT, a finite sum of random variables from any distribution with finite variance will converge to normal.
My original question was specifically for finite sums. The Wiki article gives the answer for a pair of correlated variables, which then implies it holds for any finite sum.
On the sum of 2 normals, some proofs in Wikipedia:
https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables
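For the correlated case the thread keeps returning to: if ##(X, Y)## is jointly (bivariate) normal with correlation ##\rho##, the sum is again normal,
$$X + Y \sim \mathcal{N}\!\left(\mu_X + \mu_Y,\ \sigma_X^2 + \sigma_Y^2 + 2\rho\,\sigma_X\sigma_Y\right).$$
The key hypothesis is joint normality; normal marginals alone are not enough, as the counterexample later in the thread shows.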
mathman said: The Wiki article gives the answer for a pair of correlated variables, which then implies it holds for any finite sum.
Yes, if that did not work, then all of Modern Portfolio Theory would fail.
Stephen Tashi said: Are there theorems that deal with the general question? Given a set of random variables ##\{X_1,X_2,\ldots,X_M\}##, when does there exist some subspace ##S## of the vector space of random variables such that ##S## contains each ##X_i## and ##S## has a basis of mutually independent random variables?
Office_Shredder said: I'm not sure this is the right question. Given a single random variable, it plus itself is just 2 times itself, since you're not adding two independent copies of itself. So any one-dimensional subspace works, for example. If you are wondering about sets of random variables that are stable under adding independent copies, then I think it's just the normal random variables, since if you repeatedly add a random variable to itself you get something that slowly deforms into a Gaussian. I guess the space of all things deforming into Gaussians also works.
Don't mean to go far OT, but I am kind of curious about transformations that preserve normality.
WWGD said: Don't mean to go far OT, but I am kind of curious about transformations that preserve normality.
Wouldn't any linear or affine transformation of a normal RV preserve normality?
An interesting thing is that covariance defines an inner product on the space of random variables.
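To make that precise: ##\langle X, Y\rangle := \operatorname{Cov}(X,Y)## is symmetric and bilinear, and it becomes positive definite once we restrict to centered, finite-variance random variables identified up to almost-sure equality, since
$$\langle X, X\rangle = \operatorname{Var}(X) \ge 0, \qquad \operatorname{Var}(X) = 0 \iff X = E[X] \text{ almost surely}.$$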
mathman said: The Wiki article gives the answer for a pair of correlated variables.
Stephen Tashi said: Yes, the article gives an answer for a pair of correlated normal random variables that have a joint bivariate normal distribution, but not for the more general case of correlated normal random variables whose joint distribution is not specified.
So if you have the distribution of each component and the covariance, don't you also have the joint distribution?
BWV said: So if you have the distribution of each component and the covariance, don't you also have the joint distribution?
BWV said: Wouldn't any linear or affine transformation of a normal RV preserve normality?
WWGD said: ...but I am kind of curious about transformations that preserve normality.
That could include all transformations that preserve normality, so perhaps it should be specialized to particular kinds of transformation to eliminate relatively trivial ones. For example, if ##X## has a normal distribution with mean zero and we define the transformation ##T(x) = x## except at ##x = 3## or ##-3##, where we define ##T(x) = -x##, have we transformed ##X## into a (technically) different normally distributed random variable?
Office_Shredder said: I'm not sure this is the right question. Given a single random variable, it plus itself is just 2 times itself, since you're not adding two independent copies of itself. So any one-dimensional subspace, for example.
The question I propose is whether each ##X_i## is an element of the same subspace ##S##. So the fact that each ##X_i## can be regarded as being in the one-dimensional subspace generated by itself doesn't answer that.
mathman said: It appears that a joint distribution of two dependent normal variables may not be normal. However, it is not clear to me whether or not the sum has a normal distribution.
Just use, e.g., for ##X \sim N(0,1)##, the sum ##X+(-X)##.
WWGD said: Just use, e.g., for ##X \sim N(0,1)##, the sum ##X+(-X)##.
This is an artificial special case where the variance is 0. It could be called a normal distribution.
Stephen Tashi said: An example from stackexchange: https://math.stackexchange.com/questions/878694/sum-of-dependent-normal-random-variables
Canonical (counter)example: Assume that ξ is standard normal and that η=σξ, where σ=±1 is symmetric Bernoulli and independent of ξ. Then η is standard normal but ξ+η is not normal since P(ξ+η=0)=P(σ=−1)=1/2.
Seems bogus to me, as η contains a discontinuous function σ, and is only 'normal' because the Bernoulli distribution is hidden until you add it to ξ. I think if you wrote out the characteristic functions for the example, like in the proof here, the problem would show up: https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables
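A minimal simulation sketch (assuming NumPy is available) makes the example concrete: each of ##\xi## and ##\eta## is standard normal and their sample correlation is near zero, yet the sum has an atom at 0.
```python
import numpy as np

# xi ~ N(0,1); eta = sigma * xi, where sigma = +/-1 is a fair coin flip independent of xi.
rng = np.random.default_rng(0)
n = 1_000_000

xi = rng.standard_normal(n)
sigma = rng.choice([-1.0, 1.0], size=n)
eta = sigma * xi                      # eta is also standard normal

s = xi + eta
print("P(xi + eta = 0) ~", np.mean(s == 0.0))             # ~ 0.5, so the sum is not normal
print("corr(xi, eta)   ~", np.corrcoef(xi, eta)[0, 1])    # ~ 0, yet xi and eta are dependent
```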
BWV said: Seems bogus to me, as η contains a discontinuous function σ.
WWGD said: It seems we can set the copula in such a way that the sum violates normality. I will give it a try; it should be a nice exercise. Never tried it before, but it should work.
Stephen Tashi said: How would we state a theorem that avoids that counterexample?
If we say "Let ##\eta## be a normally distributed random variable that does not contain a discontinuous function", what would that mean?
One attempt to define "##\eta## does not contain a discontinuous function" might be: there does not exist a finite set of random variables ##\{X_1, X_2,\ldots,X_n\}## such that at least one of the ##X_i## does not have a continuous distribution and such that for some function ##f## we have ##\eta = f(X_1,X_2,\ldots,X_n)##. However, given such great freedom for people to choose ##f## and the ##X_i##, we might eliminate all normally distributed ##\eta## from consideration.
In practical situations the evidence about how ##\eta## can be represented as a function of other random variables would come from joint measurements of ##\eta## and those random variables or functions of those random variables. So perhaps the definition would need to be of the form: "##\eta## does not contain a discontinuous function with respect to the random variable ##X##" means that ...
BWV said: Just also define the generating process for the RV - perhaps the sum of normally distributed RVs where each can be completely described by the Gaussian MGF or CF.
Stephen Tashi said: However, the existence of one sort of generating process for a random variable doesn't rule out the existence of a different generating process for it.
I think the goal is to prove a theorem of the form:
If ##X## and ##Y## are normally distributed random variables with joint distribution ##J(X,Y)## and ... some conditions ... then ##X + Y## is normally distributed.
To get anything interesting, the "some conditions" must be conditions that aren't trivially equivalent to assuming ##J(X,Y)## is a bivariate normal distribution.
BWV said: ...but you have to define normal by some generating function.
Stephen Tashi said: An example from stackexchange: https://math.stackexchange.com/questions/878694/sum-of-dependent-normal-random-variables
Canonical (counter)example: Assume that ξ is standard normal and that η=σξ, where σ=±1 is symmetric Bernoulli and independent of ξ. Then η is standard normal but ξ+η is not normal since P(ξ+η=0)=P(σ=−1)=1/2.
Added note: These random variables are uncorrelated.
mathman said: Added note: These random variables are uncorrelated.
Is that correct? The intention of the η=σξ example is to randomly flip the sign on half of ξ, which does not change the distribution, but half of η is perfectly negatively correlated to ξ, and this information is recovered in P(ξ+η=0)=P(σ=−1)=1/2.
BWV said: Is that correct? The intention of the η=σξ example is to randomly flip the sign on half of ξ, which does not change the distribution, but half of η is perfectly negatively correlated to ξ, and this information is recovered in P(ξ+η=0)=P(σ=−1)=1/2.
Covariance ##E(\xi\eta)=E(\sigma\xi^2)=E(\sigma)E(\xi^2)=0##, since ##E(\sigma)=0##, ##\sigma## is independent of ##\xi##, and both means are ##0##.
ISTM there are problems with the construction here, but they are above my pay grade. ξ is a function on ℝ, while σ is discrete. How would you actually flip the sign randomly on half of ℝ? Any countable set of sign flips would not change the distribution, and if σ flips the sign on an uncountable set, then how is P(ξ+η=0) not 0?
BWV said: How would you actually flip the sign randomly on half of ℝ?
Why would that be necessary? If we grant that you can take a random sample from ##\xi##, then in order to realize a sample of ##\xi\sigma## you only need to decide whether to flip the value of that particular sample, using one realization of ##\sigma##.
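To spell out where the point mass comes from: the sign is flipped on a random half of the samples of ##\xi##, not on a fixed half of ##\mathbb{R}##, so conditioning on ##\sigma## gives
$$P(\xi+\eta=0) = P(\sigma=-1)\,P(\xi-\xi=0) + P(\sigma=+1)\,P(2\xi=0) = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 0 = \tfrac{1}{2}.$$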