mnb96
Hello,
I was trying to interpret the formula of Pearson's Chi-squared test:
[tex]\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}[/tex]
I thought that if we assume that each [itex]O_i[/itex] is an observation of a random variable [itex]X_i[/itex], then the above formula essentially considers the sum of squares of n standardized random variables [itex]Y_i=\frac{X_i-\mu_i}{\sigma_i}[/itex]. In fact, if such random variables are [itex]Y_i \sim N(0,1)[/itex], then the random variable [itex]S = \sum_{i=1}^n Y_i^2[/itex] follows a [itex]\chi^2[/itex]-distribution with n degrees of freedom. Thus, the Chi-squared test would essentially evaluate the tail probability [itex]\mathrm{P}\left( S \geq \chi^2 \right)[/itex] (the point probability [itex]\mathrm{P}(S = \chi^2)[/itex] is zero, since [itex]S[/itex] is continuous), and compare that P-value to some chosen significance level.
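Just to illustrate the claim about [itex]S[/itex]: here is a quick simulation sketch (pure Python, with n = 5 and the 95th-percentile cutoff 11.07 of the [itex]\chi^2_5[/itex] distribution chosen as arbitrary examples) checking that the sum of n squared standard normals behaves like a [itex]\chi^2[/itex] variable with n degrees of freedom, which has mean n:

```python
import random

random.seed(0)
n = 5            # number of standardized variables Y_i
trials = 200_000

# Simulate S = sum_{i=1}^{n} Y_i^2 with Y_i ~ N(0,1)
def draw_S():
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))

samples = [draw_S() for _ in range(trials)]

# A chi-square(n) variable has mean n, so the sample mean should be close to 5
mean_S = sum(samples) / trials
print(mean_S)

# Tail probability P(S >= 11.07); 11.07 is roughly the 95th percentile
# of the chi-square distribution with 5 degrees of freedom,
# so this fraction should be close to 0.05
tail = sum(s >= 11.07 for s in samples) / trials
print(tail)
```

The simulated mean comes out near 5 and the tail fraction near 0.05, consistent with [itex]S \sim \chi^2_5[/itex].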
My question is about the standardization of the random variables [itex]X_i[/itex].
If my interpretation above is correct, then Pearson's Chi-squared test somehow assumes that each random variable [itex]X_i[/itex] has variance equal to its expected value, that is: [tex]\sigma_i^2 = \mu_i[/tex]
Why so?
Can anybody explain why we would need to assume that the variance and the expected value are numerically equal? That condition is satisfied only for some distributions, like the Poisson and the Gamma (with [itex]\theta=1[/itex]). Why such a restriction?
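For reference, the variance-equals-mean property for the Poisson can be checked numerically. A small sketch (pure Python; [itex]\lambda = 4[/itex] is an arbitrary choice, and the sum is truncated at k = 60, beyond which the Poisson(4) tail is negligible):

```python
import math

# Poisson probability mass function P(X = k) for rate lam
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 4.0
ks = range(0, 61)   # tail beyond k = 60 is negligible for lam = 4

# Mean and variance computed directly from the pmf
mean = sum(k * poisson_pmf(k, lam) for k in ks)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)

print(mean, var)  # both should equal lam = 4.0 (up to float rounding)
```

Both sums come out equal to [itex]\lambda[/itex], i.e. [itex]\sigma^2 = \mu[/itex], which is exactly the condition in question.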