crashcat
- TL;DR Summary
- I came across this formula that is supposed to measure the spread of a set of measurements, but it makes no sense to me, and I hope someone can explain it.
I understand variance, standard deviation, and other measures of error, but this formula doesn't behave well for data sets centered around zero, and it has other problems, like scaling differently as $n$ increases than the standard deviation or standard error do. Does anyone recognize it, and can you point me to a description or derivation? $$\frac{1}{\bar{x}}\sqrt{\frac{n\sum{x^2}-(\sum{x})^2}{n(n-1)}}$$
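To make the "centered around zero" complaint concrete, here's a quick numeric sketch (the function name `mystery_stat` is mine, and it assumes I'm reading the formula correctly). It evaluates the expression on the same spread of values twice: once centered near 100, and once shifted so the mean is close to zero.

```python
import numpy as np

def mystery_stat(x):
    """Evaluate the formula from the post:
    (1/x̄) * sqrt((n·Σx² − (Σx)²) / (n(n−1)))."""
    x = np.asarray(x, dtype=float)
    n = x.size
    inner = (n * np.sum(x**2) - np.sum(x)**2) / (n * (n - 1))
    return np.sqrt(inner) / x.mean()

# Data well away from zero: the value is small and stable.
a = np.array([99.0, 100.0, 101.0, 100.5, 99.5])
print(mystery_stat(a))

# The same spread shifted so the mean is near zero: the square root
# is unchanged (it only depends on the spread), but the 1/x̄ factor
# makes the whole expression blow up.
b = a - 99.9
print(mystery_stat(b))
```

The shift leaves the quantity under the square root unchanged, so whatever blows up must come from the $1/\bar{x}$ prefactor, which is exactly the bad behavior near zero described above.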