I am collecting data from a Geiger-Müller radiation detector, which generates clicks that correspond to particles entering the detector. These clicks arrive purely at random, so the number of clicks in a given time interval is governed by the Poisson distribution. My job is to find the average...
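As a sketch of the setup (with a made-up click rate, since the post doesn't give one), here is a small Python simulation of Poisson-distributed counts per interval, using Knuth's multiplication-of-uniforms method so no external library is needed:

```python
import math
import random

random.seed(1)

def poisson_sample(lam):
    """Draw one Poisson(lam) count via Knuth's method: multiply
    uniform randoms until the product drops below e^(-lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Hypothetical mean click rate per counting interval (illustrative value).
true_rate = 4.2
counts = [poisson_sample(true_rate) for _ in range(10_000)]
estimate = sum(counts) / len(counts)
print(estimate)  # close to the true rate of 4.2
```

Averaging the counts over many intervals recovers the underlying rate, which is the "find the average" task described above.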
When subtracting two measurements that each carry some uncertainty, the relative error of the result can become enormous if the two values are nearly equal. This problem doesn't arise when adding numbers.
Question: Is there a name for this type of error? Does anyone...
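A quick simulation (with invented values for the two quantities and their uncertainty) makes the blow-up concrete: the absolute uncertainties combine in quadrature either way, but when the difference itself is small, the relative error of the result explodes.

```python
import random
import statistics

random.seed(0)

# Two nearly equal quantities, each measured with absolute uncertainty 1.0.
true_a, true_b, sigma = 100.0, 99.0, 1.0

diffs = []
for _ in range(10_000):
    a = random.gauss(true_a, sigma)
    b = random.gauss(true_b, sigma)
    diffs.append(a - b)

# Absolute uncertainties add in quadrature: sqrt(1^2 + 1^2) ≈ 1.41.
# The difference itself is only 1.0, so the relative error exceeds 100%.
spread = statistics.stdev(diffs)
print(spread)                      # ≈ 1.41
print(spread / (true_a - true_b))  # relative error of the difference
```

Adding the same two quantities would give the same absolute spread of about 1.41 on a result near 199, i.e. a relative error under 1%, which is why the problem only bites for subtraction.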
Okay, so you state that "As ##N## gets larger, this peak gets taller and narrower." That makes sense and is what I always thought. Would this not mean that ##\sigma## drops as ##N## increases? I would normally think that a tall, narrow probability distribution would correspond to a small ##\sigma##.
Okay, let's go back to the equation:
$$\sigma = \sqrt{ \frac{\sum_{i=1}^N (x_i-\langle x\rangle)^2}{N-1} }$$
Let's not even use the term "standard deviation." Here, ##N## is the number of samples selected from a population. The average in the equation refers to the sample mean. The ##x_i##'s are...
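A direct translation of the formula above into Python (with hypothetical measurement data drawn from a normal population of known spread) shows that the quantity it computes tracks the population's spread, not the sample size:

```python
import math
import random

random.seed(2)

def sample_std(xs):
    """The formula above: root of the squared deviations from the
    sample mean, divided by N - 1 (the Bessel-corrected divisor)."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

# Hypothetical measurements from a normal population with sigma = 2.
data = [random.gauss(10.0, 2.0) for _ in range(500)]
print(sample_std(data))  # close to 2
```

With 500 samples the result lands near the population value of 2; drawing 50 or 5000 samples instead changes only how *precisely* it lands there.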
I used the NORMINV[] function in Excel to generate random numbers that are distributed normally about a mean. The distribution is continuous, unlike coin flips. As more and more numbers are generated, the standard deviation doesn't drop. Most say that as I increase the sample size, that the...
As for Jensen's inequality and such, I'm not saying that the variance and standard deviation would rise by the same amount as ##N## increases, only that if one increases, the other must as well, and that if one stays constant, so does the other.
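The Excel NORMINV experiment can be reproduced in a few lines of Python (with made-up values for the mean and spread): draw normal samples of increasing size and watch what the sample standard deviation does.

```python
import random
import statistics

random.seed(3)

# Python analogue of generating normal random numbers à la NORMINV().
mu, sigma = 50.0, 5.0
sds = {}
for n in (10, 100, 1_000, 10_000):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    sds[n] = statistics.stdev(xs)
    print(n, sds[n])

# The sample SD hovers around sigma = 5 for every n; the quantity that
# shrinks like 1/sqrt(n) is the standard error of the *mean*, not the SD.
```

This matches the observation in the post: generating more numbers makes the estimate of the spread more stable, but does not make the spread itself smaller.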
Sure, here is the standard deviation:
$$\sigma = \sqrt{ \frac{\sum_{i=1}^N (x_i-\langle x\rangle)^2}{N-1} }$$
where ##N## is the sample size, ##x_i## is an individual measurement, and ##\langle x\rangle## is the mean of all measurements.
As for your first question, I am referring to the standard deviation of N measurements of (say) the mass of an object, with an underlying normal probability distribution understood.
As for the variance, why would it behave any differently from the standard deviation? I mean, if the variance...
But isn't the only difference between the two the fact that you divide by ##\sqrt{N}## in one case and by ##\sqrt{N-1}## in the other? That means for large ##N## they behave pretty much the same. Or am I misunderstanding your point?
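Python's standard library happens to implement both conventions, which makes the large-##N## claim easy to check: `statistics.pstdev` divides by ##N## and `statistics.stdev` by ##N-1##, so their ratio is ##\sqrt{N/(N-1)}##, which is essentially 1 for large samples.

```python
import random
import statistics

random.seed(4)

xs = [random.gauss(0.0, 1.0) for _ in range(10_000)]

p = statistics.pstdev(xs)  # divides by N   (population formula)
s = statistics.stdev(xs)   # divides by N-1 (sample formula)
print(s / p)               # sqrt(N / (N-1)) ≈ 1.00005 for N = 10,000
```

So for any realistic sample size the two formulas are numerically indistinguishable, supporting the point that the choice of divisor can't explain any large-##N## behavior.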
In high school, I was taught that the standard deviation drops as you increase the sample size. For this reason, larger sample sizes produce less fluctuation. At the time, I didn't question this because it made sense.
Then, I was taught that the standard deviation does not drop as you increase...