Standard deviation vs measurement uncertainty

In summary, @FactChecker points out that the sample standard deviation is an estimate of only the random variation in the measurements, so any systematic measurement uncertainty is not captured by it and cannot be ignored.
  • #1
bluemystic
Homework Statement
Suppose I measure the length of something 5 times and average the values. Each measurement has its associated uncertainty. What is the uncertainty of the average?
Relevant Equations
SD = sqrt( sum of (xᵢ − mean)² / (N − 1) )
Standard error = SD / sqrt(N)
Using the above formulas, we can arrive at an unbiased estimate of the standard deviation of the sample, then divide by sqrt(N) to arrive at the standard deviation of the average. What I'm confused about is where the measurement uncertainty comes into the equation. Is it being ignored? Say I take only two measurements and they turn out to be equal. Then the sample standard deviation is zero. But the true uncertainty of the average can't be 0 because of measurement uncertainty, can it?
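A quick numerical sketch of those two formulas (the five readings and the units below are made-up values, not from the problem):

```python
import math

# Hypothetical readings in mm (invented purely for illustration)
readings = [102.1, 101.8, 102.4, 101.9, 102.3]
n = len(readings)
mean = sum(readings) / n

# Sample standard deviation: SD = sqrt( sum of (x_i - mean)^2 / (N - 1) )
sd = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# Standard error of the mean: SE = SD / sqrt(N)
se = sd / math.sqrt(n)

print(f"mean = {mean:.2f} mm, SD = {sd:.3f} mm, SE of mean = {se:.3f} mm")

# The two-equal-readings case from the question: the sample SD is exactly 0,
# even though each reading still carries its own instrument uncertainty.
pair = [102.0, 102.0]
pair_sd = math.sqrt(sum((x - 102.0) ** 2 for x in pair) / (len(pair) - 1))
print(f"SD of two equal readings = {pair_sd}")   # 0.0
```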

On a side note, why can't I use error propagation of measurement uncertainties to obtain the uncertainty of the average, without considering standard deviation?
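On that side note: for independent readings with equal per-reading uncertainty u, propagating through the average (x₁ + … + x_N)/N gives u/sqrt(N). A minimal sketch, assuming a 0.5 mm per-reading uncertainty (an invented value):

```python
import math

u_reading = 0.5   # assumed instrument uncertainty per reading, mm
n = 5

# Error propagation through the average (x1 + ... + xN)/N for independent readings:
# u(mean) = sqrt( sum of (u_i / N)^2 ) = u / sqrt(N) when all u_i are equal
u_mean = math.sqrt(sum((u_reading / n) ** 2 for _ in range(n)))
print(f"propagated uncertainty of the mean = {u_mean:.3f} mm")   # 0.5/sqrt(5) ≈ 0.224
```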
 
  • #2
The sample standard deviation is only an estimate. Using only two experimental samples would be a very poor estimator, so you should not draw any conclusions from that. The measurement "uncertainty" can be constant or have random variation, or a mixture of both. The sample standard deviation is only appropriate for measuring the random variation.
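A small simulation of that point (all numbers invented): the sample SD recovers the random scatter but is blind to a constant offset, and an SD estimated from only two readings is itself very noisy.

```python
import random
import statistics

random.seed(1)

true_length = 100.0   # hypothetical true value, mm
offset = 0.3          # constant (systematic) error, mm
noise_sd = 0.2        # random scatter per reading, mm

readings = [true_length + offset + random.gauss(0, noise_sd) for _ in range(1000)]

print(f"sample SD  ~ {statistics.stdev(readings):.3f} mm")               # ~0.2: the random part
print(f"mean error ~ {statistics.mean(readings) - true_length:.3f} mm")  # ~0.3: invisible to the SD

# SD estimated from just two readings scatters wildly around the true 0.2 mm
pairs = [statistics.stdev(random.sample(readings, 2)) for _ in range(5)]
print("SD from pairs:", [round(s, 3) for s in pairs])
```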
 
  • #3
As @FactChecker points out, systematic errors will not be reflected in the variation in the measurements. Putting those to one side, we have random errors and rounding errors. If the random errors are smaller than the rounding, then this can also result in, effectively, a systematic error. E.g. you are using a 1 mm graduation, the random error is only 0.1 mm, and the value to be measured is 1.7 mm; you will read it as 2 mm every time.
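A sketch of that 1 mm graduation scenario (simulated, using the numbers from the example): essentially every reading rounds to 2 mm, so the sample SD comes out as 0 even though the mean is off by about 0.3 mm.

```python
import random
import statistics

random.seed(0)

true_value = 1.7   # mm
noise_sd = 0.1     # mm, random error much smaller than the 1 mm graduation

# Each reading gets rounded to the nearest 1 mm mark
readings = [round(true_value + random.gauss(0, noise_sd)) for _ in range(5)]

print("readings:", readings)                        # almost always [2, 2, 2, 2, 2]
print("sample SD:", statistics.stdev(readings))     # then 0.0 -- no visible scatter
print("mean error:", statistics.mean(readings) - true_value)  # ~0.3 mm, effectively systematic
```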

But what you are asking about is purely random errors for which you have some a priori estimate.
The usual process is that you calculate the batch error (standard error of the mean) as you describe, but use the lower of that and your a priori limit. To me, that is not really satisfactory; there ought to be a general formula that smoothly covers the transition from few samples to many. I have tried to come up with one, but it might require a Bayesian approach.
 
  • #4
Thanks for the help! I didn't realize one measured random error and the other measured systematic error.
 

FAQ: Standard deviation vs measurement uncertainty

What is the difference between standard deviation and measurement uncertainty?

Standard deviation is a measure of the spread or variability of a set of data, while measurement uncertainty is a measure of the potential error in a measured result, arising from the measurement process itself. In other words, standard deviation tells us how much the data points deviate from the average, while measurement uncertainty tells us how accurate our measurement is likely to be.

How are standard deviation and measurement uncertainty calculated?

Standard deviation is calculated by summing the squared differences between each data point and the mean, dividing by N − 1 for a sample (or N for a whole population), and taking the square root of that value. Measurement uncertainty, on the other hand, is typically built up from the properties of the measurement itself, taking into account factors such as instrument precision, calibration, and human error, and combining them with any statistical contribution.
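For reference, here is the sample formula used earlier in the thread, together with one common convention (following the GUM practice of adding statistical and instrumental contributions in quadrature; shown as an illustration, not the only possible definition) for the uncertainty of the mean, where u_instr denotes an instrument-related contribution that does not shrink with repeated readings:

```latex
s = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}},
\qquad
u(\bar{x}) = \sqrt{\left(\frac{s}{\sqrt{N}}\right)^2 + u_{\text{instr}}^2}
```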

Which one is more important to consider in scientific experiments?

Both standard deviation and measurement uncertainty are important to consider in scientific experiments. Standard deviation can give us an idea of the reliability and consistency of our data, while measurement uncertainty helps us understand the potential limitations of our measurements. It is important to consider both when drawing conclusions from experimental data.

How does sample size affect standard deviation and measurement uncertainty?

Larger sample sizes make the standard error of the mean (SD/sqrt(N)) smaller and give a more reliable estimate of the standard deviation itself, since averaging suppresses random scatter; the standard deviation does not keep shrinking, it settles toward the true spread of the data. However, sample size does not necessarily reduce measurement uncertainty, which can be dominated by factors such as instrument precision, calibration, and human error.
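A quick check of that distinction (simulated data, arbitrary units): as N grows, the sample SD settles near the true spread rather than shrinking, while the standard error of the mean falls like 1/sqrt(N).

```python
import math
import random
import statistics

random.seed(3)

for n in (5, 50, 500, 5000):
    sample = [random.gauss(10.0, 0.5) for _ in range(n)]   # true spread 0.5
    sd = statistics.stdev(sample)        # estimate of the spread; does not shrink with N
    se = sd / math.sqrt(n)               # uncertainty of the mean; shrinks like 1/sqrt(N)
    print(f"N = {n:4d}: sample SD = {sd:.3f}, SE of mean = {se:.4f}")
```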

Can standard deviation and measurement uncertainty be used interchangeably?

No, standard deviation and measurement uncertainty are not interchangeable. Although both are often quoted as a ± figure, they are calculated and used in different ways: standard deviation describes the scatter of the data themselves, while measurement uncertainty quantifies the potential error in a measured result.
