How do we account for multiple sources of uncertainty in a measurement?

In summary: the uncertainty in the difference between two measurements is a bit more complicated to work out than the uncertainty of a single reading.
  • #1
i_love_science
A thermometer which can be read to a precision of +/- 0.5 degrees celsius is used to measure a temperature increase from 30.0 degrees celsius to 50.0 degrees celsius.
What is the absolute uncertainty in the measurement of the temperature increase?

Do sigfig rules for addition and subtraction apply also to uncertainties?
For the example above, would the uncertainty be +/- 1 degree celsius (retaining one sigfig only -- not applying sigfig rules to the uncertainty) or would it be +/- 1.0 degrees celsius (retaining 2 sigfigs / 1 decimal place -- applying sigfig/decimal place rules to the uncertainty)?

Thank you.
 
  • #2
The uncertainty you quote with the measurement is just an estimate of the true uncertainty in the measurement, so it should only be given to 1 significant figure in most cases.

There is a slight exception if the first digit of the uncertainty begins with a ##1## (or sometimes a ##2##), in which case you might sometimes include a second significant figure in the quoted uncertainty (this is a consequence of Benford's Law).

Here I would probably use ##\pm 1\,^{\circ}\text{C}##.

Edit: Also, N.B. that the measurement should be quoted to the same number of decimal places as the uncertainty!
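The two conventions above (round the uncertainty to 1 significant figure, then quote the value to the same number of decimal places) can be sketched in a few lines of Python. This is just an illustration of the rule of thumb, not a standard library feature; `format_measurement` is a made-up helper name.

```python
from decimal import Decimal

def format_measurement(value, uncertainty):
    """Quote a value to the same decimal places as its 1-sig-fig uncertainty."""
    # Round the uncertainty to 1 significant figure.
    u = float(f"{uncertainty:.1g}")
    # Count the decimal places of the rounded uncertainty.
    decimals = max(0, -Decimal(f"{u:.10g}").as_tuple().exponent)
    return f"{value:.{decimals}f} +/- {u:.{decimals}f}"

print(format_measurement(20.0, 1.0))      # 20 +/- 1
print(format_measurement(3.14159, 0.022)) # 3.14 +/- 0.02
```

The second example shows why the rule matters: once the uncertainty is rounded to 0.02, quoting 3.14159 to five decimal places would claim far more precision than the measurement supports.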
 
  • #3
i_love_science said:
A thermometer which can be read to a precision of +/- 0.5 degrees celsius is used to measure a temperature increase from 30.0 degrees celsius to 50.0 degrees celsius.
What is the absolute uncertainty in the measurement of the temperature increase?

Do sigfig rules for addition and subtraction apply also to uncertainties?
For the example above, would the uncertainty be +/- 1 degree celsius (retaining one sigfig only -- not applying sigfig rules to the uncertainty) or would it be +/- 1.0 degrees celsius (retaining 2 sigfigs / 1 decimal place -- applying sigfig/decimal place rules to the uncertainty)?

Thank you.
When you say the thermometer can only be read to +/- 0.5 degrees, then you can only report the measured temperature to the nearest whole degree. In this case you would report the temperature change as from 30. degrees to 50. degrees, i.e. a temperature change of 20. degrees. The decimal point makes the trailing zero significant. If you added the next zero (30.0) you would be implying a precision of +/- 0.05 degrees. Without the decimal point the trailing zero is ambiguous and would not be considered significant.

Looking further at this case, the lower measured temperature of 30. degrees implies that the actual temperature is somewhere between 29.5 and 30.5 degrees and the 50. degree measurement implies an actual temperature between 49.5 and 50.5 degrees. So the actual temperature change could be a maximum of 29.5 to 50.5 or 21 degrees and the minimum possible change would be from 30.5 to 49.5 or 19 degrees for a total uncertainty of 2 degrees or +/- 1 degree.
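The worst-case bookkeeping above can be written out as a short Python sketch (the function name is just for illustration): each reading contributes its full +/- 0.5 degree half-width to the extreme values of the difference.

```python
def diff_uncertainty(t1, t2, half_width):
    """Worst-case bounds on (t2 - t1) when each reading is +/- half_width."""
    best = t2 - t1
    max_diff = (t2 + half_width) - (t1 - half_width)  # e.g. 50.5 - 29.5 = 21
    min_diff = (t2 - half_width) - (t1 + half_width)  # e.g. 49.5 - 30.5 = 19
    return best, (max_diff - min_diff) / 2

change, u = diff_uncertainty(30.0, 50.0, 0.5)
print(f"{change:.0f} +/- {u:.0f} degrees")  # 20 +/- 1 degrees
```

Note that the half-widths add: two readings each good to +/- 0.5 give a worst-case +/- 1 on their difference.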
 
  • #4
I'm an adult physics student and am just now learning probability, so everything looks like a probability question. Especially because you used the word "uncertainty".

Could 30 degrees +/- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +/- 2*sd = 2*0.25 degrees? So that, at a confidence level of alpha = 0.05, 95% of the time the measurement 30 would fall within 29.5 to 30.5? Too deep for me.

I'm sure I'm getting things wrong here. In real life, they tell you which calibrator to use to check that your thermometer isn't getting damaged. There is a specified time period, I think once a year, when you have to recalibrate your thermometers.

Then you read your thermometer at the last little hash mark carved into the side and trust your eyes: if it's closer to 30 than to 29 or 31, it's said to be 30. In practice, lab professionals who use their results for treating patients just say 30, and doctors say 98.6.
 
  • #5
fdegregorio said:
Could 30 degrees +- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +-2* sd=2*0.25 degrees?
It depends.

If the error were a random measurement error characterized by a normal distribution, for instance, then we might take the 0.5 degree error as indicating the standard deviation of that error distribution.

But it seems far more likely that we are talking about a quantization error involved in rounding the actual measurement to the nearest whole degree. In this case the error has a flat (uniform) distribution with a cut-off on either end. Meanwhile, under this same assumption (and assuming independence) the error distribution for the difference is going to be a triangular shape with a peak in the middle. [The assumption of independence is questionable here.]

Of course, the real world truth is somewhat messier. Often, one has multiple errors that are not all well known, individually identified or equipped with simple or independent distributions.
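A quick Monte Carlo sketch in Python (under the idealized assumptions above: uniform quantization errors, fully independent) shows both claims: each reading's error has a uniform distribution with sd = 0.5/sqrt(3) ~ 0.29, and the error of the difference has the larger sd of ~ 0.41 (the two sds adding in quadrature), spread over a triangular shape.

```python
import random
import statistics

random.seed(0)
N = 100_000

# Quantization error of each reading: uniform on [-0.5, 0.5]
e1 = [random.uniform(-0.5, 0.5) for _ in range(N)]
e2 = [random.uniform(-0.5, 0.5) for _ in range(N)]
diff = [b - a for a, b in zip(e1, e2)]  # error in the temperature difference

# Uniform on [-0.5, 0.5] has sd = 0.5/sqrt(3) ~ 0.289; for independent
# errors the sds add in quadrature, so the difference has sd ~ 0.408.
print(statistics.stdev(e1))    # ~ 0.289
print(statistics.stdev(diff))  # ~ 0.408
# A histogram of `diff` would be triangular, peaked at 0 and spanning
# -1 to +1 -- the worst-case +/- 1 bound from post #3 above.
```

The statistical sd of the difference (~ 0.41) is noticeably smaller than the worst-case bound (+/- 1), which is the usual trade-off between worst-case and probabilistic error estimates.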
 

FAQ: How do we account for multiple sources of uncertainty in a measurement?

What are significant figures?

Significant figures are the digits in a number that carry meaning and contribute to its precision. They are used to indicate the certainty or accuracy of a measurement or calculation.

How do you determine the number of significant figures in a number?

The rules for determining significant figures are:

  • All non-zero digits are significant.
  • Zeroes between non-zero digits are significant.
  • Trailing zeroes after a decimal point are significant.
  • Leading zeroes before a non-zero digit are not significant.
  • Trailing zeroes in a whole number with no decimal point are ambiguous and generally not considered significant.

What is the purpose of using significant figures?

Significant figures are used to indicate the precision or accuracy of a measurement or calculation. They help to avoid misrepresentation of data and ensure that calculations are carried out to the appropriate level of precision.

What are uncertainties in measurements?

Uncertainties in measurements refer to the limitations or potential errors in the values obtained from a measurement. They can be caused by various factors such as equipment limitations, human error, and natural variability.

How do you calculate uncertainties?

A simple estimate of the uncertainty of a repeated measurement is half the difference between the highest and lowest values obtained. Dividing this absolute uncertainty by the average of the measurements and multiplying by 100 gives the percentage uncertainty.
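The half-range recipe above takes only a few lines of Python; `percent_uncertainty` is a made-up name for this sketch, and the example readings are invented.

```python
def percent_uncertainty(measurements):
    """Half-range uncertainty estimate: (max - min) / 2, also as a percentage."""
    absolute = (max(measurements) - min(measurements)) / 2
    mean = sum(measurements) / len(measurements)
    return absolute, 100 * absolute / mean

abs_u, pct = percent_uncertainty([19.8, 20.0, 20.3, 19.9])
print(abs_u, pct)  # 0.25 degrees absolute, 1.25 % of the 20.0 mean
```

This is a rough estimate from the spread of repeated readings; for a single reading, the instrument resolution (as in the thermometer example above) sets the uncertainty instead.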
