Number of Sig Figures and Precision

In summary, the more significant digits in a measurement, the higher the precision of the measurement.
  • #1
fog37
TL;DR Summary
Understand the relation between precision and the number of significant figures in a number that is the result of a measurement.
Hello,

When we measure something with a measuring instrument, the result is a number with a finite number of significant figures, and the rightmost digit is the uncertain one. A measurement should always be reported as the best estimate ± an error range, where the best estimate is the average of multiple measurements...

The more precise the instrument, the more precise the measurement, and the more sig figs in the measurement. So far so good: precision is reflected by the number of sig figs.

However, the other definition of precision is how close multiple measurements are to each other...This definition does not seem to relate precision to sig figs...What am I missing?

Are measurements with many sig figs also likely to be very close numerically to each other?

Thanks!
 
  • #2
Significant figures are like training wheels. They help you get started understanding the basic idea of measurement uncertainty, but at some point you take off the training wheels and deal with uncertainty the "right" way.

The right way to deal with uncertainty is to determine and report the standard uncertainty. That is the standard deviation of your measurement, and it can come from two sources: statistical information and non-statistical information. With multiple measurements you can simply take the standard deviation, and that will be the statistical component of your uncertainty.

Then, for the instrument round-off error, that would be non-statistical. For that you would look at the documentation from the instrument manufacturer. They should report the device uncertainty. It is not a good idea to simply assume that it is equal to the last digit in the display. Whatever the manufacturer documents would be your non-statistical error.

Your combined uncertainty would be ##u_{combined}=\sqrt{u^2_{statistical}+u^2_{non-statistical}}##
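As a quick illustration, this combination can be sketched in Python. The readings and the manufacturer's uncertainty figure below are made up for the example:

```python
import math
import statistics

# Hypothetical repeated readings of the same length (mm); illustration only.
readings = [345.1, 344.9, 345.3, 345.0, 344.8]

# Statistical component: sample standard deviation of the readings.
u_statistical = statistics.stdev(readings)

# Non-statistical component: taken from the (hypothetical) manufacturer's
# documentation, not assumed from the display resolution.
u_non_statistical = 0.29  # mm

# Combine in quadrature, as in the formula above.
u_combined = math.sqrt(u_statistical**2 + u_non_statistical**2)
print(f"u_combined = {u_combined:.2f} mm")
```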
 
  • #3
Dale said:
Significant figures are like training wheels. They help you get started understanding the basic idea of measurement uncertainty, but at some point you take off the training wheels and deal with uncertainty the "right" way.
Thank you Dale. I get that things get more complex.

If the measuring instrument has the proper resolution, the number of digits (aka significant digits) in the numerical result of the measurement should reflect the precision of the instrument: the more sig figs in the result, the higher the precision. So I would conclude that a high-resolution tool = a high-precision tool.

In terms of statistics, if we collect multiple measurements and their values are not too different from each other (i.e., low standard deviation), the resources I am reading say that is equivalent to being precise... So the consistency of the results (not the same as accuracy) and the instrument resolution are somehow related... is that correct?

By the way, in some cases we don't really take multiple measurements. Ex: measuring the length of an object with a meter stick. Repeating the measurement gives the same result (no fluctuation). The measurement is reported as the measured value ± uncertainty. The uncertainty will be 0.5 mm, which is half the resolution (1 mm). We can measure cm and mm with certainty, but we are uncertain about fractions of a mm. We can guesstimate by visually splitting a single mm into 10 parts, but I have read it is better to use 1/2 of the resolution, ±0.5 mm, as the uncertainty.

Ex: 2.34 +- 0.05 cm

So the "true" length will be between 2.29 and 2.39 cm.
 
  • #4
fog37 said:
The more precise is the instrument, the more precise is the measurement, the more sig figs in the measurement. So far so good. So precision is reflected by the number of sig figs.

However, the other definition of precision is how close multiple measurements are to each other...This definition does not seem to relate precision to sig figs...What am I missing?
A well engineered measuring instrument will not report useless digits, so the two measures should be similar. In the trade, the extraneous reported digits are known as "marketing digits" and are of course purely bogus. Avoid such items if you can.
The most egregious examples that come to mind are the toy microscopes and telescopes that advertise absurdly large magnifications (often beyond the diffraction limit).
 
  • #5
How would you define the resolution of a ruler?
 
  • #6
anorlunda said:
How would you define the resolution of a ruler?
Well, a ruler has vertical marks. Some marks represent cm.

The smaller marks represent mm; they are 1 mm apart from each other. So 1 mm would be the resolution of the meter stick.
 
  • #7
fog37 said:
However, the other definition of precision is how close multiple measurements are to each other...This definition does not seem to relate precision to sig figs...What am I missing?

Are measurements with many sig figs also likely to be very close numerically to each other?
What makes those different measurements differ from each other?
Could you give us examples?
 
  • #8
fog37 said:
So I would conclude that high resolution tool = high precision tool.
That is a bad conclusion. I had a friend who looked at the dashboards of cars and assumed that the ones whose speedometers had the highest numbers were the fastest cars.

I don't know your background. If you are a student doing a lab, then that assumption is fine for the purpose. But if you are doing real science then that is a fundamentally flawed assumption. Check the documentation for your actual instrument. If you don't have the documentation then you should use your own statistical analysis and not this assumption.

fog37 said:
By the way, in some cases, we don't really take multiple measurements. Ex: measuring the length of an object with a meter stick. Repeating the measurement will give the same result (no fluctuation). The measurement is reported as the measured value +- uncertainty. The uncertainty will be 0.5m which is half the resolution (1mm).
Yes, this is exactly what I was talking about above regarding non-statistical uncertainty. In this case you would typically model a measurement of 345 mm as meaning that the length is uniformly distributed somewhere in the range 344.5 mm to 345.5 mm. A uniform distribution of half-width ##a## has standard deviation ##a/\sqrt{3}##, so the standard uncertainty would not be 0.5 mm, but rather ##(0.5/\sqrt{3})## mm. So this component of the standard uncertainty is 0.29 mm.
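The ##0.5/\sqrt{3}## figure can be checked numerically with a small Monte Carlo sketch in Python (the 345 mm reading is just an example value):

```python
import math
import random
import statistics

random.seed(0)

# A reading of 345 mm on a 1 mm scale is modeled as uniform on [344.5, 345.5].
samples = [random.uniform(344.5, 345.5) for _ in range(200_000)]

u_standard = statistics.pstdev(samples)  # Monte Carlo estimate of the std dev
u_theory = 0.5 / math.sqrt(3)            # exact result: half-width / sqrt(3)

print(f"Monte Carlo: {u_standard:.4f} mm, theory: {u_theory:.4f} mm")
```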

See section 4.3.7 in the BIPM's "Guide to the expression of uncertainty in measurement" https://www.bipm.org/documents/2012...08_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6

The BIPM guide is the authoritative reference on analyzing and reporting uncertainty.
 
  • #9
anorlunda said:
How would you define the resolution of a ruler?
I think reporting it as the rms deviation within the smallest graduation as @Dale does above is a pretty good practice. So for a 1 mm graticule the uncertainty is $$\pm\frac{0.5\,\text{mm}}{\sqrt 3}$$
Dale said:
Yes, this is exactly what I was talking about above regarding non-statistical uncertainty.
But I am not sure I agree with calling this a non-statistical uncertainty (semantics again)
 
  • #10
With the ruler, I was trying to point out that not all instruments are digital.

But I could have a ruler with 1 foot markings, yet still estimate the inches: roughly one digit better than the spacing of the markings. That's less true with 1 mm markings; physiology, not physics.

In college, one fellow student had a 3 foot K&E slide rule (with 3 times as many gradation marks), whereas everyone else had 1 foot slide rules. He claimed that the long rule allowed 3 digits, compared to 2 for the short ones. But challenge after challenge with different students manning the long and the short ones failed to verify that claim. Again, physiology, not physics.
 
  • #11
I have always loved vernier scales...
 
  • #12
anorlunda said:
not all instruments are digital
What is this "analog" of which you speak?
 
  • #13
hutchphd said:
But I am not sure I agree with calling this a non-statistical uncertainty (semantics again)
Well, the official description from section 0.7.1 of the BIPM guide is:
The uncertainty in the result of a measurement generally consists of several components which may be grouped into two categories according to the way in which their numerical value is estimated:
A. those which are evaluated by statistical methods,
B. those which are evaluated by other means.
So "non-statistical" is just my way of referring to "Category B" for people who are most likely not familiar with the BIPM uncertainty categorization. I could call it Category B uncertainty, but not enough people actually read the document and know what that means.
 
  • #14
Thanks again. Lots to learn.

For the simple case of addition of two measurements ##X## and ##Y##, i.e. ##Z=X+Y##, the total uncertainty of ##Z##, ##\delta Z##, is often given as the sum of the uncertainties ##\delta X## and ##\delta Y##: $$\delta Z= \delta X + \delta Y$$ However, the more correct formula seems to involve summing the squares of the individual uncertainties and taking the square root of the sum...

In which situations would we use the simple addition of the uncertainties instead of the square root one? When the uncertainties are small? Small relative to what?
 
  • #15
When they are "random" and typically of similar size. (If one of them is large it will dominate and the rest doesn't really matter.) This result follows most naturally, in my mind, from consideration of the central limit theorem:
https://en.wikipedia.org/wiki/Central_limit_theorem
which shows that, given a bunch of disjoint error sources, the best estimate of the resultant distribution of expected values often looks Gaussian with an error given by the RMS sum. In my practical experience over 40 years of manufacturing and designing precision instruments and diagnostic tests, this is a remarkably useful result.
(With apologies to the statisticians, because my formal training in this jargon-filled arena is not impressive.)
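This can be seen in a quick Monte Carlo sketch in Python (the error sizes and the choice of three sources are arbitrary): several similar, independent uniform error sources combine to a spread given by the RMS sum.

```python
import math
import random
import statistics

random.seed(1)

# Three independent error sources of similar size, each uniform on ±0.5
# (arbitrary units); the total error is their sum.
def total_error():
    return sum(random.uniform(-0.5, 0.5) for _ in range(3))

errors = [total_error() for _ in range(200_000)]

observed = statistics.pstdev(errors)
# RMS sum: each source has std dev 0.5/sqrt(3); variances add for
# independent sources.
predicted = math.sqrt(3 * (0.5 / math.sqrt(3))**2)

print(f"observed {observed:.4f}, predicted {predicted:.4f}")
```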
 
  • #16
fog37 said:
So I would conclude that high resolution tool = high precision tool.
You'll have to pardon me if I find that hilarious. Let me give you an example of directions using low resolution, high resolution, and high precision to get from Atlanta to Dallas:

LOW RESOLUTION: Go to Oklahoma and turn left --- very sloppy, but it will get you in the vicinity of Dallas.

HIGH RESOLUTION: (here I would put actual directions for Atlanta -> Dallas) --- actually this is more like high ACCURACY plus high resolution.

HIGH PRECISION: Get on I95, go North for 537 miles, turn left on I466 and go 14.3 miles (not sure where these precise directions will get you, but it sure as hell won't be Dallas).
 
  • #17
fog37 said:
However, the more correct formula seems to involve the sum of the square of the individual uncertainties and square rooting of the sum.
That is correct. Assuming the uncertainties are uncorrelated then it is the variances that add. So the standard deviations are the square root of the sum of the variances.
 
  • #18
fog37 said:
In which situations would we use the simple addition of the uncertainties instead of the square root one?
Also, if the errors are 100% correlated.
 
  • #19
Dale said:
That is correct. Assuming the uncertainties are uncorrelated then it is the variances that add. So the standard deviations are the square root of the sum of the variances.
I see. So if the uncertainties are correlated then we use the simple addition of the uncertainties...How do we know ahead of time if the uncertainties are correlated or not?

Also, in the case of a single measurement (because multiple measurements would give the same answer, since the instrument's resolution is larger than the external random errors), we take, by convention, 1/2 the resolution of the instrument as the uncertainty. Ex: in the case of the meter stick with mm divisions, a measurement is reported as ##X \pm \delta X## where ##\delta X = 0.5\,\text{mm}##. So measurements would look like ##22.4 \pm 0.5\,\text{mm}##.

If we used a digital caliper with a rated accuracy of ##\pm 0.2\,\text{mm}## and a resolution of ##\pm 0.1\,\text{mm}##, would we use half the rated accuracy (i.e. 0.1 mm) or half the resolution (i.e. 0.05 mm) of the caliper as the uncertainty in our measurements?

THANKS!
 
  • #20
fog37 said:
So if the uncertainties are correlated then we use the simple addition of the uncertainties.
If ##f(x,y)## is some arbitrary function then the full propagation of errors formula is $$\sigma^2_f= \left(\frac{\partial f}{\partial x}\right)^2 \sigma^2_x+ \left(\frac{\partial f}{\partial y}\right)^2\sigma^2_y+ 2\left(\frac{\partial f}{\partial x}\right) \left(\frac{\partial f}{\partial y}\right) \sigma_{xy}$$ where ##\sigma^2_i## is the variance of ##i## and ##\sigma_{ij}## is the covariance between ##i## and ##j##. If the covariance is 0 then the variances add. If the correlation is 1 then the standard deviations add. Otherwise it is somewhere in between.
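The two limiting cases for ##Z = X + Y## can be checked with a short Monte Carlo sketch in Python (the sigma values are hypothetical):

```python
import math
import random
import statistics

random.seed(2)

N = 200_000
sigma_x, sigma_y = 0.3, 0.4

zs_corr = []   # fully correlated: one fluctuation drives both errors
zs_indep = []  # uncorrelated: independent fluctuations
for _ in range(N):
    e = random.gauss(0, 1)
    zs_corr.append(sigma_x * e + sigma_y * e)
    zs_indep.append(sigma_x * random.gauss(0, 1) + sigma_y * random.gauss(0, 1))

# Correlated: standard deviations add. Uncorrelated: add in quadrature.
print(f"correlated:   {statistics.pstdev(zs_corr):.3f}  "
      f"(simple sum: {sigma_x + sigma_y:.3f})")
print(f"uncorrelated: {statistics.pstdev(zs_indep):.3f}  "
      f"(quadrature:  {math.hypot(sigma_x, sigma_y):.3f})")
```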

fog37 said:
we take, by convention, 1/2 the resolution of the instrument as uncertainty
Again, check the manufacturer’s documentation. This is not always a good assumption.

fog37 said:
If we used a digital caliper with rated accuracy of ±0.2mm and resolution of ±0.1mm. Would we take half the accuracy or half the resolution as uncertainty?
If the manufacturer says 0.2 mm then you should use that. Not half of that and not the resolution.
 
  • #21
fog37 said:
If we used a digital caliper with rated accuracy of ±0.2mm and resolution of ±0.1mm. Would we take half the accuracy or half the resolution as uncertainty?
Absent other information these are uncorrelated sources. If I did a bunch of measurements of the same object with a bunch of calipers, what distribution would obtain?

In my world the absolute measurement "error" should be reported as the RMS sum of the two, i.e. ##\pm\sqrt{0.2^2 + 0.1^2}\,\text{mm} \approx \pm 0.22\,\text{mm}##, at the usual confidence level.
 

FAQ: Number of Sig Figures and Precision

What is the significance of sig figs in scientific measurements?

Sig figs, short for significant figures, are used to indicate the precision of a measurement. They tell us how many digits are known reliably, and they are important for maintaining consistency and avoiding misleading results in scientific calculations.

How do I determine the number of sig figs in a given measurement?

The general rule is to count all digits starting from the first non-zero digit. Leading zeros are not significant: in the number 0.00345, there are 3 sig figs. Trailing zeros after the decimal point are significant: 1.50 has 3 sig figs. Trailing zeros in a number without a decimal point (such as 1200) are ambiguous and are usually not counted as significant unless the notation makes them so.
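These counting rules can be sketched in Python (illustrative only; `count_sig_figs` is a hypothetical helper that handles plain decimal strings, not scientific notation):

```python
def count_sig_figs(s: str) -> int:
    """Count significant figures in a plain decimal string.

    Sketch of the rules above: leading zeros are not significant; trailing
    zeros are significant only if a decimal point is present.
    """
    digits = s.lstrip("+-")
    has_point = "." in digits
    digits = digits.replace(".", "").lstrip("0")  # drop leading zeros
    if not has_point:
        # Trailing zeros without a decimal point are ambiguous: not counted.
        digits = digits.rstrip("0")
    return len(digits)

print(count_sig_figs("0.00345"))  # 3
print(count_sig_figs("1.50"))     # 3
print(count_sig_figs("1200"))     # 2
```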

What is the purpose of rounding when dealing with sig figs?

Rounding is used to ensure that the final answer of a calculation does not claim more precision than the least precise measurement that went into it. This helps avoid overstating the accuracy and precision of the final result.

How do I perform mathematical operations with numbers that have different numbers of sig figs?

When performing addition or subtraction, the final answer should have the same number of decimal places as the measurement with the least number of decimal places. For multiplication or division, the final answer should have the same number of sig figs as the measurement with the least number of sig figs.
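These two rules can be sketched in Python (the numbers are arbitrary, and `round_sig` is a hypothetical helper):

```python
def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures (sketch; assumes x is nonzero)."""
    return float(f"{x:.{n}g}")

# Multiplication/division: keep as many sig figs as the least precise factor.
area = 2.34 * 1.2            # 1.2 has only 2 sig figs
print(round_sig(area, 2))    # 2.8

# Addition/subtraction: keep as many decimal places as the least precise term.
total = 12.1 + 0.23          # 12.1 has only 1 decimal place
print(round(total, 1))       # 12.3
```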

Can sig figs be used in non-mathematical contexts?

Yes, sig figs can also be used in non-mathematical contexts, such as when reporting a person's age or weight. In these cases, the number of digits given indicates how precisely the value is known.
