What is the uncertainty in a metre rule?

  • Thread starter: mutineer123
  • Tags: Uncertainty
In summary, the uncertainty in a metre rule reading is set by the spacing of the scale's markings and by how precisely the observer can read between them.
  • #36


truesearch said:
I am concerned when I read in post 6 (a PF Mentor) that measurements can be made to within +/- 0.1mm using a mm scale.
As a standard deviation, 0.2 is easy to achieve and 0.1 might be possible (all in mm).

As an example, for a 0.2 standard deviation, this means that 0.3 is usually (~70%) read somewhere between 0.1 and 0.5. To do this, it is sufficient to see that 0.3 is smaller than 0.5, but not close to 0.
0.5 should usually be read somewhere between 0.3 and 0.7, which is everything not close to a mark on the scale.

If you want to give some "upper bound" for the error, you should use larger values, of course. But an upper bound is not always well-defined (apart from digital displays). And if you want to use the marks on the scale, you should format your numbers like [itex]0.3^{+0.7}_{-0.3}mm[/itex]
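As a quick check (not part of the original post), the ~70% figure follows from assuming a normally distributed reading error with a 0.2 mm standard deviation; a minimal Python sketch:
[code]
# Sketch (illustrative, not from the thread): probability that a true value
# 0.3 mm past a division is read between 0.1 mm and 0.5 mm, assuming a
# normally distributed reading error with a 0.2 mm standard deviation.
from scipy.stats import norm

true_value = 0.3   # mm past the last full division
sigma = 0.2        # mm, assumed reading standard deviation

p = norm.cdf(0.5, loc=true_value, scale=sigma) - norm.cdf(0.1, loc=true_value, scale=sigma)
print(f"P(0.1 mm < reading < 0.5 mm) = {p:.2f}")   # ~0.68, i.e. roughly 70%
[/code]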
 
  • #37


Yes indeed using ± notation.

But if you are limited to scale divisions you cannot report half divisions; your statement has to be 2 or 3, not 2.5 ± 0.5.
That implies you have to report 0, 1, 2, 3, 4, etc.

So a report that the thickness is 2 implies that it is between 1 and 3, i.e. 2 ± 1.

Which is what I said.

Incidentally you need to revise your statement on end standards.
 
  • #38


mfb said:
As an example, for a 0.2 standard deviation, this means that 0.3 is usually (~70%) read somewhere between 0.1 and 0.5. To do this, it is sufficient to see that 0.3 is smaller than 0.5, but not close to 0.
I guess you meant 0.2 in the bolded number. Nevertheless, this is only true for a normal distribution. When we measure with a coarse scale such that we always find the length to lie between the same two divisions, the error is not of a statistical nature, and uncertainty has a different meaning from standard deviation. For a uniform distribution with ends a and b, the standard deviation is:
[tex]
\sigma_{U} = \frac{b - a}{2 \sqrt{3}}
[/tex]
Notice that [itex]2 \sqrt{3} \approx 3.5[/itex].
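As an aside (not part of the original post), this formula is easy to verify numerically; a minimal Python sketch for a 1 mm division (a = 0, b = 1):
[code]
# Sketch (illustrative): verify sigma_U = (b - a) / (2*sqrt(3)) for a reading
# known only to lie somewhere within one 1 mm division.
import numpy as np

a, b = 0.0, 1.0                      # mm, ends of the division
sigma_exact = (b - a) / (2 * np.sqrt(3))

samples = np.random.uniform(a, b, size=1_000_000)
sigma_mc = samples.std()

print(f"exact: {sigma_exact:.3f} mm, Monte Carlo: {sigma_mc:.3f} mm")  # both ~0.289 mm
[/code]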

Studiot said:
Yes indeed using ± notation.

But if you are limited to scale divisions you cannot report half divisions; your statement has to be 2 or 3, not 2.5 ± 0.5.
That implies you have to report 0, 1, 2, 3, 4, etc.

So a report that the thickness is 2 implies that it is between 1 and 3, i.e. 2 ± 1.

Which is what I said.

Incidentally you need to revise your statement on end standards.

You can report half divisions, as is customary in experimental physics.

As for end standards, we are not doing calibration of etalons. We are measuring the length of an object. Thus, we are free to slide the scale so that the left end coincides exactly with one of the ruler's divisions. Then, there is uncertainty in reading off only the right end.

There might be systematic errors due to the bad calibration of the ruler's divisions, but that's another point.
 
  • #39


Dickfore said:
As for end standards, we are not doing calibration of etalons. We are measuring the length of an object. Thus, we are free to slide the scale so that the left end coincides exactly with one of the ruler's divisions. Then, there is uncertainty in reading off only the right end.

This is a fundamental error. The process of sliding still constitutes a 'reading' or alignment error. It is not as accurate as the engineering process of aligning an end standard ruler, even with a comparator microscope which engineers also use.

And BTW why mention etalons?

There is no way anyone could say ±0.1 mm.

Actually there is, but you require a draftsman's scale rule with diagonal scales. Have you heard of these?
However, I have not seen one as long as 1 metre.
 
  • #40


The last few comments are missing the point and rely on unsubstantiated assumptions: trying to see something between divisions which is not there, assuming that there is some uniform scale within the division, and assuming there is no distortion.
The example of digital instruments should serve as a clue... there is no way to 'eyeball' how close the last digit is to the one above or the one below. ±1 max is a safe, sensible, objective bet.
 
  • #41


Studiot said:
This is a fundamental error. The process of sliding still constitutes a 'reading' or alignment error.
Even if it does constitute an error, the error is of the order of the width of the mark, and not of the order of half the distance between two marks. I don't know about your rulers, but the marks on mine are pretty thin.
 
  • #42


I have used draughtsman's scales... I have also used verniers.
 
  • #43


truesearch, the quoted error is not the absolute maximum or minimum you can be wrong by. It's the average error, so it behaves like a random walk. For a random walk of N steps of length L each, the typical (RMS) distance you travel is [itex]L\sqrt{N}[/itex]. So the average error behaves like an RMS.

In practice, the actual value of how much you are off by will be normally distributed. The quoted error is the standard deviation of that distribution.
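A small simulation (not part of the original post) illustrates the [itex]L\sqrt{N}[/itex] scaling; a minimal Python sketch:
[code]
# Sketch (illustrative): the RMS displacement of a 1D random walk of N steps
# of length L grows as L*sqrt(N), which is why independent errors add in
# quadrature rather than linearly.
import numpy as np

L, N, trials = 1.0, 100, 50_000
steps = np.random.choice([-L, L], size=(trials, N))   # each step is +/- L
displacement = steps.sum(axis=1)

rms = np.sqrt((displacement ** 2).mean())
print(f"RMS displacement: {rms:.2f},  L*sqrt(N) = {L * np.sqrt(N):.2f}")  # both ~10
[/code]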
 
  • #44


Dickfore said:
Even if it does constitute an error, the error is of the order of the width of the mark, and not of the order of half the distance between two marks. I don't know about your rulers, but the marks on mine are pretty thin.

So why can't you 'read' the other end to the same precision?

As a matter of interest, how do you guarantee that the aligned 'zero' stays put while you read the other end?

I know how the navy does it for a traverse tape, how an engineering workshop does it for an engineering end-stop rule, and similarly how a draper's shop does it for a draper's end-stop rule. Why do you think they do it this way, with an end stop, rather than your way?
 
  • #45


I have used draughtsman's scales

So you know they are commonly calibrated in 1/100 inch or 0.1mm?
 
  • #46


Studiot said:
So why can't you 'read' the other end to the same precision?
Because not all lengths in Nature are integer multiples of the divisions of our scale.

Studiot said:
As a matter of inerest how do you guarantee that the aligned 'zero' stays put while you read the other end?
You don't. But you are describing sources of error that are an order of magnitude smaller than the precision of the scale of the measuring instrument.

In fact, if the sources you allude to start giving comparable contributions, then it means your measuring scale is so precise that you do not get the same result when you repeat the measurement. In other words, you start getting statistical errors.


Studiot said:
I know how the navy does it for a traverse tape and how an engineering workshop does it for an engineering endstop rule and similarly how a drapers shop does it for a drapers endstop rule. Why do you think they do it this way with an end stop rather than your way?
Probably to account for the fact that in these cases the object being measured is violently moved during the measuring process. This, on the other hand, happens rarely in a Physics Lab.
 
  • #47


Dickfore, all you are proving is that different people in different 'laboratories' use different techniques and thereby achieve (slightly) different results by going their own way.

The whole object of calibration and standardisation is so that anyone anywhere can achieve the same result under the same conditions. This involves standardisation of measurement technique as well as tools in order to remove 'operator bias'. Measurement against a common stop end is one such standard.

If a laboratory develops its own special techniques it needs to report these as part of the results.
I once worked in such a laboratory measuring the lengths of bricks, accurate to less than 10 thou, using the lab's specially developed technique.
But we never pretended it was 'standard' or that the method should be widely adopted.
 
  • #48


Studiot said:
The whole object of calibration and standardisation is so that anyone anywhere can achieve the same result under the same conditions.

Provided that the results are reported to the same precision! What you are describing in the previous posts is comparing the result of a measurement made with a school ruler to that of a micrometer screw gauge.

Let us say that (by the method I described), I get a length measurement of 3.5±0.5 mm.

Then, you come with your fancy equipment and get a result of 3.329±0.007 mm (you had to repeat the experiment several times because you noticed that you got a different reading each time with your fine equipment; then you took the mean, found the standard deviation of the mean, and took a 95% confidence interval for the mean).

Does that make my measurement "wrong"?
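For reference (not part of the original post), the statistical procedure described above amounts to something like the following minimal Python sketch, with made-up readings:
[code]
# Sketch with hypothetical readings: mean, standard deviation of the mean,
# and an approximate 95% confidence interval (normal approximation).
import numpy as np

readings = np.array([3.331, 3.327, 3.330, 3.325, 3.332, 3.328])  # mm, made up

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(len(readings))   # standard error of the mean
half_width = 1.96 * sem                               # ~95% interval

print(f"{mean:.3f} +/- {half_width:.3f} mm")
[/code]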
 
  • #49


truesearch said:
I am concerned when I read in post 6 (a PF Mentor) that measurements can be made to within +/- 0.1mm using a mm scale.

Ultimately, the proper ± figure for scale-reading uncertainty depends on the person who is making the measurement and the instrument that he is using, and that person has to make a judgment about this.

I feel confident in assigning ±0.1 mm when using a metal scale with finely-engraved lines, in a way that eliminates or minimizes parallax error due to the thickness of the scale. It helps that I'm rather nearsighted so I can get my eye about 10 cm from the scale if I take off my eyeglasses. If I'm using a typical plastic ruler with relatively thick mm-lines, I might use ±0.2 mm. If I'm using a thick meter stick and can't lay it edgewise on the object being measured so that I have to sight across the thickness of the meter stick, I might use ±0.5 mm or even ±1.0 mm.
 
  • #50


Not at all to do with statistics.

But everything to do with technique that contains inherent sources of error versus technique that avoids these.

When you place your test piece and ruler against the stop end you have a guaranteed square and reproducible 'zero'.

When you estimate the alignment of two lines along a sight line that may or may not be square, and hold the ruler and test piece at some random (albeit small) angle to each other, you have a recipe for variability of measurement. Notice I said 'sight line'. Two operators will align the pieces slightly differently by sight. They cannot do this with a stop end.


Edit
jtbell has just described the visual alignment issue to a T whilst I was posting.
 
  • #51


jtbell said:
Ultimately, the proper ± figure for scale-reading uncertainty depends on the person who is making the measurement...
No. Wrong experimental procedure leads to systematic errors that are not a measure of the uncertainty.

jtbell said:
...and the instrument that he is using,...
Yes. But:

jtbell said:
I feel confident in assigning ±0.1 mm when using a metal scale with finely-engraved lines, in a way that eliminates or minimizes parallax error due to the thickness of the scale.
This is definitely wrong if the "finely-engraved lines" are a distance 1 mm apart, as given in the OP.
 
  • #52


Dickfore said:
No. Wrong experimental procedure leads to systematic errors that are not a measure of the uncertainty.
They can lead to either or both. Eye-balling a value, for example, vs using a precise measurement is a technique flaw that introduces a random error, rather than a systematic one.
 
  • #53


Studiot said:
Not at all to do with statistics.

But everything to do with technique which contains inherent sources of error v technique which avoids these.

When you place your test piece and ruler against the stop end you have a guaranteed square and reproducible 'zero'.

When you estimate the alignment of two lines along a sight line that may or may not be square and hold the ruler and testpiece at some random (albeit small) angle to each other you have a recipe for variability of measurement. Notice I said 'sight line'. Two operators will align the pieces slightly differently by sight. They cannot do this with a stop end.

Again, if you consider the error due to alignment of zero to give an uncertainty of 0.5 mm (on a ruler with a division of 1 mm), you are overstating the error. This is also a mistake.

Consider the error propagation formula:
[tex]
\sigma_{L} = \sqrt{\sigma^{2}_{\mathrm{left}} + \sigma^{2}_{\mathrm{right}}}
[/tex]
Now, by the nature of the measurement, we must have [itex]\sigma_{\mathrm{left}} \ll \sigma_{\mathrm{right}}[/itex]. Then, we may expand:
[tex]
\sigma_{L} = \sigma_{\mathrm{right}} \, \left( 1 + \left( \frac{\sigma_{\mathrm{left}}}{\sigma_{\mathrm{right}}} \right)^{2} \right)^{\frac{1}{2}} \approx \sigma_{\mathrm{right}} \, \left[ 1 + \frac{1}{2} \, \left( \frac{\sigma_{\mathrm{left}}}{\sigma_{\mathrm{right}}} \right)^{2} \right]
[/tex]
This is much smaller than [itex]\sqrt{2} \, \sigma_{\mathrm{right}}[/itex] if:
[tex]
\frac{\sigma_{\mathrm{left}}}{\sigma_{\mathrm{right}}} \ll \sqrt{2 (\sqrt{2} - 1)} \approx 0.91
[/tex]
which is certainly the case.
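To put illustrative numbers on this (not part of the original post), a minimal Python sketch with assumed values for the two uncertainties:
[code]
# Sketch (illustrative values): a small zero-alignment uncertainty added in
# quadrature barely changes the total uncertainty.
import numpy as np

sigma_right = 0.3    # mm, reading uncertainty at the far end (assumed)
sigma_left = 0.05    # mm, alignment uncertainty at the zero (assumed)

sigma_L = np.hypot(sigma_left, sigma_right)
print(f"sigma_L = {sigma_L:.3f} mm vs sigma_right = {sigma_right:.3f} mm")
# ~0.304 mm vs 0.300 mm, far below the worst case sqrt(2)*sigma_right = 0.424 mm
[/code]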
 
  • #54


I think I misread the OP. I was reading it as the length of an object requiring two measurements because the ruler isn't long enough. But for matching the zero, of course, the precision is much better than 0.5 mm. It's on the order of the width of the tick, rather than the order of the distance between ticks.
 
  • #55


K^2 said:
They can lead to either or both. Eye-balling a value, for example, vs using a precise measurement is a technique flaw that introduces a random error, rather than a systematic one.

Not if you eyeball from the same direction consistently, it isn't.
 
  • #56


If the actual values are different, you'll always have a different error in eye-balling it.

You can go ahead and do your own experiment on that. You will note a random error. There might also be a systematic one, but random error will dominate.
 
  • #57


Actually, parallax error may be negligible if the scale is next to the measured object.
 
  • #58


We are talking about different things. Try eye-balling distances/sizes without a scale at all, so that you can use the scale measurement to compare it to. You'll find the error to be mostly random. Though, a bias may be present as well. This is an extreme case, of course, but eye-balling distances between ticks works the same way. It's just harder to check yourself.
 
  • #59


K^2 said:
Try eye-balling distances/sizes without a scale at all, so that you can use the scale measurement to compare it to.

I'm afraid I don't understand what you're trying to describe.
 
  • #60


I understand the temptation to leap into discussing standard deviations, but the OP asked what the uncertainty is. If the two input errors are +/- A and +/- B then the uncertainty in the difference (or sum) is +/-(A+B). Whether it is more appropriate to use that value or one based on s.d. depends on the purpose to which the answer will be put. If my life depends on staying within a specific bound, I'll take the conservative approach. Note that using s.d. then being conservative by demanding 3 s.d.s of tolerance actually produces a larger safety margin than necessary.
Where s.d. is considered appropriate, it's worth thinking about the distribution of the input errors. In this case, they'll follow a uniform distribution over the stated range (perhaps with a little rounding at the edges). The error in the difference will therefore be distributed as a symmetric trapezium ('trapezoid' in US). In the special case of the ranges being equal, like the sum of two dice, this simplifies to an isosceles triangle. The s.d. will be [itex]\sqrt{(A^2+B^2)/3}[/itex].
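A quick numerical check of that last figure (not part of the original post), sketched in Python with assumed half-widths:
[code]
# Sketch (illustrative): the difference of two uniform errors on [-A, A] and
# [-B, B] has standard deviation sqrt((A**2 + B**2) / 3).
import numpy as np

A, B = 0.5, 0.5   # mm, assumed half-widths of the two reading errors
e1 = np.random.uniform(-A, A, size=1_000_000)
e2 = np.random.uniform(-B, B, size=1_000_000)

sd_mc = (e1 - e2).std()
sd_formula = np.sqrt((A**2 + B**2) / 3)
print(f"Monte Carlo: {sd_mc:.3f} mm, formula: {sd_formula:.3f} mm")  # both ~0.408 mm
[/code]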
 