azaharak
I have a coworker who is very old and set in his ways; he has been causing problems in the department in many ways and thinks everything he does is correct. I'm currently in a debate with him over error analysis (this includes a lot of small issues and some larger ones).
Firstly, he continues to place what I call intrinsic uncertainties, inherent in a given measuring tool such as a meter stick, micrometer, caliper, etc., under the category of systematic errors.
The intrinsic uncertainty in a measuring tool can be taken to be on the order of the least count. It is not solely systematic; I believe it actually obeys random statistics more often than not.
When a manufacturer states that the intrinsic uncertainty in their digital caliper is 0.002 cm, this means that any measurement made (correctly) is within that value of the true length. The systematic part of that error lies somewhere between 0 and 0.002 cm, and the distribution within that range is random (see the sketch after the next point).
Secondly, other contributions, such as how the instrument's user aligns the device, how much pressure is applied, and temperature variations that could change elongation, have a random component that will most likely dwarf the systematic component inherent in the tool.
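To make the first point concrete, here is a minimal Python sketch (my own illustration, not something from the lab manual) under a simple assumption: the caliper just rounds the true length to its nearest 0.002 cm step, with no zero offset or scale error. Over many different true lengths the resulting error is spread roughly uniformly over ±0.001 cm, with essentially zero mean and a standard deviation near 0.002/√12 ≈ 0.0006 cm, i.e. it behaves like a random error rather than a fixed offset.

```python
import numpy as np

# Hypothetical digital caliper: least count (resolution) of 0.002 cm.
least_count = 0.002  # cm

# Many different "true" lengths, spread over a few millimetres.
rng = np.random.default_rng(0)
true_lengths = rng.uniform(1.000, 1.500, size=100_000)  # cm

# Assumption: the instrument simply rounds to the nearest resolution step
# (no zero offset, no scale error).
readings = np.round(true_lengths / least_count) * least_count
errors = readings - true_lengths

print(f"mean error:   {errors.mean():+.6f} cm  (close to 0: no fixed offset)")
print(f"std of error:  {errors.std():.6f} cm  "
      f"(close to least_count/sqrt(12) = {least_count / np.sqrt(12):.6f} cm)")
```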
----
The reason this bothers me is that, because of the way he has written the lab manual, my students are all calling the least-count errors systematic.
Systematic errors are very hard to detect; examples would be not zeroing a balance, possible parallax, etc.
Also, I learned that true systematic errors propagate slightly differently (not in quadrature).
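For reference, this is the distinction I have in mind (my own sketch with made-up numbers, not a quote from any text): for z = x + y, independent random uncertainties combine in quadrature, while a systematic offset that affects both readings in the same direction simply adds linearly.

```python
import math

# Hypothetical example: z = x + y.
sigma_x, sigma_y = 0.03, 0.04   # independent random uncertainties on x and y

# Independent random errors combine in quadrature:
sigma_z_random = math.sqrt(sigma_x**2 + sigma_y**2)
print(f"random (quadrature): {sigma_z_random:.3f}")      # 0.050

# A systematic bias in the same direction for both x and y
# (e.g. both measured with the same mis-zeroed instrument) adds linearly:
delta_x, delta_y = 0.03, 0.04
delta_z_systematic = delta_x + delta_y
print(f"systematic (linear): {delta_z_systematic:.3f}")  # 0.070
```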
So my question is: shouldn't the inherent or intrinsic error from a measuring tool such as a meter stick, stopwatch, or digital balance be treated as random rather than defined as a systematic error?
I'm not sure it should be defined as either.