Govind
- TL;DR Summary
- What would be the correct definition of measurement uncertainty that applies to every case, whether or not systematic error is involved and whether the number of measurements is finite or infinite?
I have some confusion regarding measurement uncertainty. In some books/articles it is defined with respect to the true value, as: "Uncertainty in the average of measurements is the range in which the true value is most likely to fall, when there is no bias or systematic component of error involved in the measurement." But if we take infinitely many measurements (theoretically) with no systematic error, then we can be quite sure that the mean of the measurements equals the true value, and uncertainty as defined above would make no sense!
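That convergence can be checked numerically. Below is a minimal sketch (the true value, spread, and sample sizes are hypothetical) showing that with only random error, the sample mean approaches the true value as the number of measurements grows, so the "range likely to contain the true value" shrinks toward zero:

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0   # hypothetical true value of the measurand
SIGMA = 0.5         # spread of the purely random error; no systematic bias

def sample_mean(n):
    """Mean of n simulated measurements affected only by random error."""
    return statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(n))

# As n grows, |sample mean - true value| shrinks roughly like SIGMA / sqrt(n),
# so in the (theoretical) infinite-measurement limit it vanishes.
for n in (10, 1000, 100_000):
    print(n, abs(sample_mean(n) - TRUE_VALUE))
```

This is exactly the point of the objection above: under the first definition, the uncertainty of an infinitely repeated, bias-free measurement collapses to zero.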
There is another definition (based on a confidence level, usually 68%, and on random measurements rather than the true value), which states: "There is roughly a 68% chance that any measurement of a sample taken at random will be within one standard deviation of the mean. Usually the mean is what we wish to know, and each individual measurement almost certainly differs from the true value of the mean by some amount. But there is a 68% chance that any single measurement lies within one standard deviation of this true value of the mean. Thus it is reasonable to say that:". This seems correct to me to some extent (when no systematic error is involved), because it would also apply to infinitely many measurements.
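The 68% claim is easy to verify by simulation, assuming the random errors are Gaussian (the mean and standard deviation below are made-up values):

```python
import random

random.seed(1)
MU, SIGMA = 10.0, 0.5   # hypothetical mean and standard deviation

# Simulate many random measurements and count how many land
# within one standard deviation of the mean.
readings = [random.gauss(MU, SIGMA) for _ in range(100_000)]
within_one_sigma = sum(abs(x - MU) <= SIGMA for x in readings) / len(readings)

print(f"fraction within one sigma: {within_one_sigma:.3f}")  # close to 0.683
```

For a normal distribution the exact fraction is about 68.27%, which is where the "roughly 68%" in the quoted definition comes from.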
Both definitions assume that no systematic component of error is involved. But if one is involved, what would the uncertainty in the uncorrected result (not corrected for the systematic error) represent? The second definition will still hold if we take further measurements with the same instrument, or without correcting for the systematic component; but if someone else measures the measurand with no systematic component, or with a corrected instrument, then roughly 2/3 of their measurements will not fall within that uncertainty range. In that case, what would be the appropriate definition of uncertainty?
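The failure described above can also be shown numerically. In this sketch (all values hypothetical) one observer has an uncorrected systematic bias of two standard deviations; an interval of one standard deviation around the biased mean then covers far fewer than 68% of a bias-free observer's measurements:

```python
import random

random.seed(2)
TRUE_VALUE, SIGMA = 10.0, 0.5
BIAS = 1.0   # hypothetical uncorrected systematic error (here 2 * SIGMA)

# Observer A: instrument with an uncorrected systematic offset.
biased = [random.gauss(TRUE_VALUE + BIAS, SIGMA) for _ in range(100_000)]
biased_mean = sum(biased) / len(biased)

# Observer B: bias-free measurements of the same measurand.
unbiased = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(100_000)]

# Coverage of observer A's one-sigma interval for observer B's readings:
# well below 68%, because the interval is centred on the wrong value.
coverage = sum(abs(x - biased_mean) <= SIGMA for x in unbiased) / len(unbiased)
print(f"coverage for the bias-free observer: {coverage:.3f}")
```

With a two-sigma bias the expected coverage is only about 16%, which illustrates why an uncertainty statement on an uncorrected result cannot be read as a probability of containing the true value.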
And here is the GUM's description of uncertainty:
"D.5.1 Whereas the exact values of the contributions to the error of a result of a measurement are unknown and unknowable, the uncertainties associated with the random and systematic effects that give rise to the error can be evaluated. But, even if the evaluated uncertainties are small, there is still no guarantee that the error in the measurement result is small; for in the determination of a correction or in the assessment of incomplete knowledge, a systematic effect may have been overlooked because it is unrecognized. Thus the uncertainty of a result of a measurement is not necessarily an indication of the likelihood that the measurement result is near the value of the measurand; it is simply an estimate of the likelihood of nearness to the best value that is consistent with presently available knowledge."

The above description is like adding an additional point, 'systematic error', to the first definition (which is based on the true value), and the same confusion arises here: what would the uncertainty in the population mean (the mean of theoretically infinite measurements) with no systematic error represent?
That is where I end up with my point of view on the previously stated definitions. So I want to ask: what would be the correct definition of measurement uncertainty that applies to every case, whether or not systematic error is involved and whether the number of measurements is finite or infinite?