mikelee8a
Hi,
This problem has been bugging me for a while. I hope someone can explain it.
1) If I measure something with a digital device and I know that the reading is correct, my only uncertainty is the limited number of significant figures displayed by the device. So if it reads 24.6, the actual value couldn't possibly be less than 24.55 or more than 24.65 (assuming the device is correctly calibrated and rounds in a suitable fashion). The reading would then be written as 24.60(5). The point is that this error bounds the result, with every value in the range being equally likely.
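To make point 1) concrete, here's a minimal Python sketch (assuming numpy; the 24.6 reading and 0.05 half-width are just the example figures above). A bounded, equally-likely error like this is a uniform distribution, and a uniform distribution of full width w still has a well-defined standard deviation of w/sqrt(12):

```python
import numpy as np

rng = np.random.default_rng(0)

# If the display reads 24.6, the true value is (by the argument above)
# uniform on [24.55, 24.65): every value in the range equally likely.
true_values = rng.uniform(24.55, 24.65, size=100_000)
errors = true_values - 24.6               # bounded "type 1" error

print(errors.min(), errors.max())         # never outside +/-0.05
print(errors.std())                       # ~ 0.1/sqrt(12) ~ 0.029
```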
2) Compare this with the standard error, used when measuring a value which has a statistical variation over time. As far as I can see, this is written in exactly the same way as above but is interpreted as meaning there is a 68% chance of the result lying in the range (compared with 100%), with the quoted value being the most probable value.
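Again just a sketch with the same assumptions: if 24.60(5) is instead a standard error, the quoted 0.05 is the sigma of a normal distribution, so only about 68% of results fall within the range, and some fall well outside it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Read 24.60(5) as a standard error: a normal distribution with
# sigma = 0.05 centred on 24.60, not hard bounds.
samples = rng.normal(loc=24.60, scale=0.05, size=100_000)

print(np.mean(np.abs(samples - 24.60) <= 0.05))   # ~ 0.68 within 1 sigma
print(np.mean(np.abs(samples - 24.60) > 0.10))    # ~ 0.05 beyond 2 sigma
```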
My questions are: is there a way of knowing which type of error is being used? And what is the relationship between these two measures of uncertainty? I know that if many type 1) measurements are made and combined, a normal probability distribution emerges (central limit theorem, I think).
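That last point is easy to check numerically (again only a sketch, assuming numpy): averaging N bounded uniform errors gives a distribution that looks normal, with a sigma that shrinks as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(2)

# Average N bounded (uniform) type-1 errors; by the central limit theorem
# the mean tends to a normal with sigma = (0.1/sqrt(12)) / sqrt(N).
N = 12
means = rng.uniform(-0.05, 0.05, size=(100_000, N)).mean(axis=1)

sigma = (0.1 / np.sqrt(12)) / np.sqrt(N)
print(means.std())                        # ~ sigma ~ 0.0083
print(np.mean(np.abs(means) <= sigma))    # ~ 0.68, the normal signature
```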