Hello Physics Forums people,
I'm not sure how many significant figures I should use when expressing a confidence interval. I have confidence intervals for means that I need to report in a lab write-up, which I'm going to do in the "something ± something" fashion. (I have assumed the deviations of each measurement about the true mean are normally distributed, though it's not the calculation of the confidence intervals that I have a problem with.)
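To make it concrete, here's roughly the calculation I mean (a minimal sketch in Python using scipy's t distribution; the measurement values are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# Made-up repeated measurements, just to illustrate
data = np.array([9.81, 9.79, 9.83, 9.80, 9.82])

n = len(data)
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(n)   # standard error of the mean

# 95% confidence interval via the t distribution, n-1 degrees of freedom
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * sem

print(f"{mean} ± {half_width}")       # full float precision, unrounded
```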
The resulting "something ± something else" confidence interval should be accurate to arbitrary precision, shouldn't it? (Neglecting the fact that it would have been calculated on a finite-precision computer.)
If I were to round the part to the left of the ± sign, I would shift the interval, and if I rounded the part to the right, I would narrow or broaden it. My reasoning is that, for 95% confidence intervals in general, leaving them unrounded gives a probability of containing the true mean that is closer to 95%, which is what I want. Is that correct?
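For example (a self-contained sketch with made-up unrounded values, just to show the two effects I mean):

```python
mean, half_width = 9.8123456, 0.0278901   # made-up unrounded values

# Rounding the centre shifts the whole interval sideways
print(round(mean, 2) - half_width, round(mean, 2) + half_width)

# Rounding the half-width narrows or widens it symmetrically
print(mean - round(half_width, 2), mean + round(half_width, 2))
```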
So why would anyone round one, other than to the precision at which the computer can calculate it? I know the rationale behind rounding is to avoid false precision, but when the precision is stated explicitly, I don't believe that's a problem.
Thanks,
Sam