Why is the error value higher in the average calculation?

In summary, the thread is about determining the error of the average of 3 measurements. The formula used at first, e = sqrt((rep 1 error)^2 + (rep 2 error)^2 + (rep 3 error)^2), gives an error for the average that is larger than any of the individual measurement errors, which runs against the expectation that averaging should improve accuracy. A reply points to the rule of dividing the error by the square root of the number of measurements, which makes more sense because the average should be more accurate than a single measurement. The derivation of this rule can be found on Wikipedia under the standard error.
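As a minimal sketch of the two calculations (Python, with the replicate values from the thread hard-coded; the variable names are mine), the quadrature sum gives the error of the total, and dividing it by the number of replicates gives the error of the average:

Code:
import math

# Replicate values and their quoted uncertainties (mg), as listed in the thread
values = [8.9, 9.3, 8.8]
errors = [0.71, 0.69, 0.70]

n = len(values)
mean = sum(values) / n                            # 9.0 mg

# Quadrature sum: this is the propagated error of the SUM x1 + x2 + x3
err_sum = math.sqrt(sum(e**2 for e in errors))    # about 1.21 mg

# The average is the sum divided by n, so its error is the sum's error divided by n
err_mean = err_sum / n                            # about 0.40 mg, roughly 0.70/sqrt(3)

print(f"mean = {mean:.2f} mg, error of sum = {err_sum:.2f} mg, error of mean = {err_mean:.2f} mg")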
  • #1
davidp92
Not sure if this is the right section to post this.
I have 3 measurements and am trying to take their average and calculate the error of that average:
replicate 1 = 8.9 (+/-) 0.71 mg
replicate 2 = 9.3 (+/-) 0.69 mg
replicate 3 = 8.8 (+/-) 0.70 mg

I get an average of 9.0 (+/-) e, where e = sqrt((rep 1 error)^2 + (rep 2 error)^2 + (rep 3 error)^2), which gives me a value of 1.21. But why is the error so much higher for the average?
What step am I missing? I don't know the derivation behind the error propagation formula, so I just use it as it is: e = sqrt((e1)^2 + (e2)^2 + ...)
 
  • #2
When I was in first-year physics, we were told to divide the error by the square root of the number of measurements. That seems to go against the usual quadrature rule for error propagation, but I found it written down in one place:
http://www.lhup.edu/~dsimanek/scenario/errorman/propagat.htm
Look at section 3.10 near the bottom of the page.
Unfortunately it is very difficult to read because of the typos, and in any case I think it just refers to a rule stated elsewhere that you divide by root n. It certainly makes sense that the average should be more accurate than any single measurement.

EDIT: more on how it is derived here: http://en.wikipedia.org/wiki/Standard_error_(statistics)
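For what it's worth, here is how I read the derivation for this particular case. The average is (x1 + x2 + x3)/3, so each measurement enters with a coefficient of 1/3, and applying the usual quadrature rule to the average (rather than to the sum) gives

e_avg = sqrt((e1/3)^2 + (e2/3)^2 + (e3/3)^2) = sqrt(e1^2 + e2^2 + e3^2)/3 ≈ 1.21/3 ≈ 0.40 mg,

which, since the three errors are all about 0.70 mg, is roughly 0.70/sqrt(3) — the divide-by-root-n rule.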
 

FAQ: Why is the error value higher in the average calculation?

1. What is error propagation?

Error propagation is the process of quantifying and analyzing how errors in measured or calculated values can affect the final result of an experiment or calculation. It involves identifying the sources of error and determining how they contribute to the overall uncertainty of the final result.

2. How do you calculate error propagation?

Error propagation is typically calculated using the law of propagation of uncertainty: each input's uncertainty is multiplied by its sensitivity coefficient (the partial derivative of the result with respect to that input), the products are squared and summed, and the square root of that sum gives the combined uncertainty of the result.
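As a hedged illustration of that rule (the quantity y = a*b and all numbers here are made up for the example, not taken from the thread), the sensitivity coefficients are the partial derivatives of the result with respect to each input:

Code:
import math

# Hypothetical example: y = a * b with independent uncertainties u_a and u_b.
# The sensitivity coefficients are dy/da = b and dy/db = a.
a, u_a = 8.9, 0.71
b, u_b = 2.0, 0.05

y = a * b
u_y = math.sqrt((b * u_a) ** 2 + (a * u_b) ** 2)  # law of propagation of uncertainty
print(f"y = {y:.2f} +/- {u_y:.2f}")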

3. What are the sources of error in error propagation?

The sources of error in error propagation can include measurement errors, systematic errors, and random errors. Measurement errors can arise from limitations of the measuring instrument or human error, while systematic errors are caused by flaws in the experimental setup or calculation method. Random errors are due to natural variations in the measurement process.

4. How can you minimize error propagation?

To minimize error propagation, it is important to minimize the sources of error by using precise and accurate measuring instruments, carefully designing and executing the experiment, and using appropriate statistical analysis methods. Additionally, taking multiple measurements and calculating the average can help reduce the impact of random errors.

5. Why is error propagation important in scientific research?

Error propagation is important in scientific research because it allows researchers to understand the reliability and accuracy of their results. By quantifying and analyzing the sources of error, scientists can determine the confidence level of their findings and make informed decisions about the validity of their conclusions.
