Physics lab work - calculating % errors

In summary, the conversation discusses calculating the ± error for a table of 10 readings taken on an optical bench whose smallest unit of measurement is 1mm. The poster is unsure which method to use and whether to round the readings to the nearest mm. The reply suggests using the Standard Error in the Mean, calculated as \Delta x / \sqrt{N} when all N samples have the same error \Delta x, and points to a resource for further information.
  • #1
Kaldanis
I have a table of these 10 readings,

149.6
150.9
149.7
147.9
147.7
152.4
149.8
152.2
153.2
148.9


They were taken on an optical bench where the smallest unit of measurement was 1mm. I'm trying to calculate the ± error, but I'm not sure how to. I've come across three methods so far:

1. Standard deviation

2. [itex]\frac{\max - \min}{\text{average}} \times 100 \times 0.5[/itex]

3. [itex]\frac{\max - \min}{\text{no. of values}}[/itex]

Which should I use? Also, since the smallest unit of measurement was 1mm, should I round each of my readings to the nearest mm?
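For comparison, the three candidate estimates can be evaluated directly on the readings above (a sketch in Python; the readings are assumed to be in mm):

```python
import statistics

readings = [149.6, 150.9, 149.7, 147.9, 147.7,
            152.4, 149.8, 152.2, 153.2, 148.9]

# Method 1: sample standard deviation of the readings
sd = statistics.stdev(readings)                      # about 1.90 mm

# Method 2: half the range, as a percentage of the average
pct_half_range = ((max(readings) - min(readings))
                  / statistics.mean(readings)) * 100 * 0.5   # about 1.83 %

# Method 3: range divided by the number of values
range_over_n = (max(readings) - min(readings)) / len(readings)  # about 0.55 mm

print(sd, pct_half_range, range_over_n)
```

Note that the three methods do not even share units (method 2 gives a percentage, the others give mm), which is part of why they are not interchangeable.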
 
  • #2
Kaldanis said:
I have a table of these 10 readings ... Which should I use? Also, since the smallest unit of measurement was 1mm, should I round each of my readings to the nearest mm?

You probably want the Standard Error in the Mean. When all the N samples have the same error [itex]\Delta x[/itex], the standard error would be [itex] \Delta x / \sqrt{N}[/itex].

Take a look here at the section on the Standard Error in the Mean.
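As a rough numerical check (a sketch; which \Delta x is appropriate depends on the setup), the standard error in the mean can be computed either from the quoted formula with \Delta x = 1 mm, or by estimating the spread from the data itself:

```python
import math
import statistics

readings = [149.6, 150.9, 149.7, 147.9, 147.7,
            152.4, 149.8, 152.2, 153.2, 148.9]
N = len(readings)

# If every reading carries the same instrument error (Delta x = 1 mm here):
sem_from_resolution = 1.0 / math.sqrt(N)                       # about 0.32 mm

# Alternatively, estimate the per-reading spread from the data:
sem_from_spread = statistics.stdev(readings) / math.sqrt(N)    # about 0.60 mm

print(sem_from_resolution, sem_from_spread)
```

Here the scatter of the data is larger than the 1 mm resolution suggests, so the data-based estimate is the more conservative of the two.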
 

FAQ: Physics lab work - calculating % errors

1. What is % error in physics lab work?

% error, or percentage error, is a way to measure the accuracy of a measurement or calculation in a physics lab. It is the difference between the measured or calculated value and the accepted or true value, expressed as a percentage of the accepted value.

2. How do you calculate % error in physics lab work?

To calculate % error, first find the difference between the measured or calculated value and the accepted or true value. Then divide that difference by the accepted value and multiply by 100 to get the percentage. The formula is: % error = ((measured value - accepted value) / accepted value) * 100, usually quoted as an absolute value.
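A minimal worked example, using hypothetical values for a measurement of g:

```python
measured = 9.60   # m/s^2 (hypothetical measured value)
accepted = 9.81   # m/s^2 (accepted value of g)

# Percentage error relative to the accepted value
pct_error = abs(measured - accepted) / accepted * 100

print(f"{pct_error:.1f}%")  # about 2.1%
```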

3. Why is % error important in physics lab work?

% error is important because it allows us to evaluate the accuracy of our measurements and calculations. It can help identify any potential sources of error and determine the reliability of our results.

4. What is an acceptable % error in physics lab work?

The acceptable % error can vary depending on the experiment and the level of precision required. In general, a % error of less than 5% is considered good, while a % error of 10% or higher may indicate significant errors in the experiment.

5. How can you reduce % error in physics lab work?

To reduce % error, it is important to identify and minimize sources of error, such as human error, equipment limitations, and measurement uncertainties. Using more precise equipment, taking multiple measurements, and performing repeated trials can also help reduce % error.
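The benefit of repeated trials can be seen directly: the standard error of the mean falls as 1/\sqrt{N} (a sketch, assuming a fixed hypothetical per-reading spread):

```python
import math

# Hypothetical per-reading standard deviation, in mm
spread = 1.9

# Averaging N readings shrinks the uncertainty in the mean by 1/sqrt(N)
for n in (1, 4, 16, 100):
    print(n, spread / math.sqrt(n))
```

Quadrupling the number of readings only halves the uncertainty, so repeated trials help but with diminishing returns.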
