FrameOfMind
Hello,
I'm trying to figure out a way to calculate a suitable uncertainty value for the accuracy of a human measuring timing values during an experiment.
The experiment was designed to determine the frictional force acting on a trolley as it is pulled along by a string attached to a falling weight. For each mass value, we took three measurements of the time for the trolley to cover a set distance after the weight was released, increasing the mass by 0.5 kg for each test. Afterwards, we took the mean of the three times for each mass and used that in our further calculations for acceleration, force, and friction.
For the uncertainty of the stopwatch itself, we took its smallest division (0.01 s), divided it by the smallest mean time (in this case, 1.24 s), and multiplied by 100 to get a percentage error. However, we weren't really given a suitable method for determining the uncertainty introduced by human reaction when starting and stopping the stopwatch at the right moment. (I believe the method we were given was unsuitable: we were told to click start/stop on the stopwatch as quickly as possible, which doesn't really measure "reaction time" so much as "how quickly can your finger press a button twice in quick succession?")
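For reference, the stopwatch calculation described above can be sketched in a couple of lines of Python, using the numbers quoted in the post:

```python
# Percentage error from the stopwatch's resolution, as described above.
resolution = 0.01      # smallest division of the stopwatch, in seconds
smallest_mean = 1.24   # smallest mean time across the mass values, in seconds

instrument_error_pct = resolution / smallest_mean * 100
print(round(instrument_error_pct, 3))  # ≈ 0.806
```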
I've tried the following method to determine the human uncertainty:
Calculate the standard deviation for each set of timing values, e.g.:
mean time = (1.25 + 1.28 + 1.19)/3 = 1.24
standard deviation = sqrt(((1.25 - 1.24)^2 + (1.28 - 1.24)^2 + (1.19 - 1.24)^2)/3) ≈ 0.037
Then I take the mean of all the standard deviations, and I arrive at a value of 0.02s.
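The two steps above (a population standard deviation for each set of three timings, then the mean of those standard deviations) can be sketched in Python. The numbers here are only the single example set from the post; the other sets would be appended to `all_sets`:

```python
import math

def pop_std_dev(values):
    """Population standard deviation (divide by N, as in the worked example)."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# Each inner list is one set of three timings for one mass value.
# Only the example set from the post is shown here.
all_sets = [
    [1.25, 1.28, 1.19],
]

std_devs = [pop_std_dev(s) for s in all_sets]
human_uncertainty = sum(std_devs) / len(std_devs)  # mean of the std devs
print(round(pop_std_dev([1.25, 1.28, 1.19]), 3))   # ≈ 0.037
```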
Then I once again divide this uncertainty by the smallest mean time and multiply by 100 to get a percentage error, and add it to the rest of the error values to obtain the total percentage error. The value I obtained seems quite small, which is intuitively unrealistic, since I've heard the average human reaction time is roughly 0.2-0.25 s. The overall error percentage is also only 3.362%, which seems a little too good to be true for such rudimentary equipment.
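The linear addition of percentage errors described above looks like this in Python. The stopwatch and human-timing terms use the numbers quoted in the post; `other_errors_pct` is a hypothetical placeholder for the remaining terms (distance, mass, etc.) that the post doesn't break down, which is why the printed total differs from the 3.362% quoted:

```python
# Simple linear addition of percentage errors, as done in the post.
stopwatch_pct = 0.01 / 1.24 * 100  # stopwatch resolution / smallest mean time
human_pct = 0.02 / 1.24 * 100      # mean std dev / smallest mean time
other_errors_pct = 0.0             # placeholder for distance, mass, ... (assumption)

total_pct = stopwatch_pct + human_pct + other_errors_pct
print(round(total_pct, 3))  # ≈ 2.419 with the placeholder at zero
```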
My question: is this the right method to use, or have I done something wrong? If not, what would you suggest I do to get a suitable uncertainty for the reaction time?
(By the way, this may look like homework that doesn't belong in this particular section, but it isn't really. I don't actually have to go into this much detail for my assignment; I'm just genuinely interested in how I might figure this out.)