Uncertainty larger than the plotted value

In summary, the OP is determining g from the extension of a spring, using a PASCO motion sensor to record positions. The sensor's quoted uncertainty is 0.8% of the measured distance, and the OP estimated the uncertainty in the extension by adding 0.8% of the original position to 0.8% of the new position. The extension is so small that this estimate comes out larger than the value itself (about 110%), even though the slope of the graph gives g of about 9.7 m/s². The thread traces the puzzle to treating the two position errors as independent: the dominant sensor error is a common scale factor (the speed of sound), so the 0.8% should be applied to the difference instead.
  • #1
SuchBants

Homework Statement


I'm determining g from the extension of a spring. I used a PASCO motion sensor to record the displacement of the spring towards the sensor, since it is more accurate than measuring with a ruler myself.

The uncertainty in the distance on the sensor is 0.8%. I get the uncertainty in extension of the spring by combining 0.8% of the original position and 0.8% of the new position.

However, the extension is so small that the uncertainty ends up being larger than the value.
[Attached image: akil.JPG — table of mean positions and extensions]

Homework Equations


Can this be right? The slope of my graph actually gives me a value for g of about 9.7 m/s/s so...

The Attempt at a Solution


(0.8/100) × 0.271939 = 0.002175512
(0.8/100) × 0.268 = 0.002144
uncertainty in extension = 0.002144 + 0.002175512 = 0.004319512
% uncertainty = 0.004319512/0.00394 × 100 ≈ 109.7%
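For reference, here is the same arithmetic as a minimal Python sketch (values copied from above; the linear sum of the two absolute uncertainties is what pushes the result past 100%):

Python:
y_old, y_new = 0.271939, 0.268   # sensor readings, m (unloaded, loaded)
rel = 0.8 / 100                  # quoted 0.8% sensor uncertainty

u_old = rel * y_old              # 0.002175512 m
u_new = rel * y_new              # 0.002144 m
extension = y_old - y_new        # 0.003939 m (0.00394 above)
u_ext = u_old + u_new            # 0.004319512 m, linear sum of the two

print(f"extension   = {extension:.6f} m")
print(f"uncertainty = {u_ext:.6f} m  ({100 * u_ext / extension:.1f}%)")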
 

  • #2
You are assuming that the uncertainties in the two measurements are unrelated. That is surely not the case. But unfortunately it is hard to know how to proceed without knowing what the relationship is, and that depends on the source of the error.
Part of it could be a "set zero" error. For that, there would be effectively no error in the displacement. Part could be a granularity error.
 
  • #3
If the uncertainty in each number that you add is 0.8%, then the uncertainty in the extension is the square root of the sum of the squares of the uncertainties, or ##0.8\% \times \sqrt{2} \approx 1.1\%##.
 
  • #4
kuruman said:
If the uncertainty in each number that you add is 0.8%, then the uncertainty in the extension is the square root of the sum of the squares of the uncertainties, or ##0.8\% \times \sqrt{2} \approx 1.1\%##.
I believe one value is being subtracted from the other. When you take the difference of two numbers of similar magnitude the fractional error in the result can be huge.
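Written out, the standard rule for independent errors combines the absolute uncertainties in quadrature (the linear sum in post #1 is a more conservative bound), and the small difference in the denominator is what inflates the fractional error:

$$\delta(x-y)=\sqrt{(\delta x)^2+(\delta y)^2},\qquad \frac{\delta(x-y)}{|x-y|}=\frac{\sqrt{(\delta x)^2+(\delta y)^2}}{|x-y|}.$$

With the numbers from post #1, ##\sqrt{0.002176^2+0.002144^2}\approx 0.00305## m against an extension of 0.00394 m: about 78% even in quadrature.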
 
  • #5
haruspex said:
I believe one value is being subtracted from the other. When you take the difference of two numbers of similar magnitude the fractional error in the result can be huge.
Yes indeed. However, the numbers to be subtracted are not that small considering the sig figs to which the data are reported. Below is a plot of the data with the best straight-line fit and 1.1% error bars. It looks as one would expect, with the best fit passing through (almost) all the error bars.

[Attached image: PlotwithErrorBars.png — data with best straight-line fit and 1.1% error bars]


On edit: It looks like I missed copying a point but that should not change anything.
 

  • #6
kuruman said:
the numbers to be subtracted are not that small
What matters is how the magnitude of the difference compares with the magnitude of the source values.
My understanding is that the "extension" column has been obtained by taking the difference between two position measurements. One of the position measurements is shown in the "mean position" column (or maybe that is the average of the two, whatever). Thus the two measurements only differ by about 1%. If each has an error of 0.8% then the difference between them is completely untrustworthy.
 
  • #7
OP mentions a Pasco sensor (probably using ultrasound) towards which the mass moves. I would guess that "mean" position is the average of a large number of measurements made in rapid succession by the instrument and displayed on the screen of a controller unit and that the 0.8% is the manufacturer's estimate of the uncertainty in the measurement. Maybe OP can clarify this point. The "Extension" column shows the difference between any entry under "Mean position" and the first entry in that column. It is a derived quantity from two measured quantities.
haruspex said:
If each has an error of 0.8% then the difference between them is completely untrustworthy.
That is absolutely correct if one tried to determine some quantity by using these two points only. However, if one collects a lot of such untrustworthy points, the slope can be determined with good accuracy as seen in the figure, post #5.
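A quick Monte-Carlo sketch of this point, with invented numbers (a hypothetical spring constant and mass set, not the OP's data): even with independent 0.8% noise on every position reading, a straight-line fit over ten points recovers g to within about 5%, even though the smallest individual extensions have error bars of comparable size to themselves.

Python:
import numpy as np

rng = np.random.default_rng(0)
g_true, k = 9.81, 100.0          # hypothetical g (m/s^2) and spring constant (N/m)
m = np.arange(0.05, 0.55, 0.05)  # hypothetical hanger masses, kg
y0 = 0.272                       # unloaded position reading, m
y_true = y0 - g_true * m / k     # loaded positions (mass moves toward the sensor)

slopes = []
for _ in range(10_000):
    n0 = rng.normal(0.0, 0.008 * y0)        # noisy zero reading, shared by all
                                            # points: shifts intercept, not slope
    ni = rng.normal(0.0, 0.008 * y_true)    # independent 0.8% noise per reading
    ext = (y0 + n0) - (y_true + ni)         # extensions from the noisy positions
    slopes.append(np.polyfit(m, ext, 1)[0]) # slope of extension vs mass = g/k

slopes = np.array(slopes)
print(f"fitted g = {k * slopes.mean():.2f} +/- {k * slopes.std():.2f} m/s^2")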
 
  • #8
kuruman said:
OP mentions a Pasco sensor (probably using ultrasound) towards which the mass moves. I would guess that "mean" position is the average of a large number of measurements made in rapid succession by the instrument and displayed on the screen of a controller unit and that the 0.8% is the manufacturer's estimate of the uncertainty in the measurement. Maybe OP can clarify this point. The "Extension" column shows the difference between any entry under "Mean position" and the first entry in that column. It is a derived quantity from two measured quantities.

That is absolutely correct if one tried to determine some quantity by using these two points only. However, if one collects a lot of such untrustworthy points, the slope can be determined with good accuracy as seen in the figure, post #5.
I think you are missing the point regarding the OP's confusion.
You plotted measured position (##y_i##) against mass (##m_i##). The OP plotted displacement (##y_i - y_0##) against mass.
In the OP's working, a constant initial measured position, ##y_0##, is being subtracted from each measured extended position to obtain the displacement. If in each of those subtractions the ##y_i## and ##y_0## are treated as having a 0.8% error, independently, then the displacement has a huge percentage error. And if the resulting displacements are treated as having independent errors then the graph will have error bars that dwarf the result.
The mistake the OP is making is treating all these as independent. In plotting ##y_i## only, and ignoring ##y_0##, you avoided that mistake.
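To spell out the correlated case, which turns out to be the relevant one here (posts #12–#14): if the dominant error is a common scale factor ##\varepsilon##, such as an error in the assumed speed of sound, then

$$\hat y_i=(1+\varepsilon)\,y_i,\quad \hat y_0=(1+\varepsilon)\,y_0\;\;\Rightarrow\;\;\hat y_i-\hat y_0=(1+\varepsilon)\,(y_i-y_0),$$

and the difference inherits the same fractional error ##|\varepsilon|\approx 0.8\%## as each reading, not ~110%.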
 
  • #9
haruspex said:
The mistake the OP is making is treating all these as independent. In plotting ##y_i## only, and ignoring ##y_0##, you avoided that mistake.
You are correct as usual. I am hard-wired to look at things in a certain way but not in another.
 
  • #11
kuruman said:
OP mentions a Pasco sensor (probably using ultrasound) towards which the mass moves. I would guess that "mean" position is the average of a large number of measurements made in rapid succession by the instrument and displayed on the screen of a controller unit and that the 0.8% is the manufacturer's estimate of the uncertainty in the measurement. Maybe OP can clarify this point. The "Extension" column shows the difference between any entry under "Mean position" and the first entry in that column. It is a derived quantity from two measured quantities.

That is absolutely correct if one tried to determine some quantity by using these two points only. However, if one collects a lot of such untrustworthy points, the slope can be determined with good accuracy as seen in the figure, post #5.

No, you are absolutely right. The sensor used ultrasound to give the distance to a mass hanger; the hanger was on a spring and left to reach equilibrium. I averaged 5 seconds' worth of data, roughly 50 samples, which usually differed by 0.00001 m. So there is a massive number of readings that were averaged. I know the slope is accurate as it gives me a good value for g.
 
  • #12
SuchBants said:
I averaged 5 seconds' worth of data, roughly 50 samples, which usually differed by 0.00001 m.
Sure, but that doesn't help if you do not know the basis for the quoted 0.8%. You cannot assume each measurement's error is independent of the rest. E.g. suppose you were to measure the same distance repeatedly with a metre stick marked in mm and record each to the nearest mm. Most likely all your measurements will be the same, yet the error remains ±0.5 mm.

As I posted, your mistake is in taking the difference between two measurements while presuming that each has a 0.8% error margin independently of the other. That may be the case (in which case you needed to set the sensor much closer to the target), or it may be that the two errors are strongly correlated.

We could guess that the error arises mostly from the uncertainty in the speed of sound through the air at the time. Unless this was varying wildly during the experiment it would follow that the actual error was a fixed percentage of the measurement. Thus, you can take the difference and then apply the 0.8%.
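A minimal sketch of that procedure, reusing the two readings from post #1 (valid on the assumption, confirmed below, that the error is a common scale factor):

Python:
y_old, y_new = 0.271939, 0.268   # the two readings from post #1, m
extension = y_old - y_new        # difference first: 0.003939 m
u_ext = 0.008 * extension        # then apply the 0.8%

print(f"extension = {extension:.6f} +/- {u_ext:.6f} m  (0.8%)")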
 
  • #13
haruspex said:
We could guess that the error arises mostly from the uncertainty in the speed of sound through the air at the time. Unless this was varying wildly during the experiment it would follow that the actual error was a fixed percentage of the measurement. Thus, you can take the difference and then apply the 0.8%.
https://www.pasco.com/support/technical-support/technote/techIDlookup.cfm?TechNoteID=436
This tells you how they got 0.8%.
So do I take 0.8% of the difference, or 0.8% of each value and then sum them? The fact that my graph gives me a best-fit line with a very accurate value for g suggests the errors are overestimated if they were really ±110%, for example.
 
  • #14
SuchBants said:
https://www.pasco.com/support/technical-support/technote/techIDlookup.cfm?TechNoteID=436
This tells you how they got 0.8%.
So do I take 0.8% of the difference, or 0.8% of each value and then sum them? The fact that my graph gives me a best-fit line with a very accurate value for g suggests the errors are overestimated if they were really ±110%, for example.
That link confirms that the variable speed of sound in air is the main source of error. So you can assume that is constant during your tests, and take the differences in the measurements before applying the 0.8%.
 
  • Like
Likes SuchBants
  • #15
haruspex said:
That link confirms that the variable speed of sound in air is the main source of error. So you can assume that is constant during your tests, and take the differences in the measurements before applying the 0.8%.
Perfect.
 

FAQ: Uncertainty larger than the plotted value

What does "uncertainty larger than the plotted value" mean?

"Uncertainty larger than the plotted value" refers to a situation where the range of possible values for a particular measurement or data point is larger than the actual value shown on a graph or plot. This means that there is a higher degree of uncertainty or potential error associated with the value, making it less reliable.

Why is it important to consider uncertainty in scientific measurements?

Uncertainty is an inherent part of any scientific measurement and cannot be completely eliminated. It is important to consider uncertainty because it allows us to understand the limitations of our data and the potential for error. By acknowledging and accounting for uncertainty, scientists can ensure the accuracy and reliability of their findings.

How is uncertainty calculated?

Uncertainty is typically calculated using statistical methods, such as standard deviation or confidence intervals. These calculations take into account the variability and potential sources of error in the data to determine the range of values within which the true value is likely to fall.

What factors can contribute to uncertainty in scientific measurements?

There are many factors that can contribute to uncertainty in scientific measurements, including limitations of the measuring instrument, human error, environmental conditions, and inherent variability in the system being studied. It is important for scientists to carefully consider and account for these factors in their data analysis.

How can scientists minimize uncertainty in their measurements?

While uncertainty cannot be completely eliminated, scientists can minimize it by using precise and accurate measuring instruments, carefully controlling experimental conditions, and taking multiple measurements to reduce variability. It is also important to carefully analyze and interpret data, taking into account the potential sources of error and considering the level of uncertainty in the results.
