How to statistically calculate the final value?

  • #1
Lotto
TL;DR Summary
Let us say we have conducted two measurements with the aim of determining the acceleration of our object. From the two measurements we have:
1. ##t_1 \pm \Delta t_1##, ##s_1 \pm \Delta s_1##
2. ##t_2 \pm \Delta t_2##, ##s_2 \pm \Delta s_2##.

To calculate ##a## we use ##s=\frac 12 a t^2##.

How to determine the final value of ##a \pm \Delta a##?
My approach would be to first calculate ##a_1## and ##a_2##, determine their errors from that formula using partial derivatives, and then take the arithmetic mean of ##a_1## and ##a_2##. I am not sure how to determine the final error, but I think I can use this formula

##\Delta a=\frac{a_1 \frac{1}{{\Delta a_1}^2}+a_2 \frac{1}{{\Delta a_2}^2}}{\frac{1}{{\Delta a_1}^2}+\frac{1}{{\Delta a_2}^2}}##.

But shouldn't I also compute the standard deviation of ##a_1## and ##a_2## about ##a## and then calculate the final error by using the general formula

##\sigma=\sqrt{{\sigma_A}^2+{\sigma_B}^2}##?
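For reference, the textbook propagation, assuming the errors in ##s## and ##t## are independent and Gaussian, would start from ##a = 2s/t^2## and give for each measurement
$$\Delta a_i = a_i\sqrt{\left(\frac{\Delta s_i}{s_i}\right)^2 + \left(\frac{2\,\Delta t_i}{t_i}\right)^2},$$
and the inverse-variance weighted combination of the individual results is
$$\bar a = \frac{a_1/\Delta a_1^{\,2} + a_2/\Delta a_2^{\,2}}{1/\Delta a_1^{\,2} + 1/\Delta a_2^{\,2}}, \qquad \Delta \bar a = \left(\frac{1}{\Delta a_1^{\,2}}+\frac{1}{\Delta a_2^{\,2}}\right)^{-1/2}.$$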
 
  • #2
When you only have two measurements, a statistically calculated uncertainty is not very meaningful. You have no clue whether the value of the acceleration from a third measurement will be higher than the larger value, lower than the smaller value or in between the two. In my opinion, you need at least three data points before you start worrying about uncertainties. If you have only two, I would say consider half the difference between the two values as an estimate of your uncertainty. Uncertainties are fuzzy; the fewer data points you have, the fuzzier they become.
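As a hypothetical numerical illustration of that rule of thumb: if the two runs gave ##a_1 = 2.0~\mathrm{m/s^2}## and ##a_2 = 2.4~\mathrm{m/s^2}##, you would quote roughly ##a \approx 2.2 \pm 0.2~\mathrm{m/s^2}##, i.e. the midpoint plus or minus half the difference.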
 
  • #3
Lotto said:
To calculate ##a## we use ##s=\frac 12 a t^2##.
This formula assumes that the acceleration is constant, that the displacement at ##t=0## is 0, and that the velocity at ##t=0## is also 0. Is that valid?
 
  • #4
kuruman said:
When you only have two measurements, a statistically calculated uncertainty is not very meaningful. You have no clue whether the value of the acceleration from a third measurement will be higher than the larger value, lower than the smaller value or in between the two. In my opinion, you need at least three data points before you start worrying about uncertainties. If you have only two, I would say consider half the difference between the two values as an estimate of your uncertainty. Uncertainties are fuzzy; the fewer data points you have, the fuzzier they become.
OK, so let's say I have 10+ measurements and we suppose that the motion has constant acceleration. All I want to know is the general principle I can apply in such cases; my measurement of the acceleration was just an example.
 
  • #5
Lotto said:
OK, so let's say I have 10+ measurements and we suppose that the motion has constant acceleration. All I want to know is the general principle I can apply in such cases; my measurement of the acceleration was just an example.
OK, so in that case you will be using statistical software to estimate your acceleration (or whatever). Along with the estimate of your parameter ##a##, the statistical software will give you a standard error or some other estimate of the uncertainty of ##a##. You can just use that directly as ##\Delta a## if you think that only the statistical errors are important.

If you believe that there are also important systematic uncertainties then you can include them as $$\Delta a =\sqrt{\Delta a_{\text{statistical}}^2+\Delta a_{\text{systematic}}^2}$$
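As a concrete example of this workflow, here is a minimal Python/SciPy sketch (the data values, the assumed ##\Delta s## of 0.2 m and the systematic term are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: times (s), distances (m), and distance uncertainties (m).
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
s = np.array([0.3, 1.1, 2.4, 4.1, 6.3, 9.2, 12.1, 16.3, 20.1, 24.8])
ds = np.full_like(s, 0.2)           # assumed uncertainty of each distance

def model(t, a):
    """Kinematics model s = (1/2) a t^2 with a as the only free parameter."""
    return 0.5 * a * t**2

# Weighted least-squares fit; absolute_sigma=True treats ds as absolute errors.
popt, pcov = curve_fit(model, t, s, sigma=ds, absolute_sigma=True)
a_hat = popt[0]
da_stat = np.sqrt(pcov[0, 0])       # statistical standard error of a

# Optional systematic contribution, combined in quadrature as in the formula above.
da_syst = 0.05                      # assumed systematic uncertainty (made up)
da_total = np.hypot(da_stat, da_syst)

print(f"a = {a_hat:.3f} +/- {da_stat:.3f} (stat), total +/- {da_total:.3f}")
```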
 
  • Like
Likes vanhees71
  • #6
Are you sure that you have a different error on all ##s## and ##t## measurements? I am asking because it really makes things more complicated than high-school math. Generally you would want to convert errors in the x-direction into y-errors and then proceed normally. If this is not possible, or the errors in ##s## and ##t## are not Gaussian, or the conventional methods would yield a bias in your particular experiment, then you have to do a bootstrap.
One of the conventional methods is a ##\chi^2## fit. In your case I would minimize $$\chi^2=\sum_i \frac{\left(s_i-\frac{1}{2}at_i^2\right)^2}{\sigma^2_{s_i} +\left(\frac{\partial\,(\frac{1}{2}at^2)}{\partial\,(t^2)}\right)^2\sigma^2_{t^2_i}}.$$ Note that I use the error of ##t_i^2##, not just ##t_i##. Now you would have to compute ##\frac{\partial \chi^2}{\partial a}##, set it equal to 0 and solve for ##a##. If the errors on ##s## and ##t## are Gaussian, then you can use something like $$\Delta a=\sqrt{\sum_i \left[(\partial_{s_i} a\, \Delta s_i)^2+(\partial_{t_i^2} a\, \Delta t_i^2)^2\right]},$$ but to be safe, better to derive it yourself.
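Here is one possible numerical sketch of that effective-variance ##\chi^2## fit in Python (data values are invented; since the denominator depends on ##a##, the weights are simply iterated):

```python
import numpy as np

# Hypothetical data: times, distances and their point-by-point uncertainties.
t  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
dt = np.array([0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.05, 0.05])
s  = np.array([0.3, 1.1, 2.4, 4.1, 6.3, 9.2, 12.1, 16.3])
ds = np.array([0.10, 0.10, 0.15, 0.15, 0.20, 0.20, 0.25, 0.25])

x      = 0.5 * t**2        # model: s = a * x
sig_t2 = 2.0 * t * dt      # propagated error of t^2: sigma_{t^2} ~ 2 t sigma_t

# The weight of each point depends on a through d(0.5 a t^2)/d(t^2) = a/2,
# so start from an unweighted estimate and iterate.
a = np.sum(x * s) / np.sum(x**2)
for _ in range(20):
    var_eff = ds**2 + (0.5 * a)**2 * sig_t2**2   # sigma_s^2 + (a/2)^2 sigma_{t^2}^2
    w = 1.0 / var_eff
    a = np.sum(w * x * s) / np.sum(w * x**2)     # minimizes chi^2 at fixed weights

da = 1.0 / np.sqrt(np.sum(w * x**2))             # standard error at fixed weights
print(f"a = {a:.3f} +/- {da:.3f}")
```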
 
  • #7
Leopold89 said:
Are you sure that you have a different error on all ##s## and ##t## measurements? I am asking because it really makes things more complicated than high-school math. Generally you would want to convert errors in the x-direction into y-errors and then proceed normally. If this is not possible, or the errors in ##s## and ##t## are not Gaussian, or the conventional methods would yield a bias in your particular experiment, then you have to do a bootstrap.
One of the conventional methods is a ##\chi^2## fit. In your case I would minimize $$\chi^2=\sum_i \frac{\left(s_i-\frac{1}{2}at_i^2\right)^2}{\sigma^2_{s_i} +\left(\frac{\partial\,(\frac{1}{2}at^2)}{\partial\,(t^2)}\right)^2\sigma^2_{t^2_i}}.$$ Note that I use the error of ##t_i^2##, not just ##t_i##. Now you would have to compute ##\frac{\partial \chi^2}{\partial a}##, set it equal to 0 and solve for ##a##. If the errors on ##s## and ##t## are Gaussian, then you can use something like $$\Delta a=\sqrt{\sum_i \left[(\partial_{s_i} a\, \Delta s_i)^2+(\partial_{t_i^2} a\, \Delta t_i^2)^2\right]},$$ but to be safe, better to derive it yourself.
And if the systematic errors for ##t## and ##s## were all the same, i.e. just one ##\Delta t## and one ##\Delta s##, what would it look like?

Should I calculate the standard deviation of my values ##a_1, a_2, ...## and a weighted arithmetic mean of the errors ##\Delta a_1, \Delta a_2, ...##? I would then add the two using that formula with the square roots. Would that be a legitimate way to do it?

And the final ##a## would be just an arithmetic mean of ##a_1, a_2, ...##?
 
  • #8
Lotto said:
And if the systematic errors for ##t## and ##s## were all the same, i.e. just one ##\Delta t## and one ##\Delta s##, what would it look like?

Should I calculate the standard deviation of my values ##a_1, a_2, ...## and a weighted arithmetic mean of the errors ##\Delta a_1, \Delta a_2, ...##? I would then add the two using that formula with the square roots. Would that be a legitimate way to do it?

And the final ##a## would be just an arithmetic mean of ##a_1, a_2, ...##?
Possible. You could use GLS (generalized least squares), after converting ##\Delta t## into ##\Delta s##. Then you can try to rewrite the estimator ##\hat \beta## so that it looks like an average.

P.S. No, it does not work with the mean. Here is an example where you can see that the estimator is not the mean.
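For illustration, assuming the converted model is ##s_i = a x_i + \varepsilon_i## with ##x_i = \tfrac12 t_i^2## and ##\operatorname{Var}(\varepsilon_i) = \sigma_i^2## (an illustrative setup, not necessarily the example originally referred to), the weighted least-squares estimator is
$$\hat a = \frac{\sum_i x_i s_i/\sigma_i^2}{\sum_i x_i^2/\sigma_i^2} = \frac{\sum_i \left(x_i^2/\sigma_i^2\right)(s_i/x_i)}{\sum_i x_i^2/\sigma_i^2},$$
i.e. a weighted combination of the individual ##a_i = s_i/x_i## with weights ##x_i^2/\sigma_i^2##. It reduces to the plain arithmetic mean of the ##a_i## only if those weights all happen to be equal.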
 

Related to How to statistically calculate the final value?

1. What is the final value theorem in statistics?

The final value theorem is a concept primarily used in control theory and signal processing, not directly in statistics. However, in statistical terms, calculating the final value often involves determining the end result of a process or dataset, such as the mean or the cumulative sum.

2. How do you calculate the final value of a dataset?

The final value of a dataset can be calculated using various statistical methods depending on the context. For example, if you are looking for the final cumulative value, you would sum all the data points. If you need the final average value, you would calculate the mean by summing all data points and dividing by the number of points.

3. What statistical methods can be used to determine the final value?

Common statistical methods to determine the final value include calculating the mean, median, mode, sum, and weighted average. The choice of method depends on the nature of the data and the specific question you are trying to answer.

4. How do you ensure accuracy when calculating the final value?

To ensure accuracy, you should use precise measurements and appropriate statistical techniques. Double-check your calculations, use software tools for complex datasets, and consider potential errors or biases in your data collection and analysis processes.

5. Can the final value be affected by outliers in the dataset?

Yes, outliers can significantly affect the final value, especially in measures like the mean. It's important to identify and address outliers by either removing them, using robust statistical methods, or applying transformations to minimize their impact.
