When Are Time-Dependent Fitted Parameters Statistically Distinct?

  • Thread starter NoobixCube
  • Start date
In summary, the thread discusses how to decide when two fitted parameters, s and s', are distinctly different from each other. The decision rests on a confidence level: assuming both fits estimate the same underlying value, one computes the probability that the two results would come out at least as far apart as they did; the smaller that probability, the higher the confidence that the parameters really differ. The t-test is one way to compute this, but there are others. This matters in scientific research because it determines whether differences between parameters are significant relative to their respective errors.
  • #1
NoobixCube
Suppose I have a fitted parameter [tex]s[/tex] with an error of [tex]\pm \sigma_{s}[/tex], both of which are time dependent. Later on I gather more data and re-fit, since the parameter should have changed by then, and I find a new value [tex]s'[/tex] with error [tex]\pm \sigma_{s'}[/tex]. Scientifically, when are these values said to be distinctly different from each other? In other words, how much 'error overlap' can [tex]s[/tex] and [tex]s'[/tex] have and still be considered different? Your thoughts would be most welcome. I have heard that the t-test is one way. Are there any others?
 
  • #2
What one usually specifies is a "confidence level". That means you do the following: you *suppose* that the two results were actually "the same", that is, drawn from the same distribution (that distribution comes from the error model of the measurement, or possibly from intrinsically random processes in the phenomenon you are trying to measure). You then calculate the probability that, for two trials (each consisting of a single measurement or of many), the estimated values come out at least as far apart as the difference you actually found. That probability is the complement of the confidence level at which you can say they are different: it is the probability that you could have obtained such a difference even though the underlying parameter was in fact the same.
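A minimal numerical sketch of that recipe, assuming the two fits are independent and their errors are approximately Gaussian (the function name and example numbers are only illustrative): combine the errors in quadrature and compute the two-sided tail probability for the observed difference.

[code]
import math

def significance_of_difference(s, sigma_s, s_new, sigma_s_new):
    """Two-sided p-value for the hypothesis that s and s' estimate the same
    underlying value, assuming independent, approximately Gaussian errors."""
    # Standard error of the difference: individual errors combined in quadrature.
    sigma_diff = math.sqrt(sigma_s**2 + sigma_s_new**2)
    # Number of combined standard errors separating the two estimates.
    z = abs(s - s_new) / sigma_diff
    # Probability of a separation at least this large if the true values agree.
    p_value = math.erfc(z / math.sqrt(2))
    return z, p_value

# Example: s = 1.20 +/- 0.05, re-fitted later as s' = 1.35 +/- 0.06
z, p = significance_of_difference(1.20, 0.05, 1.35, 0.06)
print(f"z = {z:.2f}, p = {p:.4f}")  # confidence level of a difference is 1 - p
[/code]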
 
  • #3
Thanks for your post, vanesch :)
 

FAQ: When Are Time-Dependent Fitted Parameters Statistically Distinct?

What does it mean when results are statistically different?

When results are statistically different, it means that there is a significant difference between two or more groups or conditions being compared. This difference is unlikely to be explained by chance or random variation alone, and can therefore be attributed to the independent variable being studied.

How is statistical difference determined?

Statistical difference is usually determined through statistical tests such as t-tests or ANOVA. These tests produce a p-value, which is the probability of obtaining results at least as extreme as those observed if there were no real difference. If the p-value is below a predetermined threshold (commonly 0.05), the results are considered statistically different.
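For instance, if raw measurements from the two fitting epochs are available, a two-sample comparison could be sketched like this (the arrays are made-up numbers; Welch's variant is used so the two groups need not share a variance):

[code]
from scipy import stats

# Hypothetical measurements from the two epochs (illustrative values only).
run_1 = [1.18, 1.22, 1.19, 1.25, 1.21, 1.17]
run_2 = [1.31, 1.36, 1.33, 1.38, 1.34, 1.35]

# Welch's t-test does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(run_1, run_2, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The two sets are statistically different at the 5% level.")
[/code]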

What is the importance of statistical difference in research?

Statistical difference is important in research because it allows us to determine if the results we observe are due to the variables we are studying or simply due to chance. This helps us draw meaningful conclusions and make accurate predictions based on our data.

Can results be statistically different but still have overlapping confidence intervals?

Yes, it is possible for results to be statistically different even though their confidence intervals overlap. A confidence interval represents the range of values within which the true population parameter is likely to fall. What matters for the comparison, however, is the uncertainty of the difference, which combines both individual uncertainties; so even when the two intervals overlap somewhat, the means or proportions of the two groups can still be statistically different if the p-value is below the predetermined threshold.
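A quick numerical illustration, using two made-up estimates of 0 ± 1 and 3 ± 1 (value ± standard error): their 95% intervals overlap, yet the difference is significant at the 5% level.

[code]
import math

# Two hypothetical estimates (value, standard error), chosen for illustration.
a, se_a = 0.0, 1.0
b, se_b = 3.0, 1.0

# Individual 95% confidence intervals.
ci_a = (a - 1.96 * se_a, a + 1.96 * se_a)   # (-1.96, 1.96)
ci_b = (b - 1.96 * se_b, b + 1.96 * se_b)   # ( 1.04, 4.96)
print("intervals overlap:", ci_a[1] > ci_b[0])   # True

# Test on the difference: the two errors add in quadrature.
z = abs(a - b) / math.sqrt(se_a**2 + se_b**2)    # about 2.12
p = math.erfc(z / math.sqrt(2))                  # about 0.034 < 0.05
print(f"z = {z:.2f}, p = {p:.3f}")
[/code]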

How can researchers ensure that their results are statistically different?

To ensure that results are statistically different, researchers must carefully design their study, use appropriate statistical tests, and analyze their data accurately. It is also important to have a large enough sample size, which increases the power of the statistical tests and reduces the chance of missing a real difference (a false negative). A rough sample-size sketch is given below.
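As a sketch of how sample size relates to power, here is a normal-approximation estimate of the sample size needed per group; the effect size and target power are illustrative assumptions, not recommendations.

[code]
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample test of
    means (normal approximation). effect_size is the expected difference in
    means divided by the common standard deviation (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Illustrative: detecting a half-standard-deviation shift with 80% power.
print(n_per_group(effect_size=0.5))   # roughly 63 per group
[/code]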
