Measurements - How can it be that precise?

In summary: it is just a statistical calculation based on the assumptions that the errors are independent and have a mean of 0. It is not about trust or belief; it is simply a mathematical tool for estimating the "true value" from a limited number of measurements.
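The ##1/\sqrt{N}## scaling of the uncertainty of the mean can be checked numerically. Below is a minimal Python sketch under the stated assumptions (independent, zero-mean Gaussian errors); the names `N`, `TRIALS`, and `SIGMA` are illustrative choices, not anything from the thread:

```python
import random
import statistics

random.seed(0)

N = 100          # measurements per sample
TRIALS = 20000   # number of repeated samples
SIGMA = 2.0      # standard deviation of a single measurement

# Collect the mean of N independent measurements, many times over.
means = []
for _ in range(TRIALS):
    sample = [random.gauss(0.0, SIGMA) for _ in range(N)]
    means.append(statistics.fmean(sample))

# The spread of the sample means should be close to SIGMA / sqrt(N).
spread = statistics.stdev(means)
print(spread, SIGMA / N**0.5)
```

With these parameters the observed spread of the means comes out close to ##\sigma/\sqrt{N} = 0.2##, which is the factor the question below asks about.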
  • #36
Guys, could you give me an online source where the factor $$1/\sqrt{N}$$ in the standard deviation of the mean of a single data sample is derived correctly?
Thanks.
 
  • #39
Dale, that's it. Now I get it. Thanks to you and the other guys, Stephen, FactChecker etc.

In my defense: the (very good) book says, see the derivation above, my translation: "Here the ##s^2\left(y_i\right)## are all the same because they are each built from the same input values."
What I absolutely didn't understand is that, of course, they don't need to be the same input values, only the same ##s^2##.
That was my big mistake.
Having said this, I would have found it fair if the professors had remarked on this. It seems they expected a knowledge of statistics above my level.

Thanks and sorry for my ignorance.
 
  • #40
Which formulae for population parameters also work for sample statistics that estimate them?

If we have random samples ##S_x##, ##S_y## of N things taken from each of two random variables ##X## and ##Y##, we can imagine the values in ##S_x## and ##S_y## to define an empirical joint probability distribution. So the sample mean of the pairwise sums of values in ##S_x## and ##S_y## should be the sum of their sample means - analogous (but not identical) to the fact that ##E(X+Y) = E(X) + E(Y)##.

However, if ##X## and ##Y## are independent random variables, there is no guarantee that the empirical distribution of N pairs of numbers of the form ##(x,y), x \in S_x, y\in S_y## will factor as a distribution of x-values times an (independent) distribution of y-values. So we can't conclude that the sample variance of ##x+y## is the sample variance of the x-values plus the sample variance of the y-values.

If, instead of N pairs of values, we looked at all ##N^2## possible pairs ##(x,y)##, we would get an empirical distribution where the x and y values are independent. Then something analogous to ##Var(X+Y) = Var(X) + Var(Y)## should work out. We would have to be specific about which definition of "sample variance" is used. For example, can we use the unbiased estimators of the population variances?
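This contrast can be checked numerically. The sketch below (illustrative Python, my own variable names) uses the population-style divide-by-N variance, in which case the additivity over all ##N^2## pairs holds exactly, while the N paired sums pick up an empirical covariance term:

```python
import random

random.seed(1)

N = 1000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
ys = [random.gauss(0.0, 3.0) for _ in range(N)]

def pvar(data):
    # Population-style variance (divide by N) of an empirical distribution.
    m = sum(data) / len(data)
    return sum((d - m) ** 2 for d in data) / len(data)

# N paired sums: the empirical covariance of the pairing shows up.
paired = pvar([x + y for x, y in zip(xs, ys)])

# All N^2 pairs: here the empirical x- and y-distributions are independent
# by construction, so the variances add exactly (up to floating point).
all_pairs = pvar([x + y for x in xs for y in ys])

print(paired, all_pairs, pvar(xs) + pvar(ys))
```

With the unbiased (divide-by-N-1) estimator the all-pairs identity would only hold up to a factor depending on N, which is exactly the "which definition" caveat above.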
 
  • #41
I prefer to think that (using variables with zero mean for simple equations):
$$\sigma^2_{X+Y} = E\big( (X+Y)^2 \big) = E(X^2) + 2E(XY) + E(Y^2) = \sigma^2_X + 2\operatorname{cov}(X,Y) + \sigma^2_Y$$
So the independence of ##X## and ##Y## implies the desired result.
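For a concrete check of this identity, one can compute every term exactly over a small joint distribution. The sketch below (my own example, not from the thread) uses two independent fair dice and exact rational arithmetic, so ##Var(X+Y) = Var(X) + 2\operatorname{cov}(X,Y) + Var(Y)## holds with no rounding, and independence forces the covariance term to vanish:

```python
from itertools import product
from fractions import Fraction

# Two independent fair dice; the joint distribution is uniform on 36 pairs.
faces = range(1, 7)

def E(f):
    # Exact expectation of f(X, Y) over the uniform joint distribution.
    return sum(Fraction(f(x, y)) for x, y in product(faces, faces)) / 36

ex, ey = E(lambda x, y: x), E(lambda x, y: y)
var_x = E(lambda x, y: (x - ex) ** 2)
var_y = E(lambda x, y: (y - ey) ** 2)
cov = E(lambda x, y: (x - ex) * (y - ey))
var_sum = E(lambda x, y: (x + y - ex - ey) ** 2)

print(var_sum, var_x + 2 * cov + var_y, cov)
```

Because the dice are independent, `cov` is exactly zero and the variance of the sum is exactly the sum of the variances.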
 
  • #42
FactChecker said:
So the independence of ##X## and ##Y## implies the desired result.

##E(X^2) = \sigma_X^2 ## in the case of a random variable with zero mean. For a set of data ##x_i## realized from such a random variable, we don't necessarily have a zero sample mean.
 
  • #43
Stephen Tashi said:
##E(X^2) = \sigma_X^2 ## in the case of a random variable with zero mean. For a set of data ##x_i## realized from such a random variable, we don't necessarily have a zero sample mean.
Good point. But in terms of expected values, it is correct.
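The distinction between the two posts above can be made concrete: for a finite sample from a zero-mean distribution, the average of ##x_i^2## and the sample variance about ##\bar{x}## differ by exactly ##\bar{x}^2##, and ##\bar{x}## is almost never zero. A small illustrative Python check (names my own):

```python
import random

random.seed(2)

n = 10
xs = [random.gauss(0.0, 1.0) for _ in range(n)]

xbar = sum(xs) / n
mean_sq = sum(x * x for x in xs) / n          # estimate of E(X^2)
svar = sum((x - xbar) ** 2 for x in xs) / n   # variance about the sample mean

# The two estimates differ by exactly xbar**2, which is nonzero
# for a finite sample even though E(X) = 0.
print(xbar, mean_sq - svar, xbar ** 2)
```

So in expectation the identity ##E(X^2) = \sigma_X^2## is correct, but any particular sample realizes it only up to the ##\bar{x}^2## term.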
 
