Why Does E(X1 + X2 + ... + Xn) Equal n*E(Xi) in Statistics?

  • Thread starter: cdux
  • Tags: Statistics
In summary, the thread works through why Theta_n = (2/n)(X1 + X2 + ... + Xn) is an unbiased estimator of Theta for a U(0, Theta) distribution. The proof shows that the expected value of the estimator equals the true parameter, using the mean Theta/2 of the uniform distribution and the property that the expectation of a sum of random variables equals the sum of their individual expectations. A similar property holds for the variance: the variance of a sum of uncorrelated random variables equals the sum of their individual variances.
  • #1
cdux
I'm a bit confused about a particular step in a calculation.

Given that Theta_n = (2/n)(X1 + X2 + ... + Xn) is claimed to be an unbiased estimator of Theta for U(0, Theta), we have to prove this by showing E(Theta_n) = Theta.

We proceed: E(Theta_n) = (2/n) E(X1 + X2 + ... + Xn).

At this point the solution gives (2/n) * n * (Theta/2) = Theta, which is the sought-after result.

I understand that Theta/2 is the mean of U(0, Theta), but how exactly does one go from E(X1 + X2 + ... + Xn) to n*E(Xi)? Is E(X1) = E(X2) = ... = E(Xn)? If so, why?

(PS: A more complex example is Var(X1 + X2 + ... + Xn), which also appears to equal n*Var(Xi) (= nσ^2).)
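For reference, the step in question combines linearity of expectation (which needs no independence) with the fact that identically distributed variables share the same mean; a minimal worked sketch, assuming X1, ..., Xn are i.i.d. U(0, Theta):

```latex
% Linearity of expectation plus identical distribution: E(X_i) = \Theta/2 for every i.
\begin{align*}
E(X_1 + X_2 + \cdots + X_n) &= E(X_1) + E(X_2) + \cdots + E(X_n) = n \cdot \frac{\Theta}{2}, \\
E(\Theta_n) &= \frac{2}{n} \cdot n \cdot \frac{\Theta}{2} = \Theta .
\end{align*}
```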
 
  • #2
Hrm, an afterthought: I guess it may simply be that the sum's expectation is the "mean of the whole sample" without the divide-by-n, so it comes out to n * E(Xi).

I guess the same might apply in the case of Var too.
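One quick way to sanity-check the unbiasedness claim numerically is a Monte Carlo simulation; a hedged sketch (not from the thread; theta, n, and reps are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 5.0        # true parameter of U(0, theta); illustrative value
n = 10             # sample size per replication
reps = 200_000     # number of Monte Carlo replications

# Draw reps independent samples of size n from U(0, theta).
samples = rng.uniform(0.0, theta, size=(reps, n))

# The estimator Theta_n = (2/n)(X1 + ... + Xn), i.e. twice the sample mean.
estimates = 2.0 * samples.mean(axis=1)

# If E(Theta_n) = theta, the average estimate should land near theta.
print(estimates.mean())  # expected to print a value close to 5.0
```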
 
  • #3
I'm having difficulty seeing how that could be true for Var.

Var((2/n)(X1 + X2 + ... + Xn)) = (4/n^2) * n * Var(Xi).

I understand the 4/n^2 coming out as a property of Var, but how is Var(X1 + X2 + ... + Xn) = n*Var(Xi)?
 
  • #4
Never mind, I found it. If X1, X2, ... are uncorrelated, then Var(ΣXi) = ΣVar(Xi), which follows from a proof using the identity Var(X) = E[X^2] - (E[X])^2.
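For completeness, a sketch of that proof, assuming Cov(Xi, Xj) = 0 for i ≠ j:

```latex
\begin{align*}
\operatorname{Var}\Big(\sum_{i=1}^{n} X_i\Big)
  &= E\Big[\Big(\sum_i X_i\Big)^2\Big] - \Big(E\Big[\sum_i X_i\Big]\Big)^2 \\
  &= \sum_i \big(E[X_i^2] - E[X_i]^2\big)
     + \sum_{i \neq j} \big(E[X_i X_j] - E[X_i]\,E[X_j]\big) \\
  &= \sum_i \operatorname{Var}(X_i) + \sum_{i \neq j} \operatorname{Cov}(X_i, X_j)
   = \sum_i \operatorname{Var}(X_i).
\end{align*}
```

For identically distributed Xi this gives n * Var(X1) = nσ^2, which is the step used in post #3.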
 

FAQ: Why Does E(X1 + X2 + ... + Xn) Equal n*E(Xi) in Statistics?

What is an estimator in statistics?

An estimator in statistics is a function of the sample data used to estimate a population parameter. Estimators support inference about the population and are central to hypothesis testing and confidence interval construction.

What are the different types of estimators?

There are two main types of estimators: point estimators and interval estimators. Point estimators give a single estimate of the population parameter, while interval estimators provide a range of values within which the population parameter is likely to fall.
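As a standard illustration (assuming a sample X1, ..., Xn with known population standard deviation σ), the sample mean is a point estimator of the population mean μ, and the familiar 95% confidence interval is an interval estimator:

```latex
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i
\quad \text{(point estimator of } \mu\text{)},
\qquad
\bar{X} \pm 1.96\,\frac{\sigma}{\sqrt{n}}
\quad \text{(interval estimator for } \mu\text{)}.
```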

How do you determine the accuracy of an estimator?

The accuracy of an estimator is determined by its bias and variance. Bias refers to the difference between the expected value of the estimator and the true value of the population parameter. A low bias indicates a more accurate estimator. Variance refers to the spread of the estimator's values around its expected value. A lower variance also indicates a more accurate estimator.
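In symbols, using the standard definitions for an estimator \hat{\theta} of a parameter θ, bias and variance combine into the mean squared error:

```latex
\operatorname{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta,
\qquad
\operatorname{Var}(\hat{\theta}) = E\big[(\hat{\theta} - E[\hat{\theta}])^2\big],
\qquad
\operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^2 .
```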

What is the central limit theorem and how does it relate to estimators?

The central limit theorem states that the sampling distribution of the mean of a large sample from any population with finite variance will be approximately normal, regardless of the shape of the population distribution. This justifies normal-based inference about estimators such as the sample mean, even when the population itself is not normally distributed.
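Formally, the standard statement for i.i.d. observations with mean μ and finite variance σ² is:

```latex
\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{d}\; N(0, 1)
\qquad \text{as } n \to \infty .
```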

How can estimators be improved upon?

Estimators can be improved by increasing the sample size, which reduces the variance of the estimate. Using more robust estimators, such as the median instead of the mean, can reduce the impact of outliers. Checking for and correcting bias can also improve an estimator's accuracy.
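The sample-size effect can be made precise with the same variance-of-a-sum property discussed in the thread (assuming i.i.d. observations with variance σ²):

```latex
\operatorname{Var}(\bar{X})
 = \operatorname{Var}\Big(\frac{1}{n}\sum_{i=1}^{n} X_i\Big)
 = \frac{1}{n^2} \cdot n\sigma^2
 = \frac{\sigma^2}{n},
```

which shrinks toward zero as the sample size n grows.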
