Statistics Question - Expected value of an estimator

In summary: Each Y_i is a random draw from the same population, so the expected value of every Y_i is the population mean, which is why the sample mean is an unbiased estimator. However, if each Y_i is drawn from a different population, then the means will not be the same, and the statistics will be different.
  • #1
michonamona
Hello friends!

Given an estimator of the population mean:

[tex]\bar{Y}=\frac{\sum^{N}_{i=1}Y_{i}}{N}[/tex]

The expected value of [tex]\bar{Y}[/tex] is :

[tex]E(\bar{Y}) = \frac{1}{N}E(Y_{1})+\frac{1}{N}E(Y_{2})+\cdots+\frac{1}{N}E(Y_{N})[/tex]

Therefore:

[tex]E(\bar{Y}) = \frac{1}{N}\mu+\frac{1}{N}\mu+\cdots+\frac{1}{N}\mu=\mu[/tex] where [tex]\mu[/tex] is the population mean.
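A minimal numerical sanity check of that identity (a sketch with illustrative values, assuming each Y_i is an independent draw from a population with mean 5):

[code]
import numpy as np

rng = np.random.default_rng(0)
mu, N, trials = 5.0, 10, 100_000   # illustrative values only

# Each row is one sample Y_1, ..., Y_N drawn from the same population with mean mu
samples = rng.normal(loc=mu, scale=2.0, size=(trials, N))

ybar = samples.mean(axis=1)        # the estimator, computed once per trial
print(ybar.mean())                 # approximately mu, i.e. E(Ybar) = mu
[/code]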


My question is, why are [tex]E(Y_{1}), E(Y_{2}), \ldots, E(Y_{N})[/tex] all equal to [tex]\mu[/tex]?
 
  • #2
michonamona said:
My question is, why are [tex]E(Y_{1}), E(Y_{2}), \ldots, E(Y_{N})[/tex] all equal to [tex]\mu[/tex]?

Isn't that exactly what is meant by "[itex]\mu[/itex] is the population mean"?

By the way, you are missing a factor of 1/N in the definition of the estimator.
 
  • #3
jbunniii said:
Isn't that exactly what is meant by "[itex]\mu[/itex] is the population mean"?

By the way, you are missing a factor of 1/N in the definition of the estimator.

Thank you for your reply.

My mistake, I edited the equation.

But that's exactly what my question is about. If the expected value of Ybar is equal to mu, then why is the expected value of EACH of the terms in the sum defining Ybar also mu?

It must be something really simple that I'm missing...

Thanks
M
 
  • #4
michonamona said:
Thank you for your reply.

My mistake, I edited the equation.

But that's exactly what my question is about. If the expected value of Ybar is equal to mu, then why is the expected value of EACH of the terms in the sum defining Ybar also mu?

It must be something really simple that I'm missing...

Thanks
M

Well, what ARE these components [itex]Y_i[/itex]? I assume they are random samples from the population, are they not?

I think you are arguing in reverse. The expected value of the estimator is [itex]\mu[/itex] BECAUSE the expected value of each of the random samples is [itex]\mu[/itex], not the other way around.
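
Written out in full (the same reasoning, using linearity of expectation and the assumption that every sample satisfies [itex]E(Y_i) = \mu[/itex]):

[tex]E(\bar{Y}) = E\left(\frac{1}{N}\sum_{i=1}^{N} Y_i\right) = \frac{1}{N}\sum_{i=1}^{N} E(Y_i) = \frac{1}{N}\sum_{i=1}^{N} \mu = \frac{N\mu}{N} = \mu[/tex]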
 
  • #5
jbunniii said:
...expected value of each of the random samples is [itex]\mu[/itex], not the other way around.


So each of the random samples Y_i was taken from the population? Meaning the size of each Y_i is the same as the population?


Thank you,
M
 
  • #6
michonamona said:
So each of the random samples Y_i was taken from the population?

I don't know. I assume so, but you're the one who asked the original question. Is it homework? If so, doesn't the homework problem tell you what the [itex]Y_i[/itex]'s are?

Meaning the size of each Y_i is the same as the population?

I don't know what you mean by "size." If each [itex]Y_i[/itex] comes from the same population (more formally, the same probability distribution) then the STATISTICS should be the same for each [itex]Y_i[/itex]. The mean ([itex]\mu[/itex]) is one such statistic.
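
A small sketch contrasting the two cases (with illustrative population parameters, not taken from the thread):

[code]
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

# Case 1: every Y_i drawn from the SAME population (mean 5) -> E(Y_i) = 5 for all i
same_pop = rng.normal(5.0, 1.0, size=(trials, 4))
print(same_pop.mean(axis=0))    # roughly [5, 5, 5, 5]

# Case 2: each Y_i drawn from a DIFFERENT population -> the means differ
means = np.array([1.0, 3.0, 5.0, 7.0])
diff_pop = rng.normal(means, 1.0, size=(trials, 4))
print(diff_pop.mean(axis=0))    # roughly [1, 3, 5, 7]
[/code]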
 

FAQ: Statistics Question - Expected value of an estimator

1. What is the expected value of an estimator in statistics?

The expected value of an estimator is the average value that the estimator would take if we were to repeat the estimation process an infinite number of times. It is a measure of the central tendency of the estimator and is often used as a benchmark for evaluating the performance of different estimators.
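
A minimal sketch of "repeating the estimation process many times" (with a hypothetical population whose true mean is 10):

[code]
import numpy as np

rng = np.random.default_rng(2)
true_mean, n, repeats = 10.0, 25, 50_000   # illustrative values

# Repeat the whole estimation procedure many times and average the resulting estimates
estimates = rng.exponential(scale=true_mean, size=(repeats, n)).mean(axis=1)
print(estimates.mean())   # close to 10: the expected value of the estimator
[/code]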

2. How is the expected value of an estimator calculated?

The expected value of an estimator is calculated by taking the sum of all possible values of the estimator, each multiplied by its corresponding probability of occurrence. This is often represented by the formula E[estimator] = ∑(x * P(x)), where x ranges over the possible values of the estimator and P(x) is the probability of each value occurring. For a continuous estimator, the sum is replaced by an integral over its probability density.
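
For example, for a simple discrete case (a fair six-sided die, purely for illustration):

[code]
# E[X] = sum over x of x * P(x) for a fair six-sided die
values = [1, 2, 3, 4, 5, 6]
prob = 1 / 6
expected = sum(x * prob for x in values)
print(expected)   # 3.5
[/code]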

3. What does the expected value of an estimator tell us about the accuracy of the estimator?

The expected value of an estimator gives us an idea of the accuracy of the estimator. If the expected value equals the true value of the parameter being estimated, the estimator is unbiased. If the expected value differs from the true value, the estimator is biased, and a large bias suggests it may not be a good measure of the parameter.
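
A classic illustration (not part of the original answer) is estimating a variance: dividing by N gives a biased estimator, while dividing by N-1 is unbiased:

[code]
import numpy as np

rng = np.random.default_rng(3)
sigma2, n, repeats = 4.0, 10, 200_000   # illustrative values

x = rng.normal(0.0, np.sqrt(sigma2), size=(repeats, n))
biased = x.var(axis=1, ddof=0)      # divides by N
unbiased = x.var(axis=1, ddof=1)    # divides by N - 1

print(biased.mean())     # about 3.6 = (N-1)/N * sigma^2, below the true 4.0
print(unbiased.mean())   # about 4.0, matching the true variance
[/code]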

4. How does the sample size affect the expected value of an estimator?

The sample size affects the behaviour of an estimator in two related ways. First, as the sample size increases, the estimator itself becomes more stable and tends to fall closer to the true value of the parameter; this is the law of large numbers. (For an unbiased estimator such as the sample mean, the expected value itself does not change with sample size; it is the spread around that value that shrinks.) Second, a larger sample size reduces the variability of the estimator, giving a smaller standard error and a more precise estimate.
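
A quick sketch of this shrinking spread (illustrative numbers: population mean 5, standard deviation 2):

[code]
import numpy as np

rng = np.random.default_rng(4)
mu, repeats = 5.0, 100_000

for n in (10, 100, 1000):
    ybar = rng.normal(mu, 2.0, size=(repeats, n)).mean(axis=1)
    # The mean of the estimates stays near mu; their spread shrinks like 2 / sqrt(n)
    print(n, ybar.mean(), ybar.std())
[/code]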

5. Can the expected value of an estimator be negative?

Yes, the expected value of an estimator can be negative. This can happen if the estimator has a high probability of taking negative values and/or if the negative values carry more weight in the calculation of the expected value; for an unbiased estimator, it happens exactly when the parameter being estimated is negative. Conversely, the expected value will be positive whenever the estimator is unbiased and the parameter being estimated is positive.
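
A trivial sketch: the sample mean of a population whose true mean is negative has a negative expected value (illustrative numbers):

[code]
import numpy as np

rng = np.random.default_rng(5)

# Population with true mean -3: the sample-mean estimator is unbiased, so E(Ybar) = -3
ybar = rng.normal(-3.0, 1.0, size=(100_000, 20)).mean(axis=1)
print(ybar.mean())   # approximately -3
[/code]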
