Asymptotically unbiased & consistent estimators

In summary, the textbook proves that if "θ hat" is an unbiased estimator of θ and Var(θ hat) -> 0 as n -> ∞, then "θ hat" is a consistent estimator of θ. A remark adds that "unbiased" can be replaced by "asymptotically unbiased" and the result still holds, but the textbook gives no proof of that version.
  • #1
kingwinner
Theorem: If "θ hat" is an unbiased estimator for θ AND Var(θ hat)->0 as n->∞, then it is a consistent estimator of θ.

The textbook proved this theorem using Chebyshev's inequality and the squeeze theorem, and I understand the proof.
BUT then there is a remark that we can replace "unbiased" by "asymptotically unbiased" in the above theorem and the result will still hold, and the textbook provided no proof. This is where I'm having a lot of trouble. I don't see how to prove it (i.e., that asymptotically unbiased together with variance -> 0 implies consistent). I tried to modify the original proof, but I can't get it to work under the weaker assumption of asymptotic unbiasedness.

I'm frustrated and I hope someone can explain how to prove it. Thank you!
 
  • #2
Hi kingwinner! :smile:

What about the following adjustment:

[tex]P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\leq P(|\hat{\theta}_n-E(\hat{\theta}_n)|+|E(\hat{\theta}_n)-\theta_0|\geq \varepsilon)\leq \frac{Var(\hat{\theta}_n)}{(\varepsilon-|E(\hat{\theta}_n)-\theta_0|)^2}\rightarrow 0[/tex]
 
  • #3
micromass said:
Hi kingwinner! :smile:

What about the following adjustment:

[tex]P(|\hat{\theta}_n-\theta_0|\geq \varepsilon)\leq P(|\hat{\theta}_n-E(\hat{\theta}_n)|+|E(\hat{\theta}_n)-\theta_0|\geq \varepsilon)\leq \frac{Var(\hat{\theta}_n)}{(\varepsilon-|E(\hat{\theta}_n)-\theta_0|)^2}\rightarrow 0[/tex]

Thanks for the help, but applying Chebyshev's inequality here requires [tex]\varepsilon-|E(\hat{\theta}_n)-\theta_0|[/tex]>0, which is not necessarily true?
 
  • #4
kingwinner said:
Thanks for the help, but applying Chebyshev's inequality here requires [tex]\varepsilon-|E(\hat{\theta}_n)-\theta_0|[/tex]>0, which is not necessarily true?

It's not necessarily true for every n, but it is true for all sufficiently large n. We know that

[tex]E(\hat{\theta}_n)\rightarrow \theta_0[/tex]

So there is some n0 such that for all n ≥ n0,

[tex]|E(\hat{\theta}_n)-\theta_0|<\varepsilon[/tex]

and hence, for those n,

[tex]\varepsilon-|E(\hat{\theta}_n)-\theta_0|>0[/tex]

Since consistency only concerns the limit, it's enough that the bound holds from n0 onward: the numerator Var([itex]\hat{\theta}_n[/itex]) tends to 0 while the denominator tends to [itex]\varepsilon^2[/itex] (because the bias term tends to 0), so the probability tends to 0.
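As a numerical sanity check of the theorem (a simulation sketch, not part of the proof), one can watch the consistency probability shrink for a concrete asymptotically unbiased estimator: the variance MLE, which divides by n instead of n-1. The true variance (4.0) and the tolerance ε = 0.5 below are illustrative choices, not from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0   # true variance of N(0, 2^2) -- illustrative choice
eps = 0.5      # tolerance in the consistency definition
reps = 2000    # Monte Carlo repetitions per sample size

for n in (10, 100, 1000, 10000):
    samples = rng.normal(0.0, 2.0, size=(reps, n))
    # Variance MLE: divides by n (ddof=0), so it is biased for each n
    # but asymptotically unbiased: E = (n-1)/n * sigma^2 -> sigma^2,
    # and its variance goes to 0 as n grows.
    est = samples.var(axis=1)
    miss = np.mean(np.abs(est - sigma2) >= eps)
    print(f"n={n:6d}  P(|est - sigma^2| >= {eps}) ~ {miss:.3f}")
```

The estimated probability P(|θ̂_n - θ| ≥ ε) should fall toward 0 as n grows, which is exactly what consistency asserts.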
 
  • #5
Thanks for the help! :) You're a legend...
 

FAQ: Asymptotically unbiased & consistent estimators

1. What is an asymptotically unbiased estimator?

An asymptotically unbiased estimator is one whose expected value converges to the true value of the population parameter as the sample size grows: E(θ hat_n) -> θ as n -> ∞. The estimator may be biased for every finite sample size, but the bias vanishes in the limit.
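A standard textbook example is the maximum-likelihood estimator of a normal variance, which divides by n; its expectation is (n-1)/n · σ², so the bias is -σ²/n and vanishes as n grows. The sketch below just evaluates that closed-form expectation (σ² = 4.0 is an illustrative value):

```python
# The variance MLE, var_hat = (1/n) * sum((x_i - x_bar)^2),
# has E[var_hat] = (n-1)/n * sigma^2, so bias = -sigma^2/n -> 0:
# biased for every finite n, yet asymptotically unbiased.
sigma2 = 4.0  # assumed true variance (illustrative)

for n in (2, 10, 100, 1000):
    expected = (n - 1) / n * sigma2
    bias = expected - sigma2
    print(f"n={n:5d}  E[var_hat]={expected:.4f}  bias={bias:+.4f}")
```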

2. How is consistency related to asymptotic unbiasedness?

Consistency is a property of an estimator meaning it converges in probability to the true value of the population parameter as the sample size increases. The theorem discussed above links the two notions: if an estimator is asymptotically unbiased and its variance tends to zero, then it is consistent. Note that neither property implies the other on its own; in particular, a consistent estimator need not be asymptotically unbiased without additional conditions (such as uniform integrability), since convergence in probability does not by itself force the expectations to converge.

3. What are the advantages of using asymptotically unbiased estimators?

The systematic error of an asymptotically unbiased estimator vanishes as the sample size grows, so with enough data its bias becomes negligible. Combined with a variance that tends to zero, this guarantees consistency, so large samples yield estimates close to the true parameter value with high probability.

4. Are there any limitations to using asymptotically unbiased estimators?

One limitation of asymptotically unbiased estimators is that they can carry a non-negligible bias at small sample sizes. This means they may be unsuitable in situations where only a small amount of data is available and finite-sample bias matters.

5. How can I determine if an estimator is asymptotically unbiased?

To determine if an estimator is asymptotically unbiased, compute its expected value E(θ hat_n) as a function of the sample size n and take the limit as n -> ∞. If that limit equals the true value of the population parameter, the estimator is asymptotically unbiased. (If E(θ hat_n) equals the true value for every n, the estimator is unbiased outright, which is the stronger property.)
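When the expectation has no convenient closed form, one can check it by Monte Carlo: estimate E(θ hat_n) by averaging the estimator over many simulated samples for increasing n and see whether the averages approach the true parameter. Below is a sketch for the variance MLE, whose expectation (n-1)/n · σ² is known, so the simulation can be compared against theory (σ² = 4.0 is an illustrative value):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0   # assumed true variance (illustrative)
reps = 5000    # Monte Carlo repetitions per sample size

# Estimate E[var_hat_n] for growing n and compare with the
# known expectation (n-1)/n * sigma^2, which tends to sigma^2.
for n in (5, 50, 500):
    samples = rng.normal(0.0, 2.0, size=(reps, n))
    mean_est = samples.var(axis=1).mean()
    theory = (n - 1) / n * sigma2
    print(f"n={n:4d}  MC E[var_hat] = {mean_est:.3f}  (theory: {theory:.3f})")
```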
