What is the best estimate for the unknown parameter in a normal distribution?

In summary: the thread discusses an experiment whose outcome is influenced by an unknown parameter, modeled by a normal distribution, with results always falling in the range [97, 103]. The experimenter concludes that the mean must be 100, but as the discussion points out, the natural estimator is the average of the experiments, not the result of any single experiment, and the conclusion should be stated about the parameter rather than the outcome.
  • #1
pamparana
Hello everyone,

I just started reading "Fundamentals of Statistical Signal processing" by Steven Kay and just got done with the first chapter, which describes the estimation problem, PDFs etc.

It has some very interesting problems at the end and some of them have me a bit confused. I think they are quite central to understanding the content of this chapter and I was hoping to see if you guys can shed some light on this.

Question:
An unknown parameter [tex]\theta[/tex] influences the outcome of an experiment, which is modeled by a random variable x. The PDF of x is:
[tex]p(x;\theta) = \frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(x-\theta\right)^{2}\right][/tex]
A series of experiments are performed and x is found to be always in the range [97, 103]. The experimenter concludes x must be 100. Comment on this.

I was thinking about this for a while. But I am not sure about my reasoning. I have a hunch that the experimenter is wrong to come to this conclusion but am unable to explain why. I have a feeling that I need to know the shape of the PDF and maybe know something about the particular estimator.

I would be really grateful if someone can help me with this. This seems quite a fundamental question and I have a feeling that I have not understood the subject matter well.

This is always a hassle with self-study!

Many thanks,

Luc
 
  • #2
In your description you are given that the PDF is normal with standard deviation 1 and an unknown mean. By performing experiments, the experimenter is estimating the mean to be 100.

The experimenter concludes x must be 100.

You should have said θ, not x.
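A quick simulation makes the point concrete (a sketch in Python, assuming θ = 100 and σ = 1 as in the problem; the seed and sample size are arbitrary choices): nearly all draws from N(100, 1) fall within 3 standard deviations, i.e. in [97, 103], and the sample mean, not any single outcome, is the estimate of θ.

```python
import random
import statistics

random.seed(0)

theta = 100.0  # the "unknown" mean, fixed here only for the simulation
sigma = 1.0    # standard deviation fixed at 1 by the model

# Draw many independent experiments x ~ N(theta, sigma^2)
samples = [random.gauss(theta, sigma) for _ in range(10_000)]

# Point estimate of theta: the sample mean
theta_hat = statistics.mean(samples)

# Fraction of outcomes within [97, 103], i.e. within 3 sigma of theta
in_range = sum(97 <= x <= 103 for x in samples) / len(samples)

print(f"sample mean (estimate of theta): {theta_hat:.3f}")
print(f"fraction in [97, 103]: {in_range:.4f}")
```

Seeing every x land in [97, 103] is consistent with θ = 100, but any individual x is still random; only the average of many experiments converges toward θ.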
 

FAQ: What is the best estimate for the unknown parameter in a normal distribution?

What is estimation theory?

Estimation theory is a branch of statistics that deals with the process of estimating the value of an unknown parameter based on a given set of data. It involves making inferences and predictions about the unknown parameter using statistical techniques.

Why is estimation theory important?

Estimation theory is important because it allows us to make informed decisions based on data. It helps us to understand the characteristics of a population and make predictions about future outcomes. It is widely used in various fields such as economics, engineering, and social sciences.

What are the two types of estimation?

The two types of estimation in estimation theory are point estimation and interval estimation. Point estimation involves estimating the value of the unknown parameter with a single value, while interval estimation provides a range of values within which the unknown parameter is likely to fall.
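The distinction can be sketched with the problem above, where σ = 1 is known (the data values below are hypothetical, chosen only for illustration):

```python
import math
import statistics

# Hypothetical experimental outcomes (illustrative values only)
data = [99.2, 100.5, 98.8, 101.1, 100.3]
sigma = 1.0  # standard deviation assumed known, as in the problem above

# Point estimate: a single value, the sample mean
theta_hat = statistics.mean(data)

# 95% interval estimate: theta_hat +/- 1.96 * sigma / sqrt(n)
n = len(data)
half_width = 1.96 * sigma / math.sqrt(n)
interval = (theta_hat - half_width, theta_hat + half_width)

print(f"point estimate: {theta_hat:.2f}")
print(f"95% interval: ({interval[0]:.2f}, {interval[1]:.2f})")
```

The point estimate answers "what is the single best guess for θ?", while the interval quantifies how far that guess might plausibly be from the truth.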

What is the difference between parametric and non-parametric estimation?

Parametric estimation assumes that the data follows a specific probability distribution, and the parameters of that distribution are estimated. Non-parametric estimation, on the other hand, does not make any assumptions about the underlying distribution and uses data-driven methods to estimate the unknown parameter.
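A minimal sketch of the contrast (hypothetical data, including a deliberate outlier): the parametric estimate trusts the normal model, while the non-parametric one makes no distributional assumption.

```python
import statistics

# Hypothetical outcomes, one of them an outlier
data = [99.5, 100.2, 99.8, 100.4, 130.0]

# Parametric: assume a normal model and estimate its mean parameter;
# the sample mean is pulled upward by the outlier
parametric_estimate = statistics.mean(data)

# Non-parametric: the sample median needs no model and resists the outlier
nonparametric_estimate = statistics.median(data)

print(f"parametric (mean):      {parametric_estimate:.2f}")
print(f"non-parametric (median): {nonparametric_estimate:.2f}")
```

If the normal assumption is right, the mean is more efficient; if it is wrong, the model-free estimate can be far more robust.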

How is estimation theory related to hypothesis testing?

Estimation theory and hypothesis testing are closely related as they both involve making inferences about unknown parameters based on data. In estimation theory, the goal is to estimate the value of the parameter with a certain level of confidence. In hypothesis testing, the goal is to determine whether a certain hypothesis about the parameter is supported by the data or not.
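The link can be sketched with a z-test (hypothetical data; σ = 1 assumed known as in the problem above): the same sample mean that serves as the point estimate also drives the test statistic.

```python
import math
import statistics

# Hypothetical outcomes; null hypothesis H0: theta = 100
data = [100.4, 99.7, 100.9, 100.6, 100.2]
sigma = 1.0
theta_0 = 100.0

# The estimate of theta is the sample mean
theta_hat = statistics.mean(data)

# The z statistic measures how many standard errors theta_hat is from theta_0
z = (theta_hat - theta_0) / (sigma / math.sqrt(len(data)))

# Reject H0 at the 5% level if |z| > 1.96
reject = abs(z) > 1.96
print(f"z = {z:.3f}, reject H0: {reject}")
```

Estimation asks "what is θ?"; testing asks "is θ = 100 plausible?" Both questions are answered from the same statistic.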
