Minimum MSE estimation derivation

In summary, the conversation discusses where the term \int x'xP(x|y)dx - \| \int xP(x|y)dx \|^2 comes from in the derivation of the minimum mean square error (MMSE) estimator: it accounts for the difference between E(\|X\|^2) and \|E(X)\|^2. When z = E[X|Y=y], the conditional mean square error E(\|X-z\|^2|Y=y) is minimized and reduces to E(\|X\|^2|Y=y) - \|\hat{X}\|^2. The conversation also mentions the difficulty of finding information on MMSE and its derivation, and asks for recommendations for easy-to-read articles on the topic.
  • #1
EmmaSaunders1
Hello,

Would anyone be able to recommend a good, easy-to-read article which outlines MMSE and its derivation? Specifically, I am having trouble deriving this term


[tex]
+ \int x'xP(x|y)dx-\left \| \int xP(x|y)dx \right \|^2
[/tex]

from

[tex]
E({\left \| X-z \right \|}^2|Y=y)
=\int (x-z)'(x-z)P(x|y)dx\\
=\left[z'-\int x'P(x|y)dx\right]\left[z-\int xP(x|y)dx\right] + \int x'xP(x|y)dx-\left \| \int xP(x|y)dx \right \|^2
[/tex]
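A quick numeric check of this expansion (my own sketch, not from any article; the distribution and the candidate z below are arbitrary illustrative choices): replacing expectations with sample averages, the left- and right-hand sides agree term by term. Conditioning on Y=y only changes the distribution the samples are drawn from, not the identity itself.

```python
import numpy as np

# Hedged sketch: verify the expansion
#   E(||X - z||^2) = (z - E[X])'(z - E[X]) + E(||X||^2) - ||E[X]||^2
# on samples of an arbitrary 2-D Gaussian X; the distribution and the
# candidate z are illustrative choices, not from the thread.
rng = np.random.default_rng(0)
x = rng.normal(loc=[1.0, -2.0], scale=1.5, size=(100_000, 2))  # samples of X
z = np.array([0.5, 0.5])                                       # arbitrary estimate

lhs = np.mean(np.sum((x - z) ** 2, axis=1))      # E(||X - z||^2)
mean_x = x.mean(axis=0)                          # E[X]
rhs = (np.sum((z - mean_x) ** 2)                 # (z - E[X])'(z - E[X])
       + np.mean(np.sum(x ** 2, axis=1))         # E(||X||^2)
       - np.sum(mean_x ** 2))                    # ||E[X]||^2

print(abs(lhs - rhs))  # agrees up to floating-point rounding
```

Because expectations are replaced by averages over the same samples, the two sides match exactly up to floating-point error, not just statistically.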

Thank you
 
  • #2
Shouldn't the term just be zero? I can't understand its presence. Are there any conditions in which it is not zero?
 
  • #3
For anyone who is interested - the last term

[tex]
+ \int x'xP(x|y)dx-\left \| \int xP(x|y)dx \right \|^2
[/tex]

is necessary to account for the difference between E(\|x\|^2) and \|E(x)\|^2. When z = E[x|Y=y], the term

[tex]
E({\left \| X-z \right \|}^2|Y=y)
[/tex]

is minimized and reduces to

[tex]
\int x'xP(x|y)dx-\left \| \int xP(x|y)dx \right \|^2
[/tex]

which can be written as

[tex]
E({\left \| X \right \|}^2|Y=y)-{\left \| E(X|Y=y) \right \|}^2\\
=E({\left \| X \right \|}^2|Y=y)-{\left \| \hat{X} \right \|}^2
[/tex]

which is the minimum mean square error given Y=y (the trace of the conditional covariance of X)
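To make post #3 concrete, here is a small Monte Carlo sketch (all numbers are my own illustrative choices): model X | Y=y directly as a Gaussian for one fixed y, compare the conditional MSE for a few candidates z, and check that z = x_hat = E[X|Y=y] gives the smallest value, equal to E(\|X\|^2|Y=y) - \|x_hat\|^2.

```python
import numpy as np

# Hedged sketch: for a fixed y, model X | Y=y as a 2-D Gaussian (an arbitrary
# illustrative choice) and compare E(||X - z||^2 | Y=y) for several candidates z.
# The conditional mean x_hat should minimize it, and the minimum should equal
# E(||X||^2 | Y=y) - ||x_hat||^2.
rng = np.random.default_rng(1)
x = rng.normal(loc=[2.0, -1.0], scale=0.8, size=(200_000, 2))  # samples of X | Y=y

x_hat = x.mean(axis=0)  # sample estimate of E[X | Y=y]

def cond_mse(z):
    """Sample average of ||X - z||^2 over the conditional samples."""
    return np.mean(np.sum((x - np.asarray(z)) ** 2, axis=1))

candidates = [x_hat, np.array([2.5, -1.0]), np.array([0.0, 0.0])]
mses = [cond_mse(z) for z in candidates]

# The minimum value predicted by the identity in post #3:
min_mse = np.mean(np.sum(x ** 2, axis=1)) - np.sum(x_hat ** 2)
print(mses, min_mse)  # mses[0] is the smallest and matches min_mse
```

The sample mean minimizes the sample MSE exactly, so the first candidate wins by construction; the other candidates just illustrate how the cost grows as z moves away from x_hat.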
 

FAQ: Minimum MSE estimation derivation

1. What is Minimum MSE estimation derivation?

Minimum MSE estimation is a mathematical method used to find optimal estimates of unknown quantities in a statistical model. It is based on the principle of minimizing the mean squared error (MSE) between the estimated values and the actual values of the data.

2. Why is Minimum MSE estimation derivation important?

Minimum MSE estimation is important because it yields the estimates with the smallest expected squared error among all estimators under consideration. This helps us make better predictions and decisions based on the data.

3. How is Minimum MSE estimation derivation calculated?

The minimum MSE estimate is found by taking the derivative of the MSE with respect to the unknown parameters and setting it equal to zero. Solving the resulting equations gives the optimal values of the parameters.
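As a scalar sketch of that derivative step (the constants mu = E[X] and s2 = E[X^2] below are made-up illustrative values): the MSE as a function of the estimate z is a parabola, and setting its derivative to zero recovers z = mu.

```python
import numpy as np

# Hedged scalar sketch: with mu = E[X] and s2 = E[X^2] fixed (illustrative
# values), MSE(z) = E[(X - z)^2] = s2 - 2*mu*z + z**2.  Its derivative is
# 2*z - 2*mu, so setting it to zero gives z = mu; a grid search agrees.
mu, s2 = 3.0, 10.0
zs = np.linspace(-5.0, 10.0, 1501)       # grid of candidate estimates z
mse = s2 - 2.0 * mu * zs + zs ** 2       # parabola in z
best = zs[np.argmin(mse)]                # grid minimizer, approximately mu
print(best)  # approximately 3.0, i.e. z = mu
```

The same calculation with s2 = E[X^2 | Y=y] and mu = E[X | Y=y] gives the conditional-mean result discussed in the thread.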

4. What are the assumptions for Minimum MSE estimation derivation?

In the classical linear setting, the assumptions for minimum MSE estimation include linearity of the model and independence (or at least uncorrelatedness) of the error terms; normality of the errors is often assumed as well. Under these conditions the estimated parameters are unbiased and have minimum variance.

5. How does Minimum MSE estimation derivation differ from other estimation methods?

Minimum MSE estimation differs from other estimation methods, such as maximum likelihood estimation, in that it directly minimizes the expected squared error rather than maximizing a likelihood. Because it penalizes squared errors, it is sensitive to large deviations and outliers, and for models with normally distributed errors it coincides with maximum likelihood estimation.
