Can You Estimate Parameters for a Non-Closed Form Probability Distribution?

In summary, the probability distribution in question has no closed-form expression, and its parameters {n, x, y} must be estimated from data. A maximum likelihood approach is suggested, but difficulties arise in taking the derivative with respect to n. Alternatives such as applying Leibniz's rule, or trying various values of n and selecting the one with the largest likelihood, are discussed.
  • #1
jimmy1
I have a probability distribution of the form [tex]\sum_{i=0}^n f(n,x,y)[/tex]. There is no closed form expression for it. I need to know if there is any method that I can use to estimate the parameters {n, x, y} given some data from the above distribution.
I've tried a maximum likelihood approach, but I'm having trouble getting the derivative with respect to n. Is it possible to get this derivative and use a maximum likelihood approach to estimate n?
 
  • #2
Is your f indexed by i? If not, then the sum is just (n+1)f(n,x,y), which is differentiable with respect to n as long as f is.

If f is indexed by i, then you might think of the sum as an integral and may be able to apply Leibniz's Rule (see under "Alternate form": http://en.wikipedia.org/wiki/Leibniz's_rule).
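To sketch the idea: if you approximate the sum by an integral with the parameter n as its upper limit, the "alternate form" of Leibniz's rule from that link gives (treating f as differentiable in n)

[tex]\frac{d}{dn}\int_{0}^{n} f(n,i)\,di = f(n,n) + \int_{0}^{n} \frac{\partial}{\partial n} f(n,i)\,di[/tex]

so the derivative picks up both a boundary term and an integral of the partial derivative.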
 
  • #3
Thanks for the reply. I had a look at that Leibniz's rule link, but I'm not fully sure how to go about using it.

Anyway, I was thinking of a slightly simpler idea. I basically need an estimate of the 3 parameters {n, x, y}, preferably using MLE. Since it's difficult to get the derivative w.r.t. n, I was thinking of trying various values of n (say n = 1, ..., 50), and for each value of n computing the MLE of x and y.

So basically, I now end up with 50 different estimates for {n, x, y}. My question is: is there any mathematical way to tell which of these 50 estimates is the best one? I.e., is there some sort of likelihood test I could use?
 
  • #4
I'd just look at the (log) likelihood numbers and select the largest.
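To make this concrete, here is a hedged Python sketch of the profile-likelihood idea. Since the thread's actual f is unspecified, it uses a binomial(n, p) distribution as a stand-in example with one discrete parameter n (no usable derivative) and one continuous parameter p (closed-form MLE given n): for each candidate n, plug in the conditional MLE of p, then keep the n with the largest log-likelihood.

```python
import math
import random

random.seed(0)

# Stand-in for the thread's distribution: binomial(n, p). The MLE of
# n is found by search because n is discrete.
def log_likelihood(data, n, p):
    if p <= 0.0 or p >= 1.0 or n < max(data):
        return float("-inf")
    ll = 0.0
    for k in data:
        ll += (math.lgamma(n + 1) - math.lgamma(k + 1)
               - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return ll

# Simulate data from binomial(n=20, p=0.3).
true_n, true_p = 20, 0.3
data = [sum(random.random() < true_p for _ in range(true_n))
        for _ in range(500)]

# Profile likelihood: given n, the MLE of p is mean(data)/n (closed
# form for the binomial); keep the (n, p_hat) with the largest
# log-likelihood, exactly as post #4 suggests.
best = None
for n in range(max(data), 51):
    p_hat = sum(data) / (n * len(data))
    ll = log_likelihood(data, n, p_hat)
    if best is None or ll > best[0]:
        best = (ll, n, p_hat)

print("best n:", best[1], " best p:", round(best[2], 3))
```

The same pattern applies with two continuous parameters {x, y}: replace the closed-form p_hat with a numerical maximization of the log-likelihood over x and y at each fixed n.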
 

FAQ: Can You Estimate Parameters for a Non-Closed Form Probability Distribution?

What is parameter estimation?

Parameter estimation is a statistical method used to calculate the values of unknown parameters in a model or system based on observed data.

Why is parameter estimation important in scientific research?

Parameter estimation is important because it allows scientists to make predictions and draw conclusions about a system or phenomenon using data. It also helps to validate or refine existing theories and models.

What are the different methods for parameter estimation?

There are various methods for parameter estimation, such as least squares estimation, maximum likelihood estimation, and Bayesian estimation. The choice of method depends on the type of data and the assumptions made about the underlying model.
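As a minimal illustration of maximum likelihood estimation (an assumed toy example, not from the thread): for normally distributed data with known variance, the log-likelihood is maximized at the sample mean, which a simple grid search confirms.

```python
import math

# Toy data (assumed for illustration).
data = [2.1, 1.9, 2.4, 2.0, 1.6]

def log_likelihood(mu, xs, sigma=1.0):
    # Normal log-likelihood with known sigma.
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

# Grid search over candidate means; the maximizer coincides with
# the analytic MLE, the sample mean.
grid = [i / 1000 for i in range(1000, 3000)]
mu_hat = max(grid, key=lambda m: log_likelihood(m, data))
sample_mean = sum(data) / len(data)
print(mu_hat, sample_mean)
```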

What factors can affect the accuracy of parameter estimation?

The accuracy of parameter estimation can be affected by the quality and quantity of data, the assumptions made about the model, and the choice of estimation method. It is also important to consider potential sources of bias and uncertainty in the data.

How do you evaluate the goodness of fit for a parameter estimation?

The goodness of fit for a parameter estimation can be evaluated by comparing the fitted model's predictions to the observed data. Some common measures of goodness of fit include the residual sum of squares, the coefficient of determination (R-squared), and the Akaike information criterion (AIC).
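As a minimal sketch of the AIC comparison (with made-up log-likelihood values), AIC = 2k - 2 ln(L) penalizes each extra parameter k, and the model with the lower AIC is preferred:

```python
def aic(log_likelihood, k):
    # AIC = 2k - 2 ln(L); lower is better.
    return 2 * k - 2 * log_likelihood

# Two hypothetical fits: model A (2 parameters) vs model B (3 parameters).
aic_a = aic(-120.5, 2)
aic_b = aic(-119.8, 3)
better = "A" if aic_a < aic_b else "B"
print(aic_a, aic_b, better)  # B fits slightly better but pays for the extra parameter
```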
