Log likelihood and Maximum likelihood

In summary, to find the first derivative of the log-likelihood function, we first form the joint probability of the observed data by multiplying the individual probability mass functions. The result is a likelihood that depends on the single parameter p, appearing through the factors p and (1-p). Taking the logarithm simplifies the expression, and the given value of x̄ lets us rewrite the exponents before differentiating with respect to p.
  • #1
cajswn
[Attached image: screenshot of the problem statement]

I'm not sure how to get this first derivative (mainly where does the 4 come from?)
I know x̄ is the sample mean (which I think is 1/2?)
Can someone suggest where to start with finding the log-likelihood?

I know the mass function of a binomial distribution is:
$$P(X = k) = \binom{n}{k} p^{k}(1-p)^{n-k}, \qquad k = 0, 1, \ldots, n.$$


Thanks!
 
  • #2
Hi cajswn,

To determine the likelihood function we need to create the joint probability distribution. Since the 4 individual trials of the binomial process are independent, we simply multiply the 4 individual probability mass functions to get the needed joint probability distribution: $$L = p^{2}(1-p)^{2}.$$ Now take the logarithm to get the log-likelihood function: $$l = 2\ln p + 2\ln (1-p).$$ Since $\bar{x} = 1/2$ we have $2=4\bar{x}$ and $2 = 4-4\bar{x}$. Using these two identities we get: $$l = 4\bar{x}\ln p +(4-4\bar{x})\ln (1-p).$$ From here, take the derivative of $l$ with respect to $p$ to get the result you're looking for.
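For anyone who wants to check the algebra, here is a minimal sketch (using Python with sympy, which is not part of the original thread) that differentiates this log-likelihood symbolically and solves the first-order condition; the names l, p, and xbar are just illustrative.

```python
import sympy as sp

# Symbols: p is the success probability, xbar the sample mean (both taken positive)
p, xbar = sp.symbols('p xbar', positive=True)

# Log-likelihood from the post above: l = 4*xbar*ln(p) + (4 - 4*xbar)*ln(1 - p)
l = 4 * xbar * sp.log(p) + (4 - 4 * xbar) * sp.log(1 - p)

# First derivative with respect to p (mathematically equal to 4*xbar/p - (4 - 4*xbar)/(1 - p))
dl_dp = sp.diff(l, p)
print(dl_dp)

# Setting the derivative to zero and solving gives the maximum likelihood estimate p_hat = xbar
print(sp.solve(sp.Eq(dl_dp, 0), p))  # [xbar]
```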
 

FAQ: Log likelihood and Maximum likelihood

What is the difference between log likelihood and maximum likelihood?

Log likelihood is the logarithm of the likelihood function, which is a measure of how well a statistical model fits a given dataset. Maximum likelihood, on the other hand, is a method for estimating the parameters of a statistical model by finding the values that maximize the likelihood function. In other words, log likelihood is a measure of model fit, while maximum likelihood is a method for parameter estimation.

How is log likelihood calculated?

Log likelihood is calculated by taking the natural logarithm of the likelihood function. The likelihood function is the probability (or probability density) of the observed data given a specific set of model parameters. Taking the logarithm turns the product over independent observations into a sum, which simplifies the calculation and makes it easier to work with mathematically.
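As a concrete illustration (not part of the original FAQ), here is a short Python sketch that computes the log likelihood of a hypothetical Bernoulli sample at a given value of p using scipy.stats; the data and the value p = 0.5 are made up for the example.

```python
import numpy as np
from scipy.stats import bernoulli

# Hypothetical data: four Bernoulli trials with two successes (sample mean 0.5)
data = np.array([1, 0, 1, 0])

# Log likelihood at p = 0.5: the sum of the log probability of each observation
p = 0.5
log_lik = bernoulli.logpmf(data, p).sum()

print(log_lik)                               # 4 * ln(0.5), approximately -2.77
print(np.log(bernoulli.pmf(data, p)).sum())  # same value, via the plain likelihood
```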

What is the significance of log likelihood in statistical modeling?

Log likelihood is an important tool in statistical modeling because it allows us to compare different models and determine which one best fits the data. A higher log likelihood value indicates a better fit, while a lower value indicates a poorer fit. Therefore, log likelihood helps us to select the most appropriate model for a given dataset.

What is the relationship between log likelihood and the likelihood ratio test?

The likelihood ratio test is a statistical test used to compare the fit of two nested models. The test statistic is twice the difference between the log likelihood values of the two models, and under the null hypothesis it approximately follows a chi-squared distribution with degrees of freedom equal to the number of extra parameters. If the statistic is large enough, we reject the null hypothesis that the simpler model is adequate and conclude that the more complex model fits the data significantly better.
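For illustration (again not part of the original FAQ), here is a hedged Python sketch of a likelihood ratio test for a binomial proportion, comparing a null model with p fixed at 0.5 against an alternative where p is estimated from the data; the counts are made up for the example.

```python
from scipy.stats import binom, chi2

# Hypothetical data: 13 successes in 20 trials
n, k = 20, 13

# Null model: p fixed at 0.5; alternative model: p estimated by its MLE k/n
ll_null = binom.logpmf(k, n, 0.5)
ll_alt = binom.logpmf(k, n, k / n)

# Likelihood ratio statistic: twice the gain in log likelihood
lr_stat = 2 * (ll_alt - ll_null)

# One extra free parameter, so compare against a chi-squared with 1 degree of freedom
p_value = chi2.sf(lr_stat, df=1)
print(lr_stat, p_value)
```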

Can log likelihood be negative?

Yes, log likelihood can be negative. This occurs whenever the likelihood is less than 1, which is always the case for discrete data and common for continuous data as well, so a negative log likelihood does not by itself indicate a poor fit. In practice we often work with the negative log likelihood, which is minimized rather than maximized: a lower negative log likelihood indicates a better fit.
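To make the "minimize the negative log likelihood" convention concrete (an illustrative sketch, not from the original FAQ), this Python snippet numerically minimizes the negative log likelihood of the same hypothetical Bernoulli sample and recovers the sample mean as the estimate.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import bernoulli

# Hypothetical data: four Bernoulli trials with two successes (sample mean 0.5)
data = np.array([1, 0, 1, 0])

# Negative log likelihood as a function of p
def neg_log_lik(p):
    return -bernoulli.logpmf(data, p).sum()

# Minimize over (0, 1); the minimizer should match the sample mean
result = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method='bounded')
print(result.x)  # approximately 0.5
```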
