Help with Statistical Inference

  • MHB
  • Thread starter: trousmate
  • Tags: Statistical
In summary: the original poster asks for help with a problem they have to present, having missed some lectures through illness. The replies derive the maximum likelihood estimator of a binomial proportion; one reply works a slightly different formulation of the problem that happens to give the same answer.
  • #1
trousmate
View attachment 3490

See image.

I have to present this on Monday and don't really know what I'm doing having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM
 

Attachments

  • Screen Shot 2014-11-07 at 12.38.28.png
  • #2
trousmate said:
https://www.physicsforums.com/attachments/3490

See image.

I have to present this on Monday and don't really know what I'm doing having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM

Welcome to MHB, trousmate!

If you have $r$ samples of a r.v. $Y$, then the likelihood function is

$$f(y_{1}, y_{2}, \ldots, y_{r} \mid p) = p^{\sum_{i} y_{i}}\,(1-p)^{r - \sum_{i} y_{i}}\quad (1)$$

so that

$$\ln f = \sum_{i} y_{i}\,\ln p + \Bigl(r - \sum_{i} y_{i}\Bigr)\ln (1 - p) \implies \frac{d \ln f}{d p} = \frac{\sum_{i} y_{i}}{p} - \frac{r - \sum_{i} y_{i}}{1 - p}\quad (2)$$

and imposing $\displaystyle \frac{d \ln f}{d p} = 0$ you arrive at

$$\overline{p} = \frac{\sum_{i} y_{i}}{r}\quad (3)$$

Kind regards

$\chi$ $\sigma$
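As a quick numerical sanity check of (3), here is a minimal Python sketch (not part of the original reply; the values of $p$ and $r$ are made up): it draws Bernoulli samples and compares the closed-form estimate with a crude grid search over the log-likelihood.

```python
# Minimal sketch: draw r Bernoulli(p) samples with a made-up p, then compare
# the closed-form estimate (3) with a grid search over the log-likelihood (1)-(2).
import math
import random

random.seed(0)
p_true, r = 0.3, 1000                      # hypothetical values for illustration
y = [1 if random.random() < p_true else 0 for _ in range(r)]
s = sum(y)                                 # sum of the y_i

def log_lik(p):
    # ln f(y_1, ..., y_r | p) = (sum y_i) ln p + (r - sum y_i) ln(1 - p)
    return s * math.log(p) + (r - s) * math.log(1 - p)

p_hat = s / r                              # closed-form estimate from (3)

grid = [i / 1000 for i in range(1, 1000)]  # p in (0, 1), step 0.001
p_grid = max(grid, key=log_lik)            # independent numerical check

print(p_hat, p_grid)                       # both should be close to p_true
```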
 
  • #3
trousmate said:
https://www.physicsforums.com/attachments/3490

See image.

I have to present this on Monday and don't really know what I'm doing having missed a couple of lectures through illness. Any help or hints would be very much appreciated.

Thanks,
TM

For this problem you have the likelihood:

$$L(\theta|Y) = b(Y,r,\theta)=\frac{r!}{Y!(r-Y)!}\theta^Y(1-\theta)^{r-Y}$$

Then the log-likelihood is:

$$LL(\theta|Y) =\log(r!) - \log(Y!) -\log((r-Y)!)+Y\log(\theta) +(r-Y)\log(1-\theta)$$

Then to find the value of $\theta$ that maximises the log-likelihood we take the partial derivative with respect to $\theta$ and equate that to zero:

$$\frac{\partial}{\partial \theta}LL(\theta|Y)=\frac{Y}{\theta}-\frac{r-Y}{1-\theta}
$$
So for the maximum (log-)likelihood estimator $\hat{\theta}$ we have:
$$\frac{Y}{\hat{\theta}}-\frac{r-Y}{1-\hat{\theta}}=0$$

which can be rearranged to give: $$\hat{\theta}(r-Y)=(1-\hat{\theta})Y$$ or $$\hat{\theta}=\frac{Y}{r}$$

and, as you should be aware, since the logarithm is monotonic increasing, the value of $\theta$ that maximises the log-likelihood also maximises the likelihood itself.
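A short Python sketch (not part of the original reply; the data $r = 20$, $Y = 7$ are hypothetical) that maximises this log-likelihood numerically and confirms it agrees with the closed form $Y/r$:

```python
# Numerical check with hypothetical data: Y = 7 successes in r = 20 trials.
import math

r, Y = 20, 7

def log_lik(theta):
    # The constant terms log(r!) - log(Y!) - log((r-Y)!) are dropped:
    # they do not depend on theta, so they do not affect the maximiser.
    return Y * math.log(theta) + (r - Y) * math.log(1 - theta)

grid = [i / 10000 for i in range(1, 10000)]   # theta in (0, 1)
theta_grid = max(grid, key=log_lik)

print(Y / r, theta_grid)                      # 0.35 and approximately 0.35
```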

 
  • #4
chisigma said:
Welcome to MHB, trousmate!

If you have $r$ samples of a r.v. $Y$, then the likelihood function is

$$f(y_{1}, y_{2}, \ldots, y_{r} \mid p) = p^{\sum_{i} y_{i}}\,(1-p)^{r - \sum_{i} y_{i}}\quad (1)$$

so that

$$\ln f = \sum_{i} y_{i}\,\ln p + \Bigl(r - \sum_{i} y_{i}\Bigr)\ln (1 - p) \implies \frac{d \ln f}{d p} = \frac{\sum_{i} y_{i}}{p} - \frac{r - \sum_{i} y_{i}}{1 - p}\quad (2)$$

and imposing $\displaystyle \frac{d \ln f}{d p} = 0$ you arrive at

$$\overline{p} = \frac{\sum_{i} y_{i}}{r}\quad (3)$$

Kind regards

$\chi$ $\sigma$

Very nice, but it is the solution to the wrong problem; it just happens to have the same answer as the problem actually asked. Here the data are the number of successes in $r$ trials, not a vector of individual results.

 
  • #5


Hello TM,

I'm sorry to hear that you were unable to attend some lectures due to illness. Statistical inference can be a complex topic, but I'm happy to help provide some guidance.

Firstly, statistical inference is the process of drawing conclusions or making predictions about a population based on a sample of data. This is important because it allows us to make generalizations and predictions about a larger group without having to collect data from every single individual.

Some key concepts in statistical inference include hypothesis testing, confidence intervals, and p-values. Hypothesis testing involves setting up a null hypothesis and an alternative hypothesis and then using data to determine whether the null hypothesis can be rejected or not. This helps us determine if there is a significant difference or relationship between variables.
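For example (the numbers below are made up purely for illustration), testing $H_0: \theta = 0.5$ against $H_1: \theta \neq 0.5$ for a binomial proportion with the usual normal approximation might look like this in Python:

```python
# Illustrative sketch only; the data (62 successes in 100 trials) are made up.
import math

r, Y = 100, 62
theta0 = 0.5                                   # value of theta under H0

theta_hat = Y / r
se0 = math.sqrt(theta0 * (1 - theta0) / r)     # standard error under H0
z = (theta_hat - theta0) / se0                 # here z = 2.4

# Reject H0 at the 5% level if |z| exceeds the critical value 1.96
print(z, abs(z) > 1.96)
```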

Confidence intervals are a range of values that we can be reasonably confident the true population parameter falls within. They are often used to estimate the true value of a population parameter, such as the mean or proportion.
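Continuing the same made-up example, a 95% Wald confidence interval for a proportion is $\hat{\theta} \pm 1.96\sqrt{\hat{\theta}(1-\hat{\theta})/r}$, which can be computed directly:

```python
# Same hypothetical data: a 95% Wald confidence interval for the proportion.
import math

r, Y = 100, 62
theta_hat = Y / r
half_width = 1.96 * math.sqrt(theta_hat * (1 - theta_hat) / r)

print(theta_hat - half_width, theta_hat + half_width)   # roughly (0.52, 0.72)
```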

P-values are a measure of the strength of evidence against the null hypothesis. A smaller p-value indicates stronger evidence against the null hypothesis, and may lead to rejecting the null hypothesis in favor of the alternative.
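For the same made-up data, an exact two-sided p-value under $H_0: \theta = 0.5$ can be obtained by summing binomial tail probabilities:

```python
# Exact two-sided binomial p-value for H0: theta = 0.5, computed by summing the
# probabilities of all outcomes at least as far from the null expectation
# r * theta0 = 50 as the observed Y = 62 (hypothetical data).
import math

r, Y, theta0 = 100, 62, 0.5

def pmf(k):
    return math.comb(r, k) * theta0 ** k * (1 - theta0) ** (r - k)

dist = abs(Y - r * theta0)
p_value = sum(pmf(k) for k in range(r + 1) if abs(k - r * theta0) >= dist)

print(p_value)   # about 0.02, so H0 would be rejected at the 5% level
```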

When presenting on statistical inference, it's important to clearly define the research question and explain the methods used to analyze the data. It can also be helpful to provide visual representations, such as graphs or charts, to illustrate the results.

I hope this helps give you a better understanding of statistical inference. If you have any specific questions or need further clarification, please don't hesitate to reach out. Good luck with your presentation on Monday!

Best,
 

FAQ: Help with Statistical Inference

What is statistical inference?

Statistical inference is the process of drawing conclusions or making predictions about a population based on a sample of data. It involves using statistical techniques to analyze the sample data and make inferences about the larger population.

Why is statistical inference important?

Statistical inference allows us to make generalizations about a population based on a smaller sample, which is often more feasible and cost-effective to collect. It also helps us to understand the relationships and patterns in the data and make informed decisions based on the results.

What are the two types of statistical inference?

The two types of statistical inference are estimation and hypothesis testing. Estimation involves using sample data to estimate the value of a population parameter, such as a mean or proportion. Hypothesis testing involves making a claim about a population and using sample data to determine whether there is enough evidence to reject that claim.

What are some common tools used in statistical inference?

Some common tools used in statistical inference include confidence intervals, hypothesis tests, and regression analysis. These tools help to quantify the uncertainty in our estimates and make conclusions about the population based on the sample data.
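As a small illustration of one of these tools (with made-up data, purely for example), an ordinary least-squares fit of a simple linear regression can be computed directly from its closed form:

```python
# Made-up data: ordinary least-squares fit of the simple linear regression
# y = a + b * x, using the usual closed-form slope and intercept.
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 7.2]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar

print(a, b)   # intercept close to 1, slope close to 1
```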

What are some potential challenges in statistical inference?

One challenge in statistical inference is ensuring that the sample is representative of the population. Biases in sampling methods or non-response bias can lead to inaccurate inferences. Additionally, small sample sizes can lead to imprecise estimates and weak conclusions. It is also important to consider potential confounding factors and ensure that the appropriate statistical methods are used for the data at hand.
