Likelihood Ratio Test for Common Variance from Two Normal Distribution Samples

In summary, the problem asks for the construction of a likelihood ratio test of $H_0: \sigma^2=\sigma_0^2$ against $H_a: \sigma^2=\sigma_a^2$ based on two independent samples from normal distributions with unknown means and a common variance. The likelihood ratio is derived, and the resulting test is compared with the classical $\chi^2$ test given in the book. The two tests turn out to be equivalent: the likelihood ratio is a monotone function of the pooled sum of squares, so both define the same rejection region.
  • #1
Ackbach
$\newcommand{\szdp}[1]{\!\left(#1\right)}
\newcommand{\szdb}[1]{\!\left[#1\right]}$
Problem Statement: Let $S_1^2$ and $S_2^2$ denote, respectively, the variances of independent random samples of sizes $n$ and $m$ selected from normal distributions with means $\mu_1$ and $\mu_2$ and common variance $\sigma^2.$ If $\mu_1$ and $\mu_2$ are unknown, construct a likelihood ratio test of $H_0: \sigma^2=\sigma_0^2$ against $H_a:\sigma^2=\sigma_a^2,$ assuming that $\sigma_a^2>\sigma_0^2.$

Note 1: This is Problem 10.89 in Mathematical Statistics with Applications, 5th Ed., by Wackerly, Mendenhall, and Scheaffer.

Note 2: This is cross-posted here.

My Work So Far: Let $X_1, X_2,\dots,X_n$ be the sample from the normal distribution with mean $\mu_1,$ and let $Y_1, Y_2,\dots,Y_m$ be the sample from the normal
distribution with mean $\mu_2.$ The likelihood is
\begin{align*}
L(\mu_1,\mu_2,\sigma^2)
=\szdp{\frac{1}{\sqrt{2\pi}}}^{\!\!(m+n)}
\szdp{\frac{1}{\sigma^2}}^{\!\!(m+n)/2}
\exp\szdb{-\frac{1}{2\sigma^2}\szdp{\sum_{i=1}^n(x_i-\mu_1)^2
+\sum_{i=1}^m(y_i-\mu_2)^2}}.
\end{align*}
We obtain $L\big(\hat{\Omega}_0\big)$ by replacing $\sigma^2$ with $\sigma_0^2$ and $\mu_1$ with $\overline{x}$ and $\mu_2$ with $\overline{y}:$
\begin{align*}
L\big(\hat{\Omega}_0\big)
=\szdp{\frac{1}{\sqrt{2\pi}}}^{\!\!(m+n)}
\szdp{\frac{1}{\sigma_0^2}}^{\!\!(m+n)/2}
\exp\szdb{-\frac{1}{2\sigma_0^2}\szdp{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}}.
\end{align*}
The MLE for the common variance in exactly this scenario (with the roles of $m$ and $n$ interchanged) is:
$$\hat\sigma^2=\frac{1}{m+n}\szdb{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}.$$
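As a quick numerical check (a sketch with made-up data; the seed, means, and sample sizes are my choices, not part of the problem), the pooled MLE can be computed directly and agrees with the weighted combination of the unbiased sample variances:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 12
x = rng.normal(loc=2.0, scale=1.5, size=n)   # sample with unknown mean mu_1
y = rng.normal(loc=5.0, scale=1.5, size=m)   # sample with unknown mean mu_2

# Pooled sum of squared deviations about the respective sample means
ss = np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)

# MLE of the common variance: divide by m + n (not n + m - 2)
sigma2_hat = ss / (m + n)

# Same thing written via the unbiased sample variances S1^2 and S2^2,
# using sum((x_i - xbar)^2) = (n-1) S1^2
s1_sq, s2_sq = x.var(ddof=1), y.var(ddof=1)
assert np.isclose(sigma2_hat, ((n - 1)*s1_sq + (m - 1)*s2_sq) / (m + n))
```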
So this estimator plugged into the likelihood yields
\begin{align*}
L\big(\hat{\Omega}\big)
&=\szdp{\frac{1}{\sqrt{2\pi}}}^{\!\!(m+n)}
\szdp{\frac{1}{\hat\sigma^2}}^{\!\!(m+n)/2}
\exp\szdb{-\frac{1}{2\hat\sigma^2}\szdp{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}}.
\end{align*}
It follows that the ratio is
\begin{align*}
\lambda
&=\frac{L\big(\hat{\Omega}_0\big)}{L\big(\hat{\Omega}\big)}\\
&=\szdp{\frac{\hat\sigma^2}{\sigma_0^2}}^{\!\!(m+n)/2}
\exp\szdb{\frac{(\sigma_0^2-\hat\sigma^2)(m+n)}{2\sigma_0^2}}.\\
-2\ln(\lambda)
&=(m+n)\szdb{\frac{\hat\sigma^2}{\sigma_0^2}
-\ln\szdp{\frac{\hat\sigma^2}{\sigma_0^2}}-1}.
\end{align*}
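As a sanity check on the algebra (a sketch with arbitrary simulated data; the seed and parameter values are assumptions), $-2\ln(\lambda)$ computed directly from the two maximized log-likelihoods matches the closed form $(m+n)\big[\hat\sigma^2/\sigma_0^2-\ln(\hat\sigma^2/\sigma_0^2)-1\big]$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma0_sq = 10, 15, 1.0
x = rng.normal(0.0, 1.3, size=n)
y = rng.normal(3.0, 1.3, size=m)

ss = np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)
sigma2_hat = ss / (m + n)

# Log-likelihood at (xbar, ybar, sig2), dropping the (2*pi) factor,
# which cancels in the ratio anyway.
def log_lik(sig2):
    return -(m + n) / 2 * np.log(sig2) - ss / (2 * sig2)

neg2_log_lambda = -2 * (log_lik(sigma0_sq) - log_lik(sigma2_hat))

# Closed form derived above, with r = sigma2_hat / sigma0^2
r = sigma2_hat / sigma0_sq
closed_form = (m + n) * (r - np.log(r) - 1)
```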
Now the function $f(x)=x-\ln(x)-1$ is decreasing on $(0,1)$ and increasing on $(1,\infty),$ with a global minimum of $0$ at $x=1.$ Note also that the original inequality becomes:
\begin{align*}
\lambda&<k\\
2\ln(\lambda)&<2\ln(k)\\
-2\ln(\lambda)&>k'.
\end{align*}
As the test is against $\sigma_a^2>\sigma_0^2,$ we expect the estimator to satisfy $\hat\sigma^2>\sigma_0^2,$ which places us on the increasing branch of $f.$ By Theorem 10.2, $-2\ln(\lambda)$ is asymptotically $\chi^2$ distributed with $3-2=1$ degree of freedom (three free parameters in $\Omega,$ two in $\Omega_0$). So we reject $H_0$ when
$$(m+n)\szdb{\frac{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}{(m+n)\sigma_0^2}
-\ln\szdp{\frac{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}{(m+n)\sigma_0^2}}-1}
>\chi^2_{\alpha},$$
or, multiplying out,
$$\frac{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}{\sigma_0^2}
-(m+n)\ln\szdp{\frac{\sum_{i=1}^n(x_i-\overline{x})^2
+\sum_{i=1}^m(y_i-\overline{y})^2}{(m+n)\sigma_0^2}}-(m+n)
>\chi^2_{\alpha}.$$
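The rejection rule can be sketched in code (all data and parameter values here are invented for illustration). Rather than relying on the asymptotic $\chi^2_1$ cutoff, this sketch calibrates the cutoff for $-2\ln(\lambda)$ by Monte Carlo simulation under $H_0$:

```python
import numpy as np

rng = np.random.default_rng(2)

def lrt_stat(x, y, sigma0_sq):
    """-2 ln(lambda) for H0: sigma^2 = sigma0^2, means unknown."""
    n, m = len(x), len(y)
    ss = np.sum((x - x.mean())**2) + np.sum((y - y.mean())**2)
    r = ss / ((m + n) * sigma0_sq)
    return (m + n) * (r - np.log(r) - 1)

n, m, sigma0_sq, alpha = 10, 15, 1.0, 0.05

# Simulated null distribution of the statistic (the means drop out of
# the statistic, so any means would do here).
null_stats = np.array([
    lrt_stat(rng.normal(0, 1, n), rng.normal(0, 1, m), sigma0_sq)
    for _ in range(20_000)
])
cutoff = np.quantile(null_stats, 1 - alpha)

# Hypothetical observed samples with true variance 4 > sigma0^2
x_obs = rng.normal(0.0, 2.0, size=n)
y_obs = rng.normal(5.0, 2.0, size=m)
reject = lrt_stat(x_obs, y_obs, sigma0_sq) > cutoff
```

Note that $f$ also takes large values when $\hat\sigma^2$ is much smaller than $\sigma_0^2$; for the one-sided alternative $\sigma_a^2>\sigma_0^2$ one would restrict attention to large values of $\hat\sigma^2,$ as in the book's form of the test.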

My Questions:

1. Is my answer correct?
2. My answer is not the book's answer. The book's answer is simply that
$$\chi^2=\frac{(n-1)S_1^2+(m-1)S_2^2}{\sigma_0^2}$$
has a $\chi_{(n+m-2)}^2$ distribution under $H_0,$ and that we reject $H_0$ if $\chi^2>\chi_{\alpha}^2.$ How is this a likelihood ratio test? It is not evident that they went through any of the steps of forming the likelihood ratio with the necessary optimizations. Their estimator is not the MLE for $\sigma^2,$ is it?
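One way to see the connection (a numerical sketch; the sample sizes are arbitrary) is that both statistics are functions of the same pooled sum of squares $SS=(n-1)S_1^2+(m-1)S_2^2,$ and on the region $SS>(m+n)\sigma_0^2$ the statistic $-2\ln(\lambda)$ is strictly increasing in $SS,$ so the two rejection regions coincide:

```python
import numpy as np

n, m, sigma0_sq = 10, 15, 1.0

# Grid of pooled-sum-of-squares values on the region ss > (m+n)*sigma0^2
ss_grid = np.linspace((m + n) * sigma0_sq * 1.001, 100.0, 500)

book_stat = ss_grid / sigma0_sq          # ((n-1)S1^2 + (m-1)S2^2)/sigma0^2
u = ss_grid / ((m + n) * sigma0_sq)      # ratio sigma2_hat / sigma0^2
lrt = (m + n) * (u - np.log(u) - 1)      # -2 ln(lambda)

# Both statistics are strictly increasing in ss on this region, so
# {lambda < k} and {book_stat > c} pick out the same samples.
monotone = bool(np.all(np.diff(lrt) > 0) and np.all(np.diff(book_stat) > 0))
```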
 
  • #2

My Answers:

1. Your derivation of the likelihood ratio is correct, but invoking the asymptotic $\chi^2_1$ result of Theorem 10.2 is unnecessary here, because an exact null distribution is available.

2. The book's answer is in fact the likelihood ratio test, written in a simplified form. Since $\sum_{i=1}^n(x_i-\overline{x})^2=(n-1)S_1^2$ and $\sum_{i=1}^m(y_i-\overline{y})^2=(m-1)S_2^2,$ the book's statistic is $\chi^2=\frac{(n-1)S_1^2+(m-1)S_2^2}{\sigma_0^2}=\frac{(m+n)\hat\sigma^2}{\sigma_0^2},$ a monotone increasing transformation of your MLE $\hat\sigma^2.$ Because $f(x)=x-\ln(x)-1$ is increasing for $x>1$ and the alternative $\sigma_a^2>\sigma_0^2$ places the rejection region where $\hat\sigma^2/\sigma_0^2$ is large, the condition $\lambda<k$ is equivalent to $\chi^2>c$ for some constant $c.$ The book then calibrates $c$ exactly: under $H_0,$ $\chi^2$ has a $\chi^2_{(n+m-2)}$ distribution, so $c=\chi^2_{\alpha}.$ The pooled estimator $\frac{(n-1)S_1^2+(m-1)S_2^2}{n+m-2}$ suggested by the degrees of freedom is not the MLE (the MLE divides by $m+n$), but the two differ only by a constant factor, which is absorbed into the cutoff. So both answers describe the same test; the book simply reduces the likelihood ratio to an equivalent statistic whose exact distribution is known.
 

FAQ: Likelihood Ratio Test for Common Variance from Two Normal Distribution Samples

What is the Likelihood Ratio Test for Common Variance from Two Normal Distribution Samples?

The likelihood ratio test considered here tests whether the common variance $\sigma^2$ of two independent normal samples equals a specified value $\sigma_0^2.$ It is based on the likelihood ratio, which compares the likelihood of the data maximized under the null hypothesis ($\sigma^2=\sigma_0^2,$ means free) to the likelihood maximized over the full parameter space.

How is the Likelihood Ratio calculated?

The likelihood ratio is the ratio of the likelihood maximized under the null hypothesis to the likelihood maximized over the full parameter space. In both maximizations, the unknown means are replaced by their MLEs (the sample means); the variance is fixed at $\sigma_0^2$ in the numerator and replaced by its pooled MLE in the denominator.

What are the assumptions of the Likelihood Ratio Test for Common Variance?

The main assumptions are that the two samples are independent, that each is drawn from a normal distribution, and that the two populations share a common variance. The means may be unknown and unequal, and the sample sizes $n$ and $m$ need not be equal.

What is the significance level used for the Likelihood Ratio Test for Common Variance?

The significance level, also known as alpha, is the probability of rejecting the null hypothesis when it is actually true. The commonly used significance level for the Likelihood Ratio Test for Common Variance is 0.05, which means that there is a 5% chance of incorrectly rejecting the null hypothesis.

What is the interpretation of the p-value in the Likelihood Ratio Test for Common Variance?

The p-value is the probability, computed under the null hypothesis, of obtaining a test statistic at least as extreme as the one observed. A small p-value (conventionally below 0.05) indicates that the observed data would be unlikely if $H_0:\sigma^2=\sigma_0^2$ were true, so the null hypothesis is rejected in favor of the alternative. A large p-value means the data do not provide enough evidence to reject $H_0.$
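For the pooled statistic above, the p-value can be approximated without special functions (a sketch; the observed value 35.0 is hypothetical) by simulating the null $\chi^2_{n+m-2}$ distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 15
df = n + m - 2          # degrees of freedom of the pooled statistic

chi2_obs = 35.0         # hypothetical observed value of the statistic

# Monte Carlo estimate of P(chi2_df >= chi2_obs) under H0
draws = rng.chisquare(df, size=200_000)
p_value = np.mean(draws >= chi2_obs)
```

In practice one would use a library tail-probability routine such as `scipy.stats.chi2.sf(chi2_obs, df)` instead of simulation.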
