OLS standard error that corrects for autocorrelation but not heteroskedasticity

If the regressors are uncorrelated over time, these standard errors reduce to the conventional ones, which assume no autocorrelation or heteroskedasticity.
  • #1
Usagi
Question: By mapping the OLS regression into the GMM framework, write the formula for the standard error of the OLS regression coefficients that corrects for autocorrelation but *not* heteroskedasticity. Furthermore, show that in this case, the conventional standard errors are OK if the $x$'s are uncorrelated over time, even if the errors $\varepsilon$ are correlated over time.

Attempt:
So the general model is $y_t = \beta' x_t + \varepsilon_t$. OLS picks parameters $\beta$ to minimize the variance of the residual:
$$\min_{\beta} E_T[(y_t-\beta' x_t)^2] $$
where the notation $E_T(\cdot) = \frac{1}{T} \sum_{t=1}^T( \cdot )$ denotes the sample mean. We find $\widehat{\beta}$ from the first-order condition, which states that:
$$g_T(\beta) = E_T[x_t(y_t - x_t' \beta)] =0$$
In the GMM context, here, the number of moments equals the number of parameters. Thus, we set the sample moments exactly to zero and solve for the estimate analytically:
$$\widehat{\beta} = [E_T(x_tx_t')]^{-1} E_T(x_t y_t)$$
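
As a quick numerical illustration (not part of the original post), here is a minimal NumPy sketch that builds the two sample moments and solves the moment condition for $\widehat{\beta}$; the simulated data-generating process and the variable names are made up for the example.
```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 500, 2
X = rng.normal(size=(T, k))               # rows are x_t' (k regressors)
beta_true = np.array([1.0, -0.5])
y = X @ beta_true + rng.normal(size=T)    # y_t = beta' x_t + eps_t

# Sample moments E_T(x_t x_t') and E_T(x_t y_t)
Exx = X.T @ X / T
Exy = X.T @ y / T

# beta_hat = [E_T(x_t x_t')]^{-1} E_T(x_t y_t)
beta_hat = np.linalg.solve(Exx, Exy)
print(beta_hat)                           # close to beta_true
```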
Using the known result from GMM theory that
$$Var(\widehat{\beta}) = \frac{1}{T} (ad)^{-1} a S a' \left[(ad)^{-1}\right]'$$
where in this case $a = I$ (the identity matrix), $d = -E[x_t x_t']$, and $S = \sum_{j=-\infty}^{\infty} E[f(x_t, \beta)\, f(x_{t-j}, \beta)']$ with $f(x_t, \beta) = x_t(y_t - x_t'\beta) = x_t \varepsilon_t$.

So the general formula for the standard error of OLS is
$$Var(\widehat{\beta}) = \frac{1}{T}E(x_t x_t')^{-1} \left[\sum_{j=-\infty}^{\infty} E(\varepsilon_t x_t x_{t-j}' \varepsilon_{t-j})\right]E(x_t x_t')^{-1}$$
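
To make the sandwich formula concrete, here is a sketch of its plug-in version. The infinite sum in $S$ cannot be estimated term by term, so it is truncated at $L$ lags and down-weighted with Bartlett (Newey-West style) weights; the function name and lag choice are illustrative, not from the thread.
```python
import numpy as np

def ols_hac_var(X, y, L=4):
    """Plug-in sandwich variance (1/T) * Exx^{-1} S Exx^{-1} with f_t = x_t * eps_t.

    The infinite sum defining S is truncated at L lags and Bartlett-weighted,
    i.e. a Newey-West style estimate of S.
    """
    T = X.shape[0]
    Exx = X.T @ X / T                         # E_T(x_t x_t')  (= -d up to sign)
    beta_hat = np.linalg.solve(Exx, X.T @ y / T)
    eps = y - X @ beta_hat
    f = X * eps[:, None]                      # rows are f_t = x_t * eps_t

    S = f.T @ f / T                           # j = 0 term
    for j in range(1, L + 1):
        w = 1.0 - j / (L + 1)                 # Bartlett weight
        Gamma_j = f[j:].T @ f[:-j] / T        # E_T(f_t f_{t-j}')
        S += w * (Gamma_j + Gamma_j.T)        # lag j and lead j terms
    Exx_inv = np.linalg.inv(Exx)
    return Exx_inv @ S @ Exx_inv / T          # Var(beta_hat)
```
With $L = 0$ only the $j = 0$ term remains, which is the heteroskedasticity-only (White) correction.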

Now I know from the OLS assumptions:

(i) No autocorrelation: $E(\varepsilon_t \mid x_t, x_{t-1}, \cdots, \varepsilon_{t-1}, \varepsilon_{t-2}, \cdots) =0$

(ii) No heteroskedasticity: $E(\varepsilon_t^2 \mid x_t, x_{t-1}, \cdots, \varepsilon_{t-1}, \cdots) = \text{constant} = \sigma_{\varepsilon}^2$



What would the OLS standard error become if I correct for autocorrelation but not heteroskedasticity? Also how do I show that the conventional standard errors are OK if the $x$'s are uncorrelated over time, even if the errors $\varepsilon$ are correlated over time?
 
  • #2
Answer: Start from the general sandwich formula above and impose conditional homoskedasticity (assumption (ii)) while allowing autocorrelation. Homoskedasticity lets the error terms factor out of each expectation, $E(\varepsilon_t x_t x_{t-j}' \varepsilon_{t-j}) = E(x_t x_{t-j}')\, E(\varepsilon_t \varepsilon_{t-j})$, so the standard error that corrects for autocorrelation but not heteroskedasticity is $$Var(\widehat{\beta}) = \frac{1}{T}E(x_t x_t')^{-1} \left[\sum_{j=-\infty}^{\infty} E(x_t x_{t-j}')\, E(\varepsilon_t \varepsilon_{t-j})\right]E(x_t x_t')^{-1}.$$ Now suppose the $x$'s are uncorrelated over time, so $E(x_t x_{t-j}') = 0$ for $j \neq 0$. Then every term in the sum except $j = 0$ vanishes, even if $E(\varepsilon_t \varepsilon_{t-j}) \neq 0$, and the formula collapses to $$Var(\widehat{\beta}) = \frac{1}{T}\,\sigma_{\varepsilon}^2\, E(x_t x_t')^{-1},$$ which is exactly the conventional OLS formula. Hence the conventional standard errors are OK when the regressors are serially uncorrelated, even though the errors $\varepsilon$ are serially correlated.
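
A quick Monte Carlo check of the last claim (illustrative only, not from the original answer): with an i.i.d. regressor and AR(1) errors, the conventional standard error should match the true sampling variability of $\widehat{\beta}$.
```python
import numpy as np

rng = np.random.default_rng(1)
T, reps = 500, 2000
beta_hats, conv_ses = [], []

for _ in range(reps):
    x = rng.normal(size=T)                    # x_t i.i.d. over time
    eps = np.zeros(T)
    for t in range(1, T):                     # AR(1) errors: rho = 0.8
        eps[t] = 0.8 * eps[t - 1] + rng.normal()
    y = 0.5 * x + eps                         # single regressor, no intercept

    b = (x @ y) / (x @ x)                     # OLS estimate
    e = y - b * x
    beta_hats.append(b)
    conv_ses.append(np.sqrt(e @ e / (T - 1) / (x @ x)))   # conventional SE

print(np.std(beta_hats))    # actual sampling variability of beta_hat
print(np.mean(conv_ses))    # conventional SE: close, despite the AR(1) errors
```
Replacing the i.i.d. $x_t$ with a persistent (e.g. AR(1)) regressor makes the two numbers diverge, and the autocorrelation-corrected formula is then required.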
 

FAQ: OLS standard error that corrects for autocorrelation but not heteroskedasticity

What is OLS standard error?

The OLS standard error measures the sampling variability of an estimated coefficient in an Ordinary Least Squares (OLS) regression. It is the square root of the corresponding diagonal element of the estimated covariance matrix of the coefficients; the conventional version is based on $\widehat{Var}(\widehat{\beta}) = \hat{\sigma}^2 (X'X)^{-1}$, which assumes homoskedastic, serially uncorrelated errors.

How does OLS standard error correct for autocorrelation?

The conventional OLS standard error does not correct for autocorrelation by itself. Autocorrelation-consistent standard errors use the sandwich formula above, in which the middle term $S$ sums the autocovariances $E(\varepsilon_t x_t x_{t-j}' \varepsilon_{t-j})$ over leads and lags. In practice the infinite sum is truncated and down-weighted at long lags, as in the Newey-West estimator.
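
In practice one usually relies on a library implementation. Here is a minimal sketch using statsmodels (assuming it is installed); the simulated data and the lag length 12 are arbitrary illustrative choices.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 500
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.8 * eps[t - 1] + rng.normal()      # serially correlated errors
y = 1.0 + 0.5 * x + eps

X = sm.add_constant(x)
plain = sm.OLS(y, X).fit()                                        # conventional SEs
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})  # Newey-West SEs
print(plain.bse)
print(hac.bse)
```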

What is the impact of autocorrelation on OLS standard error?

With positive autocorrelation (the typical case), conventional standard errors tend to understate the true sampling variability, so the estimated coefficients look more precise than they actually are. Correcting for autocorrelation gives a more honest measure of that variability.

Why does OLS standard error not correct for heteroskedasticity?

The standard error discussed in this thread imposes conditional homoskedasticity: the error variance and autocovariances $E(\varepsilon_t \varepsilon_{t-j})$ are assumed not to depend on the regressors, which is what allows them to be factored out of the sum. If the error variance does depend on the regressors (heteroskedasticity), that factorization fails, and the full heteroskedasticity-and-autocorrelation-consistent form $\sum_j E(\varepsilon_t x_t x_{t-j}' \varepsilon_{t-j})$ must be used instead.
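
For contrast with the fully robust version sketched earlier, here is a sketch that imposes that factorization, i.e. it corrects for autocorrelation only; the function name and lag choice are again illustrative.
```python
import numpy as np

def ols_autocorr_only_var(X, y, L=4):
    """Sandwich variance that corrects for autocorrelation but imposes
    conditional homoskedasticity: each term of S factors into
    E_T(x_t x_{t-j}') * E_T(eps_t eps_{t-j})  (Bartlett-weighted, L lags)."""
    T = X.shape[0]
    Exx = X.T @ X / T
    beta_hat = np.linalg.solve(Exx, X.T @ y / T)
    e = y - X @ beta_hat

    S = Exx * (e @ e / T)                          # j = 0 term
    for j in range(1, L + 1):
        w = 1.0 - j / (L + 1)                      # Bartlett weight
        Exx_j = X[j:].T @ X[:-j] / T               # E_T(x_t x_{t-j}')
        gamma_j = e[j:] @ e[:-j] / T               # E_T(eps_t eps_{t-j})
        S += w * gamma_j * (Exx_j + Exx_j.T)       # lag j and lead j terms
    Exx_inv = np.linalg.inv(Exx)
    return Exx_inv @ S @ Exx_inv / T               # Var(beta_hat)
```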

What are some limitations of using OLS standard error to correct for autocorrelation?

The infinite sum in $S$ has to be truncated and estimated from noisy sample autocovariances, so the correction can be unreliable in small samples, and the result depends on the chosen lag length and weighting scheme. Estimators such as Newey-West down-weight long lags to keep the estimated covariance matrix positive semi-definite. Alternatively, one can model the error process directly, as in Cochrane-Orcutt style GLS, which can be more efficient when the assumed error model is correct.
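
For completeness, here is a rough sketch of the Cochrane-Orcutt idea just mentioned (iteratively estimating an AR(1) coefficient from the residuals and quasi-differencing). All names are illustrative; a real implementation would also handle the first observation (Prais-Winsten) and check convergence.
```python
import numpy as np

def cochrane_orcutt(X, y, n_iter=10):
    """Minimal Cochrane-Orcutt sketch: estimate an AR(1) coefficient from the
    OLS residuals, quasi-difference the data, and re-run OLS, iterating."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)       # initial OLS estimate
    for _ in range(n_iter):
        e = y - X @ beta
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1]) # AR(1) coefficient estimate
        ys = y[1:] - rho * y[:-1]                  # quasi-differenced data
        Xs = X[1:] - rho * X[:-1]
        beta = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
    return beta, rho
```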
