Correcting Autocorrelation in a Model with Dummies

  • Thread starter coraUK
  • Start date
  • Tags: Model
In summary, the poster is asking how to obtain SPSS output that corrects for first-order autocorrelation in the dependent variable and gives appropriate beta estimates and significance levels. They use dummy variables for time and state effects and refer to a method described in an older paper, which estimates the autocorrelation coefficient from the residuals and quasi-differences the variables before rerunning the regression. The poster is seeking clarification on how to apply this method to their own data, which has a panel structure.
  • #1
coraUK
How can I get SPSS output that corrects for first-order autocorrelation in the dependent variable and gives me appropriate beta estimates and significance levels? I used dummy variables for time and state effects in a model; I have 22 YEAR dummies and 51 STATE dummies. The method I want to copy is explained in an older paper:



ε_ij = ρ·ε_i,j-1 + δ_ij



"Where ρ is the autocorrelation between the εijth and εij-1th errors and δij is a normally and independently distributed error with a constant variance across time and counties. The residuals from the weighted least squares fit were used to estimate rho. The dependent variable Yij was then transformed into Yij - ρ Yij-1. The regression analysis was rerun with these transformed independent and dependent variables." Can anyone explain what was done in this example and how I can do the same?
 
  • #2
It is a standard autocorrelation correction, except for the panel structure of the data. SPSS somehow needs to be "told" that the dataset is a panel (it has a cross-section dimension in addition to a time-series dimension), so that lags are taken within each state and never across state boundaries.
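Just to illustrate that last point outside of SPSS: with the data in long format, a within-unit lag can be computed like this (a minimal sketch in Python/pandas; the column names state, year, and y are made up for the example):

import pandas as pd

# hypothetical long-format panel: one row per state-year
df = pd.DataFrame({
    "state": ["AL", "AL", "AL", "AK", "AK", "AK"],
    "year":  [1990, 1991, 1992, 1990, 1991, 1992],
    "y":     [2.0, 2.3, 2.1, 1.1, 1.4, 1.2],
})

df = df.sort_values(["state", "year"])
# lag y within each state: the first year of each state gets NaN,
# so a lag never crosses a state boundary
df["y_lag"] = df.groupby("state")["y"].shift(1)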
 
  • #3


In this example, the researchers include dummy variables for time and state effects in their model, but the errors are serially correlated: each year's error is related to the previous year's error within the same state (first-order autocorrelation). The coefficient estimates are then inefficient and, more importantly, the reported standard errors and significance levels are unreliable.

To correct for this, the paper applies what is essentially the Cochrane–Orcutt transformation. The residuals from the initial fit are used to estimate the autocorrelation coefficient ρ, and then both the dependent variable and the independent variables are quasi-differenced: Y_ij is replaced by Y_ij - ρ·Y_i,j-1, and each regressor X_ij is replaced by X_ij - ρ·X_i,j-1 (note that the quoted passage says the regression was rerun with transformed independent and dependent variables, not just a transformed Y). The regression analysis is then rerun on the transformed variables.
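In symbols (a sketch of the algebra, with i indexing states and j indexing years as in the quoted paper): if the original model is

Y_ij = X_ij·β + ε_ij,    with ε_ij = ρ·ε_i,j-1 + δ_ij,

then subtracting ρ times the lagged equation gives

Y_ij - ρ·Y_i,j-1 = (X_ij - ρ·X_i,j-1)·β + δ_ij,

so an ordinary regression of the transformed Y on the transformed X estimates the same β, but with errors δ_ij that are no longer serially correlated.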

By using this transformation, the serial correlation is removed from the errors, so the rerun regression gives efficient estimates and valid significance levels. To replicate the method in SPSS, you can follow these steps (a rough code sketch of the same logic, outside SPSS, follows after the list):

1. Run the initial regression with the original dependent variable and all of the original independent variables, including the 22 YEAR and 51 STATE dummies (Analyze > Regression > Linear).

2. Save the residuals from this regression by clicking the "Save..." button in the Linear Regression dialog and checking "Unstandardized" under "Residuals".

3. Estimate the autocorrelation coefficient ρ from the saved residuals, for example by regressing each residual on its own one-year lag within the same state; the slope of that regression is the estimate of ρ. (The Durbin–Watson statistic, available under "Statistics" in the same dialog, gives a rough cross-check via ρ ≈ 1 - d/2.)

4. Create the transformed variables with Transform > Compute Variable and the LAG function: Y*_ij = Y_ij - ρ·Y_i,j-1, and likewise X*_ij = X_ij - ρ·X_i,j-1 for every independent variable. Because the data are a panel, make sure a lag never crosses a state boundary: sort by state and year, and set the first year of each state to missing or drop it.

5. Run the regression again using the transformed dependent variable and the transformed independent variables.

By following these steps you replicate the method used in the example and correct for first-order autocorrelation in your model.
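This is not SPSS, but if it helps to see the whole procedure in one place, here is a minimal sketch of the same logic in Python with pandas and statsmodels. The file name panel.csv and the column names state, year, y, and x are assumptions for the example; substitute your own variables. (The paper's initial fit was weighted least squares; plain OLS is used here for simplicity.)

import pandas as pd
import statsmodels.api as sm

# assumed long-format panel: one row per state-year,
# with dependent variable y and one substantive regressor x
df = pd.read_csv("panel.csv")
df = df.sort_values(["state", "year"]).reset_index(drop=True)

# Step 1: OLS with YEAR and STATE dummies
dummies = pd.get_dummies(df[["state", "year"]].astype(str), drop_first=True)
X = sm.add_constant(pd.concat([df[["x"]], dummies], axis=1)).astype(float)
ols = sm.OLS(df["y"], X).fit()

# Steps 2-3: estimate rho by regressing the residuals on their own
# one-year lag taken within each state
res = ols.resid
res_lag = res.groupby(df["state"]).shift(1)
keep = res_lag.notna()
rho = sm.OLS(res[keep], res_lag[keep]).fit().params.iloc[0]

# Step 4: quasi-difference y and every regressor within each state
y_star = df["y"] - rho * df.groupby("state")["y"].shift(1)
X_star = X - rho * X.groupby(df["state"]).shift(1)

# Step 5: rerun the regression on the transformed data
# (the first year of each state drops out because its lag is missing)
co = sm.OLS(y_star[keep], X_star[keep]).fit()
print(co.summary())

Dropping the first year of each state is the Cochrane–Orcutt convention; the Prais–Winsten variant instead keeps those first observations, rescaled by sqrt(1 - ρ²).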
 

FAQ: Correcting Autocorrelation in a Model with Dummies

What is autocorrelation in a model with dummies?

Autocorrelation in a model with dummies refers to correlation among the error terms of the regression model: each error is related to the errors that come before it in time (serial correlation), so the errors, and hence the residuals, are not independent of one another.

Why is it important to correct for autocorrelation in a model with dummies?

It is important to correct for autocorrelation in a model with dummies because it violates the assumption that the error terms are independent. The coefficient estimates typically remain unbiased (provided no lagged dependent variable is included), but they are inefficient, and the standard errors and significance tests become unreliable, which makes inference from the model misleading.

How can autocorrelation be detected in a model with dummies?

Autocorrelation can be detected through visual inspection of the residual plot, or by using statistical tests such as the Durbin–Watson test or the Breusch–Godfrey test. The Durbin–Watson statistic is close to 2 when there is no first-order autocorrelation, while the Breusch–Godfrey test also handles higher-order autocorrelation and provides a p-value for its significance.
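As a sketch, both tests are easy to run on regression output outside SPSS as well (Python/statsmodels, with made-up data just to show the calls):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# made-up series with AR(1) errors, just to have something to test
rng = np.random.default_rng(0)
x = rng.normal(size=200)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(durbin_watson(fit.resid))   # near 2 = no autocorrelation, well below 2 = positive
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(fit, nlags=1)
print(lm_pval)                    # small p-value suggests autocorrelation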

What are the methods for correcting autocorrelation in a model with dummies?

There are several methods for correcting autocorrelation in a model with dummies. One approach is to include lagged variables in the model to absorb the correlation between the error terms. Another is to use robust standard errors, such as Newey–West (HAC) standard errors, which leave the coefficients unchanged but make the standard errors valid in the presence of autocorrelation. Finally, the error process itself can be modeled, for example with an AR(1) structure as in the Cochrane–Orcutt/Prais–Winsten procedure described above, or with more general ARMA error models.
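For example, Newey–West (HAC) standard errors can be requested directly when fitting an OLS model in statsmodels (a sketch with made-up data; the coefficients are the same as plain OLS, only the standard errors change):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)   # substitute your own data here

X = sm.add_constant(x)
# standard errors robust to autocorrelation (and heteroskedasticity) up to 3 lags
hac_fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
print(hac_fit.bse)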

Can autocorrelation be completely eliminated in a model with dummies?

In most cases, it is not possible to completely eliminate autocorrelation in a model with dummies. However, it can be reduced to a negligible level through the use of appropriate correction methods. Additionally, the impact of autocorrelation on the model results can be minimized by including other relevant variables in the model and using robust estimation techniques.
