I am trying to use a generalized least squares model (gls in R) on my panel data to deal with an autocorrelation problem. I do not want to include any lags for any variables.

I am trying to use the Durbin-Watson test (dwtest in R) to check for autocorrelation in my generalized least squares (gls) model. However, I find that dwtest does not work on gls objects, while it works with other model functions such as lm.

Is there a way to check for autocorrelation in my gls model?
The Durbin-Watson test is designed to check for the presence of autocorrelation in standard least-squares models (such as one fitted by lm). If autocorrelation is detected, one can then capture it explicitly in the model using, for example, generalized least squares (gls in R). My understanding is that Durbin-Watson is not appropriate for then testing the "goodness of fit" of the resulting model, as gls residuals may no longer follow the same distribution as residuals from a standard lm model. (Somebody with deeper knowledge of statistics should correct me if I'm wrong.)
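For context, here is a minimal sketch of what "capturing the autocorrelation explicitly" might look like with nlme::gls, using an AR(1) structure for panel data. The data frame df and the variables y, x, time, and id are hypothetical placeholders, not from the question:

library(nlme)

# Fit by generalized least squares, modelling AR(1) serial correlation
# within each panel unit (id) over time
fit <- gls(y ~ x,
           data = df,
           correlation = corAR1(form = ~ time | id))
summary(fit)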
With that said, the function durbinWatsonTest from the car package will accept arbitrary residuals and return the associated test statistic. You can therefore do something like this:
v <- gls( ... )$residuals        # extract residuals from the fitted gls model
attr(v, "std") <- NULL           # get rid of the additional attribute
car::durbinWatsonTest( v )       # compute the Durbin-Watson statistic
Note that durbinWatsonTest will compute p-values only for lm models (likely due to the considerations described above), but you can estimate them empirically by permuting your data / residuals, as sketched below.