Hi,
I have a question about stationarity in time series.
I think I basically understand the concept, but I don't understand why the lag should affect the joint distribution.
For example, in a stationary series the joint distribution of $(Y_t, Y_{t+a})$ should be the same as the joint distribution of $(Y_p, Y_{p+a})$. If $a$ were increased (by the same amount in both pairs), the two joint distributions should still be equal to each other. Is that correct?
And if $Y_{p+a}$ is replaced by some other value, say $Y_{p+b}$, then surely the joint distribution of $(Y_p, Y_{p+b})$ would still be the same in a stationary time series. Does anyone know if this is correct?
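To experiment with this, I put together a small simulation sketch (my own code, using an AR(1) process with $\phi = 0.8$ as an arbitrary example of a stationary series). It estimates the sample correlation between pairs of observations across many independent paths, for different starting times and different lags:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many independent paths of a stationary AR(1) process:
# Y_t = phi * Y_{t-1} + e_t, with e_t ~ N(0, 1) and |phi| < 1.
phi = 0.8
n_paths, n_steps = 100_000, 60

# Start each path from the stationary marginal N(0, 1 / (1 - phi^2))
# so the whole series is stationary from t = 0 onward.
y = np.empty((n_paths, n_steps))
y[:, 0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - phi**2), size=n_paths)
for t in range(1, n_steps):
    y[:, t] = phi * y[:, t - 1] + rng.normal(0.0, 1.0, size=n_paths)

def corr(a, b):
    """Sample correlation across paths."""
    return np.corrcoef(a, b)[0, 1]

# Same lag a, different starting times t = 10 and p = 30:
a = 5
print(corr(y[:, 10], y[:, 10 + a]))  # ~ phi**5 = 0.328
print(corr(y[:, 30], y[:, 30 + a]))  # ~ phi**5 = 0.328

# Same starting time, a different lag b:
b = 20
print(corr(y[:, 10], y[:, 10 + b]))  # ~ phi**20 = 0.0115
```

The correlation (one feature of the joint distribution) seems to stay the same when I shift the starting time, but changes when I change the lag, which is what prompted the question above.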