Stationary time series with seasonality...

In summary, stationary time series with seasonality exhibit consistent statistical properties over time, including a constant mean and variance, while also displaying regular, predictable patterns that repeat at specific intervals. Analyzing such time series often involves techniques to remove or account for seasonal effects, allowing for clearer insights into underlying trends and behaviors. Key methods include decomposition, differencing, and the application of seasonal adjustment processes to enhance forecasting accuracy.
  • #1
fog37
TL;DR Summary
seasonal component and stationarity
Hello,
I was under the impression that a time series with trend, seasonality, and cyclic component would automatically be nonstationary.
Stationarity (in the weak sense) means constant mean and variance, with an autocorrelation that depends only on the lag.

However, it seems that we could have a stationary time series that has a seasonal component... How is that possible? A seasonal component is simply a fluctuating, periodic signal with a constant period (e.g., a sine wave)... Wouldn't the seasonal component make the signal appear statistically different depending on whether our window of time catches the series during the upswing or the downswing of the seasonal component?
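For a deterministic seasonal component, the intuition can be checked numerically (a numpy sketch with made-up amplitudes): the ensemble mean at a fixed point in the cycle is not the overall mean, so the process cannot be weakly stationary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reals, n = 10000, 24
t = np.arange(n)
# Many independent realizations of: deterministic sine (period 12) + white noise
y = np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, size=(n_reals, n))

mean_peak = y[:, 3].mean()    # at t=3 the sine is at its peak, +1
mean_trough = y[:, 9].mean()  # at t=9 the sine is at its trough, -1
# The mean depends on t, so the process is not (weakly) stationary.
```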

Thank you!
 
  • #2
fog37 said:
TL;DR Summary: seasonal component and stationarity

Hello,
I was under the impression that a time series with trend, seasonality, and cyclic component would automatically be nonstationary.
Stationarity (in the weak sense) means constant mean and variance, with an autocorrelation that depends only on the lag.

However, it seems that we could have a stationary time-series that has a seasonal component...
I don't think that ARIMA or SARIMA are considered to be stationary. For their analysis, they are often related to stationary time series by removing the seasonality and trend.
 
  • #3
Thank you. I learned that:

To remove a linear trend, we can use 1st differencing: ##D_1 y_t = y_t - y_{t-1}##
To remove a quadratic trend, we can use 2nd differencing: ##D_2 y_t = D_1(D_1 y_t) = y_t - 2y_{t-1} + y_{t-2}##
To remove seasonality, we can also use differencing, but the lag of the difference must match the period of the seasonal component. For example, if the seasonality, after inspection, has period 12 (days, months, etc.), we remove it with the seasonal difference ##D_{12} y_t = y_t - y_{t-12}##.
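In pandas (assumed available here; the series and its coefficients are made up for illustration), these differences are one-liners:

```python
import pandas as pd

# Toy monthly series: linear trend plus an exact period-12 seasonal spike
idx = pd.date_range("2015-01-01", periods=48, freq="MS")
t = pd.Series(range(48), index=idx, dtype=float)
y = 0.5 * t + 3.0 * (t % 12 == 0)

d1 = y.diff(1)             # D_1: y_t - y_(t-1), removes the linear trend
d12 = y.diff(12)           # D_12: y_t - y_(t-12), removes the seasonality
both = y.diff(1).diff(12)  # applying both removes trend and seasonality
```

On this noise-free toy series, `d12` is the constant 6 everywhere it is defined (the trend slope 0.5 times the period 12), and `both` is identically zero.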

This brings us back to ARMA vs ARIMA vs SARIMA. If we know our signal has both a linear trend and 12-month seasonality, we could either:

a) stationarize ##y_t## by applying ##D_1## and ##D_{12}## to get the (trend-free and seasonality-free) stationary signal ##g_t##, build an ARMA model with it, make predictions, and finally inverse-transform the predictions so they apply correctly to the original data

b) after data visualization, directly use the nonstationary ##y_t##, with trend and seasonality, to build a SARIMA model, taking into account that the nonseasonal integration order should be ##d=1## to remove the linear trend and that the seasonal period should be ##s=12## for the seasonal differencing. Once we fit the SARIMA model, we make predictions; since the differencing is built into the model, the predictions are already on the scale of the original data ##y_t##...

The outcomes of approaches a) and b) should be essentially the same...
 
  • #4
FactChecker said:
I don't think that ARIMA or SARIMA are considered to be stationary. For their analysis, they are often related to stationary time series by removing the seasonality and trend.
Hello FactChecker,

I have been thinking: a nonstationary time series ##y(t)## can be decomposed into the sum of its trend ##T(t)## and seasonality ##S(t)##:
$$y(t)=T(t)+S(t)+error(t)$$
To trend-stationarize the series ##y(t)## (i.e., remove the trend), we usually difference the series... why don't we just compute ##g(t) = y(t) - T(t)## instead of differencing ##y(t)##?
Subtracting ##T(t)## would seem to take care of the trend removal perfectly. The new series ##g(t)## is now trend-free.
The same goes for removing the seasonality ##S(t)##: ##g(t) = y(t) - T(t) - S(t)##. Of course, we are then left with only ##error(t)##.
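The subtract-the-estimated-components idea can be sketched with numpy (toy series; a linear fit and seasonal averages are just one simple choice of estimators for ##T(t)## and ##S(t)##):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120)
# Toy series: linear trend + period-12 sine + noise
y = 0.4 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 120)

# Estimate T(t) by linear regression and subtract it
b, a = np.polyfit(t, y, 1)          # fitted slope and intercept
detrended = y - (a + b * t)

# Estimate S(t) by averaging each seasonal position, then subtract it
seasonal_means = np.array([detrended[t % 12 == k].mean() for k in range(12)])
residual = detrended - seasonal_means[t % 12]
# residual is (an estimate of) the error(t) term alone
```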

So what are we left with when we use differencing on ##y(t)## to get a signal ##g(t)## that is trend- and seasonality-free?

Thank you,
Brett
 
  • #5
If you already know what T and S are, what analysis is even left for you to do? Usually the point is you don't actually know what the function is.
 
  • #6
Office_Shredder said:
If you already know what T and S are, what analysis is even left for you to do? Usually the point is you don't actually know what the function is.
Well, the idea is generally to use that time series to build a statistical model like AR, MA, ARMA, ARIMA, etc. These models require the series used to build them to be stationary. So if our series is not stationary, we need to make it stationary via transformations like differencing....

However, during exploratory analysis, we can use functions (ex: in Python) that can separate out the components so we know what kind of time series we are dealing with and figure out the appropriate transformations to stationarize it:

[Attached: plots showing the series decomposed into trend, seasonal, and residual components]
 
  • #7
fog37 said:
Well, the idea is generally to use that time series to build a statistical model like AR, MA, ARMA, ARIMA, etc. These models require the series used to build them to be stationary. So if our series is not stationary, we need to make it stationary via transformations like differencing....

However, during exploratory analysis, we can use functions (ex: in Python) that can separate out the components so we know what kind of time series we are dealing with and figure out the appropriate transformations to stationarize it:

What tools or processes are you considering using to analyze a time series? R is a popular statistical package. It has a function, decompose, which helps you to see the trend, seasonal, and irregular components. (see this) But it seems that their time series analysis function, acf, which determines autocorrelations works on the original time series. I have never used R for a complete time series analysis, so I will have to leave this for others to say more.
 
  • #8
FactChecker said:
What tools or processes are you considering using to analyze a time series? R is a popular statistical package. It has a function, decompose, which helps you to see the trend, seasonal, and irregular components. (see this) But it seems that their time series analysis function, acf, which determines autocorrelations works on the original time series. I have never used R for a complete time series analysis, so I will have to leave this for others to say more.
We find the trend ##T(t)## using regression (linear, polynomial, etc) and subtract it from the series ##y(t)##:
$$g(t) = y(t)- T(t)$$
OR we do differencing, i.e. we get $$g(t)= y(t) - y(t-1)$$...

The results are different, but both ##g(t)## are trend-free now... Which one should we use to make ##y(t)## stationary, mathematically?
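The difference between the two detrending operations is easy to see on a noise-free toy example:

```python
import numpy as np

t = np.arange(100, dtype=float)
y = 2.0 + 0.5 * t            # pure linear trend, for illustration only

g_reg = y - (2.0 + 0.5 * t)  # subtract the (here exactly known) trend T(t)
g_diff = np.diff(y)          # 1st difference: y_t - y_(t-1)

# g_reg is identically 0, while g_diff is the constant slope 0.5;
# both are trend-free, but they are different series.
```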
 
  • #9
fog37 said:
We find the trend ##T(t)## using regression (linear, polynomial, etc) and subtract it from the series ##y(t)##:
$$g(t) = y(t)- T(t)$$
OR we do differencing, i.e. we get $$g(t)= y(t) - y(t-1)$$...

The results are different, but both ##g(t)## are trend-free now... Which one should we use to make ##y(t)## stationary, mathematically?
It seems more complicated than that. I think they are not necessarily trend free. A model can combine the two, e.g. a random walk with drift, ##y_i = y_{i-1} + c + g_i##, in which case you may need to apply both "detrending" steps.
Also, you need to consider how the random error terms enter in. Consider the difference in how the random term accumulates in these two models:
##y_i = g_i + T_i + \epsilon_i## versus ##y_i = g_i + y_{i-1} + \epsilon_i##, where ##g## is a simple stationary time series and ##\epsilon## is stationary white noise.
In the first model, the random component is a single random sample from ##\epsilon##. In the second model, the random component accumulates, summing all the ##\epsilon_j##s associated with the prior ##y_j## values.
Which model is most appropriate is something for you to decide, based on the data or on the subject matter.
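The point about error accumulation can be illustrated by simulating many realizations of each model (a numpy sketch; unit-variance noise and a simple linear ##T_i## are assumed for concreteness):

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_reals = 200, 2000
eps = rng.normal(0, 1, size=(n_reals, n))

# Trend-stationary: y_i = T_i + eps_i -> variance stays at sigma^2
trend_stat = 0.1 * np.arange(n) + eps

# Unit root (random walk): y_i = y_(i-1) + eps_i -> variance grows like i*sigma^2
random_walk = eps.cumsum(axis=1)

var_ts_end = trend_stat[:, -1].var()   # stays near 1
var_rw_end = random_walk[:, -1].var()  # grows near n = 200
```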
 

FAQ: Stationary time series with seasonality...

What is a stationary time series with seasonality?

A stationary time series with seasonality is one whose statistical properties, such as the mean and variance, do not change over time, but whose autocorrelation structure shows regular, repeating patterns at seasonal lags. The seasonality can be daily, monthly, yearly, etc. Note that a purely deterministic seasonal component makes the mean depend on time, so a seasonal series can only be stationary if the seasonality is stochastic (e.g., a seasonal ARMA process).

How can you identify seasonality in a time series?

Seasonality in a time series can be identified using various methods, such as plotting the data and visually inspecting for repeating patterns, using autocorrelation functions (ACF) to detect periodic correlations, or employing spectral analysis to find dominant frequencies in the data.

What are common methods to model stationary time series with seasonality?

Common methods to model stationary time series with seasonality include Seasonal Autoregressive Integrated Moving Average (SARIMA), Exponential Smoothing State Space Model (ETS), and seasonal decomposition techniques like STL (Seasonal and Trend decomposition using Loess).

How do you test for stationarity in a time series?

Stationarity in a time series can be tested using statistical tests such as the Augmented Dickey-Fuller (ADF) test, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test, or the Phillips-Perron (PP) test. These tests help determine whether the time series has a unit root and hence is non-stationary.

Can seasonality affect the stationarity of a time series?

Yes, seasonality can affect the stationarity of a time series. If the seasonal component is not properly accounted for, it can introduce non-stationarity. To achieve stationarity, the seasonal component can be removed or modeled separately.
