Time series analysis and data transformation

In summary, time series analysis involves examining data points collected or recorded at specific time intervals to identify trends, seasonal patterns, and cyclical movements. Data transformation in this context refers to techniques applied to prepare and enhance the data for analysis, such as normalization, differencing, and smoothing. These processes help improve the accuracy of forecasts and insights derived from the time series data, making it easier to interpret and model the underlying patterns effectively.
  • #1
fog37
TL;DR Summary
time series analysis and transformations
Hello,
Many time-series forecasting models (AR, ARMA, ARIMA, SARIMA, etc.) require the time-series data to be stationary.

But often, due to seasonality, trend, etc., we start with an observed time series that is not stationary. So we apply transformations to the data so it becomes stationary. Essentially, we get a new, stationary time series which we use to fit the model (AR, ARMA, etc.). But the transformed data is very different from the original data... Isn't the model supposed to work with data like the original data? I.e., isn't the goal to build a model that describes, and can forecast, data that looks like the original data, not like the transformed data?

Thanks!
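To make the question concrete, here is a minimal sketch (with made-up numbers) of the kind of transformation being discussed: first differencing removes a linear trend, and the original series can be rebuilt exactly from the first value plus the differences.

```python
# Minimal sketch: a trending (non-stationary) series becomes stationary
# after first differencing; cumulative summing inverts the transform.
series = [2.0 * t + 5.0 for t in range(10)]   # linear trend, clearly non-stationary

# First difference: d_t = y_t - y_{t-1}
diffs = [series[t] - series[t - 1] for t in range(1, len(series))]
print(diffs)   # constant 2.0 everywhere: the trend is gone

# Inverse transform: rebuild the original from the first value + running sum
rebuilt = [series[0]]
for d in diffs:
    rebuilt.append(rebuilt[-1] + d)
print(rebuilt == series)   # True: no information was lost
```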
 
  • #2
fog37 said:
TL;DR Summary: time series analysis and transformations

Isn't the model supposed to work with data like the original data, i.e. isn't the goal to build a model that describes and can make forecasting on data that looks like the original data, not like the transformed data?
As long as there is an inverse transform then you can get back to the original scale.

The usual problem with computing the statistics on the transformed data is that the residuals usually have different properties. Assumptions on the residual distribution hold on the transformed scale, and when inverse transformed they may be quite different.
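A small simulation (purely illustrative, with arbitrary numbers) of Dale's point about residuals: errors that are symmetric with mean zero on the log scale become skewed and biased upward once inverse-transformed with exp.

```python
import math
import random
import statistics

# Residuals that are well behaved (Gaussian, mean 0) on the transformed
# (log) scale are skewed after inverse-transforming back with exp().
random.seed(0)
log_residuals = [random.gauss(0.0, 0.5) for _ in range(20000)]

back = [math.exp(e) for e in log_residuals]   # inverse transform

print(statistics.mean(log_residuals))  # close to 0: symmetric on the log scale
print(statistics.mean(back))           # close to exp(0.5**2 / 2) ~ 1.13, not 1:
                                       # the back-transformed errors are biased upward
```

This is why, e.g., naively exponentiating a forecast made on log-transformed data gives the median rather than the mean of the original-scale distribution.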
 
  • #3
There are many levels and definitions of "stationary". See Stationary process. A lot of people would not consider an ARIMA or SARIMA process to be stationary in the simplest sense.
 
  • #4
Dale said:
As long as there is an inverse transform then you can get back to the original scale.

The usual problem with computing the statistics on the transformed data is that the residuals usually have different properties. Assumptions on the residual distribution hold on the transformed scale, and when inverse transformed they may be quite different.
Ok, I guess the key word is "inverse transformation". We convert the original signal into a new signal, create a model for the new signal, make predictions, and finally apply an inverse transformation to the predictions, which then make sense for the original data...
It is the same thing as when we convert a time-domain signal ##f(t)## into its frequency-domain version ##F(\omega)##, solve the problem in the frequency domain, get a frequency-domain solution, and convert that solution back to the time domain...
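That transform–model–invert workflow can be sketched end to end with a deliberately trivial "model" (forecasting each difference by the mean difference; all numbers here are made up), just to show where the inverse transform enters:

```python
# Workflow: transform (difference) -> fit model -> forecast -> inverse transform.
series = [3.0, 5.5, 8.0, 10.5, 13.0, 15.5]   # trend: +2.5 per step

diffs = [series[t] - series[t - 1] for t in range(1, len(series))]
mean_diff = sum(diffs) / len(diffs)          # "model" fitted on the stationary series

# Forecast 3 steps ahead on the differenced scale, inverting as we go
# (the inverse of differencing is a running sum starting from the last observation)
forecasts = []
last = series[-1]
for _ in range(3):
    last = last + mean_diff
    forecasts.append(last)
print(forecasts)   # [18.0, 20.5, 23.0] -- back on the original scale
```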
 
  • #5
Yes, that is a good example.
 
  • #6
One realization I just had is that time-series models like AR, MA, ARMA, etc. seem to just be discrete-time analogues of ODEs, i.e. difference equations... But these linear models are generally used to make predictions/extrapolations of unknown values of ##y_t## without ever arriving at a closed-form solution ##y = f(t)##, correct? Why not?

For example, a fitted AR(1) model is something like this: $$y_t = a y_{t-1}$$ which can be converted to the ODE model $$y_t = \frac {a} {a-1} y'$$

Why not solve for ##y_t## instead of keeping it as ##y_t = a y_{t-1}##?
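For the deterministic recursion in the post above there actually is a closed-form solution, ##y_t = y_0 a^t##, analogous to solving the ODE. A quick numerical check (with arbitrary values of ##a## and ##y_0##):

```python
# Deterministic AR(1) recursion y_t = a*y_{t-1} vs. its closed form y_t = y_0 * a**t
a, y0 = 0.9, 10.0

y = y0
for t in range(1, 6):
    y = a * y                      # iterate the recursion 5 steps

closed = y0 * a**5                 # closed-form "solution"
print(abs(y - closed) < 1e-12)     # True: they agree

# The interesting question is why this closed form is NOT used for forecasting
# once a random noise term is added at every step -- see the reply below.
```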
 
  • #7
fog37 said:
Why not solve for ##y_t## instead of keeping it as ##y_t = a y_{t-1}##?
Because the direct solution for ##y_t## includes the cumulative random terms of all the preceding time steps, which can have a huge random variance. On the other hand, if you know the value of ##y_{t-1}##, why not use it? The random variance is then from just one time step and is relatively small.
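A quick simulation of that point (all parameters made up): in a noisy AR(1), the one-step-ahead error given ##y_{t-1}## carries a single noise term, while a forecast made from ##y_0## accumulates the noise of every step in between.

```python
import random
import statistics

# Noisy AR(1): y_t = a*y_{t-1} + eps_t, eps_t ~ N(0, sigma^2)
random.seed(1)
a, sigma, t_steps, n_runs = 0.9, 1.0, 20, 5000

one_step_err, long_err = [], []
for _ in range(n_runs):
    y = 0.0
    for _ in range(t_steps):
        y = a * y + random.gauss(0.0, sigma)   # run the process forward
    y_prev = y
    y_next = a * y_prev + random.gauss(0.0, sigma)
    one_step_err.append(y_next - a * y_prev)   # error knowing y_{t-1}: one noise term
    long_err.append(y_next)                    # error forecasting from y_0 = 0
                                               # (best forecast is the mean, 0)

print(statistics.pstdev(one_step_err))   # ~ sigma = 1
print(statistics.pstdev(long_err))       # ~ sigma/sqrt(1 - a**2) ~ 2.3, much larger
```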
 
  • #8
I was thinking the following in regards to transformations, inverse transformations, ARMA, ARIMA and SARIMA.

ARMA is meant to model time series that are weakly stationary (constant mean, variance, autocorrelation). To train an ARMA model, the training signal ##y(t)## must therefore be stationary. If it is not, we need to apply transformations to make it so, and apply the inverse transformations at the very end.

With ARIMA, we avoid doing the stationarizing step manually, since the ##I(d)## part of ARIMA automatically makes an input signal with a trend stationary, if it is not already, by taking differences...
But I guess differencing does not remove the seasonal component from ##y(t)##? Does that mean we would need to remove seasonality manually before using ARIMA?

The best solution then seems to be SARIMA, which does not care whether the training signal has trend and/or seasonality because it takes care of both internally: we don't need to manually apply any transformations to the raw time series ##y(t)## or inverse transformations to the predictions of the SARIMA model...

Any mistake in my understanding? I would definitely choose SARIMA as more convenient, since we can skip all those preprocessing transformations to make ##y(t)## stationary and the inverse transformations after forecasting...
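The guess above about differencing is right, and it is easy to check numerically (synthetic series, made-up parameters): ordinary first differencing removes a linear trend but leaves the periodic part intact, whereas a seasonal difference ##y_t - y_{t-s}## removes both.

```python
import math

s = 4   # season length (e.g. quarterly data)
# Linear trend + sinusoidal seasonality with period s
series = [0.5 * t + math.sin(2 * math.pi * t / s) for t in range(16)]

first_diff = [series[t] - series[t - 1] for t in range(1, len(series))]
seasonal_diff = [series[t] - series[t - s] for t in range(s, len(series))]

print(max(first_diff) - min(first_diff))        # large (~2): seasonality survives
print(max(seasonal_diff) - min(seasonal_diff))  # ~0: trend and seasonality both gone
```

This is exactly the transformation the seasonal differencing term in SARIMA applies internally.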
 
  • #9
fog37 said:
Any mistake in my understanding? I would definitely choose SARIMA, more convenient, since we can skip all those preprocessing transformations to make ##y(t)## stationary and inverse transformations after the forecasting...
That is a natural thought. But you should avoid anything like "throwing everything at the wall to see what sticks". A time-series analysis tool-set might allow you to automate finding a SARIMA solution that includes only the terms that are statistically significant. But you should have some subject-matter reason to include the trend and seasonal terms. A good tool-set should also allow you to prevent the inclusion of terms that do not make sense.
 

FAQ: Time series analysis and data transformation

What is time series analysis?

Time series analysis is a statistical technique that deals with time-ordered data points. It involves methods for analyzing time series data to extract meaningful statistics and identify characteristics such as trends, seasonality, and cyclic patterns. This type of analysis is widely used in various fields such as economics, finance, environmental science, and engineering to forecast future values based on historical data.

What are the common methods used in time series analysis?

Common methods used in time series analysis include Autoregressive Integrated Moving Average (ARIMA), Seasonal Decomposition of Time Series (STL), Exponential Smoothing State Space Model (ETS), and machine learning approaches like Long Short-Term Memory (LSTM) networks. Each method has its strengths and is chosen based on the specific characteristics of the time series data and the goals of the analysis.

What is data transformation in the context of time series analysis?

Data transformation in time series analysis involves modifying the data to make it suitable for analysis and modeling. This can include techniques such as differencing to remove trends, logarithmic transformations to stabilize variance, normalization to scale data, and seasonal adjustment to remove seasonal effects. The goal of data transformation is to prepare the data in a way that enhances the performance of analytical models.
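As a small illustration of the variance-stabilizing transform mentioned above (synthetic numbers): multiplicative growth produces increments that grow with the level of the series, while on the log scale the increments are constant.

```python
import math

# 10% multiplicative growth per step: variance of changes rises with the level
series = [100.0 * 1.1**t for t in range(8)]

raw_incr = [series[t] - series[t - 1] for t in range(1, len(series))]
log_incr = [math.log(series[t]) - math.log(series[t - 1])
            for t in range(1, len(series))]

print(raw_incr[-1] / raw_incr[0])     # ~1.77: raw increments grow with the level
print(max(log_incr) - min(log_incr))  # ~0: constant log(1.1) per step after the transform
```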

Why is stationarity important in time series analysis?

Stationarity is a crucial property in time series analysis because many statistical methods assume that the underlying time series data is stationary, meaning its statistical properties like mean and variance do not change over time. Non-stationary data can lead to unreliable and misleading results. Techniques such as differencing, detrending, and seasonal adjustment are often used to achieve stationarity before applying time series models.

How can you evaluate the performance of a time series model?

The performance of a time series model can be evaluated using various metrics and techniques. Commonly used metrics include Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE). Cross-validation techniques, such as rolling forecasting origin or time series split, can also be employed to assess how well the model generalizes to unseen data. Visual inspection of residual plots and comparing forecasted values with actual values are additional methods for evaluating model performance.
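The metrics listed above are straightforward to compute by hand; here is a sketch on hypothetical forecasts and actuals (all values invented for illustration):

```python
import math

# Hypothetical forecasts vs. actual observations
actual   = [10.0, 12.0, 11.0, 13.0]
forecast = [11.0, 11.0, 12.0, 12.0]

errors = [f - a for f, a in zip(forecast, actual)]
mae  = sum(abs(e) for e in errors) / len(errors)                  # Mean Absolute Error
mse  = sum(e * e for e in errors) / len(errors)                   # Mean Squared Error
rmse = math.sqrt(mse)                                             # Root MSE
mape = 100.0 * sum(abs(e) / abs(a)                                # Mean Abs. % Error
                   for e, a in zip(errors, actual)) / len(errors)

print(mae, rmse, round(mape, 2))   # MAE = 1.0, RMSE = 1.0, MAPE ~ 8.78%
```

Note that MAPE is undefined when any actual value is zero, which is one reason MAE/RMSE are often preferred.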
