What is an Autoregressive Model?
An Autoregressive (AR) model is a statistical tool for predicting future data points in a time series from the points that precede them. The model assumes that each value in the series is strongly influenced by its past values, and it quantifies those relationships with a set of coefficients: the next value in the timeline is forecast by regressing on the preceding time steps.
An Autoregressive model can be represented as:
y(t) = c + w1*y(t-1) + w2*y(t-2) + ... + wp*y(t-p) + e(t)
In this equation:
- y(t) is the current value of the time series
- y(t-1) is the value at the previous time step
- y(t-2) is the value from two time steps in the past, and so on
- w1, w2, ..., wp are the autoregressive coefficients, which quantify how strongly each past value influences the current one
- c is a constant bias term, and e(t) is a random error term
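The forecast equation above can be sketched in a few lines of Python. This is a minimal illustration, assuming the constant c and the coefficients w1..wp have already been estimated; the values used here are made up for demonstration.

```python
# Minimal sketch of the AR(p) forecast equation; the coefficients are
# assumed to be already estimated (here they are illustrative values).
def ar_predict(history, coeffs, c):
    """Forecast the next value from the p most recent observations.

    history: the series so far, oldest value first.
    coeffs:  [w1, ..., wp], where w1 multiplies y(t-1), w2 multiplies
             y(t-2), and so on.
    """
    p = len(coeffs)
    # Reverse the last p observations so w1 pairs with y(t-1),
    # w2 with y(t-2), etc.
    lags = history[-p:][::-1]
    return c + sum(w * y for w, y in zip(coeffs, lags))

series = [1.0, 1.2, 1.5, 1.9]
# AR(2) example: y(t) = 0.1 + 0.6*y(t-1) + 0.3*y(t-2)
next_value = ar_predict(series, [0.6, 0.3], c=0.1)
print(next_value)  # 0.1 + 0.6*1.9 + 0.3*1.5 = 1.69
```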
Autoregressive language models, a family of machine learning models, apply the same idea to text: they predict the next word from the words that precede it, which underlies tasks in natural language processing such as machine translation.
In R, the arima() function can fit an autoregressive model to a time series; an AR(p) model corresponds to ARIMA(p, 0, 0).
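For readers working outside R, the fitting step can be sketched in Python. The hand-rolled ordinary-least-squares AR(1) fit below is an illustration of the idea, not R's arima() routine; in practice one would reach for a statistics library.

```python
# Hand-rolled AR(1) fit by ordinary least squares: regress y(t) on y(t-1).
# Illustrative sketch only; not equivalent to R's arima().
def fit_ar1(series):
    """Return (c, w1) minimising the sum of (y(t) - c - w1*y(t-1))^2."""
    x = series[:-1]  # lagged values y(t-1)
    y = series[1:]   # current values y(t)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Standard simple-regression slope and intercept.
    w1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    c = my - w1 * mx
    return c, w1

# Noise-free series generated by y(t) = 2 + 0.5*y(t-1); the fit
# recovers the true parameters.
series = [10.0]
for _ in range(10):
    series.append(2 + 0.5 * series[-1])
c, w1 = fit_ar1(series)
print(round(c, 6), round(w1, 6))  # 2.0 0.5
```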
There are several variants of the autoregressive model, such as the Vector Autoregressive (VAR) model and the Conditional Autoregressive (CAR) model. VAR models the relationships among several time series simultaneously, while CAR models spatial data under the assumption that a variable's value at one location depends on the values at neighboring locations.
In a typical autoregressive model, predictions rest on the correlation between the current value and the values at earlier steps, the 'lag variables.' This correlation can be positive (the two move in the same direction) or negative (they move in opposite directions), and its strength indicates how well the past can predict the future. Because it relates a variable to its own history, it is known as "autocorrelation."
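The autocorrelation at a given lag can be computed directly as the correlation of the series with a shifted copy of itself. A minimal sketch using only the standard library:

```python
# Sample lag-k autocorrelation: how strongly the series correlates with
# a copy of itself shifted back by `lag` steps.
def autocorr(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean)
              for t in range(lag, n))
    return cov / var

up_trend = [1, 2, 3, 4, 5, 6, 7, 8]
alternating = [1, -1, 1, -1, 1, -1, 1, -1]
print(autocorr(up_trend, 1))     # positive: consecutive values move together
print(autocorr(alternating, 1))  # negative: values flip direction each step
```

A strongly positive or negative value at some lag suggests that lag is a useful predictor; values near zero suggest the past carries little information about the future.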
If the lag variables show weak or no correlation with the output, the time series has low predictability, and little can be gained by modeling it, whether with an autoregressive model or a deep neural network.