One of the most basic and useful time series models is the order-1 (one-lag) autoregressive model, denoted AR(1) and given by Y_t − μ = ρ(Y_{t−1} − μ) + e_t, where Y_t is the observation at time t, μ is the long-run mean of the series, and e_t is a sequence of independent random variables. We use this venerable model to illustrate the Dickey–Fuller test and then note that the results extend to a broader class of models.
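A minimal sketch of the model above, simulating an AR(1) series by applying the recursion Y_t − μ = ρ(Y_{t−1} − μ) + e_t directly (the function name, parameter values, and the choice of a normal shock distribution are illustrative assumptions, not part of the original text):

```python
import numpy as np

def simulate_ar1(n, mu, rho, sigma, seed=0):
    """Simulate n observations from Y_t - mu = rho * (Y_{t-1} - mu) + e_t,
    with e_t drawn independently from N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=n)
    y = np.empty(n)
    y[0] = mu + e[0]                       # start at the long-run mean plus one shock
    for t in range(1, n):
        y[t] = mu + rho * (y[t - 1] - mu) + e[t]
    return y

# Example: a stationary series (|rho| < 1) fluctuating around mu = 10
y = simulate_ar1(500, mu=10.0, rho=0.8, sigma=1.0)
```

With |ρ| < 1 the simulated series mean-reverts toward μ; setting ρ = 1 in the same recursion instead produces a random walk, the case the Dickey–Fuller test is designed to detect.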
When written as Y_t = μ(1 − ρ) + ρY_{t−1} + e_t, or more compactly as Y_t = λ + ρY_{t−1} + e_t, with the e_t independent and identically distributed as N(0, σ²), the AR(1) model looks like a regression with errors satisfying the usual assumptions. Indeed, the least squares estimators of the coefficients are asymptotically unbiased and normally distributed under one key condition: the true ρ must satisfy |ρ| < 1. This assumption appears to be violated quite often in practice. Many prominent time series appear to have ρ = 1, in which case Y_t − μ = ρ(Y_{t−1} − μ) + e_t...
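The regression form Y_t = λ + ρY_{t−1} + e_t can be estimated by ordinary least squares, regressing Y_t on an intercept and its own lag. The following sketch (function names and simulated parameter values are illustrative assumptions) recovers λ, ρ, and then μ = λ/(1 − ρ), which is well defined only in the stationary case |ρ| < 1:

```python
import numpy as np

def fit_ar1_ols(y):
    """Least squares fit of Y_t = lambda + rho * Y_{t-1} + e_t.

    Returns (lam_hat, rho_hat, mu_hat), where mu_hat = lam_hat / (1 - rho_hat)
    recovers the long-run mean from lambda = mu * (1 - rho)."""
    x = y[:-1]                                  # Y_{t-1}
    z = y[1:]                                   # Y_t
    X = np.column_stack([np.ones_like(x), x])   # design matrix [1, Y_{t-1}]
    lam_hat, rho_hat = np.linalg.lstsq(X, z, rcond=None)[0]
    mu_hat = lam_hat / (1.0 - rho_hat)
    return lam_hat, rho_hat, mu_hat

# Illustrative data: lambda = 0.5, rho = 0.7, so mu = 0.5 / 0.3 = 5/3
rng = np.random.default_rng(1)
n = 2000
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.5 + 0.7 * y[t - 1] + rng.normal()

lam_hat, rho_hat, mu_hat = fit_ar1_ols(y)
```

When the true ρ equals 1, this same regression can still be run, but the least squares estimator of ρ no longer has the usual asymptotic normal distribution, which is precisely why the Dickey–Fuller test uses its own critical values.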