Multistep-Ahead Time Series Prediction

  • Haibin Cheng
  • Pang-Ning Tan
  • Jing Gao
  • Jerry Scripps
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3918)


Multistep-ahead prediction is the task of predicting a sequence of future values in a time series. A typical approach, known as multi-stage prediction, is to apply a one-step predictive model repeatedly, feeding the predicted value at the current time step back as input for predicting the next one. This paper examines two alternative approaches known as independent value prediction and parameter prediction. The first approach builds a separate model for each prediction step using only the values observed in the past. The second approach fits a parametric function to the time series and builds models to predict the parameters of that function. We perform a comparative study of the three approaches using multiple linear regression, recurrent neural networks, and a hybrid of hidden Markov models and multiple linear regression. The advantages and disadvantages of each approach are analyzed in terms of error accumulation, smoothness of prediction, and learning difficulty.
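The contrast between the first two strategies can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses ordinary least squares as the base learner, and all function names and the lag/horizon parameters are illustrative. The multi-stage forecaster fits one one-step model and iterates it, feeding each prediction back in as input (the source of error accumulation), while the independent forecaster fits a separate model per horizon step from observed values only.

```python
import numpy as np

def make_windows(series, lag, horizon):
    """Build (X, Y): each row of X holds `lag` past values,
    the matching row of Y holds the next `horizon` values."""
    X, Y = [], []
    for t in range(len(series) - lag - horizon + 1):
        X.append(series[t:t + lag])
        Y.append(series[t + lag:t + lag + horizon])
    return np.array(X), np.array(Y)

def fit_linear(X, y):
    """Ordinary least squares with an intercept term."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, x):
    return np.hstack([x, 1.0]) @ w

def multistage_forecast(series, lag, horizon):
    """Multi-stage prediction: one one-step model applied
    step-by-step; each prediction re-enters the input window."""
    X, Y = make_windows(series, lag, 1)
    w = fit_linear(X, Y[:, 0])
    window = list(series[-lag:])
    preds = []
    for _ in range(horizon):
        y = predict_linear(w, np.array(window))
        preds.append(y)
        window = window[1:] + [y]  # predicted value fed back as input
    return preds

def independent_forecast(series, lag, horizon):
    """Independent value prediction: a separate model for each
    step h, mapping observed values directly to x_{t+h}."""
    X, Y = make_windows(series, lag, horizon)
    x_last = np.array(series[-lag:])
    return [predict_linear(fit_linear(X, Y[:, h]), x_last)
            for h in range(horizon)]
```

On noisy data the two differ: multi-stage errors compound as predictions are recycled, whereas the independent models each face a harder (longer-range) but error-isolated learning problem. Parameter prediction, the third approach, would instead fit a parametric curve to each window and learn to predict the curve's coefficients.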


Keywords: Time Series · Mean Square Error · Multiple Linear Regression · Hidden Markov Model · Recurrent Neural Network





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Haibin Cheng
  • Pang-Ning Tan
  • Jing Gao
  • Jerry Scripps

All authors: Department of Computer Science and Engineering, Michigan State University, USA
