Long-Term Prediction of Time Series Using State-Space Models

  • Elia Liitiäinen
  • Amaury Lendasse
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4132)


State-space models offer a powerful modelling tool for time series prediction. However, as most algorithms are not optimized for long-term prediction, good results over long horizons can be hard to achieve. In this paper, we investigate Gaussian linear regression filters for parameter estimation in state-space models and propose new long-term prediction strategies. Experiments using the EM algorithm to train nonlinear state-space models show that significant improvements are possible at no additional computational cost.
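The basic long-term strategy the abstract alludes to can be illustrated in the linear Gaussian special case. The sketch below is not the authors' algorithm (they treat nonlinear models trained with EM); it simply iterates the Kalman predict step over the prediction horizon, which is the standard "iterative" multi-step strategy and shows how predictive uncertainty accumulates with the horizon. All matrix names here are illustrative assumptions.

```python
import numpy as np

def predict_h_steps(m, P, A, C, Q, R, h):
    """Propagate a filtered state mean m and covariance P forward h steps
    in a linear Gaussian state-space model x' = A x + w, y = C x + v.

    Returns the predicted observation means and variances for each step."""
    y_means, y_vars = [], []
    for _ in range(h):
        m = A @ m                        # state mean propagation
        P = A @ P @ A.T + Q              # state covariance grows with process noise
        y_means.append(C @ m)            # predicted observation mean
        y_vars.append(C @ P @ C.T + R)   # predictive variance of observation
    return np.array(y_means), np.array(y_vars)

# Toy AR(1)-like model: scalar state with slight decay, observed with noise.
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.05]])
m0 = np.array([1.0]);  P0 = np.array([[0.2]])

means, variances = predict_h_steps(m0, P0, A, C, Q, R, h=5)
```

Because no new observations arrive during the horizon, the predictive variance increases monotonically toward the model's stationary uncertainty, which is why plain iterated prediction degrades over long horizons and motivates dedicated long-term strategies.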


Keywords: Blind Source Separation, Time Series Prediction, Prediction Horizon, Prediction Step, Validation Error





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Elia Liitiäinen (1)
  • Amaury Lendasse (1)

  1. Neural Networks Research Centre, Helsinki University of Technology, Finland
