Linear State-Space Model with Time-Varying Dynamics

  • Jaakko Luttinen
  • Tapani Raiko
  • Alexander Ilin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8725)


This paper introduces a linear state-space model with time-varying dynamics. The time dependency is obtained by forming the state dynamics matrix as a time-varying linear combination of a set of matrices. The time dependency of the weights in the linear combination is modelled by another linear Gaussian dynamical model, allowing the model to learn how the dynamics of the process change. Previous approaches have used switching models, which have a small set of possible state dynamics matrices and select one of those matrices at each time step, thus jumping between them. Our model instead forms the dynamics as a linear combination, so the changes can be smooth and continuous. The model is motivated by physical processes that are described by linear partial differential equations whose parameters vary in time. An example of such a process is a temperature field whose evolution is driven by a varying wind direction. The posterior inference is performed using a variational Bayesian approximation. Experiments on stochastic advection-diffusion processes and real-world weather processes show that the model with time-varying dynamics can outperform previously introduced approaches.
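The generative structure described above can be sketched in a few lines of NumPy. This is an illustrative simulation of the model family, not the paper's implementation: the dimensions, noise scales, and weight-dynamics matrix below are arbitrary choices. The key step is that the state dynamics matrix at each time step is a linear combination of a fixed set of matrices, with the mixing weights themselves following a linear Gaussian dynamical model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper):
# D latent states, K mixing weights, M observed signals, T time steps
D, K, M, T = 4, 3, 6, 200

# A fixed set of K candidate dynamics matrices B_k
B = rng.normal(scale=0.3, size=(K, D, D))
# Linear Gaussian dynamics for the mixing weights (slow drift)
A = 0.99 * np.eye(K)
# Observation (loading) matrix
C = rng.normal(size=(M, D))

x = np.zeros(D)       # latent state
w = np.ones(K) / K    # mixing weights
Y = np.empty((T, M))

for t in range(T):
    # The weights follow their own linear Gaussian dynamics ...
    w = A @ w + 0.01 * rng.normal(size=K)
    # ... and define the current dynamics matrix as a linear combination
    # sum_k w_k(t) * B_k, so the dynamics change smoothly rather than
    # switching discretely between a few fixed matrices.
    W_t = np.tensordot(w, B, axes=1)          # shape (D, D)
    x = W_t @ x + 0.1 * rng.normal(size=D)    # latent state transition
    Y[t] = C @ x + 0.1 * rng.normal(size=M)   # noisy observation

print(Y.shape)  # (200, 6)
```

A switching model would instead pick a single `B[k]` per time step; here the convex-like mixing through `w` is what makes gradual changes of the dynamics representable.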


Keywords: Posterior Distribution · Neural Information Processing System · Optimal Rotation · Prior Probability Distribution · Switching Dynamic

These keywords were added by machine, not by the authors. The process is experimental and the keywords may be updated as the learning algorithm improves.




  1. Bar-Shalom, Y., Li, X.R., Kirubarajan, T.: Estimation with Applications to Tracking and Navigation. Wiley-Interscience (2001)
  2. Shumway, R.H., Stoffer, D.S.: Time Series Analysis and Its Applications. Springer (2000)
  3. Ghahramani, Z., Roweis, S.T.: Learning nonlinear dynamical systems using an EM algorithm. In: Advances in Neural Information Processing Systems, pp. 431–437 (1999)
  4. Valpola, H., Karhunen, J.: An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation 14(11), 2647–2692 (2002)
  5. Ghahramani, Z., Hinton, G.E.: Variational learning for switching state-space models. Neural Computation 12, 963–996 (1998)
  6. Pavlovic, V., Rehg, J.M., MacCormick, J.: Learning switching linear models of human motion. In: Advances in Neural Information Processing Systems 13, pp. 981–987. MIT Press (2001)
  7. Raiko, T., Ilin, A., Korsakova, N., Oja, E., Valpola, H.: Drifting linear dynamics (abstract). In: International Conference on Artificial Intelligence and Statistics, AISTATS 2010 (2010)
  8. Michalski, V., Memisevic, R., Konda, K.: Modeling sequential data using higher-order relational features and predictive training. arXiv preprint arXiv:1402.2333 (2014)
  9. Beal, M.J.: Variational algorithms for approximate Bayesian inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London (2003)
  10. Luttinen, J.: Fast variational Bayesian linear state-space model. In: Blockeel, H., Kersting, K., Nijssen, S., Železný, F. (eds.) ECML PKDD 2013, Part I. LNCS, vol. 8188, pp. 305–320. Springer, Heidelberg (2013)
  11. Bishop, C.M.: Variational principal components. In: Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN 1999), pp. 509–514 (1999)
  12. Bishop, C.M.: Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, New York (2006)
  13. Beal, M.J., Ghahramani, Z.: The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. Bayesian Statistics 7, 453–464 (2003)
  14. Barber, D., Chiappa, S.: Unified inference for variational Bayesian linear Gaussian state-space models. In: Advances in Neural Information Processing Systems 19. MIT Press (2007)
  15. Liu, C., Rubin, D.B., Wu, Y.N.: Parameter expansion to accelerate EM: the PX-EM algorithm. Biometrika 85, 755–770 (1998)
  16. Qi, Y.A., Jaakkola, T.S.: Parameter expanded variational Bayesian methods. In: Advances in Neural Information Processing Systems 19, pp. 1097–1104. MIT Press (2007)
  17. Luttinen, J., Ilin, A.: Transformations in variational Bayesian factor analysis to speed up learning. Neurocomputing 73, 1093–1102 (2010)
  18. NCDC: Global surface summary of day product (online; accessed April 16, 2014)
  19. Luttinen, J., Ilin, A.: Variational Gaussian-process factor analysis for modeling spatio-temporal data. In: Advances in Neural Information Processing Systems 22. MIT Press (2009)
  20. Luttinen, J., Ilin, A., Karhunen, J.: Bayesian robust PCA of incomplete data. Neural Processing Letters 36(2), 189–202 (2012)
  21. Luttinen, J.: BayesPy – Bayesian Python

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Jaakko Luttinen (1)
  • Tapani Raiko (1)
  • Alexander Ilin (1)

  1. Aalto University, Finland
