Sparse Spatio-temporal Gaussian Processes with General Likelihoods

  • Jouni Hartikainen
  • Jaakko Riihimäki
  • Simo Särkkä
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6791)


In this paper, we consider learning of spatio-temporal processes by formulating a Gaussian process model as the solution to an evolution-type stochastic partial differential equation. Our approach is based on converting the infinite-dimensional stochastic differential equation into a finite-dimensional linear time-invariant (LTI) stochastic differential equation (SDE) by discretizing the process spatially. The LTI SDE is then time-discretized analytically, resulting in a state space model with linear-Gaussian dynamics. We use expectation propagation to perform approximate inference on non-Gaussian data, and show how to incorporate sparse approximations to further reduce the computational complexity. We briefly illustrate the proposed methodology with a simulation study and with a real-world modelling problem.
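The pipeline sketched in the abstract (spatial discretization, analytic time discretization of the resulting LTI SDE, then linear-Gaussian filtering) can be illustrated with a minimal example. The sketch below assumes a separable kernel: an exponential (Matérn-1/2, i.e. Ornstein–Uhlenbeck) covariance in time with decay rate `lam`, and a squared-exponential covariance over a fixed spatial grid. All variable names and the synthetic data are illustrative, not from the paper; the exact discretization identities A = exp(F Δt) and Q = P∞ − A P∞ Aᵀ hold for any stable LTI SDE with stationary covariance P∞.

```python
import numpy as np
from scipy.linalg import expm

def exact_discretization(F, Pinf, dt):
    """Discretize dx = F x dt + dW analytically:
    A = expm(F dt), Q = Pinf - A Pinf A^T (stationary-state identity)."""
    A = expm(F * dt)
    Q = Pinf - A @ Pinf @ A.T
    return A, Q

# Spatial grid of n points; OU temporal dynamics with decay rate lam.
n, lam, dt = 5, 1.0, 0.1
xs = np.linspace(0.0, 1.0, n)
F = -lam * np.eye(n)                                 # OU drift per grid point
ell = 0.3
K = np.exp(-0.5 * (xs[:, None] - xs[None, :])**2 / ell**2)  # spatial covariance
Pinf = K                                             # stationary covariance

A, Q = exact_discretization(F, Pinf, dt)

# Kalman filter with Gaussian observations y_t = H x_t + r_t, r_t ~ N(0, R).
# (The paper replaces this Gaussian update with an EP update for general
# likelihoods; the predict step is unchanged.)
H = np.eye(n)
R = 0.1 * np.eye(n)
rng = np.random.default_rng(0)
T = 50
ys = rng.normal(size=(T, n))                         # stand-in data

m, P = np.zeros(n), Pinf.copy()
for y in ys:
    m, P = A @ m, A @ P @ A.T + Q                    # predict
    S = H @ P @ H.T + R                              # innovation covariance
    Kg = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    m = m + Kg @ (y - H @ m)                         # update mean
    P = P - Kg @ S @ Kg.T                            # update covariance
```

Because the state dimension equals the number of spatial grid points, each filter step costs O(n³) but the total cost is linear in the number of time steps, which is the computational advantage over a naive GP treatment of the full spatio-temporal covariance.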


Keywords: Gaussian processes, spatio-temporal data, expectation propagation, sparse approximations





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Jouni Hartikainen (1)
  • Jaakko Riihimäki (1)
  • Simo Särkkä (1)

  1. Dept. of Biomedical Engineering and Computational Science, Aalto University, Finland
