Stationary Subspace Analysis

  • Paul von Bünau
  • Frank C. Meinecke
  • Klaus-Robert Müller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5441)


Non-stationarities are a ubiquitous phenomenon in time-series data, yet they pose a challenge to standard methodology: classification models and ICA components, for example, cannot be estimated reliably under distribution changes because the classical assumption of a stationary data-generating process is violated. Conversely, understanding the nature of observed non-stationary behaviour often lies at the heart of a scientific question. To this end, we propose a novel unsupervised technique: Stationary Subspace Analysis (SSA). SSA decomposes a multivariate time series into a stationary and a non-stationary subspace. This factorization is a universal tool for furthering the understanding of non-stationary data. Moreover, we can robustify other methods by restricting them to the stationary subspace. We demonstrate the performance of our novel concept in simulations and present a real-world application from Brain-Computer Interfacing.
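To make the decomposition concrete, the following is a minimal illustrative sketch of the SSA idea, not the authors' published algorithm: it splits the time series into epochs and searches (here by simple random search over orthogonal projections) for a subspace in which the epoch-wise means and covariances vary least. The function `ssa_sketch` and all of its parameters are hypothetical names introduced for this example.

```python
import numpy as np

def ssa_sketch(X, n_stationary, n_epochs=8, n_iter=300, seed=0):
    """Toy sketch of Stationary Subspace Analysis.

    X            : (T, D) multivariate time series, one sample per row.
    n_stationary : assumed dimension of the stationary subspace.

    Searches random orthogonal projections and keeps the one whose
    epoch-wise means and covariances vary least (a stand-in for the
    proper optimization over the rotation group used in SSA).
    """
    rng = np.random.default_rng(seed)
    T, D = X.shape
    epochs = np.array_split(X, n_epochs)

    def nonstationarity(B):
        # Variability of projected epoch means and covariances
        # around their averages; zero for a perfectly stationary
        # projection (up to sampling noise).
        means = np.array([(B @ e.T).mean(axis=1) for e in epochs])
        covs = np.array([np.cov(B @ e.T) for e in epochs])
        return (((means - means.mean(axis=0)) ** 2).sum()
                + ((covs - covs.mean(axis=0)) ** 2).sum())

    best_B, best_score = None, np.inf
    for _ in range(n_iter):
        # Random orthonormal basis; its first rows span a candidate
        # stationary subspace.
        Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
        B = Q[:, :n_stationary].T
        score = nonstationarity(B)
        if score < best_score:
            best_B, best_score = B, score
    return best_B, best_score
```

On synthetic data mixing one stationary source with one source whose variance changes over time, the recovered projection approximately cancels the non-stationary direction, so the projected signal tracks the stationary source.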


Keywords: Non-Stationarities · Source Separation (BSS) · Dimensionality Reduction · Covariate Shift · Brain-Computer Interface (BCI)




Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Paul von Bünau (1)
  • Frank C. Meinecke (1)
  • Klaus-Robert Müller (1)
  1. Machine Learning Group, CS Dept., TU Berlin, Germany
