KI - Künstliche Intelligenz

Volume 26, Issue 4, pp 341–348

Slow Feature Analysis: Perspectives for Technical Applications of a Versatile Learning Algorithm

  • Alberto N. Escalante-B.
  • Laurenz Wiskott


Slow Feature Analysis (SFA) is an unsupervised learning algorithm based on the slowness principle; it was originally developed to learn invariances in a model of the primate visual system. Although conceived for computational neuroscience, SFA has turned out to be a versatile algorithm for technical applications as well, since it can be used for feature extraction, dimensionality reduction, and invariance learning. With minor adaptations, SFA can also be applied to supervised learning problems such as classification and regression. In this work, we review several illustrative examples of possible applications, including the estimation of driving forces, nonlinear blind source separation, traffic sign recognition, and face processing.


Keywords: Slow feature analysis · Hierarchical networks · Nonlinear feature extraction · Dimensionality reduction · High-dimensional data · Driving forces · Blind source separation · Object recognition · Face processing
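The core linear algorithm behind the applications surveyed above can be summarized in a few steps: center and whiten the input time series, take finite-difference temporal derivatives, and find the directions along which the derivative variance is minimal. The following is a minimal sketch under these assumptions, not the authors' implementation; the helper name `linear_sfa` is hypothetical, and real applications would typically add a nonlinear expansion of the input before this linear step.

```python
import numpy as np

def linear_sfa(X, n_features=2):
    """Minimal linear SFA sketch (hypothetical helper, for illustration).

    X: array of shape (T, D), a multivariate time series.
    Returns the n_features slowest output signals: zero mean,
    unit variance, and minimal mean squared temporal derivative.
    """
    # 1. Center the data.
    Xc = X - X.mean(axis=0)
    # 2. Whiten: rotate to decorrelate, then scale to unit variance.
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec / np.sqrt(eigval)       # whitening matrix, shape (D, D)
    Z = Xc @ W                         # whitened signals, identity covariance
    # 3. Approximate temporal derivatives by finite differences.
    Zdot = np.diff(Z, axis=0)
    # 4. The slowest features are the eigenvectors of the derivative
    #    covariance with the smallest eigenvalues (eigh sorts ascending).
    dcov = np.cov(Zdot, rowvar=False)
    dval, dvec = np.linalg.eigh(dcov)
    return Z @ dvec[:, :n_features]

# Usage: recover a slowly varying source from a linear mixture
# with a much faster signal.
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)
fast = np.sin(37 * t)
mix = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
y = linear_sfa(mix, n_features=1)
print(abs(np.corrcoef(y[:, 0], slow)[0, 1]))  # close to 1 if recovered
```

Because whitening leaves a free rotation, minimizing the derivative variance in whitened space picks out the mixture direction dominated by the slow source, up to an arbitrary sign.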



Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  1. Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
