
Slow Feature Analysis: Perspectives for Technical Applications of a Versatile Learning Algorithm

  • Fachbeitrag (Technical Article)
  • Published in: KI - Künstliche Intelligenz

Abstract

Slow Feature Analysis (SFA) is an unsupervised learning algorithm based on the slowness principle and was originally developed to learn invariances in a model of the primate visual system. Although developed for computational neuroscience, SFA has turned out to be a versatile algorithm for technical applications as well, since it can be used for feature extraction, dimensionality reduction, and invariance learning. With minor adaptations, SFA can also be applied to supervised learning problems such as classification and regression. In this work, we review several illustrative examples of possible applications, including the estimation of driving forces, nonlinear blind source separation, traffic sign recognition, and face processing.
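In the linear case, the optimization behind SFA reduces to a closed-form eigenvalue problem: whiten the input so the outputs have unit variance and are decorrelated, then find the directions along which the temporal derivative has minimal variance. The following is a minimal sketch of that idea, not the authors' implementation; the function name and signature are illustrative.

```python
import numpy as np

def linear_sfa(x, n_features=1):
    """Minimal linear SFA sketch.

    x          : (T, d) array, a multivariate time series
    n_features : number of slow features to extract
    Returns (y, W) where y = centered(x) @ W are the slowest outputs,
    ordered from slowest to fastest.
    """
    # Center the signal (zero-mean constraint)
    x = x - x.mean(axis=0)
    # Whiten: after this, outputs have unit variance and are
    # decorrelated, which enforces the SFA constraints.
    # (Assumes the covariance is full rank; degenerate inputs would
    # need a rank-reducing PCA step first.)
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    w_white = eigvec / np.sqrt(eigval)
    z = x @ w_white
    # Approximate the temporal derivative by finite differences and
    # find the directions of minimal derivative variance (= slowest).
    z_dot = np.diff(z, axis=0)
    dval, dvec = np.linalg.eigh(np.cov(z_dot, rowvar=False))
    # eigh sorts eigenvalues in ascending order, so the first
    # columns are the slowest directions.
    w = w_white @ dvec[:, :n_features]
    return z @ dvec[:, :n_features], w
```

Applied to a linear mixture of a slow and a fast sinusoid, the first output recovers the slow source up to sign, which is the standard linear-SFA demonstration. Nonlinear SFA is obtained by first expanding the input (e.g. with monomials up to degree two) and then running the same linear procedure in the expanded space.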



Author information


Corresponding author

Correspondence to Alberto N. Escalante-B.


About this article

Cite this article

Escalante-B., A.N., Wiskott, L. Slow Feature Analysis: Perspectives for Technical Applications of a Versatile Learning Algorithm. Künstl Intell 26, 341–348 (2012). https://doi.org/10.1007/s13218-012-0190-7
