Unsupervised Slow Subspace-Learning from Stationary Processes

  • Conference paper
Algorithmic Learning Theory (ALT 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4264)

Abstract

We propose a method of unsupervised learning from stationary, vector-valued processes. A low-dimensional subspace is selected on the basis of a criterion which rewards data variance (like PSA) and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove error bounds in terms of the β-mixing coefficients and consistency for absolutely regular processes. Experiments with image recognition demonstrate the algorithm's ability to learn geometrically invariant feature maps.
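The criterion described in the abstract (reward the variance of the data, penalize the variance of the velocity x_{t+1} − x_t) suggests a simple empirical procedure. The sketch below is only illustrative and is not the paper's algorithm: the function name slow_subspace and the trade-off weight penalty are assumptions, and the criterion is taken here to be the trace of the projected matrix C − penalty·V, which is maximized by the top eigenvectors of that symmetric matrix.

```python
import numpy as np

def slow_subspace(X, dim, penalty=1.0):
    """Illustrative sketch only, not the paper's algorithm.

    X       : (T, d) array of consecutive observations of a stationary process
    dim     : dimension of the subspace to select
    penalty : assumed weight trading data variance against velocity variance
    """
    Xc = X - X.mean(axis=0)                 # centre the observations
    C = Xc.T @ Xc / len(Xc)                 # empirical covariance (rewarded term)
    dX = np.diff(X, axis=0)                 # discrete velocity x_{t+1} - x_t
    V = dX.T @ dX / len(dX)                 # velocity covariance (penalised term)
    # Maximising trace(W^T (C - penalty*V) W) over orthonormal W amounts to
    # taking the top `dim` eigenvectors of the symmetric matrix C - penalty*V.
    evals, evecs = np.linalg.eigh(C - penalty * V)
    order = np.argsort(evals)[::-1][:dim]
    return evecs[:, order]                  # columns form an orthonormal basis
```

Projecting new observations onto the returned basis (X @ W) then yields features that vary slowly along the process, in the spirit of slow feature analysis.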




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Maurer, A. (2006). Unsupervised Slow Subspace-Learning from Stationary Processes. In: Balcázar, J.L., Long, P.M., Stephan, F. (eds) Algorithmic Learning Theory. ALT 2006. Lecture Notes in Computer Science, vol 4264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11894841_29

  • DOI: https://doi.org/10.1007/11894841_29

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46649-9

  • Online ISBN: 978-3-540-46650-5

  • eBook Packages: Computer Science, Computer Science (R0)
