Learning Temporal Coherent Features through Life-Time Sparsity

  • Conference paper
Neural Information Processing (ICONIP 2012)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7663)

Abstract

In this paper, we consider the problem of unsupervised feature learning for spatio-temporal data streams, specifically video data. We focus on the problem of learning features invariant to image transformations and regard a video stream as a set of pairwise similar images. Many existing methods dealing with the problem of invariant feature extraction either try to build a model of the transformations present in the data or achieve invariance by adding a penalty to a reconstruction loss term. In contrast to this, we propose to learn invariant features by directly optimizing the temporal coherence of a hidden, and possibly deep, representation. We find that our approach is both fast and capable of learning deep feature representations invariant to complex image transformations. We furthermore show that features learned using our approach can be used to improve object recognition performance in still images (Caltech-101, STL-10).
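The abstract describes the objective only at a high level. As a rough, hedged illustration of the two ingredients named in the title, the sketch below combines a temporal-coherence term over the hidden codes of consecutive frames with a life-time (per-unit, batch-wise) sparsity penalty. The encoder, penalty forms, weighting, and all function names are illustrative assumptions for this page, not the authors' exact formulation.

```python
# Hedged sketch (not the paper's exact objective): encode consecutive video
# frames with a single-layer encoder and penalize (i) the squared distance
# between hidden codes of temporally adjacent frames ("temporal coherence")
# and (ii) the deviation of each hidden unit's mean activation over the batch
# from a small target rate ("life-time sparsity"). All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W, b):
    """Sigmoid hidden representation h = sigmoid(X W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def temporal_coherence_loss(H_t, H_tp1):
    """Mean squared distance between codes of consecutive frames."""
    return np.mean(np.sum((H_t - H_tp1) ** 2, axis=1))

def lifetime_sparsity_penalty(H, target=0.05):
    """Penalize each unit's mean activation over the batch (its 'life-time')
    for deviating from a small target rate."""
    mean_act = H.mean(axis=0)
    return np.sum((mean_act - target) ** 2)

# Toy data: pairs of consecutive 8x8 gray-scale frames, flattened; the second
# frame is a slightly perturbed version of the first, standing in for a small
# image transformation between video frames.
frames_t   = rng.normal(size=(32, 64))
frames_tp1 = frames_t + 0.1 * rng.normal(size=(32, 64))

W = 0.01 * rng.normal(size=(64, 100))
b = np.zeros(100)

H_t, H_tp1 = encode(frames_t, W, b), encode(frames_tp1, W, b)
loss = temporal_coherence_loss(H_t, H_tp1) \
     + lifetime_sparsity_penalty(np.vstack([H_t, H_tp1]))
print(f"toy objective value: {loss:.4f}")
```

In a full implementation this objective would be minimized with respect to the encoder parameters (the paper reports using RPROP-style gradient training), and the resulting hidden representation used as a feature extractor for still-image classification.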


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Springenberg, J.T., Riedmiller, M. (2012). Learning Temporal Coherent Features through Life-Time Sparsity. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds) Neural Information Processing. ICONIP 2012. Lecture Notes in Computer Science, vol 7663. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34475-6_42

  • DOI: https://doi.org/10.1007/978-3-642-34475-6_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34474-9

  • Online ISBN: 978-3-642-34475-6

  • eBook Packages: Computer Science, Computer Science (R0)
