
Learning Deep Belief Networks from Non-stationary Streams

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2012 (ICANN 2012)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7553)

Abstract

Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has mostly been applied to static datasets. The analysis of non-stationary (e.g., concept drift) streams of data involves specific issues connected with the temporal and changing nature of the data. In this paper, we propose Adaptive Deep Belief Networks, a proof-of-concept method showing how deep learning can be generalized to learn online from changing streams of data. We do so by exploiting the generative properties of the model to incrementally re-train the Deep Belief Network whenever new data are collected. This approach eliminates the need to store past observations and therefore requires only constant memory. Hence, our approach can be valuable for life-long learning from non-stationary data streams.
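To make the mechanism described above concrete, the following is a minimal sketch of the generative-replay idea: instead of storing past observations, pseudo-samples are drawn from the current generative model and mixed with each newly collected batch before re-training. It uses a single Bernoulli RBM trained with CD-1 as a stand-in for a full Deep Belief Network; the class BernoulliRBM, the function adaptive_retrain, and parameters such as replay_size are illustrative assumptions, not the authors' implementation.

# Sketch of incremental re-training via generative replay (assumed names).
import numpy as np

class BernoulliRBM:
    """Single RBM layer trained with CD-1; the building block of a DBN."""

    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def _sample(self, p):
        # Bernoulli sampling from activation probabilities
        return (self.rng.random(p.shape) < p).astype(float)

    def fit_batch(self, v0):
        """One contrastive-divergence (CD-1) update on a batch of binary rows."""
        ph0 = self._sigmoid(v0 @ self.W + self.b_h)
        h0 = self._sample(ph0)
        pv1 = self._sigmoid(h0 @ self.W.T + self.b_v)
        v1 = self._sample(pv1)
        ph1 = self._sigmoid(v1 @ self.W + self.b_h)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

    def generate(self, n_samples, gibbs_steps=50):
        """Draw pseudo-samples from the model by alternating Gibbs sampling."""
        v = self._sample(np.full((n_samples, self.b_v.size), 0.5))
        for _ in range(gibbs_steps):
            h = self._sample(self._sigmoid(v @ self.W + self.b_h))
            v = self._sample(self._sigmoid(h @ self.W.T + self.b_v))
        return v

def adaptive_retrain(model, stream, replay_size=64, epochs=5):
    """Generative replay over a non-stationary stream: mix model-generated
    samples with each new batch, so no past observations are stored and
    memory consumption stays constant."""
    for new_batch in stream:                     # each batch: (n, n_visible) in {0,1}
        replay = model.generate(replay_size)     # stands in for the old data
        data = np.vstack([new_batch, replay])
        for _ in range(epochs):
            model.fit_batch(data)
    return model

As a usage example under the same assumptions, adaptive_retrain(BernoulliRBM(n_visible=784, n_hidden=64), stream) would process a stream of binarized 28x28 image batches while keeping only the current model parameters in memory; the dimensions are illustrative.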




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Calandra, R., Raiko, T., Deisenroth, M.P., Pouzols, F.M. (2012). Learning Deep Belief Networks from Non-stationary Streams. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds) Artificial Neural Networks and Machine Learning – ICANN 2012. ICANN 2012. Lecture Notes in Computer Science, vol 7553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33266-1_47

  • DOI: https://doi.org/10.1007/978-3-642-33266-1_47

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33265-4

  • Online ISBN: 978-3-642-33266-1

  • eBook Packages: Computer Science, Computer Science (R0)
