A Learning Technique for Deep Belief Neural Networks

Conference paper
Neural Networks and Artificial Intelligence (ICNNAI 2014)

Abstract

A deep belief neural network is a many-layered perceptron whose deep architecture allows it to overcome some limitations of the conventional multilayer perceptron. Purely supervised training is not effective for deep belief neural networks, and many studies have therefore proposed a two-stage learning procedure for deep neural networks. The first stage is unsupervised, layer-by-layer learning intended to initialize the parameters (pre-training of the deep belief neural network). The second stage is supervised training that fine-tunes the whole network. In this work we propose a training approach for the restricted Boltzmann machine based on minimization of the reconstruction square error. The main contribution of this paper is a new interpretation of the training rules for the restricted Boltzmann machine. It is shown that the traditional approach to restricted Boltzmann machine training is a particular case of the proposed technique. We demonstrate the efficiency of the proposed approach using a deep nonlinear auto-encoder.
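
The abstract does not give the explicit update rule, so the sketch below only shows the standard contrastive-divergence (CD-1) update for a restricted Boltzmann machine, which the paper treats as a particular case of its reconstruction-error-based rule, while tracking the reconstruction square error that the proposed rule minimizes directly. It is a minimal illustration, not the authors' method; the layer sizes, learning rate, and toy data are hypothetical.

```python
# Illustrative sketch: CD-1 update for an RBM, with reconstruction square error
# monitored as the quantity the paper's proposed rule minimizes directly.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        # Small random weights and zero biases (hypothetical initialization).
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a mini-batch v0."""
        h0 = sigmoid(v0 @ self.W + self.b_h)                   # positive phase
        h0_sample = (rng.random(h0.shape) < h0).astype(float)  # sample hidden units
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)          # reconstruction
        h1 = sigmoid(v1 @ self.W + self.b_h)                   # negative phase
        n = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)                         # reconstruction square error

# Hypothetical usage: the first layer of greedy layer-wise pre-training on toy data.
X = rng.random((256, 64))
rbm = RBM(n_visible=64, n_hidden=32)
for epoch in range(10):
    err = rbm.cd1_step(X)
```

In the two-stage procedure described above, each layer would be pre-trained in this unsupervised fashion on the hidden activations of the layer below, after which the stacked network is fine-tuned with supervised backpropagation.
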

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Golovko, V., Kroshchanka, A., Rubanau, U., Jankowski, S. (2014). A Learning Technique for Deep Belief Neural Networks. In: Golovko, V., Imada, A. (eds) Neural Networks and Artificial Intelligence. ICNNAI 2014. Communications in Computer and Information Science, vol 440. Springer, Cham. https://doi.org/10.1007/978-3-319-08201-1_13

  • DOI: https://doi.org/10.1007/978-3-319-08201-1_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08200-4

  • Online ISBN: 978-3-319-08201-1

  • eBook Packages: Computer Science, Computer Science (R0)
