
Evolving Node Transfer Functions in Deep Neural Networks for Pattern Recognition

  • Conference paper

Artificial Intelligence and Soft Computing (ICAISC 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10245)

Abstract

Theoretical results suggest that learning the complicated functions that represent high-level features in computer vision may require deep architectures. The popular choice among scientists and engineers for modeling deep architectures is the feed-forward Deep Artificial Neural Network. One of the latest research areas in this field is the evolution of Artificial Neural Networks: NeuroEvolution. This paper explores the effect of evolving a Node Transfer Function and its parameters, alongside the evolution of connection weights and architecture, in Deep Neural Networks for Pattern Recognition problems. The results strongly indicate that evolving Node Transfer Functions shortens the time needed to train Deep Artificial Neural Networks using NeuroEvolution.
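The idea the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, under assumed representations, what it means for a node's genome to carry a transfer-function choice and its parameter alongside the connection weights, so that mutation can alter all three. The function set, the slope parameter, and the mutation rates here are illustrative assumptions.

```python
import math
import random

# Assumed candidate transfer functions, each parameterised by a slope `a`.
TRANSFER_FUNCTIONS = {
    "sigmoid": lambda x, a: 1.0 / (1.0 + math.exp(-a * x)),
    "tanh":    lambda x, a: math.tanh(a * x),
    "relu":    lambda x, a: max(0.0, a * x),
}

def random_node(n_inputs):
    """A node genome: transfer-function name, slope parameter, and weights."""
    return {
        "tf": random.choice(list(TRANSFER_FUNCTIONS)),
        "slope": random.uniform(0.5, 2.0),
        "weights": [random.uniform(-1.0, 1.0) for _ in range(n_inputs)],
    }

def activate(node, inputs):
    """Weighted sum of inputs passed through the node's own transfer function."""
    s = sum(w * x for w, x in zip(node["weights"], inputs))
    return TRANSFER_FUNCTIONS[node["tf"]](s, node["slope"])

def mutate(node, p_tf=0.1, p_slope=0.3, sigma=0.1):
    """Mutation perturbs weights and slope, and occasionally swaps the transfer function."""
    child = {"tf": node["tf"], "slope": node["slope"],
             "weights": [w + random.gauss(0.0, sigma) for w in node["weights"]]}
    if random.random() < p_tf:
        child["tf"] = random.choice(list(TRANSFER_FUNCTIONS))
    if random.random() < p_slope:
        child["slope"] += random.gauss(0.0, sigma)
    return child

random.seed(0)
node = random_node(3)
out = activate(node, [0.2, -0.5, 0.9])
child = mutate(node)
```

Because the transfer function and its parameter sit in the genome rather than being fixed design choices, the evolutionary search can tune them per node, which is the mechanism the paper credits with shortening training time.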


Author information

Correspondence to Dmytro Vodianyk.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Vodianyk, D., Rokita, P. (2017). Evolving Node Transfer Functions in Deep Neural Networks for Pattern Recognition. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2017. Lecture Notes in Computer Science, vol 10245. Springer, Cham. https://doi.org/10.1007/978-3-319-59063-9_19


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59062-2

  • Online ISBN: 978-3-319-59063-9

  • eBook Packages: Computer Science (R0)
