Abstract
Theoretical results suggest that learning the complicated functions needed to represent high-level features in computer vision may require deep architectures. The popular choice among scientists and engineers for modeling deep architectures is the feed-forward Deep Artificial Neural Network. One of the latest research areas in this field is the evolution of Artificial Neural Networks: NeuroEvolution. This paper explores the effect of evolving a Node Transfer Function and its parameters, alongside the evolution of connection weights and architecture, in Deep Neural Networks for Pattern Recognition problems. The results strongly indicate that evolving Node Transfer Functions shortens the training time of Deep Artificial Neural Networks trained with NeuroEvolution.
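The idea described in the abstract — evolving each node's transfer function and its shape parameter together with the connection weights — can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' method: the function set (`TRANSFER`), the single shape parameter `a`, the network shape, and the truncation-selection loop are all illustrative assumptions.

```python
import math
import random

# Hypothetical candidate transfer functions; the shape parameter 'a'
# is evolved alongside the connection weights (illustrative choice only).
TRANSFER = {
    "sigmoid": lambda x, a: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, a * x)))),
    "tanh":    lambda x, a: math.tanh(a * x),
    "relu":    lambda x, a: max(0.0, a * x),
}

def forward(ind, inputs):
    """Tiny single-hidden-layer net; every hidden node uses the evolved transfer."""
    f = TRANSFER[ind["tf"]]
    hidden = [f(sum(w * x for w, x in zip(ws, inputs)), ind["a"])
              for ws in ind["w_in"]]
    return sum(w * h for w, h in zip(ind["w_out"], hidden))

def mutate(ind, sigma=0.1):
    """Gaussian perturbation of weights and of the transfer-function parameter."""
    child = {
        "w_in":  [[w + random.gauss(0, sigma) for w in ws] for ws in ind["w_in"]],
        "w_out": [w + random.gauss(0, sigma) for w in ind["w_out"]],
        "a":     ind["a"] + random.gauss(0, sigma),
        "tf":    ind["tf"],
    }
    if random.random() < 0.1:  # occasionally swap the transfer function itself
        child["tf"] = random.choice(list(TRANSFER))
    return child

def fitness(ind, data):
    # Negative squared error on a toy task (lower error -> higher fitness).
    return -sum((forward(ind, x) - y) ** 2 for x, y in data)

def evolve(data, pop_size=20, generations=50, seed=0):
    random.seed(seed)
    def fresh():
        return {"w_in": [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)],
                "w_out": [random.uniform(-1, 1) for _ in range(3)],
                "a": 1.0, "tf": "sigmoid"}
    pop = [fresh() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, data), reverse=True)
        pop = pop[:pop_size // 2]  # truncation selection; survivors act as elites
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=lambda ind: fitness(ind, data))

# Toy regression task standing in for a pattern-recognition benchmark.
data = [((0, 0), 0.0), ((0, 1), 0.0), ((1, 0), 0.0), ((1, 1), 1.0)]
best = evolve(data)
```

Because survivors carry over between generations, the best fitness in the population is non-decreasing, mirroring the elitist selection schemes common in neuroevolution; the paper's point is that letting the search also tune `tf` and `a` tends to reach a given error faster than evolving weights alone.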
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Vodianyk, D., Rokita, P. (2017). Evolving Node Transfer Functions in Deep Neural Networks for Pattern Recognition. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2017. Lecture Notes in Computer Science(), vol 10245. Springer, Cham. https://doi.org/10.1007/978-3-319-59063-9_19
Print ISBN: 978-3-319-59062-2
Online ISBN: 978-3-319-59063-9