Abstract
This paper presents the development of a parallel batch pattern back propagation training algorithm for a multilayer perceptron with two hidden layers, together with a study of its parallelization efficiency on a many-core high-performance computing system. The multilayer perceptron model and the batch pattern training algorithm are described theoretically, and an algorithmic description of the parallel batch pattern training method is given. Our results show high parallelization efficiency of the developed training algorithm on a large-scale data classification task, run on a many-core parallel computing system with 48 CPUs using MPI technology.
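The core idea the abstract describes can be sketched in a few lines: each worker computes the back-propagation gradient over its own slice of the training batch, the partial gradients are summed across workers (the role MPI plays in the paper's implementation), and every worker then applies the same weight update. The sketch below simulates this in plain NumPy with a two-hidden-layer perceptron; the network sizes, activation functions, learning rate, and toy regression task are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

# Batch-pattern parallel training sketch (an assumption of this summary, not
# the authors' exact code): P workers each backpropagate over a slice of the
# batch; partial gradients are summed -- mimicking an MPI allreduce -- and the
# identical update is applied everywhere.

rng = np.random.default_rng(0)

def init_mlp(n_in, h1, h2, n_out):
    # Multilayer perceptron with two hidden layers, as in the model studied.
    return [rng.standard_normal((n_in, h1)) * 0.1,
            rng.standard_normal((h1, h2)) * 0.1,
            rng.standard_normal((h2, n_out)) * 0.1]

def forward(W, X):
    a1 = np.tanh(X @ W[0])
    a2 = np.tanh(a1 @ W[1])
    return a1, a2, a2 @ W[2]      # linear output layer

def grads(W, X, T):
    # Backpropagation of mean-squared error over this worker's patterns.
    a1, a2, y = forward(W, X)
    d3 = (y - T) / len(X)
    d2 = (d3 @ W[2].T) * (1 - a2 ** 2)
    d1 = (d2 @ W[1].T) * (1 - a1 ** 2)
    return [X.T @ d1, a1.T @ d2, a2.T @ d3]

# Toy task: regress y = sin(sum of inputs).
X = rng.uniform(-1, 1, (256, 4))
T = np.sin(X.sum(axis=1, keepdims=True))

W = init_mlp(4, 16, 16, 1)
loss0 = float(np.mean((forward(W, X)[2] - T) ** 2))  # loss before training

P = 4                                        # number of simulated workers
slices = np.array_split(np.arange(len(X)), P)

for epoch in range(200):
    partial = [grads(W, X[s], T[s]) for s in slices]            # per-worker
    total = [sum(g[k] for g in partial) / P for k in range(3)]  # "allreduce"
    W = [w - 0.2 * g for w, g in zip(W, total)]                 # same update

loss = float(np.mean((forward(W, X)[2] - T) ** 2))
print(f"MSE before: {loss0:.4f}, after: {loss:.4f}")
```

In the real MPI version, the `total = ...` line corresponds to `MPI_Allreduce` over the partial gradient arrays; the rest of the loop is unchanged, which is why the scheme scales well when per-worker batch slices are large relative to the communicated gradient.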
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Turchenko, V., Sachenko, A. (2014). Efficiency of Parallel Large-Scale Two-Layered MLP Training on Many-Core System. In: Golovko, V., Imada, A. (eds) Neural Networks and Artificial Intelligence. ICNNAI 2014. Communications in Computer and Information Science, vol 440. Springer, Cham. https://doi.org/10.1007/978-3-319-08201-1_19
Print ISBN: 978-3-319-08200-4
Online ISBN: 978-3-319-08201-1