Efficiency of Parallel Large-Scale Two-Layered MLP Training on Many-Core System

  • Conference paper
Neural Networks and Artificial Intelligence (ICNNAI 2014)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 440)

Abstract

This paper presents a parallel batch pattern back-propagation training algorithm for a multilayer perceptron with two hidden layers and studies its parallelization efficiency on a many-core high-performance computing system. The multilayer perceptron model and the batch pattern training algorithm are described theoretically, and an algorithmic description of the parallel batch pattern training method is given. The results show high parallelization efficiency of the developed training algorithm on a large-scale data classification task executed on a many-core parallel computing system with 48 CPUs using MPI.
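The batch pattern scheme summarized above parallelizes over training patterns: each MPI process accumulates weight-gradient contributions for its own slice of the training set, a single collective reduction per epoch sums the partial gradients, and every process then applies the same batch weight update, so the replicated weights never diverge. The following C/MPI sketch illustrates that communication pattern under stated assumptions; the toy model size, pattern count, learning rate, and the dummy pattern_gradient() body are placeholders, not the authors' code.

```c
/* Minimal sketch of the batch pattern parallelization scheme described
 * in the abstract, assuming plain gradient descent on a toy model; this
 * is NOT the authors' implementation. The real paper trains a
 * two-hidden-layer MLP, whose forward and backward passes are elided
 * here behind a dummy pattern_gradient(). N_W, N_PAT, LR, and the
 * epoch count are illustrative assumptions. Build with: mpicc sketch.c */
#include <mpi.h>
#include <stdio.h>

#define N_W   8       /* number of trainable weights (toy size)      */
#define N_PAT 1024    /* total number of training patterns (assumed) */
#define LR    0.01    /* learning rate (assumed)                     */

/* Placeholder for the per-pattern backward pass: accumulate this
 * pattern's gradient contribution into grad[]. A real version would
 * run the MLP forward pass and back-propagate the output error. */
static void pattern_gradient(int p, const double *w, double *grad)
{
    for (int i = 0; i < N_W; i++)
        grad[i] += 1e-4 * (w[i] - (double)(p % 3)); /* dummy term */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double w[N_W];                  /* every rank holds identical weights */
    for (int i = 0; i < N_W; i++)
        w[i] = 0.5;

    /* Static block partition of the pattern set across processes;
     * the last rank absorbs the remainder. */
    int per   = N_PAT / size;
    int first = rank * per;
    int last  = (rank == size - 1) ? N_PAT : first + per;

    for (int epoch = 0; epoch < 100; epoch++) {
        double local[N_W] = {0.0}, total[N_W];

        /* Each process accumulates gradients over its own patterns only. */
        for (int p = first; p < last; p++)
            pattern_gradient(p, w, local);

        /* One collective per epoch: sum the partial gradients on all ranks. */
        MPI_Allreduce(local, total, N_W, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        /* Identical batch update on every rank keeps the weights in
         * sync without any explicit weight broadcast. */
        for (int i = 0; i < N_W; i++)
            w[i] -= LR * total[i];
    }

    if (rank == 0)
        printf("training finished: w[0] = %f\n", w[0]);

    MPI_Finalize();
    return 0;
}
```

In this scheme the per-epoch communication volume depends only on the number of weights, not on the number of patterns, which is why batch pattern parallelization scales well to large training sets.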





Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Turchenko, V., Sachenko, A. (2014). Efficiency of Parallel Large-Scale Two-Layered MLP Training on Many-Core System. In: Golovko, V., Imada, A. (eds) Neural Networks and Artificial Intelligence. ICNNAI 2014. Communications in Computer and Information Science, vol 440. Springer, Cham. https://doi.org/10.1007/978-3-319-08201-1_19

  • DOI: https://doi.org/10.1007/978-3-319-08201-1_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08200-4

  • Online ISBN: 978-3-319-08201-1

  • eBook Packages: Computer Science (R0)
