
Parallel Implementations of Recurrent Neural Network Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5495)

Abstract

Neural networks have proved to be effective in solving a wide range of problems. As problems become more demanding, they require larger neural networks, and the time needed for learning grows accordingly. Parallel implementations of learning algorithms are therefore vital for practical application. The implementation, however, depends strongly on the features of the learning algorithm and the underlying hardware architecture. For this experimental work a dynamic problem was chosen, which calls for recurrent neural networks and a learning algorithm based on the paradigm of learning automata. Two parallel implementations of the algorithm were developed: one on a computing cluster using the MPI and OpenMP libraries, and one on a graphics processing unit using the CUDA library. The performance of both parallel implementations justifies the development of parallel algorithms.
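
To illustrate the kind of data parallelism such a learning scheme exposes, the sketch below (not taken from the paper; the network size, error measure, and perturbation scheme are placeholders) evaluates independent candidate weight sets of a small recurrent network in parallel with OpenMP. The same decomposition over independent candidates maps naturally onto MPI processes on a cluster or CUDA thread blocks on a GPU.

```c
/*
 * Illustrative sketch only (not the authors' code): data-parallel evaluation
 * of candidate weight perturbations for a small recurrent network, using
 * OpenMP. Network structure, error measure, and perturbation scheme are
 * placeholders chosen to show where the parallelism lies.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_NEURONS    8
#define N_CANDIDATES 64
#define N_STEPS      100

/* Run the recurrent network for a fixed number of steps with weights w and
 * accumulate a placeholder error term to be minimized. */
static double run_network(const double *w)
{
    double state[N_NEURONS], next[N_NEURONS];
    for (int i = 0; i < N_NEURONS; ++i)
        state[i] = 0.1;

    double err = 0.0;
    for (int t = 0; t < N_STEPS; ++t) {
        for (int i = 0; i < N_NEURONS; ++i) {
            double sum = 0.0;
            for (int j = 0; j < N_NEURONS; ++j)
                sum += w[i * N_NEURONS + j] * state[j];
            next[i] = tanh(sum);
        }
        for (int i = 0; i < N_NEURONS; ++i) {
            state[i] = next[i];
            err += state[i] * state[i];   /* placeholder error term */
        }
    }
    return err;
}

int main(void)
{
    double base[N_NEURONS * N_NEURONS];
    for (int k = 0; k < N_NEURONS * N_NEURONS; ++k)
        base[k] = 0.05 * ((k % 7) - 3);

    double errors[N_CANDIDATES];

    /* Each candidate perturbation is independent, so its evaluation can be
     * assigned to a separate thread (or, analogously, to an MPI process or
     * a CUDA thread block). */
    #pragma omp parallel for
    for (int c = 0; c < N_CANDIDATES; ++c) {
        double w[N_NEURONS * N_NEURONS];
        unsigned int seed = 1234u + (unsigned int)c;  /* per-thread RNG state */
        for (int k = 0; k < N_NEURONS * N_NEURONS; ++k)
            w[k] = base[k] + 0.01 * ((double)rand_r(&seed) / RAND_MAX - 0.5);
        errors[c] = run_network(w);
    }

    /* Sequentially pick the best candidate, e.g. to reinforce its action
     * probabilities in a learning-automata update. */
    int best = 0;
    for (int c = 1; c < N_CANDIDATES; ++c)
        if (errors[c] < errors[best])
            best = c;
    printf("best candidate %d, error %g\n", best, errors[best]);
    return 0;
}
```

Compile with, for example, `gcc -O2 -fopenmp example.c -lm`; without OpenMP the pragma is ignored and the program still runs sequentially.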




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lotrič, U., Dobnikar, A. (2009). Parallel Implementations of Recurrent Neural Network Learning. In: Kolehmainen, M., Toivanen, P., Beliczynski, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2009. Lecture Notes in Computer Science, vol 5495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04921-7_11


  • DOI: https://doi.org/10.1007/978-3-642-04921-7_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04920-0

  • Online ISBN: 978-3-642-04921-7

  • eBook Packages: Computer Science, Computer Science (R0)
