Abstract
Neural networks have proved effective in solving a wide range of problems. As problems become more demanding, they require larger neural networks, and the time spent on learning grows accordingly. Parallel implementations of learning algorithms are therefore essential for practical use. An implementation, however, depends strongly on the features of the learning algorithm and the underlying hardware architecture. For this experimental work a dynamic problem was chosen, which calls for recurrent neural networks and a learning algorithm based on the paradigm of learning automata. Two parallel implementations of the algorithm were developed: one on a computing cluster using the MPI and OpenMP libraries, and one on a graphics processing unit using the CUDA library. The performance of both parallel implementations justifies the development of parallel algorithms.
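The abstract's core idea, computing the state update of a recurrent neural network in parallel across processing units, can be illustrated with a minimal sketch. This is not the paper's actual code: the function name, the plain `tanh` activation, and the thread-based partitioning are illustrative assumptions standing in for the MPI/OpenMP and CUDA implementations the paper describes. The sketch partitions the rows of the recurrent weight matrix (one row per neuron) across workers, which mirrors the neuron-level data parallelism both target platforms exploit.

```python
# Illustrative sketch only, not the paper's implementation:
# compute y_t = tanh(W @ y_{t-1}) with rows of W (i.e. neurons)
# partitioned across worker threads.
import math
from concurrent.futures import ThreadPoolExecutor

def update_state(W, y_prev, workers=4):
    """Recurrent state update with neuron rows split across threads."""
    n = len(W)

    def row_block(lo, hi):
        # Each worker computes the activations of its own neuron block.
        return [math.tanh(sum(w * y for w, y in zip(W[i], y_prev)))
                for i in range(lo, hi)]

    # Partition neuron indices into contiguous blocks, one per worker.
    step = (n + workers - 1) // workers
    bounds = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda b: row_block(*b), bounds)
    # Concatenate the blocks back into the full state vector.
    return [v for part in parts for v in part]

# Usage: a 2-neuron network with a diagonal weight matrix.
W = [[0.5, 0.0], [0.0, 0.5]]
state = update_state(W, [1.0, 1.0])
```

On a cluster, each MPI process would own one such block and exchange the updated state vector after every step; on a GPU, each CUDA thread would compute one row. The same partitioning idea applies to the learning-automata weight updates, since updates to disjoint weight subsets are independent.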
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Lotrič, U., Dobnikar, A. (2009). Parallel Implementations of Recurrent Neural Network Learning. In: Kolehmainen, M., Toivanen, P., Beliczynski, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2009. Lecture Notes in Computer Science, vol 5495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04921-7_11
Print ISBN: 978-3-642-04920-0
Online ISBN: 978-3-642-04921-7