A Study of Random Neural Network Performance for Supervised Learning Tasks in CUDA
Graphics Processing Units (GPUs) have been used both to accelerate graphics computations and, increasingly, as general-purpose computing devices. One of the most widely used parallel platforms is the Compute Unified Device Architecture (CUDA), which makes it possible to exploit the many cores of a GPU in parallel and obtain high computational performance. Over the last years, CUDA has been used to implement several parallel distributed systems. At the end of the 1980s, a stochastic neural network named the Random Neural Network (RNN) was introduced. The model has been successfully used in the Machine Learning community to solve many learning tasks. In this paper we present a CUDA implementation of the gradient descent algorithm for the RNN model. We evaluate the performance of the algorithm on two real benchmark problems concerning energy sources, and we compare it with the performance obtained by a classic sequential implementation in C.
Keywords: Random Neural Network; Parallel Computing; CUDA; Gradient Descent Algorithm