
A Study of Random Neural Network Performance for Supervised Learning Tasks in CUDA

  • Sebastián Basterrech
  • Jan Janoušek
  • Vaclav Snášel
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 298)

Abstract

The Graphics Processing Unit (GPU) has been used to accelerate graphics computations as well as to serve as a more general-purpose computing device. One of the most widely used parallel platforms is the Compute Unified Device Architecture (CUDA), which allows algorithms to be implemented in parallel on the GPU, obtaining high computational performance. In recent years, CUDA has been used to implement several parallel distributed systems. At the end of the 1980s, a stochastic neural network called the Random Neural Network (RNN) was introduced. The method has been successfully used in the Machine Learning community to solve many learning tasks. In this paper we present the gradient descent algorithm for the RNN model in CUDA. We evaluate the performance of the algorithm on two real benchmark problems concerning energy sources, and we compare it with the performance obtained using a classic implementation in C.
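The RNN referred to here is Gelenbe's spiking queueing model, in which each neuron i has a steady-state activation q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i collect the excitatory and inhibitory signal rates arriving from the other neurons and from outside the network. Because each q_i can be recomputed independently given the activations from the previous iteration, the fixed-point evaluation maps naturally onto one CUDA thread per neuron. The kernel below is a minimal sketch of such an update step under that mapping; the kernel name, argument names, and row-major memory layout are illustrative assumptions, not the authors' implementation.

    // Minimal illustrative sketch (not the authors' code): one fixed-point update of
    // the RNN steady-state activations q_i = lambda_plus_i / (r_i + lambda_minus_i),
    //   lambda_plus_i  = Lambda_i + sum_j q_j * Wplus[j][i]   (excitatory arrivals)
    //   lambda_minus_i = lambda_i + sum_j q_j * Wminus[j][i]  (inhibitory arrivals)
    // All names (rnn_update, Wplus, Wminus, ...) are assumptions for illustration.
    #include <cuda_runtime.h>

    __global__ void rnn_update(const float* Wplus,   // N x N excitatory weights, row-major
                               const float* Wminus,  // N x N inhibitory weights, row-major
                               const float* Lambda,  // external excitatory rates
                               const float* lambda,  // external inhibitory rates
                               const float* r,       // neuron firing rates
                               const float* q_in,    // activations from previous iteration
                               float*       q_out,   // updated activations
                               int N)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per neuron
        if (i >= N) return;

        float plus = Lambda[i], minus = lambda[i];
        for (int j = 0; j < N; ++j) {                   // accumulate incoming signal rates
            plus  += q_in[j] * Wplus[j * N + i];
            minus += q_in[j] * Wminus[j * N + i];
        }
        q_out[i] = fminf(plus / (r[i] + minus), 1.0f);  // keep q_i a valid probability
    }

On the host side such a kernel would be launched repeatedly, swapping q_in and q_out, until the activations converge; the gradient descent weight updates of the learning algorithm can then be parallelised over the N × N weight matrices in the same one-thread-per-element fashion.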

Keywords

Random Neural Network · Parallel Computing · CUDA · Gradient Descent Algorithm



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Sebastián Basterrech (1)
  • Jan Janoušek (1)
  • Vaclav Snášel (1)
  1. IT4Innovations, VŠB–Technical University of Ostrava, Ostrava, Czech Republic
