Use of Neural Networks in Q-Learning Algorithm

  • Nataliya Boyko
  • Volodymyr Korkishko
  • Bohdan Dohnyak
  • Olena Vovk
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 935)


This article is devoted to reinforcement learning. It covers several modifications of the Q-Learning algorithm, together with techniques that can accelerate learning through the use of neural networks. We also discuss different ways of approximating the tables of this algorithm, consider its implementation in code, and analyze its behavior in different environments. We determine the optimal parameters for its implementation and evaluate its performance by two criteria: the number of necessary neural network weight corrections and the quality of training.


Keywords: Reinforcement learning · Q-Learning · Neural networks · Markov environment
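The abstract refers to the tabular Q-Learning update that the article's neural-network approximation builds on. As a point of reference, here is a minimal sketch of that tabular update on a toy chain MDP; the environment (a chain of states with left/right actions and a terminal reward) and all parameter values are illustrative assumptions, not taken from the article itself.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-Learning on a toy chain MDP.

    States 0..n_states-1; action 0 moves left, action 1 moves right.
    Reaching the rightmost state yields reward 1 and ends the episode.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q-table: Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (ties broken randomly)
            if rng.random() < epsilon or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-Learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
# greedy policy extracted from the learned table, one entry per non-terminal state
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)
```

The article's subject is replacing the table `Q` with a neural network that maps states to action values; the update rule above then becomes a regression target for the network's weights, which is where the "number of necessary weight corrections" metric comes from.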



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Lviv Polytechnic National University, Lviv, Ukraine
