
About the Integration of Learning and Decision-Making Models in Intelligent Systems of Real-Time

  • Alexander P. Eremeev
  • Alexander A. Kozhukhov
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 875)

Abstract

The paper considers an integrated toolkit consisting of a multi-agent temporal-difference reinforcement learning module, a statistical module, and a main analysis module. Deep reinforcement learning approaches are analyzed as a means of improving the performance of reinforcement learning algorithms under time constraints. The paper proposes including an anytime algorithm, in particular the milestone method, in the forecasting subsystem of a real-time intelligent decision support system in order to improve performance and reduce response and execution time. The work was supported by RFBR projects 17-07-00553 and 18-51-00007.
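A minimal sketch, in Python, of the two techniques named above: a tabular temporal-difference (Q-learning) update and an anytime training loop that snapshots the current policy at fixed milestones, so the forecasting subsystem can be interrupted at a deadline and still return a usable result. The environment interface (reset/step), the milestone interval, and the hyperparameters are illustrative assumptions, not the authors' implementation.

import random
import time
from collections import defaultdict

def td_q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One temporal-difference (Q-learning) backup: Q(s,a) += alpha * TD-error."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_error = reward + gamma * best_next - Q[state][action]
    Q[state][action] += alpha * td_error
    return td_error

def anytime_train(env, actions, time_budget_s=1.0, milestone_every=100, epsilon=0.1):
    """Epsilon-greedy Q-learning that respects a time budget.
    At every milestone the current Q-table is snapshotted, so the caller
    always has a complete result when the deadline expires.
    Assumes env.reset() -> state and env.step(a) -> (next_state, reward, done)."""
    Q = defaultdict(lambda: {a: 0.0 for a in actions})
    snapshot, episode = {}, 0
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:            # anytime: stop when the budget runs out
        state, done = env.reset(), False
        while not done and time.monotonic() < deadline:
            if random.random() < epsilon:         # explore
                action = random.choice(actions)
            else:                                 # exploit the current estimate
                action = max(Q[state], key=Q[state].get)
            next_state, reward, done = env.step(action)
            td_q_update(Q, state, action, reward, next_state)
            state = next_state
        episode += 1
        if episode % milestone_every == 0:        # milestone: store an interruptible result
            snapshot = {s: dict(a) for s, a in Q.items()}
    return snapshot or {s: dict(a) for s, a in Q.items()}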

Keywords

Artificial intelligence · Intelligent system · Real time · Reinforcement learning · Deep learning · Forecasting · Decision support · Anytime algorithm


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Alexander P. Eremeev (1)
  • Alexander A. Kozhukhov (1)
  1. Institute of Automatics and Computer Engineering, Moscow Power Engineering Institute, Moscow, Russia
