Neural-Network-Based Synchronous Iteration Learning Method for Multi-player Zero-Sum Games

  • Ruizhuo Song
  • Qinglai Wei
  • Qing Li
Chapter
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 166)

Abstract

In this chapter, a neural-network-based synchronous solution method for multi-player zero-sum (ZS) games is established that does not require knowledge of the system dynamics. A policy iteration (PI) algorithm is presented to solve the Hamilton–Jacobi–Bellman (HJB) equation, and the iterative cost function is proven to converge to the optimal game value. To avoid dependence on the system dynamics, an off-policy learning method based on PI is given to obtain the iterative cost function, controls, and disturbances. A critic neural network (CNN), action neural networks (ANNs), and disturbance neural networks (DNNs) are used to approximate the cost function, the controls, and the disturbances, respectively. The neural-network weights form a synchronous weight matrix, which is proven to be uniformly ultimately bounded (UUB). Two examples demonstrate the effectiveness of the proposed synchronous solution method for multi-player ZS games.
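
To make the iteration concrete, the following is a minimal sketch of the PI step the abstract refers to, written for the standard continuous-time multi-player ZS setting with affine dynamics $\dot{x}=f(x)+\sum_{j} g_j(x)u_j+\sum_{k} h_k(x)w_k$ and utility $r(x,u,w)=x^\top Q x+\sum_j u_j^\top R_j u_j-\sum_k w_k^\top S_k w_k$; the symbols $f$, $g_j$, $h_k$, $Q$, $R_j$, $S_k$ are illustrative assumptions and may differ from the chapter's notation.

Policy evaluation (solve for $V^{(i)}$ given the current controls and disturbances):
$0 = r\big(x,u^{(i)},w^{(i)}\big) + \big(\nabla V^{(i)}\big)^{\top}\Big(f(x)+\sum_{j} g_j(x)\,u_j^{(i)}+\sum_{k} h_k(x)\,w_k^{(i)}\Big)$

Policy improvement (update each player's control and each disturbance):
$u_j^{(i+1)} = -\tfrac{1}{2}\,R_j^{-1} g_j^{\top}(x)\,\nabla V^{(i)}, \qquad w_k^{(i+1)} = \tfrac{1}{2}\,S_k^{-1} h_k^{\top}(x)\,\nabla V^{(i)}$

In the off-policy form described above, the model-dependent terms are evaluated from data collected along arbitrary behaviour policies rather than from $f$, $g_j$, $h_k$ directly, and the CNN, ANN, and DNN weights are stacked into a single synchronous weight matrix tuned from the same data, which is why the method requires no system dynamics.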

Copyright information

© Science Press, Beijing and Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. University of Science and Technology Beijing, Beijing, China
  2. Institute of Automation, Chinese Academy of Sciences, Beijing, China