Abstract
In this paper, we study how structural decomposition and multiagent interaction can be exploited by deep reinforcement learning to address high-dimensional robotic control problems. To this end, we propose D3PG, a multiagent extension of DDPG that decomposes the global critic into a weighted sum of local critics, each modeled as an individual learning agent governing the decision making of a particular joint of the robot. We further propose a method to learn the weights of this sum during training in order to capture different levels of dependency among the agents. Experimental evaluation demonstrates that D3PG achieves competitive or significantly improved performance compared to several widely used deep reinforcement learning algorithms. Another advantage of D3PG is that it provides explicit interpretations of the final learned policy as well as of the underlying dependencies among the joints of the learning robot.
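As a rough illustration of the decomposition described above, the following sketch shows a global critic formed as a learned weighted sum of per-joint local critics, \(Q(s, a) = \sum _i w_i Q_i(s, a_i)\). This is our own minimal reading of the abstract, not the authors' implementation: the module names, network sizes, one-dimensional joint actions, and the softmax parameterization of the weights are all assumptions.

```python
import torch
import torch.nn as nn

class LocalCritic(nn.Module):
    """Local critic Q_i(s, a_i): scores the full state plus one joint's action."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, joint_action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, joint_action], dim=-1))

class DecomposedCritic(nn.Module):
    """Global critic Q(s, a) = sum_i w_i * Q_i(s, a_i) with learnable weights."""
    def __init__(self, state_dim: int, num_joints: int):
        super().__init__()
        self.local_critics = nn.ModuleList(
            [LocalCritic(state_dim) for _ in range(num_joints)]
        )
        # Unnormalized weights, learned jointly with the critics by gradient descent.
        self.logits = nn.Parameter(torch.zeros(num_joints))

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.logits, dim=0)  # keep the mixture weights normalized
        return sum(
            w[i] * critic(state, action[..., i:i + 1])
            for i, critic in enumerate(self.local_critics)
        )
```

After training, inspecting the learned weights \(w\) is one way to read off the relative importance of, and dependencies among, the joints, which is the kind of interpretability the abstract refers to.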
Keywords
- Deep reinforcement learning
- Structural decomposition
- Interpretability
- Robotic control
Supported by the National Natural Science Foundation of China under Grant No. 62076259.
A Appendix
A.1 MuJoCo Platform
Below we provide specifications for the states, actions, and rewards of the four robot environments in MuJoCo.
Swimmer. The swimmer is a planar robot with 3 links and 2 actuated joints. Fluid is simulated through viscosity forces, which apply drag on each link, allowing the swimmer to move forward. The 8-dim observation includes the joint angles and velocities, and the coordinates of the center of mass. The reward is given by \(r(s, a) = v_x - 0.005\cdot \Vert a\Vert _2^2\), where \(v_x\) is the forward velocity. No termination condition is applied.
Hopper. The hopper is a planar robot with 4 rigid links, corresponding to the torso, upper leg, lower leg, and foot, along with 3 actuated joints. The 11-dim observation includes joint angles, joint velocities, the coordinates of the center of mass, and the constraint forces. The reward is given by \(r(s, a) = v_x - 0.005\cdot \Vert a\Vert _2^2 + 1\), where the last term is a bonus for staying “alive”. The episode terminates when \(z_{body} < 0.7\), where \(z_{body}\) is the z-coordinate of the body, or when \(|\theta _y| > 0.2\), where \(\theta _y\) is the forward pitch of the body.
Walker. The walker is a planar biped robot consisting of 7 links, corresponding to two legs and a torso, along with 6 actuated joints. The 17-dim observation includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is given by \(r(s, a) = v_x - 0.005\cdot \Vert a\Vert _2^2\). The episode terminates when \(z_{body} < 0.8\), \(z_{body} > 2.0\), or \(|\theta _y| > 1.0\).
Half-Cheetah. The half-cheetah is a planar biped robot with 9 rigid links, including two legs and a torso, along with 6 actuated joints. The 17-dim state includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is \(r(s, a) = v_x - 0.005\cdot \Vert a\Vert _2^2\). No termination condition is applied.
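For concreteness, the reward and termination rules above translate directly into code. The following is a minimal sketch under the specifications given in this appendix (the function names are ours; the constants are taken verbatim from the text):

```python
import numpy as np

CTRL_COST = 0.005  # control-cost coefficient shared by all four environments

def reward(v_x: float, action: np.ndarray, alive_bonus: float = 0.0) -> float:
    """r(s, a) = v_x - 0.005 * ||a||_2^2, plus an alive bonus (hopper only)."""
    return v_x - CTRL_COST * float(np.sum(np.square(action))) + alive_bonus

def hopper_done(z_body: float, theta_y: float) -> bool:
    # Terminate when the torso drops too low or pitches too far.
    return z_body < 0.7 or abs(theta_y) > 0.2

def walker_done(z_body: float, theta_y: float) -> bool:
    # Terminate when the torso leaves the allowed height band or pitches too far.
    return z_body < 0.8 or z_body > 2.0 or abs(theta_y) > 1.0

# The swimmer and half-cheetah episodes are never terminated early.
```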