Coordination in Collaborative Work by Deep Reinforcement Learning with Various State Descriptions

  • Yuki Miyashita
  • Toshiharu Sugawara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11873)


Cooperation and coordination are sophisticated behaviors and remain major issues in multi-agent systems research, because how agents should cooperate and coordinate depends not only on environmental characteristics but also on one another's behaviors and strategies, which closely affect each other. Meanwhile, multi-agent deep reinforcement learning (MADRL) has recently received much attention for its potential to learn and facilitate coordinated behaviors. However, the characteristics of socially learned coordination structures have not been sufficiently clarified. In this paper, focusing on MADRL in which each agent has its own deep Q-network (DQN), we show that different types of input to the network lead to various coordination structures, using the pickup and floor laying problem, an abstract form of our target problem. We also show that the generated coordination structures affect the overall performance of the multi-agent system.
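The core idea of the abstract, each agent learning independently while the observation encoding (the "state description") is varied, can be illustrated with a deliberately simplified sketch. The following is not the paper's method: it replaces each agent's DQN with a tabular Q-learner, uses a hypothetical 1-D corridor pickup task in place of the pickup and floor laying problem, and all names (`local_view`, `relative_view`, `IndependentQAgent`, `run`) are invented for illustration. It only demonstrates the general point that the choice of state description given to otherwise identical independent learners can change how well they coordinate.

```python
import random

GRID = 5  # hypothetical 1-D corridor of 5 cells; one task per episode

def local_view(agent_pos, task_pos, other_pos):
    # "Local" state description: offsets only within a radius-1 window;
    # anything farther away is invisible (None).
    return (task_pos - agent_pos if abs(task_pos - agent_pos) <= 1 else None,
            other_pos - agent_pos if abs(other_pos - agent_pos) <= 1 else None)

def relative_view(agent_pos, task_pos, other_pos):
    # "Relative" state description: exact offsets to the task and teammate.
    return (task_pos - agent_pos, other_pos - agent_pos)

class IndependentQAgent:
    """Tabular stand-in for a per-agent DQN with a pluggable encoder."""
    def __init__(self, encode, eps=0.1, alpha=0.5, gamma=0.9):
        self.q, self.encode = {}, encode
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, obs):
        if random.random() < self.eps:
            return random.choice((-1, 1))          # explore: step left/right
        qs = {a: self.q.get((obs, a), 0.0) for a in (-1, 1)}
        return max(qs, key=qs.get)                 # exploit: greedy action

    def update(self, obs, a, r, obs2):
        best = max(self.q.get((obs2, b), 0.0) for b in (-1, 1))
        key = (obs, a)
        self.q[key] = self.q.get(key, 0.0) + self.alpha * (
            r + self.gamma * best - self.q.get(key, 0.0))

def run(encode, episodes=3000, seed=0):
    """Two independent learners share a cooperative pickup reward."""
    random.seed(seed)
    agents = [IndependentQAgent(encode) for _ in range(2)]
    picked = 0
    for _ in range(episodes):
        pos = [random.randrange(GRID) for _ in range(2)]
        task = random.randrange(GRID)
        for _ in range(10):                        # short episode horizon
            obs = [encode(pos[i], task, pos[1 - i]) for i in range(2)]
            acts = [agents[i].act(obs[i]) for i in range(2)]
            for i in range(2):
                pos[i] = min(GRID - 1, max(0, pos[i] + acts[i]))
            done = task in pos                     # either agent picks it up
            r = 1.0 if done else -0.01             # shared team reward
            obs2 = [encode(pos[i], task, pos[1 - i]) for i in range(2)]
            for i in range(2):
                agents[i].update(obs[i], acts[i], r, obs2[i])
            if done:
                picked += 1
                break
    return picked / episodes
```

Swapping `relative_view` for `local_view` changes nothing but the state description, yet agents with the richer relative encoding can learn to move directly toward tasks, while radius-1 agents must wander until a task enters view; comparing `run(relative_view)` against `run(local_view)` makes the gap visible.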


Multi-agent deep reinforcement learning · Coordination · Cooperation · Divisional cooperation · Deep Q-networks



This work was partly supported by JSPS KAKENHI Grant Number 17KT0044.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Computer Science and Engineering, Waseda University, Tokyo, Japan
  2. Shimizu Corporation, Tokyo, Japan
