Abstract
The ability to cooperate and work as a team is one of the "holy grail" goals of intelligent robots. To address the importance of communication in multi-agent reinforcement learning (MARL), we propose a Cooperative Indoor Navigation (CIN) task, in which agents cooperatively navigate to a goal in a 3D indoor room with realistic observation inputs. This navigation task is more challenging and closer to real-world robotic applications than previous multi-agent tasks, since each agent observes only part of the environment from its first-person view; accomplishing the task therefore requires communication and cooperation among agents. To support research on the CIN task, we collect a large-scale dataset with challenging demonstration trajectories; the code and data have been released. Prior MARL methods primarily emphasized learning policies for multiple agents but paid little attention to the communication model, so they perform suboptimally on the CIN task. In this paper, we propose a MARL model with a communication mechanism to address the CIN task. In our experiments, we find that the proposed model outperforms previous MARL methods and that communication is the key to solving the CIN task. Our quantitative results show that the proposed MARL method outperforms the baseline by 6% on SPL, and our qualitative results demonstrate that agents equipped with the communication mechanism explore the environment sufficiently and thus navigate efficiently.
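The SPL metric quoted above (Success weighted by Path Length, from Anderson et al.'s "On evaluation of embodied navigation agents") averages, over episodes, the binary success indicator weighted by the ratio of shortest-path length to the path actually taken. A minimal sketch of how it is typically computed (the episode tuple layout here is our own, not from the paper):

```python
def spl(episodes):
    """Success weighted by Path Length.

    episodes: list of (success, shortest_len, actual_len) tuples, where
      success      -- bool, whether the agent reached the goal,
      shortest_len -- geodesic shortest-path length to the goal,
      actual_len   -- length of the path the agent actually took.
    """
    total = 0.0
    for success, shortest_len, actual_len in episodes:
        if success:
            # Failed episodes contribute 0; successful ones are weighted
            # by path efficiency (<= 1, equal to 1 for an optimal path).
            total += shortest_len / max(actual_len, shortest_len)
    return total / len(episodes)


# Example: one optimal success, one inefficient success, one failure.
episodes = [(True, 10.0, 10.0), (True, 5.0, 10.0), (False, 8.0, 12.0)]
print(spl(episodes))  # (1.0 + 0.5 + 0.0) / 3 = 0.5
```

Under this metric, a 6% improvement means the communicating agents either succeed more often or take paths closer to the geodesic optimum, consistent with the qualitative finding that communication yields more thorough exploration.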
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhu, F., Lee, V.C., Liu, R. (2024). Communicative and Cooperative Learning for Multi-agent Indoor Navigation. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science(), vol 14646. Springer, Singapore. https://doi.org/10.1007/978-981-97-2253-2_22
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2252-5
Online ISBN: 978-981-97-2253-2
eBook Packages: Computer Science (R0)