Optimal Use of Verbal Instructions for Multi-robot Human Navigation Guidance
Efficiently guiding humans through indoor environments is a challenging open problem. Recent advances in mobile robotics and natural language processing have made it possible to consider doing so with the help of mobile, verbally communicating robots. Previously, stationary verbal robots were used for this purpose at Microsoft Research, and mobile non-verbal robots were used at UT Austin in their multi-robot human guidance system. This paper extends that mobile multi-robot human guidance research by adding natural language instructions, which are dynamically generated from the robots' path planner, and by implementing and testing the system on real robots.
Generating natural language instructions from the robots' plan opens up a variety of optimization opportunities, such as deciding where to place the robots, where to lead humans, and where to verbally instruct them. We present experimental results of the full multi-robot human guidance system and show that it is more effective than two baseline systems: one that only provides humans with verbal instructions, and another that uses a single robot to lead users to their destinations.
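The lead-vs-instruct trade-off described above can be illustrated with a toy cost model (this is a hypothetical sketch for intuition, not the paper's actual planner: the speeds, the follow-success probability `p_follow`, and the lost-time penalty are all invented parameters). Leading is reliable but slow, since the human is limited to the robot's pace and the robot is occupied; a verbal instruction lets the human walk at full speed but risks them getting lost.

```python
# Illustrative sketch (not the paper's formulation): choose, for each segment
# of a guidance path, whether a robot should lead the human or issue a verbal
# instruction.  All numeric parameters here are hypothetical.

def segment_cost(length_m, instruct, human_speed=1.2, robot_speed=0.8,
                 p_follow=0.9, lost_penalty_s=60.0):
    """Expected traversal time in seconds for one path segment."""
    if instruct:
        # Human walks alone at full speed; with probability (1 - p_follow)
        # they get lost and pay a fixed recovery-time penalty.
        return length_m / human_speed + (1.0 - p_follow) * lost_penalty_s
    # Robot leads: reliable, but the pair moves at the robot's slower speed.
    return length_m / robot_speed

def plan_actions(segment_lengths_m):
    """Greedily pick the cheaper action ('lead' or 'instruct') per segment."""
    return [min(('lead', 'instruct'),
                key=lambda a: segment_cost(s, instruct=(a == 'instruct')))
            for s in segment_lengths_m]

if __name__ == '__main__':
    # A long, simple corridor favors instructing; a short, easy-to-miss
    # turn favors leading.
    print(plan_actions([100.0, 5.0]))  # → ['instruct', 'lead']
```

Under these toy parameters, instructing wins on long segments (the human's speed advantage outweighs the expected lost time) while leading wins on short ones, which mirrors the kind of placement and instruction decisions the full system optimizes over.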
Keywords: Multi-robot coordination · Natural language · Human-robot interaction · Indoor navigation
This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (IIS-1637736, CPS-1739964, IIS-1724157), ONR (N00014-18-2243), FLI (RFP2-000), ARL, DARPA, and Lockheed Martin. Peter Stone serves on the Board of Directors of Cogitai, Inc. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.
Supplementary material 1 (MP4, 96,928 KB)
- 1. Bohus, D., Saw, C.W., Horvitz, E.: Directions robot: in-the-wild experiences and lessons learned. In: AAMAS, pp. 637–644 (2014)
- 2. Daniele, A.F., Bansal, M., Walter, M.R.: Navigational instruction generation as inverse reinforcement learning with neural machine translation. In: HRI, pp. 109–118 (2017)
- 3. Denis, A.: The LORIA instruction generation system L in GIVE-2.5. In: The European Workshop on Natural Language Generation, pp. 302–306. Association for Computational Linguistics (2011)
- 4. Eaton, E., Mucchiani, C., Mohan, M., Isele, D., Luna, J.M., Clingerman, C.: Design of a low-cost platform for autonomous mobile service robots. In: The IJCAI Workshop on Autonomous Mobile Service Robots (2016)
- 5. Hile, H., Grzeszczuk, R., Liu, A., Vedantham, R., Košecka, J., Borriello, G.: Landmark-based pedestrian navigation with enhanced spatial reasoning. In: Tokuda, H., Beigl, M., Friday, A., Brush, A.J.B., Tobe, Y. (eds.) Pervasive 2009. LNCS, vol. 5538, pp. 59–76. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01516-8_6
- 6. Khandelwal, P., Barrett, S., Stone, P.: Leading the way: an efficient multi-robot guidance system. In: AAMAS, pp. 1625–1633 (2015)
- 7. Khandelwal, P., Stone, P.: Multi-robot human guidance: human experiments and multiple concurrent requests. In: AAMAS, pp. 1369–1377 (2017)
- 9. Quigley, M., et al.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, vol. 3, p. 5 (2009)
- 10. Striegnitz, K., Denis, A., Gargett, A., Garoufi, K., Koller, A., Theune, M.: Report on the second second challenge on generating instructions in virtual environments (GIVE-2.5). In: The European Workshop on Natural Language Generation, pp. 270–279 (2011)
- 11. Van Den Oord, A., et al.: WaveNet: a generative model for raw audio. In: SSW, p. 125 (2016)
- 12. Veloso, M.M., Biswas, J., Coltin, B., Rosenthal, S.: CoBots: robust symbiotic autonomous mobile service robots. In: IJCAI, p. 4423 (2015)