Abstract
People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the latter point, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to signal their navigational intentions more clearly. We present a human study measuring the importance of gaze in coordinating people’s navigation. This study is followed by the development of a virtual agent head, which is added to a mobile robot platform. Comparing a robot equipped with the virtual agent head against one equipped with an LED turn signal demonstrates that the gaze cue influences people’s navigational choices, and that people interpret the gaze cue more easily than the LED turn signal.
Notes
- 1.
- 2.
- 3.
- 4.
- 5. The questions from the survey are available online at https://drive.google.com/drive/folders/1qVj-gU1aFwY6Eq2a_l9ZdesfmQ_QQOC8?usp=sharing. The first question, “Condition,” was filled in before participants responded.
- 6. Example interactions can be seen in the companion video to this paper, posted at https://youtu.be/rQziUQro9BU.
- 7. Difficulties have been observed in the interpretation of the gaze direction of virtual agents. This effect is known as the Mona Lisa effect because, like the subject of Leonardo da Vinci’s Mona Lisa, the agent appears to be looking at the observer regardless of where the observer stands.
Acknowledgments
This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO (W911NF-19-2-0333), DARPA, Lockheed Martin, GM, and Bosch. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. Studies in this work were approved under University of Texas at Austin IRB study numbers 2015-06-0058 and 2019-03-0139.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Hart, J., et al. (2020). Using Human-Inspired Signals to Disambiguate Navigational Intentions. In: Wagner, A.R., et al. (eds.) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol. 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_27
DOI: https://doi.org/10.1007/978-3-030-62056-1_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62055-4
Online ISBN: 978-3-030-62056-1