
Using Human-Inspired Signals to Disambiguate Navigational Intentions

Conference paper in Social Robotics (ICSR 2020)

Abstract

People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the second of these points, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to more clearly signal their navigational intention. We present a human study measuring the importance of gaze in coordinating people’s navigation. This study is followed by the development of a virtual agent head which is added to a mobile robot platform. Comparing the performance of a robot with a virtual agent head against one with an LED turn signal demonstrates that the gaze cue influences people’s navigational choices and that people interpret it more easily than the LED turn signal.


Notes

  1. https://www.urbandictionary.com/define.php?term=Hallway%20dance.

  2. https://www.hello-robo.com/maki.

  3. https://unity.com/.

  4. https://github.com/MathiasCiarlo/ROSBridgeLib (see the protocol sketch following these notes).

  5. The questions from the survey are available online at https://drive.google.com/drive/folders/1qVj-gU1aFwY6Eq2a_l9ZdesfmQ_QQOC8?usp=sharing. The first question, “Condition”, was filled in before participants responded.

  6. Example interactions can be seen in the companion video to this paper, posted at https://youtu.be/rQziUQro9BU.

  7. Difficulties have been observed in the interpretation of the gaze direction of virtual agents, an effect known as the Mona Lisa effect because it looks as though the Mona Lisa painting by Leonardo da Vinci is always looking at the observer.
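Notes 3 and 4 indicate that the virtual agent head was rendered in Unity and connected to the robot through ROSBridgeLib, a Unity-side client for the rosbridge JSON/WebSocket protocol. The sketch below is only an illustration of that protocol, not the authors' code: it publishes a gaze-direction cue that a Unity client such as ROSBridgeLib could subscribe to in order to turn the virtual head. The topic name /gaze_direction, the std_msgs/String message type, and the websocket-client dependency are assumptions made for this example.

    # Minimal sketch (assumption, not the paper's implementation): send a
    # gaze-direction cue over the rosbridge JSON/WebSocket protocol, which
    # ROSBridgeLib (note 4) speaks on the Unity side. Assumes a rosbridge
    # server on localhost:9090; topic name and message type are hypothetical.
    import json
    import websocket  # pip install websocket-client

    ws = websocket.create_connection("ws://localhost:9090")

    # Advertise the topic so rosbridge knows its message type.
    ws.send(json.dumps({
        "op": "advertise",
        "topic": "/gaze_direction",
        "type": "std_msgs/String",
    }))

    # Publish the cue; a Unity subscriber could rotate the virtual head accordingly.
    ws.send(json.dumps({
        "op": "publish",
        "topic": "/gaze_direction",
        "msg": {"data": "left"},
    }))

    ws.close()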


Acknowledgments

This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO (W911NF-19-2-0333), DARPA, Lockheed Martin, GM, and Bosch. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. Studies in this work were approved under University of Texas at Austin IRB study numbers 2015-06-0058 and 2019-03-0139.

Author information

Corresponding author

Correspondence to Justin Hart.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Hart, J., et al. (2020). Using Human-Inspired Signals to Disambiguate Navigational Intentions. In: Wagner, A.R., et al. (eds.) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol. 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_27

  • DOI: https://doi.org/10.1007/978-3-030-62056-1_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62055-4

  • Online ISBN: 978-3-030-62056-1

  • eBook Packages: Computer Science, Computer Science (R0)
