Abstract
We investigate the effectiveness of robot-generated mixed reality gestures. Our findings demonstrate that these gestures increase user effectiveness by decreasing response time, and that robots can pair long referring expressions with mixed reality gestures without cognitively overloading users.
This work was funded by NSF grants IIS-1909864 and CNS-1823245.
© 2021 Springer Nature Switzerland AG
Cite this paper
Tran, N., Grant, T., Phung, T., Hirshfield, L., Wickens, C., Williams, T. (2021). Robot-Generated Mixed Reality Gestures Improve Human-Robot Interaction. In: Li, H., et al. (eds.) Social Robotics. ICSR 2021. Lecture Notes in Computer Science, vol. 13086. Springer, Cham. https://doi.org/10.1007/978-3-030-90525-5_69
DOI: https://doi.org/10.1007/978-3-030-90525-5_69
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-90524-8
Online ISBN: 978-3-030-90525-5