Abstract
To enable situated human–robot interaction (HRI), an autonomous robot must both understand and control proxemics—the social use of space—to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced (by a human) and perceived (by a robot). The framework and models were implemented as autonomous proxemic behavior systems for sociable robots, including (1) a sampling-based method for robot proxemic goal state estimation with respect to human–robot distance and orientation parameters, (2) a reactive proxemic controller for goal state realization, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human–robot proxemics with respect to “interaction potential”—predicted automated speech and gesture recognition rates as the robot enters into and engages in face-to-face social encounters with a human user—illustrating their efficacy in supporting richer robot perception and autonomy in HRI.
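To make the sampling-based goal state estimation concrete, here is a minimal sketch in the spirit of the framework, not the authors' implementation: candidate robot poses (distance and orientation relative to the user) are sampled, weighted by predicted interaction potential, and iteratively resampled toward high-weight regions. The `interaction_potential` function below is a hypothetical placeholder for the paper's data-driven perceptual models, and all parameter values are illustrative assumptions.

```python
import numpy as np

def interaction_potential(distance, orientation):
    """Hypothetical stand-in for the learned models predicting combined
    speech/gesture recognition rates at a relative pose. The real models
    in the paper are data-driven; this placeholder simply peaks near a
    1.5 m, face-to-face configuration."""
    return np.exp(-((distance - 1.5) ** 2) / 0.5) * np.cos(orientation) ** 2

def estimate_goal_state(n_samples=1000, n_iters=10, rng=None):
    """Sampling-based proxemic goal state estimate: sample (distance,
    orientation) pairs, weight each by its interaction potential, and
    resample with small perturbations toward high-weight regions."""
    rng = np.random.default_rng() if rng is None else rng
    d = rng.uniform(0.5, 4.0, n_samples)               # distance (m)
    o = rng.uniform(-np.pi / 2, np.pi / 2, n_samples)  # orientation (rad)
    for _ in range(n_iters):
        w = interaction_potential(d, o) + 1e-6         # guard against all-zero weights
        w /= w.sum()
        idx = rng.choice(n_samples, n_samples, p=w)
        d = d[idx] + rng.normal(0, 0.05, n_samples)    # jitter to explore locally
        o = o[idx] + rng.normal(0, 0.05, n_samples)
    best = np.argmax(interaction_potential(d, o))
    return d[best], o[best]

goal_distance, goal_orientation = estimate_goal_state()
```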
Notes
In practice, we often extend the model as a dynamic Bayesian network (Rabiner 1990) by conditioning the pose on the previous state during resampling to ensure that the pose does not change drastically between time intervals (Mead and Matarić 2016). For interactions between two agents, this level of inference might be excessive; however, for interactions between three or more agents, such inference is effective in determining a stable set of proxemic parameters.
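As a hedged illustration of that temporal conditioning, the sketch below scales particle weights by a Gaussian transition prior centered on the previous pose before resampling, so poses that jump far between time intervals are downweighted. The functional form and the noise parameter are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def resample_with_motion_prior(poses, weights, prev_pose, sigma=0.05, rng=None):
    """Resample pose particles while conditioning on the previous state:
    weights are scaled by a Gaussian transition prior centered on
    prev_pose, penalizing large jumps between time intervals.

    poses: (N, k) array of candidate poses; prev_pose: (k,) array.
    sigma is an assumed transition-noise scale."""
    rng = np.random.default_rng() if rng is None else rng
    # Downweight poses far from the previous pose.
    transition = np.exp(-np.sum((poses - prev_pose) ** 2, axis=1) / (2 * sigma ** 2))
    w = weights * transition + 1e-12  # keep the distribution well defined
    w /= w.sum()
    idx = rng.choice(len(poses), len(poses), p=w)
    return poses[idx]
```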
In practice, a small number (in this work, \(10^{-6}\)) is added to IP to prevent division-by-zero errors when calculating the weight \(w_{IP}^{t}\) (Eq. 7).
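A minimal sketch of how such a constant keeps the weight well defined, assuming the weight is a normalization over interaction potential (IP) values; the actual form of \(w_{IP}^{t}\) is given by Eq. 7 of the paper, so the normalization below is only an assumed example.

```python
EPSILON = 1e-6  # small constant added to IP, as noted in the paper

def ip_weights(ip_values):
    """Normalize interaction-potential values into sampling weights.
    If every IP were exactly zero, the sum would be zero and the
    division would fail; adding EPSILON keeps the weights defined."""
    shifted = [ip + EPSILON for ip in ip_values]
    total = sum(shifted)
    return [s / total for s in shifted]
```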
The objective metrics employed did not necessitate a diverse set of human participants.
References
Argyle, M., & Dean, J. (1965). Eye-contact, distance, and affiliation. Sociometry, 28, 289–304.
Bailenson, J., Blascovich, J., Beall, A., & Loomis, J. (2001). Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence, 10(6), 583–598.
Brooks, A. G., & Arkin, R. C. (2007). Behavioral overlays for non-verbal communication expression on a humanoid robot. Autonomous Robots, 22(1), 55–74.
Burgoon, J., Stern, L., & Dillman, L. (1995). Interpersonal adaptation: Dyadic interaction patterns. New York: Cambridge University Press.
Feil-Seifer, D., & Matarić, M. (2011a). Automated detection and classification of positive vs. negative robot interactions with children with autism using distance-based features. In Proceedings of the 6th ACM/IEEE international conference on human–robot interaction (pp. 323–330). Lausanne, Switzerland, HRI’11.
Feil-Seifer, D., & Matarić, M. (2011b). People-aware navigation for goal-oriented behavior involving a human partner. In IEEE international conference on development and learning (vol. 2, pp. 1–6). Frankfurt am Main, Germany, ICDL’11.
Fox, D., Burgard, W., & Thrun, S. (1997). The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, 4(1), 23–33.
Gerkey, B., & Konolige, K. (2008). Planning and control in unstructured terrain. In ICRA Workshop on Path Planning on Costmaps. Pasadena, California, ICRA’08.
Hall, E. (1963). A system for notation of proxemic behavior. American Anthropologist, 65, 1003–1026.
Hall, E. T. (1959). The silent language. New York: Doubleday Company.
Hall, E. T. (1966). The hidden dimension. New York: Doubleday Company.
Hayduk, L., & Mainprize, S. (1980). Personal space of the blind. Social Psychology Quarterly, 43(2), 216–223.
Hüttenrauch, H., Eklundh, K., Green, A., & Topp, E. (2006). Investigating spatial relationships in human–robot interaction. In 2006 IEEE/RSJ international conference on intelligent robots and systems, IROS’06 (pp. 5052–5059).
ISO. (2003). Acoustics—Normal equal-loudness-level contours (ISO 226:2003). International Organization for Standardization.
Kastanis, I., & Slater, M. (2012). Reinforcement learning utilizes proxemics: An avatar learns to manipulate the position of people in immersive virtual reality. Transactions on Applied Perception, 9(1), 1–15.
Kuzuoka, H., Suzuki, Y., Yamashita, J., & Yamazaki, K. (2010). Reconfiguring spatial formation arrangement by robot body orientation. In 5th ACM/IEEE international conference on human–robot interaction. Osaka, Japan, HRI’10.
Mallenby, T. W. (1975). The personal space of hard-of-hearing children after extended contact with “normals”. British Journal of Social and Clinical Psychology, 14(3), 253–257.
Marder-Eppstein, E., Berger, E., Foote, T., Gerkey, B., & Konolige, K. (2010). The office marathon: Robust navigation in an indoor office environment. In IEEE international conference on robotics and automation (pp. 300–307). Anchorage, Alaska, ICRA’10.
Marquardt, N., & Greenberg, S. (2012). Informing the design of proxemic interactions. IEEE Pervasive Computing, 11(2), 14–23.
McNeill, D. (2005). Gesture and thought. Chicago: Chicago University Press.
Mead, R., & Matarić, M. J. (2012). A probabilistic framework for autonomous proxemic control in situated and mobile human–robot interaction. In 7th ACM/IEEE international conference on human–robot interaction (pp. 193–194). Boston, MA, HRI’12.
Mead, R., & Matarić, M. J. (2015a). Proxemics and performance: Subjective human evaluations of autonomous sociable robot distance and social signal understanding. In 2015 IEEE/RSJ international conference on intelligent robots and systems. Hamburg, Germany, IROS’15.
Mead, R., & Matarić, M. J. (2015b). Robots have needs too: People adapt their proxemic behavior to improve autonomous robot recognition of human social signals. In 4th international symposium on new frontiers in human–robot interaction. Canterbury, UK, NF-HRI’15.
Mead, R., & Matarić, M. J. (2016). Perceptual models of human–robot proxemics. In Experimental robotics, Springer Tracts in Advanced Robotics (Vol. 109, pp. 261–276).
Mead, R., Wade, E., Johnson, P., St. Clair, A., Chen, S., & Matarić, M. J. (2010). An architecture for rehabilitation task practice in socially assistive human–robot interaction. In Robot and human interactive communication (pp. 404–409).
Mead, R., Atrash, A., & Matarić, M. J. (2012). Representations of proxemic behavior for human–machine interaction. In NordiCHI 2012 workshop on proxemics in human–computer interaction. Copenhagen, Denmark, NordiCHI’12.
Mead, R., Atrash, A., & Matarić, M. J. (2013). Automated proxemic feature extraction and behavior recognition: Applications in human–robot interaction. International Journal of Social Robotics, 5(3), 367–378.
Mehrabian, A. (1972). Nonverbal communication. Piscataway: Aldine Transaction.
Mumm, J., & Mutlu, B. (2011). Human–robot proxemics: Physical and psychological distancing in human–robot interaction. In 6th ACM/IEEE international conference on human–robot interaction (pp. 331–338). Lausanne, Switzerland, HRI’11.
Oosterhout, T., & Visser, A. (2008). A visual method for robot proxemics measurements. In HRI workshop on metrics for human–robot interaction, Amsterdam.
Ozyurek, A. (2002). Do speakers design their co-speech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46(4), 688–704.
Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., & Ng, A. (2009). ROS: An open-source robot operating system. In ICRA workshop on open source software. Kobe, Japan, ICRA’09.
Rabiner, L. (1990). A tutorial on hidden Markov models and selected applications in speech recognition. In Readings in speech recognition (pp. 267–296).
Rossini, N. (2004). The analysis of gesture: Establishing a set of parameters. In Gesture-based communication in human–computer interaction (Vol. 2915, pp. 463–464). Springer-Verlag.
Schegloff, E. (1998). Body torque. Social Research, 65(3), 535–596.
Takayama, L., & Pantofaru, C. (2009). Influences on proxemic behaviors in human–robot interaction. In 2009 IEEE/RSJ international conference on intelligent robots and systems, IROS’09 (pp. 5495–5502).
Tapus, A., Matarić, M., & Scassellati, B. (2007). The grand challenges in socially assistive robotics. IEEE Robotics and Automation Magazine, 14(1), 35–42.
Trautman, P., & Krause, A. (2010). Unfreezing the robot: Navigation in dense, interacting crowds. In 2010 IEEE/RSJ international conference on intelligent robots and systems, IROS’10 (pp. 797–803).
Walters, M., Dautenhahn, K., Boekhorst, R., Koay, K., Syrdal, D., & Nehaniv, C. (2009). An empirical framework for human–robot proxemics. In New frontiers in human–robot interaction, Edinburgh (pp. 144–149).
Acknowledgments
This work was supported in part by an NSF Graduate Research Fellowship, the NSF National Robotics Initiative Grant IIS-1208500, and NSF Grants CNS-0709296 and CNS-1513275.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Supplementary material 1 (MP4 81,408 KB)
Cite this article
Mead, R., Matarić, M.J. Autonomous human–robot proxemics: socially aware navigation based on interaction potential. Auton Robot 41, 1189–1201 (2017). https://doi.org/10.1007/s10514-016-9572-2
Keywords
- Human–robot interaction
- Proxemics
- Social signals
- Interaction potential
- Goal state estimation
- Path-planning