Abstract
Human-like appearance has been shown to positively affect the perception of and attitudes towards robotic agents. In particular, the more human-like robots look, the more willing participants are to ascribe human-like states to them (e.g., having a mind, emotions, agency). The positive effect of human-likeness on agent ratings, however, does not translate into better performance in human-robot interaction (HRI): performance first increases with human-likeness, then drops dramatically once human-likeness reaches around 70%, before finally peaking at 100% humanness. The goal of the current paper is to investigate whether attentional mechanisms, in particular delayed disengagement, are responsible for the drop in performance for very human-like, but not perfectly human, agents. The idea is that robots with a high degree of human-likeness capture attention and thus make it harder to orient attention away from them towards task-relevant stimuli in the periphery, resulting in poorer performance. To investigate this question, faces of differing degrees of human-likeness (0%, 30%, 70%, 100%, and a non-social control) were presented to participants in an eye-tracking experiment, and the time it took participants to orient towards a peripheral stimulus was measured. Results show significant delayed disengagement for all stimuli, but no stronger delayed disengagement for very human-like agents, making delayed disengagement an unlikely source of the negative effect of human-like appearance on performance in HRI.
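In paradigms like the one described, delayed disengagement is typically quantified as the extra saccade latency needed to orient from a central face to a peripheral target, relative to a non-social baseline. A minimal sketch of that computation, with entirely made-up latency values (not data from this study):

```python
import statistics

# Hypothetical per-participant mean saccade latencies (ms) to the
# peripheral target for two central-stimulus conditions.
# All numbers are illustrative, not results from the paper.
control = [212, 205, 221, 198, 230, 215]  # non-social control stimulus
human = [241, 228, 252, 219, 260, 247]    # 100% human face

# Disengagement cost: how much longer it takes each participant to
# orient away from the face than from the control stimulus.
disengagement_cost = [h - c for h, c in zip(human, control)]

mean_cost = statistics.mean(disengagement_cost)
print(f"Mean disengagement cost: {mean_cost:.1f} ms")
```

A positive mean cost would indicate that the face held attention longer than the control; comparing such costs across the human-likeness levels (0%, 30%, 70%, 100%) is what would reveal a selective effect for very human-like agents.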
Notes
- 1.
A pilot version of this study was conducted with the same agent across the spectrum with the same results.
- 2.
Post-hoc t-tests of the differences in the SOAs at the agent level revealed a significant difference between SOA 50 and SOA 200 only in the humanoid condition (t(35) = 2.577, p = .014, η² = .209), but not in any of the other conditions.
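The post-hoc comparison described above is a paired t-test across participants (same participants measured at both SOAs). A self-contained sketch of that statistic with illustrative data (the latencies and the resulting t value are invented, not the study's):

```python
import math
import statistics

# Hypothetical per-participant latencies (ms) at two stimulus-onset
# asynchronies for one agent condition; values are made up.
soa_50 = [245, 238, 252, 230, 248, 241, 236, 250]
soa_200 = [232, 229, 240, 221, 236, 233, 225, 238]

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), with df = n - 1,
# where d is the per-participant difference between conditions.
diffs = [a - b for a, b in zip(soa_50, soa_200)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t:.3f}")
```

With 36 participants, as the reported t(35) implies, the degrees of freedom would be 35; the sketch simply uses a smaller illustrative sample.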
Copyright information
© 2016 Springer International Publishing AG
Cite this paper
Smith, M.A., Wiese, E. (2016). Look at Me Now: Investigating Delayed Disengagement for Ambiguous Human-Robot Stimuli. In: Agah, A., Cabibihan, JJ., Howard, A., Salichs, M., He, H. (eds) Social Robotics. ICSR 2016. Lecture Notes in Computer Science(), vol 9979. Springer, Cham. https://doi.org/10.1007/978-3-319-47437-3_93
Print ISBN: 978-3-319-47436-6
Online ISBN: 978-3-319-47437-3