Abstract
Robots are increasingly used to interact socially with humans. To enhance the quality of human-robot interaction, engineers aim to build robots with both a humanlike appearance and high mental capacity, yet there is little empirical evidence on how these two characteristics jointly affect people's emotional responses to robots. The current two experiments (each N = 80) presented robots with either a mechanical or a humanlike appearance, with mental capacities operationalized as low or high, and with mentalization that was either self-oriented (concentrating mainly on the robot itself) or other-oriented (reading others' minds). Robots with a humanlike appearance were disliked more than robots with a mechanical appearance, replicating the uncanny valley effect for appearance. Importantly, given a humanlike appearance, robots with high mental capacity elicited stronger dislike than those with low mental capacity, showing an uncanny valley effect for mind, whereas this difference was absent for robots with a mechanical appearance. In addition, this effect was limited to robots with self-oriented mentalization and did not extend to robots with other-oriented mentalization. Hence, a robot's exterior appearance and interior mental capacity interact to influence people's emotional reactions to it, and the uncanny valley of mind depends on the robot's appearance in addition to its mental ability. These findings imply that social robots with humanlike appearances should be designed with clearly other-directed social abilities to make them more likeable.
Notes
The measurement of likeability can be treated as a reverse-scored measure of eeriness.
Acknowledgments
This work was supported by the Fundamental Research Funds for the Provincial Universities of Zhejiang (Grant no. SJWZ2020001).
Author information
Contributions
Jun Yin, Shiqi Wang, and Meixuan Shao contributed to the study conception and design. Material preparation, data collection and analysis were performed by Jun Yin, Shiqi Wang, Wenjiao Guo and Meixuan Shao. The first draft of the manuscript was written by Jun Yin and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
All raw data can be accessed at the link: https://osf.io/9u54k/?view_only=fadfff217fb64bc8b8692e0ccb0981db
Ethics declarations
Ethics Statement
The research was conducted in accordance with the relevant APA and authors’ national ethical guidelines and the experimental protocol was approved by the institutional review board at the department of psychology at Ningbo University. All participants received information sheets about the experimental procedure and signed informed consent forms after learning the purpose and procedure of the experiment.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
ESM 1 (DOCX 4751 kb)
Cite this article
Yin, J., Wang, S., Guo, W. et al. More than appearance: the uncanny valley effect changes with a robot’s mental capacity. Curr Psychol 42, 9867–9878 (2023). https://doi.org/10.1007/s12144-021-02298-y