Social Robots with a Theory of Mind (ToM): Are We Threatened When They Can Read Our Emotions?

  • Jin Kang
  • S. Shyam Sundar
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1006)


How would human users react to social robots that possess a theory of mind (ToM)? Would robots that can infer their users’ cognitions and emotions threaten users’ sense of uniqueness and evoke other negative reactions, given that ToM is considered a uniquely human trait? If so, can these negative reactions be alleviated by framing robots as members of our ingroup? We addressed these questions with a 3 (robot’s affective ToM: correct vs. incorrect vs. control) × 2 (robot’s group membership: ingroup vs. outgroup) × 2 (user gender: female vs. male) between-subjects online experiment. Participants were asked to complete an online task with a robot named Pepper that was identified as either an ingroup or an outgroup member. They first read a passage describing a past user’s interaction with Pepper, in which the user expressed sarcasm and Pepper correctly identified the sarcasm, incorrectly identified it, or made a neutral comment. Males reacted more negatively than females to the Pepper that correctly identified sarcasm, reporting lower expected enjoyment of interacting with it. The ingroup Pepper made participants feel closer to the robot, but also threatened their sense of uniqueness more than the outgroup Pepper did. Design implications for fostering better human-robot interaction (HRI) are discussed.


Keywords: Theory of mind · Threat to uniqueness · Social robots · Ingroup



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Media Effects Research Laboratory, Donald P. Bellisario College of Communications, The Pennsylvania State University, University Park, USA
