I’m Not Playing Anymore! A Study Comparing Perceptions of Robot and Human Cheating Behavior

  • Kerstin Haring
  • Kristin Nye
  • Ryan Darby
  • Elizabeth Phillips
  • Ewart de Visser
  • Chad Tossell
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11876)


Cheating is a universally salient and disliked behavior. Previous research has shown that a cheating robot dramatically increases its perceived agency. However, that research did not directly compare human cheating to robot cheating. We examined whether a human and a robot that cheated were evaluated differently in terms of reactionary behaviors as well as attributions of mental states and perceptions of competence, warmth, agency, and capability to experience. This study partially replicated the previous findings [10], showing that participants were highly socially engaged with the cheating robot and reacted with hostility to its cheating; these reactions were not observed in the human condition. Additionally, play interactions with the robot were rated as more discomforting than those with the human player. Finally, the robot was perceived as less warm, competent, agentic, and able to experience than the human, a result attributable primarily to the inherent human-likeness difference between the agents. Several implications of this study are discussed with respect to the design of robot behavior and human social norms.


Human-robot interaction · Social robots · Robot perception · Robot behavior


  1. Abubshait, A., Wiese, E.: You look human, but act like a machine: agent appearance and behavior modulate different aspects of human-robot interaction. Front. Psychol. 8, 1393 (2017)
  2. Carpinella, C.M., Wyman, A.B., Perez, M.A., Stroessner, S.J.: The robotic social attributes scale (RoSAS): development and validation. In: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pp. 254–262. ACM (2017)
  3. Cosmides, L., Tooby, J.: Cognitive adaptations for social exchange. Adap. Mind: Evol. Psychol. Gener. Cult. 163, 163–228 (1992)
  4. Fiske, S.T., Cuddy, A.J., Glick, P.: Universal dimensions of social cognition: warmth and competence. Trends Cogn. Sci. 11(2), 77–83 (2007)
  5. Gray, K., Young, L., Waytz, A.: Mind perception is the essence of morality. Psychol. Inq. 23(2), 101–124 (2012)
  6. Haring, K.S., Watanabe, K., Velonaki, M., Tossell, C.C., Finomore, V.: FFAB–the form function attribution bias in human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 10(4), 843–851 (2018)
  7. Haring, K.S., Watanabe, K., Silvera-Tawil, D., Velonaki, M., Takahashi, T.: Changes in perception of a small humanoid robot. In: 2015 6th International Conference on Automation, Robotics and Applications (ICARA), pp. 83–89. IEEE (2015)
  8. Jackson, R.B., Wen, R., Williams, T.: Tact in noncompliance: the need for pragmatically apt responses to unethical commands (2019)
  9. Korman, J., Harrison, A., McCurry, M., Trafton, G.: Beyond programming: can robots’ norm-violating actions elicit mental state attributions? In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 530–531. IEEE (2019)
  10. Litoiu, A., Ullman, D., Kim, J., Scassellati, B.: Evidence that robots trigger a cheating detector in humans. In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 165–172. ACM (2015)
  11. Lucas, G.M., Gratch, J., King, A., Morency, L.P.: It’s only a computer: virtual humans increase willingness to disclose. Comput. Hum. Behav. 37, 94–100 (2014)
  12. Phillips, E., Zhao, X., Ullman, D., Malle, B.F.: What is human-like?: decomposing robots’ human-like appearance using the anthropomorphic robot (abot) database. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 105–113. ACM (2018)
  13. Short, E., Hart, J., Vu, M., Scassellati, B.: No fair!! an interaction with a cheating robot. In: 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 219–226. IEEE (2010)
  14. Stafford, R.Q., MacDonald, B.A., Jayawardena, C., Wegner, D.M., Broadbent, E.: Does the robot have a mind? Mind perception and attitudes towards robots predict use of an eldercare robot. Int. J. Soc. Robot. 6(1), 17–32 (2014)
  15. Ullman, D., Leite, L., Phillips, J., Kim-Cohen, J., Scassellati, B.: Smart human, smarter robot: how cheating affects perceptions of social agency. In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 36 (2014)
  16. Van Lier, J., Revlin, R., De Neys, W.: Detecting cheaters without thinking: testing the automaticity of the cheater detection module. PLoS ONE 8(1), e53827 (2013)
  17. Verplaetse, J., Vanneste, S., Braeckman, J.: You can judge a book by its cover: the sequel. A kernel of truth in predictive cheating detection. Evol. Hum. Behav. 28(4), 260–271 (2007)
  18. Wiese, E., Metta, G., Wykowska, A.: Robots as intentional agents: using neuroscientific methods to make robots appear more social. Front. Psychol. 8, 1663 (2017)
  19. Zhao, X.: Rethinking anthropomorphism: the antecedents, unexpected consequences, and potential remedy for perceiving machines as human-like. In: Symposium submitted to Proceedings of the Association for Consumer Research (in press)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Denver, Denver, USA
  2. United States Air Force Academy, Colorado Springs, USA
