Beyond Believability: Quantifying the Differences Between Real and Virtual Humans

  • Celso M. de Melo
  • Jonathan Gratch
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9238)


“Believable” agents are supposed to “suspend the audience’s disbelief” and provide the “illusion of life”. Beyond such high-level definitions, however, which are prone to subjective interpretation, researchers have little to guide them in systematically creating believable agents or assessing whether their agents are believable. In this paper we propose a more pragmatic and useful benchmark than believability for designing virtual agents: in a specific social situation, people should act with the virtual agent in the same manner as they would with a real human. We propose that perceptions of mind in virtual agents are critical for achieving this benchmark, especially perceptions of agency – the ability to act and plan – and experience – the ability to sense and feel emotion. We review computational systems that fail, pass, and even surpass this benchmark, and show how a theoretical framework based on perceptions of mind can shed light on these systems. We also discuss several important cases where it is better if virtual humans do not pass the benchmark, and consider implications for the design of virtual agents that are as natural and efficient to interact with as real humans.


Keywords: Believability · Mind perception · Emotion · Virtual vs. real humans



This research was supported in part by grants NSF IIS-1211064, SES-0836004, and AFOSR FA9550-09-1-0507. The content does not necessarily reflect the position or the policy of any Government, and no official endorsement should be inferred.


References

  1. Bates, J.: The role of emotion in believable agents. Commun. ACM 37, 122–125 (1994)
  2. Mateas, M.: An Oz-centric review of interactive drama and believable agents. In: Veloso, M.M., Wooldridge, M.J. (eds.) Artificial Intelligence Today. LNCS (LNAI), vol. 1600, pp. 297–328. Springer, Heidelberg (1999)
  3. Riedl, M.O., Stern, A.: Believable agents and intelligent story adaptation for interactive storytelling. In: Göbel, S., Malkewitz, R., Iurgel, I. (eds.) TIDSE 2006. LNCS, vol. 4326, pp. 1–12. Springer, Heidelberg (2006)
  4. Lester, J., Stone, B.: Increasing believability in animated pedagogical agents. In: Proceedings of the 1st International Conference on Autonomous Agents (AGENTS), pp. 16–21. ACM, New York (1997)
  5. Rose, R., Scheutz, M., Schermerhorn, P.: Towards a conceptual and methodological framework for determining robot believability. Interact. Stud. 11, 314–335 (2010)
  6. Riedl, M.O., Young, R.M.: An objective character believability evaluation procedure for multi-agent story generation systems. In: Panayiotopoulos, T., Gratch, J., Aylett, R.S., Ballin, D., Olivier, P., Rist, T. (eds.) IVA 2005. LNCS (LNAI), vol. 3661, pp. 278–291. Springer, Heidelberg (2005)
  7. Reeves, B., Nass, C.: The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, New York (1996)
  8. Nass, C., Moon, Y.: Machines and mindlessness: social responses to computers. J. Soc. Issues 56, 81–103 (2000)
  9. Sundar, S., Nass, C.: Source orientation in human-computer interaction: programmer, networker, or independent social actor? Commun. Res. 27, 683–703 (2000)
  10. Nass, C., Moon, Y., Carney, P.: Are people polite to computers? Responses to computer-based interviewing systems. J. Appl. Soc. Psychol. 29, 1093–1109 (1999)
  11. Nass, C., Fogg, B., Moon, Y.: Can computers be teammates? Int. J. Hum. Comput. Stud. 45, 669–678 (1996)
  12. Nass, C., Isbister, K., Lee, E.-J.: Truth is beauty: researching conversational agents. In: Cassell, J., Sullivan, J., Prevost, S., Churchill, E. (eds.) Embodied Conversational Agents, pp. 374–402. MIT Press, Cambridge (2000)
  13. Gajadhar, B.J., de Kort, Y.A.W., IJsselsteijn, W.A.: Shared fun is doubled fun: player enjoyment as a function of social setting. In: Markopoulos, P., de Ruyter, B., IJsselsteijn, W.A., Rowland, D. (eds.) Fun and Games 2008. LNCS, vol. 5294, pp. 106–117. Springer, Heidelberg (2008)
  14. Ravaja, N.: The psychophysiology of digital gaming: the effect of a non co-located opponent. Media Psychol. 12, 268–294 (2009)
  15. Hoyt, C., Blascovich, J., Swinth, K.: Social inhibition in immersive virtual environments. Presence 12, 183–195 (2003)
  16. Okita, S., Bailenson, J., Schwartz, D.: The mere belief of social interaction improves learning. In: Proceedings of the Annual Meeting of the Cognitive Science Society (2007)
  17. Weibel, D., Wissmath, B., Habegger, S., Steiner, Y., Groner, R.: Playing online games against computer- vs. human-controlled opponents: effects on presence, flow, and enjoyment. Comput. Hum. Behav. 24, 2274–2291 (2008)
  18. Kätsyri, J., Hari, R., Ravaja, N., Nummenmaa, L.: The opponent matters: elevated fMRI reward responses to winning against a human versus a computer opponent during interactive video game playing. Cereb. Cortex 23, 2829–2839 (2012)
  19. Lim, S., Reeves, B.: Computer agents versus avatars: responses to interactive game characters controlled by a computer or other player. Int. J. Hum. Comput. Stud. 68, 57–68 (2010)
  20. Blascovich, J., Loomis, J., Beall, A., Swinth, K., Hoyt, C., Bailenson, J.: Immersive virtual environment technology as a methodological tool for social psychology. Psychol. Inq. 13, 103–124 (2002)
  21. Blascovich, J., McCall, C.: Social influence in virtual environments. In: Dill, K. (ed.) The Oxford Handbook of Media Psychology, pp. 305–315. Oxford University Press, New York (2013)
  22. Epley, N., Waytz, A., Cacioppo, J.: On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114, 864–886 (2007)
  23. Epley, N., Waytz, A.: Mind perception. In: Fiske, S., Gilbert, D., Lindzey, G. (eds.) The Handbook of Social Psychology, 5th edn., pp. 498–541. Wiley, New York (2010)
  24. Waytz, A., Gray, K., Epley, N., Wegner, D.: Causes and consequences of mind perception. Trends Cogn. Sci. 14, 383–388 (2010)
  25. Haslam, N.: Dehumanization: an integrative review. Pers. Soc. Psychol. Rev. 10, 252–264 (2006)
  26. Gray, H., Gray, K., Wegner, D.: Dimensions of mind perception. Science 315, 619 (2007)
  27. Loughnan, S., Haslam, N.: Animals and androids: implicit associations between social categories and nonhumans. Psychol. Sci. 18, 116–121 (2007)
  28. Rilling, J., Sanfey, A.: The neuroscience of social decision-making. Ann. Rev. Psychol. 62, 23–48 (2011)
  29. Gallagher, H., Anthony, J., Roepstorff, A., Frith, C.: Imaging the intentional stance in a competitive game. NeuroImage 16, 814–821 (2002)
  30. McCabe, K., Houser, D., Ryan, L., Smith, V., Trouard, T.: A functional imaging study of cooperation in two-person reciprocal exchange. Proc. Nat. Acad. Sci. 98, 11832–11835 (2001)
  31. Riedl, R., Mohr, P., Kenning, P., Davis, F., Heekeren, H.: Trusting humans and avatars: behavioral and neural evidence. In: Proceedings of the 32nd International Conference on Information Systems (2011)
  32. Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G., Kilts, C.: A neural basis for social cooperation. Neuron 35, 395–405 (2002)
  33. Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T.: Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3, 1–11 (2008)
  34. Kircher, T., Blumel, I., Marjoram, D., Lataster, T., Krabbendam, L., Weber, J., et al.: Online mentalising investigated with functional MRI. Neurosci. Lett. 454, 176–181 (2009)
  35. Sanfey, A., Rilling, J., Aronson, J., Nystrom, L., Cohen, J.: The neural basis of economic decision-making in the ultimatum game. Science 300, 1755–1758 (2003)
  36. van’t Wout, M., Kahn, R., Sanfey, A., Aleman, A.: Affective state and decision-making in the ultimatum game. Exp. Brain Res. 169, 564–568 (2006)
  37. Kahn, P., Kanda, T., Ishiguro, H., Freier, N., Severson, R., Gill, B., et al.: “Robovie, you’ll have to go into the closet now”: children’s social and moral relationships with a humanoid robot. Dev. Psychol. 48, 303–314 (2012)
  38. de Melo, C., Carnevale, P., Gratch, J.: Bridging the gap between human and non-human decision makers. Presented at the Annual Meeting of the International Association for Conflict Management (2014)
  39. Güth, W., Schmittberger, R., Schwarze, B.: An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 3, 367–388 (1982)
  40. de Melo, C., Carnevale, P., Gratch, J.: Social categorization and cooperation between humans and computers. In: Proceedings of the Annual Meeting of the Cognitive Science Society (2014)
  41. Crisp, R., Hewstone, M.: Multiple social categorization. Adv. Exp. Soc. Psychol. 39, 163–254 (2007)
  42. Lucas, G., Gratch, J., King, A., Morency, L.-P.: It’s only a computer: virtual humans increase willingness to disclose. Comput. Hum. Behav. 37, 94–100 (2014)
  43. Malle, B., Scheutz, M., Arnold, T., Voiklis, J., Cusimano, C.: Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In: Proceedings of Human-Robot Interaction (2015)
  44. Yee, N., Bailenson, J., Rickertsen, K.: A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In: Proceedings of CHI (2007)
  45. Bringsjord, S.: Red-pill robots only, please. IEEE Trans. Affect. Comput. 3, 394–397 (2012)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. USC Marshall School of Business, Los Angeles, USA
  2. Institute for Creative Technologies, University of Southern California, Playa Vista, Los Angeles, USA
