
Investigating Trust in Interaction with Inconsistent Embodied Virtual Agents

International Journal of Social Robotics

Abstract

Embodied Virtual Agents (EVAs) are used today as interfaces for social robots, educational tutors, game counterparts, and medical assistants, as well as companions for the elderly and individuals with psychological or behavioral conditions. Forming a reliable and trustworthy interaction is critical to the success and acceptability of this new form of interaction. In this paper, we report on a study investigating how trust is influenced by the cooperativeness of an EVA as well as an individual's prior experience with other agents. Participants answered two sets of multiple-choice questions, working with a different agent in each set. Two types of agent behavior were possible: cooperative and uncooperative. In addition to participants achieving significantly higher performance with and reporting higher trust in the cooperative agent, we found that participants' trust in the cooperative agent was significantly higher if they interacted with an uncooperative agent in one of the sets, compared to working with cooperative agents in both sets. Furthermore, we found that participants may still choose the agent's suggested answer (which can be incorrect) over their own, even if they are fairly certain their own answer is correct. The results suggest that trust in an EVA is relative and depends on the user's history of interaction with different agents in addition to the current agent's behavior. The findings provide insight into important considerations for creating trustworthy EVAs.


Availability of data and material

Available upon request.


Footnote 1: A video showing the interface and all six facial expressions on both agents is included as supplemental material.




Acknowledgements

We would like to thank our undergraduate co-op students Juan Garcia Lopez and Weidi Tang, who helped us develop the interface. We also thank Karina Glik for her valuable assistance in revising the text of this manuscript.



Author information

Authors and Affiliations


Corresponding author

Correspondence to Reza Moradinezhad.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Code Availability

Available upon request.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 534 KB)

Supplementary material 2 (pdf 172 KB)

Supplementary material 3 (mp4 35302 KB)


About this article


Cite this article

Moradinezhad, R., Solovey, E.T. Investigating Trust in Interaction with Inconsistent Embodied Virtual Agents. Int J of Soc Robotics 13, 2103–2118 (2021).
