Abstract
Embodied Virtual Agents (EVAs) are used today as interfaces for social robots, educational tutors, game counterparts, and medical assistants, as well as companions for the elderly and for individuals with psychological or behavioral conditions. Forming a reliable and trustworthy interaction is critical to the success and acceptability of this new form of interface. In this paper, we report on a study investigating how trust is influenced by the cooperativeness of an EVA as well as by an individual's prior experience with other agents. Participants answered two sets of multiple-choice questions, working with a different agent in each set. Two types of agent behavior were possible: cooperative and uncooperative. In addition to participants achieving significantly higher performance with, and reporting higher trust in, the cooperative agent, we found that participants' trust in the cooperative agent was significantly higher if they interacted with an uncooperative agent in one of the sets, compared to working with cooperative agents in both sets. Furthermore, we found that participants may still choose the agent's suggested answer (which can be incorrect) over their own, even when they are fairly certain that their own answer is correct. The results suggest that trust in an EVA is relative: it depends on the user's history of interaction with different agents in addition to the current agent's behavior. These findings provide insight into important considerations for creating trustworthy EVAs.
Availability of data and material
Available upon request.
Notes
A video showing the interface and all six facial expressions on both agents is included as supplemental material.
Acknowledgements
We would like to thank our undergraduate co-op students Juan Garcia Lopez and Weidi Tang, who helped us develop the interface. We also thank Karina Glik for her valuable assistance in revising the text of this manuscript.
Funding
None.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Code Availability
Available upon request.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary material 3 (MP4, 35,302 KB)
Cite this article
Moradinezhad, R., Solovey, E.T. Investigating Trust in Interaction with Inconsistent Embodied Virtual Agents. Int J of Soc Robotics 13, 2103–2118 (2021). https://doi.org/10.1007/s12369-021-00747-z