
Whoops! Something Went Wrong: Errors, Trust, and Trust Repair Strategies in Human Agent Teaming

  • Conference paper
  • First Online:
Artificial Intelligence in HCI (HCII 2021)

Abstract

Human interaction with computerized systems is shifting from using computers as tools to collaborating with them as teammates through autonomous capabilities. Continued technological advances will inevitably lead to the integration of autonomous systems and will consequently increase the need for effective human-agent teaming (HAT). A key prerequisite is that human operators come to perceive autonomous agents as equal team members. To sustain this trust, HAT missions must apply appropriate trust repair strategies after a team member commits a trust violation. Identifying the correct trust repair strategy is critical to advancing HAT and to preventing degraded team performance or potential misuse. Drawing on the current literature, this paper addresses the key components of effective trust repair and the variables that can further improve future HAT operations. The factors affecting trust in HAT, trust repair strategies, and areas in need of future research are presented.



Author information

Correspondence to Meredith Carroll.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Rebensky, S. et al. (2021). Whoops! Something Went Wrong: Errors, Trust, and Trust Repair Strategies in Human Agent Teaming. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2021. Lecture Notes in Computer Science, vol 12797. Springer, Cham. https://doi.org/10.1007/978-3-030-77772-2_7


  • DOI: https://doi.org/10.1007/978-3-030-77772-2_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77771-5

  • Online ISBN: 978-3-030-77772-2

  • eBook Packages: Computer Science (R0)
