IMPACTS: a trust model for human-autonomy teaming

  • Research Article
  • Published in: Human-Intelligent Systems Integration

Abstract

A trust model, IMPACTS (intention, measurability, performance, adaptivity, communication, transparency, and security), has been conceptualized to build human trust in autonomous systems. A system must exhibit these seven critical characteristics to gain and maintain its human partner’s trust, enabling the pair to work as an effective, collaborative team in pursuit of common goals. The IMPACTS model guided the design of an intelligent adaptive decision aid for dynamic target engagement processes in a human-autonomy interaction context. Positive feedback from subject matter experts who participated in a large-scale exercise controlling multiple unmanned assets indicated the decision aid’s effectiveness and demonstrated the IMPACTS model’s utility as a design principle for enabling trust within a human-autonomy team.
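
For illustration only, since the abstract gives no numeric formulation, the sketch below records the seven IMPACTS characteristics as a simple assessment and aggregates them with a minimum. The class name, scores, and aggregation rule are all hypothetical; the minimum is meant only to mirror the claim that a system must exhibit all seven characteristics, so a deficit in any one of them bounds overall trust.

    from dataclasses import dataclass, fields

    @dataclass(frozen=True)
    class ImpactsAssessment:
        # Each field scores one IMPACTS characteristic on a 0-1 scale
        # (a hypothetical scale; the paper does not define one).
        intention: float
        measurability: float
        performance: float
        adaptivity: float
        communication: float
        transparency: float
        security: float

        def overall_trust(self) -> float:
            # Aggregate by the minimum: the weakest characteristic
            # bounds the trust the system can earn (an assumption,
            # not the authors' formulation).
            return min(getattr(self, f.name) for f in fields(self))

    # A system strong on performance but opaque to its operator is
    # limited by its weakest dimension.
    aid = ImpactsAssessment(0.9, 0.8, 0.95, 0.85, 0.9, 0.4, 0.9)
    print(f"overall trust: {aid.overall_trust():.2f}")  # -> 0.40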


Acknowledgments

The authors would like to thank Emily Herbert and Jayshal Sood for their editorial support of this paper. The Royal Canadian Air Force’s support of the APWE and AW2018 evaluation trials is also appreciated.

Author information

Corresponding author

Correspondence to Ming Hou.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Hou, M., Ho, G. & Dunwoody, D. IMPACTS: a trust model for human-autonomy teaming. Hum.-Intell. Syst. Integr. 3, 79–97 (2021). https://doi.org/10.1007/s42454-020-00023-x
