Methods for Developing Trust Models for Intelligent Systems

  • Holly A. Yanco
  • Munjal Desai
  • Jill L. Drury
  • Aaron Steinfeld


Our research goals are to understand and model the factors that affect trust in intelligent systems across a variety of application domains. In this chapter, we present two methods that can be used to build models of trust for such systems. The first method is the use of surveys, in which large numbers of people are asked to identify and rank factors that would influence their trust of a particular intelligent system. Results from multiple surveys, each exploring a different application domain, can be used to build a core model of trust and to identify the domain-specific factors needed to adapt the core model, improving its accuracy and usefulness. The second method involves conducting experiments in which human subjects use the intelligent system; by controlling selected variables across studies, the influence of different factors on trust can be isolated. Based upon the results of these human subjects experiments, a trust model can be built. Such trust models can be used to create design guidelines, to predict initial trust levels before a system is first used, and to measure the evolution of trust over the course of a system's use. With an increased understanding of how to model trust, we can build systems that will be better accepted and used appropriately by their target populations.
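One way the survey-based approach could be operationalized is as a weighted-factor score: survey rankings yield relative weights for a core set of trust factors, and domain-specific surveys add or override weights for a particular application. The sketch below is purely illustrative; the `trust_score` function, the factor names, and all weights are our own hypothetical example, not the model developed in this chapter.

```python
def trust_score(core_weights, domain_weights, ratings):
    """Weighted average of per-factor ratings (each in [0, 1]).

    core_weights   -- weights from the cross-domain core model
    domain_weights -- domain-specific weights that extend/override the core
    ratings        -- a user's rating of each factor for a given system
    """
    # Domain-specific weights take precedence over core weights.
    weights = {**core_weights, **domain_weights}
    rated = [f for f in ratings if f in weights]
    total = sum(weights[f] for f in rated)
    if total == 0:
        return 0.0
    return sum(weights[f] * ratings[f] for f in rated) / total

# Hypothetical core model from cross-domain surveys...
core = {"reliability": 0.5, "transparency": 0.3, "ease_of_use": 0.2}
# ...extended with a factor a domain-specific survey might surface.
domain = {"safety": 0.6}

score = trust_score(core, domain,
                    {"reliability": 0.9, "safety": 0.4, "transparency": 0.7})
```

A model in this form makes the two survey stages explicit: the core dictionary is reused across domains, while each new application only contributes the `domain` overrides.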


Keywords (machine-generated): Automated System, Situation Awareness, Medical Domain, Autonomy Mode, Operator Trust



This research has been supported in part by the National Science Foundation (IIS-0905228 and IIS-0905148) at the University of Massachusetts Lowell and Carnegie Mellon University, respectively, and by The MITRE Corporation Innovation Program (Project 51MSR661-CA; Approved for Public Release; Distribution Unlimited; 15-1753).

Munjal Desai conducted the research described in this chapter while a doctoral student at the University of Massachusetts Lowell. Michelle Carlson of The MITRE Corporation assisted with the design and analysis of the survey-based research. Hyangshim Kwak and Kenneth Voet from the United States Military Academy assisted with the survey-based work during their internships at The MITRE Corporation. Many people in the Robotics Laboratory at the University of Massachusetts Lowell have assisted with the robot testing over several years, including Jordan Allspaw, Daniel Brooks, Sean McSheehy, Mikhail Medvedev, and Katherine Tsui. At Carnegie Mellon University, robot testing was conducted with assistance from Christian Bruggeman, Sofia Gadea-Omelchenko, Poornima Kaniarasu, and Marynel Vázquez.

All product names, trademarks, and registered trademarks are the property of their respective holders.



Copyright information

© Springer Science+Business Media (outside the USA) 2016

Authors and Affiliations

  • Holly A. Yanco (1, 2), email author
  • Munjal Desai (3)
  • Jill L. Drury (2)
  • Aaron Steinfeld (4)

  1. Computer Science Department, University of Massachusetts Lowell, Lowell, USA
  2. The MITRE Corporation, Bedford, USA
  3. Google Inc., Mountain View, USA
  4. Carnegie Mellon University, Pittsburgh, USA
