Methods for Developing Trust Models for Intelligent Systems

A chapter in Robust Intelligence and Trust in Autonomous Systems
Abstract

Our research goals are to understand and model the factors that affect trust in intelligent systems across a variety of application domains. In this chapter, we present two methods that can be used to build models of trust for such systems. The first method is the use of surveys, in which large numbers of people are asked to identify and rank factors that would influence their trust of a particular intelligent system. Results from multiple surveys, each exploring a different application domain, can be used to build a core model of trust and to identify the domain-specific factors needed to modify the core model to improve its accuracy and usefulness. The second method involves conducting experiments in which human subjects use the intelligent system; these studies can control a variety of conditions to explore how different factors influence trust. Based upon the results of these human subjects experiments, a trust model can be built. These trust models can be used to create design guidelines, to predict initial trust levels before a system is first used, and to measure the evolution of trust over the course of a system's use. With an increased understanding of how to model trust, we can build systems that will be more accepted and used appropriately by their target populations.
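
The chapter itself does not include code; as a rough illustration of the survey-based approach described above, the sketch below shows one way ranked survey factors might be aggregated into a weighted trust score, with a core model adjusted by domain-specific factors. All factor names, weights, and the linear combination are assumptions for illustration only, not the model developed in the chapter.

```python
# Illustrative sketch only: the factor names, weights, and linear form are
# assumptions for demonstration, not the trust model presented in this chapter.
from dataclasses import dataclass


@dataclass
class TrustFactor:
    name: str
    weight: float  # e.g., derived from normalized mean survey rank


# Hypothetical "core" factors assumed to recur across application domains.
CORE_FACTORS = [
    TrustFactor("reliability", 0.35),
    TrustFactor("predictability", 0.25),
    TrustFactor("transparency_of_feedback", 0.20),
    TrustFactor("ease_of_override", 0.20),
]

# Hypothetical domain-specific factors (e.g., for an autonomous driving domain).
DOMAIN_FACTORS = [
    TrustFactor("collision_avoidance_record", 0.50),
    TrustFactor("compliance_with_traffic_law", 0.50),
]


def trust_score(core_ratings, domain_ratings, domain_blend=0.3):
    """Combine per-factor ratings (each in [0, 1]) into a single trust estimate.

    core_ratings / domain_ratings map factor name -> rating in [0, 1].
    domain_blend controls how strongly domain-specific factors modify the
    core score, mirroring the idea of adapting a core model with
    domain-specific factors.
    """
    core = sum(f.weight * core_ratings.get(f.name, 0.0) for f in CORE_FACTORS)
    domain = sum(f.weight * domain_ratings.get(f.name, 0.0) for f in DOMAIN_FACTORS)
    return (1 - domain_blend) * core + domain_blend * domain


if __name__ == "__main__":
    # Example: predict an initial trust level from hypothetical ratings.
    score = trust_score(
        {"reliability": 0.9, "predictability": 0.7,
         "transparency_of_feedback": 0.6, "ease_of_override": 0.8},
        {"collision_avoidance_record": 0.95, "compliance_with_traffic_law": 0.9},
    )
    print(f"Estimated initial trust: {score:.2f}")
```

In the same spirit, a model fitted from human subjects experiments could replace the hand-set weights with coefficients estimated from observed behavior, so the same scoring interface could track how trust evolves during use.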

Acknowledgements

This research has been supported in part by the National Science Foundation (IIS-0905228 and IIS-0905148) at the University of Massachusetts Lowell and Carnegie Mellon University, respectively, and by The MITRE Corporation Innovation Program (Project 51MSR661-CA; Approved for Public Release; Distribution Unlimited; 15-1753).

Munjal Desai conducted the research described in this chapter while a doctoral student at the University of Massachusetts Lowell. Michelle Carlson of The MITRE Corporation assisted with the design and analysis of the survey-based research. Hyangshim Kwak and Kenneth Voet from the United States Military Academy assisted with the survey-based work during their internships at The MITRE Corporation. Many people in the Robotics Laboratory at the University of Massachusetts Lowell have assisted with the robot testing over several years, including Jordan Allspaw, Daniel Brooks, Sean McSheehy, Mikhail Medvedev, and Katherine Tsui. At Carnegie Mellon University, robot testing was conducted with assistance from Christian Bruggeman, Sofia Gadea-Omelchenko, Poornima Kaniarasu, and Marynel Vázquez.

All product names, trademarks, and registered trademarks are the property of their respective holders.

Author information

Corresponding author

Correspondence to Holly A. Yanco.

Copyright information

© 2016 Springer Science+Business Media (outside the USA)

About this chapter

Cite this chapter

Yanco, H.A., Desai, M., Drury, J.L., Steinfeld, A. (2016). Methods for Developing Trust Models for Intelligent Systems. In: Mittu, R., Sofge, D., Wagner, A., Lawless, W. (eds) Robust Intelligence and Trust in Autonomous Systems. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7668-0_11

  • DOI: https://doi.org/10.1007/978-1-4899-7668-0_11

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4899-7666-6

  • Online ISBN: 978-1-4899-7668-0

  • eBook Packages: Computer Science, Computer Science (R0)
