Chapter

Robust Intelligence and Trust in Autonomous Systems

pp 219-254

Methods for Developing Trust Models for Intelligent Systems

  • Holly A. Yanco, Computer Science Department, University of Massachusetts Lowell; The MITRE Corporation
  • Munjal Desai, Google Inc.
  • Jill L. Drury, The MITRE Corporation
  • Aaron Steinfeld, Carnegie Mellon University


Abstract

Our research goals are to understand and model the factors that affect trust in intelligent systems across a variety of application domains. In this chapter, we present two methods for building models of trust for such systems. The first method uses surveys, in which large numbers of people are asked to identify and rank the factors that would influence their trust in a particular intelligent system. Results from multiple surveys, each exploring a different application domain, can be used to build a core model of trust and to identify the domain-specific factors needed to adapt the core model, improving its accuracy and usefulness. The second method involves human subjects experiments in which participants use the intelligent system; by controlling different variables across study conditions, the influence of individual factors on trust can be isolated. A trust model can then be built from the results of these experiments. Such trust models can be used to create design guidelines, to predict initial trust levels before a system is used, and to measure the evolution of trust over the course of a system’s use. With an increased understanding of how to model trust, we can build systems that will be more accepted and used appropriately by their target populations.