The Role of Trust in Human-Robot Interaction

  • Michael Lewis
  • Katia Sycara
  • Phillip Walker
Open Access
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 117)


As robots become increasingly common in a wide variety of domains—from military and scientific applications to entertainment and home use—there is an increasing need to define and assess the trust humans have when interacting with robots. In human interaction with robots and automation, previous work has discovered that humans often have a tendency to either overuse automation, especially in cases of high workload, or underuse automation, both of which can make negative outcomes more likely. Furthermore, this tendency is not limited to naive users, but extends to experienced ones as well. Robotics brings a new dimension to previous work on trust in automation, as robots are envisioned by many to work as teammates with their operators in increasingly complex tasks. In this chapter, our goal is to highlight previous work on trust in automation and human-robot interaction and draw conclusions and recommendations based on the existing literature. We believe that, while significant progress has been made in recent years, especially in quantifying and modeling trust, there are still several areas where more investigation is needed.

8.1 Introduction

Robots and other complex autonomous systems offer potential benefits through assisting humans in accomplishing their tasks. These beneficial effects, however, may not be realized due to maladaptive forms of interaction. While robots are only now being fielded in appreciable numbers, a substantial body of experience and research already exists characterizing human interactions with more conventional forms of automation in aviation and process industries.

In human interaction with automation, it has been observed that the human may fail to use the system when it would be advantageous to do so. This has been called disuse (underutilization or under-reliance) of the automation [97]. People also have been observed to fail to monitor automation properly (e.g. turning off alarms) when automation is in use, or to accept the automation’s recommendations and actions when inappropriate [71, 97]. This has been called misuse, complacency, or over-reliance. Disuse can decrease automation benefits and lead to accidents if, for instance, safety systems and alarms are not consulted when needed. Another maladaptive attitude is automation bias [33, 55, 77, 88, 112], a user tendency to ascribe greater power and authority to automated decision aids than to other sources of advice (e.g. humans). When the decision aid’s recommendations are incorrect, automation bias may have dire consequences [2, 78, 87, 89] (e.g. errors of omission, where the user does not respond to a critical situation, or errors of commission, where the user does not analyze all available information but follows the advice of the automation).

Both naïve and expert users show these tendencies. In [128], it was found that skilled subject matter experts had misplaced trust in the accuracy of diagnostic expert systems (see also [127]). Additionally, the Aviation Safety Reporting System contains many reports from pilots that link their failure to monitor to excessive trust in automated systems such as autopilots or FMS [90, 119]. On the other hand, when corporate policy or federal regulations mandate the use of automation that is not trusted, operators may “creatively disable” the device [113]; in other words, they disuse the automation.

Studies have shown [64, 92] that trust towards automation affects reliance (i.e. people tend to rely on automation they trust and not use automation they do not trust). For example, trust has frequently been cited [56, 93] as a contributor to human decisions about monitoring and using automation. Indeed, within the literature on trust in automation, complacency is conceptualized interchangeably as the overuse of automation, the failure to monitor automation, and lack of vigilance [6, 67, 96]. For optimal performance of a human-automation system, human trust in automation should be well-calibrated. Both disuse and misuse of the automation have resulted from improper calibration of trust, which has also led to accidents [51, 97].

In [58], trust is conceived to be an “attitude that an agent (automation or another person) will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.” A majority of research on trust in automation has focused on the relation between automation reliability and operator usage, often without measuring the intervening variable, trust. The utility of introducing an intervening variable between automation performance and operator usage, however, lies in the ability to make more precise or accurate predictions with the intervening variable than without it. This requires that trust in automation be influenced by factors in addition to automation reliability/performance. The three-dimensional (Purpose, Process, and Performance) model proposed by Lee and See [58], for example, presumes that trust (and, indirectly, propensity to use) is influenced by a person’s knowledge of what the automation is supposed to do (purpose), how it functions (process), and its actual performance. While such models seem plausible, support for the contribution of factors other than performance has typically been limited to correlations between questionnaire responses and automation use. Despite multiple studies of trust in automation, the conceptualization of trust and how it can be reliably modeled and measured remains a challenging problem.

In contrast to automation, where system behavior has been pre-programmed and system performance is limited to the specific actions it has been designed to perform, autonomous systems/robots have been defined as having intelligence-based capabilities that allow them a degree of self-governance, which enables them to respond to situations that were not pre-programmed or anticipated in the design. Therefore, the role of trust in interactions between humans and robots is more complex and difficult to understand.

In this chapter, we present the conceptual underpinnings of trust in Sect. 8.2, and then discuss models of, and the factors that affect, trust in automation in Sects. 8.3 and 8.4, respectively. Next, we discuss instruments for measuring trust in Sect. 8.5, before moving on to trust in the context of human-robot interaction (HRI) in Sect. 8.6, covering both how humans influence robots and vice versa. We conclude in Sect. 8.7 with open questions and areas of future work.

8.2 Conceptualization of Trust

Trust has been studied in a variety of disciplines (including social psychology, human factors, and industrial organization) for understanding relationships between humans or between human and machine. The wide variety of contexts within which trust has been studied has led to various definitions and theories of trust, with trust defined variously as an attitude, an intention, or a behavior [72, 76, 86]. Both within the interpersonal trust literature and the human-automation trust literature, a widely accepted definition of trust is lacking [1]. However, it is generally agreed that trust is best conceptualized as a multidimensional psychological attitude involving beliefs and expectations about the trustee’s trustworthiness, derived from experience and interactions with the trustee in situations involving uncertainty and risk [47]. Trust has also been said to have both cognitive and affective features. In the interpersonal literature, trust is seen as involving affective processes, since trust development requires seeing others as personally motivated by care and concern to protect the trustor’s interests [65]. In the automation literature, cognitive (rather than affective) processes may play a dominant role in the determination of trustworthiness, i.e., the extent to which automation is expected to do the task that it was designed to do [91]. In the trust in automation literature, it has been argued that trust is best conceptualized as an attitude [58], and a relatively well accepted definition of trust is: an “attitude which includes the belief that the collaborator will perform as expected, and can, within the limits of the designer’s intentions, be relied on to achieve the design goals” [85].

8.3 Modeling Trust

The basis of trust can be considered as a set of attributional abstractions (trust dimensions) that range from the trustee’s competence to its intentions. Muir [91] combined the dimensions of trust from two works ([4] and [100]). Barber’s model [4] is in terms of the human expectations that form the basis of trust between human and machine. These expectations are persistence, technical competency, and fiduciary responsibility. Although the number of trust dimensions and the concepts within them vary in the subsequent literature [58], there seems to be a convergence on the three dimensions—Purpose, Process, and Performance [58]—mentioned earlier, along with correspondences of those to earlier concepts, such as the dimensions in [4], and those of Ability, Integrity, and Benevolence [76]. Ability is the trustee’s competence in performing expected actions, benevolence is the trustee’s intrinsic and positive intentions towards the trustor, and integrity is the trustee’s adherence to a set of principles that are acceptable to the trustor [76].

Both the trust in automation [92] and interpersonal relations [37, 53, 84, 107] literatures agree that trust relations are dynamic and vary over time. Three phases characterize trust over time: trust formation, where trustors choose to trust trustees and potentially increase their trust over time; trust dissolution, where trustors decide to lower their trust in trustees after a trust violation has occurred; and trust restoration, where trust stops decreasing after a violation and is restored (although potentially not to the same level as before the violation). Early in the relationship, trust in the system is based on the predictability of the system’s behavior. Work in the literature has shown shifts in trust in response to changes in properties and performance of the automation [56, 91]: when the automation was reliable, operator trust increased over time, and vice versa. Varying levels of trust were also positively correlated with varying levels of automation use; as trust decreased, for instance, manual control became more frequent. As the operator interacts with the system, he or she attributes dependability to the automation. Prolonged interaction with the automation leads the operator to make generalizations about the automation and broader attributions about the future behavior of the system (faith). There is some difference in the literature as to when exactly faith develops in the dynamic process of trust development. Whereas [100] argue that interpersonal trust progresses from predictability to dependability to faith, [92] suggest that for trust in automation, faith is a better predictor of trust early rather than late in the relationship.

Some previous work has explored trust with respect to an automation versus a human trustee [64]. The results indicate that (a) the dynamics of trust are similar, in that faults diminish trust towards automation and towards another human alike; (b) the sole predictor of reliance on automation was the difference between trust and self-confidence; and (c) in human-human experiments, participants were more likely to delegate a task to a human when that human was thought to have a low opinion of their own trustworthiness. In other words, when participants thought their own trustworthiness in the eyes of others was high, they were more likely to retain control over a task. However, trustworthiness played no role when the collaborative partner was an automated controller; only participants’ own confidence in their performance determined their decision to retain/obtain control. Other work on trust in humans versus trust in automation [61] explored the extent to which participants trusted identical advice given by an expert system under the belief that it was given by a human or a computer. The results of these studies were somewhat contradictory, however. In one study, participants were more confident in the advice of the human (though their agreement with the human’s advice did not differ from their agreement with the expert system’s advice), while in the second study, participants agreed more with the advice of the expert system but had less confidence in it. Similar contradictory results have appeared in HRI studies, where one work indicated that errors by a robot did not affect participants’ decisions of whether or not to follow the robot’s advice [111], yet did affect their subjective reports of the robot’s reliability and trustworthiness [104]. Study results in [71], however, indicated that reliance on a human aid was reduced in situations of higher risk.

8.4 Factors Affecting Trust

The factors that are likely to affect trust in automation have generally been categorized as those pertaining to the automation, the operator, and the environment. Most of the factors that have been empirically researched pertain to characteristics of the automation. Here we briefly present relevant work on the most important of these factors.

8.4.1 System Properties

The most important correlates of automation use have been system reliability and the effects of system faults. Reliability typically refers to automation that has some error rate—for example, misclassifying targets. Typically this rate is constant and data is analyzed using session means. Faults are typically more drastic, such as a controller failure that makes the whole system behave erratically. Faults are typically single events and are studied as time series.

System reliability: Prior literature has provided empirical evidence that there is a relationship between trust in automation and the automation’s reliability [85, 96, 97, 98, 102]. Research shows [86] that declining system reliability can lead to a systematic decline in trust and trust expectations, and, most crucially, these changes can be measured over time. There is also some evidence that only the most recent experiences with the automation affect trust judgments [51, 56].
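The recency effect noted above can be illustrated with a minimal, purely illustrative model (not drawn from [51, 56]): an exponentially weighted average in which the most recent experiences with the automation dominate the trust estimate. The function names and the smoothing parameter below are our own assumptions, sketched only to make the idea concrete.

```python
def update_trust(trust, success, alpha=0.3):
    """One experience with the automation; alpha weights recency.

    A higher alpha makes recent outcomes dominate the trust estimate,
    mimicking the finding that recent experiences matter most.
    """
    return (1 - alpha) * trust + alpha * (1.0 if success else 0.0)


def trust_after(outcomes, initial=0.5, alpha=0.3):
    """Fold a sequence of success/failure experiences into a trust score in [0, 1]."""
    trust = initial
    for success in outcomes:
        trust = update_trust(trust, success, alpha)
    return trust
```

Under this toy model, two interaction histories with the same number of failures yield different trust depending on where the failure falls: a failure followed by two successes leaves higher trust than two successes followed by a failure, which is the recency pattern the cited studies report.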

System faults: System faults are a form of system reliability, but are treated separately because they concern discrete system events and involve different experimental designs. Different aspects of faults influence the relation between trust and automation. Lee and Moray [56] showed that in the presence of continual system faults, trust in the automation reached its lowest point only after six trials, but trust did recover gradually even as faults continued. The magnitude of system faults has differential effects on trust: smaller faults had minimal effect, while large faults negatively affected trust, and trust was slower to recover from them. Another finding [92] was that faults of varying magnitude diminished trust more than large constant faults. Additionally, it was found that when faults occurred in a particular subsystem, the corresponding distrust spread to other functions controlled by the same subsystem. The distrust did not, however, spread to independent or similar subsystems.

System predictability: Although system faults affect trust in the automation, this happens when the human has little a priori knowledge about the faults. Research has shown that when people have prior knowledge of faults, these faults do not necessarily diminish trust in the system [64, 102]. A plausible explanation is that knowing that the automation may fail reduces the uncertainty and consequent risk associated with use of the automation. In other words, predictability may be as important as (or more important than) reliability.

System intelligibility and transparency: Systems that can explain their reasoning will be more likely to be trusted, since they would be more easily understood by their users [66, 117, 121, 122]. Such explanatory facility may also allow the operator to query the system in periods of low system operation in order to incrementally acquire and increase trust.

Level of automation: Another factor that may affect trust in the system is its level of automation (i.e. the level of functional allocation between the human and the system). It has been suggested [91, 93] that system understandability is an important factor for trust development. In their seminal work on the subject [116], Sheridan and Verplank propose a scale for assessing the level of automation in a system from 1 to 10, with 1 being no autonomy (the human does everything) and 10 being fully autonomous (the system acts entirely on its own). Since higher levels of automation are more complex, and thus potentially more opaque to the operator, higher levels of automation may engender less trust. Some limited empirical work suggests that different levels of automation may have different implications for trust [86]: work based on Level 3 automation [116] did not show the same results when conducted with the higher Level 7 automation.

8.4.2 Properties of the Operator

Propensity to trust: In the sociology literature [105], it has been suggested that people differ in their propensity to trust others, and it has been hypothesized that this is a stable personality trait. In the trust in automation literature, there is very limited empirical work on the propensity to trust. Some evidence provided in [97] suggests that an operator’s overall propensity to trust is distinct from trust towards a specific automated system. In other words, it may be the case that an operator has a high propensity to trust automation in general but, when faced with a specific automated system, their trust may be very low.

Self-confidence: Self-confidence is a factor of individual difference and one of the few operator characteristics that has been studied in the trust in automation literature. Work in [57] suggested that when trust was higher than self-confidence, automation rather than manual control would be used, and vice versa when trust was lower than self-confidence. However, later work [86], which was conducted with a higher level of automation than [57], did not obtain similar results. It was instead found that trust was influenced by properties of the system (e.g. real or apparent false diagnoses), while self-confidence was influenced by operator traits and experiences (e.g. whether they had been responsible for accidents). Furthermore, it was found that self-confidence was not affected by system reliability. This last finding was also suggested by the work of [64], which found that self-confidence was not lowered by shifts in automation reliability.

Individual Differences and Culture: It has been hypothesized, and supported by various studies, that individual differences [57, 74, 80, 119] and culture [50] affect the trust behavior of people. The interpersonal relations literature has identified many different personal characteristics of a trustor, such as self-esteem [105, 106], secure attachment [17], and motivational factors [54], that contribute to the different stages in the dynamics of trust. Besides individual characteristics, socio-cultural factors that contribute to differences in trust decisions in these different trust phases have also been identified [8, 10, 32, 37]. For example, combinations of socio-cultural factors that may result in quick trust formation (also called “swift trust” formation in temporary teams [83]) are time pressure [25] and high power distance with authority [16]. People in high power distance (PD) societies expect authority figures to be benign, competent, and of high integrity, and will thus engage in less vigilance and monitoring for possible violations by authority figures. To the extent, then, that people of high PD cultures perceive the automation as authoritative, they should be quick to form trust. On the other hand, people in high PD cultures should be slow to restore trust once violations have occurred [11]. Additionally, it has been shown [79], via replication of Hofstede’s [45] cultural dimensions for a very large-scale sample of pilots, that even in such a highly specialized and regulated profession, national culture still exerts a meaningful influence on attitude and behavior over and above the occupational context.

To date, only a handful of studies consider cultural factors and potential differences in the context of trust in automation, with [99, 125] and [22] being exceptions. As the use of automation becomes increasingly globalized, it is imperative that we gain an understanding of how trust in automation is conceptualized across cultures and how it influences operator reliance on and use of automation, and overall human-system performance.

8.4.3 Environmental Factors

In terms of environmental factors that influence trust in automation, risk seems most important. Research in trust in automation suggests that reliance on automation is modulated by the risk present in the decision to use the automation [101]. People are more averse to using the automation if negative consequences are more probable, and, once trust has been lowered, it takes people longer to re-engage the automation in high-risk versus low-risk situations [102]. However, knowing the failure behavior of the automation in advance may modify the perception of risk, in that people’s trust in the system does not decrease [101].

8.5 Instruments for Measuring Trust

While a large body of work on trust in automation and robots has developed over the past two decades, standardized measures have remained elusive, with many researchers continuing to rely on short, idiosyncratically worded questionnaires. Trust (in automation) refers to a cognitive state or attitude, yet it has most often been studied indirectly through its purported influence on behavior, often without any direct cognitive measure. The nature and complexity of the tasks and failures studied has varied greatly, ranging from simple automatic target recognition (ATR) classification [33], to erratic responses of a controller embedded within a complex automated system [57], to robots misreading QR codes [30]. The variety of reported effects (automation bias, complacency, reliance, compliance, etc.) mirrors these differences in tasks and scenarios. [27] and [28] have criticized the very construct of trust in automation on the basis of this diversity as an unfalsifiable “folk model” without clear empirical grounding. Although the work cited in the reply to these criticisms in [98], as well as the large body of work cited in the review by [96], has begun to examine the interrelations and commonalities of concepts involving trust in automation, empirical research is needed to integrate divergent manifestations of trust within a single task/test population so that common and comparable measures can be developed.

Most “measures” of trust in automation since the original study [92] have been created for individual studies based on face validity and have not in general benefited from the same rigor in development and validation that has characterized measures of interpersonal trust. “Trust in automation” has been primarily understood through its analogy to interpersonal trust and more sophisticated measures of trust in automation have largely depended on rationales and dimensions developed for interpersonal relations, such as ability, benevolence, and integrity.

Three measures of trust in automation—Empirically Derived (ED), Human-Computer Trust (HTC), and the SHAPE Automation Trust Index (SATI)—have benefited from systematic development and validation. The Empirically Derived 12-item scale developed by [46] was systematically developed, subjected to a validation study [120], and used in other studies [75]. The scale was developed in three phases, beginning with a word elicitation task; a 12-factor structure was extracted and used to develop a 12-item scale based on examination of clusters of words. The twelve items roughly correspond to the classic three dimensions: benevolence (purpose), integrity (process), and ability (performance).

The Human-Computer Trust (HTC) instrument developed in [72] demonstrated construct validity and high reliability within their validation sample and has subsequently been used to assess automation in air traffic control (ATC) simulations, most recently in [68]. Subjects initially identified constructs that they believed would affect their level of trust in a decision aid. Following refinement and modification of the constructs and potential items, the instrument was reduced to five constructs (reliability, technical competence, understandability, faith, and personal attachment). A subsequent principal components analysis limited to five factors found most scale items related to these factors.

The SHAPE Automation Trust Index (SATI) [41], developed by the European Organization for the Safety of Air Navigation, is the most pragmatically oriented of the three measures. Preliminary measures of trust in ATC systems were constructed based on a literature review and a model of the task. This resulted in a seven-dimensional scale (reliability, accuracy, understanding, faith, liking, familiarity, and robustness). The measure was then refined in focus groups with air traffic controllers from different cultures rating two ATC simulations. Scale usability evaluations and construct validity judgments were also collected. The instrument/items have reported reliabilities in the high .80s, but its constructs have not been empirically validated.

All three scales have benefited from empirical study and systematic development yet each has its flaws. The ED instrument in [46], for instance, addresses trust in automation in the abstract without reference to an actual system and as a consequence appears to be more a measure of propensity to trust than trust in a specific system. A recent study [115] found scores on the ED instrument to be unaffected by reliability manipulations that produced significant changes in ratings of trust on other instruments. The HTC was developed from a model of trust and demonstrated agreement between items and target dimensions but stopped short of confirmatory factor analysis. Development of the SATI involved the most extensive pragmatic effort to adapt items so they made sense to users and captured aspects of what users believed contributed to trust. However, SATI development neglected psychometric tests of construct validity.

A recent effort [21, 23] has led to a general measure of trust in automation validated across large populations in three diverse cultures—the US, Taiwan, and Turkey—as representative of Dignity, Face, and Honor cultures [63]. The cross-cultural measure of trust is consistent with the three dimensions (performance, purpose, process) of [58, 81] and contains two 9-item scales, one measuring the propensity to trust as in [46] and the other measuring trust in a specific system. The second scale is designed to be administered repeatedly to measure the effects of manipulations expected to affect trust, while the propensity scale is administered once at the start of an experiment. The scales have been developed and validated for US, Taiwanese, and Turkish samples and are based on 773 responses (propensity scale) and 1673 responses (specific scale).

The Trust Perception Scale-HRI [114, 115] is a psychometrically developed 40-item instrument intended to measure human trust in robots. Items are based on data collected identifying robot features from pictures and their perceived functional characteristics. While development was guided by the triadic (human, robot, environment) model of trust inspired by the meta-analysis in [43], a factor analysis of the resulting scale found four components corresponding roughly to capability, behavior, task, and appearance. Capability and behavior correspond to two of the dimensions commonly found in interpersonal trust [81] and trust in automation [58], while appearance may have special significance for trust in robots. The instrument was validated in same-trait and multi-trait analyses, producing changes in rated trust associated with manipulation of robot reliability. The scale was developed based on 580 responses and 21 validation participants.

The HRI Trust Scale [131] was developed from items based on five dimensions (team configuration, team process, context, task, and system) identified by 11 subject matter experts (SMEs) as likely to affect trust. A 100 participant Mechanical Turk sample was used to select 37 items representing these dimensions. The HRI Trust Scale is incomplete as a sole measure of trust and is intended to be paired with Rotter’s [105] interpersonal trust inventory when administered. While Lee and See’s dimensions [58] other than “process” are missing from the HRI scale, they are represented in Rotter’s instrument.

Because trust in automation or robots is an attitude, self-report through psychometric instruments such as these provides the most direct measurement. Questionnaires, however, suffer from a number of weaknesses. Because they are intrusive, measurements cannot be conveniently taken during the course of a task but only after the task is completed. This may suffice for automation such as ATR, where targets are missed at a fixed rate and the experimenter is investigating the effect of that rate on trust [33], but it does not work for measuring moment-to-moment trust in a robot reading QR codes to get its directions [30].

8.6 Trust in Human Robot Interaction

Robots are envisioned to be able to process many complex inputs from the environment and be active participants in many aspects of life, including work environments, home assistance, battlefield and crisis response, and others. Therefore, robots are envisioned to transition from tool to teammate as humans transition from operator to teammate, in an interaction more akin to human-human teamwork. These envisioned transitions raise a number of general questions: How would human interaction with the robot be affected? How would performance of the human-robot team be affected? How would human performance or behavior be affected? Although there are numerous tasks, environments, and situations of human-robot collaboration, in order to best clarify the role of trust we distinguish two general types of interactions between humans and robots: performance-based interactions, where the focus is on the human influencing/controlling the robot so it can perform useful tasks for the human, and social-based interactions, where the focus is on how the robot’s behavior influences the human’s beliefs and behavior. In both cases, the human is the trustor and the robot the trustee. In particular, performance-based interactions involve a particular task with a clear performance goal. Examples of performance-based interactions are a human and robot collaborating in manufacturing assembly, or a UAV performing surveillance and recognition of victims in a search and rescue mission. Here measures of performance could be accuracy and time to complete the task. In social interactions, on the other hand, the performance goal is not as crisply defined. An example of such a task is the ability of a robot to influence a human to reveal private knowledge, or to influence a human to take medicine or do useful exercises.

8.6.1 Performance-Based Interaction: Humans Influencing Robots

A large body of HRI research investigating factors thought to affect behavior via trust, such as reliability, relies strictly on behavioral measures without reference to trust. Meyer’s [82] expected value (EV) theory of alarms provides one alternative by describing the human’s choice as one between compliance (responding to an alarm) and reliance (not responding in the absence of an alarm). The expected values of these decisions are determined by the utilities associated with an uncorrected fault, the cost of intervention, and the probabilities of misses (affecting reliance) and false alarms (affecting compliance). Research in [31], for example, investigated the effects of unmanned aerial vehicle (UAV) false alarms and misses on operator reliance (inferred from longer reaction times for misses) and compliance (inferred from shorter reaction times to alarms). While reliance/compliance effects were not found, higher false alarm rates correlated with poorer performance on a monitoring task, while misses correlated with poorer performance on a parallel inspection task. A similar study by [20] of unmanned ground vehicle (UGV) control found participants with higher perceived attentional control were more adversely affected by false alarms (under-compliance), while those with low perceived attentional control were more strongly affected by misses (over-reliance). Reliance and compliance can be measured in much the same way for homogeneous teams of robots, as illustrated by a follow-up study of teams of UGVs [19] with similar design and results. Another study [26] involved multiple UAVs, manipulating ATR reliability and administering a trust questionnaire, again finding that ratings of trust increased with reliability.
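The expected-value framing of compliance and reliance can be sketched as a simple decision rule. The sketch below is our own illustration under Bayesian assumptions; the function names, parameters, and numeric costs are hypothetical and not taken from [82].

```python
def posterior_fault(p_fault, hit_rate, fa_rate, alarm):
    """P(fault | alarm present or absent), by Bayes' rule.

    hit_rate: P(alarm | fault); fa_rate: P(alarm | no fault).
    """
    if alarm:
        num = hit_rate * p_fault
        den = num + fa_rate * (1 - p_fault)
    else:
        num = (1 - hit_rate) * p_fault
        den = num + (1 - fa_rate) * (1 - p_fault)
    return num / den


def should_respond(p_fault, hit_rate, fa_rate, alarm,
                   cost_fault=10.0, cost_intervene=1.0):
    """Respond iff the expected cost of an uncorrected fault
    exceeds the cost of intervening (illustrative utilities)."""
    p = posterior_fault(p_fault, hit_rate, fa_rate, alarm)
    return p * cost_fault > cost_intervene
```

In this framing, compliance corresponds to `should_respond(..., alarm=True)` returning true, and reliance to `should_respond(..., alarm=False)` returning false; raising the false alarm rate lowers the posterior given an alarm and thus erodes compliance, while a higher miss rate erodes reliance, matching the pattern described above.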

Transparency, common ground, or shared mental models involve a second construct ("process" [58] or "integrity" [76]) believed to affect trust. According to these models, the extent to which a human can understand the way in which an autonomous system works and predict its behavior will influence trust in the system. There is far less research on the effects of transparency, with most involving level-of-automation manipulations. An early study [60], in which all conditions received full information, found best performance for an intermediate level of automation that facilitated checks of accuracy (i.e., was transparent). Participants, however, made substantially greater use of a higher level of automation that provided an opaque recommendation. In this study, ratings of trust were affected by reliability but not transparency. More recent studies have equated transparency with additional information providing insight into robot behavior. Researchers in [9] compared conditions in which participants observed a simulated robot represented on a map by a status icon (level of transparency 1), overlaid with environmental information such as terrain (level 2), or with additional uncertainty and projection information (level 3). Note that these levels are distinct from Sheridan's Levels of Automation mentioned previously. What might appear as erratic behavior in level 1, for example, might be "explained" by the terrain being navigated in level 2. Participants' ratings of trust were higher for levels 2 and 3. A second study manipulated transparency by comparing minimal (a static image), contextual (a video clip), and constant (continuous video) information for a simulated robot teammate with which participants had intermittent interactions, but found no significant differences in trust. In [126], researchers took a different approach to transparency by having a simulated robot provide "explanations" of its actions.
The robot, guided by a POMDP model, can make different aspects of its decision making, such as beliefs (the probability of dangerous chemicals in a building) or capabilities (the ATR has 70% reliability), available to its human partner. Robot reliability affected both performance and trust. Explanations did not improve performance but did increase trust among those in the high reliability condition. As these studies suggest, reliability appears to have a large effect on trust, reliance/compliance, and performance, while transparency about function has a relatively minor one, primarily influencing trust. The third component of trust, the robot's "purpose" [58] or "benevolence" [76], has been attributed [69, 70, 95] to "transparency" as conveyed by appearance, discussed in Sect. 8.6.2. By this interpretation, matching the human expectations aroused by a robot's appearance to its purpose and capabilities can make interactions more transparent by providing a more accurate model to the human.
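The kind of transparency described in [126] can be illustrated with a minimal Bayesian sketch of a robot exposing the beliefs behind its decisions. The sensor model and numbers here are invented for illustration; they are not the POMDP used in that study.

```python
def update_belief(prior, sensor_reliability, reading_positive):
    """Bayes update of the robot's belief that chemicals are present,
    given a sensor whose reliability is P(reading is correct)."""
    p_pos = sensor_reliability * prior + (1 - sensor_reliability) * (1 - prior)
    if reading_positive:
        return sensor_reliability * prior / p_pos
    return (1 - sensor_reliability) * prior / (1 - p_pos)

def explain(prior, sensor_reliability, reading_positive):
    """Expose the quantities behind the decision to the human partner,
    in the spirit of the 'explanations' manipulation in [126]."""
    posterior = update_belief(prior, sensor_reliability, reading_positive)
    return (f"Sensor reliability is {sensor_reliability:.0%}; "
            f"my belief that chemicals are present moved from "
            f"{prior:.0%} to {posterior:.0%}.")

message = explain(0.5, 0.7, reading_positive=True)
```

The point of such an explanation is not better task performance (which [126] did not find) but giving the human a model accurate enough to calibrate trust.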

Studies discussed to this point have treated trust as a dependent variable to be measured at the end of a trial and have investigated whether or not it had been affected by characteristics of the robot or situation. If trust in a robot is modified through a process of interaction, however, it must vary continuously as evidence accumulates of the robot's trustworthiness or untrustworthiness. This was precisely the conception of trust investigated by Lee and Moray [56] in their seminal study, but it has been infrequently employed since. A recent example of such a study is reported in [29], where a series of experiments addressing temporal aspects of trust, involving levels of automation and robot reliability, was conducted using a robot navigation task with barriers. In that task, a robot navigates through a course of boxes with labels that the operator can read through the robot's camera and QR codes presumed readable by the robot. The labels contain directions such as "turn right" or "U turn". In automation modes, robots follow a predetermined course with "failures" appearing to be misread QR codes. Operators can choose either the automation mode or a manual mode in which they determine the direction the robot takes. An initial experiment [29] investigated the effects of reliability drops at different intervals across a trial, finding that the decline in trust, as measured by a post-trial survey, was greatest if the reliability decline occurred in the middle or final segments. In subsequent experiments, trust ratings were collected continuously by periodic button presses indicating an increase or decrease in trust. These studies [30, 49] confirmed the primacy-recency bias in episodes of unreliability and the contribution of transparency in the form of confidence feedback from the robot.

Work in [24] collected similar periodic measures of trust, using brief periodically presented questionnaires, from participants performing a multi-UAV supervision task to test the effects of priming on trust. These same data were used to fit a model similar to that formalized by [39], which uses decision field theory to address the decision to rely on the automation/robot's capabilities or to manually intervene, based on the balance between the operator's self-confidence and her trust in the automation/robot. The model contains parameters characterizing information conveyed to the operator, inertia in changing beliefs, noise, uncertainty, growth-decay rates for trust and self-confidence, and an inhibitory threshold for shifting between responses. By fitting these parameters to human subject data, the time course of trust (as defined by the model) can be inferred. An additional study of UAV control [38] has also demonstrated good fits for dynamic trust models, with matches within 2.3% for control over teams of UGVs. By predicting the effects of reliability and initial trust on system performance, such models might be used to select appropriate levels of automation or provide feedback to human operators. In another study involving assisted driving [123], the researchers used both objective (car position, velocity, acceleration, and lane marking scanners) and subjective (gaze detection and foot location) measures to train a mathematical model to recognize and diagnose over-reliance on the automation. The authors show that their models can be applied to other domains outside automation-assisted driving as well.
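A minimal sketch can convey how this class of model works. This is not the fitted model of [39] or [24], and all parameter values are illustrative: trust grows with automation successes and decays with failures, while a preference state integrates the gap between self-confidence and trust (with inertia and noise) and triggers a mode switch when it crosses an inhibitory threshold.

```python
import random

def simulate_reliance(outcomes, gain=0.1, decay=0.05, inertia=0.8,
                      noise=0.02, threshold=0.5, seed=0):
    """Decision-field-theory-style sketch of reliance on automation.
    `outcomes` is a sequence of booleans: True means the automation
    performed well on that step. Returns (trust, mode) per step."""
    rng = random.Random(seed)
    trust, self_confidence, preference = 0.5, 0.5, 0.0
    mode = "auto"
    history = []
    for ok in outcomes:
        # Bounded trust update: successes raise trust, failures erode it.
        trust += gain * (1 - trust) if ok else -decay * trust
        # Preference drifts toward manual control when self-confidence
        # exceeds trust, integrating with inertia plus a little noise.
        drift = self_confidence - trust
        preference = inertia * preference + drift + rng.gauss(0, noise)
        if preference > threshold:
            mode = "manual"          # inhibitory threshold crossed
        elif preference < -threshold:
            mode = "auto"
        history.append((round(trust, 3), mode))
    return history

# After a run of failures, declining trust eventually triggers a switch
# from automated to manual control.
hist = simulate_reliance([True] * 5 + [False] * 20)
```

Fitting the gain, decay, inertia, and threshold parameters to human-subject data is what lets such models infer the time course of trust from observed reliance behavior.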

Willingness to rely on the automation has been found in the automation literature to correlate with users' self-confidence in their ability to perform the task [57]. It has been found that if users are more confident in their own ability to perform the task, they will take control from the automation more frequently when they perceive that the automation is not performing well. However, as robots are envisioned to be deployed in increasingly risky situations, it may be the case that a user (e.g. a soldier) may elect to use a robot for bomb disposal irrespective of their confidence in performing the task. Another factor that considerably influences use of automation is user workload. It has been found in the literature that users exhibit over-reliance [7, 40] on the automation in high workload conditions.

Experiments in [104] show that people over-trusted a robot in fire emergency evacuation scenarios conducted with a real robot in a campus building, although the robot was shown to be defective in various ways (e.g. taking a circuitous route rather than the efficient route when guiding the participant to a waiting room before the emergency started). The experimenters hypothesized that participants who had experienced an interaction with a defective robot would decrease their trust (as opposed to those who interacted with a non-defective robot), and also that participants' self-reported trust would correlate with their behavior (i.e., their decision to follow the robot or not). The results showed that, in general, participants did not rate the inefficient robot as a bad guide, and even those who rated it poorly still followed it during the emergency. In other words, trust ratings and trust behavior were not correlated. Interestingly enough, participants in a previous study with similar scenarios of emergency evacuation in simulation by the same researchers [103] behaved differently: participants rated less reliable simulated robots as less trustworthy and were less prone to follow them in the evacuation. The results from the simulation studies of emergency evacuation, namely a positive correlation between participants' trust assessments and behavior, are similar to results in low-risk studies [30]. These contradictory results point strongly to the need for more research to refine the robot, operator, and task-context variables and relations that would lead to correct trust calibration, and for a better understanding of the relationship between trust and performance in human-robot interaction.

One important issue is how an agent forms trust in agents it has not encountered before. One approach from the literature in multiagent systems (MAS) investigates how trust forms in ad hoc groups, where agents that have not interacted before come together for short periods of time to interact and achieve a goal, after which they disband. In such scenarios, a decision tree model based on both trust and other factors (such as incentives and reputation) can be used [13]. A significant problem in such systems, known as the cold start problem, is that when such groups form there is little to no prior information on which to base trust assessments. In other words, how does an agent choose whom to trust and interact with when it has no information on any agent? Recent work has focused on bootstrapping such trust assessments by using stereotypes [12]. Similar to stereotypes used in interpersonal interactions among humans, stereotypes in MAS are quick judgements based on easily observable features of the other agent. However, whereas human judgements are often clouded by cultural or societal biases, stereotypes in MAS can be constructed in a way that maximizes accuracy. Further work by the researchers in [14] shows how stereotypes in MAS can be spread throughout the group to improve others' trust assessments, and can be used by agents to detect unwanted biases received from others in the group. In [15], the authors show how this work can be used by organizations to create decision models based on trust assessments from stereotypes and other historical information about the other agents.

Towards Co-adaptive Trust

In other studies [129, 130], Xu and Dudek create an online trust model to allow a robot or other automation to assess the operator's trust in the system while a mission is ongoing, using the results of the model to adjust the automation's behavior on the fly to adapt to the estimated trust level. Their end goal is trust-seeking adaptive robots, which actively monitor and adapt to the estimated trust of the user to allow for greater efficiency in human-robot interactions. Importantly, the authors combined common objective, yet indirect, measures of trust (such as the quantity and type of user interaction) with a subjective measure in the form of periodic queries to the operator about their current degree of trust.
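The combination of indirect behavioral signals with periodic subjective queries can be sketched as follows. This is a hypothetical estimator in the spirit of [129, 130], not their actual formulation; the update rule and weights are invented for illustration.

```python
class OnlineTrustEstimator:
    """Online estimate of operator trust: an exponentially weighted
    behavioral signal (interventions lower the estimate, uninterrupted
    autonomy raises it), re-anchored whenever the operator answers a
    periodic subjective trust query."""

    def __init__(self, initial=0.5, rate=0.1, anchor_weight=0.7):
        self.trust = initial
        self.rate = rate                    # weight of each behavioral observation
        self.anchor_weight = anchor_weight  # weight given to subjective reports

    def observe_step(self, intervened):
        # Each control step nudges the estimate toward 0 (operator
        # intervened) or 1 (automation was left alone).
        target = 0.0 if intervened else 1.0
        self.trust += self.rate * (target - self.trust)

    def subjective_report(self, rating):
        # A periodic operator query (rating in [0, 1]) re-anchors the
        # behavioral estimate, correcting its drift.
        self.trust += self.anchor_weight * (rating - self.trust)

est = OnlineTrustEstimator()
for _ in range(10):
    est.observe_step(intervened=False)  # hands-off operation drifts trust up
est.subjective_report(0.2)              # a low subjective rating pulls it down
```

A trust-seeking robot would then condition its behavior (e.g. speed, autonomy level) on `est.trust`.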

In an attempt to develop an objective and direct measure of the trust the human has in the system, the authors of [36] use a mathematical decision model to estimate trust by determining the expected value of decisions a trusting operator would make, and then evaluate the user's decisions in relation to this model. In other words, if operators deviate substantially from the expected value of their decisions, they are said to be less trusting, and vice versa. In another study [108], the authors use two-way trust to adjust the relative contribution of the human input to that of the autonomous controller, as well as the haptic feedback provided to the human operator. They model both robot-to-human and human-to-robot trust, with lower values of the former triggering higher levels of force feedback, and lower values of the latter triggering a higher degree of human control over that of the autonomous robot controller. The authors demonstrate that their model can significantly improve performance and lower the workload of operators when compared to previous models and to manual control alone.
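The trust-weighted shared control idea in [108] can be reduced to a toy sketch. The linear blending and gain functions below are illustrative assumptions, not the controller from that study: human-to-robot trust shifts control authority, and robot-to-human trust scales the haptic feedback.

```python
def blend_command(human_cmd, auto_cmd, human_to_robot_trust):
    """Weight the human's input against the autonomous controller's:
    lower trust in the robot shifts authority toward the human."""
    w_human = 1.0 - human_to_robot_trust
    return w_human * human_cmd + (1.0 - w_human) * auto_cmd

def feedback_gain(robot_to_human_trust, max_gain=2.0):
    """Lower robot-to-human trust triggers stronger force feedback."""
    return max_gain * (1.0 - robot_to_human_trust)

# When the operator distrusts the automation (trust = 0.2), the
# operator's steering input dominates the blended command.
cmd = blend_command(human_cmd=1.0, auto_cmd=-1.0, human_to_robot_trust=0.2)
```

The design choice worth noting is the symmetry: each side's distrust gives it more influence over the other, which is what makes the scheme "two-way".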

These studies help introduce the idea of "inverse trust". The inverse trust problem is defined in [34] as determining how "an autonomous agent can modify its behavior in an attempt to increase the trust a human operator will have in it". In this paper, the authors base this measure largely on the number of times the automation is interrupted by a human operator, and use this to evaluate the autonomous agent's assessment of change in the operator's trust level. Instead of determining an absolute numerical value of trust, the authors choose to have the automation estimate changes in the human's trust level. This was followed in [35] by simulation studies validating their inverse trust model.
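The core idea, estimating the direction of change in trust from interruption frequency rather than an absolute value, can be sketched as follows. This is a hedged illustration of the inverse-trust idea in [34], not their metric; the windowed comparison is an assumption.

```python
def trust_trend(interruptions, window=5):
    """Estimate the direction of change in operator trust by comparing
    the interruption rate in the most recent window of task episodes
    with the rate in the preceding window. `interruptions` is a list of
    0/1 flags, one per episode (1 = operator interrupted the robot)."""
    if len(interruptions) < 2 * window:
        return "unknown"  # not enough history to compare two windows
    recent = sum(interruptions[-window:]) / window
    earlier = sum(interruptions[-2 * window:-window]) / window
    if recent < earlier:
        return "increasing"   # fewer interruptions: trust likely rising
    if recent > earlier:
        return "decreasing"   # more interruptions: trust likely falling
    return "steady"

# Interruptions taper off over ten episodes, suggesting growing trust.
trend = trust_trend([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
```

An agent using such a signal would keep behaviors that coincide with an "increasing" trend and revise those that coincide with a "decreasing" one.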

8.6.2 Social-Based Interactions: Robots Influencing Humans

Social robotics deals with humans and robots interacting in ways humans typically interact with each other. In most of these studies, the robot, either by its appearance or its behavior, influences the human's beliefs about trustworthiness, feelings of companionship, comfort, feelings of connectedness with the robot, or behavior (such as whether the human discloses secrets to the robot or follows the robot's recommendations). This is distinct from the prior work discussed, such as ATR, where a robot's actions are not typically meant to influence the feelings or behaviors of its operator. These social human-robot interactions contain affective elements that are closer to human-human interactions. There is a body of literature that has looked at how robot characteristics affect ratings of animacy and other human-like characteristics, as well as trust in the robot, without explicitly naming a performance or social goal that the robot would perform. It has been consistently found in the social robotics literature that people tend to judge robot characteristics, such as reliability and intelligence, based on robot appearance. For example, people ascribe human qualities to robots that look more anthropomorphic. Another result of people's tendency to anthropomorphize robots is that they tend to ascribe animacy and intent to robots. This finding has been reported not just for robots [109] but even for simple moving shapes [44, 48]. Kiesler and Goetz [52] found that people rated more anthropomorphic-looking robots as more reliable. Castro-Gonzalez et al. [18] investigated how the combination of movement characteristics with body appearance can influence people's attributions of animacy, likeability, trustworthiness, and unpleasantness. They found that naturalistic motion was judged to be more animate, but only if the robot had a human appearance. Moreover, naturalistic motion improved ratings of likeability irrespective of the robot's appearance.
More interestingly, a robot with human-like appearance was rated as more disturbing when its movements were more naturalistic. Participants also ascribe personality traits to robots based on appearance. For instance, in [118], robots with spider legs were rated as more aggressive, whereas robots with arms were rated as more intelligent than those without arms. Physical appearance is not the only attribute that influences human judgment about robot intelligence and knowledge. For example, [59] found that robots that spoke a particular language (e.g. Chinese) were rated higher in their purported knowledge of Chinese landmarks than robots that spoke English.

Robot appearance, physical presence [3], and matched speech [94] are likely to engender trust in the robot. The authors of [124] found that empathetic language and physical expression elicit higher trust, and [62] found that highly expressive pedagogical interfaces engender more trust. A recent meta-analysis by Hancock et al. [43] found that robot characteristics such as reliability, behaviors, and transparency influenced people's ratings of trust in a robot. Besides these characteristics, the researchers in [43] also found that anthropomorphic qualities had a strong influence on ratings of trust, and that trust in robots is influenced by experience with the robot.

Martelaro et al. [73] found that a more expressive robot encourages participants to disclose information about themselves. However, counter to their hypotheses, disclosure of private information by the robot, a behavior that the authors labelled as making the robot more vulnerable, did not engender increased willingness to disclose on the part of the participants. In a qualitative study on the willingness of children to disclose secrets, Bethel et al. [5] found that preschool children were as likely to share a secret with an adult as with a humanoid robot.

An interesting study is reported in [111], where the authors studied how errors made by the robot affect its perceived trustworthiness and the willingness of the human to subsequently comply with the robot's (somewhat unusual) requests. Participants interacted with a home companion robot in an experimental room that served as the pretend home of the robot's human owner, in two conditions: (a) the robot did not make mistakes, and (b) the robot made mistakes. The study found that the participants' assessments of robot reliability and trustworthiness decreased significantly in the faulty robot condition; nevertheless, the participants were not substantially influenced in their decisions to comply with the robot's unusual requests. It was further found that the nature of the request (revocable versus irrevocable) influenced the participants' decisions on compliance. Interestingly, the results in this study also show that participants attributed less anthropomorphism to the robot when it made errors, which contradicts the findings of an earlier study by the same authors [110].

8.7 Conclusions and Recommendations

In this chapter we briefly reviewed the role of trust in human-robot interaction. We draw several conclusions, the first of which is that there is no accepted definition of what "trust" is in the context of trust in automation. Furthermore, when participants are asked to answer questions as to their level of trust in a robot or software automation, they are almost never given a definition of trust, leaving open the possibility that different participants view the question of trust differently. From a review of the literature, it is apparent that robots still have not achieved full autonomy, and still lack the attributes that would allow them to be considered true teammates by their human counterparts. This is especially true because the literature is largely limited to simulation, or to specific, scripted interactions in the real world. Indeed, in [42], the authors argue that without human-like mental models and a sense of agency, robots will never be considered equal teammates within a mixed human-robot team. They argue that the reason researchers include robots in common HRI tasks is their ability to complement the skills of humans. Yet, because of the tendency of humans to anthropomorphize things they interact with, the controlled interactions researchers develop for HRI studies are more characteristic of human-human interactions. While this tendency to anthropomorphize can be helpful in some cases, it poses a serious risk if it naturally gives humans a higher degree of trust in robots than is warranted. The question of how a robot's performance influences anthropomorphization is also unclear, with recent studies finding conflicting results [110, 111].

There is general agreement that the notion of trust involves vulnerability of the trustor to the trustee in circumstances of risk and uncertainty. In the performance-based literature, where the human is relying on the robot to do the whole task or part of the task, it is clear that the participant is vulnerable to the robot with respect to the participant's performance in the experimental task. In most of the studies in social robotics, however, where the robot is trying to get the participant to do something (e.g. comply with instructions to throw away someone else's mail, or disclose a secret), it is not clear that the participant is truly vulnerable to the robot (unless we regard breaking a social convention as making oneself vulnerable), rather than merely enjoying the novelty of robots or feeling pressure to follow experimental procedure. Therefore, the notion that was measured in those studies may not have been trust in the sense that the term is defined in the trust literature. For example, in [104], where participants complied with a robot guide even though they rated its reliability lower after an error, the researchers admit several confounding factors (e.g., participants did not have enough time to deliberate). The findings on human tendencies to ascribe reliability, trustworthiness, intelligence, and other positive characteristics to robots may prohibit correct estimation of a robot's abilities and prevent correct trust calibration. This is especially dangerous since the use of robots is envisioned to increase, particularly in high-risk situations such as emergency response and the military.

This overview enables us to provide several recommendations for how future work investigating trust in human-autonomy and human-robot interaction should proceed. First, it would be useful for the community if each study clearly defined what autonomy and what teammate characteristics the robot in the study possesses. Second, it would be useful for each study to define the notion of trust the authors espouse, as well as which dimensions of that notion they believe are relevant to the task being investigated. The experimenters should also try to understand, via surveys or other means, what definition of trust the participants have in mind. Experimenters could even give their definition of trust to the participants and observe how this affects the participants' answers.

Another recommendation is that, given the novelty of robots for the majority of the population, along with the well-known finding from in-group/out-group studies that people can be influenced easily and for trivial reasons, it would be useful to perform longer-duration studies to investigate the transient nature of trust assessments. In other words, how does trust in automation change as a function of how familiar users are with the automation and how much they interact with it over time? One could imagine someone unfamiliar with automation or robots placing a high degree of trust in them due to prior beliefs (which may be incorrect). Over time, this implicit trust may fade as they work more with automation and realize that it is not perfect.

Furthermore, we believe there is a need for increased research in the multi-robot systems area, as well as the area of robots helping human teams. As the number of robots increases and hardware and operating costs decrease, it is inevitable that humans will be interacting with larger numbers of robots to perform increasingly complex tasks. Trust in larger groups and collectives of robots is no doubt influenced by factors beyond those in single-robot control, specifically those regarding the robots' collective behaviors. Similarly, there is little work investigating how multiple humans working together with robots affect each other's trust levels, and this needs to be addressed.

Finally, it would be helpful for the community to define a set of task categories of human-robot interaction with characteristics that involve specific, differing dimensions of trust. Such characteristics could be the degree of risk to the trustor, the degree of uncertainty, the degree of potential gain, and whether the trustor's vulnerability is to the reliability of the robot or to the robot's integrity or benevolence. Other studies should expand on the notion of co-adaptive trust to improve how robots assess their own behavior and how it affects the trust their operators place in them. As communication is key to any collaborative interaction, research should focus not merely on how the human sees the robot, but also on how the robot sees the human.



This work is supported by awards FA9550-13-1-0129 and FA9550-15-1-0442.


  1. 1.
    B. D. Adams, D. J. Bryant, and R.D. Webb. Trust in teams: Literature review. Technical Report Technical Report CR-2001-042, Report to Defense and Civil Insitute of Environmental Medicine. Humansystems Inc., 2001Google Scholar
  2. 2.
    Eugenio Alberdi, Andrey Povyakalo, Lorenzo Strigini, Peter Ayton, Effects of incorrect computer-aided detection (cad) output on human decision-making in mammography. Academic radiology 11(8), 909–918 (2004)CrossRefGoogle Scholar
  3. 3.
    Wilma A Bainbridge, Justin Hart, Elizabeth S Kim, and Brian Scassellati. The effect of presence on human-robot interaction. In RO-MAN 2008-The 17th IEEE International Symposium on Robot and Human Interactive Communication, pages 701–706. IEEE, 2008Google Scholar
  4. 4.
    Bernard Barber. The logic and limits of trust. Rutgers University Press, 1983Google Scholar
  5. 5.
    Cindy L Bethel, Matthew R Stevenson, and Brian Scassellati. Secret-sharing: Interactions between a child, robot, and adult. In Systems, man, and cybernetics (SMC), 2011 IEEE International Conference on, pages 2489–2494. IEEE, 2011Google Scholar
  6. 6.
    CE Billings, JK Lauber, H Funkhouser, EG Lyman, and EM Huff. Nasa aviation safety reporting system. Technical Report Technical Report TM-X-3445, NASA Ames Research Center, 1976Google Scholar
  7. 7.
    P. David, Biros, Mark Daly, and Gregg Gunsch. The influence of task load and automation trust on deception detection. Group Decision and Negotiation 13(2), 173–189 (2004)CrossRefGoogle Scholar
  8. 8.
    Iris Bohnet, Benedikt Hermann, Richard Zeckhauser, The requirements for trust in gulf and western countries. Quarterly Journal of Economics 125, 811–828 (2010)CrossRefGoogle Scholar
  9. 9.
    Michael W. Boyce, Jessie Y.C. Chen, Anthony R. Selkowitz, and Shan G. Lakhmani. Effects of agent transparency on operator trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, HRI’15 Extended Abstracts, pages 179–180, New York, NY, USA, 2015. ACMGoogle Scholar
  10. 10.
    B. Marilynn, Brewer and Roderick M Kramer. The psychology of intergroup attitudes and behavior. Annual review of psychology 36(1), 219–243 (1985)CrossRefGoogle Scholar
  11. 11.
    Joel Brockner, Tom R Tyler, and Rochelle Cooper-Schneider. The influence of prior commitment to an institution on reactions to perceived unfairness: The higher they are, the harder they fall. Administrative Science Quarterly, pages 241–261, 1992Google Scholar
  12. 12.
    Chris Burnett, Timothy J Norman, and Katia Sycara. Bootstrapping trust evaluations through stereotypes. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1, pages 241–248. International Foundation for Autonomous Agents and Multiagent Systems, 2010Google Scholar
  13. 13.
    Chris Burnett, Timothy J Norman, and Katia Sycara. Decision-making with trust and control in multi-agent systems. Twenty Second International Joint Conference on Artificial Intelligence, 10:241–248, 2011Google Scholar
  14. 14.
    Chris Burnett, Timothy J Norman, and Katia Sycara. Stereotypical trust and bias in dynamic multiagent systems. ACM Transactions on Intelligent Systems and Technology (TIST), 4(2):26, 2013Google Scholar
  15. 15.
    Chris Burnett, Timothy J Norman, Katia Sycara, and Nir Oren. Supporting trust assessment and decision making in coalitions. IEEE Intelligent Systems 29(4), 18–24 (2014)MathSciNetCrossRefGoogle Scholar
  16. 16.
    Dale Carl, V Gupta, and Mansour Javidan. Culture, leadership, and organizations: The globe study of 62 societies, 2004Google Scholar
  17. 17.
    Jude Cassidy, Child-mother attachment and the self in six-year-olds. Child development 59(1), 121–134 (1988)MathSciNetCrossRefGoogle Scholar
  18. 18.
    Álvaro Castro-González, Henny Admoni, Brian Scassellati, Effects of form and motion on judgments of social robots’ animacy, likability, trustworthiness and unpleasantness. International Journal of Human-Computer Studies 90, 27–38 (2016)CrossRefGoogle Scholar
  19. 19.
    Jessie YC Chen, Michael J Barnes, and Michelle Harper-Sciarini. Supervisory control of multiple robots: Human-performance issues and user-interface design. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 41(4):435–454, 2011Google Scholar
  20. 20.
    J.Y.C. Chen, P.I. Terrence, Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics 52(8), 907–920 (2009)CrossRefGoogle Scholar
  21. 21.
    Shih-Yi Chien, Michael Lewis, Sebastian Hergeth, Zhaleh Semnani-Azad, and Katia Sycara. Cross-country validation of a cultural scale in measuring trust in automation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 59, pages 686–690. SAGE Publications, 2015Google Scholar
  22. 22.
    Shih-Yi Chien, Michael Lewis, K. Sycara, J-S. Liu, and A. Kumru. Influence of cultural factors in dynamic trust in automation. In Proceedings of the Systems, Man, and Cybernetics Society, 2016Google Scholar
  23. 23.
    Shih-Yi Chien, Zhaleh Semnani-Azad, Michael Lewis, and Katia Sycara. Towards the development of an inter-cultural scale to measure trust in automation. In International Conference on Cross-Cultural Design, pages 35–46. Springer, 2014Google Scholar
  24. 24.
  24. Andrew S. Clare, Mary L. Cummings, and Nelson P. Repenning. Influencing trust for human–automation collaborative scheduling of multiple unmanned vehicles. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(7):1208–1218, 2015
  25. Carsten K. W. De Dreu and Peter J. Carnevale. Motivational bases of information processing and strategy in conflict and negotiation. Advances in Experimental Social Psychology, 35:235–291, 2003
  26. Ewart de Visser and Raja Parasuraman. Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making, 5(2):209–231, 2011
  27. Sidney Dekker and Erik Hollnagel. Human factors and folk models. Cognition, Technology & Work, 6(2):79–86, 2004
  28. Sidney W. A. Dekker and David D. Woods. MABA-MABA or abracadabra? Progress on human-automation co-ordination. Cognition, Technology & Work, 4(4):240–244, 2002
  29. Munjal Desai. Modeling trust to improve human-robot interaction. PhD thesis, University of Massachusetts Lowell, 2012
  30. Munjal Desai, Poornima Kaniarasu, Mikhail Medvedev, Aaron Steinfeld, and Holly Yanco. Impact of robot failures and feedback on real-time trust. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, pages 251–258. IEEE Press, 2013
  31. Stephen R. Dixon and Christopher D. Wickens. Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence in high workload. Human Factors: The Journal of the Human Factors and Ergonomics Society, 48(3):474–486, 2006
  32. Peter W. Dorfman, Paul J. Hanges, and Felix C. Brodbeck. Leadership and cultural variation: The identification of culturally endorsed leadership profiles. In Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies, pages 669–719, 2004
  33. Mary T. Dzindolet, Linda G. Pierce, Hall P. Beck, and Lloyd A. Dawe. The perceived utility of human and automated aids in a visual detection task. Human Factors: The Journal of the Human Factors and Ergonomics Society, 44(1):79–94, 2002
  34. Michael W. Floyd, Michael Drinkwater, and David W. Aha. Adapting autonomous behavior using an inverse trust estimation. In International Conference on Computational Science and Its Applications, pages 728–742. Springer, 2014
  35. Michael W. Floyd, Michael Drinkwater, and David W. Aha. Learning trustworthy behaviors using an inverse trust metric. In Robust Intelligence and Trust in Autonomous Systems, pages 33–53. Springer, 2016
  36. Amos Freedy, Ewart DeVisser, Gershon Weltman, and Nicole Coeyman. Measurement of trust in human-robot collaboration. In 2007 International Symposium on Collaborative Technologies and Systems (CTS), pages 106–114. IEEE, 2007
  37. A. Fulmer and M. Gelfand. Dynamic trust processes: Trust dissolution, recovery and stabilization. In Models for Intercultural Collaboration and Negotiation. Springer, 2012
  38. Fei Gao, Andrew S. Clare, Jamie C. Macbeth, and M. L. Cummings. Modeling the impact of operator trust on performance in multiple robot control. In AAAI Spring Symposium, 2013
  39. Ji Gao and John D. Lee. Extending the decision field theory to model operators' reliance on automation in supervisory control situations. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 36(5):943–959, 2006
  40. Kate Goddard, Abdul Roudsari, and Jeremy C. Wyatt. Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1):121–127, 2012
  41. P. Goillau, C. Kelly, M. Boardman, and E. Jeannot. Guidelines for trust in future ATM systems: Measures. EUROCONTROL, the European Organization for the Safety of Air Navigation, 2003
  42. Victoria Groom and Clifford Nass. Can robots be teammates? Benchmarks in human-robot teams. Interaction Studies, 8(3):483–500, 2007
  43. Peter A. Hancock, Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, Ewart J. De Visser, and Raja Parasuraman. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5):517–527, 2011
  44. Fritz Heider and Marianne Simmel. An experimental study of apparent behavior. The American Journal of Psychology, 57(2):243–259, 1944
  45. Geert Hofstede, Gert Jan Hofstede, and Michael Minkov. Cultures and Organizations: Software of the Mind. McGraw-Hill, 1991
  46. Jiun-Yin Jian, Ann M. Bisantz, and Colin G. Drury. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1):53–71, 2000
  47. Gareth R. Jones and Jennifer M. George. The experience and evolution of trust: Implications for cooperation and teamwork. Academy of Management Review, 23(3):531–546, 1998
  48. Wendy Ju and Leila Takayama. Approachability: How people interpret automatic door movement as gesture. International Journal of Design, 3(2), 2009
  49. Poornima Kaniarasu, Aaron Steinfeld, Munjal Desai, and Holly Yanco. Robot confidence and trust alignment. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, pages 155–156. IEEE Press, 2013
  50. Kristiina Karvonen, Lucas Cardholm, and Stefan Karlsson. Designing trust for a universal audience: A multicultural study on the formation of trust in the internet in the Nordic countries. In HCI, pages 1078–1082, 2001
  51. C. Kelly, M. Boardman, P. Goillau, and E. Jeannot. Guidelines for trust in future ATM systems: A literature review. European Organization for the Safety of Air Navigation, 2003
  52. Sara Kiesler and Jennifer Goetz. Mental models of robotic assistants. In CHI '02 Extended Abstracts on Human Factors in Computing Systems, pages 576–577. ACM, 2002
  53. Peter H. Kim, Kurt T. Dirks, and Cecily D. Cooper. The repair of trust: A dynamic bilateral perspective and multilevel conceptualization. Academy of Management Review, 34(3):401–422, 2009
  54. Arie W. Kruglanski, Erik P. Thompson, E. Tory Higgins, M. Atash, Antonio Pierro, James Y. Shah, and Scott Spiegel. To "do the right thing" or to "just do it": Locomotion and assessment as distinct self-regulatory imperatives. Journal of Personality and Social Psychology, 79(5):793, 2000
  55. Charles Layton, Philip J. Smith, and C. Elaine McCoy. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 36(1):94–119, 1994
  56. John Lee and Neville Moray. Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10):1243–1270, 1992
  57. John D. Lee and Neville Moray. Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40(1):153–184, 1994
  58. John D. Lee and Katrina A. See. Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1):50–80, 2004
  59. Sau-lai Lee, Ivy Yee-man Lau, S. Kiesler, and Chi-Yue Chiu. Human mental models of humanoid robots. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pages 2767–2772. IEEE, 2005
  60. Terri Lenox, Michael Lewis, Emilie Roth, Rande Shern, Linda Roberts, Tom Rafalski, and Jeff Jacobson. Support of teamwork in human-agent teams. In 1998 IEEE International Conference on Systems, Man, and Cybernetics, volume 2, pages 1341–1346. IEEE, 1998
  61. F. Javier Lerch, Michael J. Prietula, and Carol T. Kulik. The Turing effect: The nature of trust in expert systems advice. In Expertise in Context, pages 417–448. MIT Press, 1997
  62. James C. Lester, Sharolyn A. Converse, Susan E. Kahler, S. Todd Barlow, Brian A. Stone, and Ravinder S. Bhogal. The persona effect: Affective impact of animated pedagogical agents. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pages 359–366. ACM, 1997
  63. Angela K.-Y. Leung and Dov Cohen. Within- and between-culture variation: Individual differences and the cultural logics of honor, face, and dignity cultures. Journal of Personality and Social Psychology, 100(3):507, 2011
  64. Stephan Lewandowsky, Michael Mundy, and Gerard Tan. The dynamics of trust: Comparing humans to automation. Journal of Experimental Psychology: Applied, 6(2):104, 2000
  65. J. David Lewis and Andrew Weigert. Trust as a social reality. Social Forces, 63(4):967–985, 1985
  66. Michael Lewis. Designing for human-agent interaction. AI Magazine, 19(2):67, 1998
  67. James Llinas, Ann Bisantz, Colin Drury, Younho Seong, and Jiun-Yin Jian. Studies and analyses of aided adversarial decision making. Phase 2: Research on human trust in automation. Technical report, DTIC Document, 1998
  68. Maria Luz. Validation of a trust survey on example of MTCD in real time simulation with Irish controllers. Thesis final report, The European Organisation for the Safety of Air Navigation, 2009
  69. Joseph B. Lyons. Being transparent about transparency. In AAAI Spring Symposium, 2013
  70. Joseph B. Lyons and Paul R. Havig. Transparency in a human-machine context: Approaches for fostering shared awareness/intent. In International Conference on Virtual, Augmented and Mixed Reality, pages 181–190. Springer, 2014
  71. Joseph B. Lyons and Charlene K. Stokes. Human–human reliance in the context of automation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2011
  72. Maria Madsen and Shirley Gregor. Measuring human-computer trust. In 11th Australasian Conference on Information Systems, volume 53, pages 6–8, 2000
  73. Nikolas Martelaro, Victoria C. Nneji, Wendy Ju, and Pamela Hinds. Tell me more: Designing HRI to encourage more trust, disclosure, and companionship. In The Eleventh ACM/IEEE International Conference on Human-Robot Interaction, pages 181–188. IEEE Press, 2016
  74. Anthony J. Masalonis and Raja Parasuraman. Effects of situation-specific reliability on trust and usage of automated air traffic control decision aids. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 47, pages 533–537. SAGE Publications, 2003
  75. Reena Master, Xiaochun Jiang, Mohammad T. Khasawneh, Shannon R. Bowling, Larry Grimes, Anand K. Gramopadhye, and Brian J. Melloy. Measurement of trust over time in hybrid inspection systems. Human Factors and Ergonomics in Manufacturing & Service Industries, 15(2):177–196, 2005
  76. Roger C. Mayer, James H. Davis, and F. David Schoorman. An integrative model of organizational trust. Academy of Management Review, 20(3):709–734, 1995
  77. John M. McGuirl and Nadine B. Sarter. Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors: The Journal of the Human Factors and Ergonomics Society, 48(4):656–665, 2006
  78. K. Ann McKibbon and Douglas B. Fridsma. Effectiveness of clinician-selected electronic information resources for answering primary care physicians' information needs. Journal of the American Medical Informatics Association, 13(6):653–659, 2006
  79. Ashleigh Merritt. Culture in the cockpit: Do Hofstede's dimensions replicate? Journal of Cross-Cultural Psychology, 31(3):283–301, 2000
  80. Stephanie M. Merritt and Daniel R. Ilgen. Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(2):194–210, 2008
  81. Joachim Meyer. Effects of warning validity and proximity on responses to warnings. Human Factors: The Journal of the Human Factors and Ergonomics Society, 43(4):563–572, 2001
  82. Joachim Meyer. Conceptual issues in the study of dynamic hazard warnings. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(2):196–204, 2004
  83. Debra Meyerson, Karl E. Weick, and Roderick M. Kramer. Swift trust and temporary groups. In Trust in Organizations: Frontiers of Theory and Research, pages 166–195, 1996
  84. Raymond E. Miles and W. E. Douglas Creed. Organizational forms and managerial philosophies: A descriptive and analytical review. Research in Organizational Behavior: An Annual Series of Analytical Essays and Critical Reviews, 17:333–372, 1995
  85. Neville Moray and T. Inagaki. Laboratory studies of trust between humans and machines in automated systems. Transactions of the Institute of Measurement and Control, 21(4–5):203–211, 1999
  86. Neville Moray, Toshiyuki Inagaki, and Makoto Itoh. Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. Journal of Experimental Psychology: Applied, 6(1):44, 2000
  87. Kathleen L. Mosier, Everett A. Palmer, and Asaf Degani. Electronic checklists: Implications for decision making. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 36, pages 7–11. SAGE Publications, 1992
  88. Kathleen L. Mosier and Linda J. Skitka. Human decision makers and automated decision aids: Made for each other? In Automation and Human Performance: Theory and Applications, pages 201–220, 1996
  89. Kathleen L. Mosier, Linda J. Skitka, Susan Heers, and Mark Burdick. Automation bias: Decision making and performance in high-tech cockpits. The International Journal of Aviation Psychology, 8(1):47–63, 1998
  90. K. L. Mosier, L. J. Skitka, and K. J. Korte. Cognitive and social psychological issues in flight crew/automation interaction. In Human Performance in Automated Systems: Current Research and Trends, pages 191–197, 1994
  91. Bonnie M. Muir. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37(11):1905–1922, 1994
  92. Bonnie M. Muir and Neville Moray. Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3):429–460, 1996
  93. Bonnie Marlene Muir. Operators' trust in and use of automatic controllers in a supervisory process control task. PhD thesis, University of Toronto, 1990
  94. Clifford Nass and Kwan Min Lee. Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. Journal of Experimental Psychology: Applied, 7(3):171, 2001
  95. Scott Ososky, David Schuster, Elizabeth Phillips, and Florian Jentsch. Building appropriate trust in human-robot teams. In AAAI Spring Symposium Series, 2013
  96. Raja Parasuraman and Dietrich H. Manzey. Complacency and bias in human use of automation: An attentional integration. Human Factors: The Journal of the Human Factors and Ergonomics Society, 52(3):381–410, 2010
  97. Raja Parasuraman and Victor Riley. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2):230–253, 1997
  98. Raja Parasuraman, Thomas B. Sheridan, and Christopher D. Wickens. Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making, 2(2):140–160, 2008
  99. P. L. Patrick Rau, Ye Li, and Dingjun Li. Effects of communication style and culture on ability to accept recommendations from robots. Computers in Human Behavior, 25(2):587–595, 2009
  100. John K. Rempel, John G. Holmes, and Mark P. Zanna. Trust in close relationships. Journal of Personality and Social Psychology, 49(1):95, 1985
  101. V. A. Riley. Operator reliance on automation: Theory and data. In Automation Theory and Applications, pages 19–35. Erlbaum, Mahwah, NJ, 1996
  102. Victor Andrew Riley. Human use of automation. PhD thesis, University of Minnesota, 1994
  103. Paul Robinette, Ayanna M. Howard, and Alan R. Wagner. Timing is key for robot trust repair. In International Conference on Social Robotics, pages 574–583. Springer, 2015
  104. Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner. Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 101–108. IEEE, 2016
  105. Julian B. Rotter. A new scale for the measurement of interpersonal trust. Journal of Personality, 35(4):651–665, 1967
  106. Julian B. Rotter. Generalized expectancies for interpersonal trust. American Psychologist, 26(5):443, 1971
  107. Denise M. Rousseau, Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3):393–404, 1998
  108. H. Saeidi, F. McLane, B. Sadrfaidpour, E. Sand, S. Fu, J. Rodriguez, J. R. Wagner, and Y. Wang. Trust-based mixed-initiative teleoperation of mobile robots. In 2016 American Control Conference (ACC), pages 6177–6182. IEEE, 2016
  109. Martin Saerbeck and Christoph Bartneck. Perception of affect elicited by robot motion. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, pages 53–60. IEEE Press, 2010
  110. Maha Salem, Friederike Eyssel, Katharina Rohlfing, Stefan Kopp, and Frank Joublin. To err is human(-like): Effects of robot gesture on perceived anthropomorphism and likability. International Journal of Social Robotics, 5(3):313–323, 2013
  111. Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 141–148. ACM, 2015
  112. Nadine B. Sarter and Beth Schroeder. Supporting decision making and action selection under time pressure and uncertainty: The case of in-flight icing. Human Factors: The Journal of the Human Factors and Ergonomics Society, 43(4):573–583, 2001
  113. Paul M. Satchell. Cockpit Monitoring and Alerting Systems. Routledge, 1993
  114. Kristin E. Schaefer. The perception and measurement of human-robot trust. PhD thesis, University of Central Florida, Orlando, FL, 2013
  115. Kristin E. Schaefer, Jessie Y. C. Chen, James L. Szalma, and P. A. Hancock. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 2016
  116. T. B. Sheridan and W. Verplank. Human and computer control of undersea teleoperators. Man-Machine Systems Laboratory, Department of Mechanical Engineering, Cambridge, MA, 1978
  117. A. Simpson, G. N. Brander, and D. R. A. Portsdown. Seaworthy trust: Confidence in automated data fusion. In The Human-Electronic Crew: Can We Trust the Team, pages 77–81, 1995
  118. Valerie K. Sims, Matthew G. Chin, David J. Sushil, Daniel J. Barber, Tatiana Ballion, Bryan R. Clark, Keith A. Garfield, Michael J. Dolezal, Randall Shumaker, and Neal Finkelstein. Anthropomorphism of robotic forms: A response to affordances? In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 49, pages 602–605. SAGE Publications, 2005
  119. Indramani L. Singh, Robert Molloy, and Raja Parasuraman. Individual differences in monitoring failures of automation. The Journal of General Psychology, 120(3):357–373, 1993
  120. Randall D. Spain, Ernesto A. Bustamante, and James P. Bliss. Towards an empirically developed scale for system trust: Take two. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 52, pages 1335–1339. SAGE Publications, 2008
  121. K. Sycara and M. Lewis. Forming shared mental models. In Proceedings of the 13th Annual Meeting of the Cognitive Science Society, pages 400–405, 1991
  122. Katia P. Sycara, Michael Lewis, Terri Lenox, and Linda Roberts. Calibrating trust to integrate intelligent agents into human teams. In Proceedings of the Thirty-First Hawaii International Conference on System Sciences, volume 1, pages 263–268. IEEE, 1998
  123. Kazuya Takeda. Modeling and detecting excessive trust from behavior signals: Overview of research project and results. In Human-Harmonized Information Technology, Volume 1, pages 57–75. Springer, 2016
  124. Adriana Tapus, Maja J. Mataric, and Brian Scassellati. Socially assistive robotics [Grand challenges of robotics]. IEEE Robotics & Automation Magazine, 14(1):35–42, 2007
  125. Lin Wang, Pei-Luen Patrick Rau, Vanessa Evers, Benjamin Krisper Robinson, and Pamela Hinds. When in Rome: The role of culture & context in adherence to robot recommendations. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, pages 359–366. IEEE Press, 2010
  126. Ning Wang, David V. Pynadath, and Susan G. Hill. Trust calibration within a human-robot team: Comparing automatically generated explanations. In The Eleventh ACM/IEEE International Conference on Human-Robot Interaction (HRI '16), pages 109–116. IEEE Press, Piscataway, NJ, 2016
  127. Karl E. Weick. Enacted sensemaking in crisis situations. Journal of Management Studies, 25(4):305–317, 1988
  128. Richard P. Will. True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7(3):171–183, 1991
  129. A. Xu and G. Dudek. Maintaining efficient collaboration with trust-seeking robots. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016). IEEE, 2016
  130. Anqi Xu and Gregory Dudek. OPTIMo: Online probabilistic trust inference model for asymmetric human-robot collaborations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 221–228. ACM, 2015
  131. Rosemarie E. Yagoda and Douglas J. Gillan. You want me to trust a robot? The development of a human-robot interaction trust scale. International Journal of Social Robotics, 4(3):235–248, 2012

Copyright information

© The Author(s) 2018

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Department of Information Sciences, University of Pittsburgh, Pittsburgh, USA
  2. Robotics Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA
