1 Tolerance: a New Tool for Human–Agent Interaction Research

Rightly or wrongly, many people believe that the unrelenting rise of autonomous technologies will transform our lives in disturbing and alienating ways: technological unemployment [1], depersonalization of relationships [2], and threats to human uniqueness [3] are but some of the dystopic scenarios evoked by such narratives [4]. Whether these worries are well founded or not, their adverse impact on innovation is being taken seriously by experts in human–machine interaction [5]. Nonetheless, a metric to rigorously measure the negative influence of these worries and concerns on the prospective adopters of autonomous systems is still lacking.

In this paper, we introduce “Tolerance”, a theoretical construct designed to measure the resilience or insusceptibility of human users toward the effects of alienating convictions and dystopian narratives about autonomous artificial agents (henceforth referred to as “Agents”Footnote 1). Agents are intelligent interactive systems, such as mobile robots, unmanned vehicles, and digital avatars, capable of independently making decisions and interacting (physically and/or socially) with complex environments. Many Agents, like co-bots and social robots, are designed to assist humans in their daily activities, communicate with them, and complete joint tasks [6, 7]. Our analyses motivate the adoption of the concept of Tolerance in the context of social robotics and human–Agent interaction (HAI)Footnote 2 [8]. In these fields, Tolerance could effectively complement two contiguous, but essentially distinct, notions—Acceptance and Trust—that are broadly used by researchers [9, 10].

Tolerance for Agents was first conceptualized by Galliott as a measure of the resilience or immunity to the sense of anxiety and hostility that often accompanies the prospect of adopting autonomous technologies [11]. Galliott et al. [12] examined, from a security perspective, the rise of neurotic, possibly paranoid, attitudes towards highly automated and autonomous systems, highlighting the potential risk of non-compliance among defence personnel who are averse to the prospect of operating with or under such systems. This investigation pointed to Theodore Kaczynski’s theoretical elaboration as an epitome of low Tolerance for Agents. Both Kaczynski’s infamous manifesto and his disgraceful actions represent a lasting testament to how estranging narratives and intricate cogitations about automatization can motivate a radical, and in the most extreme cases even violent, rejection of all autonomous technologies [2, 13].Footnote 3

If there is one thing to be learned from Kaczynski’s notorious case, it is that this radical form of rejection (normally reduced by sociologists and social psychologists to “technology resistance” [15]) is not a stereotyped emotional and behavioural reaction mechanically triggered by the objective features of this or that particular Agent, but rather an assertive posture primarily motivated by articulate normative judgments and rather elaborate, albeit rigidly categorical, theorizations about Agents in general. The notion of Intolerance captures how the content of these theorizations motivates the subject’s unsupportiveness or even active antagonism toward any prospect of adopting Agents. By soliciting or disinhibiting negative responses, the content of Intolerant beliefs and worldviews indirectly contributes to undermining the user’s Adoption Propensity; importantly, such an attitude may, but does not necessarily have to, be accompanied by openly hostile verbalizations and visibly anxious behaviours: these responses may, and often do, remain covert or unexpressed.

According to its own proponents, the notion of technology resistance requires a systematic revision, as it is currently unable to meaningfully account for the beliefs and worldviews that deeply motivate some people to resist technology adoption [15]. By comparison, the notion of Tolerance we are proposing is more useful to HAI researchers because it can help them monitor subjects’ proneness to what we call “Autonomy Estrangement”, a state of mind characterized by uncanny feelings and disturbing (possibly obsessive or even paranoid) elaborations about autonomous technologies: While a high Tolerance for Agents reduces one’s chances of developing such a psychological state, an Intolerant attitude may open the door to Autonomy Estrangement, inviting the most negative behaviours associated with it (e.g., robot avoidance, vandalism, sabotage). At the same time, we argue that Intolerance should not be confused with other negative attitudes toward Agents (e.g., lack of Trust and lack of Acceptance) that may induce analogous behaviours through different psychological mechanisms [16] (Table 1).

Table 1 A revised model of Adoption Propensity for Agents

Like Acceptance and Trust, Tolerance (and, subsequently, Intolerance) is one of the attitudes toward Agents that HAI researchers study to facilitate the integration of autonomous technologies in human environments [16,17,18]. While Acceptance and Trust are well-researched metrics, Tolerance describes an aspect of HAI that has so far been only scarcely investigated and thus requires specific analyses: Acceptance and Trust reflect whether the user’s expectations, needs, and preferences were satisfied by their interaction with some Agents; Tolerance, on the contrary, reflects the categorical beliefs and general worldviews that a user holds about all Agents. These beliefs and worldviews are usually conveyed by narratives characterized by varying levels of complexity and verisimilitude. These narratives, in turn, may, and often do, circulate via popular accounts and influential stories, independently of real interaction with actual Agents; these beliefs and worldviews can be rather resilient to revision, especially when they are based on conspiracy narratives, deeply held prejudices, and irrational or pseudo-rational stereotypes. Ultimately, Tolerance reflects the user’s categorical beliefs about Agents in general (whether real or just imagined), not the quality of the user’s experience of interaction with some particular existing Agents.

When we try to disentangle Tolerance from Acceptance and Trust, the main methodological challenge is that the former has always been embedded within the latter two: while existing models of Acceptance and Trust may at times capture some aspects of Tolerance, they never acknowledge the confines of Tolerance and its specific scope. The overlap between different attitudes is misleading because it conceals Tolerance’s distinctive impact on HAI: The narratives that fuel Tolerance are largely independent of the kinds of direct experiences that motivate Acceptance and Trust. Therefore, the fact that Acceptance and Trust map some aspects of the behavioural expression of Intolerance (e.g., proneness to anxiety and hostility) suggests a methodological confusion among the definitions of the existing constructs rather than an objective capability of the existing notions to effectively account for the phenomena described by Tolerance. As each of these three constructs offers a unique contribution to categorising the complex attitudes of humans toward Agents, our goal is to bring to light their specificity and complementarity.

As a first step toward this goal, Sect. 2 problematises the way Acceptance and Trust are prevalently used, arguing that these two notions are not properly distinguished in the literature and, moreover, do not exhaustively map the attitudes that determine Adoption Propensity.

Section 3 illustrates the deficiencies of Acceptance and Trust through a hypothetical scenario that can be made sense of only by adding Tolerance to the picture. In Sect. 4, we articulate a phenomenology of Tolerance as a form of anxious/hostile attitude towards Agents motivated by narratives that have a normative valence, not experiential engagement, and we fully analyse how such an attitude differs from Acceptance and Trust. In Sect. 5, we outline the principles according to which Tolerance can be recognized and measured, examining five defining concerns/worries to which Intolerant people are particularly prone. This section also introduces two seed questionnaires for measuring different, but closely related, metrics that are relevant to Adoption Propensity: the first questionnaire is specifically designed to assess the level of Tolerance by tracking the presence of said concerns/worries as part of the subject’s beliefs about Agents; the second questionnaire, in turn, is designed to assess the level of Autonomy Estrangement by tracking the subject’s feelings and hopes in relation to Agents.

In Sect. 6 we discuss the general socio-cultural significance of Tolerance in relation to Adoption Propensity. We explain why such a construct is neither intrinsically positive nor negative but nonetheless has a contextual normative valence insofar as it helps us red-flag biased, irrational, and possibly fanatic attitudes. We discuss six hypotheses to be tested empirically, indicating possible trajectories for future research aiming at expanding our seminal investigation: the first hypothesis is the inverse correlation between high Tolerance and Autonomy Estrangement; the second, that low Tolerance is exacerbated by the militant propaganda of anti-tech ideologies; the third, that Tolerance correlates with personality traits like neuroticism and closedness; the fourth, that Intolerance tends to be more covert and ambivalent than low Acceptance and Trust; the fifth, that Intolerance for Agents shares a common root with intolerance for humans and tends to resonate with it; the sixth, that the best mitigation strategies involve early preventive interventions through education and information. Finally, in the concluding section, we summarise the key points of our proposal, involving a model of Adoption Propensity informed by Tolerance and Autonomy Estrangement, and we highlight the benefits offered by such a model to the field of HAI.

To offer a conceptual model capable of combining Tolerance with the revised definitions of Acceptance and Trust, we will investigate the content of the beliefs and worldviews typically endorsed by the subjects most prone to Autonomy Estrangement. The seminal model presented in this paper is deliberately open-ended, as its primary goal is to raise awareness about the important role of Tolerance in HAI, unearthing its psychological, cultural, and theoretical foundations: We aim to provide some general methodological guidelines for its assessment, not a fully-fledged operationalization of the construct. Due to the novelty of the proposed construct and the absence of relevant experimental data on Tolerance for Agents, our theoretical claims primarily build on phenomenological and genealogical analyses corroborated, when possible, by current experimental results in social robotics and social psychology (especially work on tolerance for humans).

2 Problematizing the Attitudes Towards Agents

As the function of today’s Agents is still primarily assistive, humans are required to complement or supervise their operations [19]. The successful deployment of Agents depends on their capability to satisfy the users’ expectations, which can be surprisingly heterogeneous (defying common intuitions), implicit (unknown to the users themselves), and even ambivalent (conflicting with other expectations) [20]. Human attitudes toward Agents correlate with perceptions and affects evoked by the Agents’ appearance [21], the user’s familiarity with Agents (i.e., aptitude and previous exposure to similar technologies [22]) and beliefs about Agents (i.e., factual knowledge and more or less accurate judgments, including both judgments based on rational inferences and judgments based on irrational or pseudo-rational preconceptions [23]). These attitudes depend on various psychological, socio-cultural, and aptitudinal factors [24, 25] and are recognizable and measurable through self-reports (explicit method) or external observations (implicit method): the former include the verbal responses generated before, during, or after interaction (i.e., casual utterances and explicit answers to questionnaires or semi-structured interviews); the latter include the behavioural and emotional responses produced during or after interaction with Agents (e.g., posture, gesture, gaze, facial expressions) [23].

Acceptance and Trust arguably are the two most studied attitudes in this field. HAI research scrutinizes both attitudes in the users, designing ad hoc assessment tools (like rating scales) to appropriately measure them [10]. Combining Acceptance and Trust systematically, though, proves difficult, as these two constructs were conceived independently in different contexts for different types of application.

2.1 Acceptance

The notion of Acceptance for Agents is prefigured by the older Technology Diffusion Theory [26], which explained the adoption of innovative technologies with a five-stage process that begins with the public “awareness” of technology’s impact and terminates with the institutional “confirmation” of technology’s adoption. In 1989, Davis introduced his Technology Acceptance Model (TAM) to explain how the propensity to adopt a certain technology (the “behavioural intention” that predicts the subsequent “use” of said technology) is determined by two factors: how that technology seems to improve task performance (“perceived usefulness”) and how effortless its intervention appears (“perceived ease of use”) [27]. TAM’s second iteration enriched the model by introducing additional “job-related variables” (job relevance, output quality, result demonstrability, user’s experience) and “personal implications” (social norm, voluntariness of usage, image) [28]. A further development led to a “Unified Theory of Acceptance and Use of Technology” (UTAUT), which again distinguishes two families of factors (“performance expectancy” and “effort expectancy”, corresponding to the previous “perceived usefulness” and “perceived ease of use”, respectively) and introduces additional intervening factors like “social influence” and other “facilitating conditions” [29].
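For illustration, the core relationship posited by TAM can be rendered schematically as a pair of relations; this is our simplified paraphrase of the model described above, not Davis's own notation, and the weights are unspecified placeholders:

$$\mathrm{BI} \;=\; \beta_{1}\,\mathrm{PU} \;+\; \beta_{2}\,\mathrm{PEOU}, \qquad \mathrm{Use} \;=\; f(\mathrm{BI}),$$

where BI denotes the behavioural intention to adopt the technology, PU its perceived usefulness, and PEOU its perceived ease of use.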

The earliest Acceptance models focussed prevalently on the societal/public dimension of technology adoption. Their more recent iterations, in turn, explore the personal and group level to enable a more detailed understanding of the user’s experience. Also, recent TAMs tend to be tailored to digital and computer technologies, as they describe “the anchoring and adjustment of human decision-making processes” [30] by means of variables such as “computer self-efficacy”, “perception of external control”, “computer anxiety”, “computer playfulness”, “perceived enjoyment”, and “objective usability”. Several variants of UTAUT have specifically explored the attitudes of users working within narrow technological domains, introducing more and more specific factors, like those concerning personal interactions with social robots or industrial robots [31] (e.g., investigating Acceptance for a pet-like robot cat [29]).

UTAUT has further evolved under a constant pressure for diversification, branching into multiple narrowly specialized TAMs [32]. This trajectory solicits two remarks: the first is that the need to produce context-specific TAMs is in tension with the cross-disciplinary generalizability of said models, due to the inevitable trade-off between distinctness and transferability of the evaluations across different technological domains (i.e., the more granularly an evaluative template is tailored to a narrow domain, the less readily it transfers to other domains); the second is that, despite the increasing diversity of TAMs, the polarization between utilitarian factors (objective benefits, usefulness, outcome) and hedonic factors (subjective experience, aesthetics, effort) remains constant throughout consecutive iterations.

2.2 Trust

As an HAI construct, Trust for Agents serves a purpose analogous to Acceptance’s: identifying causal factors that motivate Adoption Propensity as a positive, welcoming attitude. Trust, though, had a different historical genesis: It was first introduced in social psychology to study the emergence of alliances and mutual interpersonal dependencies in human–human (not human–Agent) relationships [33, 34]. The game-theoretical concept of Trust applies to any cooperative scenario in which uncertain, high-stake decisions require confident predictions of the co-operators’ true intentions and capabilities.

This applies to both human and robotic co-operators [35], which is why Trust was soon incorporated into HAI research: Biros and colleagues define Trust as the attitude of “having confidence in the autonomous system to do the appropriate action” [36]; Lee and See as the expectation that an Agent will “help the user achieve their individual’s goals in a situation characterized by uncertainty and vulnerability” [37]; Wagner as the “belief, held by the trustor, that the trustee will act in a manner that mitigates the trustor’s risk in a situation in which the trustor has put its outcomes at risk” [38]; Oleson and colleagues as the user’s reliance that the Agent will not undertake “actions prejudicial to the well-being” of the user [39]; finally, Hald and colleagues define Trust as “the combination of feeling physically safe around [the robot] and being able to predict the robot's action in the context of a shared task” [40].

The concept of Trust for Agents remains, however, elusive due to (i) the inherent vagueness of analogies with human–human trust (i.e., Do machines have aims? Do they understand whom their actions will benefit?) and (ii) its dual nature (objective and subjective), which ambiguously refers to an inclination of the trustor and a quality of the trustee at once.

Most definitions highlight the utilitarian component of Trust as they link this attitude with the belief that Agents will function properly, benefit the users, protect their interests, and fulfil their expectations (whether “robot’s actions and behaviours correspond to human’s interest” or not) while preventing major disadvantages (“avoiding that human has something to lose” in relying on the robot) [41]. However, a merely utilitarian notion of Trust is hardly useful if it reduces HAI to an instrumental relationship. The notion of Trust was not invoked when automated technologies were still conceived instrumentally, as passive tools confined within factory premises [42]. Trust has become integral to the emerging paradigm of HAI only since the early 2000s, when service Agents introduced in the home and the office environment to assist people finally started to be seen as assistants and co-operators capable of acting and making decisions independently [43].

Thus, while the Adoption Propensity of earlier automated technological systems depended exclusively on their more or less successful functioning (tracked by utilitarian variables like “reliability” or “efficacy”), to predict whether autonomous technological systems (Agents) will be adopted, we additionally need to know how Trustworthy they are: that is, we must assess whether the Agent both is capable of helping the users and authentically aims to do so, as these two necessary conditions (respectively called “competency” and “integrity” [44]) are key to predicting Trust among humans.

Even with this clarification, Trustworthiness remains an ambiguous concept. It tracks, at once: (I) a quality of the human–Agent relationship (which is why it presupposes some reciprocity) [45]; (II) an objective quality of the Agent; and (III) a subjective judgment by the user, a judgment that reflects their own psychological profile (i.e., personality traits, individual preferences, anthropomorphic tendencies), background (e.g., professional experience, familiarity with the technology), and beliefs (i.e., what the user thinks of and how they feel about the Agent). That is why Hancock and colleagues [46] distinguished between a “dispositional”, a “situational”, and a “learned” component of Trust, corresponding, respectively, to trust determined by long-term (possibly permanent) inclinations and propensities, by transient aspects of an ongoing HAI experience, and by knowledge about the Agent and their performance. In principle, a similar tripartition can be applied to Acceptance too.

Some authors believe that Trust, like Acceptance, depends on both the utilitarian benefits produced by Agents and the more or less satisfactory experience associated with using them (e.g., “[Trust] predicts not only the quality of HAI but also how willing people are to use social robots for certain tasks” [47]). Two remarks are also necessary in relation to Trust: first, the notion of Trust evolved to incorporate an enriched understanding of the relational dimension of HAI; second, Trust presents an internal polarization between objective and subjective factors, which resembles Acceptance’s internal dualism between utilitarian and hedonic factors.

2.3 Combining Acceptance and Trust

While the polarization embedded in both Acceptance and Trust is not wrong in itself, we must note that too often it has fuelled a problematic confusion between these two constructs. For example, Gaudiello and colleagues assume that Trust strongly contributes to determining Acceptance when they claim that the users’ “Trust in the robot [can be] considered as a main indicator of acceptance” [48]. Similarly, it is often assumed that Trust can be used to determine the overall Acceptance of a system, in a way that tends to implicitly conflate the two concepts [23, 49]. If Trust is often considered a component or constituent of Acceptance, Acceptance is also frequently considered just another component or constituent of Trust: far from being accidental, this confusion arises because originally Trust and Acceptance were each deemed sufficient to individually explain Adoption Propensity [50].

Unfortunately, this overlap precludes a clear appreciation of the proper scope of these two constructs and confounds their specific contribution to Adoption Propensity. The tendency to conflate them derives, in our opinion, from the fact that the typical model of Acceptance involves an internal dualism between utilitarian and hedonic factors, a dualism that is replicated in the typical model of Trust. Therefore, each attitude comprises the same constitutive components as the other. But, so long as each construct aspires to offer a complete synthesis of both constituents, it will be difficult to differentiate between them. To make Acceptance and Trust complementary, we must reconceptualize each as inherently incomplete and limited in scope. Several multi-dimensional models of Adoption Propensity have been proposed in the literature, introducing an assortment of assessment methods to measure both Acceptance and Trust as distinct but complementary attitudes [23, 51, 52]. The ambiguity (across models) deriving from multiple concurrent definitions of each construct is aggravated by the unintended conceptual overlap (within models) between the two constructs and the underspecified nature of their scope and function.

In short, the taxonomical problems are essentially three. First, in most joint accounts, Trust and Acceptance share a significant number of common dimensions/indicators [53]. Second, one of the two attitudes is often taken as a dimension or aspect of the other [9, 48, 54]. And third, to add further confusion, both Acceptance and Trust are typically defined by technology adoption models in terms of the user’s “intention”, “willingness”, or “propensity” to use the Agents or interact with them [23], which in turn reflects the “actual use” of Agents or at least their “perceived usefulness” [27, 55,56,57]. These ambiguities motivate at least two concerns regarding construct validity: first, how can the two constructs be measured with rigorous and specific methods, given that their constitutive components are so similar? Second, how can we determine in which semantic context Acceptance is more relevant than Trust?

To define their distinctive assumptions and methodological ramifications, the construct validity of both Trust and Acceptance must be established conceptually before being tested empirically. First, we must distinguish their respective scopes: Both Acceptance and Trust reflect certain expectations about an Agent, but our previous analyses indicate that Acceptance prevalently focuses on expectations concerning the interaction with the user (i.e., intended as a mode of using the Agent), while Trust primarily focuses on expectations concerning the achievement of a practical goal (the end for which the Agent is used). Thus, we suggest that Acceptance is essentially meant to express value judgments about the Agent’s relationship with the users (i.e., the role played by the Agent as a co-worker, team member, or interaction partner), while Trust is essentially meant to express judgments about the Agent’s performance (i.e., the output of their autonomous work). It is possible, of course, that both judgments are similarly influenced by identical or very similar sets of empirical circumstances, but there is no a priori reason to assume that these variables systematically co-vary.

We propose a refined model of Acceptance and Trust that values their conceptual independence by minimizing the number of traits and common dimensions that they share. In our model, Acceptance describes the user’s experience of engagement with an Agent and the willingness to recognize said Agent as a legitimate interaction partner or co-worker [27, 58,59,60]. Trust, in turn, tracks how the user acknowledges the Agent’s capability to reliably operate toward the achievement of a certain goal and the user’s willingness to assign tasks and delegate responsibilities to the Agent [42, 43, 61,62,63,64]. By highlighting their specific relevance to HAI more than their similarities and overlaps, our model emphasizes not only the distinctiveness, but also the complementarity of Acceptance and Trust: while Acceptance serves to map Adoption Propensity in relation to the quality, robustness, and ease of the interaction/engagement with the user, Trust serves to map Adoption Propensity in relation to the efficacy, efficiency, reliability, and safety of the Agent in performing their tasks.

2.4 Tolerance

Our effort to operationalise Trust and Acceptance via a partial reconfiguration of their existing definitions ultimately aims to make space for Tolerance as a third crucial attitude toward Agents. In fact, the goal of this paper is not to provide a revised account of Acceptance and Trust so much as to introduce Tolerance as their indispensable complement. Acceptance and Trust alone are unable to exhaust the complexity of the attitudes toward Agents: They remain inevitably blind to some specific kinds of anxiety and hostility that only the notion of Tolerance can effectively account for. As argued in this paper, Tolerance for Agents is essentially independent of Trust and Acceptance, although the expressions of these attitudes may partly and occasionally align: an increment or decrement in the Trust and/or Acceptance score can situationally, not universally, correspond to increments or decrements in the Tolerance score, and vice versa. Due to its irreducibility to other constructs, we argue that Tolerance is no less fundamental than Trust and Acceptance in any systematic assessment of the users’ expectations toward autonomous technologies.

3 Well-Accepted and Highly Trusted Agents may be Poorly Tolerated

To illustrate the tripartition described in Sect. 2, consider the group dynamics unfolding in the following hypothetical, yet plausible, scenario [65].Footnote 4 Let us imagine that a logistic bot is introduced in an airbase to relieve the ground crew members of the time-consuming burden of transporting bulky components from one side of a very long hangar to the other. After a few weeks of apparently flawless implementation of the new system, the crew supervisors report that the hangar’s personnel have almost entirely stopped using the bot and that, on one occasion, some of them even tried to sabotage it.

Some HRI experts are recruited to run semi-structured interviews with the crew members to investigate their negative attitude toward the new technology. Their investigation reveals that, surprisingly, the crew members found the bot quite useful and reasonably reliable: all of them admit liking its friendly interface and its interactive functions; the majority of them express sympathy for the bot, while a smaller group is rather indifferent to it. So, why are they ultimately reluctant to use it? Further investigations indicate that the crew members, while not disliking or distrusting the bot per se, believe that the permanent adoption of bots may erode the professional and personal relationships between two different teams of crew members deployed at opposite sides of the hangar, reducing their opportunities to interact during work hours. This worry alters the perception of their role within the airbase, rousing latent tensions between the two groups. Moreover, although there was no plan to replace the workers, the crew members progressively grew concerned about the possibility of becoming redundant if additional bots were deployed in the hangar. Last but not least, some of the managers declare that assigning logistic tasks to a bot is morally wrong and should thus be avoided, because that responsibility should be given only to humans.

The airbase scenario reveals something significant: The hangar crew had a favourable impression of the bot when it was first introduced. This initial response is perfectly accounted for in terms of Acceptance and Trust. What the crew judged positively was their interaction with the bot and the bot’s ability to assist them in their daily duties. However, it soon became evident that their overall propensity to use the bot was not very high: In addition to their positive impressions, the staff members were simultaneously holding a very negative judgment about the bot, which eventually revealed their deeply anxious and hostile attitude toward the machine. Importantly, this second response overrode the first, not by replacing the earlier positive opinions about the bot’s role and performance, but by adding to them a distinct negative judgment: the users never changed their beliefs that the bot was (1) nice to interact with and (2) useful to their work; rather, they expressed a third belief that, despite the positivity of the other two, ended up deteriorating their overall attitude. The positive opinions were surpassed, not suppressed, by the negative one. The negative opinion emerged from the realisation that, although the logbot never directly affected the social relations among the workers, such relations can be interrupted or displaced by bots, alienating the crew members from their traditional roles and making them anxious about the stability of their jobs. It seems unlikely that this negative attitude formed suddenly: it is plausible that it was already prefigured by some latent beliefs, inactive in the back of the users’ minds, so to speak, silently waiting to be triggered by the relevant circumstances.

Now, perceived social displacement and unemployment anxiety are phenomena broadly linked to negative attitudes toward robots [66, 67]. One might therefore be inclined to believe that these phenomena are best tracked in terms of Acceptance or Trust. If this hypothesis is correct, then the negative attitudes of the users can still be interpreted as long-term transformations occurring within the space of Acceptance and Trust, without involving a third kind of attitude. At first glance, this explanation seems parsimonious because it prevents a seemingly unjustified proliferation of constructs. However, such an explanation is problematic because the general attitude toward the bot radically changed while leaving completely unmodified the attitudes that had already been assessed: Acceptance and Trust did not change at all even after the general attitude had switched from positive to negative. In other words, the users never felt any urge to revise their judgments concerning the quality of the interaction with the bot or the usefulness of its work.

The judgments concerning the interaction with the bot and its performance had already formed, were consistently positive, and had never been challenged by the negative beliefs expressed later by the users: The crew members frankly admitted that the bot had never stopped executing the tasks assigned to it efficiently, efficaciously, and reliably, even when unsupervised; the bot promptly served their goals and anticipated their needs, providing helpful information and making appropriate decisions; the bot still looked friendly and behaved in a pleasant, agreeable manner; it responded to human commands no less quickly than before; interaction with it had never ceased to be smooth and enjoyable; moreover, after two months of use, most of the crew members seemed still moderately inclined, as during the first week, to treat the bot as a teammate or companion (although—and this might be crucial—they did not necessarily treat it as the kind of teammate/companion they prefer to work or spend time with).

That is why the concurrent negative judgments about the bot can hardly be explained by simply relying on the classic TAM (which tracks responses concerning the perceived usefulness and ease of use of a specific Agent [27, 54, 58]). A correlation with the personality traits “openness” and “neuroticism” seems plausible and should be investigated [68], but the categorical convictions examined in this context seem only loosely linked to personality-related variables like “affinity for technology interaction” or “technology commitment” (which, again, can only track inclinations formed through experiences of personal interaction [69]).

We must conclude that the new negative attitude falls outside the scope of widely used constructs like Acceptance and Trust. The bot’s final rejection was not caused by the factors that typically determine a low level of Trust, as the users rejected the Agent even though (and even more so because) it operated effectively and reliably: paradoxically, it was exactly when the Agent operated competently and made appropriate decisions like a human that certain users felt overwhelmed or even under attack. We infer that the users started to ruminate on the idea that only humans deserve to make decisions and operate in such a role. Rejection was not caused by a low level of Acceptance either, as the quality of the interaction with the bot was never indicated as the source of the problem: thus, we infer that, although the Agent was perfectly integrated within the existing networks of human practices and did not directly interfere with the personal, professional, or social organisation of the users, the Agent’s technological, artificial nature was still identified by the crew as a potential threat or obstacle.

Our interpretation is that the crew members refused to use the bot when their pre-existing, deeply held negative beliefs finally manifested themselves, making the crew members’ low Tolerance discernible. This dynamic, however, did not correspond to a concurrent decline of Acceptance and Trust: The scenario is easily explained by assuming that the users’ low Tolerance became explicit—without affecting their Acceptance and Trust—when the situation activated their deeply held negative beliefs about the bot, triggering their latent fears concerning the drawbacks of allowing the bot in the hangar. A similar interpretation can be extended to any work or home environment in which assistive robots (e.g., artificial butlers or nannies) are introduced. In such scenarios, humans may ultimately refuse to use Agents, although no particular Agent ever did anything wrong or unpleasant.

We need to refine the received theoretical framework if we want to explain the origin and the content of these negative judgments: thus, we introduce the dyad “Tolerance/Intolerance” to unearth the narratives capable of increasing or undermining Adoption Propensity by conveying positive and negative beliefs about Agents of a categorical, normative nature. Tolerance is closely linked with the concept of Autonomy Estrangement, which we introduce here to denote the alienating condition characterized by pervasive and oppressive feelings of hostility and anxiety associated with Agents in general. We need the concept of Tolerance to capture the idea that preventing Autonomy Estrangement and building resilience to it requires limiting the impact of negative narratives about Agents.

4 Ambivalent Attitudes Motivated by Conflicting Experiences and Narratives

To establish construct validity, the notions mentioned so far need to be clearly distinguished. Tolerance is conceptually independent of both Trust and Acceptance in the sense that it does not systematically co-vary with them: A perfectly trusted and accepted entity is not necessarily well tolerated, and, in turn, a poorly trusted and accepted entity can be perfectly tolerated. Analogies with human–human interactions in the workplace are both methodologically justified and useful, in this context, due to the deep similarities between human–human and human–Agent attitudes [70]. Think, for example, of when poorly accepted and untrustworthy humans happen to be well tolerated: even an employee disliked by his/her colleagues due to his/her obnoxious personality and questionable work ethic can have a brilliant career if the management is not keen on hiring more qualified personnel, is unwilling to question the dysfunctional and self-exculpatory work culture, or is attracted to the ‘Brilliant Jerk’ type [71].

A similar constellation can arise when humans deal with non-autonomous technologies. Think of a cohesive group of users that, instead of adopting state-of-the-art technologies, intentionally keep relying on outdated, if not unreliable, systems. In this high Tolerance scenario, the fact that the obsolete systems are untrustworthy and unable to assure smooth and enjoyable interactions with the users does not necessarily prompt the hostility of the users, who may be inclined to keep their low productivity and poorly innovative profile. The motivations behind this innovation resistance can be extremely complex [72, 73]: often they have to do with a culture of leniency or simple reluctance to leave the familiar comfort-zone. We mention their impact on groupwork to exemplify how a highly tolerant attitude may be prompted by motivational trajectories that are entirely orthogonal to the Agent’s inherent trustworthiness and acceptability.

Having dissociated Acceptance, Trust and Tolerance puts us in a better position to account for the user’s mixed responses about the Agent, the causes of which might be unobvious to the HAI researchers and opaque to the users themselves. Attitudes towards robots are often ambivalent: i.e., the same user can have multiple conflicting attitudes toward the same robots [74, 75]. This is not surprising, considering the stratifications of implicit beliefs and dispositions and the multifaceted nature of social dynamics [76]: First, not all users are perfectly rational decision-makers (they may be biased against or in favour of Agents); Second, their professional code of behaviour may conflict with their personal values (the user can share their employer’s technology-friendly views, while thinking deep down that humans should always be prioritized); Third, users may prioritize their personal success over their team’s; Fourth, their first-hand experience may contradict their deeply held beliefs (including, but not limited to, prejudices and stereotypes), which they might be unwilling to revise despite contrary evidence [74].

The fourth point needs clarification. Various empirical studies confirm that, as effectively portrayed by writer Robert M. Pirsig in his popularly acclaimed novel [77], some individuals experience a mix of discomfort and revulsion when they have to interact with mechanical devices, a phenomenon that sociologists interpret as technology resistance [15]. It is well known that users and bystanders often experience a sense of eeriness and displacement when directly dealing with Agents [78, 79]. Studies on the uncanny valley effect indicate that tendencies to repulsion and resistance are even more acute in the case of hyper-realistic autonomous devices with android or zoomorphic features [80]. Crucially, while this kind of negative response can have a dramatic impact on HAI, it is not an indicator of scarce Tolerance for Agents. The logistic bot scenario illustrates a case of low Tolerance (that is, “Intolerance”) because it involves some covert, unobvious anxiety and hostility. Such a negative attitude does not arise from the unsuccessful performance of a certain Agent in a specific task or from their uncanny appearance; rather, it arises from deeply held general worries concerning how, in the long run, Agents may collectively transform human civilisation.

The logistic bot scenario supports some inferences about Intolerance. First, Intolerance is an attitude typically associated with offline (detached and mediated) rational or pseudo-rational evaluations and normative judgments about all Agents, not with the online (directly engaged) experiential encounter with a certain Agent. Second, Tolerance can be, but—importantly—does not need to be, affected by actual interaction experience, while Trust and Acceptance are always, necessarily, primarily motivated by perceptual and interactive experiences. The situational component is crucial and essential for Trust and Acceptance, but only secondary or irrelevant to Tolerance. Third, the distinction between Tolerance, on the one hand, and Acceptance/Trust, on the other, reflects a dissociation between the experiential level (the direct perception of and immediate interaction with a particular Agent) and the narrative level (the general beliefs about the Agents’ long-term impact on the work environment and its broader implications).

Negative beliefs and deeply entrenched convictions about Agents, possibly accompanied by negative feelings within the anxiety/hostility spectrum (e.g., frustration, envy, repulsion), are both a symptom and a cause of the fast erosion of Tolerance. These beliefs usually spring from and consolidate because of compelling narratives (popular stories, influential reports, and spreading opinions such as “robots are coming for our jobs” and “AI will make the most important decisions for us”), independently of, or even in spite of, one’s direct interactions with Agents. That is why Tolerant or Intolerant attitudes are less likely to fade away, regardless of what the Agent does, says, or looks like, compared to (un)Accepting and (un)Trusting attitudes, which are more situational and context-dependent. The mismatch between these two distinct, possibly conflicting levels of evaluation (interactive experience vs contentful narrative; context-dependent vs context-independent; dispositional and categorial vs situational and particular, etc.) may lead to more or less severe ambivalences and even result in cognitive dissonance.

To illuminate the crucial, and dramatically overlooked, role of Tolerance for Agents in HAI, we need to develop scales capable of rigorously capturing the specificity of Tolerance. Building on our previous inferences, we can identify at least six elements of specificity, to be assessed with appropriate queries.

The first difference is that Tolerant/Intolerant attitudes are assessed by evaluating the content of the beliefs and worldviews that subjects hold about Agents. That is, Tolerance reflects the compelling force of the narratives that convey this content, while Acceptance and Trust are assessed by evaluating the interactive experiences that humans may or may not have had with certain Agents (based on the frequency and quality of direct engagement).

Second, Tolerance is assessed by asking subjects to what extent they agree with categorical judgments about Agents in general, while Trust and Acceptance are measured by asking subjects to evaluate their particular instances of interaction with this or that Agent.

Third, Tolerance is assessed by investigating the subject’s normative beliefs, concerning—for example—whether Agents should or should not be designed in a certain way and whether they do or do not deserve to operate in certain roles, based on whether this is desirable, fair, and appropriate; conversely, Acceptance and Trust are assessed by asking questions of a descriptive nature, concerning Agents’ appearance and how they factually operate in certain roles.

Fourth, Tolerance is a dispositional attitude to be appreciated prevalently in the context-independent dimension, often acquired through learning, with little or no situational component; Acceptance and Trust, on the contrary, are primarily determined by the situational context in which the interaction with Agents occurs, and only secondarily by the possible dispositional and learned components.

Fifth, Tolerance is an attitude primarily appreciable in the offline dimension, because it is primarily and necessarily motivated by beliefs and judgments formed outside of, or in any case independently of, HAI—which is why Tolerance is assessed primarily on a long-term time scale or without precise temporal references; Acceptance and Trust, on the contrary, are primarily appreciable in the online dimension—which is why they are assessed on a short-term time scale, considering temporally specific events.

Sixth, due to its narrative and thetic nature, Tolerance primarily expresses a kind of contentful knowledge-that, concerning positive or negative facts about Agents; conversely, due to their episodic and experiential nature, Acceptance and Trust primarily attest to a direct performative familiarity developed through satisfactory or unsatisfactory instances of practical interaction with Agents.

In short, Tolerance is prototypically assessed by tracking the subject’s normative, categorical, acontextual narratives and beliefs, which reflect their factual knowledge of Agents in a detached and atemporal dimension; conversely, Acceptance and Trust are prototypically assessed by tracking the subject’s descriptive accounts of contingent, practical instances, which reflect the subject’s interactive competence in a situationally engaged and temporally specified experiential dimension. All these prototypical differences are reflected in the survey questions to be asked when assessing the user’s attitudes toward Agents. Table 2 shows that the questions used to assess Tolerance are substantially different from the questions used to assess Acceptance and Trust. These observations will be useful to draft a rudimentary protocol for the initial assessment of Tolerance.

Table 2 Specific scope of Tolerance and its key differences with other HAI attitudes

5 The Assessment of Tolerance: Negative Narratives, Beliefs, and Concerns

While Acceptance and Trust are experientially justified, the level of Tolerance is primarily determined by whether the subject is convinced that the nature of autonomous technologies is not only innovative and disruptive but also inherently estranging and dangerous. But what exactly are the negative beliefs about autonomous technologies that reveal a high or low Tolerance for Agents? To answer this question, we must investigate the narratives that motivate Tolerance or Intolerance.

These narratives are schematically mapped by existing technology resistance models [15], which are helpful at least to distinguish two macro-groups of concerns associated with autonomy: (A) concerns over disruptive societal changes and (B) concerns over Agents’ dominance. However, this level of granularity is not sufficient. These general concerns correspond to two narrative frameworks under which we have grouped five specific worries about autonomous technologies, each of them attested by at least five negative beliefs about Agents. The list presented in Table 3 may not (and is not meant to) be exhaustive, but it comprises a sufficient number of concerns to assess the subject’s degree of Tolerance for Agents.

Table 3 Concerns about autonomous technologies evidencing Intolerance for Agents

The five abovementioned concerns can be assessed by survey questions explicitly designed to measure whether and how intensely the subject holds the corresponding beliefs (Table 4). The final evaluation is based on a total score calculated by adding up the values of all the individual answers, each ranging from 0 to 4 to reflect the level of agreement.

Table 4 Designing a scale for Tolerance—sample questionnaire
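To make the scoring procedure concrete, the following is a minimal sketch of how such a questionnaire could be scored; the item labels, their number, and the absence of reverse-keyed items are hypothetical placeholders, not the published content of Table 4:

```python
from typing import Dict, List

# Hypothetical item identifiers, loosely grouped under the two narrative
# frameworks of Table 3; the real instrument contains more items.
TOLERANCE_ITEMS: List[str] = [
    "societal_disruption_1", "societal_disruption_2",
    "agent_dominance_1", "agent_dominance_2", "agent_dominance_3",
]

def intolerance_total(responses: Dict[str, int], max_value: int = 4) -> int:
    """Sum the 0-4 agreement ratings over all items.

    Higher totals indicate stronger endorsement of the concerns listed in
    Table 3 (i.e., lower Tolerance); whether the published scale reports this
    total directly or reverse-scores it into a Tolerance value is a choice
    not specified in this sketch.
    """
    for item in TOLERANCE_ITEMS:
        value = responses[item]
        if not 0 <= value <= max_value:
            raise ValueError(f"Response to {item!r} is outside the 0-{max_value} range")
    return sum(responses[item] for item in TOLERANCE_ITEMS)

# Example usage with made-up answers from one respondent.
example_respondent = {
    "societal_disruption_1": 3, "societal_disruption_2": 4,
    "agent_dominance_1": 2, "agent_dominance_2": 1, "agent_dominance_3": 4,
}
print(intolerance_total(example_respondent))  # 14 out of a possible 20
```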

As per our previous analyses, the five concerns examined here do not arise from direct interaction (i.e., a situated form of online engagement) with Agents but from detached judgments (including beliefs, narratives, or theories) that the user formed over an indefinite period of time that does not necessarily coincide with the time of real interaction.

All these concerns essentially reflect the same fundamental worry that technological systems could, sooner or later, govern humanity, estranging human lives from their natural purposes, responsibilities, challenges, and risks to the point of making human existence meaningless and unrecognizable. This kind of apocalyptic worry, epitomized by Kaczynski’s manifesto [2], informs in various manners the relationship with all technologies, but it is particularly forceful in the field of HAI, where it specifically characterizes Intolerance for Agents. That is because, much more than any other kind of technological system, Agents can be designed to act in an advisory, supervisory, or executive role: Unlike other automated or digital technologies, autonomous systems can explicitly or implicitly change the goals and priorities of humans; thus they also have the power to estrange humans from their pre-existing goals and priorities, luring them with fictitious and counterfeited imitations of authentic aspirations [13, 81].

Some anti-tech ideologies [2] prophesy a dystopic scenario: Because of autonomous technologies, our ways of living and working risk becoming more and more dependent on impersonal systems; however, such systems can only mimic or simulate, not share or understand, human goals and aspirations; thus, the tendency to offload personal decisions onto Agents progressively deprives humans of the related responsibility, privileges, and dignity [11, 13, 82]. When chronically exacerbated, this narrative motivates the fear that an alienated fate awaits those humans who have accepted to be governed by autonomous technologies: to become empty, purposeless shells who live risk-free but ultimately inauthentic and meaningless lives [81, 83].

Humans experience an alienating psychological state when they are overwhelmed by the prophetic belief that autonomous technologies are making or could make human life meaningless and empty [13, 84]. We call this state “Autonomy Estrangement”; it is characterised by anxious (disoriented, distressed) and hostile (frustrated, aggressive) feelings and wishes in relation to Agents and the dystopic narratives that involve Agents (listed in Table 5). Autonomy Estrangement involves various manifestations of robo-phobia [85] and techno-phobia [86], with varying degrees of severity ranging from mild scepticism toward Agents to a major paranoid obsession. Whether justified or not, Autonomy Estrangement correlates with a radical rejection of all Agents, which may or may not be expressed through distressed emotive responses (anxiety, disorientation, hostility, and frustration). Autonomy Estrangement strongly undermines Adoption Propensity because it fuels a hostile, rejecting attitude toward Agents motivated by negative judgments and narratives. Discussing whether the content of the underlying beliefs and worries (which are mapped by the Tolerance scale) is reasonably justified or not is beyond the scope of this paper, but it is obvious that the condition of distress and aggressiveness caused by Autonomy Estrangement is inherently undesirable and draws a problematic trajectory for HAI.

Table 5 Designing a scale for Autonomy Estrangement—sample questions

While closely related, Intolerance for Agents and Autonomy Estrangement are not conceptually the same: the former is a general attitude conditioned by beliefs, the latter a psychological state expressing an emotional and wishful response to those beliefs. Intolerant beliefs may contribute to causing estranged emotional states; in turn, estranged emotional states per se do not generate new narratives. They can, however, contribute to exacerbating beliefs conveyed by existing narratives. The relationship between Intolerance and Autonomy Estrangement is somewhat analogous to the relationship between a weak immune system and a viral infection. It is possible to inductively infer how the former operates based on what the latter is doing, but not the reverse. A strong immune system (high Tolerance) helps prevent or mitigate the infection, but it never provides 100% immunity. Also, we must be aware that other conditions (in our metaphor: low Trust and low Acceptance) may at times produce symptoms like those associated with the viral infection, which is why it is important to distinguish between different aetiologies. As we will see in the next section, the symptoms of anxiety and hostility are well evident in the radical ideologies that invoke rebellion against technology, including those endorsed by anti-tech and neo-luddite movements [2, 87].

6 Scope for Future Work: the Irrational Roots of Intolerance

Tolerance for Agents is a value-neutral construct, thus even very low or very high Tolerance values should not be attributed an intrinsically positive or negative valence: they are not meant to attract contempt or approval or be considered inherently deviant or pathological. Apart from patently aberrant cases, HAI researchers must assume methodological neutrality toward the beliefs and narratives that fuel Tolerant or Intolerant attitudes toward Agents, such as “Agents will make our world a happier place” or “AI is likely to kill all of us”. The Tolerance score provides only an indication of whether, and how robustly, such beliefs and narratives were endorsed by the subject. Tolerance is inherently limited in scope because it cannot tell us whether the beliefs endorsed by the subject have rational, justified bases (empirical evidence or valid inferences) or rather irrational, faulty ones (biases, invalid reasoning). If we lived in a Terminator-like apocalyptic scenario (machine Armageddon and killer-robots taking over human civilization), the surviving humans would probably have a very low Tolerance for Agents, and that would have to be considered perfectly justified.

However, we do not live in the Terminator’s world, which is why only a small minority of individuals fear machines as if they were going to take over at any moment. That is why, after establishing the in-principle neutrality of the concept of Tolerance, we must also remark that one of the most compelling reasons to introduce Tolerance in HAI research is that, in our world, extremely Tolerant or Intolerant responses can be symptoms of biased, irrational, and possibly fanatic attitudes. This suspicion is supported by independently collected data suggesting that some widespread categorical beliefs and normative judgments concerning Agents are based on preconceptions deriving from popular narratives (e.g., sci-fi representations) [88] that are supported neither by scientific knowledge nor by practical familiarity with Agents [39, 56]. It has been pointed out that the most negative beliefs often happen to be misinformed or even semi-delusional: some catastrophic predictions about autonomous technologies are certainly based on reasonable concerns, motivated by the unprecedented, unpredictable effects of scale production and the rapidly evolving nature of Agents, while other worries seem prevalently based on techno-apocalyptic clichés, which often appear deliberately fuelled by conspiracist propaganda. Distinguishing between the two is not always easy.

Thus, despite its inherent neutrality, Tolerance is a useful instrument to red-flag the kinds of extreme antagonistic attitudes that deserve to be investigated in greater depth. Our discussion motivates several hypotheses concerning these attitudes, which can and should be empirically tested in future studies using the concept of Intolerance.

The first hypothesis that deserves to be tested empirically is that low Tolerance levels increase Autonomy Estrangement, which in turn discourages the adoption of autonomous technologies. Our conjecture is that Autonomy Estrangement correlates negatively with Adoption Propensity and, in combination with other variables, co-varies inversely with Tolerance for Agents, so that a high Autonomy Estrangement score predicts a low Tolerance score. The anxiety and hostility produced as specific expressions of Autonomy Estrangement reinforce negative narratives in opinion groups and ideological movements characterized by low Tolerance, further exacerbating their collective negative attitudes toward Agents.
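To illustrate how this hypothesis could be operationalized, the following sketch runs a textbook mediation check on synthetic data. Everything in it is an assumption made for illustration only: the variable names (T for Tolerance, AE for Autonomy Estrangement, AP for Adoption Propensity), the effect sizes, and the use of simple linear regressions are placeholders, not a validated protocol.

```python
# Illustrative sketch (not a validated protocol): a synthetic-data check of
# Hypothesis 1, assuming Tolerance (T), Autonomy Estrangement (AE), and
# Adoption Propensity (AP) are measured as continuous composite scores.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulate the hypothesized chain: low T -> high AE -> low AP.
T = rng.normal(0, 1, n)                 # Tolerance score (standardized)
AE = -0.6 * T + rng.normal(0, 0.8, n)   # Estrangement rises as Tolerance falls
AP = -0.5 * AE + rng.normal(0, 0.8, n)  # Adoption Propensity falls as AE rises

# Step 1: total effect of Tolerance on Adoption Propensity.
total = sm.OLS(AP, sm.add_constant(T)).fit()

# Step 2: effect of Tolerance on the proposed mediator (Autonomy Estrangement).
a_path = sm.OLS(AE, sm.add_constant(T)).fit()

# Step 3: AP regressed on both T and AE; a shrunken direct effect of T
# alongside a sizeable AE coefficient is consistent with mediation.
X = sm.add_constant(np.column_stack([T, AE]))
mediated = sm.OLS(AP, X).fit()

print("total effect of T on AP: ", round(total.params[1], 3))
print("effect of T on AE:       ", round(a_path.params[1], 3))
print("direct effect of T on AP:", round(mediated.params[1], 3))
print("effect of AE on AP:      ", round(mediated.params[2], 3))
```

In a real study the synthetic scores would be replaced by validated composite scales, and a bootstrap or structural-equation approach to the mediation test would typically be used.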

The second hypothesis is that the most extreme judgments about Agents originate from militant discourses [81, 89] and thus, once examined carefully, might reveal the distinctive mark of an organized effort (motivated by ideological or political agendas) to discredit or undermine autonomous technologies, provoking a public reaction against their adoption. Proponents and sympathizers of the contemporary “anti-tech” discourse, like those who inspired Kaczynski, programmatically spread negative narratives about autonomous technologies, portraying Agents as capable of, and aiming at, taking over and subverting the social world [2, 84]. Likewise, the anarcho-primitivist and neo-luddite movements openly advocate antagonistic and possibly violent reactions against Agents based on anti-modern and naturistic principles that only partly align with Kaczynski’s doctrine [89]. Some social psychologists have interpreted these reactions as symptoms of a claustrophobic anxiety deriving from the primordial perception of technology as a dehumanizing force [90, 91]. The notion of Intolerance is useful because it can help researchers verify whether the strongest negative responses against Agents correlate systematically with conspiracist, possibly semi-delusional, discourses about autonomous systems, shedding light on the political and ideological sources of Autonomy Estrangement [92]. Although it resonates with well-known facts, this correlation remains, for now, a hypothesis to be validated empirically.

The third hypothesis is that Tolerance correlates with some of the variables that define an individual’s psychological profile, such as personality. Personality traits like “openness” and “neuroticism” [68] and personality-related variables like “affinity for technology interaction” or “technology commitment” [69] might amplify Tolerant or Intolerant attitudes. Alternatively, it is possible that they aggravate only the effects of Autonomy Estrangement. Other psychological and cultural variables (e.g., individual and collective preferences, anthropomorphic tendencies, innovativeness) might also help illuminate the mechanisms underlying Tolerance.
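One way to probe the second reading of this hypothesis (personality aggravating only the effects of Autonomy Estrangement, i.e., moderation rather than a direct correlation) is an interaction-term regression. The sketch below uses synthetic data and hypothetical variable names (AE, AP, neuroticism); it illustrates the statistical form such a test could take, not a prescription of our framework.

```python
# Illustrative moderation check on synthetic data (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "AE": rng.normal(0, 1, n),           # Autonomy Estrangement score
    "neuroticism": rng.normal(0, 1, n),  # personality trait score
})
# Synthetic outcome: AE lowers Adoption Propensity (AP) more strongly
# for participants scoring high on neuroticism.
df["AP"] = -0.4 * df["AE"] - 0.2 * df["AE"] * df["neuroticism"] + rng.normal(0, 0.8, n)

# "AE * neuroticism" expands to AE + neuroticism + AE:neuroticism;
# a significant AE:neuroticism coefficient would indicate moderation.
model = smf.ols("AP ~ AE * neuroticism", data=df).fit()
print(model.params)
print(model.pvalues)
```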

The fourth hypothesis is that an Intolerant attitude tends to produce less obvious and more covert behavioural manifestations than low Acceptance or low Trust. The reasons for this asymmetry deserve to be investigated. We suspect that, in many cases, a person with low Tolerance toward Agents can unconsciously inhibit, deliberately hide, or even explicitly deny the hostile and anxious impulses associated with Intolerance, in a way that resembles how racist people hide their intolerant beliefs about specific outgroups: not only can such beliefs be biased and irrational, but the racist person may also be unaware of holding them, inclined to deny holding them out of shame about their own racism, or simply unwilling to publicly admit to holding such views.Footnote 5 It could also be because Intolerance, unlike Trust and Acceptance, is often conceptualized as a structural weakness, specifically an undesirable susceptibility, an uncontrollable rigidity, or a lack of resilience. Most people are probably unwilling to admit they have a weakness, rigidity, or susceptibility of this kind.

The fifth hypothesis is that Intolerance for Agents shares a common root with intolerance for humans and possibly resonates with it. This is the kind of intolerance that, unfortunately, has historically tended to target certain ethnic groups and minorities.Footnote 6 Both intolerance for humans and Intolerance for Agents tend to be grounded in dogmatic beliefs, not direct experience: just as racist and sexist intolerance is not justified by the characteristic qualities and abilities of the target groups, so Intolerance for Agents is not necessarily motivated by an objective assessment of their functionalities and technical specifications; and just as racist intolerance is ultimately grounded in the unfortunate persuasion that one race is superior to another, so technological Intolerance primarily appeals to the dogma of human exceptionalism, even when Agents are more efficient and effective than humans in target roles and activities.Footnote 7 Thus, interrogating the irrational sources of intolerance for humans can be useful to identify the dogmatic roots on which at least some forms of Intolerance for Agents are based.

We suspect that research on Intolerance for Agents can learn a lot from existing studies about intolerance for humans. An intolerant attitude is more than just personal antipathy, irritation, or subjective aversion motivated by conflictual circumstances or personal incompatibility: even when intolerance manifests itself through idiosyncratic behaviours, stimulated by extemporaneous intuitions and superficial feelings, such behaviours, intuitions, and feelings are just the effects of a pre-existing discriminatory attitude, not its motivating cause. Thus, the ultimate source of intolerance is to be found in a systematic, immovable rejection that, even before being validated by the direct encounter with its live object, is already fuelled by the conviction that such an object is essentially inappropriate, deviant, or wrong. According to Verkuyten, Adelman and Yogeeswaran [79], this conviction may originate either from prejudice (i.e., a negative evaluation of an individual or group, based on group membership) or from deliberation (i.e., a misguided inference): either way, intolerance tends to be impermeable to evidence because it presupposes a normative (albeit poorly grounded) judgment that was formed and dogmatically endorsed before direct experience, or even in spite of it. Most often, intolerance is the effect of a negative stereotype about a whole category or group, i.e., a poorly motivated or lazy judgment that the intolerant person is eager to confirm without testing or revising [79].

The classical concept of toleration/tolerance developed in the context of human relationships, but it can in many ways illuminate the notion of Tolerance for Agents. It seems very likely that tolerance for humans and Tolerance for Agents share a common psychological and cultural root, and it is even possible that they reinforce one another. This remains to be investigated empirically, using specifically designed protocols. Nevertheless, some fundamental normative differences between Tolerance for Agents and tolerance for humans are probably beyond empirical investigation, being essentially metaphysical in nature or deeply ingrained in our value system: discriminating against humans based on their ethnicity or gender is always morally unacceptable, whereas discriminating between humans and Agents may be an appropriate choice justified on scientific, design, and moral grounds.

The sixth hypothesis to be investigated empirically concerns the prevention of irrationally biased Intolerant attitudes and the mitigation of extreme forms of Autonomy Estrangement. The history of racism and discrimination suggests that the intolerant mind is impermeable to critical revision and self-diagnosis, as intolerance for human neighbours, immigrants, or colleagues is unlikely to disappear even when these individuals are perfectly agreeable people and behave impeccably. The source of intolerant negative biases toward specific groups and races lies beyond utilitarian or hedonistic considerations [54]. Similarly, Intolerance for Agents is caused neither by the Agent’s demeanour and appearance nor by an objective incompatibility with the user: its primary cause remains the Intolerant user’s narrow perspective and rigid stance. What are the most effective remedies and mitigation strategies for a problem that lies more in the mind of the judger than in the objective features (behaviour, character, competences, etc.) of the judged? The first approach, to be tested empirically, consists in countering deeply held negative beliefs through supervised positive experiences with Agents, aiming not only at familiarizing users with autonomous technologies but also at dissolving categorical prejudices, deeply ingrained biases, and hostile narratives. A second approach, also to be tested empirically, postulates that the formation and entrenchment of discriminatory prejudices can be prevented by early educational interventions informed by values like diversity, equity, and inclusion [96]. This assumption would have to be tested against the competing assumption that Intolerance is primarily fuelled by personality traits, which are only partly shaped by education (especially when they appear hereditary, consolidated during early development, or continuously reinforced by the family environment).

7 Conclusions

With a view to concretely applying the concepts examined so far, let us summarize the main outcomes of our analyses: we call “Tolerance” (for Agents) the user’s insusceptibility or resilience to Autonomy Estrangement, that is, the suffocating sense of isolation, displacement, and frustration that human users may experience when they think (for right or wrong reasons) that their lives are, or risk being, controlled by Agents, i.e., by intelligent systems that, while created by humans, operate independently of human control.

Autonomy Estrangement is best theorized in terms of the (real or merely feared) impoverishment of human life allegedly caused by the dominating presence of automated monitoring and control systems. When suffering from Autonomy Estrangement, technology users experience anxiety/disorientation over disruptive changes (configured as disruption of their modus operandi and way of living and/or disruption of the network of socio-political practices) and/or hostility/frustration against technological dominance (configured as fear of being made redundant, losing authority, and/or losing human uniqueness).

For convenience, we use the term “Intolerance” to indicate the property inverse to Tolerance, that is, the user’s susceptibility or proneness to Autonomy Estrangement. Thus, Tolerance and Intolerance are inverse representations of the same phenomenological continuum, with Intolerance increasing when Tolerance decreases and vice versa. Whether the user has a Tolerant or Intolerant attitude toward a certain Agent primarily depends on the beliefs that the user holds about Agents’ capacity to impoverish human life (including, but not limited to, negatively biased narratives about autonomous technology) and only secondarily on how such beliefs are supported by experiences of direct interaction with Agents (i.e., the positive and negative experiences that respectively undermine and justify the belief that Agents can impoverish human life). Due to the self-reinforcing nature of discriminatory biases (via selective attention and confirmation), Intolerant beliefs about the Agent are likely to be reinforced by negative interactions without being equally corrected by positive interactions.
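If Tolerance is measured on a bounded scale with endpoints T_min and T_max (a scoring assumption on our part, not something fixed by our analyses), the inverse relation between the two constructs can be written as

\[
\mathrm{Intolerance}(u) \;=\; (T_{\max} + T_{\min}) - \mathrm{Tolerance}(u),
\]

so that, on a normalized 0-to-1 scale, Intolerance(u) = 1 − Tolerance(u) and the two scores of a given user u always sum to a constant.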

The difference between Acceptance, Trust, and Tolerance is crucial. Not all Intolerant users display their negative emotions when dealing with Agents and, conversely, the presence of negative emotions in a user is not a sufficient reason to infer that they poorly tolerate an Agent, as the same emotions could be triggered by other dynamics (e.g., low Acceptance or low Trust, when the Agent does not operate reliably or is not easy to interact with). Combining different diagnostic heuristics, we can develop three independent multi-dimensional scales that concretely distinguish between Tolerance, Acceptance, and Trust. Table 6 summarizes their distinctive features, sketching the variables that would need to be considered when developing and validating an appropriate measurement instrument.

Table 6 Synoptic table of the differences between key attitudes toward Agents
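Purely as a sketch of the scoring logic such an instrument might use (the item names, groupings, and 5-point response format below are hypothetical placeholders, not the validated scales that Table 6 only outlines), the three constructs would be scored as separate composites and never collapsed into a single index:

```python
# Minimal scoring sketch with hypothetical item names and a 5-point Likert format.
from statistics import mean

# Hypothetical responses from one participant, grouped by construct.
responses = {
    "tolerance":  {"tol_1": 4, "tol_2": 5, "tol_3": 4},   # e.g., belief/narrative items
    "acceptance": {"acc_1": 2, "acc_2": 3, "acc_3": 2},   # e.g., usefulness/ease-of-use items
    "trust":      {"tru_1": 3, "tru_2": 4, "tru_3": 3},   # e.g., reliability/competence items
}

def score(items: dict[str, int], reverse_keyed: frozenset[str] = frozenset(),
          scale_min: int = 1, scale_max: int = 5) -> float:
    """Average the items of one subscale, flipping any reverse-keyed items."""
    values = [
        (scale_min + scale_max - v) if key in reverse_keyed else v
        for key, v in items.items()
    ]
    return mean(values)

# Three separate scores, reported side by side rather than summed.
profile = {construct: score(items) for construct, items in responses.items()}
print(profile)   # approximate output: tolerance ≈ 4.33, acceptance ≈ 2.33, trust ≈ 3.33
```

Keeping the three composites separate in the analysis, rather than merging them, is what makes the differential diagnosis between low Tolerance, low Acceptance, and low Trust possible.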

An important reason to introduce Tolerance in HAI research is that the distinction between Tolerance, on the one hand, and Acceptance and Trust, on the other, reflects a key distinction between different kinds of stakeholders: the users, i.e., those who will have to work with (and possibly be replaced by) Agents; and the owners, i.e., decision-makers like employers or political authorities, who purchase Agents because they find their adoption convenient. Their perspectives do not necessarily align and often tend to be diametrically opposed. Whereas an account that relies on Acceptance and Trust alone might conceal this divergence, a Tolerance-based account is likely to reveal any polarisation between these two groups.

HAI researchers tend to focus on whether Agents function efficiently and reliably in general, as if efficiency and reliability made them inherently desirable to every kind of stakeholder. However, researchers cannot assume that users and owners have the same goals and expectations, as if the satisfaction of the users automatically implied the success of the owners and vice versa. This inference is unwarranted because it builds on the tacit assumption that all stakeholders are unbiased, rational agents whose individual interests and goals perfectly align with those of their organisation or group. In many everyday real-life scenarios, individuals may even have personal motivations (not necessarily ethical or rational) to work against their teammates, sabotaging team outcomes. HAI researchers should therefore not overlook such individual motivations, because neglecting the influence exerted by envy, fear, egoism, and prejudice inevitably means becoming unable to account for several socio-psychological dynamics associated with technological unemployment and with the conflicts experienced by humans forced to compete with Agents [66, 67].

HAI research needs to introduce the notion of Tolerance precisely because a model relying exclusively on Acceptance and Trust cannot account for the whole spectrum of motivations (including personal preferences, worries, and aspirations) that determine Adoption Propensity, beyond what the users or the employers are willing to explicitly state or to confess to themselves. An Agent that scores very well in Acceptance and Trust is an Agent that an employer or HR manager would find very promising and desirable [25], but it would not necessarily be well tolerated by other users, precisely because these users know that the goals of the company (profit maximisation, minimisation or externalisation of liabilities, reduction of dependency on human labour) do not necessarily coincide with their individual goals. In this case, it is the user, not the employer, who has reasons to see the Agent as a competitor or a usurper, because the Agent has all the relational qualities and professional skills necessary to do the user’s job in a way that satisfies the expectations of the employer.

Tolerance is the construct we need to account for these dynamics because it is best equipped to map judgments and beliefs about Agents (including prejudices and stereotypes), independently of how users express them during their experience of direct interaction with the Agent. An account that tracks Tolerance, in addition to Acceptance and Trust, is preferable because it is in a position to represent all the parties and interests involved.