Abstract
With the increasing abilities of robots, the prediction of user decisions needs to go beyond the usability perspective, for example, by integrating distinctive beliefs and trust. In an online study (N = 400), first, the relationship between general trust in service robots and trust in a specific robot was investigated, supporting the role of general trust as a starting point for trust formation. On this basis, it was explored—both for general acceptance of service robots and acceptance of a specific robot—whether technology acceptance models can be meaningfully complemented by specific beliefs from the theory of planned behavior (TPB) and the trust literature to enhance understanding of robot adoption. First, models integrating all belief groups were fitted, explaining substantial variance at both levels (general and specific) and supporting a mediation of beliefs via trust on the intention to use. The omission of the performance expectancy and reliability beliefs was compensated for by more distinctive beliefs. In the final model (TB-RAM), effort expectancy and competence predicted trust at the general level. For a specific robot, competence and social influence predicted trust. Moreover, the effect of social influence on trust was moderated by the robot's application area (public > private), supporting situation-specific belief relevance in robot adoption. Taken together, in line with the TPB, these findings support a mediation cascade from beliefs via trust to the intention to use. Furthermore, incorporating distinctive instead of broad beliefs is promising for increasing the explanatory and practical value of acceptance modeling.
1 Introduction
Service robots are rapidly advancing to the edge of broad social dissemination in domains of public and private everyday life. This ‘new breed’ of robots is more than automated technology. They interact in social settings, react and adapt to people and situations, and thus are subject to emotional and social responses on the side of their human interaction partners [1, 2]. Different users commonly perceive robots differently (e.g., based on their robot-related attitudes; [3,4,5,6,7]), and while certain users might accept and use a robot, others might reject it. Also, different application areas—e.g., private households vs. public spaces—and levels of autonomy place additional demands on users and human–robot interaction (HRI) design. Therefore, understanding the psychological processes of how people perceive these new technical agents, build up attitudes and expectations, and arrive at decisions in interacting with robots is meaningful for predicting decision-making and acceptance in HRI. This, in turn, provides a meaningful basis to inform acceptable, efficient, safe, and human-centered design of robot appearance and interaction strategies (e.g., [8,9,10]), as well as dissemination strategies at a societal level.
The prediction of users' intentions to interact with and to use technology has been a research focus for many years, with essentially two predominant traditions: the technology acceptance models (the different versions of the technology acceptance model, TAM; e.g., [11,12,13,14]) and frameworks incorporating trust as a main antecedent of technology-related behavior (e.g., [15, 16]). While these two perspectives share a common underlying theoretical tradition, they are typically discussed separately. A theoretical integration of the two perspectives is promising for better understanding the psychological processes associated with HRI and for facilitating a positive integration of robots. The shared theoretical foundation lies in attitude-to-behavior models, which theoretically substantiated the study of the cascade from beliefs via attitudes to behavioral intentions—particularly the theory of reasoned action (TRA; [17]) and the theory of planned behavior (TPB; [18]) as an advancement of the former. The TPB focuses on psychological variables affecting an intended behavior [17, 19]. The basic assumption is that behavior is essentially influenced by the intention to perform that behavior. This intention is assumed to build on the three core constructs of the TPB—social norm, attitude towards the behavior, and perceived behavioral control—which, in turn, are based on associated beliefs. The TPB was transferred to the domain of technology acceptance by the TAM and its various advancements. The Unified Theory of Acceptance and Use of Technology (UTAUT) is a recent and widely used derivation of the TAM tradition. However, it is the result of a scientific process spanning several decades, in which theorizing moved away from the original idea of attitude-based behavior prediction in the sense of the TPB.
Presently, only a partially coherent conglomerate of technology acceptance models exists, not well integrated in terms of modeled constructs, underlying definitions, and measurement of constructs. In particular, there has so far been no systematic investigation of the belief structure that underlies the adoption of robots. If constructs are not well defined and theoretically integrated, acceptance models like the UTAUT provide only restricted value for understanding the psychological foundation of decisions in HRI (see also [20]). This hinders deriving meaningful design implications, reliably predicting user behavior, and cumulatively improving the scientific understanding of technology acceptance. A promising direction here is replacing overlapping, atheoretical beliefs with more distinct and theoretically founded ones and integrating these in the beliefs-attitudes cascade from the TPB to predict the intention to use. Thereby, a meaningful extension is the inclusion of trust as a mediator.
1.1 Goal and Contribution of this Research
Against this background, this research aims at an integration of beliefs from different theoretical streams (TAM, UTAUT, trust) into the original theoretical structure of the TPB. In doing so, the general assumption of attitude-based definitions of trust in automation (e.g., [15]) that trust mediates the relationship between beliefs about technology and the intention to use is empirically tested.
As a first step in understanding the relevance of trust for robot adoption, this study investigates how general trust in service robots affects trust in and the intention to use a newly introduced robot. Also, the relevance of beliefs and the comparability of the belief structure at these two levels of trust specificity (general trust in service robots and trust in a specific robot) and for the intention to use is explored. From this, an integrated trustworthiness beliefs model for robot acceptance (TB-RAM), balancing model parsimony and predictive power, is empirically explored, optimized, and validated in a two-part online study. Subjects evaluated their perceptions of (a) service robots in general as a category and (b) a specific assistance robot. Additionally, the moderation of the relationship between the identified beliefs and trust in automation by situational variables and robot characteristics was explored. More specifically, the role of social influence in different social settings (private vs. public) and of perceived behavioral control at different levels of robot autonomy (partly vs. fully automated) was experimentally investigated.
This work contributes to clarifying the role of beliefs from three theoretical streams (UTAUT, trust beliefs, TPB) for trust and the intention to use robots. Going beyond previous research by modeling specific instead of overarching beliefs to predict acceptance and investigating their relative predictive power in different settings and for different robots, this research builds a foundation for human-centered HRI design. Moreover, focusing on trust—a theoretically differentiated and empirically well-studied variable—as a psychological mediator between the formation of beliefs about robots and the intention to use them offers insights into the psychological processes during robot familiarization. Based on this, we discuss challenges of acceptance modeling in HRI, propose strategies to overcome them, and apply these strategies to modeling the acceptance of service robots in general as well as of specific robots in two application contexts.
2 Theoretical Background
In the tradition of technology acceptance modeling, numerous studies have been conducted that predicted behavioral decisions in the interaction with technology on the basis of intentions. Acceptance of technology is commonly defined as the intention to use (or interact with) a robot (e.g., [21]). As the acceptance of robots is a central prerequisite for their adoption, the psychological process in which acceptance is formed and the variables affecting this process are of central interest for a human-centered HRI design. In the following, related literature is reviewed along a) technology acceptance models, b) trust in automation and robots, and c) integrated trust-acceptance models.
2.1 Technology Acceptance Modeling: the TAM and the UTAUT
To predict usage behavior (i.e., acceptance) or rejection of new technology and increase usage frequency, to date, numerous competing models have been developed (e.g. [12, 14, 22,23,24]). Most models are based on the TAM [11,12,13], describing motivational processes that mediate between technology characteristics and user behavior originally in the domain of information systems in organizational contexts. The basic assumption of the TAM is that the intention to use technology is based on two fundamental determinants: the perceived usefulness—the assessment of the expected outcomes of the technology—and the perceived ease of use—whether users believe that they have the necessary skills and resources to use the technology successfully [11,12,13].
To formulate a consensus among the numerous acceptance models that emerged after the TAM, Venkatesh and colleagues [14] proposed the UTAUT with four subjective variables influencing the intention to use a system. Performance expectancy largely coincides with perceived usefulness from the TAM and is defined as "the degree to which an individual believes that using the system will help him or her to attain gains in job performance" ([14], p. 447). The construct reflects external motivational factors affecting task accomplishments and outcomes through expected usefulness and benefits. Effort expectancy, defined as "the degree of ease associated with the use of a system" ([14], p. 450), is composed of three constructs from different models, one of which is the perceived ease of use. Social influence reflects "the degree to which an individual perceives that important others believe he or she should use the new system" ([14], p. 451). The fourth predictor of the UTAUT—facilitating conditions—refers to beliefs about the organizational and technical infrastructure supporting system use [14].
The application contexts of the models span a wide range of different technologies, including word processors [13], telemedicine technologies [25], gerontechnology [26], online banking [27], and vehicle monitoring systems [28]. Several meta-analyses quantified the predictive validity of the TAM and the UTAUT supporting substantial variance explanation for the intention to use technical systems [29,30,31,32,33,34,35]. The TAM was also transferred to HRI for investigating the acceptance and usage of specific types of robots, for certain tasks and contexts as well as for specific user groups (e.g., [20, 36,37,38,39,40,41,42,43,44,45]). Examples are the Almere model [36], the persuasive robots acceptance model (PRAM, [44]), and the robot acceptance model for care (RAM-care, [38]).
State-of-the-art research methods in robot acceptance modeling are quite heterogeneous. While some of the mentioned studies apply online surveys with pictures or videos of robots as stimulus material (e.g., [36,37,38]), others investigated robot acceptance in first encounters in laboratory studies with real robots (e.g., [44]) or the development of acceptance over time (e.g., [36]). Based on the TPB, the usual procedure for deriving these models is to first select relevant beliefs for the particular application domain of the robot, present a robot stimulus, and then query the determinants of the TAM in the form of self-report questionnaires. Also, commonly, in these studies, the original model is modified and supplemented by additional factors specific to HRI and the application area (e.g., social presence, compliance, reactance, or perceived technology unemployment).
2.2 Restricted Applicability of the TAM/UTAUT to HRI and Directions for Enhancing the Value of Acceptance Modeling in HRI
The variety of modifications of the TAM and UTAUT in the field of HRI indicates that the variables of the original models are not specific enough and that, therefore, their value for enhancing the understanding of the processes underlying decision-making in the interaction with robots might be restricted (see e.g., [20, 46, 47]). This is not surprising, as HRI is considerably more dynamic, social, and interactive than the original application areas of the TAM and UTAUT. Also, both models aim to maximize model economy, using only a small number of variables to predict technology adoption rather than increasing the understanding of the characteristics of the systems and the psychological processes leading to adoption (e.g., [46]). Accordingly, for these and other reasons, these models have restrictions that make theoretically sound derivations for the design of complex AI-based technologies, and scientific knowledge gain, fairly difficult [46, 47].
The current need for improvement of acceptance modeling in HRI relates to three challenges: a) the restricted number of determinants of use and, related to this, b) overly broad and inflexible definitions of these determinants, and c) the limited theoretical integration of technology acceptance models with their original psychological foundations in the TPB. These challenges are elaborated in the following along four general strategies to overcome them by deriving, building, and empirically validating acceptance models in HRI:
1. Modeling distinctive, theoretically meaningful beliefs instead of broad, statistically derived beliefs.
2. Ordering the predictors for acceptance and behavior in accordance with the theoretical structure of the TPB.
3. Developing acceptance models at different levels of specificity.
4. Integration of attitudes towards robots as a mediating level (e.g., trust) between the level of beliefs and the intention.
Modeling distinctive and theoretically meaningful beliefs. Regarding the first and second challenge, the restricted number of determinants of technology use results in models that are too inflexible to be practically relevant [46], especially for more sophisticated, autonomous technologies like service robots outside the work and organizational context [20]. This is reflected in the large number of modified models for specific contexts, technologies, and user groups, to which various variables have been added to (successfully) increase the explained variance of technology use (e.g., [20, 36, 41, 42, 48,49,50]). As service robots can be viewed as interaction partners with socially adaptive capabilities beyond mere technological tools, the proposed determinants of technology use might not satisfactorily explain the processes leading to (affective) user responses, technology adoption, acceptance, and a positive user experience in HRI. Although the predictive power of the constructs is indisputable, it is difficult to assess and interpret their meaning because of the conceptual difficulty of distinguishing them from each other and from outcome variables. This criticism applies in particular to performance expectancy, which can hardly be separated theoretically from acceptance itself, due to its broad definition and its measurement with items that are not easily distinguishable from acceptance scales. This is related to the point raised by Straub and Burton-Jones [52] that a reasonable person would rather not indicate using a system which s/he does not find useful. Accordingly, the authors of the UTAUT themselves acknowledge an overlap and shared variance between its constructs (e.g., facilitating conditions and effort expectancy; [14, 53]). Also, facilitating conditions appear to be only vaguely defined and so system- and domain-specific that the items are difficult to answer and apply practically.
Given the wide range of applications and functionalities, the beliefs underlying user acceptance need to be reconsidered in terms of their meaningfulness and informativeness for AI-based technology like service robots. In this regard, beliefs like performance expectancy might be too global to provide value for understanding the origin of technology acceptance in psychological processes and thus be replaced by more specific beliefs from psychological theory like the TPB and trust literature.
Ordering the predictors for acceptance and behavior in accordance with the theoretical structure of the TPB. The TAM originally evolved from attitude-to-behavior models (TRA and TPB; [17,18,19, 54]), which assume that the intention to engage in a behavior is essentially influenced by beliefs and the attitude towards the behavior. While with the transfer of the TPB to the TAM several theoretical assumptions were changed, with the additional modifications of the UTAUT the theoretical basis was further diluted (e.g., [46]). This is reflected in (data-driven) model modifications, in which neither the inclusion of additional variables nor their placement in the process is always sufficiently justified theoretically (e.g., [36, 37, 39, 40, 42]). Essentially, attitudes were removed from the original TPB cascade, leaving behind the essential differentiation between attitudes and beliefs in psychological research [55,56,57]. As the three-step mediation cascade of the TPB (beliefs-attitudes-behavioral intention) is an essential theoretical contribution of the model, the omission of the mediating attitude level might be one explanation for the reported small effect sizes of the relationship between UTAUT variables and the intention to use (except for performance expectancy; see e.g., the meta-analysis by [51]). Therefore, the (re)integration of attitudes as a mediator at a more global and affective level and the reordering of the variables at the three original levels of the TPB might increase insight into the psychological processes of HRI adoption and has repeatedly been called for (e.g., [30, 31])—even by the authors of the UTAUT [58]. In line with this, there are already approaches in the field of robotics integrating formerly excluded TPB constructs (e.g., social norm, attitude, and perceived behavioral control) to predict the intention to use and acceptance of robots (e.g., [20, 36, 38, 39, 42,43,44, 59]).
Developing acceptance models for different levels of specificity. Attitudes vary in their generality vs. specificity depending upon the object they refer to [60, 61]. While, for example, the attitude toward the future is rather general as it refers to a whole class of objects, events or stimuli, the attitude toward a certain technology (e.g., robots) can be considered as comparably specific. Beyond that, there may be even more specific attitudes for a particular representative of this category, such as for a specific privately-owned robot. In HRI, a prominent attitude variable that has often been investigated is negative robot attitudes (e.g., NARS; [3,4,5,6, 62]). Also, trust in automation has prominently been conceptualized as an attitude [15]. Therefore, trust might constitute a promising mediating variable to gain an understanding of the psychological processes between the construction of beliefs about robots and actually deciding how to interact with them. In this research, trust towards service robots is investigated at two levels of specificity: (a) general for the category of service robots and (b) specific for a certain assistance robot.
Integrating trust as a mediating attitude. Several authors found a relationship between trust and the intention to use and good (or improved) model fits for acceptance models that included trust (among other variables, e.g., [28, 58, 63,64,65]). Therefore, the integration of trust and antecedent trust beliefs might contribute to the theoretical foundation and meaningful applicability of acceptance models to understand and predict behavior in HRI. In this work, on the basis of an integration of the UTAUT, the TPB, and trust, the intention to use service robots in general and the intention to use a specific robot are predicted by a model with the three levels beliefs, attitudes, and intention to use. In the following, the relevance of trust for understanding the adoption of robots is discussed.
2.3 Trust in Automation and Trust in Robots
Mayer and colleagues [66] define trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" (p. 712). Trust has been transferred to human-technology interaction since the late 80s (e.g., [67, 68]). The perspective on trust in automation as an attitude has gained momentum in recent years (e.g., [15, 69, 70]). In particular, the definition of Lee and See [15] is often referred to in this context, defining trust "as the attitude that an agent will help achieve an individual's goal in a situation characterized by uncertainty and vulnerability" (p. 51). Trust was hypothesized to be based on expectations and beliefs about how the trustee will behave (e.g., [16, 66, 71]). Mainly, these are built up by perceived characteristics constituting the perceived trustworthiness of the trustee (e.g., [72]). In this regard, trust in robots has been conceptualized as a subjective variable that is established in the psychological learning process, in which expectations are built up from provided information about a robot prior to and during the interaction (e.g., [3, 16, 73]) and has been found to be a potent subjective predictor of behavioral outcomes in HRI (e.g., [74, 75]).
For facilitating an effective, safe, and comfortable interaction with automated technology, e.g., robots, a calibrated level of trust—a situation in which the degree of trust is in line with the actual capabilities of the technology [67, 76]—represents an important design goal. But not only the degree of trust (no trust—some trust—a lot of trust) but also the specificity of trust is subject to trust calibration (e.g., [15]). For example, trust can be related to all members of a category of technological systems (e.g., service robots in general), a specific representative of such a category (e.g., a specific robot), or a certain function of a robot (e.g., grasping an object with a manipulator). At this point, the role of general trust for service robots as an overarching category for the formation of trust in specific robots (exemplars of this category) has not been investigated sufficiently and thus is addressed in this research.
A manifold of variables has been found to affect trust in robots (e.g., robot-, human-, and context-based variables; [69, 77]). In the face of this manifold of influencing variables, and although trust has been widely recognized as a central construct in explaining human interaction with technology, few theoretically grounded models explaining the formation and development of trust and its relation to behavioral decisions have been presented—and even fewer have been empirically validated. Similar to the TAM, the transfer of the TRA to explain the formation of trust as a specific attitude towards technology was brought forward by Lee and See [15]. In their model, they simplify and adapt the TRA interrelations to explain trust-based decisions in the interaction with automated technology. The main assumptions of the model have been extended (e.g., [70]), integrated, and in part empirically supported (e.g., [16]). Yet, at this point, there has been no integrative investigation of the central proposed beliefs-trust cascade and the role of trust beliefs for the emergence of the trust attitude in the technical domain. Based on the work on interpersonal trust by Rempel and colleagues [78] and on trust in automation (e.g., [67, 79]), this research investigates the relative role of trust and antecedent trust beliefs for explaining the intention to use service robots, in comparison to the UTAUT beliefs.
Several studies integrated trust into the TPB, TAM, and UTAUT [24, 28, 58, 59, 63,64,65, 80]. For example, Buckley and colleagues [81] showed that trust explained additional variance in the intention to use an automated vehicle over both the TAM and TPB constructs. In a meta-analytic approach, Wu and colleagues [34] showed high correlations between trust and the TAM predictors. In the same manner, trust was integrated as a predictor of robot usage and acceptance in HRI. However, while some studies support an effect of trust on the intention to use a robot (e.g., [37,38,39,40, 58]), others do not [36, 38]. These contradicting findings might be explained by the widely varying structure and placement of trust in the models. While some authors model trust as a direct antecedent of the intention to use, others model trust as an antecedent of the TAM beliefs or along with constructs from the TPB such as attitudes. In line with Lee and See's [15] definition of trust as an attitude and the original TAM-trust models (e.g., [28, 65, 80]), this study investigates the role of trust as a mediator between beliefs about robots and the intention to use service robots.
Over the years, a manifold of different models and structures of trust beliefs in different research streams on trust-related behavior have been proposed (e.g., [67, 68, 71, 72, 79]). Prominently, Mayer and colleagues [66] differentiate ability, benevolence, and integrity as factors influencing trust. This differentiation is in line with the traditional view that trust is built on the basis of different belief facets capturing the competence of the trustee on the one hand and the trustee's character on the other hand (e.g., [72, 82]). This research focuses on ability-based trustworthiness beliefs about the trustee's performance based on its "capabilities, knowledge, or expertise" ([79], p.1244). This facet of trustworthiness beliefs is best captured by the performance level of trust attributions, which Lee and Moray [79] propose based on the work of Rempel and colleagues [78] and Muir [67, 68] as the expectation of a system's "consistent, stable, and desirable performance or behaviour" ([79], p. 1246). In line with the discussion in Lee and See [15], who define this factor as referring "to the current and historical operation of the automation and includes characteristics such as reliability, predictability, and ability […] [m]ore specifically, […] to the competency or expertise as demonstrated by its ability to achieve the operator's goals" ([15], p.59), in this study, the expected reliability, understandability, and competence are included as trust beliefs (see also [68, 83,84,85]).
Reliability was defined by Stowers and colleagues [86] as the consistency with which someone completes tasks. Dragan and colleagues [87] explained predictability as a characteristic of robots to make their "intentions clear to its human collaborator". Understandability—which is closely related to predictability—was defined by Madsen and Gregor [83] as the extent to which "the human supervisor or observer can form a mental model and predict future system behavior" (p. 11). Competence describes the perceived ability of a robot to perform its task correctly and efficiently. McKnight and colleagues [88] model system predictability and competence as affecting trust. Merritt and Ilgen [89] found that trustworthiness beliefs mediate the relationship between automation characteristics and trust in the systems. In this study, the included trust beliefs are used to extend the perspective of the UTAUT by trust as a variable that might shed more light on the psychological processes in which learning about robots and building expectations and beliefs about them lead to decisions in the interaction with robots. Thereby, reliability is—comparably to performance expectancy—viewed as a belief that is conceptually very closely related to trust. It describes the perceived trustworthiness of a technological system at a very general level, covering aspects of both "can-do" and "will-do" expectations (a possible jingle-jangle fallacy). Therefore, in this study, it is investigated whether the two other modeled trust beliefs (competence and understandability) are sufficient to predict trust.
2.4 Influences of Situational Variables and Robot Characteristics
A broad array of robot characteristics and situational characteristics has been found to affect trust (e.g., [69, 77]) and acceptance (e.g., [21, 90]). In the same manner, they might also affect the relative importance of beliefs for both outcomes. This is in line with basic theorizing of the TRA and TPB, postulating that the relative importance of predictors can vary across situations and behaviors [91]. It can be concluded that the effects of specific beliefs on attitudes towards robots can vary for different types of robots, tasks, or user groups (e.g., [21, 41, 92]), which is also emphasized in reviews on variables affecting robot acceptance (e.g., [48,49,50, 90]). Also, trust is essentially conceptualized as a variable affecting decision-making and behavior under certain situational circumstances—namely, situations in which the trustor feels uncertain and vulnerable (e.g., [93]). While many of the relationships between beliefs about technology and trust might be generalizable, the role of some beliefs for trust might change depending on the character of the task and robot under consideration (similar to moderation effects of user characteristics, e.g., [14, 94]). In the context of HRI, this might especially be the case for the two TPB beliefs social influence and perceived behavioral control, as their relative relevance might change across settings and combinations of robots and tasks.
Social influence. The importance of the interaction context of HRI has been underlined in research in the domains of care robots [38], social robots [20], service robots [45], and public robots [8, 37]. In this study, it is investigated if the relationship of social norms and trust changes as a function of the interaction context. If the process and outcome of an HRI task is not publicly visible, the perception of the robot might not be as strongly affected by what others think about it (the social influence belief). Accordingly, the importance of social influence should be higher in contexts in which relevant others can observe and judge HRI. Therefore, in the present study, the application context (public vs. private) was manipulated as a possible moderator of the relative importance of social influence for trust.
Perceived behavioral control. Additionally, it was investigated if the role of the belief about perceived behavioral control—defining the scope of influence the user has on the task outcome through their behavior—is affected by the level of autonomy of a robot. While, in general, systems providing some kind of control seem to be trusted more [70, 95], at this point, no simple relationship between trust and automation level has been coherently supported (e.g., [96, 97]). In a study with a hospital transport robot, higher perceived control was positively related to patients' trust and intention to use it [73]. This points in the direction that, beyond the objective possibility of intervention, the perception of control might help to gain a better understanding of the nature of the relationship between automation level, trust, and the intention to use a robot. In this sense, more negative attitudes towards robots were found in situations in which people perceived themselves to have lower control over a robot with high agency [98]. While with low-autonomy robots (e.g., teleoperation), the users' behavior strongly influences the robot's task outcome, this is not the case for highly autonomous robots. It is assumed that the perception of one's own ability to control the interaction with the robot has a higher relevance for trust in robots with lower rather than higher levels of autonomy.
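Statistically, the moderation hypotheses above correspond to an interaction term in a regression of trust on a belief and the situational factor. The following is a minimal illustrative sketch on simulated data (all variable names and effect sizes are invented for illustration and are not the study's data or analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Simulated predictors: a social-influence belief (standardized) and a
# public (1) vs. private (0) application context.
social_influence = rng.normal(0.0, 1.0, n)
public_context = rng.integers(0, 2, n).astype(float)

# Trust depends more strongly on social influence in the public context;
# the interaction coefficient (0.5) is the moderation effect to recover.
trust = (0.2 * social_influence
         + 0.1 * public_context
         + 0.5 * social_influence * public_context
         + rng.normal(0.0, 1.0, n))

# OLS with an interaction term: trust ~ SI + context + SI:context
X = np.column_stack([np.ones(n), social_influence, public_context,
                     social_influence * public_context])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)

print(f"interaction estimate: {beta[3]:.2f}")  # should be close to 0.5
```

A significant positive interaction coefficient would indicate, as hypothesized, that the belief-trust link is stronger in the public than in the private context.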
2.5 Investigated Trust Beliefs Model
The presented study starts from the basic idea of taking the UTAUT back to its theoretical basis in the TPB and integrating trust as an attitude in a theoretically more rigorous manner. To overcome the above-mentioned limitations, a trustworthiness beliefs model integrating the TPB, UTAUT, and trust perspectives is proposed (see Fig. 1). In line with the TPB, attitudes are assumed to be established substantially by beliefs, which can be understood as subjective representations of the probability that certain attributes are linked to a specific object (e.g., the object ‘robot’ has the attribute ‘competent’; [60, 91]). Well-established trust beliefs (reliability, competence, understandability) are integrated along with core UTAUT beliefs (performance expectancy, effort expectancy, social influence) and TPB beliefs (social influence and perceived behavioral control). In accordance with the original TPB structure, all beliefs are modeled to directly influence trust, which in turn mediates the relationship between beliefs and the intention to use service robots in general and the specific robot investigated. In contrast to previous models, the aim was a small set of distinct beliefs that generalize across various robots and application contexts. To this end, in addition to using a very simple robot stimulus (a prototype sketch of a mechanical service robot), the model was validated at two levels of specificity and tested in two application contexts (public vs. private) with a considerably large sample.
As a first step in the model evaluation, the individual relevance and combined predictive power of the beliefs of each theoretical stream were inspected separately (at both levels of specificity). In a second step, the full model, integrating all beliefs for the prediction of trust and the intention to use, was investigated. In a model iteration, the two broad and overlapping beliefs performance expectancy and reliability were omitted from the model to allow for a more specific, informative, and parsimonious belief structure. As an important criterion for the value of this enhanced model, its predictive power was compared to that of the full model. Additionally, to further explore the fit and adequacy of integrating trust as a mediator in the model structure, the direct paths from the beliefs to the intention to use were estimated in another iteration. Finally, to provide an understanding of the situational specificity of belief-trust relationships, it was investigated if the relevance of the beliefs social influence and perceived behavioral control changes across situational settings and robot abilities.
2.6 Hypotheses and Research Questions
In line with the theoretical considerations and the proposed model, the following hypotheses were tested:
Hypothesis 1 (H1)
General trust in service robots predicts trust in a specific robot in the early familiarization process.
Hypothesis 2 (H2)
Trust predicts the intention to use service robots in general (H2.1) and the intention to use a specific assistance robot (H2.2).
Hypothesis 3 (H3)
The effect of general trust on the intention to use a specific robot is mediated by specific trust in the robot.
Hypothesis 4 (H4)
UTAUT (H4.1), trust (H4.2), and control belief(s) (H4.3) predict general trust in the category of service robots and trust in a specific assistance robot (H4.4–6).
Hypothesis 5 (H5)
UTAUT (H5.1), trust (H5.2), and control belief(s) (H5.3) predict the intention to use service robots in general and the intention to use a specific assistance robot (H5.4–6).
Hypothesis 6 (H6)
In line with the proposed mediation cascade, the effect of beliefs on the intention to use is mediated by trust in the general (H6.1) and the specific model (H6.2).
Hypothesis 7 (H7)
The effect of the perceived behavioral control on trust in a robot is stronger for a partly compared to a fully automated robot (H7.1). The effect of social influence is higher in a public compared to a private setting (H7.2).
Also, the following research questions were addressed:
Research question 1 (RQ1)
Does removing performance expectancy and reliability reduce variance explanation in trust and the intention to use?
Research question 2 (RQ2)
Which variance proportion from beliefs to the intention to use is mediated by trust?
Research question 3 (RQ3)
Which additional direct effects from the beliefs to the intention to use occur?
3 Method
To investigate the hypotheses and research questions, a mixed-design online study was conducted in which beliefs, trust, and intention to use were measured. A correlative and a 2 × 2 experimental design were combined. In the latter, a specific robot's context of use (IV1: private household vs. public space) and level of autonomy (IV2: partly vs. fully automated) were manipulated.
3.1 Sample
The sample was recruited online with a professional panel provider, who compensated participants monetarily. Prerequisites for participation were German as native language and a minimum age of 18 years. An equal distribution of gender and age groups (18–29, 30–49, 50–64, > 65 years) was targeted to obtain a heterogeneous sample.
Participants whose processing time was too short (< 40% of the median; med = 35.38 min; 17 participants), who showed no response variance (flatliners; 38 participants), or who were multivariate outliers (Mahalanobis distance > 38; 25 participants) were excluded. The final sample consisted of N = 400 participants (51.50% female) with a mean age of M = 49.71 years (SD = 17.74). 19.80% indicated owning a robot (vacuuming, cleaning, mowing, toys, and spoken dialogue assistance robots).
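The outlier criterion can be illustrated with a short sketch. This is not the authors' code: the data are simulated, and the cutoff of 38 from the text is interpreted here as a squared Mahalanobis distance (the conventional χ²-based criterion).

```python
import numpy as np

# Illustrative sketch (not the authors' code): flagging multivariate
# outliers via the squared Mahalanobis distance, with the cutoff of 38
# taken from the exclusion criterion described above. Data are simulated.
def mahalanobis_outliers(X, cutoff=38.0):
    """Boolean mask marking rows whose squared Mahalanobis distance
    from the sample centroid exceeds `cutoff`."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances
    return d2 > cutoff

rng = np.random.default_rng(0)
data = rng.normal(size=(400, 8))  # simulated scores on eight scales
data[0] = 25.0                    # plant one extreme response pattern
mask = mahalanobis_outliers(data)
print(mask.sum(), mask[0])
```

The planted extreme case is flagged, while ordinary responses remain below the cutoff.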
3.2 Procedure, Experimental Design and Materials
Data was collected with the online survey tool Unipark (Questback GmbH, 2019). After informed consent and a demographic survey, disposition questionnaires were filled out (not part of this research). Subjects were then given a definition and explanation of service robots (see supplementary material). Subsequently, participants answered questions about their beliefs, trust, and intention to use with regard to service robots in general. Afterwards, subjects were presented with seven specific examples of service robots (vacuum robot, reception robot, learning robot, delivery robot, security robot, mowing robot, and cleaning robot; see 4.2) in randomized order, for each of which they indicated their trust. After this, subjects were introduced to an assistance robot and received information on its appearance, sensors, and functionality along with a sketch of the prototype (Fig. 2). Then, vignettes were presented containing information about the application area and the robot's autonomy level. In a pre-study (N = 48), the vignettes were rated for comprehensibility (M = 6.70, SD = 0.47; scale range: 1–7), and the robot was rated for realism (M = 4.98, SD = 1.41) and conceivability (M = 5.85, SD = 0.88). After the pre-study, the vignettes were slightly adjusted.
The application area of the robot was manipulated with a list of different tasks suitable for private households or for grocery shopping in the supermarket (e.g., storing groceries). The autonomy level of the robot was manipulated with different descriptions for high autonomy (fully autonomous functioning without double-checking with the user) and low autonomy (the robot requires consent for each step in the task). Additionally, three specific assistance tasks (carrying over objects for cooking, tidying up objects, and storing objects) were described in more detail for each application area and level of autonomy (e.g., public/low autonomy: "You stand at the checkout […]. The robot moves next to you and asks if it can assist with your purchases. You can confirm the desired action. Then the robot puts your purchases into your shopping cart […]"). The descriptions for the two application areas were standardized in as many aspects as possible. All descriptions of the assistance robot can be found in the supplementary material. Subsequently, all model constructs were measured again with reference to the described robot prototype. At the end of the study, prior experience and expertise as well as ownership of a service robot were measured.
3.3 Study Questionnaires
To assess the model constructs, established scales from the original models were used where available and adjusted to fit the study context. The reference object was either changed to ‘robots’ (in general) or to ‘the robot’. All constructs were measured on a 7-point Likert scale (1 = do not agree at all, 7 = totally agree). If no German translation was available, items were translated into German by two independent translators.
The UTAUT constructs were measured with the items from Venkatesh and colleagues [14], whereby some items (one per subscale) were replaced or excluded to adjust the scale to the context of HRI (e.g., "The senior management of this business has been helpful in the use of the system." was excluded). Trust beliefs were measured with scales based on Madsen and Gregor ([83]; reliability and understandability) and Gong ([99]; competence). The measures of perceived behavioral control and intention to use were adapted from Taylor and Todd [22] and Forster and colleagues [100]. Learned trust was measured with the LETRAS-G [16]. All scale reliabilities were in an acceptable range (α > .70, [101], Table 1) except for social influence. As its two items did not overlap sufficiently, a single-item measure was used for this construct.
4 Statistical Analysis and Results
To test the study hypotheses and research questions, a combination of regression analyses, mediation analyses, structural equation modeling (SEM), and moderation analyses based on multigroup modeling was applied.
For the regression models, mean values were z-standardized and robust R2 estimates were calculated. For assessing multicollinearity, the variance inflation factor (VIF), the eigenvalues, and the condition index scores were inspected.
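The VIF check can be sketched as follows: VIF_j = 1/(1 − R²_j), where R²_j comes from regressing predictor j on all remaining predictors. The data and the collinearity pattern below are simulated for illustration; the authors' analyses were run in R.

```python
import numpy as np

# Hedged sketch of the variance inflation factor (VIF) computation on
# z-standardized predictors, analogous to the multicollinearity check
# described above. Simulated data; not the authors' code.
def vif(X):
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        # auxiliary regression of predictor j on the remaining predictors
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - (y - A @ beta).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
z = rng.normal(size=(400, 3))
# third predictor is a near-duplicate of the first -> inflated VIFs
X = np.column_stack([z[:, 0], z[:, 1], 0.7 * z[:, 0] + 0.3 * z[:, 2]])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # z-standardize
print(np.round(vif(X), 2))
```

The two collinear predictors receive clearly elevated VIFs, while the independent one stays near 1; common rules of thumb flag values above 5 or 10.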
For the mediation and the exploration of the investigated trustworthiness beliefs model, SEM was applied. First, a full model for general and specific robot usage intention was estimated, followed by a reduced model. Additionally, all models were fitted with direct effects. In a last step, the external influencing variables (application area and level of autonomy) were investigated as moderators in a manifest path model of the enhanced model for specific robot use. Robust Maximum Likelihood estimation and test statistics, and corrected SEs were used [102]. All constructs were modeled as single factors. To rule out bias by non-normal distributions of indirect effects (e.g., [103]), percentile bootstrapped 95%-confidence intervals (CI) were calculated to evaluate the significance of indirect effects (5000 iterations). RMSEA and SRMR were used as primary indicators of model fit [104].
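The percentile-bootstrap evaluation of an indirect effect can be sketched as below. The data and path values are simulated and the paths are estimated by simple OLS for illustration only; the actual analyses used latent models in lavaan with 5000 bootstrap iterations, which is mirrored here.

```python
import numpy as np

# Hedged sketch of a percentile-bootstrap 95%-CI for an indirect effect
# a*b in a simple mediation x -> m -> y (simulated data, OLS paths).
rng = np.random.default_rng(42)
n = 400
x = rng.normal(size=n)            # predictor (e.g., a belief)
m = 0.6 * x + rng.normal(size=n)  # mediator (e.g., trust)
y = 0.5 * m + rng.normal(size=n)  # outcome (e.g., intention to use)

def indirect(x, m, y):
    a = np.cov(x, m)[0, 1] / np.var(x, ddof=1)   # path x -> m
    A = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(A, y, rcond=None)[0][2]  # path m -> y given x
    return a * b

boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)   # resample cases with replacement
    boot[i] = indirect(x[idx], m[idx], y[idx])
ci = np.percentile(boot, [2.5, 97.5])  # percentile 95%-CI
print(round(indirect(x, m, y), 3), np.round(ci, 3))
```

An indirect effect is judged significant when the percentile CI excludes zero, which avoids assuming a normal sampling distribution for the product term.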
To investigate H7, multiple group CFAs were calculated. A moderation is indicated if constraining the respective regression coefficient to equality across groups significantly worsens model fit. As a precondition, metric invariance between the groups has to be established before the regression coefficient is introduced into the multigroup model [105].
4.1 Data Preparation and Manipulation Checks
Analyses were conducted with R version 4.0.3 and the package lavaan [106]. Means, standard deviations, and zero-order correlations of all included scales are provided in the Appendix (A Table 6). There were no missing data, and multivariate outliers had already been excluded; thus, these preconditions for SEM were met. To test for group effects, a series of general linear models predicting trust with the interaction of each belief and the independent variables was conducted. Except for performance expectancy and effort expectancy, no such interactions were present. ANOVAs did not reveal any mean differences in trust and the intention to use between the experimental groups. Regarding manipulation checks, the experimental groups differed significantly in the perceived autonomy of the assistance robot, Mfully = 5.59, SDfully = 1.20 vs. Mpartly = 4.36, SDpartly = 1.56, F(1,398) = 78.40, p < .001, and in the indicated application area, Mpublic = 5.57, SDpublic = 1.76 vs. Mprivate = 2.97, SDprivate = 1.88, F(1,398) = 204.5, p < .001 (semantic differential with 1 = private setting and 7 = public setting).
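As a consistency check, the reported manipulation-check F statistics can be approximately reconstructed from the group means and SDs, assuming equal cell sizes of n = 200 (an assumption, not stated in the text); small deviations reflect rounding of the reported descriptives.

```python
import math

# Two-group one-way ANOVA: F(1, df) equals the squared pooled-variance
# t statistic. Equal group sizes of n = 200 per cell are assumed here.
def f_two_groups(m1, sd1, m2, sd2, n=200):
    sp2 = ((n - 1) * sd1**2 + (n - 1) * sd2**2) / (2 * n - 2)  # pooled var
    t = (m1 - m2) / math.sqrt(sp2 * (2.0 / n))
    return t**2

f_autonomy = f_two_groups(5.59, 1.20, 4.36, 1.56)  # reported: 78.40
f_area = f_two_groups(5.57, 1.76, 2.97, 1.88)      # reported: 204.5
print(round(f_autonomy, 2), round(f_area, 2))
```

Both reconstructed values land within rounding error of the reported statistics, supporting the internal consistency of the reported descriptives and tests.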
4.2 Relationship of Trust Variables and the Intention to Use
To test the hypothesized relationships between general trust, specific trust, and the intention to use (H1-2), latent zero-order effects were investigated in regressions. In line with H1, general trust in service robots positively predicted specific trust in the assistance robot (β = 0.74, p < .001). Also, general trust significantly predicted specific trust in each of the seven specific service robots (Table 2). Similarly, general trust in service robots predicted the general intention to use, βgeneral trust = 0.74, p < .001, and specific trust in the described assistance robot predicted the specific intention to use, βspecific trust = 0.68, p < .001, supporting H2.
To test if the effect of general trust in service robots on the intention to use a specific robot is mediated by specific trust in the robot (H3), a latent mediation model was calculated (Fig. 3). In support of H3, the indirect effect was significant, β = 0.51, [0.37, 0.64].
4.3 Prediction of Trust by Belief Groups
To test H4 on the prediction of trust by the three belief groups, four latent regressions were run for each of the two trust variables under investigation, in the following order: (1) the UTAUT beliefs: performance expectancy, effort expectancy, and social influence; (2) the beliefs from trust literature: reliability, competence, and understandability; (3) perceived behavioral control from the TPB; and (4) all beliefs in combination. This procedure was chosen to get an understanding of the predictiveness of the single belief groups (Table 3).
For general trust, the UTAUT and the trust beliefs both explained 59% of variance, UTAUT: F(3, 396) = 192.0, p < .001, trust beliefs: F(3, 396) = 191.1, p < .001. Perceived behavioral control explained 44% of the variance, F(1, 398) = 319.1, p < .001. The combined model explained 66.5% of the variance, F(7, 392) = 114.1, p < .001. In the combined model, performance expectancy, β = 0.20, p < .001, effort expectancy, β = – 0.23, p < .001, reliability, β = 0.32, p < .001, competence, β = 0.11, p = .003, and perceived behavioral control, β = 0.19, p < .001, significantly predicted general trust. There was no indication of multicollinearity.
For specific trust, in all three separate regression models all beliefs were significant predictors. The UTAUT beliefs explained 59%, F(3, 396) = 194.3, p < .001, the trust beliefs 63%, F(3, 396) = 228.8, p < .001, and the perceived behavioral control 49%, F(1, 398) = 383.8, p < .001, of the variance of trust. The combined model increased prediction of trust considerably with 68% explained variance, F(7, 392) = 124.5, p < .001. In the combined regression model, again performance expectancy, β = 0.15, p < .001, effort expectancy, β = – 0.13, p = .012, reliability, β = 0.26, p < .001, competence, β = 0.16, p < .001, and perceived behavioral control, β = 0.24, p < .001, were significant predictors. Again, none of the inspected indices suggested serious multicollinearity between predictors.
4.4 Prediction of the Intention to Use Robots by Belief Groups
For testing H5 on the role of the beliefs for predicting the intention to use, the same procedure as for testing H4 was applied (see Table 3).
For the general intention to use service robots, the UTAUT beliefs explained 69% of variance, F(3, 396) = 302.1, p < .001, with all predictors being significant. The trust beliefs explained 55% of variance, F(3, 396) = 165.8, p < .001, also with all beliefs significantly predicting the intention to use. Perceived behavioral control explained 51% of variance, F(1, 398) = 413.5, p < .001. The combined model explained 72.5% of variance with significant path weights of all UTAUT beliefs and perceived behavioral control, F(7, 392) = 151.5, p < .001. Multicollinearity was not detected.
For the intention to use the assistance robot, a similar pattern of findings resulted. The UTAUT beliefs explained 75%, F(3, 396) = 401.7, p < .001, and the trust beliefs 46% of variance, F(3, 396) = 112.3, p < .001. While all UTAUT beliefs were significant predictors, among the trust beliefs, understandability was not significant. Perceived behavioral control explained 31% of the variance in the specific intention to use, F(1, 398) = 176.8, p < .001. The combined model explained about 76% of the variance with all UTAUT beliefs, understandability, and perceived behavioral control as significant predictors, F(7, 392) = 182.3, p < .001. Again, there was no indication of multicollinearity.
4.5 Validation of the Trustworthiness Beliefs Model for Robot Acceptance
To test H6 and RQ1-3 in regard to the general mediation structure from beliefs via trust to the intention to use and the relative importance of the investigated belief groups, and to develop an efficient trustworthiness beliefs model for robot acceptance, a series of SEMs was conducted (Table 4). For this, we specified models in which the intention to use was explained by trust, which in turn was regressed on different sets of beliefs.
As a first step, a full model including the proposed beliefs, trust, and the intention to use was fitted to the data for the general and the specific intention to use (Fig. 4, Table 4, full model). Both models showed a good fit to the data. In both the general and the specific model, the intention to use was explained by trust to a considerable degree, which in turn was well explained by the antecedently ordered UTAUT and trust beliefs (R2adj-general trust = 0.84, R2adj-specific trust = 0.82). While in the general model, performance expectancy from the UTAUT as well as reliability and understandability were significant predictors of trust, in the specific model only reliability predicted trust significantly. Taken together, these findings support the role of the trust beliefs as a meaningful addition to the UTAUT beliefs for the prediction of robot acceptance at both levels of specificity.
4.6 Exploration of an Enhanced Trustworthiness Beliefs Model for Robot Acceptance
In a second step, performance expectancy and reliability were omitted from the SEMs to reduce variance suppression and to allow for an investigation of the relative relevance of the remaining, more distinctive beliefs for trust and the intention to use (Table 4, enhanced model). In a third step, to get a better understanding of the extent of variance mediated by trust, a model with direct paths from the modeled beliefs to the intention to use was calculated (Table 5).
For the model predicting general trust in service robots and the intention to use, the omission of the two general beliefs resulted in a model with comparable fit and only a slight reduction of the explained variance in trust. Compared to the full model, the reduced model had a considerably decreased AIC and BIC, indicating improved parsimony while keeping the prediction of trust and the intention to use comparable. In this model, the two beliefs effort expectancy and competence were significant predictors of trust. The inclusion of direct paths in the third model led to a slight increase in explained variance in the intention to use (from 66 to 73%), with social influence being a significant direct predictor, pointing towards further mediating variables at the attitude level.
In the reduced model for predicting the intention to use the assistance robot, the omission of the general beliefs performance expectancy and reliability led to a somewhat reduced explained variance in trust (8%) and the intention to use (3%). However, model fit and parsimony were improved, as indicated by AIC and BIC. In this model, perceived competence of the robot and social influence significantly predicted trust in the assistance robot. Also, the path weight from effort expectancy to trust did not reach significance, β = – 0.49, SE = 0.43, p = .264, although its magnitude indicates that this effect might be meaningful. Again, the inclusion of direct effects increased the explained variance of the intention to use by 11%, with a significant direct effect of social influence, indicating that additional mediators might play a role.
4.7 Moderation of Beliefs-Trust Relationships by Application Area and Robot Characteristics
As a precondition for the multiple group CFAs to test H7, at least partial scalar measurement invariance for the two models for each IV was indicated by non-significant χ2-comparison tests. First, it was tested whether the influence of perceived behavioral control on specific trust changes as a function of the robot's autonomy level. A comparison of the two models with and without equated regression coefficients revealed no significant difference, Δχ2(1) = 1.05, p = .305, opposing H7.1. Second, the effect of the application area on the effect of social influence on trust in the assistance robot was significant, as indicated by a χ2-difference test, Δχ2(1) = 12.11, p < .001. In line with H7.2, the effect of social influence on trust in the robot was higher in the public, β = 0.57, than in the private setting, β = 0.41.
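The significance decisions for the reported Δχ² tests (1 df each) can be reproduced from the test statistics alone; this is merely a check on the reported values, not a re-analysis.

```python
from scipy.stats import chi2

# p-values for the chi-square difference tests reported above (df = 1).
p_autonomy = chi2.sf(1.05, df=1)   # H7.1: perceived behavioral control
p_area = chi2.sf(12.11, df=1)      # H7.2: social influence x application area
print(round(p_autonomy, 3), p_area)
```

The first value reproduces the reported p = .305 (non-significant, opposing H7.1), and the second falls below .001 (significant, supporting H7.2).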
5 Discussion
On the basis of an integration of three theoretical streams, altogether seven beliefs from the TPB, the UTAUT, and trust in automation literature were used to predict trust and the intention to use service robots at two levels of specificity: a) general for the group of service robots and b) for a specific assistance robot that was introduced as a prototype either in a public or private application area. Furthermore, the role of the application context and the robot's level of autonomy for the relative importance of beliefs for trust was investigated.
5.1 Role of General Trust in Service Robots
In a first step, in support of H1, it was shown that trust in the category of service robots predicted trust in the investigated assistance robot as well as in the other presented service robots with different application areas and tasks. Towards establishing trust as a mediator in the structure of technology acceptance models, in a second step, it was shown that trust predicted the intention to use both for service robots in general and for the specific service robot, corresponding with H2 and previous research [37,38,39,40, 58]. In further support of the relevance of general trust in service robots as a starting point for users' decisions in HRI, its effect on the intention to use the investigated robot was mediated by specific trust (supporting H3).
The combined support of H1-3 underlines the notion that trust formation and calibration start before the actual interaction with a specific robot and even before users know about a specific robot (e.g., Kraus [16]). The individual learning history of users with a category of technological systems seems to build a baseline expectation towards single members of this category, guiding information processing during the early stages of learning to trust a specific system. This means that for a newly introduced robot, the accumulated knowledge and derived beliefs and attitudes about service robots in general might affect expectations and trust formation. This is in line with work showing the influence of general robot attitudes (e.g., [3, 107, 108]) or dispositional personality variables such as the propensity to trust automation (e.g., [3, 16, 89, 108,109,110]) on trust. In the same manner, this resembles reported associations between different levels and layers of trust, for example, the propensity to trust, initial, and dynamic learned trust [3, 110].
5.2 Relevance of Beliefs Groups
On the basis of empirical support for the role of trust for the intention to use robots (e.g., [39, 40, 94]), in this study, the predictiveness of different groups of beliefs for trust and the intention to use (at the two addressed levels of specificity) was explored. In support of H4, in a series of regressions, it was found that the three belief groups on their own predicted substantial proportions of the variance of general trust in service robots and of specific trust in an assistance robot. Also, as the predicted variance proportions were substantially increased in both the general and the specific trust model, the extension of the UTAUT by trust and TPB beliefs seems worthwhile.
In the same manner as for trust, all three belief groups were able to predict both levels of the intention to use, in agreement with H5. The UTAUT beliefs performed better in predicting the intention to use than the trust beliefs. Yet, again, the addition of the trust and TPB beliefs led to a somewhat higher R2 for predicting the general intention to use. The strong prediction by performance expectancy at both levels of the intention to use suggests, in line with RQ1, that performance expectancy might be conceptually too close to acceptance (and the intention to use) to be meaningfully distinguishable at a theoretical level. Therefore, in the following, the value of a reduced trust beliefs model integrating the streams of TPB, UTAUT, and trust in automation for predicting the intention to use service robots was explored in more detail.
5.3 Exploration of an Enhanced Trustworthiness Beliefs Model
In all iterations of the model at the general level for the category of service robots, trust was a strong predictor of the intention to use. Additionally, effects of trustworthiness beliefs on the intention to use were mediated by trust (in line with H6.1). In the initial full model, performance expectancy and reliability significantly positively predicted trust. Interestingly, understandability was negatively related to trust (as opposed to its positive association in the simple multivariate regression), pointing to a possible suppression effect. After the omission of performance expectancy and reliability, variance explanation in trust did not essentially decrease (RQ1). In the enhanced model, generalized trust in service robots was significantly predicted by effort expectancy (negatively) and the perceived competence of the robot. In line with a possible suppression in the full model, understandability of service robots was no longer a significant predictor of trust. In the model allowing for direct effects, there was additionally a direct effect of social influence on the intention to use service robots in general. Thus, in this model, trust mediated a considerable part, but not the complete effect, of the investigated beliefs on the intention to use (RQ2 + 3). The direct effect of the social norm on the intention to use might be explainable by the increased observability and visibility of behavior as compared to trust, which, unlike objective behavior, is a subjective perception.
In the specific model investigating the role of beliefs and trust for the intention to use the assistance robot, trust predicted the intention to use very well and in a similar range as in the generalized model. Also, in line with H6.2, trust partly mediated the effect from the trustworthiness beliefs to the intention to use. In the initial full model, only the effect of reliability was significant. After the omission of reliability, the perceived competence of the robot and social influence significantly predicted trust in the assistance robot. Also, effort expectancy showed a comparably large beta weight that did not reach significance. In the model allowing for direct effects from trustworthiness beliefs to the intention to use, social influence showed a significant direct effect on the intention to use the robot. The direct paths indicate that, besides trust, other attitudes might be meaningful additional mediators in the model structure, further enhancing the understanding of psychological processes during familiarization with new robots.
In both models, no direct effects of perceived behavioral control on trust or the intention to use were found. This could be explained by the conceptual closeness of perceived behavioral control to the belief effort expectancy, which might have resulted in suppression of the variance of perceived behavioral control. It is possible that these variables gain importance in direct interaction with robots, which future research can address by applying a more experimental setup including direct interaction with a robot.
Findings show that, in both models, the perceived competence of robots predicts trust significantly. Thus, if users have the belief that a robot is actually capable of performing well in a task, they tend to trust it more. In our study, this belief was more predictive for trust than all other included variables. Also, it was found that the effort expectancy explains variance at the general level of trust. The negative relationship illustrates that users do not only assess the actual characteristics of robots but also their own capability of interacting with it. This is also illustrated in findings from other studies supporting the relationship between effort expectancy or ease of use and trust [65] or the role of self-perceptions for trust in automated systems (e.g., [110]).
Also, social influence was a significant predictor of trust in the specific model. In addition to beliefs about the capabilities of the robot, the inferences others draw from observing the interaction with a service robot influence trust. If users think that others would approve of them using a service robot, they trust these robots more. To conclude, trust in service robots is not only a function of how the robot itself is perceived; rather, self-evaluative beliefs as well as the robot's embeddedness in a social context and beliefs about what relevant others think also affect trust.
5.4 Theoretical Implications for Modeling Robot Acceptance
Taken together, in support of H6, the good fit of both full models supports the meaningfulness of the TPB beliefs-attitude cascade for integrating the UTAUT and trust perspectives in the prediction of the intention to use robots (see also [15]). While related integrations were proposed and implemented before (e.g., [24, 36,37,38,39,40, 59, 63,64,65, 80]), contradicting results hindered an integration of findings. In this research, a clear theoretical structure was used to model the variables, and overly broad beliefs that are theoretically not disjunct from the mediating and outcome variables (trust and the intention to use) were omitted.
In doing so, this research aimed to integrate different research streams building on social-cognitive attitude-to-behavior theories, strengthening the theoretical foundation of robot acceptance modeling. Through the integration of trust, psychological theories on attitude formation can increase the understanding of how beliefs affect the interaction with robots. The psychological mechanisms by which users build up a mental model of a robot and beliefs about its scope of functioning, capabilities, and limitations are starting points to inform human-centered robot and HRI design. Essentially, models of attitude formation and change like the TPB or the Elaboration Likelihood Model [111], and similar theories from cognitive and social psychology, are meaningful and promising directions for the derivation of hypotheses and study designs in HRI research. These streams of research, in line with the CASA paradigm [1], might help to further strengthen the understanding of the processes by which the perception of robot characteristics and the observation of robots feed into trust formation and the interaction with robots. This might provide progress for HRI research in integrating findings on robot characteristics like anthropomorphic robot design, robot gender, speech, facial characteristics, movement, etc., by providing an enhanced understanding of potential moderator variables on the side of users or the situation in which information is presented. This research underlines that the consideration of such complexity can indeed meaningfully extend our understanding of trust processes and user behavior in the interaction with robots.
In this research, the relative informativeness of beliefs from different model families was investigated. Naturally, the included beliefs share some variance, as they are part of the same processes. In line with our reasoning, it was shown that, in predicting trust in robots, unspecific, overlapping beliefs can be meaningfully replaced by more distinctive beliefs without endangering the predictive power of trust and acceptance models. Specifically, both performance expectancy from the UTAUT and reliability were omitted, resulting in stronger associations of the remaining beliefs without substantially reducing variance explanation. On a theoretical level, performance expectancy is not clearly distinguishable from acceptance itself, and subjective reliability cannot be measured separately from trust.
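The statistical intuition behind dropping an overlapping belief can be sketched with simulated data (hypothetical effect sizes, not the study's data): when two predictors share most of their variance, removing one of them barely reduces the variance explained in trust.

```python
import numpy as np

# Hypothetical illustration: two overlapping beliefs (labeled here, for
# example, competence and performance expectancy) that share most of their
# variance, both predicting a trust score.
rng = np.random.default_rng(0)
n = 400                                   # sample size matching the study
competence = rng.normal(size=n)
# performance expectancy correlates ~.9 with competence
performance = 0.9 * competence + np.sqrt(1 - 0.9**2) * rng.normal(size=n)
trust = 0.6 * competence + 0.1 * performance + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(np.column_stack([competence, performance]), trust)
r2_reduced = r_squared(competence.reshape(-1, 1), trust)
print(f"R2 with both beliefs:              {r2_full:.3f}")
print(f"R2 without the overlapping belief: {r2_reduced:.3f}")
```

Because the overlapping predictor contributes almost no unique variance, the two R² values are nearly identical, mirroring the finding that the reduced models lost little explanatory power.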
The reduced models allow a differentiated and, at the same time, economical prediction of trust and the intention to use robots. In doing so, they enhance the model's theoretical embeddedness in the attitude-to-behavior perspective, allowing a more theoretically founded derivation of implications for trustworthy robot design and dissemination.
5.5 Role of Situational Variables and Robot Characteristics for Beliefs-Trust Prediction
The study's findings support that the application area of a robot can affect the relevance of beliefs for trust formation, partially supporting H7. This underlines the role of changing environments for variance in the interpretation of the very same information about robots. It also suggests that, while a generally meaningful structure of acceptance models might help to explain the formation of user decisions and behavior regarding different robots, the relative relevance of beliefs for trust and the intention to use might change across settings and robots. This underlines the relevance of theoretical considerations for the integration of variables into such models, over a purely data-driven rationale for variable inclusion or exclusion.
5.6 Practical Implications
This study's findings support the mediation of the effect of beliefs on usage intentions by trust, and thereby underline the relevance of individual trust processes, in which available information is used to build up expectations and intentions to interact with robots. In our study, we found strong evidence for a relationship between trust in service robots as a general category and trust in specific robots.
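The mediation logic (belief → trust → intention) can be illustrated with a minimal ordinary-least-squares sketch on simulated data (hypothetical effect sizes, not the study's estimates); for OLS, the total effect c decomposes exactly into the indirect effect a·b and the direct effect c'.

```python
import numpy as np

# Hypothetical sketch: a belief affects the intention to use a robot only
# indirectly, via trust (full mediation in the simulated population).
rng = np.random.default_rng(1)
n = 400
belief = rng.normal(size=n)
trust = 0.7 * belief + rng.normal(scale=0.6, size=n)      # a-path
intention = 0.8 * trust + rng.normal(scale=0.6, size=n)   # b-path, no direct effect

def ols(predictors, y):
    """OLS slopes (intercept dropped) for a list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols([belief], trust)[0]                    # belief -> trust
b, c_prime = ols([trust, belief], intention)   # trust -> intention, direct effect
total = ols([belief], intention)[0]            # total effect c

print(f"indirect effect a*b = {a * b:.2f}")
print(f"direct effect c'    = {c_prime:.2f}")   # near zero: full mediation
print(f"total effect c      = {total:.2f}")     # equals a*b + c' for OLS
```

In this simulated case the belief-intention effect runs almost entirely through trust, the pattern the study's findings support for robot-related beliefs.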
This holds several implications for robot dissemination and design practice. The sum of communication and experiences about robots feeds into trust formation toward single robots. In this regard, the availability and content of media in which robots play a role—science fiction movies, computer games, or press articles—might be essential for learning what to expect from robots in general, and these expectations might be transferred to new specific robots people get to know. This potential influence of how robots are represented in media should therefore be considered by artists, the press, and those in charge of programming. Responsibilities in this regard might lie with governments and robot manufacturers. To facilitate calibrated trust in (future) users of and interaction partners with service robots, the public needs to be provided with objective and transparent information about the actual capabilities, processes, and limitations of robots. This includes advertisements (e.g., in social media), which should paint a realistic picture of what robots can and cannot do.
In the enhanced trust beliefs models, the relevance of three beliefs in particular was supported: competence, effort expectancy, and social influence. This finding illustrates the combined influence of three sources of information that shape the perception of robots and the decision to interact with them.
First, the relevance of competence underlines the well-investigated role of perceived robot ability and performance for trust. Perceived competence seems to be the most essential consideration when being confronted with a new robot. Therefore, to support a calibrated level of trust, all communication about the robot's features, abilities, and reliability—from external sources but also from the robot itself—should be realistic. This facilitates balanced usage behavior and interaction, leading neither to distrust and inefficiently reduced reliance on the robot nor to overtrust and an overly optimistic, risky usage pattern. Several studies report that, in the long term, trust is not necessarily reduced by communication about possible errors of automated systems or even by the experience of such errors (e.g., [112, 113]). Rather, if errors are not associated with substantial danger and risk, such information and experience might foster a realistic picture of the robot and support appropriate decisions during HRI.
Second, the relevance of effort expectancy for trust at the general level sheds light on the role of self-evaluative beliefs in trust formation. While the role of self-evaluations for robot acceptance has been discussed in HRI before, there are no conclusive results so far (e.g., [114, 115]). In other domains of interaction with automated technology, positive relationships of self-esteem and self-efficacy with trust in an automated driving system have been reported (e.g., [110, 116]). People who perceive fewer complications and barriers to using service robots successfully also tend to trust them more in general. For general communication about robots, information about how common concerns and perceived problems in using robots successfully can be overcome might considerably help to increase trust and acceptance.
Third, following from the role of social influence in the model for the specific intention to use, the interpersonal visibility and contextual embeddedness of HRI should be addressed in robot design and dissemination. People care about what they communicate by using robots and about what others think of it. Therefore, the societal discussion about what it means, on a normative level, to use a robot needs to be extended and made visible, as it considerably affects trust levels and the adoption of robots. This is further substantiated by the findings on the role of context for the effect of the social influence belief on trust, indicating a stronger effect in public than in private settings. As, from a technology-readiness perspective, service robots in public spaces are among the first robots people will interact with in their daily lives, strategies for trust calibration and for reducing normative concerns should be implemented in the public sector, as these are essential for raising acceptance levels.
5.7 Strengths, Limitations, and Future Research
This work contributes to the current state of research with a theoretical review and (re)integration of different research streams (acceptance models, TPB, and trust) and their application in HRI. Considerable strengths of the study are the integration of these theoretical streams, the theoretical breakdown of the interrelationships of several groups of variables, the combination of a correlative and an experimental approach, and a large, heterogeneous sample allowing for sophisticated statistical analyses. While previous research has mostly focused on the acceptance of single (specific) robots, this work explicitly differentiated between the broad category of service robots in general and a specific representative robot of that category. Also, the model was applied in two application contexts.
The presented study has limitations that need to be addressed in future research. First of all, the study was conducted online with vignettes, without actual interaction with a robot. Related to that, second, no actual behavioral measure was included. The online setting was chosen to obtain the large sample needed for the statistical methods appropriate for the investigated hypotheses and research questions. Future studies might validate the model in real-life experiments and investigate its relevance for behavioral variables in actual HRI. Third, participants came from an exclusively German sample and had only limited prior experience with robots. As culture might be an important factor influencing technology adoption, findings on the relative importance of the investigated beliefs need to be validated in samples from other cultures (e.g., a Japanese sample). Nevertheless, the basic contributions of this work regarding the psychological processes involved in the formation of trust and the intention to use robots are likely to be robust to culture-specific variances. In regard to the sample's limited prior experience, while common in most of today's studies, research on the role of this variable is encouraged, as it might be important for understanding belief and attitude formation. Fourth, the trust beliefs model was only investigated in regard to one specific robot. Potentially, the role of single beliefs changes for different robots and different contexts, which raises a number of challenging research questions for future studies. Fifth, the situational relevance of beliefs for trust and the intention to use might be smaller in online settings and thus should be investigated again in real-life experiments, where stronger effects can be expected. Sixth, the study used comparatively short scales for some of the investigated constructs.
While this resulted from the complex study design, which required keeping the study economical and participants motivated, the findings should be validated. Also, as many beliefs have been proposed as meaningful for understanding technology acceptance, this study could not assess all of them. Especially the role of "will-do" trustworthiness beliefs concerning motive and moral attributions towards technology (i.e., integrity and benevolence) needs further investigation. The relative role of these and of ability-related beliefs for different robots and interaction scenarios might yield additional insights into psychological trust processes in HRI. In this regard, the role of top-down vs. bottom-up processes is of interest, and future studies might investigate how prior experience vs. the actual perception of robot characteristics and abilities during early interaction feed into trust formation and calibration. Hereby, additional variables mediating the intention to use robots besides trust, as well as factors explaining differences in the interrelations of the modeled variables between the general category level of service robots and the specific level, might be identified.
5.8 Conclusion
In this work, we theoretically derived and validated a generalizable acceptance model (TB-RAM) for service robots including trust and trustworthiness beliefs. Based on a thorough review, we first discussed the shortcomings of current acceptance modeling and proposed strategies to overcome them. Second, beliefs from three research streams (acceptance models, TPB, trust in automation) were (re)integrated into the structure of the TPB. Third, in a large-scale online study, the TB-RAM model was applied to two levels of trust—general trust in the category of service robots and specific trust in a particular assistance robot—and validated in two contexts—public and private—and at two levels of autonomy.
Results show that trust in service robots as a general category predicts trust in a specific robot as a representative of that category, which, in turn, mediates the effect of generalized trust on the intention to use the specific robot. This underlines the role of general trust for specific trust and, with this, the substantial relevance of the sum of experiences with robots for establishing expectations, beliefs, and trust, and for using newly introduced robots.
Furthermore, the combination of beliefs from the TPB (perceived behavioral control), the UTAUT (social influence, performance expectancy, effort expectancy), and the trust literature (reliability, competence, understandability) explained substantial variance in general and specific trust, as well as in the intention to use service robots in general and the specific robot in focus. In line with the basic assumption of this research, dropping the overlapping beliefs performance expectancy and reliability substantially reduced neither the explained variance in trust nor model fit. Taken together, the reported findings support the meaningfulness of integrating the three theoretical perspectives—while modeling distinctive instead of overlapping general beliefs—to enhance the understanding of psychological processes involved in HRI and robot adoption. They also emphasize the role of trust as a mediator of the effect of robot-related beliefs on the intention to use service robots, both for general trust in service robots and for specific trust in single representatives of this category.
Additionally, the findings underline the situation-specific relevance of beliefs for trust and the intention to use a specific robot, as indicated by the stronger effect of social influence in the public than in the private application context. This sheds light on the processes in which both trust and behavioral intentions regarding robots are formed.
Taken together, this research provides a meaningful theoretical extension of technology acceptance modeling in the domain of HRI and other automated technology, which allowed for the derivation of some general directions for enhancing trustworthy and human-centered robot interaction design.
Data Availability
The dataset generated and/or analyzed during the current study is available from the corresponding author on reasonable request.
References
Nass C, Moon Y (2000) Machines and mindlessness: social responses to computers. J Soc Issues 56:81–103. https://doi.org/10.1111/0022-4537.00153
Rosenthal-von der Pütten AM, Schulte FP, Eimler SC et al (2014) Investigations on empathy towards humans and robots using fMRI. Comput Hum Behav 33:201–212. https://doi.org/10.1016/j.chb.2014.01.004
Miller L, Kraus J, Babel F, Baumann M (2021) More than a feeling—Interrelation of trust layers in human-robot interaction and the role of user dispositions and state anxiety. Front Psychol 12:592711. https://doi.org/10.3389/fpsyg.2021.592711
Nomura T, Kanda T, Suzuki T (2006) Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. AI & Soc 20:138–150. https://doi.org/10.1007/s00146-005-0012-7
Nomura T, Suzuki T, Kanda T, Kato K (2006) Measurement of negative attitudes toward robots. IS 7:437–454. https://doi.org/10.1075/is.7.3.14nom
Syrdal DS, Dautenhahn K, Koay K, Walters M (2009) The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study
Złotowski J, Yogeeswaran K, Bartneck C (2017) Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int J Hum Comput Stud 100:48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008
Babel F, Kraus JM, Baumann M (2021) Development and testing of psychological conflict resolution strategies for assertive robots to resolve human-robot goal conflict. Front Robot AI 7:591448. https://doi.org/10.3389/frobt.2020.591448
Babel F, Vogt A, Hock P et al (2022) Step aside! VR-based evaluation of adaptive robot conflict resolution strategies for domestic service robots. Int J Soc Robot. https://doi.org/10.1007/s12369-021-00858-7
Babel F, Hock P, Kraus J, Baumann M (2022) It will not take long! Longitudinal effects of robot conflict resolution strategies on compliance, acceptance and trust. In: Proceedings of the 2022 ACM/IEEE international conference on human-robot interaction. IEEE Press, Sapporo, Hokkaido, Japan, pp 225–235
Davis FD (1985) A technology acceptance model for empirically testing new end-user information systems: theory and results. Massachusetts Institute of Technology
Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 13:319. https://doi.org/10.2307/249008
Davis FD, Bagozzi RP, Warshaw PR (1989) User acceptance of computer technology: a comparison of two theoretical models. Manage Sci 35:982–1003. https://doi.org/10.1287/mnsc.35.8.982
Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information technology: toward a unified view. MIS Q 27:425–478. https://doi.org/10.2307/30036540
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors J Hum Factors Ergon Soc 46:50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Kraus JM (2020) Psychological processes in the formation and calibration of trust in automation. Dissertation, Universität Ulm
Ajzen I, Fishbein M (1975) A Bayesian analysis of attribution processes. Psychol Bull 82:261–277. https://doi.org/10.1037/h0076477
Ajzen I (1985) From intentions to actions: a theory of planned behavior. In: Kuhl J, Beckmann J (eds) Action control. Springer, Berlin, pp 11–39
Fishbein M, Ajzen I (1980) Understanding attitudes and predicting social behavior. Prentice Hall, Englewood Cliffs
de Graaf MMA, Ben Allouch S, van Dijk JAGM (2019) Why would I use this in my home? A model of domestic social robot acceptance. Hum-Comput Interaction 34:115–173. https://doi.org/10.1080/07370024.2017.1312406
Naneva S, Sarda Gou M, Webb TL, Prescott TJ (2020) A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int J Soc Robotics 12:1179–1201. https://doi.org/10.1007/s12369-020-00659-4
Taylor S, Todd PA (1995) Understanding information technology usage: a test of competing models. Inf Syst Res 6:144–176. https://doi.org/10.1287/isre.6.2.144
Thompson RL, Higgins CA, Howell JM (1991) Personal computing: toward a conceptual model of utilization. MIS Q 15:125–143. https://doi.org/10.2307/249443
Ghazizadeh M, Lee JD, Boyle LN (2012) Extending the technology acceptance model to assess automation. Cogn Tech Work 14:39–49. https://doi.org/10.1007/s10111-011-0194-3
Hu PJ, Chau PYK, Sheng ORL, Tam KY (1999) Examining the technology acceptance model using physician acceptance of telemedicine technology. J Manag Inf Syst 16:91–112. https://doi.org/10.1080/07421222.1999.11518247
Chen K, Chan AHS (2014) Gerontechnology acceptance by elderly Hong Kong Chinese: a senior technology acceptance model (STAM). Ergonomics 57:635–652. https://doi.org/10.1080/00140139.2014.895855
Luarn P, Lin H-H (2005) Toward an understanding of the behavioral intention to use mobile banking. Comput Hum Behav 21:873–891. https://doi.org/10.1016/j.chb.2004.03.003
Ghazizadeh M, Peng Y, Lee JD, Boyle LN (2012) Augmenting the technology acceptance model with trust: commercial drivers’ attitudes towards monitoring and feedback. Proc Hum Factors Ergon Soc Annu Meet 56:2286–2290. https://doi.org/10.1177/1071181312561481
Baptista G, Oliveira T (2016) A weight and a meta-analysis on mobile banking acceptance research. Comput Hum Behav 63:480–489. https://doi.org/10.1016/j.chb.2016.05.074
Blut M, Wang C, Schoefer K (2016) Factors influencing the acceptance of self-service technologies: a meta-analysis. J Serv Res 19:396–416. https://doi.org/10.1177/1094670516662352
Dwivedi YK, Rana NP, Jeyaraj A et al (2019) Re-examining the Unified Theory of Acceptance and Use of Technology (UTAUT): towards a revised theoretical model. Inf Syst Front 21:719–734. https://doi.org/10.1007/s10796-017-9774-y
King WR, He J (2006) A meta-analysis of the technology acceptance model. Inf Manag 43:740–755. https://doi.org/10.1016/j.im.2006.05.003
Wang X, Goh DH-L (2017) Video game acceptance: a meta-analysis of the extended technology acceptance model. Cyberpsychol Behav Soc Netw 20:662–671. https://doi.org/10.1089/cyber.2017.0086
Wu K, Zhao Y, Zhu Q et al (2011) A meta-analysis of the impact of trust on technology acceptance model: investigation of moderating influence of subject and context type. Int J Inf Manage 31:572–581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004
Yousafzai SY, Foxall GR, Pallister JG (2007) Technology acceptance: a meta-analysis of the TAM: part 2. J Model Manag 2:281–304. https://doi.org/10.1108/17465660710834462
Heerink M, Kröse B, Evers V, Wielinga B (2010) Assessing acceptance of assistive social agent technology by older adults: the almere model. Int J of Soc Robotics 2:361–375. https://doi.org/10.1007/s12369-010-0068-5
Abrams AMH, Dautzenberg PSC, Jakobowsky C, et al (2021) A theoretical and empirical reflection on technology acceptance models for autonomous delivery robots. In: Proceedings of the 2021 ACM/IEEE international conference on human-robot interaction. Association for Computing Machinery, New York, NY, USA, pp 272–280
Turja T, Aaltonen I, Taipale S, Oksanen A (2020) Robot acceptance model for care (RAM-care): a principled approach to the intention to use care robots. Inf Manag 57:103220. https://doi.org/10.1016/j.im.2019.103220
Han J, Conti D (2020) The use of UTAUT and post acceptance models to investigate the attitude towards a telepresence robot in an educational setting. Robotics 9:34. https://doi.org/10.3390/robotics9020034
Alaiad A, Zhou L (2014) The determinants of home healthcare robots adoption: an empirical investigation. Int J Med Inf 83:825–840. https://doi.org/10.1016/j.ijmedinf.2014.07.003
Forgas-Coll S, Huertas-Garcia R, Andriella A, Alenyà G (2021) How do consumers’ gender and rational thinking affect the acceptance of entertainment social robots? Int J Soc Robot. https://doi.org/10.1007/s12369-021-00845-y
Shin D-H, Choo H (2011) Modeling the acceptance of socially interactive robotics: social presence in human–robot interaction. IS 12:430–460. https://doi.org/10.1075/is.12.3.04shi
Fridin M, Belokopytov M (2014) Acceptance of socially assistive humanoid robot by preschool and elementary school teachers. Comput Hum Behav 33:23–31. https://doi.org/10.1016/j.chb.2013.12.016
Ghazali AS, Ham J, Barakova E, Markopoulos P (2020) Persuasive robots acceptance model (PRAM): roles of social responses within the acceptance model of persuasive robots. Int J Soc Robot 12:1075–1092. https://doi.org/10.1007/s12369-019-00611-1
Stock RM, Merkle M (2017) A service Robot Acceptance Model: User acceptance of humanoid robots during service encounters. In: 2017 IEEE international conference on pervasive computing and communications workshops (PerCom Workshops). IEEE, Kona, HI, pp 339–344
Benbasat I, Barki H (2007) Quo vadis, TAM? J Assoc Inf Syst 8:212–218
Shachak A, Kuziemsky C, Petersen C (2019) Beyond TAM and UTAUT: future directions for HIT implementation research. J Biomed Inform 100:103315. https://doi.org/10.1016/j.jbi.2019.103315
Young JE, Hawkins R, Sharlin E, Igarashi T (2009) Toward acceptable domestic robots: applying insights from social psychology. Int J Soc Robot 1:95–108. https://doi.org/10.1007/s12369-008-0006-y
Beer JM, Prakash A, Mitzner TL, Rogers WA (2011) Understanding robot acceptance (Technical Report HFA-TR-1103). Georgia Institute of Technology, School of Psychology – Human Factors and Aging Laboratory, Atlanta
Broadbent E, Stafford R, MacDonald B (2009) Acceptance of healthcare robots for the older population: review and future directions. Int J Soc Robot 1:319–330. https://doi.org/10.1007/s12369-009-0030-6
Taiwo A, Downe A (2013) The theory of user acceptance and use of technology (UTAUT): a meta-analytic review of empirical findings. J Theor Appl Inf Technol 49:48–58
Straub D, Burton-Jones A (2007) Veni, Vidi, Vici: breaking the TAM Logjam. J Assoc Inf Syst 8(4):223–229. https://doi.org/10.17705/1jais.00124
Venkatesh V (2000) Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf Syst Res 11:342–365. https://doi.org/10.1287/isre.11.4.342.11872
Fishbein M (1967) Readings in attitude theory and measurement. Wiley, New York
Campbell DT (1963) Social attitudes and other acquired behavioral dispositions. In: Psychology: a study of a science. Study II. Empirical substructure and relations with other sciences. Volume 6. Investigations of man as socius: Their place in psychology and the social sciences. McGraw-Hill, New York, pp 94–172
Fishbein M, Raven BH (1962) The AB Scales: an operational definition of belief and attitude. Hum Relations 15:35–44. https://doi.org/10.1177/001872676201500104
Katz D (1960) The functional approach to the study of attitudes. Public Opin Q 24:163. https://doi.org/10.1086/266945
Venkatesh V, Thong JYL, Chan FKY et al (2011) Extending the two-stage information systems continuance model: incorporating UTAUT predictors and the role of context: context, expectations and IS continuance. Inf Syst J 21:527–555. https://doi.org/10.1111/j.1365-2575.2011.00373.x
Wu I-L, Chen J-L (2005) An extension of Trust and TAM model with TPB in the initial adoption of on-line tax: an empirical study. Int J Hum Comput Stud 62:784–808. https://doi.org/10.1016/j.ijhcs.2005.03.003
Albarracín D, Chan MPS, Jiang D (2018) Attitudes and attitude change: social and personality considerations about specific and general patterns of behavior. In: The Oxford Handbook of Personality and Social Psychology. Oxford University Press
Sherman SJ, Fazio RH (1983) Parallels between attitudes and traits as predictors of behavior. J Pers 51:308–345. https://doi.org/10.1111/j.1467-6494.1983.tb00336.x
Tsui KM, Desai M, Yanco HA, et al (2010) Using the "negative attitude toward robots scale" with telepresence robots. In: Proceedings of the 10th Performance Metrics for Intelligent Systems Workshop (PerMIS '10). ACM Press, Baltimore, Maryland, p 243
Gefen D, Karahanna E, Straub DW (2003) Inexperience and experience with online stores: the importance of tam and trust. IEEE Trans Eng Manage 50:307–321. https://doi.org/10.1109/TEM.2003.817277
Pavlou PA (2003) Consumer acceptance of electronic commerce: integrating trust and risk with the technology acceptance model. Int J Electron Commer 7:101–134. https://doi.org/10.1080/10864415.2003.11044275
Kassim ES, Jailani SFAK, Hairuddin H, Zamzuri NH (2012) Information system acceptance and user satisfaction: the mediating role of trust. Proc Soc Behav Sci 57:412–418. https://doi.org/10.1016/j.sbspro.2012.09.1205
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. AMR 20:709–734. https://doi.org/10.5465/amr.1995.9508080335
Muir BM (1987) Trust between humans and machines, and the design of decision aids. Int J Man Mach Stud 27:527–539. https://doi.org/10.1016/S0020-7373(87)80013-5
Muir BM (1994) Trust in automation: part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37:1905–1922. https://doi.org/10.1080/00140139408964957
Hancock PA, Kessler TT, Kaplan AD et al (2021) Evolving trust in robots: specification through sequential and comparative meta-analyses. Hum Factors 63:1196–1229. https://doi.org/10.1177/0018720820922080
Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57:407–434. https://doi.org/10.1177/0018720814547570
McKnight DH, Chervany NL (2001) What trust means in e-commerce customer relationships: an interdisciplinary conceptual typology. Int J Electron Commer 6:35–59. https://doi.org/10.1080/10864415.2001.11044235
Colquitt JA, Scott BA, LePine JA (2007) Trust, trustworthiness, and trust propensity: a meta-analytic test of their unique relationships with risk taking and job performance. J Appl Psychol 92:909–927. https://doi.org/10.1037/0021-9010.92.4.909
Schüle M, Kraus JM, Babel F, Reißner N (2022) Patients' trust in hospital transport robots: evaluation of the role of user dispositions, anxiety, and robot characteristics. In: Proceedings of the 2022 ACM/IEEE international conference on human-robot interaction. IEEE Press, Sapporo, Hokkaido, Japan, pp 246–255
Sanders T, Kaplan A, Koch R et al (2019) The relationship between trust and use choice in human-robot interaction. Hum Factors 61:614–626. https://doi.org/10.1177/0018720818816838
Robinette P, Howard AM, Wagner AR (2017) Effect of robot performance on human-robot trust in time-critical situations. IEEE Trans Human-Mach Syst 47:425–436. https://doi.org/10.1109/THMS.2017.2648849
Parasuraman R, Riley V (1997) Humans and automation: use, misuse, disuse, abuse. Hum Factors 39:230–253. https://doi.org/10.1518/001872097778543886
Hancock PA, Billings DR, Schaefer KE et al (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53:517–527. https://doi.org/10.1177/0018720811417254
Rempel JK, Holmes JG, Zanna MP (1985) Trust in close relationships. J Pers Soc Psychol 49:95–112. https://doi.org/10.1037/0022-3514.49.1.95
Lee J, Moray N (1992) Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35:1243–1270. https://doi.org/10.1080/00140139208967392
Zhang T, Tao D, Qu X et al (2019) The roles of initial trust and perceived risk in public’s acceptance of automated vehicles. Transp Res Part C Emerg Technol 98:207–220. https://doi.org/10.1016/j.trc.2018.11.018
Buckley L, Kaye S-A, Pradhan AK (2018) Psychosocial factors associated with intended use of automated vehicles: a simulated driving study. Accid Anal Prev 115:202–208. https://doi.org/10.1016/j.aap.2018.03.021
Butler JK (1991) Toward understanding and measuring conditions of trust: evolution of a conditions of trust inventory. J Manag 17:643–663. https://doi.org/10.1177/014920639101700307
Madsen M, Gregor S (2000) Measuring human-computer trust. In: Proceedings of the 11th Australasian Conference on Information Systems. pp 6–8
Chancey ET, Bliss JP, Yamani Y, Handley HAH (2017) Trust and the compliance-reliance paradigm: the effects of risk, error bias, and reliability on trust and dependence. Hum Factors 59:333–345. https://doi.org/10.1177/0018720816682648
Muir BM, Moray N (1996) Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 39:429–460. https://doi.org/10.1080/00140139608964474
Stowers K, Oglesby J, Sonesh S et al (2017) A framework to guide the assessment of human-machine systems. Hum Factors 59:172–188. https://doi.org/10.1177/0018720817695077
Dragan AD, Lee KCT, Srinivasa SS (2013) Legibility and predictability of robot motion. In: 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, Tokyo, Japan, pp 301–308
McKnight DH, Cummings LL, Chervany NL (1998) Initial trust formation in new organizational relationships. Acad Manag Rev 23:473–490. https://doi.org/10.2307/259290
Merritt SM, Ilgen DR (2008) Not all trust is created equal: dispositional and history-based trust in human-automation interactions. Hum Factors 50:194–210. https://doi.org/10.1518/001872008X288574
de Graaf MMA, Ben Allouch S (2013) Exploring influencing variables for the acceptance of social robots. Robot Auton Syst 61:1476–1486. https://doi.org/10.1016/j.robot.2013.07.007
Ajzen I (1991) The theory of planned behavior. Organ Behav Hum Decis Process 50:179–211. https://doi.org/10.1016/0749-5978(91)90020-T
Biermann H, Brauner P, Ziefle M (2020) How context and design shape human-robot trust and attributions. Paladyn J Behav Robot 12:74–86. https://doi.org/10.1515/pjbr-2021-0008
Thielmann I, Hilbig BE (2015) Trust: an integrative review from a person-situation perspective. Rev Gen Psychol 19:249–277. https://doi.org/10.1037/gpr0000046
Liu K, Tao D (2022) The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Comput Hum Behav 127:107026. https://doi.org/10.1016/j.chb.2021.107026
Verberne FMF, Ham J, Midden CJH (2012) Trust in smart systems: sharing driving goals and giving information to increase trustworthiness and acceptability of smart systems in cars. Hum Factors 54:799–810. https://doi.org/10.1177/0018720812443825
French B, Duenser A, Heathcote A (2018) Trust in automation – A literature review (CSIRO Report EP184082). CSIRO, Australia
Rani MRA, Sinclair MA, Case K (2000) Human mismatches and preferences for automation. Int J Prod Res 38:4033–4039. https://doi.org/10.1080/00207540050204894
Zafari S, Koeszegi ST (2021) Attitudes toward attributed agency: role of perceived control. Int J Soc Robot 13:2071–2080. https://doi.org/10.1007/s12369-020-00672-7
Gong L (2008) How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput Hum Behav 24:1494–1509. https://doi.org/10.1016/j.chb.2007.05.007
Forster Y, Hergeth S, Naujoks F, Krems JF (2018) How usability can save the day – methodological considerations for making automated driving a success story. In: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, Toronto, ON, Canada, pp 278–290
Ullman JB (2012) Structural equation modeling. In: Tabachnick BG, Fidell LS (eds) Using multivariate statistics. Pearson, Boston
Satorra A, Bentler PM (1994) Corrections to test statistics and standard errors in covariance structure analysis. Latent variables analysis: applications for developmental research. Sage Publications Inc, Thousand Oaks, pp 399–419
Hayes AF (2009) Beyond Baron and Kenny: statistical mediation analysis in the new millennium. Commun Monogr 76:408–420. https://doi.org/10.1080/03637750903310360
Moshagen M, Auerswald M (2018) On congruence and incongruence of measures of fit in structural equation modeling. Psychol Methods 23:318–336. https://doi.org/10.1037/met0000122
Hsiao Y-Y, Lai MHC (2018) The impact of partial measurement invariance on testing moderation for single and multi-level data. Front Psychol 9:740. https://doi.org/10.3389/fpsyg.2018.00740
Rosseel Y (2012) lavaan: an R package for structural equation modeling. J Stat Softw 48:1–36. https://doi.org/10.18637/jss.v048.i02
Wang L, Rau P-LP, Evers V et al (2010) When in Rome: the role of culture & context in adherence to robot recommendations. In: 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, Osaka, Japan, pp 359–366
Tussyadiah IP, Zach FJ, Wang J (2020) Do travelers trust intelligent service robots? Ann Tour Res 81:102886. https://doi.org/10.1016/j.annals.2020.102886
Merritt SM, Heimbaugh H, LaChapell J, Lee D (2013) I trust it, but I don't know why: effects of implicit attitudes toward automation on trust in an automated system. Hum Factors 55:520–534. https://doi.org/10.1177/0018720812465081
Kraus J, Scholz D, Baumann M (2021) What’s driving me? Exploration and validation of a hierarchical personality model for trust in automated driving. Hum Factors 63:1076–1105. https://doi.org/10.1177/0018720820922653
Petty RE, Cacioppo JT (1986) The elaboration likelihood model of persuasion. In: Petty RE, Cacioppo JT (eds) Communication and persuasion: central and peripheral routes to attitude change. Springer, New York, pp 1–24
Kraus JM, Forster Y, Hergeth S, Baumann M (2019) Two routes to trust calibration: effects of reliability and brand information on trust in automation. Int J Mobile Hum Comput Interact 11:1–17. https://doi.org/10.4018/IJMHCI.2019070101
Kraus J, Scholz D, Stiegemeier D, Baumann M (2020) The more you know: trust dynamics and calibration in highly automated driving and the effects of take-overs, system malfunction, and system transparency. Hum Factors 62:718–736. https://doi.org/10.1177/0018720819853686
Gruber ME, Hancock PA (2021) The self-evaluation maintenance model in human-robot interaction: a conceptual replication. In: Li H, Ge SS, Wu Y et al (eds) Social robotics. Springer International Publishing, Cham, pp 268–280
Kamide H, Kawabe K, Shigemi S, Arai T (2013) Social comparison between the self and a humanoid. In: Herrmann G, Pearson MJ, Lenz A et al (eds) Social robotics. Springer International Publishing, Cham, pp 190–198
Kraus J, Scholz D, Messner E-M et al (2020) Scared to trust? – Predicting trust in highly automated driving by depressiveness, negative self-evaluations and state anxiety. Front Psychol 10:2917. https://doi.org/10.3389/fpsyg.2019.02917
Acknowledgements
The authors would like to thank Florian Angerer, Jessica Pätz and Liza Dixon for support in this research.
Funding
Open Access funding enabled and organized by Projekt DEAL. This research has been conducted within the interdisciplinary research project ‘RobotKoop’, which is funded by the German Ministry of Education and Research (Grant Number 16SV7967).
Author information
Contributions
JK generated the project idea, collected the data, performed the analyses, and led the manuscript write-up. LM generated the project idea, collected the data, supported the analyses, and led the manuscript write-up. MK supported data analyses and manuscript write-up. FB supported data collection and manuscript write-up. DS supported analyses and manuscript write-up. JM supported data analyses and manuscript write-up. MB supported the generation of the project idea and manuscript write-up.
Ethics declarations
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Ethical Approval
The study was carried out in accordance with the Declaration of Helsinki. The participants provided their written informed consent to participate in this study. Ethical review and approval were not required for the study on human participants in accordance with the local legislation and institutional requirements.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Kraus, J., Miller, L., Klumpp, M. et al. On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int J of Soc Robotics 16, 1223–1246 (2024). https://doi.org/10.1007/s12369-022-00952-4