1 Introduction

Service robots are rapidly advancing towards broad social dissemination in domains of public and private everyday life. This ‘new breed’ of robots is more than automated technology: they interact in social settings, react and adapt to people and situations, and thus elicit emotional and social responses from their human interaction partners [1, 2]. At the same time, different users commonly perceive robots differently (e.g., based on their robot-related attitudes; [3,4,5,6,7]), and while some users might accept and use a robot, others might reject it. Also, different application areas—e.g., private households vs. public spaces—and levels of autonomy place additional demands on users and on human–robot interaction (HRI) design. Therefore, understanding the psychological processes by which people perceive these new technical agents, build up attitudes and expectations, and arrive at decisions in interacting with robots is essential for predicting decision-making and acceptance in HRI. This, in turn, provides a meaningful basis for an acceptable, efficient, safe, and human-centered design of robot appearance and interaction strategies (e.g., [8,9,10]), as well as for dissemination strategies at a societal level.

The prediction of users' intentions to interact with and to use technology has been a research focus for many years, with essentially two predominant traditions: the technology acceptance models (the different versions of the technology acceptance model, TAM; e.g., [11,12,13,14]) and frameworks incorporating trust as a main antecedent of technology-related behavior (e.g., [15, 16]). While these two perspectives share a common underlying theoretical tradition, they are typically discussed separately. A theoretical integration of the two perspectives is promising for better understanding the psychological processes associated with HRI and for facilitating a positive integration of robots into everyday life. The shared theoretical foundation lies in attitude-to-behavior models, which theoretically substantiated the study of the cascade from beliefs via attitudes to behavioral intentions—particularly the theory of reasoned action (TRA; [17]) and the theory of planned behavior (TPB; [18]) as an advancement of the former. The TPB focuses on psychological variables affecting an intended behavior [17, 19]. Its basic assumption is that behavior is essentially influenced by the intention to perform that behavior. This intention is assumed to build on the three core constructs of the TPB—social norm, attitude towards the behavior, and perceived behavioral control—which, in turn, are based on associated beliefs. The TPB was transferred to the domain of technology acceptance by the TAM and its various advancements. The Unified Theory of Acceptance and Use of Technology (UTAUT) is a recent and widely used derivation of the TAM tradition. It is, however, the result of a scientific process over several decades in which theorizing moved away from the original idea of attitude-based behavior prediction in the sense of the TPB.

Presently, there exists only a partially coherent conglomerate of technology acceptance models that are not well integrated in terms of modeled constructs, underlying definitions, and measurement of constructs. In particular, there has been no systematic investigation of the belief structure that underlies the adoption of robots. If constructs are not well-defined and theoretically integrated, acceptance models like the UTAUT provide only restricted value for understanding the psychological foundation of decisions in HRI (see also [20]). This hinders deriving meaningful design implications, reliably predicting user behavior, and cumulatively improving the scientific understanding of technology acceptance. A promising direction is to replace overlapping, atheoretical beliefs with more distinct and theoretically founded ones and to integrate these into the beliefs-attitudes cascade of the TPB to predict the intention to use. A meaningful extension in this regard is the inclusion of trust as a mediator.

1.1 Goal and Contribution of this Research

Against this background, this research aims at an integration of beliefs from different theoretical streams (TAM, UTAUT, trust) into the original theoretical structure of the TPB. In doing so, the general assumption of attitude-based definitions of trust in automation (e.g., [15]), namely that trust mediates the relationship between beliefs about technology and the intention to use, is empirically tested.

As a first step in understanding the relevance of trust for robot adoption, this study investigates how general trust in service robots affects trust in, and the intention to use, a newly introduced robot. Also, the relevance of beliefs and the comparability of the belief structure at these two levels of trust specificity (general trust in service robots and trust in a specific robot) for the intention to use are explored. From this, an integrated, parsimonious trustworthiness beliefs model for robot acceptance (TB-RAM), balancing model parsimony and predictive power, is empirically explored, optimized, and validated in a two-part online study, in which participants evaluated their perceptions of (a) service robots in general as a category and (b) a specific assistance robot. Additionally, the moderation of the relationship between the identified beliefs and trust by situational variables and robot characteristics was explored. More specifically, the role of social influence in different social settings (private vs. public) and of perceived behavioral control at different levels of robot autonomy (partly vs. fully automated) was experimentally investigated.

This work contributes to clarifying the role of beliefs from three theoretical streams (UTAUT, trust beliefs, TPB) for trust and the intention to use robots. Going beyond previous research, by modeling specific instead of overarching beliefs to predict acceptance and by investigating their relative predictive power in different settings and for different robots, this research builds a foundation for human-centered HRI design. Moreover, focusing on trust—a theoretically differentiated and empirically well-studied variable—as a psychological mediator between the formation of beliefs about robots and the intention to use them offers insights into psychological processes during robot familiarization. Based on this, we discuss challenges of acceptance modeling in HRI, propose strategies to overcome these, and apply these strategies to modeling the acceptance of service robots in general as well as of specific robots in two application contexts.

2 Theoretical Background

In the tradition of technology acceptance modeling, numerous studies have predicted behavioral decisions in the interaction with technology on the basis of intentions. Acceptance of technology is commonly defined as the intention to use (or interact with) a robot (e.g., [21]). As the acceptance of robots is a central prerequisite for their adoption, the psychological process in which acceptance is formed and the variables affecting this process are of central interest for human-centered HRI design. In the following, related literature is reviewed along three lines: (a) technology acceptance models, (b) trust in automation and robots, and (c) integrated trust-acceptance models.

2.1 Technology Acceptance Modeling: the TAM and the UTAUT

To predict the usage (i.e., acceptance) or rejection of new technology and to increase usage frequency, numerous competing models have been developed to date (e.g., [12, 14, 22,23,24]). Most models are based on the TAM [11,12,13], which describes motivational processes that mediate between technology characteristics and user behavior, originally in the domain of information systems in organizational contexts. The basic assumption of the TAM is that the intention to use technology is based on two fundamental determinants: perceived usefulness—the assessment of the expected outcomes of the technology—and perceived ease of use—whether users believe that they have the necessary skills and resources to use the technology successfully [11,12,13].

To formulate a consensus among the numerous acceptance models that emerged after the TAM, Venkatesh and colleagues [14] proposed the UTAUT with four subjective variables influencing the intention to use a system. Performance expectancy largely coincides with perceived usefulness from the TAM and is defined as "the degree to which an individual believes that using the system will help him or her to attain gains in job performance" ([14], p. 447). The construct reflects external motivational factors affecting task accomplishments and outcomes through expected usefulness and benefits. Effort expectancy, defined as "the degree of ease associated with the use of a system" ([14], p. 450), is composed of three constructs from different models, one of which is the perceived ease of use. Social influence reflects "the degree to which an individual perceives that important others believe he or she should use the new system" ([14], p. 451). The fourth predictor of the UTAUT—facilitating conditions—refers to beliefs about the organizational and technical infrastructure supporting system use [14].

The application contexts of the models span a wide range of different technologies, including word processors [13], telemedicine technologies [25], gerontechnology [26], online banking [27], and vehicle monitoring systems [28]. Several meta-analyses quantified the predictive validity of the TAM and the UTAUT, supporting substantial variance explanation of the intention to use technical systems [29,30,31,32,33,34,35]. The TAM was also transferred to HRI for investigating the acceptance and usage of specific types of robots, for certain tasks and contexts as well as for specific user groups (e.g., [20, 36,37,38,39,40,41,42,43,44,45]). Examples are the Almere model [36], the persuasive robots acceptance model (PRAM, [44]), and the robot acceptance model for care (RAM-care, [38]).

State-of-the-art research methods in robot acceptance modeling are quite heterogeneous. While some of the mentioned studies applied online surveys with pictures or videos of robots as stimulus material (e.g., [36,37,38]), others investigated robot acceptance during first encounters in laboratory studies with real robots (e.g., [44]) or the development of acceptance over time (e.g., [36]). Following the TPB, the usual procedure for deriving these models is to first select beliefs relevant to the particular application domain of the robot, present a robot stimulus, and then assess the determinants of the TAM with self-report questionnaires. Commonly, the original model is also modified and supplemented with additional factors specific to HRI and the application area (e.g., social presence, compliance, reactance, or perceived technology unemployment).

2.2 Restricted Applicability of the TAM/UTAUT to HRI and Directions for Enhancing the Value of Acceptance Modeling in HRI

The variety of modifications of the TAM and UTAUT in the field of HRI indicates that the variables of the original models are not specific enough and that their value for enhancing the understanding of the processes underlying decision-making in the interaction with robots might therefore be restricted (see e.g., [20, 46, 47]). This is not surprising, as HRI is considerably more dynamic, social, and interactive than the original application areas of the TAM and UTAUT. Also, both models aim to maximize model economy, using only a small number of variables to predict technology adoption rather than to increase the understanding of the characteristics of the systems and the psychological processes leading to adoption (e.g., [46]). For these and other reasons, these models have restrictions that make theoretically sound derivations for the design of complex AI-based technologies, as well as cumulative scientific knowledge gain, fairly difficult [46, 47].

The current need for improving acceptance modeling in HRI relates to three challenges: (a) the restricted number of determinants of use, (b) related to this, overly broad and inflexible definitions of these determinants, and (c) the limited theoretical integration of technology acceptance models with their original psychological foundations in the TPB. These challenges are elaborated in the following along four general strategies to overcome them when deriving, building, and empirically validating acceptance models in HRI:

  1. Modeling distinctive, theoretically meaningful beliefs instead of broad, statistically derived beliefs.

  2. Ordering the predictors of acceptance and behavior in accordance with the theoretical structure of the TPB.

  3. Developing acceptance models at different levels of specificity.

  4. Integrating attitudes towards robots (e.g., trust) as a mediating level between the level of beliefs and the intention.

Modeling distinctive and theoretically meaningful beliefs. Regarding the first and second challenge, the restricted number of determinants of technology use renders the models too inflexible to be practically relevant [46], especially for more sophisticated, autonomous technologies like service robots outside the work and organizational context [20]. This is reflected in the large number of modified models for specific contexts, technologies, and user groups, to which various variables have been added to (successfully) increase the explained variance of technology use (e.g., [20, 36, 41, 42, 48,49,50]). As service robots can be viewed as interaction partners with socially adaptive capabilities beyond mere technological tools, the proposed determinants of technology use might not satisfactorily explain the processes leading to (affective) user responses, technology adoption, acceptance, and a positive user experience in HRI. Although the predictive power of the constructs is indisputable, it is difficult to assess and interpret their meaning because of the conceptual difficulty in distinguishing them from each other and from outcome variables. This criticism applies in particular to performance expectancy, which can hardly be separated theoretically from the overlapping acceptance of a system due to its broad definition and its measurement with items that are not easily distinguishable from acceptance scales. This is related to the point raised by Straub and Burton-Jones [52] that a reasonable person would hardly indicate an intention to use a system which s/he does not find useful. Accordingly, the authors themselves acknowledge an overlap and shared variance between the UTAUT constructs (e.g., facilitating conditions and effort expectancy; [14, 53]). Also, facilitating conditions appear to be only vaguely defined and so system- and domain-specific that the items are difficult to answer and to apply practically.

Given the wide range of applications and functionalities, the beliefs underlying user acceptance need to be reconsidered in terms of their meaningfulness and informativeness for AI-based technology like service robots. In this regard, beliefs like performance expectancy might be too global to provide value for understanding the origin of technology acceptance in psychological processes and should thus be replaced by more specific beliefs from psychological theory, such as the TPB and the trust literature.


Ordering the predictors for acceptance and behavior in accordance with the theoretical structure of the TPB. The TAM originally evolved from attitude-to-behavior models (TRA and TPB; [17,18,19, 54]), which assume that the intention to engage in a behavior is essentially influenced by beliefs and the attitude towards the behavior. While several theoretical assumptions were already changed in the transfer of the TPB to the TAM, the additional modifications of the UTAUT further diluted the theoretical basis (e.g., [46]). This is reflected in (data-driven) model modifications in which neither the inclusion of additional variables nor their placement in the process is always sufficiently justified theoretically (e.g., [36, 37, 39, 40, 42]). Notably, attitudes were removed from the original TPB cascade, leaving behind the essential differentiation between attitudes and beliefs in psychological research [55,56,57]. As the three-step mediation cascade of the TPB (beliefs-attitudes-behavioral intention) is an essential theoretical contribution of the model, the omission of the mediating attitude level might be one explanation for the reported small effect sizes of the relationship between UTAUT variables and the intention to use (except for performance expectancy; see, e.g., the meta-analysis by [51]). Therefore, the (re)integration of attitudes as a mediator at a more global and affective level and the reordering of the variables at the three original levels of the TPB might increase insight into the psychological processes of HRI adoption and has repeatedly been called for (e.g., [30, 31])—even by the authors of the UTAUT [58]. In line with this, there are already approaches in the field of robotics integrating formerly excluded TPB constructs (e.g., social norm, attitude, and perceived behavioral control) to predict the intention to use and the acceptance of robots (e.g., [20, 36, 38, 39, 42,43,44, 59]).


Developing acceptance models for different levels of specificity. Attitudes vary in their generality vs. specificity depending on the object they refer to [60, 61]. While, for example, the attitude toward the future is rather general, as it refers to a whole class of objects, events, or stimuli, the attitude toward a certain technology (e.g., robots) can be considered comparably specific. Beyond that, there may be even more specific attitudes towards a particular representative of this category, such as a specific privately-owned robot. In HRI, a prominent and frequently investigated attitude variable is negative attitudes towards robots (e.g., NARS; [3,4,5,6, 62]). Also, trust in automation has prominently been conceptualized as an attitude [15]. Therefore, trust might constitute a promising mediating variable for understanding the psychological processes between the construction of beliefs about robots and actually deciding how to interact with them. In this research, trust towards service robots is investigated at two levels of specificity: (a) generally for the category of service robots and (b) specifically for a certain assistance robot.


Integrating trust as a mediating attitude. Several authors found a relationship between trust and the intention to use as well as good (or improved) model fits for acceptance models that included trust (among other variables; e.g., [28, 58, 63,64,65]). Therefore, the integration of trust and antecedent trust beliefs might contribute to the theoretical foundation and meaningful applicability of acceptance models for understanding and predicting behavior in HRI. In this work, on the basis of an integration of the UTAUT, the TPB, and trust, the intention to use service robots in general and the intention to use a specific robot are predicted by a model with three levels: beliefs, attitudes, and the intention to use. In the following, the relevance of trust for understanding the adoption of robots is discussed.

2.3 Trust in Automation and Trust in Robots

Mayer and colleagues [66] define trust as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" (p. 712). The concept of trust has been transferred to human-technology interaction since the late 1980s (e.g., [67, 68]). The perspective on trust in automation as an attitude has gained momentum in recent years (e.g., [15, 69, 70]). In particular, the definition by Lee and See [15] is often referred to in this context, defining trust "as the attitude that an agent will help achieve an individual's goal in a situation characterized by uncertainty and vulnerability" (p. 51). Trust is hypothesized to be based on expectations and beliefs about how the trustee will behave (e.g., [16, 66, 71]). Mainly, these are built up from perceived characteristics constituting the perceived trustworthiness of the trustee (e.g., [72]). In this regard, trust in robots has been conceptualized as a subjective variable that is established in a psychological learning process, in which expectations are built up from information provided about a robot prior to and during the interaction (e.g., [3, 16, 73]), and it has been found to be a potent subjective predictor of behavioral outcomes in HRI (e.g., [74, 75]).

For facilitating an effective, safe, and comfortable interaction with automated technology such as robots, a calibrated level of trust—a situation in which the degree of trust is in line with the actual capabilities of the technology [67, 76]—represents an important design goal. Not only the degree of trust (no trust—some trust—a lot of trust) but also the specificity of trust is subject to trust calibration (e.g., [15]). For example, trust can relate to all members of a category of technological systems (e.g., service robots in general), to a specific representative of such a category (e.g., a specific robot), or to a certain function of a robot (e.g., grasping an object with a manipulator). To date, the role of general trust in service robots as an overarching category for the formation of trust in specific robots (exemplars of this category) has not been investigated sufficiently and is thus addressed in this research.

A multitude of variables has been found to affect trust in robots (e.g., robot-, human-, and context-based variables; [69, 77]). In the face of this multitude of influencing variables, and although trust has been widely recognized as a central construct in explaining human interaction with technology, few theoretically grounded models explaining the formation and development of trust and its relation to behavioral decisions have been presented—and even fewer have been empirically validated. Similar to the TAM, the transfer of the TRA to explain the formation of trust as a specific attitude towards technology was brought forward by Lee and See [15]. In their model, they simplify and adapt the TRA interrelations to explain trust-based decisions in the interaction with automated technology. The main assumptions of the model have been extended (e.g., [70]), integrated, and in part empirically supported (e.g., [16]). Yet, to date, there has been no integrative investigation of the proposed central beliefs-trust cascade and of the role of trust beliefs for the emergence of the trust attitude in the technical domain. Based on the work on interpersonal trust by Rempel and colleagues [78] and on trust in automation (e.g., [67, 79]), this research investigates the relative role of trust and antecedent trust beliefs for explaining the intention to use service robots in comparison to the UTAUT beliefs.

Several studies integrated trust into the TPB, TAM, and UTAUT [24, 28, 58, 59, 63,64,65, 80]. For example, Buckley and colleagues [81] showed that trust explained additional variance in the intention to use an automated vehicle over and above both the TAM and the TPB constructs. In a meta-analytic approach, Wu and colleagues [34] showed high correlations between trust and the TAM predictors. In the same manner, trust was integrated as a predictor of robot usage and acceptance in HRI. However, while some studies support an effect of trust on the intention to use a robot (e.g., [37,38,39,40, 58]), others do not [36, 38]. These contradicting findings might be explained by the widely varying structure and placement of trust in the models. While some authors model trust as a direct antecedent of the intention to use, others model trust as an antecedent of the TAM beliefs or alongside constructs from the TPB such as attitudes. In line with the definition of trust as an attitude by Lee and See [15] and with the original TAM-trust models (e.g., [28, 65, 80]), this study investigates the role of trust as a mediator between beliefs about robots and the intention to use service robots.

Over the years, a multitude of different models and structures of trust beliefs has been proposed in different research streams on trust-related behavior (e.g., [67, 68, 71, 72, 79]). Prominently, Mayer and colleagues [66] differentiate ability, benevolence, and integrity as factors influencing trust. This differentiation is in line with the traditional view that trust is built on different belief facets capturing the competence of the trustee on the one hand and the trustee's character on the other (e.g., [72, 82]). This research focuses on ability-based trustworthiness beliefs about the trustee's performance based on its "capabilities, knowledge, or expertise" ([79], p. 1244). This facet of trustworthiness beliefs is best captured by the performance level of trust attributions, which Lee and Moray [79] propose, based on the work of Rempel and colleagues [78] and Muir [67, 68], as the expectation of a system's "consistent, stable, and desirable performance or behaviour" ([79], p. 1246). In line with the discussion by Lee and See [15], who define this factor as referring "to the current and historical operation of the automation and includes characteristics such as reliability, predictability, and ability […] [m]ore specifically, […] to the competency or expertise as demonstrated by its ability to achieve the operator's goals" ([15], p. 59), the expected reliability, understandability, and competence are included as trust beliefs in this study (see also [68, 83,84,85]).

Reliability was defined by Stowers and colleagues [86] as the consistency with which someone completes tasks. Dragan and colleagues [87] describe predictability as a robot's characteristic of making its "intentions clear to its human collaborator". Understandability—which is closely related to predictability—was defined by Madsen and Gregor [83] as the extent to which "the human supervisor or observer can form a mental model and predict future system behavior" (p. 11). Competence describes the perceived ability of a robot to perform its task correctly and efficiently. McKnight and colleagues [88] model system predictability and competence as affecting trust. Merritt and Ilgen [89] found that trustworthiness beliefs mediate the relationship between automation characteristics and trust in the system. In this study, the included trust beliefs are used to extend the perspective of the UTAUT by trust as a variable that might shed more light on the psychological processes in which learning about robots and building expectations and beliefs about them lead to decisions in the interaction with robots. Thereby, reliability is—comparably to performance expectancy—viewed as a belief that is conceptually very closely related to trust. It describes the perceived trustworthiness of a technological system at a very general level, covering both "can-do" and "will-do" expectations (a possible jingle-jangle fallacy). Therefore, this study investigates whether the two other modeled trust beliefs (competence and understandability) are sufficient to predict trust.

2.4 Influences of Situational Variables and Robot Characteristics

A broad array of robot and situational characteristics has been found to affect trust (e.g., [69, 77]) and acceptance (e.g., [21, 90]). In the same manner, they might also affect the relative importance of beliefs for both outcomes. This is in line with the basic theorizing of the TRA and TPB, which postulates that the relative importance of predictors can vary across situations and behaviors [91]. It follows that the effects of specific beliefs on attitudes towards robots can vary for different types of robots, tasks, or user groups (e.g., [21, 41, 92]), which is also emphasized in reviews on variables affecting robot acceptance (e.g., [48,49,50, 90]). Also, trust is essentially conceptualized as a variable affecting decision-making and behavior under certain situational circumstances—namely, situations in which the trustor feels uncertain and vulnerable (e.g., [93]). While many of the relationships between beliefs about technology and trust might be generalizable, the role of some beliefs for trust might change depending on the character of the task and the robot under consideration (similar to moderation effects of user characteristics, e.g., [14, 94]). In the context of HRI, this might especially be the case for the two TPB-related beliefs social influence and perceived behavioral control, as their relative relevance might change across settings and combinations of robots and tasks.


Social influence The importance of the interaction context of HRI has been underlined by research in the domains of care robots [38], social robots [20], service robots [45], and public robots [8, 37]. In this study, it is investigated whether the relationship between social influence and trust changes as a function of the interaction context. If the process and outcome of an HRI task are not publicly visible, the perception of the robot might not be as strongly affected by what others think about it (the social influence belief). Accordingly, the importance of social influence should be higher in contexts in which relevant others can observe and judge the HRI. Therefore, in the present study, the application context (public vs. private) was manipulated as a possible moderator of the relative importance of social influence for trust.


Perceived behavioral control Additionally, it was investigated whether the role of the belief about perceived behavioral control—defining the scope of influence users have on the task outcome through their behavior—is affected by the level of autonomy of a robot. While, in general, systems providing some kind of control seem to be trusted more [70, 95], no simple relationship between trust and level of automation has so far been consistently supported (e.g., [96, 97]). In a study with a hospital transport robot, higher perceived control was positively related to patients' trust and their intention to use it [73]. This suggests that, beyond the objective possibility of intervention, the perception of control might help to gain a better understanding of the relationship between automation level, trust, and the intention to use a robot. In this sense, more negative attitudes towards robots were found in situations in which people perceived themselves to have lower control over a robot with high agency [98]. While for robots with low autonomy (e.g., teleoperation) the user's behavior strongly influences the robot's task outcome, this is not the case for highly autonomous robots. It is therefore assumed that the perceived ability to control the interaction with the robot is more relevant for trust in robots with lower than with higher levels of autonomy.

2.5 Investigated Trust Beliefs Model

The presented study starts from the basic idea of taking the UTAUT back to its theoretical basis in the TPB and of integrating trust as an attitude in a theoretically stronger manner. To overcome the above-mentioned limitations, a trustworthiness beliefs model integrating the TPB, UTAUT, and trust perspectives is proposed (see Fig. 1). In line with the TPB, attitudes are assumed to be established substantially by beliefs, which can be understood as the subjective representation of probabilities that certain attributes are linked to a specific object (e.g., the object ‘robot’ has the attribute ‘competent’; [60, 91]). Well-established trust beliefs (reliability, competence, understandability) are integrated along with core beliefs from the UTAUT (performance expectancy, effort expectancy, social influence) and the TPB (social influence and perceived behavioral control). In accordance with the original TPB structure, all beliefs are modeled to directly influence trust, which in turn mediates the relationship between beliefs and the intention to use service robots in general and the specific robot investigated. In contrast to previous models, a small number of discrete beliefs was aimed for that can be generalized to various robots and application contexts. To this end, in addition to using a very simple robot stimulus (a prototype sketch of a mechanical service robot), the model was validated at two different levels of specificity and tested in two different application contexts (public vs. private) with a comparatively large sample.

Fig. 1
figure 1

Investigated trustworthiness beliefs model for robot acceptance (TB-RAM)
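To make the hypothesized structure explicit, the structural part of the TB-RAM in Fig. 1 can be summarized in the model syntax of the R package lavaan, which is also used for the analyses reported in Sect. 4. This is only a conceptual sketch; the construct names are placeholders for the respective (latent) variables, and the measurement models are omitted.

  # Conceptual sketch of the TB-RAM structure (placeholder construct names):
  # all beliefs are modeled as predictors of trust, which in turn predicts the intention to use.
  tbram_structure <- '
    trust     ~ performance_expectancy + effort_expectancy + social_influence +
                reliability + competence + understandability + perceived_behavioral_control
    intention ~ trust
  '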

As a first step of the model evaluation, the individual relevance and combined predictive power of the beliefs of each theoretical stream were inspected separately (at both levels of specificity). In a second step, the full model, integrating all beliefs for the prediction of trust and the intention to use, was investigated. In a model iteration, the two broad and overlapping beliefs performance expectancy and reliability were omitted from the model to allow for a more specific, informative, and parsimonious belief structure. As an important criterion for the value of this enhanced model, its predictive power was compared to that of the full model. Additionally, to further explore the fit and adequacy of integrating trust as a mediator in the model structure, the direct paths from the beliefs to the intention to use were estimated in another iteration. Finally, to provide an understanding of the situational specificity of belief-trust relationships, it was investigated whether the relevance of the beliefs social influence and perceived behavioral control changes across situational settings and robot characteristics.

2.6 Hypotheses and Research Questions

In line with the theoretical considerations and the proposed model, the following hypotheses were tested:

Hypothesis 1 (H1)

General trust in service robots predicts trust in a specific robot in the early familiarization process.

Hypothesis 2 (H2)

Trust predicts the intention to use service robots in general (H2.1) and the intention to use a specific assistance robot (H2.2).

Hypothesis 3 (H3)

The effect of general trust on the intention to use a specific robot is mediated by specific trust in the robot.

Hypothesis 4 (H4)

UTAUT beliefs (H4.1), trust beliefs (H4.2), and the control belief (H4.3) predict general trust in the category of service robots, as well as trust in a specific assistance robot (H4.4–H4.6).

Hypothesis 5 (H5)

UTAUT beliefs (H5.1), trust beliefs (H5.2), and the control belief (H5.3) predict the intention to use service robots in general, as well as the intention to use a specific assistance robot (H5.4–H5.6).

Hypothesis 6 (H6)

In line with the proposed mediation cascade, the effect of beliefs on the intention to use is mediated by trust in the general (H6.1) and the specific model (H6.2).

Hypothesis 7 (H7)

The effect of the perceived behavioral control on trust in a robot is stronger for a partly compared to a fully automated robot (H7.1). The effect of social influence is higher in a public compared to a private setting (H7.2).

Also, the following research questions were addressed:

Research question 1 (RQ1)

Does removing performance expectancy and reliability reduce variance explanation in trust and the intention to use?

Research question 2 (RQ2)

What proportion of the effect of the beliefs on the intention to use is mediated by trust?

Research question 3 (RQ3)

Which additional direct effects of the beliefs on the intention to use occur?

3 Method

To investigate the hypotheses and research questions, a mixed-design online study was conducted in which beliefs, trust, and intention to use were measured. A correlative and a 2 × 2 experimental design were combined. In the latter, a specific robot's context of use (IV1: private household vs. public space) and level of autonomy (IV2: partly vs. fully automated) were manipulated.

3.1 Sample

The sample was recruited online with a professional panel provider, who compensated participants monetarily. Prerequisites for participation were German as native language and a minimum age of 18 years. An equal distribution of gender and age group (18–29, 30–49, 50–64, > 65 years) was aimed for in order to obtain a heterogeneous sample.

Participants with an overly short processing time (< 40% of the median, Mdn = 35.38 min; 17 participants), without variance in their responses (flatliners; 38 participants), and multivariate outliers (Mahalanobis distance > 38; 25 participants) were excluded. The final sample consisted of N = 400 participants (51.50% female) with a mean age of M = 49.71 years (SD = 17.74). 19.80% indicated owning a robot (vacuuming, cleaning, mowing, toy, and spoken dialogue assistance robots).
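For illustration, the exclusion criteria could be implemented along the following lines in R; the data frame dat and the column names (time_min for processing time, item_cols and scale_cols for the relevant response columns) are assumptions for this sketch.

  # Sketch of the exclusion steps (assumed data frame and column names)
  too_fast  <- dat$time_min < 0.4 * median(dat$time_min)    # processing time < 40% of the median
  flatliner <- apply(dat[, item_cols], 1, sd) == 0           # no variance across item responses
  d2        <- mahalanobis(dat[, scale_cols],
                           center = colMeans(dat[, scale_cols]),
                           cov    = cov(dat[, scale_cols]))
  outlier   <- d2 > 38                                       # Mahalanobis distance cut-off used here
  dat       <- dat[!(too_fast | flatliner | outlier), ]      # remaining cases form the final sample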

3.2 Procedure, Experimental Design and Materials

Data was collected with the online survey tool Unipark (Questback GmbH, 2019). After informed consent and a demographic survey, disposition questionnaires were filled out (not part of this research). Subjects were then given a definition and explanation of service robots (see supplementary material). Subsequently, participants answered questions about their beliefs, trust, and intention to use with regard to service robots in general. Afterwards, subjects were presented with seven specific examples of service robots (vacuum robot, reception robot, learning robot, delivery robot, security robot, mowing robot, and cleaning robot; see 4.2) in randomized order, for which they indicated their trust. After this, subjects were introduced to an assistance robot and received information on its appearance, sensors, and functionality along with a sketch of the prototype (Fig. 2). Then, vignettes were presented containing information about the application area and the robot's autonomy level. In a pre-study (N = 48), the comprehensibility of the vignettes (M = 6.70, SD = 0.47; scale range: 1–7) as well as the robot's realism (M = 4.98, SD = 1.41) and conceivability (M = 5.85, SD = 0.88) were rated. Based on the pre-study, the vignettes were slightly adjusted.

Fig. 2
figure 2

Prototype information and sketch presented to participants with a human silhouette for size comparison

The application area of the robot was manipulated with a list of different tasks suitable for private households or for grocery shopping in the supermarket (e.g., storing groceries). The autonomy level of the robot was manipulated with different descriptions for high autonomy (fully autonomous functioning without double-checking with the user) and low autonomy (the robot requires consent for each step of the task). Additionally, three specific assistance tasks (carrying over objects for cooking, tidying up objects, and storing objects) were described in more detail for each application area and level of autonomy (e.g., public/low autonomy: "You stand at the checkout […]. The robot moves next to you and asks if it can assist with your purchases. You can confirm the desired action. Then the robot puts your purchases into your shopping cart […]"). The descriptions for the two application areas were standardized in as many aspects as possible. All descriptions of the assistance robot can be found in the supplementary material. Subsequently, all model constructs were measured again with reference to the described robot prototype. At the end of the study, prior experience and expertise as well as ownership of a service robot were measured.

3.3 Study Questionnaires

To assess the model constructs, established scales from the original models were used where available and adjusted to fit the study context. The reference object was either changed to ‘robots’ (in general) or to ‘the robot’. All constructs were measured on a 7-point Likert scale (1 = do not agree at all, 7 = totally agree). If no German translation was available, items were translated into German by two independent translators.

The UTAUT constructs were measured with the items from Venkatesh and colleagues [14], whereby some items (one per subscale) were replaced or excluded to adjust the scales to the context of HRI (e.g., "The senior management of this business has been helpful in the use of the system." was excluded). Trust beliefs were measured with scales based on Madsen and Gregor ([83]; reliability and understandability) and Gong ([99]; competence). The measures of perceived behavioral control and intention to use were adapted from Taylor and Todd [22] and Forster and colleagues [100]. Learned trust was measured with the LETRAS-G [16]. All scale reliabilities were in an acceptable range (α > .70, [101]; Table 1) except for social influence; as its two items did not sufficiently overlap, a single-item measure was used.
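As an illustration, the reliability checks and scale scores could be computed as follows; the item names (pe1-pe3, si1, si2) are hypothetical.

  # Sketch: internal consistency and scale scores (hypothetical item names)
  library(psych)
  alpha(dat[, c("pe1", "pe2", "pe3")])               # Cronbach's alpha, e.g., for performance expectancy
  cor(dat$si1, dat$si2)                              # low item overlap motivated the single-item measure
  dat$performance_expectancy <- rowMeans(dat[, c("pe1", "pe2", "pe3")])   # mean score for later analyses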

Table 1 Number of items, Cronbach's α, and item examples for the used scales and constructs of the model, N = 400

4 Statistical Analysis and Results

To test the study hypotheses and research questions, a combination of regression analyses, mediation analyses, structural equation modeling (SEM), and moderation analyses based on multigroup modeling was applied.

For the regression models, mean values were z-standardized and robust R2 estimates were calculated. For assessing multicollinearity, the variance inflation factor (VIF), the eigenvalues, and the condition index scores were inspected.
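A sketch of this regression procedure for one outcome (general trust), using assumed column names for the mean scores; the robust R2 estimation is omitted for brevity.

  # z-standardize mean scores and regress trust on the beliefs
  library(car)                                     # provides vif()
  vars <- c("trust", "performance_expectancy", "effort_expectancy", "social_influence",
            "reliability", "competence", "understandability", "perceived_behavioral_control")
  z    <- as.data.frame(scale(dat[, vars]))
  fit  <- lm(trust ~ ., data = z)
  summary(fit)                                     # coefficients and R^2

  # multicollinearity diagnostics
  vif(fit)                                         # variance inflation factors
  eig <- eigen(cor(z[, -1]))$values                # eigenvalues of the predictor correlation matrix
  sqrt(max(eig) / eig)                             # condition indices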

For the mediation analyses and the exploration of the investigated trustworthiness beliefs model, SEM was applied. First, a full model for the general and the specific robot usage intention was estimated, followed by a reduced model. Additionally, all models were fitted with direct effects. In a last step, the external influencing variables (application area and level of autonomy) were investigated as moderators in a manifest path model of the enhanced model for specific robot use. Robust maximum likelihood estimation with robust test statistics and corrected standard errors was used [102]. All constructs were modeled as single factors. To rule out bias from non-normal distributions of indirect effects (e.g., [103]), percentile-bootstrapped 95% confidence intervals (CI) were calculated to evaluate the significance of indirect effects (5000 iterations). RMSEA and SRMR were used as primary indicators of model fit [104].
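The following sketch illustrates how one of these latent models (here the enhanced model for the specific robot) could be specified and estimated in lavaan under these settings; all indicator names are hypothetical, and social influence enters as a single observed item.

  library(lavaan)

  model <- '
    # measurement part (single factor per construct; hypothetical indicators)
    effort_expectancy            =~ ee1 + ee2 + ee3
    competence                   =~ co1 + co2 + co3
    understandability            =~ un1 + un2 + un3
    perceived_behavioral_control =~ pb1 + pb2 + pb3
    trust                        =~ tr1 + tr2 + tr3
    intention                    =~ iu1 + iu2 + iu3

    # structural part: beliefs -> trust -> intention to use (si1 = single social influence item)
    trust     ~ effort_expectancy + si1 + competence + understandability + perceived_behavioral_control
    intention ~ trust
  '
  fit <- sem(model, data = dat, estimator = "MLR")       # robust ML, scaled test statistic, corrected SEs
  fitMeasures(fit, c("rmsea", "srmr", "cfi", "aic", "bic"))
  summary(fit, standardized = TRUE, rsquare = TRUE)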

To investigate H7, multiple group models were estimated. A moderation is present if constraining the focal regression coefficient to be equal across groups leads to a significant decrease in model fit. A precondition for this comparison is that metric invariance between the groups is established before the regression coefficient is introduced into the multigroup model [105].
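A sketch of this comparison for the moderation by application area (assumed grouping variable context); the measurement invariance testing is omitted here, and equating the focal path is achieved by giving it the same label in both groups.

  # Focal path freely estimated vs. constrained to equality across the two contexts
  model_free <- '
    trust     ~ social_influence + effort_expectancy + competence +
                understandability + perceived_behavioral_control
    intention ~ trust
  '
  model_eq <- '
    trust     ~ c("b1", "b1") * social_influence + effort_expectancy + competence +
                understandability + perceived_behavioral_control
    intention ~ trust
  '
  fit_free <- sem(model_free, data = dat, group = "context")
  fit_eq   <- sem(model_eq,   data = dat, group = "context")
  lavTestLRT(fit_free, fit_eq)      # significant chi-square difference indicates moderation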

4.1 Data Preparation and Manipulation Checks

Analyses were conducted with R version 4.0.3 and the package lavaan [106]. Means, standard deviations, and zero-order correlations of all included scales are provided in the Appendix (Appendix A, Table 6). There were no missing data, and multivariate outliers had been excluded; hence, these preconditions for SEM were met. To test for group effects, a series of general linear models predicting trust with the interaction of each belief and the independent variables was conducted. Except for performance expectancy and effort expectancy, no such interactions were present. ANOVAs did not reveal any mean differences in trust and the intention to use between the experimental groups. Regarding the manipulation checks, the experimental groups differed significantly in the perceived autonomy of the assistance robot (fully automated: M = 5.59, SD = 1.20; partly automated: M = 4.36, SD = 1.56), F(1,398) = 78.40, p < .001, and in the rated application area (public: M = 5.57, SD = 1.76; private: M = 2.97, SD = 1.88), F(1,398) = 204.5, p < .001 (semantic differential with 1 = private setting and 7 = public setting).
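The manipulation checks correspond to one-way ANOVAs of the check items on the respective experimental factor; a minimal sketch with assumed variable names:

  # Manipulation checks (assumed column names for check items and conditions)
  summary(aov(perceived_autonomy ~ autonomy_condition, data = dat))
  summary(aov(perceived_setting  ~ context_condition,  data = dat))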

4.2 Relationship of Trust Variables and the Intention to Use

To test the hypothesized relationships between general trust, specific trust, and the intention to use (H1-H2), latent zero-order effects were investigated in regressions. In line with H1, general trust in service robots positively predicted specific trust in the assistance robot (β = 0.74, p < .001). Also, for the seven specific service robots, general trust significantly predicted specific trust (Table 2). Similarly, general trust in service robots predicted the general intention to use, β = 0.74, p < .001, and specific trust in the described assistance robot predicted the specific intention to use, β = 0.68, p < .001, supporting H2.

Table 2 Mean values, standard deviations, and standardized regression coefficients for several service robots with application area and task

To test whether the effect of general trust in service robots on the intention to use a specific robot is mediated by specific trust in the robot (H3), a latent mediation model was calculated (Fig. 3). In support of H3, the indirect effect was significant, β = 0.51, 95% CI [0.37, 0.64].

Fig. 3
figure 3

Mediation model for general trust on specific intention to use via specific trust
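A sketch of such a latent mediation model with percentile-bootstrapped confidence intervals in lavaan; indicator names are hypothetical, and plain ML estimation is used here because lavaan's bootstrapping builds on ML.

  med_model <- '
    general_trust      =~ gt1 + gt2 + gt3
    specific_trust     =~ st1 + st2 + st3
    specific_intention =~ iu1 + iu2 + iu3

    specific_trust     ~ a * general_trust
    specific_intention ~ b * specific_trust + c * general_trust

    indirect := a * b          # mediated effect of general trust on the specific intention to use
    total    := c + a * b
  '
  fit_med <- sem(med_model, data = dat, se = "bootstrap", bootstrap = 5000)
  parameterEstimates(fit_med, boot.ci.type = "perc", level = 0.95)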

4.3 Prediction of Trust by Belief Groups

To test H4 on the prediction of trust by the three belief groups, four latent regressions were run for each of the two trust variables under investigation, in the following order: (1) the UTAUT beliefs performance expectancy, effort expectancy, and social influence; (2) the beliefs from the trust literature reliability, competence, and understandability; (3) perceived behavioral control from the TPB; and (4) all beliefs in combination. This procedure was chosen to gain an understanding of the predictiveness of the individual belief groups (Table 3).

Table 3 Regression models for different sets of beliefs on general and specific robot trust (left) and intention to use (right)

For general trust, the UTAUT and the trust beliefs both explained 59% of variance, UTAUT: F(3, 396) = 192.0, p < .001, trust beliefs: F(3, 396) = 191.1, p < .001. Perceived behavioral control explained 44% of the variance, F(1, 398) = 319.1, p < .001. The combined model explained 66.5% of the variance, F(7, 392) = 114.1, p < .001. In the combined model, performance expectancy, β = 0.20, p < .001, effort expectancy, β = – 0.23, p < .001, reliability, β = 0.32, p < .001, competence, β = 0.11, p = .003, and perceived behavioral control, β = 0.19, p < .001, significantly predicted general trust. There was no indication of multicollinearity.

For specific trust, in all three separate regression models all beliefs were significant predictors. The UTAUT beliefs explained 59%, F(3, 396) = 194.3, p < .001, the trust beliefs 63%, F(3, 396) = 228.8, p < .001, and the perceived behavioral control 49%, F(1, 398) = 383.8, p < .001, of the variance of trust. The combined model increased prediction of trust considerably with 68% explained variance, F(7, 392) = 124.5, p < .001. In the combined regression model, again performance expectancy, β = 0.15, p < .001, effort expectancy, β = – 0.13, p = .012, reliability, β = 0.26, p < .001, competence, β = 0.16, p < .001, and perceived behavioral control, β = 0.24, p < .001, were significant predictors. Again, none of the inspected indices suggested serious multicollinearity between predictors.

4.4 Prediction of the Intention to Use Robots by Belief Groups

For testing H5 on the role of the beliefs for predicting the intention to use, the same procedure as for testing H4 was applied (see Table 3).

For the general intention to use service robots, the UTAUT beliefs explained 69% of variance, F(3, 396) = 302.1, p < .001, with all predictors being significant. The trust beliefs explained 55% of variance, F(3, 396) = 165.8, p < .001, also with all beliefs significantly predicting the intention to use. Perceived behavioral control explained 51% of variance, F(1, 398) = 413.5, p < .001. The combined model explained 72.5% of variance with significant path weights of all UTAUT beliefs and perceived behavioral control, F(7, 392) = 151.5, p < .001. Multicollinearity was not detected.

For the intention to use the assistance robot, a similar pattern of findings resulted. The UTAUT beliefs explained 75%, F(3, 396) = 401.7, p < .001, and the trust beliefs 46% of variance, F(3, 396) = 112.3, p < .001. While all UTAUT beliefs were significant predictors, among the trust beliefs, understandability was not significant. Perceived behavioral control explained 31% of the variance in the specific intention to use, F(1, 398) = 176.8, p < .001. The combined model explained about 76% of the variance with all UTAUT beliefs, understandability, and perceived behavioral control as significant predictors, F(7, 392) = 182.3, p < .001. Again, there was no indication of multicollinearity.

4.5 Validation of the Trustworthiness Beliefs Model for Robot Acceptance

To test H6 and RQ1-RQ3 regarding the general mediation structure from beliefs through trust, to examine the relative importance of the investigated belief groups, and to develop an efficient trustworthiness beliefs model for robot acceptance, a series of SEMs was conducted (Table 4). For this, we specified models in which the intention to use was explained by trust, which in turn was regressed on different sets of beliefs.

Table 4 Standardized path coefficients and confidence intervals of the SEMs for the full and enhanced trustworthiness beliefs model for service robots in general and for a specific assistance robot

As a first step, a full model including the proposed beliefs, trust, and the intention to use was fitted to the data for the general and the specific intention to use (Fig. 4, Table 4, full model). Both models showed a good fit to the data. In both the general and the specific model, the intention to use was explained by trust to a considerable degree, which in turn was well explained by the antecedently ordered UTAUT and trust beliefs (R2adj = 0.84 for general trust, R2adj = 0.82 for specific trust). While in the general model performance expectancy from the UTAUT as well as reliability and understandability were significant predictors of trust, in the specific model only reliability predicted trust significantly. Taken together, these findings support the role of the trust beliefs as a meaningful addition to the UTAUT beliefs for predicting robot acceptance at both levels of specificity.

Fig. 4
figure 4

Results for the SEMs with standardized path coefficients and model fit indices for service robots in general (left column) and the assistance robot (right column)—(1) full model with all beliefs, (2) enhanced model, and (3) enhanced model with direct effects from beliefs on intention to use. Solid lines indicate positive effects, dashed lines indicate negative effects

4.6 Exploration of an Enhanced Trustworthiness Beliefs Model for Robot Acceptance

In a second step, performance expectancy and reliability were omitted from the SEMs to reduce variance suppression and to allow for an investigation of the relative relevance of the remaining, more distinctive beliefs for trust and the intention to use (Table 4, enhanced model). In a third step, to better understand the extent of the variance mediated by trust, a model with direct paths from the modeled beliefs to the intention to use was calculated (Table 5).

Table 5 Standardized path coefficients and confidence intervals of the SEMs for the enhanced trustworthiness beliefs models with direct effects from beliefs on the intention to use at both levels of specificity

For the model predicting general trust in service robots and the intention to use, the omission of the two general beliefs resulted in a model with comparable fit and only a slight reduction of the explained variance in trust. In comparison to the full model, the reduced model had considerably lower AIC and BIC values, indicating improved parsimony while keeping the prediction of trust and the intention to use comparable. In this model, the two beliefs effort expectancy and competence were significant predictors of trust. The inclusion of direct paths in the third model led to a slight increase in the explained variance of the intention to use (from 66 to 73%), with social influence being a significant direct predictor, pointing to further mediating variables at the attitude level.

In the reduced model for predicting the intention to use the assistance robot, the omission of the general beliefs performance expectancy and reliability led to a somewhat reduced explained variance in trust (by 8%) and in the intention to use (by 3%). However, model fit and parsimony were improved, as indicated by AIC and BIC. In this model, the perceived competence of the robot and social influence significantly predicted trust in the assistance robot. Also, the path weight from effort expectancy to trust missed significance, β = -0.49, SE = 0.43, p = .264, although its magnitude indicated that this effect might be meaningful. Again, the inclusion of direct effects increased the explained variance of the intention to use by 11%, with a significant direct effect of social influence, indicating that additional mediators might play a role.
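The reported comparisons between the full and the enhanced models rest on information criteria and explained variance; assuming fitted lavaan objects fit_full and fit_enhanced (specified as sketched above), these quantities could be obtained as follows.

  # Sketch: comparing full and enhanced models
  fitMeasures(fit_full,     c("rmsea", "srmr", "cfi", "aic", "bic"))
  fitMeasures(fit_enhanced, c("rmsea", "srmr", "cfi", "aic", "bic"))
  lavInspect(fit_full,     "rsquare")      # explained variance in trust and the intention to use
  lavInspect(fit_enhanced, "rsquare")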

4.7 Moderation of Beliefs-Trust Relationships by Application Area and Robot Characteristics

As a precondition for the multiple group analyses to test H7, at least partial scalar measurement invariance was indicated for the two models for each IV by non-significant χ2-comparison tests. First, it was tested whether the influence of perceived behavioral control on specific trust changes as a function of the robot's autonomy level. A comparison of the two models with and without equated regression coefficients revealed no significant difference, Δχ2(1) = 1.05, p = .305, contrary to H7.1. Second, the moderation by application area of the effect of social influence on trust in the assistance robot was significant, as indicated by a χ2-difference test, Δχ2(1) = 12.11, p < .001. In line with H7.2, the effect of social influence on trust in the robot was higher in the public, β = 0.57, than in the private setting, β = 0.41.

5 Discussion

On the basis of an integration of three theoretical streams, altogether seven beliefs from the TPB, the UTAUT, and the trust-in-automation literature were used to predict trust and the intention to use service robots at two levels of specificity: (a) generally for the category of service robots and (b) for a specific assistance robot that was introduced as a prototype in either a public or a private application area. Furthermore, the role of the application context and of the robot's level of autonomy for the relative importance of the beliefs for trust was investigated.

5.1 Role of General Trust in Service Robots

In a first step, in support of H1, it was shown that trust in the category of service robots predicted trust in the investigated assistance robot as well as in the other presented service robots with different application areas and tasks. Towards integrating trust as a mediator into the structure of technology acceptance models, it was shown in a second step that trust predicted the intention to use both for service robots in general and for the specific service robot, corresponding with H2 and previous research [37,38,39,40, 58]. In further support of the relevance of general trust in service robots as a starting point for users' decisions in HRI, its effect on the intention to use the investigated robot was mediated by specific trust (supporting H3).

The combined support of H1-H3 underlines the notion that trust formation and calibration start before the actual interaction with a specific robot and even before users know about a specific robot (e.g., Kraus [16]). The individual learning history of users with a category of technological systems seems to build a baseline expectation towards single members of this category, guiding information processing during the early stages of learning to trust a specific system. This means that, for a newly introduced robot, the accumulated knowledge and the derived beliefs and attitudes about service robots in general might affect expectations and trust formation. This is in line with work showing the influence of general robot attitudes (e.g., [3, 107, 108]) or dispositional personality variables such as the propensity to trust automation (e.g., [3, 16, 89, 108,109,110]) on trust. In the same manner, this resembles reported associations between different levels and layers of trust, for example, the propensity to trust, initial learned trust, and dynamic learned trust [3, 110].

5.2 Relevance of Beliefs Groups

On the basis of empirical support for the role of trust for the intention to use robots (e.g., [39, 40, 94]), in this study the predictiveness of different groups of beliefs for trust and the intention to use (at the two addressed levels of specificity) was explored. In support of H4, a series of regressions showed that the three belief groups on their own predicted substantial proportions of the variance of general trust in service robots and of specific trust in an assistance robot. Also, as the predicted variance proportions increased substantially in both the general and the specific trust model, the extension of the UTAUT by trust and TPB beliefs seems worthwhile.

In the same manner as for trust, all three belief groups were able to predict both levels of the intention to use, in agreement with H5. The UTAUT beliefs performed better than the trust beliefs in predicting the intention to use. Yet, again, the addition of the trust and TPB beliefs led to a somewhat higher R² for predicting the general intention to use. The high predictiveness of performance expectancy for both levels of the intention to use points in the direction of RQ1, namely that performance expectancy might be conceptually too close to acceptance (and the intention to use) to be meaningfully distinguishable at a theoretical level. Therefore, in the following, the value of a reduced trust beliefs model integrating the streams of TPB, UTAUT, and trust in automation for predicting the intention to use service robots was explored in more detail.
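The incremental-variance argument made above can be illustrated with a hierarchical regression that compares a base model containing only the UTAUT beliefs against a model additionally containing the TPB and trust beliefs. The sketch below uses simulated stand-in data and hypothetical column names; it only demonstrates how such an R² increment is computed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; column names are hypothetical.
rng = np.random.default_rng(1)
n = 500
beliefs = ["performance_expectancy", "effort_expectancy", "social_influence",
           "perceived_behavioral_control", "reliability", "competence", "understandability"]
df = pd.DataFrame(rng.normal(size=(n, len(beliefs))), columns=beliefs)
df["trust"] = df[beliefs].mean(axis=1) + rng.normal(scale=0.5, size=n)


def incremental_r2(data, outcome, base, added):
    """R² of the base model, of the full model, and the increment due to the added beliefs."""
    m_base = sm.OLS(data[outcome], sm.add_constant(data[base])).fit()
    m_full = sm.OLS(data[outcome], sm.add_constant(data[base + added])).fit()
    return m_base.rsquared, m_full.rsquared, m_full.rsquared - m_base.rsquared


utaut = ["performance_expectancy", "effort_expectancy", "social_influence"]
tpb_and_trust = ["perceived_behavioral_control", "reliability", "competence", "understandability"]
print(incremental_r2(df, "trust", utaut, tpb_and_trust))
```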

5.3 Exploration of an Enhanced Trustworthiness Beliefs Model

In all iterations of the model at the general level for the category of service robots, trust was a strong predictor of the intention to use. Additionally, the effects of trustworthiness beliefs on the intention to use the robot were mediated by trust (in line with H6.1). In the initial full model, performance expectancy and reliability significantly and positively predicted trust. Interestingly, understandability was negatively related to trust (as opposed to its positive association in the simple multivariate regression), pointing to a possible suppression effect. After the omission of performance expectancy and reliability, the explained variance in trust did not decrease substantially (RQ1). In the enhanced model, generalized trust in service robots was significantly predicted by effort expectancy (negatively) and by the perceived competence of the robot. In line with a possible suppression in the full model, understandability of service robots was no longer a significant predictor of trust. In the model allowing for direct effects, there was additionally a direct effect of social influence on the intention to use service robots in general. Thus, in this model, trust mediated a considerable part, but not the complete effect, of the investigated beliefs on the intention to use (RQ2 and RQ3). The direct effect of the social norm on the intention to use might be explained by the higher observability and visibility of behavior as compared to trust, which, unlike overt behavior, is a subjective perception.
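The suppression pattern described above, in which a belief correlates positively with trust on its own but receives a negative weight once an overlapping belief is partialled out, can be reproduced in a small simulation. The correlation structure below is hypothetical and chosen only to demonstrate the phenomenon, not to reproduce the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000

# Hypothetical correlations among performance expectancy, understandability, and trust.
# Understandability overlaps strongly with performance expectancy (r = .70) but
# relates only modestly to trust (r = .30), a classic net-suppression setup.
corr = np.array([[1.0, 0.7, 0.6],
                 [0.7, 1.0, 0.3],
                 [0.6, 0.3, 1.0]])
performance, understandability, trust = rng.multivariate_normal(np.zeros(3), corr, size=n).T

simple = sm.OLS(trust, sm.add_constant(understandability)).fit()
full = sm.OLS(trust, sm.add_constant(np.column_stack([performance, understandability]))).fit()

print("zero-order slope for understandability:", round(simple.params[1], 2))  # ~ +0.30
print("slope controlling for performance:     ", round(full.params[2], 2))    # ~ -0.24
```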

In the specific model investigating the role of beliefs and trust for the intention to use the assistance robot, trust predicted the intention to use very well and in a similar range as in the generalized model. Also, in line with H6.2, trust partly mediated the effect of the trustworthiness beliefs on the intention to use. In the initial full model, only the effect of reliability was significant. After the omission of reliability, the perceived competence of the robot and social influence significantly predicted trust in the assistance robot. Also, effort expectancy showed a comparatively high beta weight that did not reach significance. In the model allowing for direct effects from trustworthiness beliefs to the intention to use, social influence showed a significant direct effect on the intention to use the robot. The direct paths indicate that, besides trust, other attitudes might be meaningful additional mediators in the model structure, further enhancing the understanding of psychological processes during familiarization with new robots.

In both models, no direct effects of perceived behavioral control on trust or the intention to use were found. This could be explained by the conceptual closeness of perceived behavioral control to the belief effort expectancy, which might have absorbed the variance of perceived behavioral control. It is possible that these variables gain importance in direct interaction with robots, which can be addressed in future research by applying a more experimental setup that includes actual interaction with a robot.

The findings show that, in both models, the perceived competence of robots significantly predicts trust. Thus, if users believe that a robot is actually capable of performing well in a task, they tend to trust it more. In our study, this belief was more predictive of trust than all other included variables. Also, it was found that effort expectancy explains variance in trust at the general level. The negative relationship illustrates that users do not only assess the actual characteristics of robots but also their own capability of interacting with them. This is also reflected in findings from other studies supporting the relationship between effort expectancy or ease of use and trust [65] or the role of self-perceptions for trust in automated systems (e.g., [110]).

Also, social influence was a significant predictor of trust in the model including specific trust. In addition to beliefs about the capabilities of the robot, the inferences others draw from observing the interaction with a service robot influence trust. If users think that others would approve of them using a service robot, they trust these robots more. To conclude, trust in service robots is not only a function of how the robot itself is perceived; self-evaluative beliefs, the robot's embeddedness in a social context, and beliefs about what relevant others think also affect trust.

5.4 Theoretical Implications for Modeling Robot Acceptance

Taken together, in support of H6, the good fits of both full models support the meaningfulness of the TPB beliefs-attitude cascade for integrating the UTAUT and trust perspectives in the prediction of the intention to use robots (see also [15]). While related integrations have been proposed and implemented before (e.g., [24, 36,37,38,39,40, 59, 63,64,65, 80]), contradictory results hindered an integration of findings. In this research, a clear theoretical structure was used to model variables, and overly broad beliefs that are not theoretically distinct from the mediating and outcome variables (trust and the intention to use) were omitted.

In doing so, this research aimed to integrate different research streams building on social-cognitive attitude-to-behavior theories, thereby strengthening the theoretical foundation of robot acceptance modeling. Through the integration of trust, psychological theories on attitude formation processes can increase the understanding of how beliefs affect the interaction with robots. In this, the psychological mechanisms for building up a mental model of a robot and beliefs about its scope of functioning, capabilities, and limitations are starting points for informing human-centered robot and HRI design. Essentially, models of attitude formation and change like the TPB or the Elaboration Likelihood Model [111], as well as similar theories from cognitive and social psychology, are meaningful and promising directions for the derivation of hypotheses and study designs in HRI research. These streams of research, in line with the CASA paradigm [1], might help to further strengthen the understanding of the processes in which the perception of robot characteristics and the observation of robots feed into trust formation and the interaction with robots. This might provide progress for HRI research in integrating findings on robot characteristics like anthropomorphic robot design, robot gender, speech, facial characteristics, movement, etc., by providing an enhanced understanding of potential moderator variables on the side of users or of the situation in which information is presented. This research underlines that the consideration of this complexity can indeed meaningfully extend our understanding of trust processes and user behavior in the interaction with robots.

In this research, the relative informativeness of beliefs from different model families was investigated. Naturally, the included beliefs share some variance as they are part of the same processes. In line with our reasoning, it was shown that, in predicting trust in robots, unspecific, overlapping beliefs can be meaningfully replaced by more distinctive beliefs without endangering the predictive power of trust and acceptance models. To this end, both performance expectancy from the UTAUT and reliability were omitted, resulting in stronger associations of the remaining beliefs without substantially reducing the explained variance. On a theoretical level, performance expectancy is not clearly distinguishable from acceptance, and subjective reliability cannot be measured separately from trust.

The reduced models allow a differentiated and, at the same time, economical prediction of trust and the intention to use robots. In doing so, they enhance the theoretical embeddedness of the model in the attitude-to-behavior perspective, allowing a more theoretically founded derivation of implications for trustworthy robot design and dissemination.

5.5 Role of Situational Variables and Robot Characteristics for Beliefs-Trust Prediction

The study's findings support that the application area of a robot can affect the relevance of beliefs for trust formation, partially supporting H7. This underlines the role of changing environments for differences in the interpretation of the very same information about robots. It also suggests that, while a generally meaningful structure of acceptance models might help to increase the understanding of how user decisions and behavior toward different robots are formed, the relative relevance of beliefs for trust and the intention to use might change across settings and for different robots. This underlines the relevance of theoretical considerations for the integration of variables in such models over a purely data-driven rationale for variable inclusion or exclusion.

5.6 Practical Implications

This study's findings support the mediation of the effect of beliefs on usage intentions by trust and thereby underline the relevance of individual trust processes, in which available information is used to build up expectations and intentions to interact with robots. In our study, we found strong evidence for a relationship between trust in service robots as a general category and trust in specific robots.

This holds several implications for robot dissemination and design practice. The sum of communication and experiences about robots feeds into trust formation in single robots. In this regard, the availability and the content of media in which robots play a role, such as science fiction movies, computer games, or press articles, might be essential for learning what to expect from robots in general, and this might be transferred to new specific robots people get to know. Therefore, this potential influence of the way robots are represented in media should be considered by artists, the press, and those in charge of media programs. Responsibilities in this regard might also lie with governments and robot manufacturers. In order to facilitate calibrated trust in (future) users of, and interaction partners with, service robots, the public needs to be addressed with objective and transparent information about the actual capabilities, processes, and limitations of robots. This includes advertisements (e.g., in social media), which should paint a realistic picture of what robots can and cannot do.

In the enhanced trust beliefs models, the relevance of three beliefs in particular was supported: competence, effort expectancy, and social influence. This finding illustrates the combined influence of three sources of information that play a role in the perception of robots and the decision to interact with them.

First, the relevance of competence underlines the well-investigated role of perceived robot ability and performance for trust. Perceived competence seems to be the most essential consideration when being confronted with a new robot. Therefore, to facilitate a calibrated level of trust, all communication about the robot's features, ability, and reliability, from external sources but also from the robot itself, should be realistic. This facilitates balanced usage behavior and interaction, leading neither to distrust and inefficiently reduced reliance on the robot nor to overtrust and an overly optimistic and risky usage pattern. Several studies report that trust is not necessarily reduced in the long term by communication about possible errors of automated systems or even by the experience of such errors (e.g., [112, 113]). Rather, if such errors are not associated with substantial danger and risk, this information and experience might foster a realistic picture of the robot and appropriate decisions during HRI.

Second, the relevance of effort expectancy for trust at the general level sheds light on the role of self-evaluative beliefs in the formation of trust. While the role of self-evaluations for robot acceptance has been discussed in HRI before, there are no conclusive results so far (e.g., [114, 115]). In other domains of interaction with automated technology, a positive relationship between self-esteem and self-efficacy on the one hand and trust in an automated driving system on the other has been reported (e.g., [110, 116]). People who perceive fewer complications and barriers to using service robots successfully also tend to trust them more in general. For general communication about robots, information about how common concerns and perceived problems in using robots successfully can be overcome might considerably help to increase trust and acceptance.

Third, following from the role of social influence in the model for the specific intention to use, the interpersonal visibility and contextual embeddedness of HRI should be addressed in robot design and dissemination. People care about what they communicate by using robots and about what others think of this. Therefore, the societal discussion about what it means, on a normative level, to use a robot needs to be extended and made visible, as it considerably affects trust levels and the adoption of robots. This is further substantiated by the findings on the role of context for the effect of the social influence belief on trust, indicating a stronger effect in public than in private settings. As, from a technology-readiness perspective, service robots in public spaces are among the first robots people will interact with in their daily lives, strategies for trust calibration and the reduction of normative concerns should be implemented in the public sector, as these are essential for raising acceptance levels.

5.7 Strengths, Limitations, and Future Research

This work contributes to the current state of research with a theoretical review and (re)integration of different research streams (acceptance models, the TPB, and trust) and their application in HRI. Considerable strengths of the study are the integration of these theoretical streams, the theoretical breakdown of the interrelationships of several groups of variables, the combination of a correlational and an experimental approach, and a large, heterogeneous sample allowing for sophisticated statistical analyses. While previous research has rather focused on the acceptance of single (specific) robots, this work explicitly differentiated between the broad category of service robots in general and a specific representative robot of that category. Also, the model was applied in two application contexts.

The presented study has limitations that need to be addressed in future research. First of all, the study was conducted online with vignettes, without actual interaction with a robot. Related to that, second, no behavioral measure was included. The online setting was chosen to obtain the large sample needed for the statistical methods appropriate to the investigated hypotheses and research questions. Future studies might validate the model in real-life experiments and investigate its relevance for behavioral variables in actual HRI. Third, participants came from an exclusively German sample and had only restricted prior experience with robots. As culture might be an important factor influencing specifics of technology adoption, findings on the relative importance of the investigated beliefs need to be validated in samples from other cultures (e.g., in a Japanese sample). Nevertheless, the basic contributions of this work in terms of the psychological processes involved in the formation of trust and the intention to use robots are likely to be robust to culture-specific variances. In regard to the restricted prior experience of the sample, while common in most of today's studies, research on the role of this variable is encouraged, as it might be important for understanding belief and attitude formation. Fourth, in this research the trust beliefs model was only investigated with regard to one specific robot. Potentially, the role of single beliefs changes for different robots and different contexts, which raises a number of challenging research questions for future studies. Fifth, the situational relevance of beliefs for trust and the intention to use might be smaller in online settings and thus should be investigated again in real-life experiments, where stronger effects can be expected. Sixth, the study used comparably short scales for some of the investigated constructs. While this resulted from the complex study design, to guarantee economy and participant motivation, the findings should be validated in future studies. Also, as many beliefs have been proposed as meaningful for understanding technology acceptance, this study could not assess all of them. Especially, the role of "will-do" trustworthiness beliefs concerning motive and moral attributions towards technology (i.e., integrity and benevolence) needs further investigation. The role of these beliefs relative to the ability-related beliefs investigated here, across different robots and interaction scenarios, might lead to additional insights into psychological trust processes in HRI. In this regard, the role of top-down vs. bottom-up processes is of interest, and future studies might investigate how prior experience vs. the actual perception of robot characteristics and abilities during early interaction with robots feeds into trust formation and calibration. Hereby, additional mediating variables for the intention to use robots besides trust, as well as factors explaining differences in the interrelations of the modeled variables between the general category level of service robots and the specific level, might be identified.

5.8 Conclusion

In this work, we theoretically derived and validated a generalizable acceptance model (TB-RAM) for service robots including trust and trustworthiness beliefs. Based on a thorough review, we first discussed the shortcomings of current acceptance modeling and proposed strategies to overcome them. Second, beliefs from three research streams (acceptance models, the TPB, and trust in automation) were (re)integrated into the structure of the TPB. Third, in a large-scale online study, the TB-RAM was applied to two levels of trust (general trust in the category of service robots and specific trust in a particular assistance robot) and validated in two contexts (public and private) and at two levels of autonomy.

The results show that trust in service robots as a general category predicts trust in specific robots as representatives of that category, which, in turn, mediates the effect of generalized trust on the intention to use a specific robot. This underlines the role of general trust for specific trust and, with this, the substantial relevance of accumulated experiences with robots for establishing expectations, beliefs, and trust towards newly introduced robots and for using them.

Furthermore, the combination of beliefs from the TPB (perceived behavioral control), the UTAUT (social influence, performance expectancy, effort expectancy), and the trust literature (reliability, competence, understandability) substantially explained variance in general and specific trust, as well as in the intention to use service robots in general and the specific robot in focus. In line with the basic assumption of this research, dropping the overlapping beliefs performance expectancy and reliability reduced neither the explained variance in trust nor the model fit substantially. Taken together, the reported findings support the meaningfulness of integrating the three theoretical perspectives to enhance the understanding of the psychological processes involved in HRI and robot adoption, and of modeling distinctive beliefs instead of overlapping general ones. They also emphasize the role of trust as a mediator of the effect of robot-related beliefs on the intention to use service robots, both for general trust in service robots and for specific trust in single representatives of this category.

Additionally, the findings underline the situation-specific relevance of beliefs for trust and the intention to use a specific robot, as indicated by the stronger effect of social influence in the public than in the private application context. This sheds light on the processes in which both trust in and behavioral intentions towards robots are formed.

Taken together, this research provides a meaningful theoretical extension of technology acceptance modeling in the domain of HRI and other automated technology, which allowed for the derivation of some general directions for enhancing trustworthy and human-centered robot interaction design.